Posts

Showing posts from 2017

Things to note for Local SEO Optimization

Things to note for Local SEO optimization SEO for local businesses has become really important, and the last couple of years have proved it. It has become very easy to find neighborhood businesses or products nearby, creating a shift in users' intent and search patterns. Things to keep in mind for Local SEO A couple of years ago, a person searching for a coffee shop in Jubilee Hills would have typed "Coffee shops in Jubilee Hills" into Google. Of late, people have stopped typing in their location, and the trend of "near me" search queries has risen. Now the same person would search for "Coffee shops near me", and Google fetches coffee shops near their location. This validates the point: now is the time to optimize both off-site and on-site strategies for local SEO. Here are a few things to keep in mind when optimizing for local SEO: Title and Meta Tags: Title and Meta tags still…

Face detection with Python

Here is how you can implement face detection with Python and OpenCV in less than 25 lines of code. Face detection with Python and OpenCV: install OpenCV, then download the code from the repo. Now let's break down the code.

# Get user supplied values
imagePath = sys.argv[1]
cascPath = sys.argv[2]

The lines above take the image path and the cascade path as input. The default cascade helps detect faces in an image with OpenCV.

# Create the haar cascade
faceCascade = cv2.CascadeClassifier(cascPath)

Now we create a cascade; this loads the face cascade into memory for use. The cascade is just an XML file that contains the data needed to detect faces.

# Read the image
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

Here we read the image and convert it to grayscale. A lot of operations in OpenCV are done in grayscale.

# Detect faces in the image
faces = faceCascade.detectMultiScale(gray,
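The excerpt above cuts off mid-call, so here is a minimal end-to-end sketch of the same approach. It assumes a Haar cascade XML (such as haarcascade_frontalface_default.xml) is passed on the command line, and the parameter values for scaleFactor, minNeighbors and minSize are illustrative assumptions rather than the post's exact ones:

import sys
import cv2

# Get user supplied values: the image to scan and the cascade XML
imagePath = sys.argv[1]
cascPath = sys.argv[2]

# Create the haar cascade (loads the face cascade XML into memory)
faceCascade = cv2.CascadeClassifier(cascPath)

# Read the image and convert it to grayscale
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces; these parameter values are assumptions for illustration
faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.1,
    minNeighbors=5,
    minSize=(30, 30)
)

# Draw a rectangle around each detected face and show the result
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

print("Found {} faces".format(len(faces)))
cv2.imshow("Faces found", image)
cv2.waitKey(0)

You would run it as something like python face_detect.py photo.jpg haarcascade_frontalface_default.xml (the file names here are assumptions).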

Face detection with OpenCV

Face detection with OpenCV OpenCV is the most popular library for computer vision. Originally written in C/C++, it now provides bindings for Python. OpenCV uses machine learning algorithms to search for faces within a picture. For something as complicated as a face, there isn't one simple test that will tell you if it found a face or not. Instead, there are thousands of small patterns/features that must be matched. The algorithms break the task of identifying the face into thousands of smaller, bite-sized tasks, each of which is easy to solve. These tasks are also called classifiers. For something like a face, you might have 6,000 or more classifiers, all of which must match for a face to be detected (within error limits, of course). But therein lies the problem: for face detection, the algorithm starts at the top left of a picture and moves down across small blocks of data, looking at each block, constantly asking, "Is this a face? … Is this a face? … Is this a face?" Si…
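As a rough illustration of that scanning process, here is a toy Python sketch. It is not OpenCV's actual implementation; the window size, step and stage functions are made-up assumptions. It slides a small block across the picture and accepts a block only if every classifier stage matches, with cheap stages rejecting most blocks early:

import numpy as np

def passes_all_stages(window, stages):
    # A block counts as a face only if every classifier stage matches;
    # the first failing stage rejects it immediately.
    for stage in stages:
        if not stage(window):
            return False
    return True

def scan_image(image, stages, window=24, step=8):
    # Move a small block from the top left across the whole picture,
    # asking "is this a face?" for each block.
    hits = []
    h, w = image.shape[:2]
    for y in range(0, h - window + 1, step):
        for x in range(0, w - window + 1, step):
            block = image[y:y + window, x:x + window]
            if passes_all_stages(block, stages):
                hits.append((x, y, window, window))
    return hits

# Two toy "stages" standing in for the thousands of real feature tests
stages = [
    lambda b: b.mean() > 40,   # cheap test: block is not almost black
    lambda b: b.std() > 15,    # slightly costlier test: block has some contrast
]

image = np.random.randint(0, 255, (120, 160), dtype=np.uint8)  # fake grayscale image
print(len(scan_image(image, stages)), "candidate blocks passed all stages")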

Launching a new website: Your SEO Checklist

Launching a new website? Then hold on for a second. Here is the SEO checklist for a newly launching website: SEO Checklist for a website Keyword and URL map: The first thing to do is keyword research, creating a list that maps every keyword to the content on its target URL. For the URL /seo-checklist-for-new-website, the target keyword would be "SEO checklist" and the page title would be "SEO checklist for the new website". If you have an important keyword or a URL that you have not yet targeted, do so now. This helps when you do rank tracking and also with your on-page optimization. Accessibility and UX: Firstly, check whether all the pages and content of the website are accessible to search engines. You can use Google Search Console or OnPage.org to do a basic check on all the pages and make sure that you don't have duplicate content, that you don't have pages that have no c…

Top 5 mistakes to avoid when writing Apache Spark applications

Top 5 Mistakes to Avoid When Writing Apache Spark Applications Spark is one of the big data engines trending in recent times, largely because of its ability to process real-time streaming data. Its advantages over traditional MapReduce are: it is faster than MapReduce, it is well equipped with machine learning abilities, and it supports multiple programming languages. However, in spite of having all these advantages over Hadoop, we often get stuck in situations that arise from inefficiently written application code. The situations and their solutions are discussed below: always try to use reduceByKey instead of groupByKey; prefer treeReduce over reduce; try to lower the size of maps as much as possible; avoid unnecessary shuffles; keep away from data skews as well as uneven partitions. Do not let the jobs slow down: when the application shuffles heavily, it can take far longer, around 4 hours, to run. This makes th…
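As an illustration of the first point, here is a minimal PySpark sketch (the names and sample data are assumptions, not from the post) showing why reduceByKey is preferred over groupByKey: reduceByKey combines values within each partition before the shuffle, so far less data moves across the network.

from pyspark import SparkContext

sc = SparkContext(appName="ReduceByKeyVsGroupByKey")

# Hypothetical (word, count) pairs
pairs = sc.parallelize([("spark", 1), ("hadoop", 1), ("spark", 1), ("spark", 1)])

# groupByKey ships every value across the network, then sums on the reducer side
counts_group = pairs.groupByKey().mapValues(sum)

# reduceByKey pre-aggregates within each partition before shuffling (map-side combine)
counts_reduce = pairs.reduceByKey(lambda a, b: a + b)

print(counts_reduce.collect())  # e.g. [('spark', 3), ('hadoop', 1)]

sc.stop()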

Why is Apache Spark faster than MapReduce?

Why is Apache Spark getting all the attention when it comes to the Big Data space? Why is Apache Spark up to 100x faster than MapReduce, and how is that possible? That is the question for many in this space, and this blog post is my way of answering it. Why is Apache Spark getting attention in the Big Data space? Well, the answer is: for scenarios where parallel processing is required across many interdependent tasks, Apache Spark's in-memory processing offers the best big data processing platform. Hence the attention. Why is Apache Spark faster than MapReduce? Data processing requires computing resources like memory, storage, etc. In Apache Spark, the data needed is loaded into memory as a Resilient Distributed Dataset (RDD) and processed in parallel by performing various transformations and actions on it. In some cases, the output RDD from one task is used as input to another task, creating a lineage of RDDs that are interdependent on each other. However, in traditional MapReduce,…
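To make the RDD lineage point concrete, here is a minimal PySpark sketch (the data and names are illustrative assumptions): each transformation produces a new RDD that remembers its parent, and the chain stays in memory instead of being written to disk between steps the way MapReduce does.

from pyspark import SparkContext

sc = SparkContext(appName="RddLineageDemo")

# Base RDD loaded into memory
numbers = sc.parallelize(range(1, 1001))

# Each transformation creates a new RDD that depends on its parent (lineage)
squares = numbers.map(lambda x: x * x)
evens = squares.filter(lambda x: x % 2 == 0)

# cache() keeps the intermediate result in memory for reuse by later tasks
evens.cache()

# Actions trigger the whole lineage; no intermediate results are written to disk
print(evens.count())
print(evens.take(5))

# The lineage (dependency chain) can be inspected directly
print(evens.toDebugString())

sc.stop()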

Redis Hash Datatype for .NET developers

The Redis Hash datatype is similar to a Dictionary in C#. A Redis hash is a mapping of string fields to string values, and Redis hashes are memory optimized.

var hashKey = "hashKey";
HashEntry[] redisMerchantDetails = {
    new HashEntry("Name", "Ravindra Naik"),
    new HashEntry("Age", 26),
    new HashEntry("Profession", "Software Engineer")
};
redis.HashSet(hashKey, redisMerchantDetails);

if (redis.HashExists(hashKey, "Age"))
{
    var age = redis.HashGet(hashKey, "Age"); // Age is 26
}

var allHash = redis.HashGetAll(hashKey); // get all the items
foreach (var item in allHash)
{
    // output:
    // key: Name, value: Ravindra Naik
    // key: Age, value: 26
    // key: Profession, value: Software Engineer
    Console.WriteLine(string.Format("key : {0}, value : {1}", item.Name, item.Value));
}
// get all t…

Why Redis?

Why Redis? Redis is an open source, in-memory data structure store used as a database, cache and message broker. It provides high performance and low latency, simplicity while developing complex functionality, and compatibility with a wide variety of systems and languages. It is blazingly fast: written in C, completely in-memory, and optimized to handle millions of operations per second with less than 1 ms latency on a single server. It also gives you pre-built data structures for database operations, which include lists, sets, sorted sets, hashes, HyperLogLogs, bitmaps and geospatial indexes. It has client libraries in almost every language and an active community of developers and contributors. The best cases to use Redis include real-time analytics, high-speed transactions, high-speed data ingest, messaging queues, session storage, in-app social functionality, application job management, geo search and caching. Redis is also rated as the fastest growing database since…
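To illustrate a couple of the data structures and use cases listed above, here is a minimal Python sketch using the redis-py client. It assumes a Redis server running on localhost, and the key names and values are illustrative assumptions:

import redis

# Connect to a local Redis server (assumption: default host/port)
r = redis.Redis(host="localhost", port=6379, db=0)

# Caching / session storage: a plain string with an expiry
r.set("session:42", "user-token", ex=3600)  # expires in one hour

# Messaging queue: a list used as a simple FIFO queue
r.rpush("jobs", "send-email", "resize-image")
next_job = r.lpop("jobs")  # -> b'send-email'

# Real-time analytics: a sorted set as a leaderboard
r.zincrby("leaderboard", 10, "player:1")
r.zincrby("leaderboard", 25, "player:2")
top = r.zrevrange("leaderboard", 0, 1, withscores=True)

print(next_job, top)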

Using Redis Hashes with .NET

Redis hashes are essentially maps between string fields and string values. Here I have published how to get started using Redis with .NET. Once you have installed StackExchange.Redis and set up the console script as per the above link, here are the things to do to use Redis hashes with .NET. Create a new Visual Studio console project, then right-click, select Manage NuGet Packages, choose the online option, search for "StackExchange" in the search field and install StackExchange.Redis. Now, to connect to Redis and its database, here is a snippet.

// This should be stored and reused
var redis = ConnectionMultiplexer.Connect("localhost");
// IDatabase is a simple object which is cheap to build
IDatabase db = redis.GetDatabase();

Let's create a data object.

var readObject = new ReadObject();
readObject._id = "3";
readObject.Name = "Ravi";
readObject.Age = "26";
readObject.Address = "Bangalore, India";

Now let's convert this object into a HashEntry list.