
Missing Value Approximation


What do we do when the data we have has missing values? There are only a few options:

1. Delete the data
2. Make the missing value zero
3. Approximate
4. Ignore it, and so on

Now obviously, options 2 and 4 are not good ways of dealing with missing data.

When we make the missing value ZERO, we are unknowingly creating new data points which may or may not be valid. For example, what if ZERO is one of the likely values of the variable we are dealing with? In that case we end up with a wrong pattern.

Further, we simply cannot ignore the data. Ignoring is somewhat similar to deleting.

When we delete the data, we are decreasing our sample size and hence the power of our statistical tools. As we know, statistics works best when there is a large amount of data. In addition, there are other difficulties with deleting data, which we will get to later on.

Therefore, the most reasonable option is to approximate the value. But we have to be very careful while doing so. This time, we will be looking at a simple algorithm to approximate a certain typical kind of data. There is commercial software available for this purpose, like SPSS, SAS, etc. Note that you can use any programming language to approximate; I have used Java to read the data from Excel and then approximate the values. It's your choice.

Let me first give a very simple approximation algorithm:

1. Read the whole data into an array
2. Move along the array and find a missing value
3. When you find one, take the differences between consecutive values over a certain range before it
4. Take the average of these differences
5. Add that average to the value just before the missing one
6. The new value is the approximated value
7. Repeat the process for all the missing values

In mathematical terms:

d(n+1) = d(n) + (1/N) * Σ [ d(n-i+1) - d(n-i) ],  for i = 1 to N

where,

d = data array
n + 1 = location of the missing data
N = total number of differences taken
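
To make these steps concrete, here is a minimal Java sketch of the algorithm (I read my data into Java from Excel anyway, so Java it is). It assumes the series has already been loaded into a double[] with Double.NaN marking the missing entries; the class name and the choice of N = 5 are only illustrative.

import java.util.Arrays;

public class MissingValueApproximator {

    // Number of consecutive differences to average (the N in the formula above).
    private static final int N = 5;

    // Fills missing entries (marked as Double.NaN) by adding the average of the
    // last N consecutive differences to the value just before the gap.
    // Values filled earlier are treated as known when later gaps are handled.
    public static double[] approximate(double[] data) {
        double[] d = Arrays.copyOf(data, data.length);
        for (int i = 1; i < d.length; i++) {
            if (Double.isNaN(d[i]) && !Double.isNaN(d[i - 1])) {
                double sum = 0.0;
                int count = 0;
                // Collect up to N differences of consecutive known values before the gap.
                for (int k = i - 1; k > 0 && count < N; k--) {
                    if (!Double.isNaN(d[k]) && !Double.isNaN(d[k - 1])) {
                        sum += d[k] - d[k - 1];
                        count++;
                    }
                }
                double averageDifference = (count > 0) ? sum / count : 0.0;
                d[i] = d[i - 1] + averageDifference;   // the approximated value
            }
        }
        return d;
    }

    public static void main(String[] args) {
        double[] series = {1.0, 2.0, 3.0, Double.NaN, Double.NaN, 6.5};
        System.out.println(Arrays.toString(approximate(series)));
        // Prints [1.0, 2.0, 3.0, 4.0, 5.0, 6.5]
    }
}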




Now, we can play around with these variables to get the required approximated value; they control the trend of the approximation.

When you are approximating, make sure to plot the data to see how your approximation has turned out. This will give you better control over your result. The reason for doing this is that no two data sets have the same trend, so a particular algorithm may work for one but not for another.
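
One simple way I check this is to dump the original and approximated series side by side and plot them, for example back in Excel. A small helper sketch, assuming both series are plain double arrays; the class name, file path and column headers are arbitrary.

import java.io.FileWriter;
import java.io.IOException;

public class PlotExport {

    // Writes index, original and approximated values as CSV so the two series
    // can be plotted and compared (missing entries appear as NaN).
    public static void writeForPlot(double[] original, double[] approximated, String path)
            throws IOException {
        try (FileWriter out = new FileWriter(path)) {
            out.write("index,original,approximated\n");
            for (int i = 0; i < original.length; i++) {
                out.write(i + "," + original[i] + "," + approximated[i] + "\n");
            }
        }
    }
}

Plotting the two columns on top of each other makes it obvious whenever the approximation runs off in the wrong direction.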

Let's take some examples and see where there is room for errors and wrong approximations.

In this approximation, everything went very well. The data from 1 to 928 was missing. The trend looks very simple: the values are rising, not decreasing. Therefore, the approximation worked very well. Try it on other data you have and you may be satisfied with the result. But can we justify our approximation? What if the initial values were peaking and not around zero? Therefore, make sure your approximation is also logical. For that, try to understand the nature of the variable you are handling. This way the margin for error is reduced.

One very important thing to remember: make sure you are not trying to approximate missing values when they make up about 75 or 80 percent of your data set. This can really go very wrong.
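
A quick way to enforce this is to count the missing entries before approximating anything. A small sketch, again assuming Double.NaN markers; the 75 percent cut-off is only illustrative.

public class MissingFractionCheck {

    // Fraction of entries that are missing (marked as Double.NaN).
    public static double missingFraction(double[] data) {
        int missing = 0;
        for (double value : data) {
            if (Double.isNaN(value)) {
                missing++;
            }
        }
        return (double) missing / data.length;
    }

    public static void main(String[] args) {
        double[] series = {1.0, Double.NaN, Double.NaN, Double.NaN, 5.0};
        if (missingFraction(series) > 0.75) {
            System.out.println("Too many missing values; approximation is not reliable here.");
        } else {
            System.out.println("OK to try the approximation.");
        }
    }
}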

For example:


The above data looks very damaged. The trend is rather unclear and the missing values are above 70 percent of the whole set. When we run the same algorithm on this data, we get the result below:


What can we say about the above result? It is very unlikely and unrealistic. The algorithm was the same, but it did not turn out as we expected. In this case, we need a more intelligent approximation that can look at the trend and decide how to approximate, maybe some adaptive algorithm. But we can also ask whether we can really approximate the above data with complete satisfaction at all, since so many values are missing.

Another example:

Data:


Result:


Wow!! How is that trend possible? You see, the algorithm is not always doing the job for us. Therefore, we have to be really careful. In this case, our data was very unpredictable: it had many rises and falls. The algorithm samples only the consecutive data points before the gap to predict the next values, so we run into wrong predictions. From this we can say the algorithm does not work with random data. To clarify this point, let's zoom into the next data set:




In the above graphs, we can clearly see that the data fluctuate in a random manner. Remember, the graphs represent the same data set, only at different zoom levels. In this case it is virtually impossible to predict the missing values using the above algorithm. Therefore, we must come up with a better algorithm.

I think I have covered most of the fundamentals of missing values. So, if you have any suggestions or observations on my analysis, please feel free to comment.
