Have you come across a situation where the Chief Marketing Officer of a company tells you - "Help me understand our customers better so that we can market our products to them more effectively!"
I did, and the analyst in me was completely clueless about what to do! I was used to getting specific problems, where there is an outcome to be predicted under a given set of conditions. But I had no clue what to do in this case. Had the person asked me to calculate Lifetime Value (LTV) or the propensity for cross-sell, I wouldn't have blinked. But this question looked very broad to me.
This is usually the first reaction when you come across an unsupervised learning problem for the first time! You are not looking for specific insights into a phenomenon; instead, you are looking for structures within the data, without them being tied to a specific outcome.
The method of identifying similar groups of data in a dataset is called clustering, and it is one of the most popular techniques in data science. Entities in each group are more similar to the other entities of that group than to those of the other groups. In this article, I will take you through the types of clustering, different clustering algorithms, and a comparison between two of the most commonly used clustering methods.
Let’s get started.
Clustering is the task of dividing the population or data points into a number of groups such that data points in the same group are more similar to each other than to those in other groups. In simple words, the aim is to segregate groups with similar traits and assign them into clusters.
Let's understand this with an example. Suppose you are the head of a rental store and wish to understand the preferences of your customers to scale up your business. Is it possible for you to look at the details of each customer and devise a unique business strategy for each one of them? Definitely not. But what you can do is cluster all of your customers into, say, 10 groups based on their purchasing habits and use a separate strategy for the customers in each of these 10 groups. And this is what we call clustering.
Now that we understand what clustering is, let's take a look at its types.
Broadly speaking, clustering can be divided into two subgroups (a short code sketch illustrating the difference follows below):

Hard clustering: each data point either belongs to a cluster completely or not at all. In the rental-store example above, each customer is put into exactly one of the 10 groups.

Soft clustering: instead of putting each data point into exactly one cluster, a probability or likelihood of that data point belonging to each cluster is assigned.
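To make the distinction concrete, here is a minimal sketch using scikit-learn; the synthetic dataset and the choice of 3 clusters are illustrative assumptions, not taken from the article:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.datasets import make_blobs

# Synthetic data with 3 natural groups (illustrative only)
X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

# Hard clustering: exactly one label per point
hard_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(hard_labels[:5])

# Soft clustering: a probability for each of the 3 clusters per point
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
print(gmm.predict_proba(X[:5]).round(3))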
Since the task of clustering is subjective, there are many means that can be used to achieve this goal. Every methodology follows a different set of rules for defining the 'similarity' among data points. In fact, more than 100 clustering algorithms are known, but only a few are popularly used. They fall into a handful of model families:

Connectivity models: based on the notion that data points closer together in data space are more similar to each other than those lying farther away. Hierarchical clustering belongs to this family.

Centroid models: iterative algorithms in which similarity is defined by the closeness of a data point to the centroid of a cluster. K-means is the best-known example.

Distribution models: based on how probable it is that all data points in a cluster belong to the same distribution, for example a Gaussian. Expectation-maximization on a mixture of Gaussians is a popular example.

Density models: these search the data space for areas of varying density and assign the points within a dense region to the same cluster. DBSCAN and OPTICS are popular examples.
Now I will take you through two of the most popular clustering algorithms in detail - K-means clustering and hierarchical clustering. Let's begin.
K-means is an iterative clustering algorithm that converges to a local optimum of its objective, the within-cluster variance. The algorithm works in these 5 steps:

1. Specify the desired number of clusters K.
2. Randomly assign each data point to a cluster.
3. Compute the cluster centroids.
4. Re-assign each point to the closest cluster centroid.
5. Re-compute the cluster centroids and repeat steps 3 and 4 until the assignments stop changing.
Here is a short example where you can try out the K-means algorithm using the scikit-learn library.
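A minimal sketch, assuming a toy dataset and K=4 for illustration (neither is tied to the article's data):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Toy data with 4 natural groups
X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

# Fit K-means with K=4 and read off the cluster assignments
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

print(labels[:10])              # cluster label of the first 10 points
print(kmeans.cluster_centers_)  # coordinates of the 4 centroids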
Hierarchical clustering, as the name suggests, is an algorithm that builds a hierarchy of clusters. In its bottom-up (agglomerative) form, the algorithm starts with every data point assigned to a cluster of its own. The two nearest clusters are then merged into one cluster, and the algorithm terminates when only a single cluster is left.
The results of hierarchical clustering can be shown using a dendrogram, which can be interpreted as follows:
At the bottom, we start with 25 data points, each assigned to its own cluster. The two closest clusters are then merged repeatedly until just one cluster remains at the top. The height at which two clusters are merged in the dendrogram represents the distance between those two clusters in the data space.
The number of clusters that best depicts the different groups can be chosen by observing the dendrogram. The best choice is the number of vertical lines in the dendrogram cut by a horizontal line that can traverse the maximum distance vertically without intersecting a cluster.
In the above example, the best choice for the number of clusters is 4, as the red horizontal line in the dendrogram below covers the maximum vertical distance AB.
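Here is a minimal sketch of the same idea with SciPy, assuming 25 synthetic 2-D points in place of the article's data and Ward linkage as the merge criterion:

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

rng = np.random.default_rng(42)
X = rng.normal(size=(25, 2))      # 25 random 2-D points

# Record the agglomerative merges, bottom-up
Z = linkage(X, method='ward')

# Plot the merge tree; merge height = inter-cluster distance
dendrogram(Z)
plt.show()

# Cut the tree into 4 flat clusters, as in the example above
labels = fcluster(Z, t=4, criterion='maxclust')
print(labels)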
Two important things that you should know about hierarchical clustering are:

1. The algorithm has been described above in its bottom-up (agglomerative) form; it is also possible to follow a top-down (divisive) approach, starting with all data points in a single cluster and splitting recursively.
2. The decision to merge two clusters is taken on the basis of their closeness, for which multiple distance metrics can be used, such as Euclidean distance, squared Euclidean distance, Manhattan distance, maximum distance, and Mahalanobis distance.
Clustering has a large number of applications spread across various domains. Some of the most popular applications of clustering are:

- Recommendation engines
- Market segmentation
- Social network analysis
- Grouping of search results
- Medical imaging and image segmentation
- Anomaly detection
Clustering is an unsupervised machine learning approach, but can it also be used to improve the accuracy of supervised machine learning algorithms, by clustering the data points into similar groups and using these cluster labels as independent variables in the supervised model? Let's find out.
Let's check the impact of clustering on the accuracy of our model for a classification problem, using R on 3,000 observations of stock data with 100 predictors, where the task is to predict whether the stock will go up or down. The dataset contains 100 independent variables, X1 to X100, representing the profile of a stock, and one outcome variable Y with two levels: 1 for a rise in the stock price and -1 for a drop.
The dataset is available here: Download
Let's first try applying randomForest without clustering.
#loading required libraries
library('randomForest')
library('Metrics')

#set random seed
set.seed(101)

#loading dataset
data<-read.csv("train.csv",stringsAsFactors= T)

#checking dimensions of data
dim(data)
## [1] 3000  101

#specifying outcome variable as factor
data$Y<-as.factor(data$Y)

#dividing the dataset into train and test
train<-data[1:2000,]
test<-data[2001:3000,]

#applying randomForest
model_rf<-randomForest(Y~.,data=train)
preds<-predict(object=model_rf,test[,-101])
table(preds)
## preds
##  -1   1
## 453 547

#checking AUC
auc(preds,test$Y)
## [1] 0.4522703
So, the AUC we get is 0.45. Now let's create five clusters based on the values of the independent variables using K-means clustering and reapply randomForest.
#combining test and train
all<-rbind(train,test)

#creating 5 clusters using K-means clustering
Cluster <- kmeans(all[,-101], 5)

#adding the cluster label as an independent variable to the dataset
all$cluster<-as.factor(Cluster$cluster)

#dividing the dataset into train and test
train<-all[1:2000,]
test<-all[2001:3000,]

#applying randomForest
model_rf<-randomForest(Y~.,data=train)
preds2<-predict(object=model_rf,test[,-101])
table(preds2)
## preds2
##  -1   1
## 548 452

#checking AUC
auc(preds2,test$Y)
## [1] 0.5345908
Whoo! In the above example, even though the final score is still modest, clustering has given our model a significant boost, from an AUC of 0.45 to slightly above 0.53.
This shows that clustering can indeed be helpful for supervised machine learning tasks.
In this article, we discussed the various ways of performing clustering, which finds applications as an unsupervised learning technique in a large number of domains. You also saw how you can improve the accuracy of a supervised machine learning algorithm using clustering.
Although clustering is easy to implement, you need to take care of some important aspects, like treating outliers in your data and making sure each cluster has a sufficient population. These aspects of clustering are dealt with in great detail in this article.
Did you enjoy reading this article? Do share your views in the comments section below.
Very nice tutorial Saurav!
Good to see you liked it. Thank you!
Nice post! Please correct the last link - it is broken - thanks!
Hi, Richard. Glad you liked it! Thanks for pointing that out. It's fixed now!
I accept that clustering may help in improving supervised models. But here in the above, clustering is performed on the sample points (4361 rows). Is that right? I think the correct way is to cluster the features (X1-X100), represent the data using cluster representatives, and then perform supervised learning. Can you please elaborate further? Why are the samples being clustered in the code (not the independent variables)?
Hey, Sai. So, yes, I have clustered the observations (or rows, 3,000 in total). Consider all these data points (observations) in the data space, with all the features (X1-X100) as dimensions. What I'm doing is clustering these data points into 5 groups and storing the cluster label as a new feature itself. Clustering the 100 independent variables would instead give you 5 groups of independent variables. Going that way, how exactly would you plan to use those cluster labels for supervised learning?
Nice article! How would you handle a clustering problem when some variables have many missing values (say, around 90% of each column)? These missing values are not random at all; they even have a meaning, but the clustering output yields some isolated (and very small) groups due to these missing values. Thanks in advance!
Hi Luis. Thanks.
1. Since the missing values are as high as 90%, you can consider dropping these variables.
2. As you said, these missing values are not completely meaningless, so try imputing them (though this might not yield good results with such a high percentage of missing values).
3. If the pattern in the missing values is something like, say, values are missing because students didn't take a certain test and otherwise the column contains the scores of that test, you can try replacing the variable with another variable having 0 for a missing value and 1 for a valid value. A sketch of this follows below.
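As a minimal pandas sketch of point 3, where the DataFrame and the column names test_score and took_test are hypothetical:

import numpy as np
import pandas as pd

# Hypothetical column in which most values are missing
df = pd.DataFrame({'test_score': [85, np.nan, np.nan, 70, np.nan]})

# Replace the sparse column with a 0/1 indicator of presence
df['took_test'] = df['test_score'].notna().astype(int)
df = df.drop(columns='test_score')
print(df)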
Hello Saurav, your article and related explanation of clustering and the two most used methods were very insightful. However, please do enlighten us by telling us how one interprets the cluster output for both these methods, K-means and hierarchical. Also, it would be nice if you could let the reader know when one could use K-means versus, say, something like K-median; in what scenario does the former work, and in which one does the latter? It would also be a great idea to:
1. Discuss the ways to implement a density-based algorithm and a distribution-based one
2. Maybe show an actual example of market segmentation
You have done a good job of showing how clustering could, in a sense, precede a subsequent classification method, but if the problem is limited to clustering only, then how would you explain the output to an uninitiated audience? Maybe some thoughts for your second article on clustering. But great job; I enjoyed reading your piece.
Hi Kunal, I'm happy that you liked the article. Actually, clustering is too wide a topic to be covered completely in a single article. For some of the things you mentioned, like when to use which of the two methods, you can refer to the differences between the two. The interpretation of clusters formed using, say, hierarchical clustering is depicted using dendrograms. Apart from these, things like density-based and distribution-based clustering methods and market segmentation could definitely be part of future articles on clustering. Thank you for your thoughts.
Also, Saurav, it might be a good idea to suggest which clustering algorithm would be appropriate to use when:
1. All variables are continuous
2. All variables are categorical - many times this could be the case
3. All variables are counts - maybe sometimes
4. There is a mix of continuous and categorical variables - possibly the most common case
5. Similarly, a mix of continuous, categorical, and count variables
To be more precise, if I had one or more of the scenarios above and was using a distance-based method to calculate distances between points, what distance calculation method works where? Any insights would be great!!
So, to understand this, it's important to understand how categorical variables behave in clustering. If the levels of your categorical variable are ordered, like: very bad, bad, average, good, very good, you can try encoding the labels as 0, 1, 2, 3, and 4 respectively. If there is no order in the levels, like: red, green, and orange, you can try one-hot encoding; a sketch of both encodings follows below. Also, there is no single best distance metric for clustering your data. It depends on various factors, like the ones you mentioned (the types of the variables), as well as things like the scales of the variables and the number of clusters you want.
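A minimal pandas sketch of the two encodings; the column names and level sets here are hypothetical:

import pandas as pd

df = pd.DataFrame({
    'rating': ['Very bad', 'Good', 'Average', 'Very good'],
    'color':  ['red', 'green', 'orange', 'red'],
})

# Ordered levels: map to integers that preserve the order
order = {'Very bad': 0, 'Bad': 1, 'Average': 2, 'Good': 3, 'Very good': 4}
df['rating_encoded'] = df['rating'].map(order)

# Unordered levels: one-hot encode into indicator columns
df = pd.get_dummies(df, columns=['color'])
print(df)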
Hi Saurav, since we are classifying assets in this tutorial, don't you think a correlation-based distance should give us better results than the Euclidean distance (which K-means normally uses)?
Hi Nikunj, intuitively speaking, it's definitely worth a shot. Good suggestion.
Hi, it's a good post covering a broad topic like clustering. However, I'm not so convinced about using clustering to aid supervised ML. For me, clustering-based approaches tend to be more 'exploratory' in nature, used to understand the inherent data structure, segments, et al. Dimensionality reduction techniques like PCA are more intuitive approaches (for me) in this case, quite simply because you don't get any dimensionality reduction by doing clustering and, vice versa, you don't get any groupings out of PCA-like techniques.
My distinction between the two: PCA is used for dimensionality reduction / feature selection / representation learning, e.g. when the feature space contains too many irrelevant or redundant features; the aim is to find the intrinsic dimensionality of the data. K-means is a clustering algorithm that returns the natural grouping of data points, based on their similarity. I'd like to point to the excellent explanation and distinction of the two on Quora: https://www.quora.com/What-is-the-difference-between-factor-and-cluster-analyses
My question to you: how would you fit the same groupings (which you obtained by clustering the training set) onto an unseen test set? Or would you apply clustering to it again? Typically, you perform PCA on a training set and apply the same loadings to a new unseen test set; you don't fit a new PCA to it.
Really nice article, Saurav; this helped me understand some of the basic concepts regarding clustering. I was hoping you could post similar articles on fuzzy clustering, DBSCAN, and self-organizing maps. Aditya
Hi Saurav Kaushik, I am new to this area, but I am searching for help to understand it more deeply. One of my personal projects involves analysing data to create a "predictive model" based on information collected about previous historical data, which I have in a spreadsheet (or in a .txt file if that is better). Could you recommend a simple package (in Python or in Delphi) that can help me do something like this? My spreadsheet has, for example, 1500 lines which represent historical moments (Test 1, Test 2... Test 1500). In the columns, I have the labels and values for each of the 1000 characteristics I analyse separately at each test. What would I like to do with this? To be able to "predict" some 10 or 20 values for 10 or 20 characteristics for the next Test 1501. Do you think it is possible? If you are involved in this kind of project, what would it cost me to have your help in building a tool for doing that? I can send you an example file if you would be interested in helping me. My direct contact: dixiejoelottolex at gmail dot com
Hi, and thank you for your article. Running your example, I am running into a series of issues. The first one is the result of:

preds<-predict(object=model_rf,test[,-101])
head(table(preds))
preds
-0.192066666666667 -0.162533333333333 -0.120533333333333 -0.0829333333333333 -0.0793333333333333 -0.079
                 1                  1                  1                   1                   1      1

Then:

auc(preds,test$Y)
[1] NaN

The second example with the added cluster produces the same result. Any idea why my result is so different from yours?
Hey Laurent,
1. Make sure your outcome variable is categorical (a factor) and so are your predictions; the numeric values in your table suggest Y was not converted with as.factor(), in which case randomForest runs in regression mode.
2. Make sure you have loaded the Metrics package, as auc() is the function defined in that package.
Hope this resolves your query.
Hey Saurav, could you please share the code in Python? Thanks
Hi Saurav, it is good for understanding, but please add the elbow method.