Hierarchical Clustering - Agglomerative, Divisive & Dendrogram


Hierarchical Clustering

A hierarchical clustering method works by grouping data objects into a tree of clusters.

Hierarchical clustering methods can be further classified into agglomerative and divisive hierarchical clustering, depending on whether the hierarchical decomposition is formed in a bottom-up or top-down fashion.


(Partitional Clustering - read here)

(Density-based Clustering - read here)

(Grid-based Clustering - read here)  

Agglomerative Hierarchical Clustering (AGNES):

This bottom-up strategy starts by placing each object in its own cluster and then merges these atomic clusters into larger and larger clusters until all of the objects are in a single cluster or until certain termination conditions are satisfied.
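The merging loop itself is easy to sketch. Below is a minimal, illustrative Python implementation of this bottom-up strategy with single-link (closest-pair) Euclidean distance; the function name agnes_single_link and the sample coordinates are hypothetical, not from any library:

```python
import numpy as np

def agnes_single_link(points, k=1):
    """Merge singleton clusters bottom-up until only k clusters remain."""
    clusters = [[i] for i in range(len(points))]  # each object starts alone
    while len(clusters) > k:
        best = None
        # find the pair of clusters whose closest members are nearest
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters[b]  # merge the closest pair of clusters
        del clusters[b]
    return clusters

data = np.array([[0.0, 0.0], [0.1, 0.2], [2.0, 2.0], [2.1, 2.2], [5.0, 5.0]])
print(agnes_single_link(data, k=2))  # e.g. [[0, 1, 2, 3], [4]]
```

This naive version re-scans every pair of clusters at each step, so it is O(n^3); library implementations maintain a distance matrix to avoid that, but the merging logic is the same.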

Divisive Hierarchical Clustering (DIANA):

This top-down strategy does the reverse of agglomerative clustering by starting with all objects in one cluster. 

It subdivides the cluster into smaller and smaller pieces until each object forms a cluster on its own or until certain termination conditions are satisfied, such as a desired number of clusters being obtained or the distance between the two closest clusters exceeding a certain threshold.

Figure: Agglomerative (AGNES) and divisive (DIANA) clustering of the five objects {a, b, c, d, e}.
 

Consider a data set of five objects a, b, c, d, e. Initially, AGNES places each object into a cluster of its own.

The clusters are then merged step-by-step according to some criterion.

For example, clusters C1 and C2 may be merged if an object in C1 and an object in C2 form the minimum Euclidean distance between any two objects from different clusters.

This is a single-linkage approach in that each cluster is represented by all of the objects in the cluster, and the similarity between two clusters is measured by the similarity of the closest pair of data points belonging to different clusters.

The cluster merging process repeats until all of the objects are eventually merged to form one cluster.
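To see the merge order concretely, here is a short sketch using SciPy's single-link agglomerative method on hypothetical 2-D coordinates for the five objects a through e (the coordinates are made up for illustration; they are not taken from the figure):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# hypothetical coordinates for the objects a, b, c, d, e
points = np.array([[1.0, 1.0],   # a
                   [1.2, 1.1],   # b
                   [4.0, 4.0],   # c
                   [4.2, 4.1],   # d
                   [4.5, 3.8]])  # e

# each row of Z records one merge step: the two cluster indices,
# the single-link distance between them, and the new cluster's size
Z = linkage(points, method='single')
print(Z)
```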

In DIANA, all of the objects are used to form one initial cluster.

The cluster is split according to some principle, such as the maximum Euclidean distance between the closest neighboring objects in the cluster.

The cluster splitting process repeats until, eventually, each new cluster contains only a single object.
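The split step can also be sketched directly. The snippet below shows a simplified DIANA-style split of one cluster, assuming Euclidean distance: the object with the largest average dissimilarity seeds a "splinter" group, and objects that are closer on average to the splinter group migrate to it. The helper diana_split is a hypothetical name; a full implementation would repeat this split on the cluster with the largest diameter until every object stands alone:

```python
import numpy as np

def diana_split(points, members):
    """Split one cluster into a splinter group and the remainder."""
    def avg_dist(i, group):
        return np.mean([np.linalg.norm(points[i] - points[j]) for j in group])

    rest = list(members)
    # seed the splinter group with the most dissimilar object
    seed = max(rest, key=lambda i: avg_dist(i, [j for j in rest if j != i]))
    splinter = [seed]
    rest.remove(seed)

    moved = True
    while moved and len(rest) > 1:
        moved = False
        for i in list(rest):
            # move objects that sit closer (on average) to the splinter group
            if avg_dist(i, splinter) < avg_dist(i, [j for j in rest if j != i]):
                splinter.append(i)
                rest.remove(i)
                moved = True
    return splinter, rest

data = np.array([[0.0, 0.0], [0.2, 0.1], [4.0, 4.0], [4.1, 4.2], [4.2, 3.9]])
print(diana_split(data, range(len(data))))  # e.g. ([0, 1], [2, 3, 4])
```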

In either agglomerative or divisive hierarchical clustering, the user can specify the desired number of clusters as a termination condition.

A tree structure called a dendrogram is commonly used to represent the process of hierarchical clustering.


A dendrogram decomposes the data objects into several levels of nested partitioning (a tree of clusters).

A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster.

It shows how objects are grouped together step by step.

The figure shows a dendrogram for the five objects presented in the previous figure, where l = 0 shows the five objects as singleton clusters at level 0.

 At l = 1, objects a and b are grouped together to form the first cluster, and they stay together at all subsequent levels. 

We can also use a vertical axis to show the similarity scale between clusters. For example, when the similarity of two groups of objects, {a, b} and {c, d, e}, is roughly 0.16, they are merged together to form a single cluster.
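Here is a sketch of drawing and cutting such a dendrogram with SciPy and Matplotlib, again on made-up coordinates for a through e (note that SciPy's fcluster cuts by distance rather than similarity, so the threshold below is purely illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

points = np.array([[1.0, 1.0], [1.2, 1.1], [4.0, 4.0], [4.2, 4.1], [4.5, 3.8]])
Z = linkage(points, method='single')

dendrogram(Z, labels=['a', 'b', 'c', 'd', 'e'])  # leaves are the singletons at level 0
plt.ylabel('merge distance')
plt.show()

# cut the tree by asking for a desired number of clusters...
print(fcluster(Z, t=2, criterion='maxclust'))    # e.g. [1 1 2 2 2]
# ...or by cutting at a distance threshold
print(fcluster(Z, t=1.0, criterion='distance'))
```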

Figure: Agglomerative and divisive clustering dendrogram.
Four widely used measures for finding the distance between clusters are listed below, where |p-p'| is the distance between two objects p and p', with p belonging to Ci and p' to Cj; mi is the mean of cluster Ci; and ni is the number of objects in Ci.

Minimum distance: dmin(Ci, Cj) = min |p-p'|

Maximum distance: dmax(Ci, Cj) = max |p-p'|

Mean distance: dmean(Ci, Cj) = |mi-mj|

Average distance: davg(Ci, Cj) = (1/(ni*nj)) * ΣΣ |p-p'|, summed over all p in Ci and p' in Cj
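All four measures can be computed directly. A small sketch with NumPy on two illustrative clusters:

```python
import numpy as np

Ci = np.array([[0.0, 0.0], [1.0, 0.0]])  # cluster Ci (ni = 2 objects)
Cj = np.array([[4.0, 0.0], [5.0, 0.0]])  # cluster Cj (nj = 2 objects)

# all pairwise distances |p - p'| for p in Ci and p' in Cj
pair = np.linalg.norm(Ci[:, None, :] - Cj[None, :, :], axis=2)

d_min  = pair.min()                                         # minimum distance -> 3.0
d_max  = pair.max()                                         # maximum distance -> 5.0
d_mean = np.linalg.norm(Ci.mean(axis=0) - Cj.mean(axis=0))  # mean distance -> 4.0
d_avg  = pair.mean()                                        # average distance -> 4.0
print(d_min, d_max, d_mean, d_avg)
```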
 

Agglomerative Nesting (AGNES)

This clustering method was introduced by Kaufmann and Rousseeuw (1990).

It is generally implemented in statistical analysis packages, e.g., S-PLUS.

It uses the Single-Link method and the dissimilarity matrix.  

It merges the nodes that have the least dissimilarity. This process continues in a non-descending fashion until, eventually, all nodes belong to the same cluster.
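Since AGNES is driven by a dissimilarity matrix rather than raw coordinates, here is a minimal sketch of that setting with SciPy (illustrative data; SciPy's linkage expects the condensed form of the matrix):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage

points = np.array([[0.0, 0.0], [0.3, 0.1], [3.0, 3.0], [3.2, 3.1]])
diss = squareform(pdist(points))         # full n x n dissimilarity matrix

# single-link merging driven by the dissimilarity matrix (condensed form)
Z = linkage(squareform(diss), method='single')
print(Z)                                 # least-dissimilar pairs merge first
```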

Figure: Agglomerative clustering (AGNES).

Divisive Analysis (DIANA)

It was also introduced by Kaufmann and Rousseeuw (1990).

It is similar to agglomerative clustering, but it is applied in the reverse way: it starts with all objects in one cluster and splits them step by step.

It is also implemented in statistical analysis packages, e.g., S-PLUS.

Figure: Divisive clustering (DIANA).

Disadvantages Of Hierarchical Clustering

The hierarchical clustering method, though simple, often encounters difficulties regarding the selection of merge or split points. 

Such a decision is critical because once a group of objects is merged or split, the process at the next step will operate on the newly generated clusters.

It will neither undo what was done previously nor perform object swapping between clusters.

Thus merge or split decisions, if not well chosen at some step, may lead to low-quality clusters. 

Moreover, the method does not scale well, because each decision to merge or split requires examining and evaluating a good number of objects or clusters.

More Hierarchical Methods

BIRCH (1996): Uses CF-tree and incrementally adjusts the quality of sub-clusters.

CURE (1998): Selects well-scattered points from the cluster and then shrinks them towards the center of the cluster by a specified fraction.

CHAMELEON (1999): Hierarchical clustering using dynamic modeling.

 

Conclusion

This was a brief explanation of hierarchical clustering methods, which are often used in the data mining process.

Subscribe to us for more content on data.

    
