GMM stands for Gaussian Mixture Model, a distribution-based clustering algorithm that assumes the data is generated from a mixture of Gaussian distributions. The goal of the algorithm is to estimate the parameters of each cluster's distribution: its mean, covariance matrix and mixing coefficient (weight).
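For a quick feel of the end result, here is a minimal sketch using scikit-learn's GaussianMixture; the toy data and parameter choices below are illustrative assumptions, not part of any canonical example:

import numpy as np
from sklearn.mixture import GaussianMixture

# Toy dataset: two 2-D blobs
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[3, 3], scale=0.8, size=(100, 2)),
])

# Fit a 2-component GMM and inspect the learned parameters
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.weights_)      # mixing coefficients
print(gmm.means_)        # component means
print(gmm.covariances_)  # component covariance matrices
labels = gmm.predict(X)  # hard cluster assignments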
Working
Step 1: Choose k, the number of Gaussian distributions (or clusters) in the data, and randomly initialize the parameters of each distribution: the mean, covariance and mixing coefficient.
Step 2: Calculate the probability of each data point belonging to each cluster, using the current parameters and the Gaussian probability density function. This is the expectation step (E-step), where the algorithm assigns each data point a soft membership (its responsibility) based on its likelihood under each component.
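To make the E-step concrete, here is a small NumPy/SciPy sketch; e_step is a hypothetical helper name, and the parameter layout (arrays of means, covariances and weights) is my own choice:

import numpy as np
from scipy.stats import multivariate_normal

def e_step(X, means, covs, weights):
    # Unnormalized responsibilities: pi_k * N(x | mu_k, Sigma_k) for each component k
    resp = np.column_stack([
        w * multivariate_normal.pdf(X, mean=m, cov=c)
        for m, c, w in zip(means, covs, weights)
    ])
    # Normalize each row so a point's memberships sum to 1 (soft assignment)
    return resp / resp.sum(axis=1, keepdims=True)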
Step 3: Update the parameters of each cluster using the probabilities calculated in step 2 and maximum likelihood estimation. This is the maximization step (M-step), where the algorithm maximizes the expected log-likelihood with respect to the parameters.
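A matching sketch of the M-step, using the standard maximum likelihood update formulas (m_step is again a hypothetical helper):

import numpy as np

def m_step(X, resp):
    # Effective number of points assigned to each cluster
    Nk = resp.sum(axis=0)
    # MLE updates for mixing coefficients, means and covariances
    weights = Nk / X.shape[0]
    means = (resp.T @ X) / Nk[:, None]
    covs = []
    for k in range(resp.shape[1]):
        diff = X - means[k]
        covs.append((resp[:, k, None] * diff).T @ diff / Nk[k])
    return means, np.array(covs), weights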
Step 4: Repeat steps 2 and 3 until the parameters (or the log-likelihood) converge or a maximum number of iterations is reached.
Pseudocode
# Input: data points X, number of clusters k, maximum number of iterations max_iter
# Output: cluster assignments C, cluster parameters M
# Step 1: Initialize k cluster parameters randomly
M = random_parameters(X, k)
# Initialize cluster assignments
C = empty_array(X.size)
# Initialize number of iterations
iter = 0
# Loop until convergence or maximum iterations
while iter < max_iter:
    # Step 2 (E-step): calculate the probability of each data point belonging to each cluster
    P = probability(X, M)
    # Step 3 (M-step): update the parameters of each cluster using maximum likelihood estimation
    M_old = M
    M = update_parameters(X, P)
    # Stop early once the parameters stop changing
    if converged(M_old, M):
        break
    # Increment the number of iterations
    iter = iter + 1
# Assign each data point to the cluster with the highest probability
for i in range(X.size):
    C[i] = argmax(P[i])
# Return the final cluster assignments and cluster parameters
return C, M
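For reference, a runnable NumPy version of the loop above could look like this; it reuses the e_step and m_step sketches from the Working section, and the initialization scheme and tolerance are arbitrary choices of mine:

import numpy as np
from scipy.stats import multivariate_normal

def fit_gmm(X, k, max_iter=100, tol=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Naive initialization: k random points as means, shared covariance, uniform weights
    means = X[rng.choice(n, size=k, replace=False)]
    covs = np.array([np.cov(X.T) for _ in range(k)])
    weights = np.full(k, 1.0 / k)
    prev_ll = -np.inf
    for _ in range(max_iter):
        resp = e_step(X, means, covs, weights)   # E-step (sketched earlier)
        means, covs, weights = m_step(X, resp)   # M-step (sketched earlier)
        # Log-likelihood of the data under the current mixture
        dens = sum(w * multivariate_normal.pdf(X, mean=m, cov=c)
                   for m, c, w in zip(means, covs, weights))
        ll = np.log(dens).sum()
        if ll - prev_ll < tol:  # stop once improvement becomes negligible
            break
        prev_ll = ll
    # Hard assignment: cluster with the highest responsibility per point
    return resp.argmax(axis=1), (means, covs, weights)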
Advantages
- It can model clusters with different shapes, sizes, densities and orientations, unlike k-means which assumes spherical clusters.
- It can assign soft memberships to data points, meaning that a data point can belong to more than one cluster with different degrees of probability (see the sketch after this list).
- It can flag outliers or noise points, since such points receive a low likelihood under every component.
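Continuing the scikit-learn sketch from the introduction, the soft memberships mentioned above are exposed directly:

# Probability of each point under each component; rows sum to 1
probs = gmm.predict_proba(X)
print(probs[:5])  # a point between the blobs might show e.g. [0.55, 0.45]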
Disadvantages
- It requires choosing k in advance, which can be difficult or arbitrary.
- It is sensitive to initialization and may converge to local optima.
- It assumes that the data follows a Gaussian distribution, which may not be true for some datasets.
- It can struggle with high-dimensional data, as the number of covariance parameters grows quadratically with the number of dimensions.
References and Further Reading
- Gaussian Mixture Model - GeeksforGeeks
- Gaussian Mixture Models | Machine Learning | Google for Developers
- Gaussian Mixture Models - Towards Data Science
This post is part of a multipart series that will cover more clustering algorithms, along with their working, pseudocode, advantages and disadvantages.
Please stay tuned for more such content.
If you liked this post, please share it with your friends and fellow developers. And don't forget to follow us for more programming tutorials and examples!
And also,
have a look @ my Portfolio
code together @ Github
connect @ LinkedIn