Segmentation is a fundamental technique in image analysis that allows us to divide an image into meaningful parts based on objects, shapes, or colors. It plays a pivotal role in applications ranging from object detection to artistic image manipulation. But how can we achieve segmentation effectively? Fortunately, OpenCV (cv2) provides several user-friendly and powerful methods for segmentation.
In this tutorial, we’ll explore three popular segmentation techniques:
- Canny Edge Detection – perfect for outlining objects.
- Watershed Algorithm – great for separating overlapping regions.
- K-Means Color Segmentation – ideal for clustering similar colors in an image.
To make this tutorial engaging and practical, we’ll use satellite and aerial images from Osaka, Japan, focusing on the ancient kofun burial mounds. You can download these images and the corresponding sample notebook from the tutorial's GitHub page.
From Canny Edge Detection to Contour Segmentation
Canny Edge Detection is a straightforward yet powerful method to identify edges in an image. It works by detecting areas of rapid intensity change, which are often boundaries of objects. This technique generates "thin edge" outlines by applying intensity thresholds. Let’s dive into its implementation using OpenCV.
Example: Detecting Edges in a Satellite Image
Here, we use a satellite image of Osaka, specifically a kofun burial mound, as a test case.
import cv2
import numpy as np
import matplotlib.pyplot as plt
from glob import glob

files = sorted(glob("SAT*.png"))  # Get PNG files
print(len(files))
img=cv2.imread(files[0])
use_image= img[0:600,700:1300]
gray = cv2.cvtColor(use_image, cv2.COLOR_BGR2GRAY)
# Standard threshold values
min_val = 100
max_val = 200
# Apply Canny Edge Detection
edges = cv2.Canny(gray, min_val, max_val)
# Alternative: edges = cv2.Canny(gray, min_val, max_val, apertureSize=5, L2gradient=True)
# Show the result
plt.figure(figsize=(15, 5))
plt.subplot(131), plt.imshow(cv2.cvtColor(use_image, cv2.COLOR_BGR2RGB))
plt.title('Original Image'), plt.axis('off')
plt.subplot(132), plt.imshow(gray, cmap='gray')
plt.title('Grayscale Image'), plt.axis('off')
plt.subplot(133), plt.imshow(edges, cmap='gray')
plt.title('Canny Edges'), plt.axis('off')
plt.show()
The output edges clearly outline parts of the kofun burial mound and other regions of interest. However, some areas are missed due to the sharp thresholding. The results depend heavily on the choice of min_val and max_val, as well as the image quality.
To enhance edge detection, we can preprocess the image to spread out pixel intensities and reduce noise. This can be achieved using histogram equalization (cv2.equalizeHist()) and Gaussian blurring (cv2.GaussianBlur()).
use_image= img[0:600,700:1300]
gray = cv2.cvtColor(use_image, cv2.COLOR_BGR2GRAY)
gray_og = gray.copy()
gray = cv2.equalizeHist(gray)
gray = cv2.GaussianBlur(gray, (9, 9),1)
plt.figure(figsize=(15, 5))
plt.subplot(121), plt.imshow(gray, cmap='gray')
plt.title('Grayscale Image')
plt.subplot(122)
_= plt.hist(gray.ravel(), 256, [0,256],label="Equalized")
_ = plt.hist(gray_og.ravel(), 256, [0,256],label="Original",histtype='step')
plt.legend()
plt.title('Grayscale Histogram')
This preprocessing evens out the intensity distribution and smooths the image, which helps the Canny Edge Detection algorithm capture more meaningful edges.
Edges are useful, but they only indicate boundaries. To segment enclosed areas, we convert edges into contours and visualize them.
# Edges to contours
contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Calculate contour areas
areas = [cv2.contourArea(contour) for contour in contours]
# Normalize areas for the colormap
normalized_areas = np.array(areas)
if normalized_areas.max() > 0:
    normalized_areas = normalized_areas / normalized_areas.max()
# Create a colormap
cmap = plt.cm.jet
# Plot the contours colored by relative area
plt.figure(figsize=(10, 10))
plt.subplot(1, 2, 1)
plt.imshow(gray, cmap='gray', alpha=0.5)  # Grayscale image in the background
mask = np.zeros_like(use_image)
for contour, norm_area in zip(contours, normalized_areas):
    color = cmap(norm_area)  # Map the normalized area to a color
    color = [int(c * 255) for c in color[:3]]
    cv2.drawContours(mask, [contour], -1, color, -1)  # Draw filled contours on the mask
plt.subplot(1, 2, 2)
plt.imshow(mask[:, :, ::-1])
plt.show()
The above method highlights detected contours with colors representing their relative areas. This visualization helps verify if the contours form closed bodies or merely lines. However, in this example, many contours remain unclosed polygons. Further preprocessing or parameter tuning could address these limitations.
By combining preprocessing and contour analysis, Canny Edge Detection becomes a powerful tool for identifying object boundaries in images. However, it works best when objects are well-defined and noise is minimal. Next, we’ll explore K-Means Clustering to segment the image by color, offering a different perspective on the same data.
K-Means Clustering
K-Means Clustering is a popular method in data science for grouping similar items into clusters, and it’s particularly effective for segmenting images based on color similarity. OpenCV's cv2.kmeans function simplifies this process, making it accessible for tasks like object segmentation, background removal, or visual analysis.
In this section, we will use K-Means Clustering to segment the kofun burial mound image into regions of similar colors.
To start, we apply K-Means Clustering on the RGB values of the image, treating each pixel as a data point.
# K-Means color segmentation
use_image= img[0:600,700:1300]
#use_image = cv2.medianBlur(use_image, 15)
# Reshape image for k-means
pixel_values = use_image.reshape((-1, 3)) if len(use_image.shape) == 3 else use_image.reshape((-1, 1))
pixel_values = np.float32(pixel_values)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 3
attempts=10
ret,label,center=cv2.kmeans(pixel_values,K,None,criteria,attempts,cv2.KMEANS_PP_CENTERS)
centers = np.uint8(center)
segmented_data = centers[label.flatten()]
segmented_image = segmented_data.reshape(use_image.shape)
plt.figure(figsize=(10, 6))
plt.subplot(1,2,1),plt.imshow(use_image[:,:,::-1])
plt.title("RGB View")
plt.subplot(1,2,2),plt.imshow(segmented_image[:,:,[2,1,0]])
plt.title(f"K-Means Segmented Image K={K}")
In the segmented image, the burial mound and surrounding regions are clustered into distinct colors. However, noise and small variations in color lead to fragmented clusters, which can make interpretation challenging.
To reduce noise and create smoother clusters, we can apply a median blur before running K-Means.
# Apply median blur to smooth the image
use_image_blurred = cv2.medianBlur(use_image, 15)
The blurred image results in smoother clusters, reducing noise and making the segmented regions more visually cohesive.
To better understand the segmentation results, we can create a color map of the unique cluster colors using matplotlib's plt.fill_between:
# View colors of segmented image
colors=np.unique(segmented_image[:,:,::-1].reshape(-1,3),axis=0)
plt.figure(figsize=(10, 2))
for i, color in enumerate(colors):
    plt.fill_between([i, i + 1], 0, 1, color=color / 255.0)  # Normalize RGB to [0, 1]
    plt.text(i + 0.5, 0.45, f" {color}", ha='center', va='bottom')
plt.axis('off')
This visualization provides insight into the dominant colors in the image and their corresponding RGB values, which can be useful for further analysis: we could now mask and select areas by their color codes.
The number of clusters (K) significantly impacts the results. Increasing K creates more detailed segmentation, while lower values produce broader groupings. To experiment, we can iterate over multiple K values.
ks = [2,3,5,7,9,12,15]
results = []
plt.figure(figsize=(15, 6))
pixel_values = use_image.reshape((-1, 3)) if len(use_image.shape) == 3 else use_image.reshape((-1, 1))
pixel_values = np.float32(pixel_values)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
attempts=10
for i, k in enumerate(ks):
    print(k)
    ret, label, center = cv2.kmeans(pixel_values.copy(), k, None, criteria, attempts, cv2.KMEANS_PP_CENTERS)
    segmented_data = center[label.flatten()]
    segmented_image = segmented_data.reshape(use_image.shape)
    results.append(segmented_image)
    segmented_image = np.array(segmented_image, dtype=np.uint8)
    plt.subplot(1, len(ks), i + 1)
    plt.imshow(segmented_image[:, :, [2, 1, 0]])
    plt.title(f"K-Means K={k}")
    plt.axis('off')
The clustering results for different values of K reveal a trade-off between detail and simplicity:
- Lower K values (e.g., 2-3): Broad clusters with clear distinctions, suitable for high-level segmentation.
- Higher K values (e.g., 12-15): More detailed segmentation, but at the cost of increased complexity and potential over-segmentation.
K-Means Clustering is a powerful technique for segmenting images based on color similarity. With the right preprocessing steps, it produces clear and meaningful regions. However, its performance depends on the choice of K, the quality of the input image, and the preprocessing applied. Next, we’ll explore the Watershed Algorithm, which uses topographical features to achieve precise segmentation of objects and regions.
Watershed Segmentation
The Watershed Algorithm is inspired by topographic maps, where watersheds divide drainage basins. This method treats grayscale intensity values as elevation, effectively creating "peaks" and "valleys." By identifying regions of interest, the algorithm can segment objects with precise boundaries. It’s particularly useful for separating overlapping objects, making it a great choice for complex scenarios like cell segmentation, object detection, and distinguishing densely packed features.
The first step is preprocessing the image to enhance the features, followed by applying the Watershed Algorithm.
# Read and preprocess the image
img=cv2.imread(files[0])
use_image= img[0:600,700:1300].copy()
# Convert to grayscale and apply threshold for enhanced feature detection
gray = cv2.cvtColor(use_image, cv2.COLOR_BGR2GRAY)
# Binary inversion ensures that foreground objects are white and the background is black
_, thresh = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
# Noise removal using morphological opening; this reduces segmentation artifacts
kernel = np.ones((3, 3), np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
# Distance transform highlights object centers; each pixel's value is its distance to the nearest boundary
dist_transform = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
# Thresholding identifies "sure foreground" regions, i.e. the areas most likely part of objects
_, sure_fg = cv2.threshold(dist_transform, 0.7 * dist_transform.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
# Identify the sure background and unknown regions
sure_bg = cv2.dilate(opening, kernel, iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)
# Create markers for watershed segmentation
markers = cv2.connectedComponents(sure_fg.astype(np.uint8))[1]
markers = markers + 1
markers[unknown == 255] = 0
# Apply watershed; pixels labeled -1 are the boundary lines
markers = cv2.watershed(use_image, markers)
use_image[markers == -1] = [255, 255, 255]  # Mark boundaries in white
The segmented regions and boundaries can be visualized alongside intermediate processing steps.
# Plotting the process
plt.figure(figsize=(15, 6))
plt.subplot(151),plt.imshow(dist_transform)
plt.title('Distance Transform'),plt.axis('off')
plt.subplot(152),plt.imshow(sure_fg)
plt.title('Sure Foreground'),plt.axis('off')
plt.subplot(153),plt.imshow(sure_bg)
plt.title('Sure Background'),plt.axis('off')
plt.subplot(154),plt.imshow(markers)
plt.title('Watershed Segmentation'),plt.axis('off')
plt.subplot(155),plt.imshow(cv2.cvtColor(use_image, cv2.COLOR_BGR2RGB))
plt.title('Watershed Boundaries'),plt.axis('off')
plt.show()
The algorithm successfully identifies distinct regions and draws clear boundaries around objects. In this example, the kofun burial mound is segmented accurately. However, the algorithm’s performance depends heavily on preprocessing steps like thresholding, noise removal, and morphological operations.
Adding advanced preprocessing, such as histogram equalization or adaptive blurring, can further enhance the results. For instance:
# Enhanced preprocessing
gray = cv2.equalizeHist(gray)
gray = cv2.GaussianBlur(gray, (9, 9), 1)
# Repeat the thresholding and watershed steps as before
_, thresh = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
With these adjustments, more regions can be accurately segmented, and noise artifacts can be minimized.
The Watershed Algorithm excels in scenarios requiring precise boundary delineation and separation of overlapping objects. By leveraging preprocessing techniques, it can handle even complex images like the kofun burial mound region effectively. However, its success depends on careful parameter tuning and preprocessing.
Conclusion
Segmentation is an essential tool in image analysis, providing a pathway to isolate and understand distinct elements within an image. This tutorial demonstrated three powerful segmentation techniques—Canny Edge Detection, K-Means Clustering, and Watershed Algorithm—each tailored for specific applications. From outlining the ancient kofun burial mounds in Osaka to clustering urban landscapes and separating distinct regions, these methods highlight the versatility of OpenCV in tackling real-world challenges.
Now go and apply some of these methods to an application of your choice, and comment and share your results. Also, if you know any other simple segmentation methods, please share those too!