The main purpose of the code is to detect the squares of the chessboard, draw lines around them, and count how many of these squares are black and white based on their average pixel intensity. Here’s a detailed explanation of the code's purpose and functionality.
Import Libraries:
- The code begins by importing the necessary libraries: OpenCV (cv2) for image processing, NumPy for numerical operations, Matplotlib for displaying images, and Pandas. It also imports cv2_imshow from google.colab.patches so images can be displayed inside a Colab notebook.
import cv2
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from google.colab.patches import cv2_imshow
Load and Prepare the Image:
- The original image is loaded using cv2.imread().
- The image is converted from BGR (OpenCV's default) to RGB format for proper color representation.
- A grayscale version of the image is created for processing.
original_image = cv2.imread('original.png')
rgb_image = cv2.cvtColor(original_image, cv2.COLOR_BGR2RGB)
gray_image = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
Image Preprocessing:
- Gaussian Blur: A Gaussian blur is applied to the grayscale image to reduce noise and improve edge detection.
- Otsu's Thresholding: Otsu's method is used to convert the blurred grayscale image into a binary image, where pixels are classified as either black or white based on their intensity.
gaussian_blur = cv2.GaussianBlur(gray_image, (5, 5), 0)
ret, otsu_binary = cv2.threshold(gaussian_blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
Edge Detection:
- The Canny edge detection algorithm is applied to the binary image to find edges.
canny = cv2.Canny(otsu_binary, 20, 255)
kernel = np.ones((7, 7), np.uint8)
img_dilation = cv2.dilate(canny, kernel, iterations=1)
Morphological Operations:
- Dilation (the cv2.dilate call in the snippet above) is applied to the Canny output to thicken the detected edges, making it easier to identify lines in the next step.
Line Detection:
- The Hough Line Transform (cv2.HoughLinesP) is used to detect lines in the dilated image. The detected lines are drawn on the image for visualization.
lines = cv2.HoughLinesP(img_dilation, 1, np.pi / 180, threshold=200, minLineLength=100, maxLineGap=50)

if lines is not None:
    for i, line in enumerate(lines):
        x1, y1, x2, y2 = line[0]
        # draw lines
        cv2.line(img_dilation, (x1, y1), (x2, y2), (100, 100, 255), 2)

kernel = np.ones((3, 3), np.uint8)
img_dilation_2 = cv2.dilate(img_dilation, kernel, iterations=1)

plt.imshow(img_dilation_2, cmap="gray")
Contour Detection:
- Contours of potential squares (chessboard cells) are found using cv2.findContours(). This helps identify the rectangular shapes that represent the squares on the chessboard.
board_contours, hierarchy = cv2.findContours(img_dilation_2, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
Filtering Rectangles:
- Each contour is analyzed, and only those with an area between 4000 and 40000 pixels are considered (to filter out noise).
- The contours are approximated to polygons, and only those with four vertices (quadrilaterals) are retained as valid squares.
for contour in board_contours:
    if 4000 < cv2.contourArea(contour) < 40000:
        # Approximate the contour to a simpler shape
        epsilon = 0.02 * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, epsilon, True)

        # Ensure the approximated contour has 4 points (quadrilateral)
        if len(approx) == 4:
            pts = [pt[0] for pt in approx]  # Extract corner coordinates

            # Define the corner points explicitly
            pt1 = tuple(pts[0])
            pt2 = tuple(pts[1])
            pt4 = tuple(pts[2])
            pt3 = tuple(pts[3])

            # Bounding box and center of the detected square
            x, y, w, h = cv2.boundingRect(contour)
            center_x = (x + (x + w)) / 2
            center_y = (y + (y + h)) / 2
Storing Square Centers:
- The centers of valid squares are calculated and stored along with their corner points for further processing.
# (square_centers is assumed to be an empty list created earlier in the script)
square_centers.append([center_x, center_y, pt2, pt1, pt3, pt4])
Sorting Coordinates:
- The detected square centers are sorted by their y-coordinates (row-wise) and then grouped based on proximity in their y-values; the snippet below only seeds the first group, and one way to complete the grouping is sketched after it.
sorted_coordinates = sorted(square_centers, key=lambda x: x[1], reverse=True)
groups = []
current_group = [sorted_coordinates[0]]
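- The snippet above stops after seeding the first group. A minimal sketch of how the row grouping could be completed (the 100-pixel tolerance mirrors the row check used later in the script, and the left-to-right sort within each row is an assumption):
# Group centers into rows: start a new group whenever the y-gap becomes large
for coord in sorted_coordinates[1:]:
    if abs(coord[1] - current_group[-1][1]) < 100:
        current_group.append(coord)       # same row as the previous center
    else:
        groups.append(current_group)      # row finished, start a new one
        current_group = [coord]
groups.append(current_group)              # append the final row

# Within each row, order the squares left-to-right by their x-coordinate
groups = [sorted(g, key=lambda c: c[0]) for g in groups]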
Handling Undetected Squares:
- Additional logic looks for a missing square between two detected neighbors: if two centers share a row (y-values within 100 pixels) but are separated by a large x-gap, the missing square's border is drawn onto the binary image and its estimated center is inserted into the sorted list.
for num in range(len(sorted_coordinates) - 1):
    if abs(sorted_coordinates[num][1] - sorted_coordinates[num + 1][1]) < 100:
        if sorted_coordinates[num + 1][0] - sorted_coordinates[num][0] > 200:
            # Estimate the center of the missing square between the two neighbors
            x = (sorted_coordinates[num + 1][0] + sorted_coordinates[num][0]) / 2
            y = (sorted_coordinates[num + 1][1] + sorted_coordinates[num][1]) / 2

            # Borrow corner points from the neighboring squares
            p1 = sorted_coordinates[num][5]
            p2 = sorted_coordinates[num + 1][4]
            p3 = sorted_coordinates[num + 1][3]
            p4 = sorted_coordinates[num][2]

            # Draw the reconstructed square onto the binary image
            cv2.line(otsu_binary, p1, p2, (255, 255, 0), 7)
            cv2.line(otsu_binary, p1, p4, (255, 255, 0), 7)
            cv2.line(otsu_binary, p2, p3, (255, 255, 0), 7)
            cv2.line(otsu_binary, p3, p4, (255, 255, 0), 7)

            # Insert the recovered square into the sorted list
            sorted_coordinates.insert(num + 1, [x, y, p1, p2, p3, p4])
Counting Black and White Squares:
- For each detected square, its bounding box is extracted from the binary image.
- The average pixel intensity of each square is calculated: if it's greater than 127, it’s counted as white; otherwise, it’s counted as black.
# Counters for the two square colors (assumed to be initialized before the loop)
white_count = 0
black_count = 0

for coordinate in sorted_coordinates:
    points = coordinate[2:]  # Get only the corner points

    # Extract x and y coordinates from the points
    x_coords = [point[0] for point in points]
    y_coords = [point[1] for point in points]

    # Determine the bounding box of the rectangle
    x_min = int(min(x_coords))
    x_max = int(max(x_coords))
    y_min = int(min(y_coords))
    y_max = int(max(y_coords))

    # Extract the rectangle from the binary image
    rectangle = otsu_binary[y_min:y_max, x_min:x_max]

    # Calculate the average intensity of the rectangle
    avg_color = np.mean(rectangle)

    # Count based on average intensity (values above 127 are treated as white)
    if avg_color > 127:
        white_count += 1
    else:
        black_count += 1

print(white_count)
print(black_count)
Output Results:
- Finally, the counts of black and white squares are printed, and the processed binary image with drawn contours is displayed using Matplotlib.
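- The display step is not shown in the snippets above; a minimal sketch, assuming the binary image otsu_binary (with the reconstructed squares drawn on it) is the one being displayed:
# Show the processed binary image (sketch, not taken verbatim from the original script)
plt.figure(figsize=(8, 8))
plt.imshow(otsu_binary, cmap="gray")
plt.axis("off")
plt.show()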
Custom Control
- The code can be customized based on the specific requirements or characteristics of different chessboards or images; a sketch of the main tuning points follows this list.
For example:
- Adjusting thresholds in Otsu's method or Canny edge detection can help in better detecting edges depending on lighting conditions.
- Modifying area constraints when filtering contours can help include or exclude certain sizes of detected squares.
- Additional features such as color detection could be implemented if colored chessboards are being analyzed rather than just black-and-white ones.
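As a rough illustration of the tuning points above, the hard-coded values could be pulled out into named parameters. The parameter names below are made up for this sketch; the values are simply the ones used in the snippets above:
# Illustrative tuning parameters (names are hypothetical, values match the snippets above)
BLUR_KERNEL = (5, 5)                # Gaussian blur kernel size
CANNY_LOW, CANNY_HIGH = 20, 255     # Canny edge-detection thresholds
MIN_AREA, MAX_AREA = 4000, 40000    # accepted contour-area range for squares

gaussian_blur = cv2.GaussianBlur(gray_image, BLUR_KERNEL, 0)
canny = cv2.Canny(otsu_binary, CANNY_LOW, CANNY_HIGH)
# ...and inside the contour loop:
# if MIN_AREA < cv2.contourArea(contour) < MAX_AREA: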
Conclusion
This code effectively demonstrates how to use OpenCV for complex image processing tasks such as detecting shapes (in this case, chessboard squares), analyzing their properties, and counting them based on color criteria. It serves as a practical example of applying computer vision techniques in Python for real-world applications like game analysis or board recognition.