k-means algorithm applied to image classification and processing

Classification

Machine learning applies automatic, data-driven learning methods to obtain accurate predictions on new observations from previously collected data.

Automated data classification is one of the aims of machine learning. We can consider three types of classification algorithms:

  • Supervised classification: we have a dataset (e.g., handwritten digit images), called training data, where each datum is associated with a label (telling us which digit, 0, 1, ..., 9, corresponds to that image). In the training stage, we build a model from this dataset using the labels, which let us assess whether an image is correctly or incorrectly classified while the model is being built. Once we have the model, we can use it to classify new data. At this stage we no longer need the labels, unless we want to assess the accuracy of the classification.

  • Unsupervised classification: the dataset comes without labels (or we do not use them) and the data are classified using their inner structure (properties, characteristics).

  • Semi-supervised classification: applicable when some, but not all, of the data come with labels. This is typical when our data consist of images: we have access to many images, but most of them are untagged. These algorithms can be considered a variant of supervised classification with a strategy to overcome the lack of labels for part of the data.

In this lab we will see some application examples of the unsupervised classification algorithm k-means for image classification and processing.

k-means algorithm

K-means is an unsupervised classification (clustering) algorithm that groups objects into k groups based on their characteristics. The grouping is done by minimizing the sum of the distances between each object and the centroid of its group or cluster. The distance usually used is the squared Euclidean distance.

The algorithm has three steps:

  1. Initialization: once the number of groups, $k$, has been chosen, $k$ centroids are established in the data space, for instance by choosing them randomly.
  2. Assignment of objects to the centroids: each object of the data is assigned to its nearest centroid.
  3. Centroids update: the position of each group's centroid is updated, taking as the new centroid the average position of the objects belonging to that group.

Steps 2 and 3 are repeated until the centroids no longer move, or move less than a threshold distance at each step.


The k-means algorithm solves an optimization problem: the function to be minimized is the sum of the squared distances from each object to the centroid of its cluster.

The objects are represented by $d$-dimensional vectors $\left(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}\right)$ and the k-means algorithm builds $k$ groups $\mathbf{S}=\left\{ S_{1},S_{2},\ldots,S_{k}\right\}$, minimizing within each group the sum of the distances from the objects to the group centroid. The problem can be formulated as:

$$ \underset{\mathbf{S}}{\mathrm{min}}\; E\left(\boldsymbol{\mu_{i}}\right)=\underset{\mathbf{S}}{\mathrm{min}}\sum_{i=1}^{k}\sum_{\mathbf{x}_{j}\in S_i}\left\Vert \mathbf{x}_{j}-\boldsymbol{\mu}_{i}\right\Vert ^{2} \quad (1)$$

where the objects $\mathbf{x}_{j}$ are represented by vectors, each of whose elements is a characteristic or attribute, and $\boldsymbol{\mu}_{i}$ is the centroid of the cluster $S_{i}$. We have $k$ groups or clusters, each with its corresponding centroid.

At each centroid update, from the mathematical point of view, we impose on the function $E\left(\boldsymbol{\mu}_{i}\right)$ the necessary condition for an extremum (a minimum, in this case), which for the quadratic function $(1)$ is

$$ \frac{\partial E}{\partial\boldsymbol{\mu}_{i}}=0\;\Longrightarrow\;\boldsymbol{\mu}_{i}^{(t+1)}=\frac{1}{\left|S_{i}^{(t)}\right|}\sum_{\mathbf{x}_{j}\in S_{i}^{(t)}}\mathbf{x}_{j} $$

and the solution is to take the average of the elements of each group as its new centroid. This alternation of assignment and update steps is known as Lloyd's algorithm.

The main advantages of the k-means method are that it is simple and fast. However, the value of $k$ must be chosen beforehand, the final result depends on the initialization of the centroids, and the algorithm is not guaranteed to converge to the global minimum, only to a local one.
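The three steps above translate almost directly into NumPy. Below is a minimal sketch of Lloyd's algorithm, assuming the data come as an $n\times d$ array X (this is an illustration, not the Exercise1.py script used later; empty clusters are not handled):

import numpy as np

def kmeans(X, k, tol=1e-6, max_iter=100):
    rng = np.random.default_rng(0)
    centroids = X[rng.choice(len(X), size=k, replace=False)]   # step 1: random initialization
    for _ in range(max_iter):
        # step 2: assign each object to its nearest centroid
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # step 3: move each centroid to the mean of its group
        new_centroids = np.array([X[labels == i].mean(axis=0) for i in range(k)])
        if np.linalg.norm(new_centroids - centroids) < tol:    # stop when centroids settle
            break
        centroids = new_centroids
    return centroids, labels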


Exercise 1

Write a script that uses the k-means method to classify two-dimensional data. Classify the data generated below into three groups.


To display the created figures inline in this notebook

In [1]:
%matplotlib inline

We import the following libraries

In [2]:
import numpy as np
import matplotlib.pyplot as plt

We generate 2D random data for three clusters and represent them

In [3]:
np.random.seed(7)

x1 = np.random.standard_normal((100,2))*0.6+np.ones((100,2))
x2 = np.random.standard_normal((100,2))*0.5-np.ones((100,2))
x3 = np.random.standard_normal((100,2))*0.4-2*np.ones((100,2))+5
X = np.concatenate((x1,x2,x3),axis=0)

plt.plot(X[:,0],X[:,1],'k.')
plt.show()
In [4]:
%run Exercise1.py
***********   iteration  0
***********   iteration  1
***********   iteration  2
***********   iteration  3
***********   iteration  4
***********   iteration  5
***********   iteration  6
***********   iteration  7

We will use KMeans from the sklearn library in the remaining exercises.

In [5]:
from sklearn.cluster import KMeans

We group the points with k-means and $k=3$

In [6]:
n = 3
k_means = KMeans(n_clusters=n)
k_means.fit(X)
Out[6]:
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
    n_clusters=3, n_init=10, n_jobs=1, precompute_distances='auto',
    random_state=None, tol=0.0001, verbose=0)

The result is three centroids around which the points are grouped and, for each point, a label indicating the cluster it belongs to.

In [7]:
centroids = k_means.cluster_centers_
labels = k_means.labels_

Now we draw the points and the centroids, using a different color for the points of each cluster.

In [8]:
plt.plot(X[labels==0,0],X[labels==0,1],'r.', label='cluster 1')
plt.plot(X[labels==1,0],X[labels==1,1],'b.', label='cluster 2')
plt.plot(X[labels==2,0],X[labels==2,1],'g.', label='cluster 3')

plt.plot(centroids[:,0],centroids[:,1],'mo',markersize=8, label='centroids')

plt.legend(loc='best')
plt.show()

Digit image classification with k-means

Let us classify the digits of the dataset included in the Python library sklearn using the k-means algorithm.

We import the usual libraries

In [9]:
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

and the library that contains the k-means function

In [10]:
from sklearn.cluster import KMeans

We import the library that contains our dataset

In [11]:
from sklearn.datasets import load_digits

We load the digit images

In [12]:
digits = load_digits()
data = digits.data
In [13]:
print(data.shape)
(1797, 64)

The pixels of each square $8\times 8$ image have been reshaped into a row of $64$ elements. Therefore, each row is an object or datum. The characteristics or properties of each object are the gray intensities of its pixels. That is, we have $64$ properties for each image.
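As a quick check (an addition to the original text), a row can be reshaped back into its $8\times 8$ image; load_digits also exposes the same pixels in 2D form as digits.images:

print(np.array_equal(data[0].reshape(8, 8), digits.images[0]))   # True: same pixels, two layouts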

To improve the visualization, we invert the colors

In [14]:
data = 255-data

We fix the seed used to choose the initial centroids, so that the results obtained here are reproducible.

In [15]:
np.random.seed(1)

Since we have 10 different digits (from 0 to 9), we choose to group the images into $10$ clusters

In [16]:
n = 10

We classify the data with k-means

In [17]:
kmeans = KMeans(n_clusters=n,init='random')
kmeans.fit(data)
Z = kmeans.predict(data)

We plot the resulting clusters

In [18]:
for i in range(0,n):

    row = np.where(Z==i)[0]       # indices of the elements of cluster i
    num = row.shape[0]            # number of elements in the cluster
    r = int(np.floor(num/10.))    # number of full rows in the cluster figure
                                  # (int() needed: plt.subplot rejects float arguments)
    print("cluster "+str(i))
    print(str(num)+" elements")

    plt.figure(figsize=(10,10))
    for k in range(0, num):
        plt.subplot(r+1, 10, k+1)
        image = data[row[k], :]
        image = image.reshape(8, 8)
        plt.imshow(image, cmap='gray')
        plt.axis('off')
    plt.show()
cluster 0
182 elements
cluster 1
156 elements
cluster 2
197 elements
cluster 3
179 elements
cluster 4
180 elements
cluster 5
176 elements
cluster 6
166 elements
cluster 7
242 elements
cluster 8
93 elements
cluster 9
226 elements
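Since load_digits also provides the true labels in digits.target, we can optionally measure how well the clusters align with the actual digits, for instance by counting, in each cluster, the images that match the cluster's most frequent digit (this check is an addition, not part of the original lab):

y = digits.target
matches = 0
for i in range(n):
    members = y[Z == i]                        # true digits of the images in cluster i
    if members.size > 0:
        matches += np.bincount(members).max()  # images agreeing with the majority digit
print('fraction of images matching their cluster majority:', matches/len(y))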

Image quantization with k-means

Quantization is a lossy compression technique that consists of grouping a whole range of values into a single one. If we quantize the colors of an image, we reduce the number of colors necessary to represent it and the file size decreases. This is important, for example, to display an image on devices that only support a limited number of colors.
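A rough back-of-the-envelope calculation (raw storage only, ignoring file-format compression; the pixel count is taken from the shop.jpg example below) shows why this reduces size: each pixel then needs only an index into a small palette instead of a full RGB triple.

import numpy as np

num_pixels = 272640                     # pixels of the shop.jpg example below
k = 60                                  # palette size after quantization
raw_bytes = num_pixels * 3              # 3 bytes (R, G, B) per pixel
index_bits = int(np.ceil(np.log2(k)))   # 6 bits are enough to address 60 colors
quantized_bytes = num_pixels * index_bits / 8 + k * 3   # indices plus the palette
print(raw_bytes, '->', quantized_bytes)                 # 817920 -> 204660.0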

We load the image

In [20]:
I = Image.open("shop.jpg")

We transform it into a numpy array

In [21]:
I1 = np.asarray(I,dtype=np.float32)/255
plt.figure(figsize=(12,12))
plt.imshow(I1)
plt.axis('off')
plt.show()

In this case our objects are the pixels and their properties are the intensities of red, green and blue associated with each one. Therefore we have as many data or objects as pixels, and three features or properties for each pixel. We will have as many different colors as different RGB triples. We count the number of different colors.

In [22]:
w, h = I.size
colors = I.getcolors(w * h)
num_colors = len(colors) 
num_pixels = w*h 

print ('Number of pixels = ', num_pixels)
print ('Number of colors = ', num_colors)
Number of pixels =  272640
Number of colors =  172388
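An equivalent count can be obtained with NumPy alone (an optional alternative to PIL's getcolors):

print(len(np.unique(np.asarray(I).reshape(-1, 3), axis=0)))   # should match the count above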

To apply k-means, we need an array with as many rows as pixels and for each row/pixel, 3 columns, one for each color intensity (red, green and blue).

We extract the three channels

In [23]:
R = I1[:,:,0]
G = I1[:,:,1]
B = I1[:,:,2]

We convert the matrices into column vectors and build the three-column matrix described above.

In [24]:
XR = R.reshape((-1, 1))  
XG = G.reshape((-1, 1)) 
XB = B.reshape((-1, 1)) 

X = np.concatenate((XR,XG,XB),axis=1)
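Since the pixels are stored row by row, the same matrix can also be obtained in a single step (a quick equivalent, assuming I1 as defined above):

X_alt = I1.reshape(-1, 3)     # (height*width, 3): one RGB row per pixel
print(np.allclose(X, X_alt))  # True: same matrix as the channel-wise construction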

We will group the 172388 colors into 60 groups or new colors, which will correspond to the centroids obtained with k-means

In [25]:
n = 60
k_means = KMeans(n_clusters=n)
k_means.fit(X)
Out[25]:
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
    n_clusters=60, n_init=10, n_jobs=1, precompute_distances='auto',
    random_state=None, tol=0.0001, verbose=0)

The final centroids are the new colors, and each pixel has been assigned a label that indicates the cluster it belongs to.

In [26]:
centroids = k_means.cluster_centers_
labels = k_means.labels_

We reconstruct the matrix of the image from the labels and the colors (intensities of red, green and blue) of the centroids.

In [27]:
m = XR.shape
for i in range(m[0]):
    XR[i] = centroids[labels[i]][0] 
    XG[i] = centroids[labels[i]][1] 
    XB[i] = centroids[labels[i]][2] 
XR.shape = R.shape 
XG.shape = G.shape
XB.shape = B.shape 
XR = XR[:, :, np.newaxis]  
XG = XG[:, :, np.newaxis]
XB = XB[:, :, np.newaxis]

Y = np.concatenate((XR,XG,XB),axis=2)
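The pixel-by-pixel loop above is slow in pure Python; an equivalent, much faster vectorized reconstruction indexes the centroid table directly with the label of each pixel (a sketch using the same centroids and labels arrays):

h, w = R.shape                            # original image dimensions
Y = centroids[labels].reshape(h, w, 3)    # one centroid color per pixel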

We view the new image with only 60 colors

In [28]:
plt.figure(figsize=(12,12))
plt.imshow(Y)
plt.axis('off')
plt.show()

The number of pixels is that of the initial photo, and the number of colors of this image equals the number of centroids.

In [29]:
print ('Number of pixels = ', num_pixels)
print ('Number of colors = ', n)
Number of pixels =  272640
Number of colors =  60

We save the image in jpg format

In [30]:
Y1 = np.floor(Y*255)
Image.fromarray(Y1.astype(np.uint8)).save("shop2.jpg")
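Note that JPEG is itself lossy and may reintroduce extra colors when saving; to preserve exactly the 60 quantized colors, one could save in a lossless format instead (an optional variant, with the hypothetical filename shop2.png):

Image.fromarray(Y1.astype(np.uint8)).save("shop2.png")   # PNG stores the pixels losslessly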

Exercise 2

Count the number of colors in the following image, holi.jpg. Create an image where the number of colors has been reduced to 10 using k-means.

In [44]:
%run Exercise2.py
Number of pixels =  229760
Number of colors =  118708
Number of pixels =  229760
Number of colors =  10

Exercise 3

Obtain the third image from the first one (che-guevara.jpg) shown below.

In [32]:
%run Exercise3.py

Image segmentation with k-means

Segmentation divides an image into regions with coherent internal properties. You can segment an image using color.

The process is similar to image quantization. The difference is the aim of the pixel grouping: in segmentation, we group the pixels to separate the significant elements of an image and thus be able to extract quantitative information, for example, the size of a tumor from medical images, the percentage of mica in a granite rock, or the area of a lake from an aerial photo.

Let us start with this last example. The following satellite image of Lake Victoria covers an area of approximately 210000 km$^2$; from the percentage of the image occupied by the lake, we can compute the area of the lake.
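The estimate rests on a simple proportion between pixel counts and areas:

$$ A_{\mathrm{lake}} \approx A_{\mathrm{image}}\cdot\frac{\#\,\mathrm{lake\ pixels}}{\#\,\mathrm{total\ pixels}} $$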

In [33]:
I = Image.open("VictoriaLake.jpg")

plt.figure(figsize=(8,8))
plt.imshow(I)
plt.axis('off')
plt.show()

To simplify the problem, we convert the color image to black and white

In [34]:
I1 = I.convert('L')
I2 = np.asarray(I1,dtype=float)   # the builtin float (the np.float alias has been removed from NumPy)

plt.figure(figsize=(8,8))
plt.imshow(I2,cmap='gray')
plt.axis('off')
plt.show()

We reshape the matrix to apply k-means. Now it has as many rows as pixels but only one column, the gray intensity.

In [35]:
X = I2.reshape((-1, 1))

We group the pixels into three clusters with k-means

In [36]:
k_means = KMeans(n_clusters=3)
k_means.fit(X) 
Out[36]:
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
    n_clusters=3, n_init=10, n_jobs=1, precompute_distances='auto',
    random_state=None, tol=0.0001, verbose=0)

We extract the centroids and labels for each pixel

In [37]:
centroids = k_means.cluster_centers_
labels = k_means.labels_

We build the image using only the three intensities of the centroids

In [38]:
I2_compressed = np.choose(labels, centroids)   # for each pixel, the gray value of its cluster centroid
I2_compressed.shape = I2.shape

The rebuilt image

In [39]:
plt.figure(figsize=(8,8))
plt.imshow(I2_compressed,cmap='gray')
plt.axis('off')
plt.show()

We count the number of pixels of black color (the ones with the lowest gray intensity)

In [40]:
I2 = (I2_compressed-np.min(I2_compressed))/(np.max(I2_compressed)-np.min(I2_compressed))*255
I2 = Image.fromarray(I2.astype(np.uint8))
w, h =I2.size
colors = I2.getcolors(w * h)
print (colors)
[(79002, 0), (110746, 161), (47372, 255)]

There are 79002 pixels with intensity 0 (the darkest cluster, corresponding to the lake). We calculate its percentage with respect to the total number of pixels in the photo, which gives the percentage of the lake's area with respect to the total area represented by the photo.

In [41]:
print ('Area = ',  float(210000)*float(colors[0][0])/float(w*h), 'km2')
Area =  69966.34615384616 km2

Exercise 4

Calculate the percentage of mica (the darkest colored mineral) in the granite rock whose section is shown in the photo. Execute the k-means function 10 times and keep the run that gives the smallest error, where error = k_means.inertia_.
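One possible skeleton for this repeated-run strategy (a sketch, not the actual Exercise3.py/Exercise4.py scripts; X stands for the pixel matrix, built as in the lake example):

from sklearn.cluster import KMeans

best = None
for _ in range(10):
    km = KMeans(n_clusters=3, init='random', n_init=1).fit(X)   # one random initialization per run
    print(km.inertia_)                                          # error of this run
    if best is None or km.inertia_ < best.inertia_:
        best = km                                               # keep the smallest-error run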

In [45]:
%run Exercise4.py
    Error     Mica percentage in the granite
 374889848      22.18
 374889848      22.18
 374794828      21.89
 374889848      22.18
 374889848      22.18
 374844787      21.62
 374889848      22.18
 374889848      22.18
 374889848      22.18
 374844787      21.62
[(165788, 0), (230659, 140), (370529, 255)]
Error min.    Mica percentage in the granite
 374844787      21.62
