Machine learning applies automatic, data-driven learning methods to obtain accurate predictions for new observations from previously seen data.
Automated data classification is one of the aims of machine learning. We can consider three types of classification algorithms:
Supervised classification: We have a dataset (e.g., handwritten digit images), which we will call training data, where each datum is associated with a label (telling us which digit, 0, 1, ..., 9, corresponds to that image). In the training stage, we build a model from this dataset (the training dataset) using the labels, which let us assess the correct or incorrect classification of an image while building the model. Once we have the model, we can use it to classify new data. At this stage we no longer need labels, unless we want to assess the accuracy of the classification.
Unsupervised classification: The dataset comes without labels (or we will not use them) and the data is classified using their inner structure (properties, characteristics).
Semi-supervised classification: we can apply it when some of the data come with labels, but not all of them. This is typical when our data consist of images: we have access to many images, but they are mostly untagged. These algorithms can be considered a variant of supervised classification with a strategy to overcome the lack of labels for part of the data.
In this lab we will see some application examples of the unsupervised classification algorithm k-means for image classification and processing.
K-means is an unsupervised classification (clustering) algorithm that groups objects into $k$ groups based on their characteristics. The grouping is done by minimizing the sum of the distances between each object and the centroid of its group or cluster. The distance usually used is the squared Euclidean distance.
The algorithm has three steps:
1. Initialization: choose $k$ initial centroids, for example $k$ objects picked at random.
2. Assignment: assign each object to the group with the nearest centroid.
3. Centroid update: recompute the centroid of each group as the average of its objects.
Repeat steps 2 and 3 until the centroids do not move, or move less than a threshold distance in each step.
The k-means algorithm solves an optimization problem and the function to be optimized (minimized) is the sum of the quadratic distances from each object to its cluster centroid.
The objects are represented by $d$-dimensional vectors $\left(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}\right)$ and the k-means algorithm builds $k$ groups $\mathbf{S}=\left\{ S_{1},S_{2},\ldots,S_{k}\right\}$ where the sum of the squared distances from the objects to their centroid is minimized within each group. The problem can be formulated as:
$$ \underset{\mathbf{S}}{\mathrm{min}}\; E\left(\boldsymbol{\mu_{i}}\right)=\underset{\mathbf{S}}{\mathrm{min}}\sum_{i=1}^{k}\sum_{\mathbf{x}_{j}\in S_i}\left\Vert \mathbf{x}_{j}-\boldsymbol{\mu}_{i}\right\Vert ^{2} \quad (1)$$
where $\mathbf{S}$ is the set of groups or clusters and each object $\mathbf{x}_{j}$ is a vector whose elements represent characteristics or attributes. We have $k$ groups or clusters, each with its corresponding centroid $\boldsymbol{\mu}_{i}$.
In each centroid update, from the mathematical point of view, we impose on the function $E\left(\boldsymbol{\mu_{i}}\right)$ the necessary condition for an extremum (a minimum, in this case), which for the quadratic function $(1)$ is
$$ \frac{\partial E}{\partial\boldsymbol{\mu}_{i}}=0\;\Longrightarrow\;\boldsymbol{\mu}_{i}^{(t+1)}=\frac{1}{\left|S_{i}^{(t)}\right|}\sum_{\mathbf{x}_{j}\in S_{i}^{(t)}}\mathbf{x}_{j} $$
and the solution is to take the average of the elements of each group as the new centroid; the update follows from setting the gradient of $E$ with respect to each centroid to zero (a first-order optimality condition, not an iterative gradient descent).
The main advantages of the k-means method are that it is simple and fast. However, the value of $k$ must be chosen beforehand, and the final result depends on the initialization of the centroids. Moreover, the algorithm does not necessarily converge to the global minimum, but possibly only to a local minimum.
Write a script that uses the k-means method to classify two dimensional data. Classify the data generated below in three groups.
To insert the created figures in this notebook
%matplotlib inline
We import the following libraries
import numpy as np
import matplotlib.pyplot as plt
We generate 2D random data for three clusters and represent them
np.random.seed(7)
x1 = np.random.standard_normal((100,2))*0.6+np.ones((100,2))
x2 = np.random.standard_normal((100,2))*0.5-np.ones((100,2))
x3 = np.random.standard_normal((100,2))*0.4-2*np.ones((100,2))+5
X = np.concatenate((x1,x2,x3),axis=0)
plt.plot(X[:,0],X[:,1],'k.')
plt.show()
%run Exercise1.py
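A possible sketch of Exercise1.py, implementing the three steps of k-means directly with NumPy (the reference solution may differ; the seed and plotting details are arbitrary choices):
import numpy as np
import matplotlib.pyplot as plt

# same synthetic data as above
np.random.seed(7)
x1 = np.random.standard_normal((100, 2))*0.6 + np.ones((100, 2))
x2 = np.random.standard_normal((100, 2))*0.5 - np.ones((100, 2))
x3 = np.random.standard_normal((100, 2))*0.4 - 2*np.ones((100, 2)) + 5
X = np.concatenate((x1, x2, x3), axis=0)

def kmeans(X, k, tol=1e-6, max_iter=100):
    # step 1: initialization, pick k distinct data points as initial centroids
    rng = np.random.default_rng(0)
    centroids = X[rng.choice(X.shape[0], k, replace=False)]
    for _ in range(max_iter):
        # step 2: assignment, each point goes to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step 3: update, each centroid becomes the mean of its cluster
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.linalg.norm(new_centroids - centroids) < tol:  # centroids stopped moving
            break
        centroids = new_centroids
    return centroids, labels

centroids, labels = kmeans(X, 3)
for j, c in enumerate('rbg'):
    plt.plot(X[labels == j, 0], X[labels == j, 1], c + '.')
plt.plot(centroids[:, 0], centroids[:, 1], 'mo', markersize=8)
plt.show()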
We will use KMeans from the sklearn library in the other exercises.
from sklearn.cluster import KMeans
We group the points with k-means and $k=3$
n = 3
k_means = KMeans(n_clusters=n)
k_means.fit(X)
The result is three centroids, around which the points are grouped, and a label for each point that indicates which cluster the point belongs to.
centroids = k_means.cluster_centers_
labels = k_means.labels_
Now we draw the points and the centroids, using a different color for the points of each cluster.
plt.plot(X[labels==0,0],X[labels==0,1],'r.', label='cluster 1')
plt.plot(X[labels==1,0],X[labels==1,1],'b.', label='cluster 2')
plt.plot(X[labels==2,0],X[labels==2,1],'g.', label='cluster 3')
plt.plot(centroids[:,0],centroids[:,1],'mo',markersize=8, label='centroids')
plt.legend(loc='best')
plt.show()
Let us classify the digits of the database contained in the sklearn Python library using the k-means algorithm.
We import the usual libraries
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
and the module that contains the k-means function
from sklearn.cluster import KMeans
We import the library that contains our dataset
from sklearn.datasets import load_digits
We load the digit images
digits = load_digits()
data = digits.data
print(data.shape)
The pixels of each square image of $8\times 8$ pixels have been reshaped into a row of $64$ elements. Therefore, each row is an object or datum. The characteristics or properties of each object are the gray intensities of its pixels. That is, we have $64$ properties for each image.
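The dataset also keeps the original $8\times 8$ images in digits.images, which gives a quick way to check this reshaping (both attributes are part of scikit-learn's load_digits):
print(digits.images.shape)  # (1797, 8, 8): the original square images
print(np.array_equal(digits.images.reshape((-1, 64)), digits.data))  # True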
To improve the visualization, we invert the colors
data = 255-data
We fix the random seed used to obtain the initial centroids, so that the results shown here are reproducible.
np.random.seed(1)
Since we have 10 different digits (from 0 to 9) we choose to group the images in $10$ clusters
n = 10
We classify the data with k-means
kmeans = KMeans(n_clusters=n,init='random')
kmeans.fit(data)
Z = kmeans.predict(data)
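Since we predict on the same data we fitted, Z coincides with the labels already stored by the estimator, and scikit-learn offers fit_predict as a one-call equivalent:
print(np.array_equal(Z, kmeans.labels_))  # True: predicting on the training data returns labels_
# equivalent shortcut: Z = kmeans.fit_predict(data)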
We plot the resulting clusters
for i in range(0,n):
    row = np.where(Z==i)[0]       # indices of the elements of cluster i
    num = row.shape[0]            # number of elements of the cluster
    r = int(np.floor(num/10.))    # number of rows in the figure of the cluster
    print("cluster "+str(i))
    print(str(num)+" elements")
    plt.figure(figsize=(10,10))
    for k in range(0, num):
        plt.subplot(r+1, 10, k+1)
        image = data[row[k], :]
        image = image.reshape(8, 8)
        plt.imshow(image, cmap='gray')
        plt.axis('off')
    plt.show()
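Although the clustering never used them, the true labels are available in digits.target, so we can quantify how well the clusters match the actual digits; a minimal check with scikit-learn's adjusted Rand index (any clustering metric would serve):
from sklearn.metrics import adjusted_rand_score

# 1.0 would mean the clusters match the digits perfectly, around 0.0 a random grouping
print(adjusted_rand_score(digits.target, Z))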
Quantization is a lossy compression technique that consists of grouping a whole range of values into a single one. If we quantize the colors of an image, we reduce the number of colors necessary to represent it and the file size decreases. This is important, for example, to display an image on devices that only support a limited number of colors.
We load the image
I = Image.open("shop.jpg")
We transform it into a numpy array
I1 = np.asarray(I,dtype=np.float32)/255
plt.figure(figsize=(12,12))
plt.imshow(I1)
plt.axis('off')
plt.show()
In this case our objects are the pixels and their properties are the intensities of red, green and blue associated with each one. Therefore we have as many data or objects as pixels, and three features or properties for each pixel. We will have as many different colors as different RGB triples. We count the number of different colors.
w, h = I.size
colors = I.getcolors(w * h)
num_colors = len(colors)
num_pixels = w*h
print ('Number of pixels = ', num_pixels)
print ('Number of colors = ', num_colors)
To apply k-means, we need an array with as many rows as pixels and for each row/pixel, 3 columns, one for each color intensity (red, green and blue).
We extract the three channels
R = I1[:,:,0]
G = I1[:,:,1]
B = I1[:,:,2]
We reshape each channel matrix into a column vector and build the three-column matrix described above.
XR = R.reshape((-1, 1))
XG = G.reshape((-1, 1))
XB = B.reshape((-1, 1))
X = np.concatenate((XR,XG,XB),axis=1)
We will group the 172388 colors into 60 groups or new colors, which will correspond to the centroids obtained with the k-means
n = 60
k_means = KMeans(n_clusters=n)
k_means.fit(X)
The final centroids are the new colors, and each pixel has been assigned a label that indicates the cluster it belongs to.
centroids = k_means.cluster_centers_
labels = k_means.labels_
We reconstruct the image matrix from the labels and the colors (intensities of red, green and blue) of the centroids.
m = XR.shape
for i in range(m[0]):
    XR[i] = centroids[labels[i]][0]
    XG[i] = centroids[labels[i]][1]
    XB[i] = centroids[labels[i]][2]
XR.shape = R.shape
XG.shape = G.shape
XB.shape = B.shape
XR = XR[:, :, np.newaxis]
XG = XG[:, :, np.newaxis]
XB = XB[:, :, np.newaxis]
Y = np.concatenate((XR,XG,XB),axis=2)
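As a side note, the loop above can be replaced by NumPy fancy indexing, which maps every label to its centroid color in one step and gives the same result:
# each pixel takes the RGB color of its cluster centroid
Y = centroids[labels].reshape(I1.shape)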
We view the new image with only 60 colors
plt.figure(figsize=(12,12))
plt.imshow(Y)
plt.axis('off')
plt.show()
The number of pixels is that of the original photo, and the number of colors of this image is equal to the number of centroids.
print ('Number of pixels = ', num_pixels)
print ('Number of colors = ', n)
We save the image in jpg format
Y1 = np.floor(Y*255)
Image.fromarray(Y1.astype(np.uint8)).save("shop2.jpg")
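One caveat: JPEG compression is itself lossy, so the saved file may contain more than the 60 computed colors; saving to a lossless format such as PNG preserves them exactly (shop2.png is a hypothetical output name):
# PNG is lossless, so exactly the 60 centroid colors are kept
Image.fromarray(Y1.astype(np.uint8)).save("shop2.png")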
Count the number of colors in the following image, holi.jpg. Create an image where the number of colors has been reduced to 10 using k-means.
%run Exercise2.py
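A possible sketch of Exercise2.py, reusing the quantization steps above (the reference solution may differ; only the file name holi.jpg comes from the statement):
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from sklearn.cluster import KMeans

I = Image.open("holi.jpg")
w, h = I.size
print('Number of colors = ', len(I.getcolors(w * h)))

# one row per pixel, three columns with the R, G, B intensities
X = (np.asarray(I, dtype=np.float32) / 255).reshape((-1, 3))
k_means = KMeans(n_clusters=10)
labels = k_means.fit_predict(X)

# replace every pixel by the color of its centroid
Y = k_means.cluster_centers_[labels].reshape((h, w, 3))
plt.imshow(Y)
plt.axis('off')
plt.show()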
Obtain the third image from the first one (che-guevara.jpg) shown below.
%run Exercise3.py
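Without the figures we cannot know the exact target, but the transformation is again color quantization with a small number of clusters; a sketch assuming an RGB input and a two-color result (adjust n_clusters to match the target figure):
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from sklearn.cluster import KMeans

I = Image.open("che-guevara.jpg")
I1 = np.asarray(I, dtype=np.float32) / 255
X = I1.reshape((-1, 3))

# quantize to very few colors; n_clusters=2 is a guess for a poster-like result
k_means = KMeans(n_clusters=2)
labels = k_means.fit_predict(X)
Y = k_means.cluster_centers_[labels].reshape(I1.shape)
plt.imshow(Y)
plt.axis('off')
plt.show()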
Segmentation divides an image into regions with coherent internal properties. You can segment an image using color.
The process is similar to image quantization. The difference is the aim of the pixel grouping: in segmentation, we group the pixels to separate the significant elements of an image and thus, be able to extract certain quantitative information. For example, compute the size of a tumor from medical images, the percentage of mica in a granitic rock, the area of a lake from an aerial photo.
Let us start with this last example. The following image of Lake Victoria, taken from a satellite, covers an area of approximately 210000 km$^2$; from the percentage of the image area occupied by the lake, we can compute the area of the lake.
I = Image.open("VictoriaLake.jpg")
plt.figure(figsize=(8,8))
plt.imshow(I)
plt.axis('off')
plt.show()
To simplify the problem, we convert the color image to black and white
I1 = I.convert('L')
I2 = np.asarray(I1,dtype=np.float64)  # np.float was removed in recent NumPy; use float64
plt.figure(figsize=(8,8))
plt.imshow(I2,cmap='gray')
plt.axis('off')
plt.show()
We reshape the matrix to apply k-means. Now it has as many rows as pixels but only one column, the gray intensity.
X = I2.reshape((-1, 1))
We group the pixels into three clusters with k-means
k_means = KMeans(n_clusters=3)
k_means.fit(X)
We extract the centroids
and labels
for each pixel
centroids = k_means.cluster_centers_
labels = k_means.labels_
We build the image using only the three intensities of the centroids
I2_compressed = np.choose(labels, centroids)  # replace each pixel label by its centroid gray intensity
I2_compressed.shape = I2.shape
The rebuilt image
plt.figure(figsize=(8,8))
plt.imshow(I2_compressed,cmap='gray')
plt.axis('off')
plt.show()
We count the number of pixels of black color (the ones with the lowest gray intensity)
I2 = (I2_compressed-np.min(I2_compressed))/(np.max(I2_compressed)-np.min(I2_compressed))*255
I2 = Image.fromarray(I2.astype(np.uint8))
w, h =I2.size
colors = I2.getcolors(w * h)
print (colors)
There are 79002 pixels with intensity 18, the darkest one. We calculate their percentage with respect to the total number of pixels in the photo, which gives the percentage of the area of the lake with respect to the total area represented by the photo.
print ('Area = ', float(210000)*float(colors[0][0])/float(w*h), 'km2')
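The line above assumes that colors[0] is the darkest color; getcolors does not guarantee any particular order, so a safer variant selects the darkest entry explicitly (same computation otherwise):
# colors holds (count, intensity) pairs; take the one with the lowest intensity
count_dark, _ = min(colors, key=lambda c: c[1])
print('Area = ', 210000 * count_dark / (w * h), 'km2')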
Calculate the percentage of mica (the darkest colored mineral) in the granite rock whose section is shown in the photo. Run the k-means function 10 times and keep the run that gives the smallest error, where the error is given by error = k_means.inertia_.
%run Exercise4.py
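A possible sketch of Exercise4.py (the reference solution may differ; granite.jpg is a hypothetical file name and three clusters is an assumption carried over from the lake example). Note that scikit-learn's n_init parameter automates this kind of restart, but here we follow the statement and loop explicitly:
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

I = Image.open("granite.jpg").convert('L')   # hypothetical file name; grayscale
X = np.asarray(I, dtype=np.float64).reshape((-1, 1))

# run k-means 10 times, keeping the run with the smallest inertia (error)
best = None
for _ in range(10):
    k_means = KMeans(n_clusters=3, init='random', n_init=1)
    k_means.fit(X)
    if best is None or k_means.inertia_ < best.inertia_:
        best = k_means

# mica is the darkest mineral: the cluster whose centroid has the lowest intensity
dark = np.argmin(best.cluster_centers_)
percentage = 100 * np.sum(best.labels_ == dark) / X.shape[0]
print('Mica = ', percentage, '%')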