Archive | August 2013

Image Retrieval: Color Coherence Vector

It’s recommended to have a look at the introductory post on Image Retrieval (below in this archive) first, since some of its terms and expressions are used in this post.

In the last post we talked about two common color descriptors: the Global Color Histogram (GCH) and the Local Color Histogram (LCH). We discussed the main problem with GCH, namely that it carries no information about the spatial distribution of colors, then looked at how LCH attempts to solve this problem, and finally showed some drawbacks of LCH.

Color descriptors are used to differentiate between images and compute their similarities by describing their colors.

Now we’ll discuss one of the most efficient color descriptors that does contain information about color spatial distribution: the Color Coherence Vector (CCV).

Color Coherence Vector

The Color Coherence Vector (CCV) is a more complex method than the Color Histogram. It classifies each pixel as either coherent or incoherent: a coherent pixel is part of a large connected component (CC) of the same discretized color, while an incoherent pixel is part of a small one. Of course, we first have to define the criterion used to decide whether a connected component is large or not.

Feature extraction algorithm

1. Blur the image (replace each pixel’s value with the average value of the 8 adjacent pixels surrounding it).
2. Discretize the color space (the image’s colors) into n distinct colors.
3. Classify each pixel as either coherent or incoherent:

  • Find the connected components for each discretized color.
  • Choose a value for tau (a user-specified threshold, normally about 1% of the image’s size).
  • If a connected component contains a number of pixels greater than or equal to tau, its pixels are considered coherent; otherwise they are incoherent.

4. For each color compute two values (C and N).

  • C is the number of coherent pixels.
  • N is the number of incoherent pixels.

It’s clear that the sum of C and N over all colors equals the total number of pixels in the image.
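The extraction steps above can be sketched in Python. This is a minimal single-channel sketch using NumPy and SciPy; the function name `ccv`, the list-of-pairs return format, and the 3×3 mean filter (as an approximation of averaging the 8 neighbours) are my assumptions, not the original implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def ccv(image, n_colors=3, tau=4):
    """Color Coherence Vector of a single-channel image.

    Returns a list of (coherent, incoherent) pixel counts, one pair
    per discretized color.
    """
    # Step 1: blur (3x3 mean filter, approximating the 8-neighbour average).
    blurred = uniform_filter(image.astype(float), size=3)

    # Step 2: discretize the color space into n_colors buckets.
    top = blurred.max() + 1e-9
    disc = np.minimum((blurred / top * n_colors).astype(int), n_colors - 1)

    # Steps 3-4: per color, label the connected components and split
    # pixels into coherent (component size >= tau) and incoherent.
    vector = []
    for c in range(n_colors):
        labels, n_cc = label(disc == c)  # 4-connected components
        coherent = incoherent = 0
        for cc in range(1, n_cc + 1):
            size = int((labels == cc).sum())
            if size >= tau:
                coherent += size
            else:
                incoherent += size
        vector.append((coherent, incoherent))
    return vector
```

Note that, as stated above, the coherent and incoherent counts always sum to the number of pixels.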

Matching function

To compare two images a and b, let:
Ci(a) : the number of coherent pixels of color i in image a.
Ni(a) : the number of incoherent pixels of color i in image a.

The distance is the sum, over all n discretized colors, of the differences between the coherent counts and between the incoherent counts:

D(a, b) = Σi ( |Ci(a) − Ci(b)| + |Ni(a) − Ni(b)| )
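In code, this matching step is just an L1-style comparison of the two vectors. A minimal sketch; the function name `ccv_distance` and the list-of-pairs representation of a CCV are my assumptions:

```python
def ccv_distance(v_a, v_b):
    """Sum over colors of |C_i(a) - C_i(b)| + |N_i(a) - N_i(b)|,
    where each CCV is a list of (coherent, incoherent) pairs."""
    return sum(abs(ca - cb) + abs(na - nb)
               for (ca, na), (cb, nb) in zip(v_a, v_b))
```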

Let’s take an example to make the algorithm’s steps clear.
Assume the image has 30 colors instead of 16,777,216 (256*256*256).

[Figure: the example image’s pixel values]

Now we’ll discretize the colors into only three colors (0–9, 10–19, 20–29).

[Figure: the image after discretization into three colors]

Assuming that our tau is 4:
For color 0 we have 2 CCs (8 coherent pixels).
For color 1 we have 1 CC (8 coherent pixels).
For color 2 we have 2 CCs (6 coherent pixels and 3 incoherent pixels).
So finally our feature vector is:

Color          | 0 | 1 | 2
C (coherent)   | 8 | 8 | 6
N (incoherent) | 0 | 0 | 3

Drawbacks of Color Coherence Vector

Now we see that Color Coherence Vector method considers information about color spatial distribution between pixels in its coherent component. But this method has some drawbacks. The remaining part of this post will discuss two main drawbacks of it.

Coherent pixels in CCV are the pixels belonging to large connected components in the image. But what if we merged all of those components into a single component? We would end up with one component whose pixel count equals the total number of pixels in the original components, so the CCV would not change.

To make this clear, look at these pictures, assuming tau equals 8.

[Figure: two different pictures whose large components have the same total size]

Although they are different pictures, they have the same CCV.
Another problem is that CCV ignores the positions of these large connected components relative to each other. These pictures have the same CCV but a different appearance.

[Figure: two pictures with identical CCVs but differently positioned components]

There are many solutions to these problems. Most of them add another dimension to the feature vector: each component’s position relative to the others. This dimension is then used in the comparison to differentiate between pictures that have the same CCV.

Here you’ll find a fast MATLAB implementation on GitHub.


Image Retrieval: Global and Local Color Histogram

In the previous post we talked about Image Retrieval and Image Descriptors. Now we will introduce one of the most common and important descriptors that doesn’t include information about color spatial distribution: the Color Histogram.

Color Histogram

A color histogram is “a representation of the distribution of colors in an image” (from Wikipedia).
The color histogram represents the image from another perspective: it counts similar pixels and stores the counts in bins, describing the number of pixels in each range of colors (each bin) independently.
Note: the Color Histogram is a color descriptor, and as we learned in the previous post, each descriptor consists of a feature extraction algorithm and a matching function.

Color Histogram is divided into:

  • Global Color Histogram (GCH).
  • Local Color Histogram (LCH).

Global Color Histogram

GCH is the best-known color histogram method used to detect similar images.
Feature extraction algorithm:

  1. Discretize the color space (the image’s colors) into n colors (for example, you may use just 8*8*8 = 512 colors instead of 256*256*256 = 16,777,216).
  2. Create a bin for each color.
  3. Count the number of pixels of each color and store the counts in the histogram’s bins.
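These steps can be sketched in NumPy. A minimal sketch for 8-bit RGB images; the function name `gch`, the `bins_per_channel` parameter, and the normalization are my assumptions:

```python
import numpy as np

def gch(image, bins_per_channel=8):
    """Global Color Histogram: discretize each RGB channel into
    `bins_per_channel` levels and count pixels per (R, G, B) bin.
    Returns a normalized, flattened histogram of size bins**3."""
    # Map 0..255 channel values into 0..bins_per_channel-1.
    disc = (image.astype(int) * bins_per_channel) // 256
    # Combine the three per-channel indices into one bin index.
    idx = (disc[..., 0] * bins_per_channel + disc[..., 1]) * bins_per_channel + disc[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()
```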

Matching function:
The most common matching function for this method is Euclidean distance.
To compare two images A and B, let:

A(R,G,B) : the number of pixels of color (R,G,B) (for example, A(6,2,4) is the number of discretized pixels with R=6, G=2 and B=4).
D : the Euclidean distance between the two histograms:

D(A, B) = sqrt( Σ(R,G,B) ( A(R,G,B) − B(R,G,B) )² )

Remember : the larger the distance value, the less similar the images are.
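The Euclidean matching function is a one-liner over the flattened histograms. A sketch; the function name `gch_distance` is my assumption:

```python
import numpy as np

def gch_distance(h_a, h_b):
    """Euclidean distance between two flattened color histograms:
    the larger the value, the less similar the images."""
    return float(np.sqrt(((np.asarray(h_a) - np.asarray(h_b)) ** 2).sum()))
```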

Look at this example

[Figure: three example images A, B and C with their global color histograms]

Here C has the same color histogram as B but A is different from them.

Using the Euclidean distance on these color histograms we find that D(A,C) = D(A,B) and D(B,C) = 0. There’s a problem here: B and C are not similar at all, so D(B,C) shouldn’t be zero, and D(A,C) should be smaller than D(A,B) because A and C have the same pixels except for only two.

That’s why we say GCH doesn’t include information about color spatial distribution.

There’s an attempt to solve this problem which is the next part of this post.

Local Color Histogram

LCH includes information about the color distribution in different regions. It’s the same as GCH, except that we first divide the image into blocks. Each pair of corresponding blocks (one from the first image, the other from the second) is compared separately using GCH, and the total distance between the two images is the sum of all these GCH distances.

[Figure: two images divided into blocks, with corresponding blocks compared pairwise]

Feature extraction algorithm:

  1. Split the image into m blocks.
  2. Compute the GCH of each block, as shown in the figure.

Matching function:
To compare two images a and b, all we need to do is sum up the GCH distances over all corresponding pairs of blocks.

D:  sum of Euclidean distances.

[Figure: the same three images A, B and C compared block by block]

Using LCH the distances are now more reasonable: D(A,B) = 1.768, D(A,C) = 0.707, D(B,C) = 1.768.
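The whole LCH comparison can be sketched as follows (self-contained, with a small inlined histogram helper; the function name `lch_distance` and the `grid`/`bins` parameters are my assumptions):

```python
import numpy as np

def lch_distance(img_a, img_b, grid=2, bins=4):
    """Local Color Histogram distance: split both RGB images into a
    grid x grid layout of blocks, compute a color histogram per block,
    and sum the per-block Euclidean distances."""
    def block_hist(block):
        disc = (block.astype(int) * bins) // 256
        idx = (disc[..., 0] * bins + disc[..., 1]) * bins + disc[..., 2]
        h = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
        return h / h.sum()

    h, w = img_a.shape[:2]
    bh, bw = h // grid, w // grid
    total = 0.0
    for i in range(grid):
        for j in range(grid):
            a = img_a[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            b = img_b[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            ha, hb = block_hist(a), block_hist(b)
            total += float(np.sqrt(((ha - hb) ** 2).sum()))
    return total
```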

So sometimes LCH is more efficient than GCH. But when an image is rotated, we may get a very different result.

Look at this example:

[Figure: two identical images]

In this example, the distance between the two images using LCH is 0.

[Figure: the same image and a rotated copy of it]

Here the distance between the two images using LCH is 4, although they are the same image, just rotated. This sensitivity to rotation is the main disadvantage of the Local Color Histogram.

Image Retrieval

The aim of this post is to talk about:

  1. Image retrieval and its classifications.
  2. Image Descriptors.
  3. Color moment descriptor.

Short overview:

Image retrieval is an old research topic in computer science. It’s about how to retrieve (or search) for image(s) from a database of images by extracting some distinctive features for each image.

Image retrieval is used in image processing and computer vision. One of the most famous applications of this topic is Search by image made by Google.

Image Retrieval is classified into:

  • Tag-based image retrieval.
  • Content-based image retrieval.

Tag-based image retrieval:
Searching for images relying on metadata and tags associated with them. This approach depends on human intervention to provide a description of the image content.
Content-based image retrieval (CBIR):
Searching for images relying on their actual content, based on similarity instead of textual descriptions (see this list of CBIR systems).

A CBIR system is divided into:

  • Data insertion, responsible for extracting features and information from images.
  • Query processing, responsible for retrieving images matching a specified query.

Image Descriptors:

An image descriptor consists of a feature extraction algorithm and a matching function (a similarity measure used to compare images, such as the Euclidean distance; the larger the distance value, the less similar the images are).

Feature extraction is mapping the image pixels into the feature space (Data Insertion).
Matching function compares a given image with database images (Query processing).

Image Descriptors are classified into:

  • Color descriptors.
  • Shape descriptors.
  • Texture descriptors.

For now, we will talk about color descriptors.

Color descriptors are divided into two groups:

  • Contains information about color spatial distribution.
  • Doesn’t contain information about color spatial distribution.

Color spatial distribution means that the color descriptor is taking into account information about colors’ position in the image.
For example, look at these images. Both contain the same amounts of each color, although they have a different appearance.

[Figure: two images with the same color amounts but different appearance]

In the next posts the difference between these groups will be more clarified.

The first color descriptor we will talk about is Color Moments

Color Moments

This method depends on some statistical moments like mean, variance and skewness.

Feature extraction algorithm:

  1. Separate the 3 color channels of the image (R, G, B).
  2. Compute the mean, variance and skewness of each channel.

The combination of these moments is a good descriptor to differentiate between images’ color distribution.

Matching function:
To compare 2 images a, b.

Assuming that:
r : the number of color channels (in our case 3: red, green and blue).
Ei : the mean of channel i.
Vi : the variance of channel i.
Si : the skewness of channel i.
D : the similarity distance.

D(a, b) = Σi=1..r ( W1·|Ei(a) − Ei(b)| + W2·|Vi(a) − Vi(b)| + W3·|Si(a) − Si(b)| )

Where W1, W2 and W3 are user specified weights.
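Both the moment extraction and the weighted matching function can be sketched in a few lines of NumPy. The function names `color_moments` and `moments_distance` and the default weights of 1.0 are my assumptions:

```python
import numpy as np

def color_moments(image):
    """Mean, variance and skewness of each channel of an RGB image,
    returned as a list of (E, V, S) tuples, one per channel."""
    moments = []
    for ch in range(3):
        x = image[..., ch].astype(float).ravel()
        mean = x.mean()
        var = ((x - mean) ** 2).mean()
        std = np.sqrt(var)
        # Skewness of a constant channel is defined as 0 here.
        skew = 0.0 if std == 0 else (((x - mean) / std) ** 3).mean()
        moments.append((mean, var, skew))
    return moments

def moments_distance(m_a, m_b, w=(1.0, 1.0, 1.0)):
    """Weighted sum over channels of the absolute moment differences
    (w = (W1, W2, W3) are the user-specified weights)."""
    return sum(w[0] * abs(ea - eb) + w[1] * abs(va - vb) + w[2] * abs(sa - sb)
               for (ea, va, sa), (eb, vb, sb) in zip(m_a, m_b))
```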

In the next posts we will talk about more efficient color descriptors. Stay tuned.