Image Retrieval

The aim of this post is to talk about:

  1. Image retrieval and its classifications.
  2. Image Descriptors.
  3. The color moments descriptor.

Short overview:

Image retrieval is a long-standing research topic in computer science. It deals with retrieving (searching for) images from a database of images by extracting distinctive features from each image.

Image retrieval is used in image processing and computer vision. One of the most famous applications of this topic is Google's Search by Image.

Image Retrieval is classified into:

  • Tag-based image retrieval.
  • Content-based image retrieval.

Tag-based image retrieval (TBIR):
Searching for images based on the metadata and tags associated with them. This approach depends on human intervention to provide a description of each image's content.
Content-based image retrieval (CBIR):
Searching for images based on their actual visual content, comparing images by similarity rather than by textual description (List of CBIR systems).

A CBIR system is divided into two stages:

  • Data insertion, which is responsible for extracting features and information from the images.
  • Query processing, which is responsible for retrieving images that match a given query.

Image Descriptors:

An image descriptor consists of a feature extraction algorithm and a matching function. (The matching function is a similarity measure used to compare images, for example the Euclidean distance: the larger the distance value, the less similar the images are.)

Feature extraction maps the image pixels into the feature space (data insertion).
The matching function compares a given query image with the database images (query processing).
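As a rough sketch of this idea (the class and method names below are purely illustrative, not taken from any particular library), a descriptor pairs an extraction step with a distance function:

```python
import numpy as np

class ImageDescriptor:
    """Illustrative interface: a feature extractor plus a matching function."""

    def describe(self, image):
        """Map image pixels into the feature space (data insertion)."""
        raise NotImplementedError

    def distance(self, features_a, features_b):
        """Compare two feature vectors; a larger value means less similar images."""
        # Euclidean distance as an example matching function (query processing)
        return float(np.linalg.norm(np.asarray(features_a) - np.asarray(features_b)))
```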

Image Descriptors are classified into:

  • Color descriptors.
  • Shape descriptors.
  • Texture descriptors.

In this post we will talk about color descriptors.

Color descriptors are divided into two groups:

  • Descriptors that contain information about the spatial distribution of colors.
  • Descriptors that don't contain information about the spatial distribution of colors.

Color spatial distribution means that the descriptor takes into account the positions of the colors within the image.
For example, look at the two images below: both have a similar color distribution (the same amounts of each color), although they look different.
[Figure: two images with the same color distribution but different appearance]

The difference between these two groups will become clearer in the next posts.

The first color descriptor we will talk about is Color Moments.

Color Moments

This method relies on statistical moments of the color distribution: the mean, variance, and skewness.

Feature extraction algorithm:

  1. Separate the image into its 3 color channels (R, G, B).
  2. Compute the mean, variance, and skewness of each color channel.

The combination of these moments is a good descriptor for differentiating between the color distributions of images; a minimal sketch of the extraction step is shown below.
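A rough sketch of the extraction step, assuming an H x W x 3 RGB array as input (the `color_moments` name and the use of SciPy are my own choices here, not part of the original method):

```python
import numpy as np
from scipy.stats import skew

def color_moments(image):
    """Return the mean, variance, and skewness of each RGB channel."""
    features = []
    for channel in range(3):                          # R, G, B
        values = image[:, :, channel].astype(np.float64).ravel()
        features.extend([values.mean(),               # E_i: mean
                         values.var(),                # V_i: variance
                         skew(values)])               # S_i: skewness
    return np.array(features)                         # 9 values: 3 moments x 3 channels
```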

Matching function:
To compare two images a and b, assume that:

r: number of color channels (in our case 3: red, green, and blue).
Ei: mean of channel i in a given image.
Vi: variance of channel i in a given image.
Si: skewness of channel i in a given image.
D: similarity distance.

D(a, b) = Σ_{i=1}^{r} ( W1 · |Ei(a) − Ei(b)| + W2 · |Vi(a) − Vi(b)| + W3 · |Si(a) − Si(b)| )

where W1, W2, and W3 are user-specified weights.
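A minimal sketch of this matching function, assuming the 9-value feature vectors produced by the `color_moments` sketch above and equal weights by default:

```python
import numpy as np

def color_moments_distance(features_a, features_b, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of absolute differences between two color-moment vectors."""
    a = np.asarray(features_a).reshape(-1, 3)   # one row per channel: (E_i, V_i, S_i)
    b = np.asarray(features_b).reshape(-1, 3)
    w = np.asarray(weights)                     # (W1, W2, W3)
    return float(np.sum(w * np.abs(a - b)))     # larger D means less similar images
```

The two images whose feature vectors give the smallest D are considered the most similar.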

In the next posts we will talk about a more efficient color descriptor. Stay tuned.
