
About this demo


This demonstration follows the model of CBIR with user feedback shown in Figure 1.

 

Figure 1. The model of CBIR with user feedback

1. Image database

    In this demo, the image database contains 1,102 general color images collected from the MIT VisTex collection and Corel Stock Photos. Their content can be roughly categorized into race cars, war planes, flowers, European buildings, natural scenes, fireworks, and so on. The size of each class varies from 10 to 100 images.

2. Visual features

    A perceptually uniform color space, CIE-Lab, is used to represent general color images. Based on this space, two feature vectors are defined: (1) a color-moments feature vector, consisting of the mean, variance, and skewness of the pixel values of an image along the L, a, and b axes, respectively; its dimension is 3x3 = 9. (2) A Gabor-based texture feature vector: using a bank of Gabor filters (6 orientations and 4 scales), the first- and second-order moments of the Gabor-filtered image values along the L, a, and b axes are extracted; this vector has a dimension of 6x4x2x3 = 144.
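    The following sketch illustrates how such features could be computed with a generic Python stack (numpy, scipy, scikit-image). The library choices, the frequency spacing of the Gabor bank, and the file name are illustrative assumptions, not the demo's actual implementation.

# A minimal sketch of the two feature vectors described above; names and
# parameter values here are assumptions for illustration only.
import numpy as np
from scipy.stats import skew
from skimage import io, color
from skimage.filters import gabor

def color_moments(lab):
    """Mean, variance, and skewness along the L, a, b axes -> 9-D vector."""
    feats = []
    for c in range(3):
        channel = lab[:, :, c].ravel()
        feats.extend([channel.mean(), channel.var(), skew(channel)])
    return np.asarray(feats)                     # 3 moments x 3 channels = 9

def gabor_texture(lab, n_orient=6, n_scale=4):
    """First- and second-order moments of Gabor responses per channel -> 144-D."""
    feats = []
    for c in range(3):
        channel = lab[:, :, c]
        for s in range(n_scale):
            freq = 0.1 * (2 ** s)                # illustrative scale spacing
            for o in range(n_orient):
                theta = o * np.pi / n_orient
                real, imag = gabor(channel, frequency=freq, theta=theta)
                mag = np.hypot(real, imag)
                feats.extend([mag.mean(), mag.std()])   # 1st and 2nd moments
    return np.asarray(feats)                     # 6 x 4 x 2 x 3 = 144

rgb = io.imread("example.jpg")                   # hypothetical image path
lab = color.rgb2lab(rgb)                         # perceptually uniform CIE-Lab space
feature = np.concatenate([color_moments(lab), gabor_texture(lab)])   # 153-D in total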

3. Retrieval methods

    There are five retrieval methods in total: (1) Euclidean search with query-mean; (2) Support Vector Machines (SVMs); (3) SVM + Prior knowledge; (4) SVM + Active learning; (5) SVM + Active learning + Prior knowledge. One of them is selected before launching a retrieval.
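    As an illustration, method (1) can be sketched as follows: the feature vectors of the query images are averaged into a single query-mean vector, and the database images are ranked by their Euclidean distance to it. The array names and sizes below are assumptions for demonstration only.

# A minimal sketch of method (1), Euclidean search with query-mean.
import numpy as np

def query_mean_search(db_features, query_features, top_k=20):
    """Rank database images by Euclidean distance to the mean query vector."""
    query_mean = np.mean(query_features, axis=0)           # combine the query images
    dists = np.linalg.norm(db_features - query_mean, axis=1)
    return np.argsort(dists)[:top_k]                       # indices of the closest images

# Hypothetical usage: 1,102 images x 153-D features (9 color + 144 texture)
db_features = np.random.rand(1102, 153)
query_features = db_features[[3, 17]]                      # user-selected query images
top_images = query_mean_search(db_features, query_features)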

4. The means of evaluating the displayed images

    In CBIR with relevance feedback, the displayed images are evaluated subjectively by the user. In this demo, the commonly used bi-level evaluation is adopted for its simplicity and convenience: the user labels all or some of the displayed images as "relevant" or "irrelevant" by ticking one of the three boxes under each image, as shown in the figure below. Pressing the "Submit" button then submits the evaluations and returns the refined retrieval result within seconds.

Figure 2. The three buttons for user evaluation
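    For concreteness, one feedback round of the SVM-based methods could look like the sketch below: the "relevant"/"irrelevant" labels train a binary classifier, and its decision values re-rank the whole database. The use of scikit-learn and the parameter choices are assumptions; the demo's actual refinement procedure may differ.

# A minimal sketch of one SVM-based feedback round (methods 2-5 build on this idea).
import numpy as np
from sklearn.svm import SVC

def feedback_round(db_features, labeled_idx, labels, top_k=20):
    """labels: +1 for "relevant", -1 for "irrelevant" on the displayed images."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(db_features[labeled_idx], labels)
    scores = clf.decision_function(db_features)            # higher = more relevant
    return np.argsort(-scores)[:top_k]                     # refined display set

# Hypothetical usage with the 153-D features sketched earlier
db_features = np.random.rand(1102, 153)
labeled_idx = np.array([3, 17, 250, 611])                  # images the user evaluated
labels = np.array([1, 1, -1, -1])
refined = feedback_round(db_features, labeled_idx, labels)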