Abstract: In this paper we present a new method for object retrieval starting from multiple query images. The use of multiple queries allows for a more expressive formulation of the query object, including, e.g., different viewpoints and/or viewing conditions. This, in turn, leads to more diverse and more accurate retrieval results. When no query images are available to the user, they can easily be retrieved from the internet using a standard image search engine. In particular, we propose a new method based on pattern mining. Using the minimal description length principle, we derive the most suitable set of patterns to describe the query object, with patterns corresponding to local feature configurations. This results in a powerful object-specific mid-level image representation. The archive can then be searched efficiently for similar images based on this representation, using a combination of two inverted file systems. Since the patterns already encode local spatial information, good results on several standard image retrieval datasets are obtained even without costly re-ranking based on geometric verification.
Please contact the KU Leuven VISICS research group regarding ownership of the data. You are not allowed to redistribute any image collected by the KU Leuven VISICS research group that is listed in the ground truth (gt) folder of this collection. You may use this data for academic research purposes only. The distractor images are owned by the respective parties under the terms of the MIRFLICKR-1M dataset.
The “gt” folder contains the query images and their corresponding relevant “good” and “ok” images. You can use the protocol and evaluation software of the Oxford Buildings (oxbuildings) benchmark for the evaluations, as sketched below. For any questions, please contact: basura.fernando at esat.kuleuven.be
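For reference, a ranked retrieval list can be scored with an average-precision computation in the style of the oxbuildings evaluation software. The Python sketch below is only an illustration under that assumption: the file names are hypothetical, and the optional “junk” set mirrors the oxbuildings protocol rather than anything shipped in this release.

# average_precision: score one query's ranked list against the gt files.
def load_list(path):
    # One image name per line, as in the oxbuildings gt text files.
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def average_precision(ranked, positives, junk=frozenset()):
    # "good" + "ok" images count as positives; "junk" images (if used)
    # are skipped entirely, following the Oxford Buildings protocol.
    if not positives:
        return 0.0
    old_recall, old_precision, ap = 0.0, 1.0, 0.0
    hits, rank = 0, 0
    for name in ranked:
        if name in junk:
            continue
        if name in positives:
            hits += 1
        rank += 1
        recall = hits / len(positives)
        precision = hits / rank
        # Trapezoidal area under the precision-recall curve.
        ap += (recall - old_recall) * (old_precision + precision) / 2.0
        old_recall, old_precision = recall, precision
    return ap

# Example usage (hypothetical file names):
# positives = set(load_list("gt/query1_good.txt")) | set(load_list("gt/query1_ok.txt"))
# ranked = load_list("results/query1_ranked_list.txt")
# print(average_precision(ranked, positives))

Mean average precision (mAP) is then the mean of these per-query AP values over all queries.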
@InProceedings{Fernando_2013_ICCV,
  author    = {Basura Fernando and Tinne Tuytelaars},
  title     = {Mining Multiple Queries for Image Retrieval: On-the-Fly Learning of an Object-Specific Mid-level Representation},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {December},
  year      = {2013}
}
The authors acknowledge the support of the EC FP7 project AXES and iMinds Impact project Beeldcanon.