Open MIC (Open Museum Identification Challenge) contains photos of exhibits captured in 10 distinct exhibition spaces of several museums, showcasing paintings, timepieces, sculptures, glassware, relics, science exhibits, natural history pieces, ceramics, pottery, tools and indigenous crafts. The goal of Open MIC is to stimulate research in domain adaptation, egocentric recognition and few-shot learning by providing a testbed complementary to the well-known Office 31 dataset, on which accuracies already reach ~90%.

INTRODUCTION

EXHIBITIONS

Open MIC contains 10 distinct source-target subsets of images from 10 different kinds of museum exhibition spaces. They include:

BASELINES

To demonstrate the intrinsic difficulty of the Open MIC dataset, we provide the community with baseline accuracies obtained from:

DOMAIN ADAPTATION

We include the following evaluation protocols for Domain Adaptation (see the ECCV'18 paper cited below for more details):

FEW-SHOT LEARNING

We include the following evaluation protocols for One-shot Learning:

PUBLICATIONS

For more details on the data, protocols, evaluations and algorithms, see the following publication. Please kindly cite this paper when using our dataset:

REQUEST FORM

Our dataset license largely follows fair-use regulations, making the data available for academic, non-commercial use only. The license grants royalty-free, non-exclusive, non-transferable, attribution, 'no derivatives' rights. Please read the license carefully and fill in the requested details below. We will verify your request and send you a password by e-mail. Access to the data expires automatically after 30 days. If you have any questions or concerns, if you do not receive access to the data within 48h of your request, or if you need access immediately, send an e-mail to Open MIC.
  • all fields are required, including a valid e-mail address to which your password will be sent after approval.

DATASET/DOWNLOAD (SMALL SIZE)

Once you have obtained a valid password, you will be able to download our files instantly (enter your e-mail as the login, followed by the password from the e-mail).

First, go through the following 'readme' file for details of what is contained in which folders of our archives:
Below we provide versions of our dataset at 256, 512 and 1024px resolution. You can choose the quality needed for your experiments, but we expect 256 or 512px to be sufficient if you work with CNNs. The following archives contain full images and crops. We used the crops in our ECCV'18 paper as well as for one-shot learning:
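If you work with the 256 or 512px archives, images still need to be brought to a fixed CNN input size. A minimal sketch of the standard shorter-side resize plus center crop, written here with plain NumPy nearest-neighbour indexing for self-containedness (the 224px target and the use of nearest-neighbour interpolation are our assumptions, not part of the dataset's protocols):

```python
import numpy as np

def center_crop_resize(img, size=224):
    """Resize the shorter side of an HxWx3 array to `size` via
    nearest-neighbour index sampling, then take a center crop."""
    h, w = img.shape[:2]
    scale = size / min(h, w)
    nh, nw = round(h * scale), round(w * scale)
    # Map each output pixel back to its nearest source pixel.
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    img = img[rows][:, cols]
    # Center-crop to size x size.
    top, left = (nh - size) // 2, (nw - size) // 2
    return img[top:top + size, left:left + size]
```

In practice one would load the JPEGs with an image library (e.g. Pillow) and use its bilinear resize; the logic above only illustrates the geometry of the preprocessing step.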

DATASET/DOWNLOAD (LARGE CROPS)

Below are high-resolution crops (3 per image) of approximately 2048x2048px. Note that each exhibition archive is large, e.g. 1-3GB per file; to evaluate your algorithm on any of the protocols listed above, you will need to download all 10 of the following files:

DATASET/DOWNLOAD (FULL IMAGES)

Below are full-resolution whole images (over 2048px). Note that each exhibition archive is large, e.g. 1-3GB per file:

ADDITIONAL LABELS

Below are labels with multiple annotations per image in the target data (some of our ECCV'18 experiments use them), as well as lists of source and target background images (labelled as -1). Moreover, we also provide annotations for the geometric and photometric distortions observed in target images (the latter file).
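A label file of this kind can be read with a few lines of Python. The sketch below assumes each line holds an image path followed by one or more integer class labels, with background images carrying the single label -1; the exact file layout is an assumption on our part, so consult the 'readme' above for the real format:

```python
def parse_labels(lines):
    """Split a label list into per-image annotations and background images.

    `lines` is an iterable of strings, each assumed to be
    "<image_path> <label> [<label> ...]"; background images are
    assumed to be marked with the single label -1.
    """
    annotations, backgrounds = {}, []
    for line in lines:
        parts = line.split()
        if not parts:          # skip blank lines
            continue
        path, labels = parts[0], [int(t) for t in parts[1:]]
        if labels == [-1]:
            backgrounds.append(path)
        else:
            annotations[path] = labels
    return annotations, backgrounds
```

For example, parsing the lines `"a.jpg 3 7"` and `"b.jpg -1"` would place `a.jpg` in the multi-label annotations and `b.jpg` in the background list.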