The simultaneous embedding and hashing framework

dangkhoasdc/sah

Introduction

This is sah, a hashing framework that simultaneously learns an embedded image representation and binary hashing codes. It also implements relaxed-ba, an improved version of the Binary Autoencoder. We also provide pre-trained hashing models for the Holidays and Oxford5k datasets.

Citation

The work was presented at CVPR 2017.

If you use this code in your research, please cite:

@InProceedings{Do_2017_CVPR,
    author = {Do, Thanh-Toan and Le Tan, Dang-Khoa and Pham, Trung T. and Cheung, Ngai-Man},
    title = {Simultaneous Feature Aggregating and Hashing for Large-Scale Image Search},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {July},
    year = {2017}
}

Requirement & Data

The code requires MATLAB 2014 or later.

Data

All training data used by the code are features extracted from the conv5 layer of the VGG-16 network. All data should be placed in the data folder, and the models in the workdir folder. We provide data for training and testing on the Holidays and Oxford5k datasets.
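As an illustration, a minimal directory layout might look like the following (the folder names data and workdir are from this README; the example model filename follows the naming format documented below):

```shell
# Create the expected folders next to the Matlab scripts.
mkdir -p data workdir

# data/    <- extracted VGG-16 conv5 features for the datasets
# workdir/ <- pre-trained models, e.g. workdir/sah_oxford5k_c32
```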

The pre-trained models follow this naming format:

sah_{dataset-name}_c{code-length}

For example, sah_oxford5k_c32 is the model trained on Oxford5k with 32-bit codes.

Testing models

In test_retrieval.m, set dataset to holidays or oxford5k to test on Holidays or Oxford5k, respectively. Also set code_length to 8, 16, 24, or 32 to reproduce the results reported in the paper. By default, datasets should be placed in the data folder and the pre-trained sah models in the workdir folder. Finally, run test_retrieval.
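The steps above amount to editing two variables and running the script. As a sketch (the variable names dataset and code_length come from this README; their exact location inside test_retrieval.m is an assumption):

```matlab
% Select the benchmark and code length before running the evaluation.
% These are the two settings this README says to change.
dataset = 'oxford5k';   % or 'holidays'
code_length = 32;       % one of 8, 16, 24, 32

% With data/ and workdir/ populated as described above, run:
test_retrieval
```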

The following table shows the mAP results on the Oxford5k and Holidays datasets for each code length (in bits):

| Dataset  | 8    | 16    | 24    | 32    |
|----------|------|-------|-------|-------|
| Oxford5k | 8.44 | 12.70 | 15.26 | 18.05 |
| Holidays | 7.52 | 21.15 | 33.02 | 39.18 |

Training models

Training is implemented in train.m. As with test_retrieval, set the dataset name and code length via dataset and code_length, respectively. To fine-tune the method, consider the following hyperparameters:

  • lambda
  • beta
  • gamma
  • mu

For more details, please see the paper. After training, the model is saved in the workdir folder using the naming format described above. Note that the model is only saved when it achieves a higher mAP than max_map.
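A hypothetical training configuration sketch follows. The variable names (dataset, code_length, lambda, beta, gamma, mu, max_map) come from this README, but the values shown are placeholders, not the paper's settings, and where these assignments live inside train.m is an assumption:

```matlab
% Example training configuration (a sketch; exact placement of these
% assignments inside train.m may differ).
dataset = 'holidays';   % or 'oxford5k'
code_length = 16;       % one of 8, 16, 24, 32

% Hyperparameters described in the paper (placeholder values).
lambda = 1e-2;
beta   = 1e-2;
gamma  = 1e-3;
mu     = 1e-4;

% A model is saved to workdir/ only when it beats this mAP threshold.
max_map = 0;

train
```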
