Nearest neighbor (NN) search plays a fundamental role in machine learning and related areas, such as information retrieval and data mining. Consequently, there has been increasing interest in NN search over massive (large-scale) datasets in this big data era. In many real applications, it is not necessary for an algorithm to return the exact nearest neighbors for every possible query. Hence, in recent years approximate nearest neighbor (ANN) search algorithms, which offer improved speed and memory savings, have received more and more attention from researchers.
Due to its low storage cost and fast query speed, hashing has been widely adopted for ANN search over large-scale datasets. The essential idea of hashing is to map data points from the original feature space into binary codes in the hashcode space while preserving the similarities between pairs of data points. The binary-code representation has two advantages over the original feature-vector representation. Firstly, each dimension of a binary code can be stored using only one bit, whereas several bytes are typically required for one dimension of the original feature vector, leading to a dramatic reduction in storage cost. Secondly, with the binary-code representation, all data points within a specified Hamming distance of a given query can be retrieved in constant or sub-linear time, regardless of the total size of the dataset. Hence, hashing has become one of the most effective methods for big data retrieval and mining.
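The binary-code pipeline above can be sketched with a simple random-hyperplane (SimHash-style) scheme; the dataset size, code length, and Hamming radius below are illustrative assumptions, not part of the tutorial's methods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy database: 1000 points in a 128-dimensional feature space.
data = rng.standard_normal((1000, 128))

# One random hyperplane per bit; the sign of each projection gives one bit,
# so nearby points tend to receive similar codes.
n_bits = 32
hyperplanes = rng.standard_normal((128, n_bits))

def hash_points(x):
    """Map real-valued vectors to {0, 1} codes via the sign of each projection."""
    return (x @ hyperplanes >= 0).astype(np.uint8)

codes = hash_points(data)  # each point now costs 32 bits instead of 128 floats

# Retrieval: Hamming distance to a query code (in practice an XOR + popcount
# over packed bits; simulated here with an elementwise comparison).
query = rng.standard_normal((1, 128))
query_code = hash_points(query)
hamming = (codes != query_code).sum(axis=1)

# All database points within Hamming radius 2 of the query.
neighbors = np.flatnonzero(hamming <= 2)
```

With packed codes and bit operations, the Hamming scan (or a hash-table lookup over code buckets) stays cheap even for very large databases, which is the storage and speed advantage described above.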
To obtain effective hash codes, most methods adopt machine learning techniques for hash function learning. Hence, learning to hash, which tries to design effective machine learning methods for hashing, has recently become a very hot research topic with wide applications in many big data areas. This tutorial will provide a systematic introduction to learning to hash, including the motivation, models, learning algorithms, and applications. Firstly, we will introduce the challenges of performing retrieval and mining on big data, which motivate the adoption of hashing. Secondly, we will give a comprehensive coverage of the foundations and recent developments in learning to hash, including unsupervised hashing, supervised hashing, multimodal hashing, etc. Thirdly, quantization methods, which are used in many hashing methods to convert real values into binary codes, will be presented. Fourthly, a large variety of applications of hashing will also be introduced, including image retrieval, cross-modal retrieval, recommender systems, and so on.
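As a minimal illustration of the quantization step mentioned above, the sketch below thresholds hypothetical real-valued projections at their per-dimension means to produce one bit per dimension; this is only the simplest scheme (more refined ones, such as multi-bit quantization, are among the methods the tutorial covers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical real-valued projections produced by some learned hash
# functions (6 points, 4 projected dimensions); values are made-up.
projections = rng.standard_normal((6, 4)) + 0.5

# Single-bit quantization: threshold each dimension at its mean, a common
# heuristic so that each bit splits the data roughly in half.
thresholds = projections.mean(axis=0)
binary_codes = (projections > thresholds).astype(np.uint8)
```

Thresholding at the mean rather than at zero guards against projections with a nonzero offset collapsing into an all-ones (or all-zeros) bit, which would carry no information.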
Peichao Zhang, Wei Zhang, Wu-Jun Li, Minyi Guo. Supervised Hashing with Latent Factor Models. To appear in Proceedings of the 37th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2014.
Dongqing Zhang, Wu-Jun Li. Large-Scale Supervised Multimodal Hashing with Semantic Correlation Maximization. To appear in Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI), 2014.
Ling Yan, Wu-Jun Li, Gui-Rong Xue, Dingyi Han. Coupled Group Lasso for Web-Scale CTR Prediction in Display Advertising. Proceedings of the 31st International Conference on Machine Learning (ICML), 2014.
Weihao Kong, Wu-Jun Li. Isotropic Hashing. Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS), 2012.
Weihao Kong, Wu-Jun Li, Minyi Guo. Manhattan Hashing for Large-Scale Image Retrieval. Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2012.
Weihao Kong, Wu-Jun Li. Double-Bit Quantization for Hashing. Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI), 2012.
Dr. Wu-Jun Li is currently an associate professor in the Department of Computer Science and Technology at Nanjing University, P. R. China. From 2010 to 2013, he was a faculty member in the Department of Computer Science and Engineering at Shanghai Jiao Tong University, P. R. China. He received his PhD degree from the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology in 2010. Before that, he received his M.Eng. and B.Sc. degrees from the Department of Computer Science and Technology, Nanjing University, in 2006 and 2003, respectively. His main research interests include machine learning and pattern recognition, especially statistical relational learning and big data machine learning (big learning). In these areas he has published more than 30 peer-reviewed papers, most in prestigious journals such as TKDE and top conferences such as AAAI, AISTATS, CVPR, ICML, IJCAI, NIPS, and SIGIR. He has served as a PC member of ICML'14, IJCAI'13/'11, NIPS'14, SDM'14,