Nanjing University ID Card Face Dataset (NJU-ID)

R&L Group
Nanjing University


Description

The Nanjing University ID Card Face (NJU-ID) Dataset is developed for research on ID card face verification [1][2][3]. The ID card here specifically refers to the second-generation resident identity card of China, which was put into trial use in 2004 and is now widely deployed. A non-contact IC chip is embedded in the card; the chip stores a low resolution photo of the card owner, which can be read with an IC card reader.

This dataset includes face images of 256 persons. For each person, there is one low resolution ID card face image and one high resolution face image captured by a digital camera. The ID card face image has a resolution of 102x126 and the camera image has a resolution of 640x480. The camera images are taken under arbitrary conditions (illumination and background). Moreover, the time interval between the capture of the two matched images is also arbitrary, which makes ID card face verification a challenging task.
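For convenience, the following minimal Python sketch shows one way to iterate over the matched image pairs. The directory layout and file naming used here (a card/ folder and a camera/ folder, one image per person with matching file names) are assumptions for illustration only and may not match the actual organization of the released files.

    # Sketch of loading matched ID card / camera image pairs.
    # NOTE: the folder names "card"/"camera" and the one-file-per-person naming
    # are assumptions for illustration; adapt them to the actual dataset layout.
    import os
    import cv2  # OpenCV, used here only for image I/O

    def load_pairs(root):
        """Yield (person_id, card_image, camera_image) tuples."""
        card_dir = os.path.join(root, "card")      # assumed: 102x126 ID card photos
        camera_dir = os.path.join(root, "camera")  # assumed: 640x480 camera photos
        for fname in sorted(os.listdir(card_dir)):
            person_id = os.path.splitext(fname)[0]
            card_img = cv2.imread(os.path.join(card_dir, fname))
            cam_img = cv2.imread(os.path.join(camera_dir, fname))
            if card_img is not None and cam_img is not None:
                yield person_id, card_img, cam_img

    # Example usage: count the genuine (matched) pairs; 256 persons are expected.
    if __name__ == "__main__":
        pairs = list(load_pairs("NJU-ID"))
        print(f"Loaded {len(pairs)} matched card/camera pairs")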


Examples of Aligned Face Images

The first and third rows show the low resolution aligned ID card face images, and the second and fourth rows show the corresponding aligned high resolution face images captured by a digital camera. To align the face images, Stasm is used to locate the facial landmarks [4].
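As a rough illustration of such an alignment step, the Python sketch below rotates and scales a face image so that the two eye centers land at fixed reference positions. The eye coordinates are assumed to be provided by a landmark detector such as Stasm; the output size and reference positions are illustrative choices, not the exact alignment template used for this dataset.

    # Similarity-transform face alignment from two eye landmarks.
    # The eye coordinates are assumed to come from a landmark detector
    # (e.g., Stasm); reference positions and output size are illustrative.
    import cv2
    import numpy as np

    def align_face(image, left_eye, right_eye, out_size=(128, 128)):
        dx = right_eye[0] - left_eye[0]
        dy = right_eye[1] - left_eye[1]
        # Rotation angle that makes the eye line horizontal.
        angle = np.degrees(np.arctan2(dy, dx))
        eyes_center = ((left_eye[0] + right_eye[0]) / 2.0,
                       (left_eye[1] + right_eye[1]) / 2.0)
        # Scale so the inter-eye distance becomes a fixed fraction of the width
        # (0.3 is an arbitrary illustrative choice).
        scale = (0.3 * out_size[0]) / np.hypot(dx, dy)
        M = cv2.getRotationMatrix2D(eyes_center, angle, scale)
        # Shift so the eye midpoint lands at a fixed position in the output.
        M[0, 2] += out_size[0] * 0.5 - eyes_center[0]
        M[1, 2] += out_size[1] * 0.4 - eyes_center[1]
        return cv2.warpAffine(image, M, out_size)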


Contents

The NJU-ID Dataset includes the following contents:

  1. Low resolution ID card face images (102x126) of 256 persons.
  2. High resolution camera face images (640x480) of the same 256 persons.


Download Instructions

To download the dataset:


Copyright Note

The dataset is released for research and educational purposes only. We hold no liability for any undesirable consequences of using the dataset. All rights to the NJU-ID Dataset are reserved. No person or organization is permitted to distribute, publish, copy, or disseminate this dataset.


Contact

Please contact Jing Huo with any questions, problems, bug reports, or suggestions.


References

  1. Jing Huo, Yang Gao, Yinghuan Shi, Wanqi Yang, Hujun Yin. Heterogeneous Face Recognition by Margin Based Cross-Modality Metric Learning. IEEE Transactions on Cybernetics, 2018, 48(6): 1814-1826.
  2. Jing Huo, Yang Gao, Yinghuan Shi, Hujun Yin. Cross-Modal Metric Learning for AUC Optimization. IEEE Transactions on Neural Networks and Learning Systems, 2017, doi: 10.1109/TNNLS.2017.2769128.
  3. Jing Huo, Yang Gao, Yinghuan Shi, Wanqi Yang, Hujun Yin. Ensemble of Sparse Cross-Modal Metrics for Heterogeneous Face Recognition. ACM Conference on Multimedia, 2016: 1405-1414.
  4. S. Milborrow, F. Nicolls. Active Shape Models with SIFT Descriptors and MARS. VISAPP, 2014, 1(2): 5.