Robust Depth-based Person Re-identification

Ancong Wu, Wei-Shi Zheng, Jianhuang Lai

Sun Yat-sen University, China

Introduction

Person re-identification (re-id) aims to match people across non-overlapping camera views. Most existing works rely on RGB-based appearance. However, when people appear under extreme illumination or change their clothes, RGB appearance-based re-id methods tend to fail. To overcome this problem, we propose to exploit depth information, which provides body shape and skeleton cues that are invariant to illumination and color changes. More specifically, we exploit the depth voxel covariance descriptor and further propose a locally rotation-invariant depth shape descriptor, called the Eigen-depth feature, to describe pedestrian body shape. We prove that the distance between any two covariance matrices on the Riemannian manifold is equivalent to the Euclidean distance between the corresponding Eigen-depth features. Furthermore, we propose a kernelized implicit feature transfer scheme to estimate the Eigen-depth feature from an RGB image when depth information is not available. We find that combining the estimated depth features with RGB-based appearance features can help reduce the visual ambiguity of appearance features caused by illumination changes and similar clothes. We validated the effectiveness of our models on publicly available depth-based pedestrian datasets, in comparison with related person re-identification methods.
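To make the covariance-to-Euclidean equivalence concrete, the sketch below maps a symmetric positive-definite covariance matrix to a Euclidean vector via its eigen-decomposition, so that the L2 distance between two such vectors equals the Log-Euclidean Riemannian distance between the matrices. This is a minimal illustration of the underlying geometry in Python, not the paper's exact Eigen-depth construction, which may differ in detail; the function names are ours.

import numpy as np

def spd_log_embedding(C, eps=1e-6):
    """Map an SPD covariance matrix to a Euclidean vector.

    Under the Log-Euclidean metric, the Riemannian distance between two
    SPD matrices equals the Frobenius distance between their matrix
    logarithms, so vectorizing log(C) yields a Euclidean embedding.
    """
    d = C.shape[0]
    C = C + eps * np.eye(d)           # regularize for numerical stability
    w, U = np.linalg.eigh(C)          # eigen-decomposition: C = U diag(w) U^T
    L = U @ np.diag(np.log(w)) @ U.T  # matrix logarithm via eigenvalues
    # Vectorize: diagonal entries once, off-diagonals weighted by sqrt(2),
    # so that ||vec(A) - vec(B)||_2 equals ||logm(A) - logm(B)||_F.
    iu = np.triu_indices(d, k=1)
    return np.concatenate([np.diag(L), np.sqrt(2.0) * L[iu]])

def log_euclidean_distance(C1, C2):
    """Log-Euclidean Riemannian distance between two covariance matrices."""
    return np.linalg.norm(spd_log_embedding(C1) - spd_log_embedding(C2))

Because the embedding is Euclidean, standard vector-space machinery (nearest-neighbor matching, linear metric learning) can be applied directly to the embedded depth descriptors.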


Figure 1. Feature extraction pipeline of depth-based person re-identification.
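The pipeline in Figure 1 builds covariance descriptors over a voxelized body point cloud recovered from the depth map. Below is a minimal sketch of one plausible voxelization-and-covariance step, assuming per-point features such as coordinates or surface normals; the voxel size, feature choice, and minimum-point threshold are illustrative assumptions, not the paper's exact settings.

import numpy as np

def voxel_covariance_descriptors(points, feats, voxel_size=0.1, min_pts=10):
    """Compute one covariance descriptor per occupied voxel.

    points: (N, 3) body-surface points recovered from the depth map
    feats:  (N, d) per-point features (e.g., coordinates, surface normals)
    Returns a dict mapping voxel grid index -> (d, d) covariance matrix.
    """
    keys = np.floor(points / voxel_size).astype(int)
    descriptors = {}
    for key in np.unique(keys, axis=0):
        mask = np.all(keys == key, axis=1)
        if mask.sum() < min_pts:      # skip sparsely populated voxels
            continue
        F = feats[mask]
        descriptors[tuple(key)] = np.cov(F, rowvar=False)
    return descriptors

Each voxel's covariance matrix can then be mapped to its Eigen-depth vector (as in the embedding sketch above) and the vectors concatenated into the final body-shape descriptor.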


Figure 2. The kernelized implicit feature transfer scheme for estimating depth features from RGB images.
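The scheme in Figure 2 estimates Eigen-depth features from RGB features when no depth sensor is available at test time. As a simplified stand-in for the paper's kernelized implicit formulation, the sketch below uses kernel ridge regression with an RBF kernel to regress depth features from RGB features; the kernel choice, regularizer, and all names here are illustrative assumptions.

import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian RBF kernel between two sets of RGB feature vectors."""
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-gamma * sq)

def fit_kernel_transfer(K_train, Y_depth, lam=1e-2):
    """Fit a kernelized regressor from RGB features to depth features.

    K_train: (n, n) kernel matrix between training RGB samples
    Y_depth: (n, m) Eigen-depth features of the same training samples
    Returns dual coefficients A; predictions are K_test @ A.
    """
    n = K_train.shape[0]
    return np.linalg.solve(K_train + lam * np.eye(n), Y_depth)

# Usage: given RGB features X_train/X_test and depth features Y_train,
# A = fit_kernel_transfer(rbf_kernel(X_train, X_train), Y_train)
# Y_hat = rbf_kernel(X_test, X_train) @ A   # estimated depth features

The estimated depth features can then be concatenated with RGB appearance features for matching, which is where the reduction of visual ambiguity described in the abstract comes from.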

Download

Feature extraction code: depth_reid_feature_extraction.zip

Citation

Please cite the following paper if you use the code in your research: