HuMMan is the largest known multi-modal dataset of human body and motion in the world. It comprises 1000 subjects performing 500 actions designed around human muscle groups, yielding 400K video sequences and 60M frames of data across 8 modalities. Data was collected with a sensor suite combining RGB-D cameras and a mobile phone. HuMMan supports studies on action recognition, human pose and shape estimation, and textured mesh reconstruction.
Key Features:
Multiple Modalities: HuMMan provides a rich set of data and annotation modalities
Mobile Device: a mobile phone is included in the sensor suite to support research on data captured by mobile devices
Action Set: a complete and unambiguous set of 500 actions
Multiple Tasks: HuMMan supports various human sensing and modelling tasks
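From the headline figures above, some rough per-unit averages can be derived; a minimal sketch using only the numbers stated in this description (actual sequence lengths and per-subject counts will vary):

```python
# Headline figures stated in the dataset description.
subjects = 1_000
actions = 500
sequences = 400_000
frames = 60_000_000

# Derived averages (approximate; individual sequences differ in length).
frames_per_sequence = frames / sequences      # average frames per video sequence
sequences_per_subject = sequences / subjects  # average sequences per subject

print(f"~{frames_per_sequence:.0f} frames/sequence, "
      f"~{sequences_per_subject:.0f} sequences/subject")
```

This works out to roughly 150 frames per sequence and 400 sequences per subject on average.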