This page provides the datasets for the paper Learning to Train with Synthetic Humans. Our datasets provide the following ground-truth modalities:
- 2D multi-person pose annotations
- camera blur parameters
- the camera matrix
- depth maps
- gender tags
- normal maps
- object ID maps
- SMPL+H pose coefficients
- 3D joint locations
- a heuristic occlusion label for each joint
- a scale parameter
- body part segmentation maps
- SMPL+H shape parameters
- a global translation for each synthetic human
- a z-rotation for each synthetic human
A more detailed description of these ground-truth modalities can be found here.
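As a rough illustration of how per-image annotations of this kind might be organized and queried, the sketch below builds a small in-memory record. All field names, array shapes, and the joint count are hypothetical placeholders for this example and do not necessarily match the dataset's actual file layout or keys.

```python
import numpy as np

# Hypothetical per-image annotation record. Field names and shapes are
# illustrative only, not the dataset's actual keys or layout.
J = 24  # placeholder joint count for this sketch
annotation = {
    "joints_2d": np.zeros((2, J, 2)),            # 2 people x J joints x (x, y)
    "occluded": np.zeros((2, J), dtype=bool),    # heuristic occlusion label per joint
    "gender": ["female", "male"],                # gender tag per synthetic human
    "z_rotation": np.array([0.3, -1.2]),         # per-person rotation about the z-axis
}

def visible_joints(ann, person):
    """Return the 2D locations of the joints not flagged as occluded."""
    mask = ~ann["occluded"][person]
    return ann["joints_2d"][person][mask]

# All joints are unoccluded in this toy record, so all J are returned.
print(visible_joints(annotation, 0).shape)  # (24, 2)
```

Boolean masking like this is a common way to drop heuristically occluded joints before computing 2D pose metrics, but the actual dataset files may encode occlusion differently.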
The paper contributes three datasets: a purely synthetic multi-person 2D pose dataset composed of synthetic humans rendered in front of random backgrounds; a version of the MPII Human Pose dataset augmented with synthetic humans (the mixed dataset); and a stylized version of the mixed dataset. To download these datasets, please register on this website. After logging in, you will find the links in the download section.
26 March 2020
- The augmented version of the MPII multi-person pose dataset (mixed dataset) has been added.
- The stylized version of the mixed dataset is now available for download.