@adujardin
Created June 2, 2022 11:56
Unity-Technologies/PeopleSansPeople for keypoint training

Notes about using Unity-Technologies/PeopleSansPeople in COCO format for keypoint detection

Data Generation

Setup

The data cannot be downloaded like a reference dataset; it needs to be generated. On Linux, here is the gist of the commands used (more details: https://github.com/Unity-Technologies/PeopleSansPeople/tree/main/peoplesanspeople_binaries#running-the-linux-binary):

git clone https://github.com/Unity-Technologies/PeopleSansPeople.git
cd PeopleSansPeople/peoplesanspeople_binaries
wget https://storage.googleapis.com/peoplesanspeople-gha-binaries/StandaloneLinux64_39ff5eb9ab4ce79440a3f743ebeb4f7b3c967024.zip
unzip StandaloneLinux64_39ff5eb9ab4ce79440a3f743ebeb4f7b3c967024.zip
# Install the nvidia drivers and vulkan stuff
sudo apt install vulkan-utils

Generation

bash run.sh -t Linux -d build/StandaloneLinux64 -f scenarioConfiguration.json -l build/StandaloneLinux64/log.txt

By default it creates a dataset of 100 images; this number can be changed in scenarioConfiguration.json with the "totalIterations" field. The output goes into ~/.config/unity3d/DefaultCompany/HDRP\ RenderPeople\ 2020.1.17f1/<UUID>
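
If you want to script that change rather than edit the file by hand, a small sketch follows. The "totalIterations" key name comes from the config above; the exact nesting inside scenarioConfiguration.json may differ between Perception versions, so this searches the JSON tree recursively rather than assuming a fixed path:

```python
import json

def set_total_iterations(config_path: str, n: int) -> None:
    """Set every occurrence of the 'totalIterations' key in the scenario
    config to n, wherever it sits in the JSON tree."""
    with open(config_path) as f:
        cfg = json.load(f)

    def walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "totalIterations":
                    node[key] = n
                else:
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(cfg)
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)
```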

100 images take around 50 MB of disk space, and the output is written to the home partition, so be careful! (Generating the 500K-image dataset takes about 250 GB.)
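
The disk-space extrapolation can be sanity-checked in a couple of lines. The 50 MB per 100 images figure is the one measured above; real usage will vary with resolution and the scenario config:

```python
# Back-of-envelope disk usage, extrapolating linearly from ~50 MB / 100 images.
MB_PER_100_IMAGES = 50

def estimated_size_gb(num_images: int) -> float:
    """Rough output size in GB for a dataset of num_images frames."""
    return num_images / 100 * MB_PER_100_IMAGES / 1024
```

For 500,000 images this gives roughly 244 GB, consistent with the ~250 GB quoted above.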

Viz

The data can be viewed using a notebook (https://github.com/Unity-Technologies/com.unity.perception/blob/main/com.unity.perception/Documentation~/Tutorial/DatasetInsights.md):

# Note: use $HOME rather than ~, since tilde is not expanded inside double quotes
docker run -p 8888:8888 -v "$HOME/.config/unity3d/DefaultCompany/HDRP RenderPeople 2020.1.17f1/<UUID>":/data -t unitytechnologies/datasetinsights:latest

Go to http://localhost:8888, then datasetinsights/notebooks, then open Perception_Statistics.ipynb

Annotation

The output annotation format is NOT COCO-compatible; it uses the "Unity Perception format": https://github.com/Unity-Technologies/com.unity.perception/blob/main/com.unity.perception/Documentation~/Schema/Synthetic_Dataset_Schema.md
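
For reference, the main structural difference on the keypoint side is that the Perception schema stores each keypoint as a dict, while COCO uses one flat [x1, y1, v1, x2, y2, v2, ...] list per person. A minimal sketch of the mapping, assuming the Perception field names index/x/y/state, and that state follows the same 0/1/2 visibility convention as COCO (0 = absent, 1 = labeled but not visible, 2 = visible); worth double-checking against your generated data:

```python
def perception_keypoints_to_coco(keypoints):
    """Flatten a list of Perception-style keypoint dicts
    ({"index": i, "x": ..., "y": ..., "state": s}) into the COCO
    [x1, y1, v1, x2, y2, v2, ...] layout, ordered by keypoint index."""
    flat = []
    for kp in sorted(keypoints, key=lambda k: k["index"]):
        flat.extend([kp["x"], kp["y"], kp["state"]])
    return flat
```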

A conversion script to COCO exists:

Conversion tool setup

The Python package needs to be set up from source, since the PyPI version is too old and does not include the conversion code.

If needed, install Poetry (https://python-poetry.org/docs/):

curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -

Then the conversion tool:

git clone https://github.com/Unity-Technologies/datasetinsights.git
cd datasetinsights
# Make sure you have Python > 3.7
poetry install

Conversion

https://github.com/Unity-Technologies/datasetinsights#convert-datasets

datasetinsights convert -i <input-directory> -o <output-directory> -f COCO-Instances

or

datasetinsights convert -i <input-directory> -o <output-directory> -f COCO-Keypoints
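
Once a conversion run finishes, a cheap sanity check on the output is to verify the result is a single JSON object with the standard COCO top-level keys. This looks_like_coco helper is a hypothetical sketch, not part of datasetinsights:

```python
import json

REQUIRED_KEYS = {"images", "annotations", "categories"}

def looks_like_coco(path: str) -> bool:
    """Cheap sanity check on converter output: a COCO annotation file is
    one JSON object with at least images/annotations/categories keys."""
    with open(path) as f:
        data = json.load(f)
    return isinstance(data, dict) and REQUIRED_KEYS <= data.keys()
```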

This should work in principle, but at this point I hit an error:

datasetinsights.datasets.unity_perception.exceptions.DefinitionIDError: Can't find annotations records associate with the given definition id {'id': '1ccebeb4-5886-41ff-8fe0-f911fa8cbcdf', 'name': 'instance segmentation', 'description': 'pixel-wise instance segmentation label', 'format': 'PNG', 'spec': [{'label_id': 1, 'label_name': 'person'}]}.
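
A DefinitionIDError means the converter is looking for an annotation definition id that is not present in the generated data. One way to debug it is to list the ids that do exist in the dataset's annotation_definitions.json (that file name and the "annotation_definitions" key are from the Perception schema linked above; verify the path in your own output directory) and compare them with the id in the error message:

```python
import json

def list_annotation_definitions(path: str):
    """Return (id, name) pairs from a Perception annotation_definitions.json,
    so they can be compared with the definition id the converter expects."""
    with open(path) as f:
        payload = json.load(f)
    return [(d["id"], d.get("name", ""))
            for d in payload.get("annotation_definitions", [])]
```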