SignAvatar is a transformer-based framework that reconstructs and generates expressive 3D sign language motions at the word level, enhanced by curriculum learning and supported by our newly introduced ASL3DWord dataset.
🔗 Paper: SignAvatar: Sign Language 3D Motion Reconstruction and Generation
This repository contains the official implementation of the above paper.
SignAvatar/
├── blender_app/ # Blender visualization code
├── checkpoints/ # Model checkpoints - Download separately
├── dataset/
│ ├── data/ # Dataset files - Download separately
│ │ ├── ASL3DWord/
│ │ │ ├── test/
│ │ │ └── train/
│ │ ├── word_projection/ # Word projection mappings
│ │ └── WLASL_v0.3.json
│ ├── dataset.py # Dataset classes and utilities
│ ├── ......
├── evaluate/ # Model evaluation and metrics
├── generate/ # Sequence generation scripts
├── models/ # Model architectures
│ ├── architectures/
│ ├── modeltype/
│ ├── smplx/ # SMPLX body - Download separately
│ │ ├── SMPLX_NEUTRAL.npz
│ │ └── kin_pose53_smplx.pkl
│ ├── ......
......
mkdir -p dataset/data/ASL3DWord
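Once the dataset archive is in place, you can sanity-check the layout against the directory tree above. The helper below is not part of the repository; it is a minimal sketch that only uses the paths shown in the tree:

```python
from pathlib import Path

# Expected dataset layout, taken from the directory tree above
# (paths are relative to the repository root).
EXPECTED_DATA_PATHS = [
    "dataset/data/ASL3DWord/train",
    "dataset/data/ASL3DWord/test",
    "dataset/data/word_projection",
    "dataset/data/WLASL_v0.3.json",
]

def missing_data_paths(repo_root="."):
    """Return the expected dataset paths that are not present yet."""
    root = Path(repo_root)
    return [p for p in EXPECTED_DATA_PATHS if not (root / p).exists()]

# Example: print anything still missing after extracting the dataset.
for p in missing_data_paths():
    print("missing:", p)
```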
To request access to the ASL3DWord dataset, please send your request via email: [email protected]
When sending your request, kindly include the following information:
- Name
- Institution / Organization
- Research Purpose
- Which resource are you requesting (Dataset / Checkpoints / Both)
- Statement of Agreement: "I confirm that this resource will be used for research and educational purposes only."

Required files:
- SMPLX_NEUTRAL.npz
- kin_pose53_smplx.pkl
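After downloading, `SMPLX_NEUTRAL.npz` should sit in `models/smplx/` as shown in the tree above. As a hedged sketch (the helper name and guard logic are ours, not the repository's), the neutral body model can then be loaded with the third-party `smplx` package:

```python
from pathlib import Path

def load_neutral_smplx(model_root="models"):
    """Load the neutral SMPL-X body model if its file is present.

    Assumes SMPLX_NEUTRAL.npz sits in <model_root>/smplx/, as in the
    directory tree above; returns None when the file is missing.
    """
    npz = Path(model_root) / "smplx" / "SMPLX_NEUTRAL.npz"
    if not npz.is_file():
        return None
    import smplx  # third-party package: pip install smplx
    # smplx.create() appends the model_type subfolder to model_root itself,
    # so passing "models" resolves to models/smplx/SMPLX_NEUTRAL.npz.
    return smplx.create(str(model_root), model_type="smplx", gender="neutral")
```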
# Download SMPLX_NEUTRAL from: https://smpl-x.is.tue.mpg.de/
# Download kin_pose53_smplx.pkl from the same link

# 1. Clone the repository
git clone https://github.com/dongludeeplearning/SignAvatar.git
cd SignAvatar
# 2. Create conda environment
conda env create -f environment.yaml
conda activate signavatar
# 3. Download required files (see section above)
# 4. Verify installation
python -c "import torch; print('PyTorch version:', torch.__version__)"

# Train CVAE model
bash run_train_cvae.sh
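As noted in the introduction, training is enhanced by curriculum learning. The exact schedule lives in the training scripts; the snippet below is only a generic illustration of the idea (the function name and warm-up length are invented for this sketch): a difficulty weight that ramps up linearly over early epochs, then holds at full difficulty.

```python
def curriculum_weight(epoch, warmup_epochs=20):
    """Illustrative curriculum schedule (not the repository's actual one):
    ramp a difficulty weight linearly from 0 to 1 over the first
    `warmup_epochs` epochs, then hold it at 1."""
    if warmup_epochs <= 0:
        return 1.0
    return min(1.0, epoch / float(warmup_epochs))

# Early epochs see an easier objective (small weight); later epochs the full task.
print(curriculum_weight(0))   # 0.0
print(curriculum_weight(10))  # 0.5
print(curriculum_weight(50))  # 1.0
```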
# Train STGCN model
bash run_train_stgcn.sh

# Generate pose sequences
bash run_generation.sh
# Generate 3D meshes
python generate/generate_sequences_mesh.py

# Evaluate CVAE model
bash run_evaluate_cvae.sh

If you use this code or dataset in your research, please cite our accompanying paper:
@inproceedings{dong2024signavatar,
title={SignAvatar: Sign Language 3D Motion Reconstruction and Generation},
author={Dong, Lu and Chaudhary, Lipisha and Xu, Fei and Wang, Xiao and Lary, Mason and Nwogu, Ifeoma},
booktitle={2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition (FG)},
pages={1--10},
year={2024},
organization={IEEE}
}

This project is released under the CC BY-NC 4.0 License, for research and educational use only.