Paper: PDF (Open Access)
Authors: Trung Thanh Nguyen, Yasutomo Kawanishi, Vijay John, Takahiro Komamizu, Ichiro Ide
The article has been accepted for publication in ACM Transactions on Multimedia Computing, Communications, and Applications (ACM TOMM).
This repository contains the implementation of MMASL on the MM-Office dataset.
For more details, please contact nguyent[at]cs.is.i.nagoya-u.ac.jp.
The Python code was developed and tested in the environment specified in environment.yml.
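The environment can be created with Conda, for example (the environment name "mmasl" below is an assumption; the actual name is defined inside environment.yml):

```shell
# Create the Conda environment from the provided specification
conda env create -f environment.yml

# Activate it (replace "mmasl" with the name declared in environment.yml if it differs)
conda activate mmasl
```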
Experiments on the MM-Office dataset were conducted on a single NVIDIA RTX A6000 GPU with 48 GB of GPU memory.
You can reduce the batch_size to accommodate GPUs with less memory.
Download the MM-Office dataset here and place it in the dataset/MM-Office directory.
To train the model, execute the following command:
bash ./scripts/train_MM_ViT_Transformer.sh
To perform inference, use the following command:
bash ./scripts/infer_MM_ViT_Transformer.sh
This work was partly supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grants JP21H03519 and JP24H00733. The computation was carried out as a General Project on the supercomputer "Flow" at the Information Technology Center, Nagoya University.
If you find this code useful for your research, please cite the following paper:
@article{nguyen2025MMASL,
title={Action Selection Learning for Weakly Labeled Multi-view and Multi-modal Action Recognition},
author={Nguyen, Trung Thanh and Kawanishi, Yasutomo and John, Vijay and Komamizu, Takahiro and Ide, Ichiro},
journal={ACM Transactions on Multimedia Computing, Communications, and Applications},
year={2025}
}
@inproceedings{nguyen2024MultiASL,
title={Action Selection Learning for Multilabel Multiview Action Recognition},
author={Nguyen, Trung Thanh and Kawanishi, Yasutomo and Komamizu, Takahiro and Ide, Ichiro},
booktitle={ACM Multimedia Asia 2024},
pages={1--7},
year={2024},
}