Today we released the code and models for our AAAI 2018 paper “Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition”.
This work is one of my favorite papers in recent years. It provides an elegant way to deal with temporally dynamic graphs such as skeleton sequences. Without any feature or architecture engineering, it already outperforms a number of the best-performing models on skeleton-based action recognition. It also extends the formulation of graph neural networks to spatial-temporal scenarios. Graph neural networks have been a hot topic in recent years, and we are glad to see a successful application in computer vision. Below is an excerpt from the meta-review (the AC’s decision message) of our AAAI submission:
“After reading the paper, reviews and the rebuttal, I feel the paper has enough novelty and will spur more research along the lines proposed in the paper. Dealing with temporal data using deep networks is a challenging task!”
We implemented the code in PyTorch. All models were trained with the open-sourced framework and are released to the public. I hope to see it applied to many related tasks.
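To give a flavor of the core idea, here is a minimal NumPy sketch of a spatial-temporal graph convolution step: features are first mixed across channels and aggregated over neighboring skeleton joints via a normalized adjacency matrix, then smoothed along the time axis. The function name, shapes, and the moving-average stand-in for the temporal convolution are illustrative assumptions, not the released implementation.

```python
import numpy as np

def st_graph_conv(x, A, W, t_kernel=3):
    """Simplified spatial-temporal graph convolution (illustrative sketch).

    x: (C, T, V)   input features -- channels, frames, joints
    A: (V, V)      normalized adjacency matrix of the skeleton graph
    W: (C_out, C)  pointwise weight for the spatial step

    Spatial step: mix channels with W, then aggregate each joint's
    features from its graph neighbors via A. Temporal step: average
    over a sliding window of t_kernel frames, a stand-in for the
    learnable 1-D temporal convolution used in the paper.
    """
    # spatial: pointwise channel mixing, then aggregation over joints
    y = np.einsum("oc,ctv->otv", W, x)   # (C_out, T, V)
    y = np.einsum("otv,vw->otw", y, A)   # mix features of neighboring joints

    # temporal: moving average over frames with same-length output
    pad = t_kernel // 2
    yp = np.pad(y, ((0, 0), (pad, pad), (0, 0)), mode="edge")
    T = y.shape[1]
    out = np.stack([yp[:, t:t + T, :] for t in range(t_kernel)]).mean(axis=0)
    return out
```

Stacking such blocks, with learned temporal filters and a partitioned adjacency, yields a network that reasons jointly over body structure and motion without hand-crafted features.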
Finally, I would like to congratulate Sijie Yan on his excellent work on this paper. It is my privilege to have worked with bright and hardworking young scholars like Sijie and many others at MMLAB@CUHK.
The code is on GitHub: https://github.com/yysijie/st-gcn
Paper (arXiv preprint): https://arxiv.org/abs/1801.07455