Integrating vertex and edge features with Graph Convolutional Networks for skeleton-based action recognition
Methods based on Graph Convolutional Networks (GCNs) for skeleton-based action recognition have achieved great success thanks to their ability to exploit the graph structure of skeleton data. Recently, bone information has attracted considerable attention as an effective modality that complements the more conventional joint information for action recognition. However, most existing GCN-based methods extract joint and bone features with two separate GCN streams, ignoring the dependencies between them. In this paper, a novel GCN model is proposed that collaboratively exploits the information from joints, bones, and their relationship on a single undirected graph rather than in two separate networks. We call the proposed model the Vertex-Edge Graph Convolutional Network (VE-GCN) because it performs graph convolution over sampling areas that contain both the designated vertices (joints) and edges (bones). In addition to applying the Vertex-Edge graph convolution to the physical connections of the skeleton, we further apply it to non-physical joint-joint and joint-bone connections to capture dependencies between distal, physically unconnected parts, and the convolution results on these non-physical connections are then incorporated into the VE-GCN. Moreover, a Conditional Random Field (CRF) is adopted as the loss function for the action recognition task. Experimental results on four challenging benchmarks (NTU RGB+D, NTU RGB+D 120, N-UCLA, and SYSU) show that the proposed model achieves state-of-the-art performance.
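To make the core idea of a vertex-edge graph convolution more concrete, the following minimal PyTorch sketch shows one way joint (vertex) features and bone (edge) features derived from the same skeleton could be fused in a single graph-convolution layer. This is an illustrative assumption, not the authors' implementation: the layer structure, the bone-pair list, the scattering of bone features onto child joints, and the toy adjacency are all hypothetical choices.

```python
# Minimal sketch (assumed, not the paper's code): one graph convolution that
# mixes joint (vertex) and bone (edge) features on a single skeleton graph.
import torch
import torch.nn as nn

class VertexEdgeGraphConv(nn.Module):
    def __init__(self, in_channels, out_channels, adjacency, bone_pairs):
        super().__init__()
        # adjacency: (V, V) normalized joint-joint adjacency (physical connections)
        self.register_buffer("A", adjacency)
        # bone_pairs: list of (child, parent) joint indices defining each bone
        self.bone_pairs = bone_pairs
        self.theta_v = nn.Linear(in_channels, out_channels)  # vertex (joint) branch
        self.theta_e = nn.Linear(in_channels, out_channels)  # edge (bone) branch

    def forward(self, x):
        # x: (N, V, C) joint coordinates/features for one frame
        # Bone (edge) features as differences between connected joints.
        bones = torch.stack([x[:, c] - x[:, p] for c, p in self.bone_pairs], dim=1)
        # Scatter each bone feature onto its child joint so both modalities
        # live on the same vertex set and share one adjacency matrix.
        edge_on_vertex = torch.zeros_like(x)
        for e, (c, _) in enumerate(self.bone_pairs):
            edge_on_vertex[:, c] = bones[:, e]
        # One graph convolution over the shared graph, fusing both branches.
        fused = self.theta_v(x) + self.theta_e(edge_on_vertex)
        out = torch.einsum("uv,nvc->nuc", self.A, fused)
        return torch.relu(out)

# Toy usage with an assumed 3-joint chain 0-1-2 and row-normalized adjacency.
A = torch.tensor([[0.5, 0.5, 0.0],
                  [1/3, 1/3, 1/3],
                  [0.0, 0.5, 0.5]])
layer = VertexEdgeGraphConv(3, 8, A, bone_pairs=[(1, 0), (2, 1)])
out = layer(torch.randn(4, 3, 3))  # (batch, joints, xyz) -> (4, 3, 8)
```

In this sketch the fusion is a simple sum of the two linear branches before aggregation; the paper's actual sampling areas over vertices and edges, the non-physical connections, and the CRF loss are not modeled here.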