- The category name, bbox, segmentation, and track_id fields are compatible with the TAO dataset format (see the loading sketch after this list).
- Training set: [Train annotations]
- Validation set: [Validation annotations]
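As a rough illustration, the sketch below shows how annotations in this TAO-compatible layout might be loaded and grouped by `track_id`. The file name `train_annotations.json` is a placeholder for the actual download, and the field names (`categories`, `annotations`, `bbox`, `category_id`, `track_id`) are assumptions based on the COCO/TAO convention rather than guarantees about this release.

```python
import json

# Minimal sketch, assuming a TAO-style, COCO-like JSON layout.
# "train_annotations.json" stands in for the downloaded annotation file.
with open("train_annotations.json") as f:
    data = json.load(f)

# Map category ids to human-readable names (COCO/TAO convention).
categories = {c["id"]: c["name"] for c in data["categories"]}

# Group annotations by track_id so each object's trajectory can be inspected.
tracks = {}
for ann in data["annotations"]:
    tracks.setdefault(ann["track_id"], []).append(ann)

# Print a quick summary of the first few tracks.
for track_id, anns in list(tracks.items())[:3]:
    first = anns[0]
    print(
        f"track {track_id}: category={categories[first['category_id']]}, "
        f"{len(anns)} boxes, first bbox={first['bbox']}"  # bbox as [x, y, w, h]
    )
```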
Visual interactivity understanding within visual scenes presents a significant challenge in computer vision. Existing methods focus on complex interactivities while relying on a simple relationship model. These methods, however, struggle with the diversity of appearances, situations, positions, interactions, and relations in videos, which hinders their ability to fully comprehend the interplay within the complex visual dynamics of subjects. In this paper, we delve into interactivity understanding within visual content by deriving scene graph representations from dense interactivities among humans and objects. To achieve this goal, we first present a new dataset named ASPIRe, containing Appearance-Situation-Position-Interaction-Relation predicates and offering an extensive collection of videos marked by a wide range of interactivities. We then propose a new approach named Hierarchical Interlacement Graph (HIG), which leverages a unified layer and graph within a hierarchical structure to provide deep insights into scene changes across five distinct tasks. Extensive experiments conducted in various scenarios demonstrate that our approach outperforms existing methods.
We introduce the new ASPIRe dataset for Visual Interactivity Understanding. Its diversity is showcased through a wide range of scenes and settings, distributed across seven scenarios.
| Videos | Annotations | Interactivities |
|---|---|---|
| 1,488 | 167,751 | 4,549 |