Multimodal LLMs have advanced vision-language tasks but still struggle with understanding video scenes. To bridge this gap, Video Scene Graph Generation (VidSGG) has emerged to capture multi-object relationships across video frames. However, prior methods rely on pairwise connections, which limits their ability to handle complex multi-object interactions and reasoning. To this end, we propose Multimodal LLMs on a Scene HyperGraph (HyperGLM), an approach that promotes reasoning over multi-way interactions and higher-order relationships. Our approach uniquely integrates entity scene graphs, which capture spatial relationships between objects, with a procedural graph that models their causal transitions, forming a unified HyperGraph. Significantly, HyperGLM enables reasoning by injecting this unified HyperGraph into LLMs. Additionally, we introduce the new Video Scene Graph Reasoning (VSGR) dataset, which features 1.9M frames from third-person, egocentric, and drone views and supports five tasks: Scene Graph Generation, Scene Graph Anticipation, Video Question Answering, Video Captioning, and Relation Reasoning. Empirically, HyperGLM consistently outperforms state-of-the-art methods across all five tasks, effectively modeling and reasoning about complex relationships in diverse video scenes.
| Videos | Frames | Reasoning tasks |
| --- | --- | --- |
| 3.7K | 1.9M | 61.1K |
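To make the HyperGraph construction concrete, below is a minimal Python sketch of how per-frame entity scene graphs (spatial relationship triplets) and a procedural graph (causal transitions between relationship states, modeled as hyperedges) could be combined into one structure and linearized for injection into an LLM prompt. All names here (`SceneHyperGraph`, `Triplet`, `to_prompt`, etc.) are hypothetical illustrations under our own assumptions, not the released HyperGLM code.

```python
from dataclasses import dataclass, field

# Hypothetical data structures illustrating the unified HyperGraph idea;
# the actual HyperGLM implementation may differ.

@dataclass(frozen=True)
class Triplet:
    """A spatial relationship in one frame, e.g. (person, holds, cup)."""
    subject: str
    predicate: str
    obj: str

@dataclass
class SceneHyperGraph:
    # Entity scene graphs: frame index -> spatial triplets in that frame.
    frames: dict[int, list[Triplet]] = field(default_factory=dict)
    # Procedural graph: hyperedges linking relationship states across frames
    # to model causal transitions (a hyperedge may connect more than two nodes).
    transitions: list[tuple[Triplet, ...]] = field(default_factory=list)

    def add_frame(self, t: int, triplets: list[Triplet]) -> None:
        self.frames[t] = triplets

    def add_transition(self, *states: Triplet) -> None:
        """Connect an ordered set of relationship states as one hyperedge."""
        self.transitions.append(states)

    def to_prompt(self) -> str:
        """Linearize the HyperGraph into text an LLM can consume."""
        lines = []
        for t in sorted(self.frames):
            for tr in self.frames[t]:
                lines.append(f"frame {t}: ({tr.subject}, {tr.predicate}, {tr.obj})")
        for edge in self.transitions:
            chain = " -> ".join(f"({s.subject}, {s.predicate}, {s.obj})" for s in edge)
            lines.append(f"transition: {chain}")
        return "\n".join(lines)

# Usage: two frames in which "holds" causally transitions to "drinks from".
g = SceneHyperGraph()
g.add_frame(0, [Triplet("person", "holds", "cup")])
g.add_frame(1, [Triplet("person", "drinks from", "cup")])
g.add_transition(Triplet("person", "holds", "cup"),
                 Triplet("person", "drinks from", "cup"))
print(g.to_prompt())
```

The design choice worth noting is that transitions are stored as hyperedges over relationship states rather than pairwise edges over objects, which is what lets a single edge express a multi-way interaction.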
```bibtex
@inproceedings{nguyen2024hig,
  title={HIG: Hierarchical Interlacement Graph Approach to Scene Graph Generation in Video Understanding},
  author={Nguyen, Trong-Thuan and Nguyen, Pha and Luu, Khoa},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={18384--18394},
  year={2024}
}

@inproceedings{nguyen2024cyclo,
  title={CYCLO: Cyclic Graph Transformer Approach to Multi-Object Relationship Modeling in Aerial Videos},
  author={Nguyen, Trong-Thuan and Nguyen, Pha and Li, Xin and Cothren, Jackson and Yilmaz, Alper and Luu, Khoa},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024}
}
```