FALCON: Fairness Learning via Contrastive Attention Approach to Continual Semantic Scene Understanding

Thanh-Dat Truong1, Utsav Prabhu2, Bhiksha Raj3,4, Jackson Cothren5, Khoa Luu1

1CVIU Lab, University of Arkansas     2Google DeepMind     3CMU
4MBZUAI     5GEOS Department, University of Arkansas

🎉 Accepted to CVPR 2025 🎉

Highlights

  • Contrastive Clustering Paradigm for Continual Learning. Introduce a novel contrastive clustering approach to addressing catastrophic forgetting in continual semantic segmentation.
  • Fairness Contrastive Clustering Loss. Develop a fairness-driven contrastive clustering loss that mitigates bias toward major classes, improving model fairness on imbalanced data (a minimal sketch follows this list).
  • Attention-based Visual Grammar. Propose a new attention-based framework to model feature distributions and topological structures, effectively handling the background shift problem and unknown classes.
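
To make the loss concrete, below is a minimal PyTorch sketch of a fairness-weighted contrastive clustering loss: pixel embeddings are contrasted against class prototypes, and each pixel's term is re-weighted inversely to its class frequency so that minor classes contribute on par with major ones. The function name, the inverse-frequency weighting, and the temperature are illustrative assumptions, not the exact FALCON formulation; please refer to the paper for the precise loss.

      import torch
      import torch.nn.functional as F

      def fair_contrastive_clustering_loss(features, labels, prototypes, tau=0.1):
          # Illustrative sketch, not the exact FALCON loss.
          # features:   (N, D) L2-normalized pixel embeddings
          # labels:     (N,)   ground-truth class index per pixel
          # prototypes: (C, D) L2-normalized class prototypes (cluster centers)
          logits = features @ prototypes.t() / tau              # (N, C) similarities
          per_pixel = F.cross_entropy(logits, labels, reduction="none")

          # Inverse-frequency weights: rare (minor) classes receive larger
          # weights, balancing their contribution against frequent (major) ones.
          counts = torch.bincount(labels, minlength=prototypes.size(0)).float()
          present = (counts > 0).sum()
          weights = counts.sum() / (counts.clamp(min=1.0) * present)
          return (weights[labels] * per_pixel).mean()

Under this weighting, each class that appears in the batch contributes equally to the average, so the gradient signal no longer concentrates on dominant classes, which is the fairness effect the loss targets.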

Abstract

Continual learning in semantic scene segmentation aims to continually learn new unseen classes in dynamic environments while retaining previously learned knowledge. Prior studies have focused on modeling the catastrophic forgetting and background shift challenges in continual learning. However, fairness, another major challenge that causes unfair predictions and a performance gap between major and minor classes, has yet to be well addressed. In addition, prior methods do not model unknown classes well, and thus produce non-discriminative features for them. This paper presents a novel Fairness Learning via Contrastive Attention approach to continual learning in semantic scene understanding. In particular, we first introduce a new Fairness Contrastive Clustering loss to address the problems of catastrophic forgetting and fairness. Then, we propose an attention-based visual grammar approach to effectively model the background shift problem and unknown classes, producing better feature representations for different unknown classes. In our experiments, the proposed approach achieves State-of-the-Art (SOTA) performance on standard continual learning benchmarks, i.e., ADE20K, Cityscapes, and Pascal VOC, while promoting the fairness of the continual semantic segmentation model.
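
As one way to picture the attention-based visual grammar, the sketch below lets each (possibly unknown-class) pixel feature attend over a bank of learned prototypes and treats the attention distribution as a soft cluster assignment, yielding distinct representations for different unknown classes. The module, its parameter names, and the single-head attention design are illustrative assumptions rather than the exact FALCON architecture.

      import torch
      import torch.nn as nn

      class VisualGrammarAttention(nn.Module):
          # Illustrative sketch: attention over learned prototypes as a soft
          # clustering of (unknown-class) features; not the exact FALCON module.
          def __init__(self, dim, num_prototypes):
              super().__init__()
              self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
              self.query = nn.Linear(dim, dim)
              self.key = nn.Linear(dim, dim)

          def forward(self, feats):                  # feats: (N, D)
              q = self.query(feats)                  # (N, D) queries from features
              k = self.key(self.prototypes)          # (P, D) keys from prototypes
              attn = (q @ k.t() / q.size(-1) ** 0.5).softmax(dim=-1)  # (N, P)
              return attn, attn @ self.prototypes    # soft assignments, soft centers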


Experimental Results

ADE20K Benchmarks

Pascal VOC Benchmarks

Cityscapes Benchmarks

Qualitative Results

Our Related Work


[1] Thanh-Dat Truong, Chi Nhan Duong, Ngan Le, Son Lam Phung, Chase Rainwater, and Khoa Luu. "BiMaL: Bijective Maximum Likelihood Approach to Domain Adaptation in Semantic Scene Segmentation." In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 8548-8557. 2021.

[2] Thanh-Dat Truong, Ngan Le, Bhiksha Raj, Jackson Cothren, and Khoa Luu. "FREDOM: Fairness Domain Adaptation Approach to Semantic Scene Understanding." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 19988-19997. 2023.

[3] Thanh-Dat Truong, Hoang-Quan Nguyen, Bhiksha Raj, and Khoa Luu. "Fairness Continual Learning Approach to Semantic Scene Understanding in Open-world Environments." In Advances in Neural Information Processing Systems (NeurIPS), pp. 65456-65467. 2023.

[4] Thanh-Dat Truong, Pierce Helton, Ahmed Moustafa, Jackson David Cothren, and Khoa Luu. "CONDA: Continual Unsupervised Domain Adaptation Learning in Visual Perception for Self-driving Cars." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 5642-5650. 2024.

[5] Thanh-Dat Truong, Utsav Prabhu, Bhiksha Raj, Jackson Cothren, and Khoa Luu. "FALCON: Fairness Learning via Contrastive Attention Approach to Continual Semantic Scene Understanding." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2025.

BibTex


      @inproceedings{truong2025falcon,
        title={FALCON: Fairness Learning via Contrastive Attention Approach to Continual Semantic Scene Understanding},
        author={Truong, Thanh-Dat and Prabhu, Utsav and Raj, Bhiksha and Cothren, Jackson and Luu, Khoa},
        booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        year={2025}
      }