Hi, I am a third-year Ph.D. student at the University of New South Wales (UNSW Sydney), supervised by Dr. Dong Gong, Dr. Alan Blair, and Prof. Lina Yao. Before that, I received my MPhil degree from the Australian Institute for Machine Learning (AIML) at the University of Adelaide, supervised by A/Prof. Qi Wu and Dr. Yuankai Qi. I obtained my Bachelor's degree from Beijing Jiaotong University, where I was advised by Prof. Runmin Cong.

My research currently focuses on Large Vision-Language Models and continual learning.

News

  • 2026.02    DyMoE is accepted to CVPR 2026. Thanks to all collaborators.
  • 2025.06    SAME is accepted to ICCV 2025. Congratulations to Gengze! Thanks to all collaborators.
  • 2025.06    New preprint is out: Little By Little: Continual Learning via Incremental Mixture of Rank-1 Associative Memory Experts.
  • 2024.12    New preprint is out: Adaptive Rank, Reduced Forgetting: Knowledge Retention in Continual Learning Vision-Language Models with Dynamic Rank-selective LoRA.
  • 2024.12    New preprint is out: Learning Mamba as a Continual Learner: Meta-learning Selective State Space Models for Efficient Continual Learning.
  • 2023.12    WebVLN is accepted to AAAI 2024. Thanks to all collaborators.
  • 2023.07    MG-VLN is accepted to ACM MM 2023. Thanks to all collaborators.

Research

On Token's Dilemma: Dynamic MoE with Drift-Aware Token Assignment for Continual Learning of Large Vision Language Models
Chongyang Zhao, Mingsong Li, Haodong Lu, Dong Gong
CVPR 2026
[Demo Page / Paper / Code]

Mind the Gap: Improving Success Rate of Vision-and-Language Navigation by Revisiting Oracle Success Routes
Chongyang Zhao, Yuankai Qi, Qi Wu
ACM MM 2023

WebVLN: Vision-and-Language Navigation on Websites
Qi Chen*, Dileepa Pitawela*, Chongyang Zhao*, Gengze Zhou, Hsiang-Ting Chen, Qi Wu
AAAI 2024

SAME: Learning Generic Language-Guided Visual Navigation with State-Adaptive Mixture of Experts
Gengze Zhou, Yicong Hong, Zun Wang, Chongyang Zhao, Mohit Bansal, Qi Wu
ICCV 2025

Learning Mamba as a Continual Learner: Meta-learning Selective State Space Models for Efficient Continual Learning
Chongyang Zhao, Dong Gong
Preprint

Adaptive Rank, Reduced Forgetting: Knowledge Retention in Continual Learning Vision-Language Models with Dynamic Rank-selective LoRA
Haodong Lu, Chongyang Zhao, Dong Gong
Preprint

Little By Little: Continual Learning via Incremental Mixture of Rank-1 Associative Memory Experts
Chongyang Zhao, Jason Xue, Lina Yao, Kristen Moore, Dong Gong
Preprint