About Me 🌲

My research interests lie in multimodal learning and continual learning, with a focus on enabling LLMs/LVLMs to continually acquire new knowledge while mitigating forgetting. I am particularly interested in dynamic mixture-of-experts (Dynamic MoE) architectures and parameter-efficient fine-tuning (PEFT) for building scalable and adaptive LVLMs. My work also extends to vision-language pretraining, embodied AI, and multimodal reasoning.

Currently, I am exploring unified multimodal models (UMMs) that integrate visual understanding and generation.

News 🔥

  • May 2026    MoRAM is accepted to ICML 2026. Congratulations to Jeff! Thanks to all collaborators!
    Continual learning via incremental sparse Mixture of Rank-1 Associative Memory experts.
  • Feb 2026    DyMoE is accepted to CVPR 2026. Thanks to all collaborators!
    “On Token’s Dilemma” — Continual-learning Dynamic MoE for LVLMs with token-level filtering.
  • Jun 2025    SAME is accepted to ICCV 2025. Congratulations to Gengze! Thanks to all collaborators!
    State-adaptive MoE for generic language-guided visual navigation.
  • Dec 2024    New preprint! Check CoDyRA on arXiv!
    Dynamic rank-selective LoRA for knowledge retention in continual learning VLMs.
  • Dec 2024    New preprint! Check MambaCL on arXiv!
    Meta-learning selective state space models (Mamba) for efficient continual learning.
  • Dec 2023    WebVLN is accepted to AAAI 2024. Thanks to all collaborators!
    Vision-and-language navigation on shopping websites.
  • Jul 2023    MG-VLN is accepted to ACM MM 2023. Thanks to all collaborators!
    Backtracking to previously passed correct locations via first-person video grounding for navigation.

Research 🌴

CVPR
On Token’s Dilemma: Dynamic MoE with Drift-Aware Token Assignment for Continual Learning of Large Vision Language Models
Chongyang Zhao, Mingsong Li, Haodong Lu, Dong Gong
CVPR 2026
Paper Project Page Code
ICML
Little By Little: Continual Learning via Incremental Mixture of Rank-1 Associative Memory Experts
Haodong Lu, Chongyang Zhao, Jason Xue, Lina Yao, Kristen Moore, Dong Gong
ICML 2026
Paper Project Page Code
Preprint
Learning Mamba as a Continual Learner: Meta-learning Selective State Space Models for Efficient Continual Learning
Chongyang Zhao, Dong Gong
Preprint
Paper
Preprint
Adaptive Rank, Reduced Forgetting: Knowledge Retention in Continual Learning Vision-Language Models with Dynamic Rank-selective LoRA
Haodong Lu, Chongyang Zhao, Jason Xue, Lina Yao, Kristen Moore, Dong Gong
Preprint
Paper Code
ACM MM
Mind the Gap: Improving Success Rate of Vision-and-Language Navigation by Revisiting Oracle Success Routes
Chongyang Zhao, Yuankai Qi, Qi Wu
ACM MM 2023
Paper
AAAI
WebVLN: Vision-and-Language Navigation on Websites
Qi Chen*, Dileepa Pitawela*, Chongyang Zhao*, Gengze Zhou, Hsiang-Ting Chen, Qi Wu
AAAI 2024
Paper Code
ICCV
SAME: Learning Generic Language-Guided Visual Navigation with State-Adaptive Mixture of Experts
Gengze Zhou, Yicong Hong, Zun Wang, Chongyang Zhao, Mohit Bansal, Qi Wu
ICCV 2025
Paper Code

Services

Conference Reviewer

NeurIPS '24, '25 · ICML '24, '25, '26 (Gold Reviewer) · ICLR '25, '26
CVPR '24, '25, '26 · ICCV '25 · ECCV '24, '26
AAAI '25 · ACM MM '24 (Outstanding Reviewer) · ICDM '24

Journal Reviewer

IJCV IEEE-TNNLS IEEE-TCSVT Neurocomputing CAAI-TIT