Bio - Dong Gong

Dr. Dong Gong is a Senior Lecturer and ARC DECRA Fellow (2023-2026) at the School of Computer Science and Engineering (CSE), The University of New South Wales (UNSW). He also holds an adjunct position at the Australian Institute for Machine Learning (AIML) at The University of Adelaide. After obtaining his PhD degree in Dec 2018, Dong worked as a Research Fellow at AIML until Jan 2022. He leads a research group at UNSW. His research aims to develop reliable, efficient, and practical AI for open-ended scenarios. Recently, he has been focusing on machine learning challenges with dynamic requirements in an open-ended, realistic world, including continual learning and test-time adaptation, as well as their applications to LLMs, VLMs, and diffusion generative models. He has been actively serving the research community as an Area Chair or reviewer for conferences such as CVPR, NeurIPS, ICML, ICCV, ACM MM, and WACV, and was recognized as an Outstanding Reviewer for NeurIPS'18 and an Outstanding Area Chair for ACM MM'24.


I am a Senior Lecturer and ARC DECRA Fellow at the School of Computer Science and Engineering, The University of New South Wales (UNSW Sydney), Australia. I also hold an adjunct position with the Australian Institute for Machine Learning (AIML) at The University of Adelaide. My research aims to develop reliable, efficient, and practical AI for open-ended scenarios. After obtaining my PhD degree in Dec 2018 (supervised by Prof. Yanning Zhang, Prof. Anton van den Hengel, and Prof. Javen Qinfeng Shi), I worked at the Australian Institute for Machine Learning (AIML), The University of Adelaide, until joining UNSW as a Lecturer in Jan 2022. At AIML, I was a Research Fellow and a Principal Researcher at the Centre for Augmented Reasoning (CAR), working with Prof. Anton van den Hengel, Prof. Qinfeng (Javen) Shi, and Prof. Chunhua Shen.


Main research topics:

My research aims to develop reliable, practical, and efficient self-improving AI for open-ended, continuously changing environments. Recent main research topics include:

  • Continual learning
    • Forgetting mitigation in training-time learning (regularization, replay, architecture)
    • Agentic & online learning with minimised/controlled forgetting
    • Continual learning for LLMs and MLLMs
  • Foundation model adaptation, post-training, test-time learning
  • Generative models (image & video generation)
  • Deep learning model design
    • Memory models & mechanisms
    • (Dynamic) mixture-of-experts
  • High-level visual perception & low-level vision
    • Semantic and depth prediction
    • Image restoration and enhancement