[Lecture] Embracing Changes in Deep Learning: Continual Learning with Augmented and Modularized Memory

  • School of Artificial Intelligence
  • Date: 2024-07-28

TITLE: Embracing Changes in Deep Learning: Continual Learning with Augmented and Modularized Memory


SPEAKER: Dr. Dong Gong (巩东)


TIME: July 29 (Monday), 10:00 am


VENUE: Tencent Meeting: 762-982-217 (online talk)

 

ABSTRACT:
Deep learning (DL) has been successful in many applications. However, conventional DL approaches focus on end results on fixed datasets and scenarios, and fail to handle novel requirements that arise dynamically in the real world. Continual learning (CL) aims to train deep neural networks (DNNs) to efficiently accumulate knowledge from dynamically arriving data and task streams, as humans do. The main challenge is enabling DNNs to learn on data and task streams with non-stationary distributions without catastrophic forgetting, which requires balancing stability and plasticity. To ensure DNNs effectively retain past knowledge while accommodating future tasks, we explore CL techniques from the viewpoint of augmenting and modularizing the memorization of DNNs. We also delve into performing continual learning with, or for, pre-trained models. Moreover, enabling models to detect changes is a significant challenge in real-world DL applications; in response, part of our research addresses related tasks such as out-of-distribution and anomaly detection.
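For readers unfamiliar with memory-based CL, the sketch below illustrates one common family of approaches the abstract alludes to: experience replay with a fixed-size episodic memory, where stored past examples are interleaved with each new batch to counter catastrophic forgetting. This is a minimal, generic PyTorch illustration; the ReplayBuffer class, the reservoir-sampling policy, and the training loop are assumptions for exposition, not the speaker's method.

import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Fixed-size episodic memory filled by reservoir sampling."""
    def __init__(self, capacity):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x, y):
        # Decide independently for each incoming example whether to store it,
        # so every example seen so far has equal probability of being kept.
        for xi, yi in zip(x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi, yi))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (xi, yi)

    def sample(self, n):
        xs, ys = zip(*random.sample(self.data, min(n, len(self.data))))
        return torch.stack(xs), torch.stack(ys)

def train_on_stream(model, task_loaders, buffer, lr=1e-3, replay_bs=32):
    """Train sequentially on a stream of tasks, replaying stored examples
    alongside each new batch to reduce catastrophic forgetting."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for loader in task_loaders:          # tasks arrive one after another
        for x, y in loader:
            loss = F.cross_entropy(model(x), y)
            if buffer.data:              # revisit stored past examples
                mx, my = buffer.sample(replay_bs)
                loss = loss + F.cross_entropy(model(mx), my)
            opt.zero_grad()
            loss.backward()
            opt.step()
            buffer.add(x.detach(), y.detach())

Here model and task_loaders are assumed to be an ordinary PyTorch classifier and a list of DataLoaders, one per task; a buffer holding a few hundred examples is a common starting point in replay baselines.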

 

BIO:
Dong Gong is a Senior Lecturer and ARC DECRA Fellow (2023-2026) at the School of Computer Science and Engineering (CSE), The University of New South Wales (UNSW). He also holds an adjunct position at the Australian Institute for Machine Learning (AIML), The University of Adelaide. After obtaining his PhD in December 2018, he worked as a Research Fellow at AIML and as a Principal Researcher at the Centre for Augmented Reasoning (CAR) at The University of Adelaide. His research interests are in computer vision and machine learning; recently, he has focused on learning tasks with dynamic requirements and non-ideal supervision.