Machine learning models have traditionally been trained under the assumption that test data are drawn from the same distribution as the training data, i.e., that the data are independent and identically distributed (IID) [1]. In practice, this assumption often fails: the environment in which a model operates is more likely to change over time than to remain fixed. The model will therefore encounter data from diverse domains, and its performance can degrade when a domain differs substantially from the training distribution.
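This failure mode can be made concrete with a small illustration (a hedged sketch for intuition, not taken from the paper): a simple nearest-centroid classifier trained on one distribution loses accuracy when the test data undergo a covariate shift.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes as Gaussian clusters in 2D. `shift` translates the data,
# simulating a new domain with a shifted input distribution.
def sample(n, shift=0.0):
    x0 = rng.normal(loc=[0 + shift, 0], scale=0.5, size=(n, 2))
    x1 = rng.normal(loc=[2 + shift, 2], scale=0.5, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Nearest-centroid classifier: predict the class whose training mean is closest.
def fit_centroids(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

X_train, y_train = sample(500)
centroids = fit_centroids(X_train, y_train)

X_iid, y_iid = sample(500)                  # same distribution as training
X_shift, y_shift = sample(500, shift=2.0)   # covariate-shifted domain

acc_iid = (predict(centroids, X_iid) == y_iid).mean()
acc_shift = (predict(centroids, X_shift) == y_shift).mean()
```

On the IID test set the classifier is nearly perfect, while on the shifted domain a large fraction of one class falls on the wrong side of the learned decision boundary.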

To address this, two primary approaches have been developed: domain generalization and domain adaptation. Domain generalization [2,3] aims to train models that generalize to unseen domains, but collecting a large volume of labeled data from diverse domains for a specific task is often impractical. Domain adaptation [4,5], in contrast, optimizes performance on the current target domain, yet it can degrade on subsequent target domains because it lacks mechanisms for handling unseen ones.

In this research, Prof. Kim’s team (Wonguk Cho and Jinha Park) tackled a challenging problem that simulates real-world scenarios where models continually face domain shifts and no labeled data are available for these new domains. This challenge is referred to as unsupervised continual domain shift learning. In such a setting, the model must continuously adapt to new domains (domain adaptation) while preserving its generalization capability for forthcoming, unseen domains (domain generalization) without supervision.
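As a rough illustration of this evaluation setting (the function and its arguments below are hypothetical, not the team's actual code), a model in unsupervised continual domain shift learning is adapted to each unlabeled domain in sequence, and is scored both on the domain it just adapted to and on the domains it has not yet seen:

```python
# Hypothetical sketch of the unsupervised continual domain shift learning
# protocol. `adapt` uses only unlabeled data from the current domain;
# `evaluate` measures task performance on a given domain.
def continual_domain_shift_protocol(model, domains, adapt, evaluate):
    """domains: ordered sequence of target domains arriving over time."""
    results = []
    for t, domain in enumerate(domains):
        adapt(model, domain)                       # unsupervised adaptation
        adaptation_score = evaluate(model, domain)  # performance on current domain
        unseen = domains[t + 1:]                    # domains still to come
        generalization_scores = [evaluate(model, d) for d in unseen]
        results.append((adaptation_score, generalization_scores))
    return results
```

The tension the paper targets is visible in this loop: adapting aggressively to the current domain can erode the generalization scores on the unseen domains, and vice versa.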

To address this problem, the team introduced the CoDAG framework, which combines domain adaptation and domain generalization in a complementary manner: each strengthens the other, improving performance on both fronts. Notably, this research is among the first to probe possible synergies between domain adaptation and domain generalization, two methodologies that had previously been studied largely in isolation. Beyond its novelty, this shift in perspective has practical implications for computer vision and other areas of machine learning.

  1. Trevor Hastie, Robert Tibshirani, and Jerome H. Friedman. The elements of statistical learning: data mining, inference, and prediction, volume 2. Springer, 2009.
  2. Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip S. Yu. Generalizing to unseen domains: A survey on domain generalization. IEEE Transactions on Knowledge and Data Engineering, 2022.
  3. Fengchun Qiao, Long Zhao, and Xi Peng. Learning to learn single domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12556–12565, 2020.
  4. Mei Wang and Weihong Deng. Deep visual domain adaptation: A survey. Neurocomputing, 312:135–153, 2018.
  5. Vishal M. Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53–69, 2015.