Our paper on Auto-VirtualNet for multi-task learning has been accepted to Neurocomputing

[2021.02.16]

The following paper has been accepted to Neurocomputing:

  • Auto-VirtualNet: Cost-Adaptive Dynamic Architecture Search for Multi-Task Learning by Eunwoo Kim, Chanho Ahn, and Songhwai Oh
    • Abstract: Multi-task learning (MTL) improves learning efficiency by solving multiple tasks simultaneously rather than training a separate model for each task. However, despite its benefits, several major challenges remain. First, negative interference can reduce learning efficiency when the number of tasks is large or the tasks are only weakly related. Second, manually exploring an optimal model structure is highly restrictive. Last but not least, offering cost-adaptive solutions has not been addressed in the MTL regime. Despite its practical importance, this combined problem has not been well studied. In this work, we propose a novel MTL approach that addresses the combined problem while minimizing memory consumption. The proposed method dynamically discovers multiple network models from a pool of candidate models and produces a set of widely distributed solutions with respect to different computational costs for each task. To ensure the diversity of candidate models, we modularize the given backbone architecture into basic building blocks and then construct a hierarchical structure from those blocks. The proposed method is trained to optimize both the task performance and the computational cost of the selected models. It dynamically generates an optimal network for each task and offers significant performance improvements over existing MTL approaches across a range of experiments.
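
    To give a feel for the idea in the abstract, below is a minimal, hypothetical PyTorch sketch, not the paper's actual implementation: each layer is a pool of candidate building blocks, a per-task selector chooses among them differentiably (here via Gumbel-softmax, one possible relaxation), and the loss trades task error against the expected computational cost of the selected blocks. All names, block costs, and the cost weight are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LayerPool(nn.Module):
        """Candidate building blocks for one layer of the modularized backbone."""
        def __init__(self, in_dim, out_dim, num_blocks=4):
            super().__init__()
            self.blocks = nn.ModuleList(
                nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
                for _ in range(num_blocks)
            )
            # Hypothetical per-block costs (e.g., normalized FLOPs).
            self.register_buffer("costs", torch.linspace(0.25, 1.0, num_blocks))

    class CostAdaptiveMTLSketch(nn.Module):
        def __init__(self, dims, num_tasks, num_blocks=4):
            super().__init__()
            self.layers = nn.ModuleList(
                LayerPool(d_in, d_out, num_blocks)
                for d_in, d_out in zip(dims[:-1], dims[1:])
            )
            # Learnable selection logits: one distribution per task and per layer.
            self.logits = nn.Parameter(
                torch.zeros(num_tasks, len(self.layers), num_blocks)
            )

        def forward(self, x, task_id, tau=1.0):
            expected_cost = x.new_zeros(())
            for layer, logit in zip(self.layers, self.logits[task_id]):
                # Differentiable (soft) block selection via Gumbel-softmax.
                weights = F.gumbel_softmax(logit, tau=tau, hard=False)
                x = sum(w * blk(x) for w, blk in zip(weights, layer.blocks))
                # Accumulate the expected cost of the blocks chosen so far.
                expected_cost = expected_cost + (weights * layer.costs).sum()
            return x, expected_cost

    # Usage: the joint loss trades task performance against computational cost.
    model = CostAdaptiveMTLSketch(dims=[32, 64, 10], num_tasks=3)
    x, y, task_id = torch.randn(8, 32), torch.randint(0, 10, (8,)), 1
    out, cost = model(x, task_id)
    loss = F.cross_entropy(out, y) + 0.1 * cost  # 0.1 is an illustrative weight
    loss.backward()
    ```

    Raising the cost weight pushes the selector toward cheaper blocks, which is one way a single model can yield solutions at different computational budgets per task, in the spirit of the cost-adaptive behavior the abstract describes.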