A Comparison of Loss Weighting Strategies for Multi-task Learning in Deep Neural Networks
Abstract
With the success of deep learning in a wide variety of areas, many deep multi-task learning (MTL) models have been proposed, claiming performance improvements obtained by sharing learned structure across several related tasks. However, the dynamics of multi-task learning in deep neural networks are still not well understood at either the theoretical or the experimental level. In particular, the usefulness of different task pairs is not known a priori. In practice, this means that properly combining the losses of different tasks becomes a critical issue in multi-task learning, as different methods may yield different results. In this paper, we benchmark different multi-task learning approaches using a shared-trunk architecture with task-specific branches across three MTL datasets. On the first dataset, Multi-MNIST (derived from the Modified National Institute of Standards and Technology database), we thoroughly test several weighting strategies, including simply summing the task-specific cost functions, dynamic weight average (DWA), and uncertainty weighting, each with varying amounts of training data per task. We find that multi-task learning typically does not improve performance for a user-defined combination of tasks. Further experiments across diverse tasks, network architectures, and datasets suggest that multi-task learning requires careful selection of both task pairs and weighting strategies to match or exceed the performance of single-task learning.
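To make the compared strategies concrete, below is a minimal PyTorch sketch of the three loss-combination schemes named above: a plain sum, dynamic weight average (DWA), and homoscedastic uncertainty weighting. The names `UncertaintyWeighting` and `dwa_weights` and the default temperature are illustrative choices, not the paper's code; the formulas follow the commonly cited formulations of Kendall et al. (2018) and Liu et al. (2019).

```python
import torch
import torch.nn as nn


class UncertaintyWeighting(nn.Module):
    """Homoscedastic uncertainty weighting (Kendall et al., 2018).

    Each task loss L_i is scaled by a learned precision exp(-s_i),
    where s_i = log(sigma_i^2); adding s_i itself to the total keeps
    the learned variances from growing without bound.
    """

    def __init__(self, num_tasks):
        super().__init__()
        # One learnable log-variance per task, initialized to 0 (sigma = 1).
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for loss, log_var in zip(task_losses, self.log_vars):
            total = total + torch.exp(-log_var) * loss + log_var
        return total


def dwa_weights(losses_prev, losses_prev2, temperature=2.0):
    """Dynamic Weight Average (Liu et al., 2019): weight each task by the
    ratio of its average loss over the two preceding epochs, softened by
    a temperature-scaled softmax and rescaled to sum to the task count."""
    ratios = torch.tensor(losses_prev) / torch.tensor(losses_prev2)
    k = len(losses_prev)
    return k * torch.softmax(ratios / temperature, dim=0)


# The simplest baseline just sums the per-task losses with equal weights:
#   total = sum(task_losses)
```

In training, the log-variances of `UncertaintyWeighting` would be added to the optimizer's parameter list alongside the network weights, while DWA weights are recomputed once per epoch from the average task losses of the two previous epochs.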