One deep music representation to rule them all? A comparative analysis of different representation learning strategies

Jaehun Kim*, Julián Urbano, Cynthia C.S. Liem, Alan Hanjalic

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Inspired by the success of deploying deep learning in the fields of Computer Vision and Natural Language Processing, this learning paradigm has also found its way into the field of Music Information Retrieval. In order to benefit from deep learning in an effective, but also efficient manner, deep transfer learning has become a common approach. In this approach, the output of a pre-trained neural network is reused as the basis for a new learning task. The underlying hypothesis is that if the initial and new learning tasks show commonalities and are applied to the same type of input data (e.g., music audio), the deep representation generated for the data is also informative for the new task. Since, however, most of the networks used to generate deep representations are trained using a single initial learning source, their representation is unlikely to be informative for all possible future tasks. In this paper, we investigate which factors are most important for generating deep representations for data and learning tasks in the music domain. We conducted this investigation via an extensive empirical study involving multiple learning sources, as well as multiple deep learning architectures with varying levels of information sharing between sources, in order to learn music representations. We then validated these representations on multiple target datasets. The results of our experiments yield several insights into how to approach the design of methods for learning widely deployable deep data representations in the music domain.
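The deep transfer learning setup described in the abstract, reusing a pre-trained network's output as fixed features for a new task, can be illustrated with a minimal sketch. This is not the authors' architecture: the backbone shape, feature dimensionality, and data below are placeholders chosen for brevity, and a randomly initialized stand-in plays the role of a network pre-trained on an initial music learning source.

```python
# Minimal sketch of deep transfer learning: freeze a "pre-trained" backbone
# and train only a small head on a new target task.
# The backbone here is a placeholder; in the paper's setting it would be a
# network trained on one or more initial music learning sources.

import torch
import torch.nn as nn

# Stand-in backbone mapping a mel-spectrogram patch to a fixed-size embedding.
backbone = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (batch, 16)
)
backbone.eval()
for p in backbone.parameters():                 # freeze: the representation is reused as-is
    p.requires_grad = False

# New task head, e.g. a 10-class target dataset.
head = nn.Linear(16, 10)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for (spectrogram, label) pairs from the target dataset.
x = torch.randn(8, 1, 96, 128)
y = torch.randint(0, 10, (8,))

with torch.no_grad():                           # extract the deep representation
    features = backbone(x)
logits = head(features)                         # only the head is trained
loss = loss_fn(logits, y)
loss.backward()
optimizer.step()
```

In this sketch only the linear head receives gradient updates; whether such frozen features transfer well to a given target task is exactly the question the study examines across multiple learning sources and architectures.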

Original language: English
Pages (from-to): 1067-1093
Number of pages: 27
Journal: Neural Computing and Applications
Volume: 32 (2020)
Issue number: 4
DOIs
Publication status: Published - 2019

Keywords

  • Multitask learning
  • Music Information Retrieval
  • Representation learning
