Abstract
The automated recognition of music genre from audio is a challenging problem, as genre labels are subjective and noisy. Artist labels are less subjective and less noisy, and certain artists relate strongly to certain genres; however, artist labels are not guaranteed to be available for a given audio segment at prediction time. We therefore propose to apply the transfer learning framework: artist-related information is learned first and then used at inference time for genre classification. We consider different types of artist-related information, expressed through artist group factors, which allow for more efficient learning and stronger robustness to potential label noise. Furthermore, we investigate how to achieve the highest validation accuracy on the FMA dataset by experimenting with various transfer methods, including single-task transfer, multi-task transfer and, finally, multi-task learning.
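The multi-task setup the abstract describes can be sketched as a shared representation feeding two classification heads, one for genre and one for artist group factors, trained on a joint loss. The sketch below is a minimal NumPy illustration under stated assumptions: the feature dimensions, class counts, network sizes, and synthetic data are all hypothetical, not the authors' actual architecture or the FMA features.

```python
import numpy as np

# Hypothetical multi-task sketch: a shared layer feeds a genre head and an
# artist-group head; the joint loss is the sum of the two cross-entropies.
# All dimensions and data below are illustrative assumptions.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def xent(p, y):
    # mean cross-entropy for integer class labels y
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

# Synthetic "audio features": 64 clips, 20-dim vectors (assumption).
X = rng.normal(size=(64, 20))
y_genre = rng.integers(0, 8, size=64)    # 8 genre classes (assumption)
y_group = rng.integers(0, 4, size=64)    # 4 artist group factors (assumption)

# Shared layer plus two task-specific heads.
W_shared = rng.normal(scale=0.1, size=(20, 16))
W_genre = rng.normal(scale=0.1, size=(16, 8))
W_group = rng.normal(scale=0.1, size=(16, 4))

lr = 0.1
losses = []
for _ in range(50):
    H = np.tanh(X @ W_shared)            # shared representation
    p_g = softmax(H @ W_genre)
    p_a = softmax(H @ W_group)
    losses.append(xent(p_g, y_genre) + xent(p_a, y_group))

    # Gradients of the joint softmax cross-entropy loss.
    d_g = p_g.copy(); d_g[np.arange(64), y_genre] -= 1; d_g /= 64
    d_a = p_a.copy(); d_a[np.arange(64), y_group] -= 1; d_a /= 64
    dH = d_g @ W_genre.T + d_a @ W_group.T
    dZ = dH * (1 - H ** 2)               # tanh backprop
    W_genre -= lr * (H.T @ d_g)
    W_group -= lr * (H.T @ d_a)
    W_shared -= lr * (X.T @ dZ)
```

In the single-task transfer variant the abstract mentions, one would instead train only the artist-group head first and reuse the shared layer as a fixed feature extractor for a new genre classifier.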
Original language | English |
---|---|
Title of host publication | WWW'18 Companion Proceedings of the The Web Conference 2018 |
Place of Publication | Republic and Canton of Geneva, Switzerland |
Publisher | International World Wide Web Conferences Steering Committee |
Pages | 1929-1934 |
Number of pages | 6 |
ISBN (Print) | 978-1-4503-5640-4 |
DOIs | |
Publication status | Published - 2018 |
Event | WWW 2018: The Web Conference - Bridging natural and artificial intelligence worldwide, Lyon, France. Duration: 23 Apr 2018 → 27 Apr 2018. https://www2018.thewebconf.org |
Conference
Conference | WWW 2018 |
---|---|
Abbreviated title | WWW 2018 |
Country/Territory | France |
City | Lyon |
Period | 23/04/18 → 27/04/18 |
Internet address | https://www2018.thewebconf.org |
Keywords
- music information retrieval
- multi-task learning
- transfer learning
- neural network