Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders

Jesse Engel, Cinjon Resnick, Adam Roberts, S. Dieleman, Mohammad Norouzi, D. Eck, K. Simonyan

Published in 2017 at the International Conference on Machine Learning

ABSTRACT

Generative models in vision have seen rapid progress due to algorithmic improvements and the availability of high-quality image datasets. In this paper, we offer contributions in both these areas to enable similar progress in audio modeling. First, we detail a powerful new WaveNet-style autoencoder model that conditions an autoregressive decoder on temporal codes learned from the raw audio waveform. Second, we introduce NSynth, a large-scale and high-quality dataset of musical notes that is an order of magnitude larger than comparable public datasets. Using NSynth, we demonstrate improved qualitative and quantitative performance of the WaveNet autoencoder over a well-tuned spectral autoencoder baseline. Finally, we show that the model learns a manifold of embeddings that allows for morphing between instruments, meaningfully interpolating in timbre to create new types of sounds that are realistic and expressive.
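
To make the architecture the abstract describes more concrete (a temporal encoder that turns the raw waveform into a sequence of embeddings, and an autoregressive decoder conditioned on those embeddings), the sketch below is a minimal, hypothetical PyTorch illustration. The module names, layer sizes, single dilated layer, and the linear embedding interpolation at the end are illustrative assumptions for exposition; the paper's model is a much deeper WaveNet-style network, and this is not the authors' implementation.

# Hypothetical sketch: temporal codes from raw audio condition a causal decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalEncoder(nn.Module):
    """Downsample a raw waveform [B, 1, T] into temporal codes [B, D, T // hop]."""
    def __init__(self, dim=16, hop=512):
        super().__init__()
        # A single strided convolution stands in for the paper's deeper encoder stack.
        self.conv = nn.Conv1d(1, dim, kernel_size=hop, stride=hop)

    def forward(self, x):
        return self.conv(x)

class ConditionedCausalDecoder(nn.Module):
    """One causal, dilated convolution conditioned on upsampled codes (WaveNet-flavoured)."""
    def __init__(self, dim=16, dilation=2, n_classes=256):
        super().__init__()
        self.dilation = dilation
        self.causal = nn.Conv1d(1, dim, kernel_size=2, dilation=dilation)
        self.cond = nn.Conv1d(dim, dim, kernel_size=1)       # project codes into the layer
        self.out = nn.Conv1d(dim, n_classes, kernel_size=1)  # per-sample class logits (e.g. mu-law)

    def forward(self, x, codes):
        # Left-pad so the convolution is causal: the output at time t never sees samples after t.
        h = self.causal(F.pad(x, (self.dilation, 0)))
        # Upsample the temporal codes to the sample rate and add them as a conditioning bias.
        cond = F.interpolate(self.cond(codes), size=h.shape[-1], mode="nearest")
        h = torch.tanh(h + cond)
        # Logits over quantized sample values; in training the targets would be shifted by one sample.
        return self.out(h)

# Usage: encode two notes, reconstruct one, and linearly blend embeddings to "morph" timbre.
enc, dec = TemporalEncoder(), ConditionedCausalDecoder()
wave_a, wave_b = torch.randn(1, 1, 4096), torch.randn(1, 1, 4096)
z_a, z_b = enc(wave_a), enc(wave_b)
logits = dec(wave_a, z_a)          # teacher-forced reconstruction logits
z_mix = 0.5 * z_a + 0.5 * z_b      # interpolate in the learned embedding space
morph_logits = dec(wave_a, z_mix)  # decode with the blended timbre code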

PUBLICATION RECORD

  • Publication year

    2017

  • Venue

    International Conference on Machine Learning

  • Publication date

    2017-04-05

  • Fields of study

    Mathematics, Computer Science

  • Source metadata

    Semantic Scholar
