Very Deep Convolutional Networks for Text Classification

Holger Schwenk, Loïc Barrault, Alexis Conneau, Yann LeCun

Published in 2016 at the Conference of the European Chapter of the Association for Computational Linguistics (EACL)

ABSTRACT

The dominant approaches for many NLP tasks are recurrent neural networks, in particular LSTMs, and convolutional neural networks. However, these architectures are rather shallow in comparison to the deep convolutional networks which have pushed the state of the art in computer vision. We present a new architecture (VDCNN) for text processing which operates directly at the character level and uses only small convolutions and pooling operations. We show that the performance of this model increases with depth: using up to 29 convolutional layers, we report improvements over the state of the art on several public text classification tasks. To the best of our knowledge, this is the first time that very deep convolutional nets have been applied to text processing.
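The abstract's two key ingredients, operating "directly at the character level" and stacking "only small convolutions", can be illustrated with a minimal sketch. The alphabet, sequence length, and filter values below are illustrative assumptions for exposition, not taken from the paper or its released code; a real VDCNN stacks many such size-3 convolutions with embeddings, batch normalization, and pooling.

```python
# Hedged sketch: character quantization plus one small (kernel size 3)
# 1-D convolution, in plain Python. All constants here are assumptions
# chosen for illustration, not the paper's actual hyperparameters.

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/|_#$%&*~`+=<>()[]{}"
PAD = 0  # index 0 is reserved for padding and out-of-alphabet characters

def quantize(text, seq_len=32):
    """Map each character to an integer index; pad or truncate to seq_len."""
    idx = [ALPHABET.find(c) + 1 for c in text.lower()[:seq_len]]  # 0 if absent
    return idx + [PAD] * (seq_len - len(idx))

def conv1d(seq, kernel, stride=1):
    """Valid 1-D convolution over a single-channel sequence of numbers."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(seq) - k + 1, stride)]

enc = quantize("Very deep ConvNets")          # fixed-length index sequence
feat = conv1d(enc, kernel=[1.0, 0.0, -1.0])   # one size-3 filter
```

Stacking layers of such small filters grows the receptive field linearly with depth while keeping the per-layer parameter count low, which is the design principle the abstract borrows from deep vision networks.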

PUBLICATION RECORD

  • Publication year

    2016

  • Venue

    Conference of the European Chapter of the Association for Computational Linguistics

  • Publication date

    2016-06-06

  • Fields of study

    Computer Science


  • Source metadata

    Semantic Scholar


REFERENCES

27 references

CITED BY

1,012 citing papers