Weak semantic context helps phonetic learning in a model of infant language acquisition

Stella Frank, Naomi H. Feldman, Sharon Goldwater

Published 2014 in Annual Meeting of the Association for Computational Linguistics

ABSTRACT

Learning phonetic categories is one of the first steps to learning a language, yet is hard to do using only distributional phonetic information. Semantics could potentially be useful, since words with different meanings have distinct phonetics, but it is unclear how many word meanings are known to infants learning phonetic categories. We show that attending to a weaker source of semantics, in the form of a distribution over topics in the current context, can lead to improvements in phonetic category learning. In our model, an extension of a previous model of joint word-form and phonetic category inference, the probability of word-forms is topic-dependent, enabling the model to find significantly better phonetic vowel categories and word-forms than a model with no semantic knowledge.
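The abstract's key idea — making word-form probabilities depend on the current topic so that topical context disambiguates phonetically similar words — can be illustrated with a toy generative sketch. This is not the authors' actual model (their model performs joint Bayesian inference over word-forms and vowel categories); the topic names, lexicon, and probabilities below are invented for illustration, and vowel categories are simplified to Gaussians over (F1, F2) formant values.

```python
# Toy sketch of topic-dependent word-form generation with Gaussian
# vowel categories. All categories, words, and probabilities are
# hypothetical; they only illustrate the generative structure.
import random

random.seed(0)

# Hypothetical vowel categories: mean (F1, F2) formants in Hz.
VOWELS = {
    "i": (300.0, 2300.0),
    "a": (750.0, 1300.0),
}
STD = 60.0  # shared acoustic noise, in Hz

# Hypothetical mini-lexicon: each word-form is a sequence of vowel categories.
LEXICON = {"bit": ["i"], "bat": ["a"], "tati": ["a", "i"]}

# The key extension over a purely distributional model: word-form
# probabilities differ by topic, so the topic carries weak semantic
# information about which word was likely produced.
TOPIC_WORD_PROBS = {
    "food":    {"bit": 0.1, "bat": 0.7, "tati": 0.2},
    "animals": {"bit": 0.6, "bat": 0.1, "tati": 0.3},
}

def sample_token(topic):
    """Sample a word-form given the topic, then noisy formants per vowel."""
    words, probs = zip(*TOPIC_WORD_PROBS[topic].items())
    word = random.choices(words, weights=probs)[0]
    acoustics = [
        (random.gauss(f1, STD), random.gauss(f2, STD))
        for vowel in LEXICON[word]
        for (f1, f2) in [VOWELS[vowel]]
    ]
    return word, acoustics

word, acoustics = sample_token("food")
```

A learner inverting this process can use the topic as extra evidence: an ambiguous formant value is more likely to belong to the vowel of a word that is probable under the current topic, which is why the topic-aware model finds better vowel categories than a purely distributional one.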

PUBLICATION RECORD

  • Publication year

    2014

  • Venue

    Annual Meeting of the Association for Computational Linguistics

  • Publication date

    2014-06-01

  • Fields of study

    Linguistics, Computer Science


  • Source metadata

    Semantic Scholar


REFERENCES

49 references

CITED BY

18 citing papers