Large-scale electrophysiology and deep learning reveal distorted neural signal dynamics after hearing loss
Shievanie Sabesan, A. Fragner, Ciaran Bench, Fotios Drakopoulos, N. Lesica
Published 2023 in eLife
ABSTRACT
Listeners with hearing loss often struggle to understand speech in noise, even with a hearing aid. To better understand the auditory processing deficits that underlie this problem, we made large-scale brain recordings from gerbils, a common animal model for human hearing, while presenting a large database of speech and noise sounds. We first used manifold learning to identify the neural subspace in which speech is encoded and found that it is low-dimensional and that the dynamics within it are profoundly distorted by hearing loss. We then trained a deep neural network (DNN) to replicate the neural coding of speech with and without hearing loss and analyzed the underlying network dynamics. We found that hearing loss primarily impacts spectral processing, creating nonlinear distortions in cross-frequency interactions that result in a hypersensitivity to background noise that persists even after amplification with a hearing aid. Our results identify a new focus for efforts to design improved hearing aids and demonstrate the power of DNNs as a tool for the study of central brain structures.
PUBLICATION RECORD
- Publication year: 2023
- Venue: eLife
- Publication date: 2023-05-10
- Fields of study: Medicine
- Source metadata: Semantic Scholar, PubMed
REFERENCES: 64
CITED BY: 8 citing papers