Lynne E. Bernstein

Professor of Speech and Hearing Science
Lynne E. Bernstein is a Professor in the Department of Speech, Language, and Hearing Sciences. Her research is funded by the NSF and the NIH. She combines neuroimaging, behavioral, and computational approaches to study the neural and behavioral bases of speech and multisensory processing. She leads the Communication Neuroscience Laboratory, whose current work includes a focus on perceptual learning. The laboratory is developing advanced training methods for visual speech perception, with the goal of improving speech perception in noisy face-to-face social settings, which are frequently difficult for older adults. It is also investigating the neural bases of vibrotactile perceptual learning with a sensory substitution device, continuing a research line that has, among other findings, demonstrated neural plasticity for vibrotactile stimulus processing in congenitally deaf adults. In addition, the laboratory is mapping the neural pathways activated during lipreading and audiovisual speech perception and is studying memory and attention for clear and vocoded acoustic speech stimuli.
Positions
1983-1989 Research Scientist, Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD
1988-1995 Senior Scientist, Center for Auditory and Speech Sciences, Gallaudet University, Washington, DC
1995-2009 Senior Scientist, Communication Neuroscience, House Ear Institute, Los Angeles, CA
2003-2005 and 2009-2012 Program Director, Program in Cognitive Neuroscience, National Science Foundation, Arlington, VA
2010-present Professor, Department of Speech, Language, and Hearing Sciences, George Washington University
Grants (2018-Present)
- “Speech Perception Training: Advanced Scoring and Feedback Methods.” L. E. BERNSTEIN, PI and Co-I. NIDCD R44 DC015418-02, to SeeHear LLC with subaward to GWU. 9/1/2019-8/31/2022.
- “Collaborative Research: Using Somatosensory Speech and Non-Speech Categories to Test the Brain’s General Principles of Perceptual Learning.” L. E. BERNSTEIN, PI. NSF SBE/BCS, subaward from Georgetown, 9/1/2018-8/31/2019.
- “Collaborative Research: Using Somatosensory Speech and Non-Speech Categories to Test the Brain’s General Principles of Perceptual Learning.” L. E. BERNSTEIN, PI. NSF SBE/BCS-1439339. 9/15/2014-8/31/2019.
- “Visual speech perception training to ameliorate hearing difficulties in older adults.” L. E. BERNSTEIN, PI. NIH/NIDCD R56DC016107, 09/01/2018-08/31/2022.
- “Innatam – Enhancing vibrotactile speech learnability by optimizing how vibrotactile speech interfaces with the brain’s speech systems.” L. E. BERNSTEIN, PI. Subaward from Georgetown University, Industry source. 08/14/17-09/13/18.
Research Interests
- Speech Perception
- Language Processing
- Lipreading
- Multisensory Processing
- Perceptual Learning
- Deafness
- Computational Modeling and Statistics
- Neuroimaging
I am a cognitive neuroscientist studying speech perception, multisensory processing, and perceptual learning. My research investigates how speech can be perceived by vision (lipreading), hearing, touch, and combinations of perceptual modalities. I apply multiple methods, including functional magnetic resonance imaging (fMRI), electroencephalography (EEG), computational modeling, statistics, and a range of behavioral measures. My research is funded by the National Institutes of Health and the National Science Foundation. One of my current projects focuses on developing training methods to improve older adults’ ability to use visual speech information. Another examines how the brain learns novel vibrotactile encodings of speech. A third aims to develop methods that improve the yield of clinical speech-in-noise tests.
Publications (2014-2024)
- Damera, S. R., Malone, P. S., Stevens, B. W., Klein, R., Eberhardt, S. P., Auer, E. T., BERNSTEIN, L. E., & Riesenhuber, M. (2023). Metamodal coupling of vibrotactile and auditory speech processing systems through matched stimulus representations. The Journal of Neuroscience, 43, 4984-4996.
- BERNSTEIN, L. E., Auer, E. T., & Eberhardt, S. P. (2023). Modality-specific perceptual learning of vocoded auditory versus lipread speech: different effects of prior information. Brain Sciences, 13(7), 1008.
- BERNSTEIN, L. E., Auer, E. T., Jr., & Eberhardt, S. P. (2022). During lipreading training with sentence stimuli, feedback controls learning and generalization to audiovisual speech in noise. American Journal of Audiology, 31, 57-77.
- BERNSTEIN, L. E., Jordan, N., Auer, E. T., Jr., & Eberhardt, S. P. (2022). Lipreading: A review of its continuing importance for speech recognition with an acquired hearing loss and possibilities for effective training.
- Malone, P. S., Eberhardt, S. P., Auer, E. T., Klein, R., BERNSTEIN, L. E., & Riesenhuber, M. (2021). Neural basis of learning to perceive speech through touch using an acoustic-to-vibrotactile speech sensory substitution. bioRxiv, 2021.10.24.465610.
- Damera, S. R., Malone, P. S., Benson, W. S., Klein, R., Eberhardt, S. P., Auer, E. T., Jr., BERNSTEIN, L. E., & Riesenhuber, M. (2021). Metamodal coupling of vibrotactile and auditory speech processing systems through matched stimulus representations. bioRxiv, 2021.05.04.442660.
- BERNSTEIN, L. E., Eberhardt, S. P., & Auer, E. T., Jr. (2021). Errors on a speech-in-babble sentence recognition test reveal individual differences in acoustic phonetic perception and babble misallocations. Ear & Hearing, 42(3), 673-690.
- Malone, P. S., Eberhardt, S. P., Sprouse, C., Scholl, C., Auer, E. T., Bokeria, L., Ronkin, J., Jiang, X., BERNSTEIN, L. E., & Riesenhuber, M. (2019). Neural mechanisms of vibrotactile category learning. Human Brain Mapping.
- BERNSTEIN, L. E., Besser, J., Maidment, D. W., & Swanepoel, de W. (2018). Innovation in the context of audiology and in the context of the internet. American Journal of Audiology, 27(3S), 376-384.
- BERNSTEIN, L. E. (2018). Response errors in females’ and males’ sentence lipreading necessitate structurally different models for predicting lipreading accuracy. Language Learning, 68(S1), 127-158.
- Files, B. T., Tjan, B., Jiang, J., & BERNSTEIN, L. E. (2015). Visual speech discrimination and identification of natural and synthetic consonant stimuli. Frontiers in Psychology, 6:878. doi: 10.3389/fpsyg.2015.00878
- Eberhardt, S. P., Auer, E. T., Jr., & BERNSTEIN, L. E. (2014). Multisensory training can promote or impede visual perceptual learning of speech stimuli: Visual-tactile versus visual-auditory training. Frontiers in Human Neuroscience, 8:829.
- BERNSTEIN, L. E., & Liebenthal, E. (2014). Neural pathways for visual speech perception. Frontiers in Neuroscience, 8, 386. doi: 10.3389/fnins.2014.00386
- BERNSTEIN, L. E., Eberhardt, S. P., & Auer, E. T. (2014). Audiovisual spoken word training can promote or impede auditory-only perceptual learning: Results from prelingually deafened adults with late-acquired cochlear implants versus normal-hearing adults. Frontiers in Psychology, 5, 934.
- Tjan, B., Chao, E., & BERNSTEIN, L. E. (2014). A visual or tactile signal makes speech detection more efficient by reducing uncertainty. European Journal of Neuroscience, 39, 1323-1331.
Education
Ph.D., University of Michigan, Psycholinguistics
Postdoctoral Fellowship, Northwestern University, Speech Perception and Psychoacoustics