Lynne E. Bernstein

Professor of Speech and Hearing Science

Full-time


Contact:

Office Phone: (202) 994-7403

Lynne E. Bernstein is a Professor in the Department of Speech, Language, and Hearing Sciences. Her research is funded by the NSF and the NIH. She combines neuroimaging, behavioral, and computational approaches to study the neural and behavioral bases of speech and multisensory processing. She leads the Communication Neuroscience Laboratory, whose current work includes a focus on perceptual learning. That research is developing advanced training methods for visual speech perception, with the goal of improving speech perception in noisy face-to-face social contexts, which are frequently difficult for older adults. Her laboratory is also investigating the neural bases of vibrotactile perceptual learning with a sensory substitution device, continuing a line of research that has, among other findings, demonstrated neural plasticity for vibrotactile stimulus processing in congenitally deaf adults. The laboratory is mapping the neural pathways activated during lipreading and audiovisual speech perception, and it is also carrying out research on memory and attention for clear and vocoded acoustic speech stimuli.

1983-1989 Research Scientist, Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD

1988-1995 Senior Scientist, Center for Auditory and Speech Sciences, Gallaudet University, Washington, DC

1995-2009 Senior Scientist, Communication Neuroscience, House Ear Institute, Los Angeles, CA

2003-2005 and 2009-2012 Program Director, Program in Cognitive Neuroscience, National Science Foundation, Arlington, VA

2010-present Professor, Department of Speech, Language, and Hearing Sciences, George Washington University, Washington, DC

"Collaborative Research: Using Somatosensory Speech and Non-Speech Categories to Test the Brain's General Principles of Perceptual Learning." L. E. Bernstein, PI. NSF, BCS-1439339

“Multi-Measure Speech Perception in Noise (MMSPIN) Chart: More Scores, Fewer Tests.” L. E. Bernstein, PI. NIH/NIDCD R43DC015749, $150,000 to SeeHear LLC, with subaward to GWU.

“Speech Perception Training: Advanced Scoring and Feedback Methods.” L. E. Bernstein, PI. NIH/NIDCD R43DC015418.

“I-Corps: Smart Speech Perception Feedback for Training and Diagnostics.” L. E. Bernstein, PI. NSF IIP-1738164.

  • Speech Perception
  • Language Processing
  • Lipreading
  • Multisensory Processing
  • Perceptual Learning
  • Deafness
  • Computational Modeling and Statistics
  • Neuroimaging

I am a cognitive neuroscientist studying speech perception, multisensory processing, and perceptual learning. My research investigates how speech can be perceived by vision (lipreading), hearing, touch, and combinations of perceptual modalities. I apply multiple methods, including functional magnetic resonance imaging (fMRI), electroencephalography (EEG), computational modeling, statistics, and diverse behavioral methods. My research is funded by the National Institutes of Health and the National Science Foundation. One of my current projects focuses on developing training methods to improve older adults’ ability to use visual speech information. Another examines how the brain learns novel vibrotactile encodings of speech. A third aims to develop methods that improve the yield of clinical speech-perception-in-noise tests.

 

BERNSTEIN, L. E. (in press). Response errors in females’ and males’ sentence lipreading necessitate structurally different models for predicting lipreading accuracy. Language Learning.

Liebenthal, E., & BERNSTEIN, L. E. (2017). Editorial: Neural mechanisms of perceptual categorization as precursors to speech perception. Frontiers in Neuroscience, 11, 69. doi: 10.3389/fnins.2017.00069

Files, B. T., Tjan, B., Jiang, J., & BERNSTEIN, L. E. (2015). Visual speech discrimination and identification of natural and synthetic consonant stimuli. Frontiers in Psychology, 6, 878. doi: 10.3389/fpsyg.2015.00878

BERNSTEIN, L. E., & Liebenthal, E. (2014). Neural pathways for visual speech perception. Frontiers in Neuroscience. 8, 386, 10.3389/fnins.2014.00386 http://journal.frontiersin.org/Journal/10.3389/fnins.2014.00386/abstrac…

Eberhardt, S. P., Auer, E. T., Jr., & BERNSTEIN, L. E. (2014). Multisensory training can promote or impede visual perceptual learning of speech stimuli: Visual-tactile versus visual-auditory training. Frontiers in Human Neuroscience, 8, 829. doi: 10.3389/fnhum.2014.00829

BERNSTEIN, L. E., Eberhardt, S. P., & Auer, E. T. (2014). Audiovisual spoken word training can promote or impede auditory-only perceptual learning: Results from prelingually deafened adults with late-acquired cochlear implants versus normal-hearing adults. Frontiers in Psychology, 5, 934. doi: 10.3389/fpsyg.2014.00934

Tjan, B., Chao, E., & BERNSTEIN, L. E. (2014). A visual or tactile signal makes speech detection more efficient by reducing uncertainty. European Journal of Neuroscience, 39, 1323-1331. doi: 10.1111/ejn.12471

Files, B. & BERNSTEIN, L. E. (2013The visual mismatch negativity elicited with visual speech stimuli. Frontiers in Human Neuroscience, 7, 371. http://journal.frontiersin.org/Journal/10.3389/fnhum.2013.00371/abstract

BERNSTEIN, L. E., Auer, E. T., Jr., Eberhardt, S. P., & Jiang, J. (2013). Auditory spoken word recognition with degraded speech can be enhanced by audiovisual training. Frontiers in Neuroscience, 7, 34. doi: 10.3389/fnins.2013.00034

BERNSTEIN, L. E. (2012). Commentary: A historical perspective. In B. E. Stein (Ed.), The New Handbook of Multisensory Processing (pp. 397-405). Cambridge, MA: MIT Press.

BERNSTEIN, L. E. (2012). Visual speech perception. In G. Bailly, P. Perrier, & E. Vatikiotis-Bateson (Eds.), Audiovisual Speech Processing. Cambridge: Cambridge University Press.

BERNSTEIN, L. E., Jiang, J., Pantazis, D., Lu, Z.-L., & Joshi, A. (2011). Visual phonetic processing localized using speech and non-speech face gestures in video and point-light displays. Human Brain Mapping, 32(10), 1660-1667.

Jiang, J. & BERNSTEIN, L. E. (2011). Psychophysics of the McGurk and other audiovisual speech integration effects. Journal of Experimental Psychology: Human Performance and Perception, 37, 1193-1209.

BERNSTEIN, L. E., Auer, E. T., Jr., & Jiang, J. (2010). Lipreading, the lexicon, and Cued Speech. In C. La Sasso, J. Leybaert, & K. Crain (Eds.), Cued Speech and Cued Language for Deaf and Hard of Hearing Children (pp. 429-44). San Diego: Plural Publishing.

Ponton, C. W., BERNSTEIN, L. E., & Auer, E. T. (2009). Mismatch negativity with visual-only and audiovisual speech. Brain Topography, 21(3-4), 207-215.

BERNSTEIN, L. E., Lu, Z.-L., & Jiang, J. (2008). Quantified acoustic-optical speech signal incongruity identifies cortical sites of audiovisual speech processing. Brain Research, 1242, 172-184.

BERNSTEIN, L. E., Auer, E. T., Jr., Wagner, M., & Ponton, C. W. (2008). Spatio-temporal dynamics of audiovisual speech processing. NeuroImage, 39, 423-435.

BERNSTEIN, L. E., Demorest, M. E., & Tucker, P. E. (2000). Speech perception without hearing. Perception & Psychophysics, 62, 233-252.

PhD, Psycholinguistics, University of Michigan

Postdoctoral Fellow, Northwestern University