This interactive brain viewer lets you see how different speech sounds are represented in the human brain. The top panel shows the brain of one participant along with electrodes from a 256-channel electrocorticography (ECoG) grid implanted as part of surgical treatment for epilepsy. The bottom panel shows a running spectrogram of sound from either the microphone or pre-recorded examples. The spectrogram shows which frequencies make up a sound (low at the bottom, high at the top) and how they change over time (left to right).
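The viewer itself runs in the browser, but the idea behind the spectrogram display can be sketched in a few lines. Below is a minimal illustration, not the viewer's actual code, of computing a spectrogram from a mono audio signal with SciPy; the sampling rate, window length, and overlap are illustrative choices, and the audio array is a placeholder for microphone input.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000                                   # assumed sampling rate in Hz
audio = np.random.randn(fs * 2)              # placeholder for 2 s of microphone audio

# Short-time Fourier analysis: frequency content (rows) over time (columns)
freqs, times, power = spectrogram(audio, fs=fs, nperseg=512, noverlap=384)

log_power = 10 * np.log10(power + 1e-10)     # log scale, as spectrograms are typically displayed

# log_power[0, :] is the lowest frequency band (bottom of the display),
# log_power[-1, :] is the highest (top); columns advance left to right in time.
```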
After you click one of the buttons, brain responses are calculated from models of how the brain responds to the spectrotemporal information present in speech. These models were fit to real data from one participant who listened to approximately 1 hour of naturally spoken English sentences. The size of each electrode increases with the amount of activity in the underlying brain area (bigger = more active), and electrodes that respond to different classes of phonemes are colored differently.
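As a rough sketch of what "calculated from models" means here: a common form of spectrotemporal encoding model predicts each electrode's activity as a weighted sum of the recent spectrogram history. The dimensions and weights below are hypothetical placeholders (the real weights were fit to the participant's recorded responses), so this only illustrates the shape of the computation, not the lab's actual model.

```python
import numpy as np

n_freqs, n_delays, n_electrodes = 32, 30, 256                        # assumed model dimensions
weights = np.random.randn(n_electrodes, n_freqs, n_delays) * 0.01    # hypothetical per-electrode weights

def predict_activity(spec_history):
    """Predict activity at one time point from the last n_delays spectrogram frames.

    spec_history: array of shape (n_freqs, n_delays), most recent frame last.
    Returns n_electrodes predicted responses (used to scale electrode size in the viewer).
    """
    return np.einsum('efd,fd->e', weights, spec_history)

# Example: predicted responses for one frame of (placeholder) spectrogram history.
activity = predict_activity(np.random.rand(n_freqs, n_delays))
```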
Click the microphone button to see predicted responses to your own voice in real time, or click the other buttons to listen to previously recorded example sounds.
For more methodological details, see the short summary here.
Developed by David Chang, Liberty Hamilton, and Morgan Lee at the Laboratory of Edward Chang at UCSF. We thank Roger Anguera Singla of the Gazzaley lab for providing the Glass Brain code. Special thanks to Jeramy Morrill and Boris Smus for developing the original Spectrogram code, and a big thank-you to the participants who volunteered their time and made this research possible. Participants provided written informed consent, and all procedures were approved by the UCSF Institutional Review Board.