Musical Rain


Musical Rain is the name of a stimulus used in psychoacoustic experiments. It was developed at the Centre for the Neural Basis of Hearing (CNBH) at the University of Cambridge, UK.


An example of a sequence of Musical Rain

Download MR_sequence.mp3 [54.00 kB]

An example of a sequence of vowel sounds

Download VW_sequence.mp3 [54.00 kB]


Why it is useful

The main use of musical rain is as a baseline stimulus in psychoacoustic experiments. Musical rain does not produce the percept of speech, even when listeners are directed to listen for speech; this is mainly due to the absence of continuous formants in the signal. Musical rain does, however, produce a level of BOLD (blood oxygenation level dependent) activation similar to that of vowels in all centres of the auditory pathway up to and including the primary receiving areas of auditory cortex in Heschl's gyrus and planum temporale (Uppenkamp et al., 2006). Beyond the primary receiving areas, in secondary auditory regions such as the anterior superior temporal sulcus and superior temporal gyrus, musical rain produces much less activation than the corresponding speech. Musical rain tokens also cannot be learned: they are generated by random processes, so every token is unique.


Spectrally rotated speech (RSp; Blesser, 1972) is not a good baseline because it gives the percept of speech, albeit unintelligible speech (Blesser, 1972; Narain et al., 2003). A baseline stimulus should not give the percept of speech, since listeners should not be tempted to impose meaning on the sound; it is not clear whether the effort of trying to understand unintelligible speech would activate other areas of the brain.


How it is generated

The procedure for generating musical rain is described in Uppenkamp et al. (2006). The technique is similar to that used to generate synthetic vowel sounds: four continuous streams of damped sinusoids are summed together. In a synthetic vowel, the carrier frequency of each stream corresponds to a formant frequency and the repetition rate determines the pitch of the vowel. For musical rain, however, the carrier frequencies and repetition rates of the streams are randomized: the carrier frequency is randomized over a range of about one octave, and the period between successive damped sinusoids is randomized over a 20-ms range.
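The sketch below (Python with NumPy) illustrates the general idea: four streams of damped sinusoids with randomized carriers and inter-onset periods are summed. The centre frequencies, decay time, base period and sample rate are illustrative assumptions, not the exact values used by the CNBH.

<pre>
import numpy as np

def damped_sinusoid(freq, decay_s, dur_s, fs):
    """A single exponentially damped sinusoid."""
    t = np.arange(int(dur_s * fs)) / fs
    return np.sin(2 * np.pi * freq * t) * np.exp(-t / decay_s)

def musical_rain(dur_s=1.0, fs=16000,
                 centre_freqs=(500, 1500, 2500, 3500),  # illustrative formant-like centres
                 base_period_s=0.01, period_jitter_s=0.02,
                 decay_s=0.004, seed=None):
    """Sketch of a musical-rain generator: four summed streams of damped sinusoids.

    In each stream the carrier frequency is randomized over about one octave
    around a centre frequency, and the period between successive damped
    sinusoids is randomized over a 20-ms range, as described in the text.
    """
    rng = np.random.default_rng(seed)
    n = int(dur_s * fs)
    out = np.zeros(n)
    for fc in centre_freqs:
        t0 = 0.0
        while t0 < dur_s:
            # carrier randomized over ~1 octave: fc / sqrt(2) .. fc * sqrt(2)
            f = fc * 2.0 ** rng.uniform(-0.5, 0.5)
            grain = damped_sinusoid(f, decay_s, 4 * decay_s, fs)
            start = int(t0 * fs)
            stop = min(start + grain.size, n)
            out[start:stop] += grain[:stop - start]
            # inter-onset period randomized over a 20-ms range
            t0 += base_period_s + rng.uniform(0.0, period_jitter_s)
    return out / np.max(np.abs(out))  # normalize to avoid clipping
</pre>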

Envelope-modulated Musical Rain

Temporal envelopes can be extracted by taking the absolute value of the Hilbert transform of a speech token, and smoothed by low-pass filtering at 20 Hz. These envelopes can then be used to modulate the corresponding musical rain tokens, and the RMS level of the musical rain is matched to that of the speech. This produces stimuli in which the long-term spectro-temporal distribution of energy is matched to that of the corresponding speech stimuli.
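As a concrete illustration, the sketch below (Python with NumPy/SciPy) follows the steps in the paragraph above. The choice of a 4th-order Butterworth filter and the function names are assumptions, not the exact CNBH implementation.

<pre>
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_modulated_rain(speech, rain, fs, cutoff_hz=20.0):
    """Sketch: impose the temporal envelope of a speech token on a
    musical-rain token and match the RMS level to the speech.

    `speech` and `rain` are 1-D arrays of equal length at sample rate `fs`.
    """
    # Temporal envelope: absolute value of the analytic (Hilbert) signal
    env = np.abs(hilbert(speech))
    # Smooth the envelope by low-pass filtering at 20 Hz
    b, a = butter(4, cutoff_hz / (fs / 2), btype='low')
    env = np.clip(filtfilt(b, a, env), 0.0, None)  # clip small negative ripples
    # Modulate the musical rain with the smoothed speech envelope
    modulated = rain * env
    # Match the RMS level of the result to that of the speech
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return modulated * rms(speech) / rms(modulated)
</pre>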


References

Blesser, B. (1972). Speech perception under conditions of spectral transformation. I. Phonetic characteristics. J Speech Hear Res 15: 5–41.

Narain, C., Scott, S. K., Wise, R. J. S., Rosen, S., Leff, A., Iversen, S. D. and Matthews, P. M. (2003). Defining a left-lateralized response specific to intelligible speech using fMRI. Cereb Cortex 13(12): 1362–1368.

Uppenkamp, S., Johnsrude, I. S., Norris, D., Marslen-Wilson, W. and Patterson, R. D. (2006). Locating speech-specific processes in human temporal cortex. NeuroImage 31: 1284–1296.
