From CNBH Acoustic Scale Wiki
The auditory image model (AIM) is a time-domain, functional model of the signal processing performed in the auditory pathway as the system converts a sound wave into the initial perception that we experience when presented with that sound. This representation is referred to as an auditory image, by analogy with the visual image of a scene that we experience in response to optical stimulation (Patterson et al., 1992; Patterson et al., 1995). AIM-MAT is an implementation of AIM in which the processing modules, the resource files and the graphical user interface are all written in MATLAB, so the user has full control of the system at all levels. The original version, referred to as aim2003, is described in Bleeck, Ives and Patterson (2004). An updated version, with the tutorial below, appeared as aim2006. The current version is AIM-MAT, which can be obtained from the SoundSoftware repository.
Core programming: Stefan Bleeck, Tom Walters
Module contributors: Tim Ives, Ralph van Dinther, Richard Turner, Toshio Irino
Documentation: Martin Vestergaard, Stefan Bleeck, Roy Patterson
Processing stages in AIM
The principal functions of AIM are to describe and simulate:
1. Pre-cochlear processing (PCP) of the sound up to the oval window of the cochlea,
2. Basilar membrane motion (BMM) produced in the cochlea,
3. The neural activity pattern (NAP) observed in the auditory nerve and cochlear nucleus,
4. The identification of maxima in the NAP that strobe temporal integration (STI),
5. The construction of the stabilized auditory image (SAI) that forms the basis of auditory perception,
6. A size-invariant representation of the information in the SAI, referred to as the Mellin Magnitude Image (MMI).
There are typically several alternative algorithms for performing each stage of processing. The default options provide an advanced version of AIM which uses the dynamic compressive gammachirp filterbank to simulate basilar membrane motion. Menus on the GUI provide access to alternative options, for example, the traditional, linear gammatone filterbank.
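The flow of information through stages 2-5 can be illustrated with a minimal sketch. This is Python rather than the MATLAB of AIM-MAT, and every function name, parameter value and filter choice here (the gammatone channel, the one-pole smoothing filter, the decaying strobe threshold) is an illustrative assumption, not the aim2006 modules themselves:

```python
import numpy as np

FS = 16000  # sampling rate (Hz); in AIM-MAT this is taken from the wave file

def gammatone(fc, fs=FS, dur=0.025):
    """Impulse response of a 4th-order gammatone filter (one BMM channel)."""
    t = np.arange(int(dur * fs)) / fs
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)          # Glasberg & Moore ERB
    b = 1.019 * erb
    g = t ** 3 * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

def nap(bmm, fs=FS, cutoff=1200.0):
    """NAP stage: half-wave rectification followed by low-pass smoothing."""
    hw = np.maximum(bmm, 0.0)                        # half-wave rectify
    a = np.exp(-2.0 * np.pi * cutoff / fs)           # one-pole coefficient
    out = np.empty_like(hw)
    acc = 0.0
    for i, x in enumerate(hw):                       # y[n] = (1-a)x[n] + a*y[n-1]
        acc = (1.0 - a) * x + a * acc
        out[i] = acc
    return out

def find_strobes(nap_ch, decay=0.995):
    """Strobe detection: local NAP maxima that exceed a decaying threshold."""
    strobes, thresh = [], 0.0
    for i in range(1, len(nap_ch) - 1):
        thresh *= decay
        if nap_ch[i] > thresh and nap_ch[i] >= nap_ch[i - 1] and nap_ch[i] > nap_ch[i + 1]:
            strobes.append(i)
            thresh = nap_ch[i]
    return strobes

def sai(nap_ch, strobes, width):
    """Strobed temporal integration: average the NAP segment after each strobe."""
    img, n = np.zeros(width), 0
    for i in strobes:
        if i + width <= len(nap_ch):
            img += nap_ch[i:i + width]
            n += 1
    return img / max(n, 1)

# A 100 Hz click train through a single 1 kHz channel:
x = np.zeros(FS // 2)
x[::160] = 1.0                                       # clicks every 10 ms
bmm = np.convolve(x, gammatone(1000.0))[:len(x)]     # stage 2: BMM
ch = nap(bmm)                                        # stage 3: NAP
s = find_strobes(ch)                                 # stage 4: strobes for STI
img = sai(ch, s, width=400)                          # stage 5: SAI
```

Each alternative module in AIM-MAT replaces one of these stages; the sketch only fixes the data flow between them.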
New features in aim2006
- The default spectral analysis module (dcgc) uses a ‘dynamic, compressive GammaChirp’ (dcGC) filterbank, which has fast-acting compression built into the filter itself: the compression is applied after the stage that simulates the passive motion of the basilar membrane, as part of the stage that simulates the active process in the cochlea (Irino and Patterson, 2006).
- The default form of neural transduction module (hl) is reduced to half-wave rectification and low-pass filtering, since there is now fast-acting compression in the auditory filter itself (i.e., in the dcGC filterbank).
- There is a new module (mellin) for converting the auditory resonance image into a Mellin Magnitude Image (MMI) and a Mellin Phase Image (MPI). The MMI is a scale-invariant representation of the information in the sound (Irino and Patterson, 2002).
- Each of the modules in aim2006 has a parameter file through which the user can control the operation of the module.
- Input signals: All standard wave files can be used to input sounds into aim2006. The sampling rate of the sound file determines the sampling rate of all subsequent calculations, and all sampling rates are supported.
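The scale invariance that the mellin module exploits can be seen directly: along the line s = iω, the Mellin transform is just a Fourier transform over log-time, so rescaling the time axis shifts the log-time pattern and changes only the Mellin phase, leaving the magnitude unchanged. A minimal sketch of that fact (Python; the Gaussian test feature, the scale factor of 2 and the grid parameters are all illustrative assumptions):

```python
import numpy as np

def mellin_magnitude(f, tau):
    """|Mellin transform| along s = i*omega: sample f on an exponential time
    grid t = exp(tau), then take the Fourier magnitude over tau."""
    return np.abs(np.fft.rfft(f(np.exp(tau))))

tau = np.linspace(-6.0, 2.0, 4096, endpoint=False)   # log-time axis
f = lambda t: np.exp(-((t - 1.0) ** 2) / 0.02)       # a feature at scale 1
g = lambda t: f(2.0 * t)                             # the same feature, rescaled

m1 = mellin_magnitude(f, tau)
m2 = mellin_magnitude(g, tau)
# m1 and m2 agree closely: the scale change only rotates the Mellin phase
```

This is why the MMI of a large and a small instance of the same source (e.g. the same vowel from a large and a small speaker) are, ideally, the same image, while the scale information is carried by the Mellin Phase Image.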
The development of AIM software at the CNBH was supported by the UK Medical Research Council (G990369, G0500221). The development of aim2006 was supported by the European Office of Aerospace Research and Development under award number IFT-053043.