Head movements and sound localization
1998
Abstract
Chapter one: Introduction
1.1. Static localization cues
1.1.1. Classical interaural cues
1.1.2. The ambiguity of classical interaural cues
1.1.3. Representing auditory space with a convenient co-ordinate system
1.1.4. High-frequency pinna-derived spectral cues
1.1.5. Low-frequency shoulder/torso-derived spectral cues
Related papers
The Journal of the Acoustical Society of America, 2007
Previous research (Jones & Letowski, 2001) has shown that listeners wearing helmets with greater ear coverage perform worse on localization tasks. However, the helmets that have been studied were selected from those commonly in use and differed in suspension systems and profiles. Three versions of the same helmet design, differing only in ear coverage (0%, 50%, 100%), were compared in this study. Sounds were presented from 12 azimuthal locations spaced 30 degrees apart in the presence of a white noise masker. Twelve listeners completed a localization task while wearing each of the helmets, as well as with no helmet, in two different acoustic environments. As ear coverage increased, localization performance decreased. The effects of early reflections (created by placing walls in the experimental space), though small, increased the effect of the helmets. The effect of ear coverage and its interaction with the acoustic environment will be discussed in terms of its significance for real-world environments. In addition, head-related transfer functions (HRTFs) were measured for each listener in each of the head conditions. A model that uses two binaural inputs to predict sound localization by the ideal observer has been developed. These data will be processed by this model to demonstrate the detrimental effects of ear coverage and reflected sound.
To a first-order approximation, binaural localization cues are ambiguous: a number of source locations give rise to nearly the same interaural differences. For sources more than a meter from the listener, binaural localization cues are approximately equal for any source on a cone centered on the interaural axis (i.e., the well-known "cones of confusion"). The current paper analyzes simple geometric approximations of a listener's head to gain insight into localization performance for sources near the listener. In particular, if the head is treated as a rigid, perfect sphere, interaural intensity differences (IIDs) can be broken down into two main components. One component is constant along the cone of confusion (and thus covaries with the interaural time difference, or ITD). The other component is roughly constant for a sphere centered on the interaural axis and depends only on the relative pathlengths from the source to the two ears. This second factor is only large enough to be perceptible when sources are within one or two meters of the listener. These results are not dramatically different if one assumes that the ears are separated by 160 degrees along the surface of the sphere (rather than diametrically opposite one another). Thus, for sources within a meter of the listener, binaural information should allow listeners to locate sources within a volume around a circle centered on the interaural axis, on a "doughnut of confusion." The volume of the doughnut of confusion increases dramatically with the angle between the source and the interaural axis, degenerating to the entire median plane in the limit.
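The distance dependence of that second component can be illustrated with elementary geometry. The following sketch (not the paper's model — the head radius, angles, and distances are invented for illustration, and the ears are placed diametrically opposite) compares two sources on the same cone of confusion: the pathlength *difference* barely changes with distance, while the pathlength *ratio*, which drives the near-field IID component, changes substantially.

```python
import numpy as np

# Illustrative assumption: ears diametrically opposite on the interaural
# (x) axis, head radius ~8.75 cm, free-field straight-line paths.
HEAD_RADIUS = 0.0875  # metres

LEFT_EAR = np.array([-HEAD_RADIUS, 0.0, 0.0])
RIGHT_EAR = np.array([HEAD_RADIUS, 0.0, 0.0])

def path_lengths(source):
    """Straight-line distances from a source point to the two ears."""
    return (np.linalg.norm(source - LEFT_EAR),
            np.linalg.norm(source - RIGHT_EAR))

# Two sources at 45 degrees from the interaural axis (same cone of
# confusion), one near (0.3 m) and one far (3.0 m).
theta = np.radians(45)
near = 0.3 * np.array([np.cos(theta), np.sin(theta), 0.0])
far = 3.0 * np.array([np.cos(theta), np.sin(theta), 0.0])

for src in (near, far):
    dl, dr = path_lengths(src)
    print(f"distance {np.linalg.norm(src):.1f} m: "
          f"path difference = {dl - dr:.4f} m, path ratio = {dl / dr:.3f}")
```

Running this shows the pathlength difference staying near-constant between the two distances while the ratio collapses toward 1 for the far source, consistent with the near-field IID cue being perceptible only within a meter or two.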
2018
In a real-world scenario, sound sources in the median plane cause neither interaural time nor level differences. The only perceivable directional cue is the spectral coloration, which is caused by the interaction of the direct acoustic propagation path with various paths reflected and diffracted at the pinna, head, and torso. In [1] the influence of torso reflections on head-related impulse responses (HRIRs) is examined. There it is shown that the torso has an energetically notable impact, and hence causes perceptible spectral coloration, for only a small number of source directions. Further, geometrical considerations indicate that these interactions are mainly relevant in the frequency range between 1 and 2 kHz. It therefore seems evident that spectral coloration due to different source directions is mainly caused by the pinna of the listener (see also [2]). However, this coloration is highly individual and depends on the physiogno...
Hearing Research, 1993
The influence of selectively filtering a broadband stimulus on binaural localization was investigated. First, head-related transfer functions were obtained by placing a miniature microphone at the entrance of the ear canal and presenting broadband noise bursts from each of 104 loudspeakers arrayed in the listener's left hemifield. The microphone's output was transformed into frequency spectra using a Fast Fourier Transform. The microphone and loudspeaker characteristics were accounted for by repeating the procedure with the microphone suspended in space. The in-ear data were divided by the in-space data, thereby providing an account of the pinna's interaction with the incident sound wave. Extracted from these data were the covert peak areas (CPAs) associated with different frequency segments. A CPA was defined as the spatial location of those loudspeakers which, when generating the stimulus, produced a sound pressure level at the ear canal entrance within 1 dB of the maximum level recorded for a particular frequency segment. A series of localization tests was conducted using a bandstop stimulus, one in which differently centered 2.0 kHz-wide frequency segments were filtered from a broadband noise. We predicted that when a given frequency segment was filtered, binaural listeners would less often report a sound as originating from the CPA associated with that segment compared to their performance when the sound was unfiltered. This prediction was substantiated by the data (P < 0.0001). While localization accuracy was decreased for the filtered stimuli, the decrement was significantly greater (P < 0.01) for sounds originating in the CPA. We interpreted the results to mean that monaural spectral cues contribute significantly to the accuracy of binaural localization and that the basis of the contribution is the spatial referents of stimulus frequencies.
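The spectral-division step described above (in-ear spectrum divided by in-space spectrum to cancel microphone and loudspeaker responses) can be sketched as follows. This is a toy reconstruction, not the authors' code: the noise burst, the stand-in "ear filter," and all lengths are invented for the example.

```python
import numpy as np

# Invented stand-ins: a broadband burst recorded by the microphone alone,
# and a short FIR filter playing the role of the pinna's directional filtering.
rng = np.random.default_rng(0)
N = 1024
free_field = rng.standard_normal(N)            # microphone-in-space recording
ear_filter = np.array([1.0, 0.5, -0.25, 0.1])  # hypothetical pinna response

# Simulate the in-ear recording as the free-field burst shaped by the ear
# (circular convolution via the frequency domain, for a clean example).
in_ear = np.fft.irfft(np.fft.rfft(free_field) * np.fft.rfft(ear_filter, n=N), n=N)

# Dividing the in-ear spectrum by the in-space spectrum cancels the shared
# microphone/loudspeaker response, leaving the directional transfer function.
H = np.fft.rfft(in_ear) / np.fft.rfft(free_field)
magnitude_db = 20 * np.log10(np.abs(H) + 1e-12)
print(magnitude_db[:4])
```

In this idealized setting the recovered `H` matches the spectrum of the assumed ear filter exactly; with real measurements, regularizing the division (e.g. a small floor on the denominator) guards against near-zero bins.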
Hearing Research, 1999
The two principal binaural cues to sound location are interaural time differences (ITDs), which are thought to be dominant at low frequencies, and interaural level differences (ILDs), which are thought to dominate at mid to high frequencies. The outer ear also filters the sound in a location-dependent manner and provides spectral cues to location. In these experiments we have examined the relative contribution of these cues to auditory localisation performance by humans. Six subjects localised sounds by pointing their face toward the perceived location of stimuli presented in complete darkness in an anechoic chamber. Control stimuli were spectrally flat (400 Hz to 16 kHz), while the relative contribution of location cues in the low-frequency channels was determined using noise high-passed at 2 kHz, and in the high-frequency channels using stimuli low-passed at 2 kHz. The removal of frequencies below 2 kHz had little effect on either the pattern of systematic errors or the distribution of localisation estimates, with the exception of an increase in the size of the standard deviations associated with a few rear locations. This suggests considerable redundancy in the auditory localisation information contained within a broadband sound. In contrast, restricting the target spectrum to frequencies below 2 kHz resulted in a large increase in cone-of-confusion errors as well as a subject-dependent biasing of the front-to-back or back-to-front confusions. These biases and the reduction in localisation accuracy for high-pass stimuli at some posterior locations are consistent with a contribution of spectral information at low frequencies. © 1999 Elsevier Science B.V. All rights reserved.
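The low-frequency ITD cue discussed in these abstracts is commonly approximated by the textbook Woodworth spherical-head formula; the sketch below uses that standard approximation (not a model from any of the listed papers) with assumed values for head radius and speed of sound.

```python
import numpy as np

R = 0.0875  # assumed head radius in metres
C = 343.0   # assumed speed of sound in m/s

def woodworth_itd(theta_deg):
    """Woodworth approximation for the ITD of a distant source.

    theta_deg is the azimuth from the median plane; the ITD is the
    path-length difference (arc around the sphere plus the straight
    segment) divided by the speed of sound.
    """
    theta = np.radians(theta_deg)
    return (R / C) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD {woodworth_itd(az) * 1e6:6.1f} us")
```

With these assumed constants the maximum ITD (source on the interaural axis, 90 degrees) comes out around 650 microseconds, in line with commonly quoted values for an adult head.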
1982
Four groups of eight monaural listeners received practice on locating sounds coming from different segments of the horizontal plane prior to a test in which all sounds originated within the same region. An additional eight monaural listeners were given the final localization test without the pretest practice. Knowledge of results was withheld. The main finding was that positive transfer of training was not equally apparent for all groups. That group for which the pretest and test involved the same ear and the same azimuthal positions of loudspeakers performed best. Practice in locating rearwardly positioned sounds did not benefit the localization of frontally positioned sounds even when the same ear was functioning in both situations. Experience in locating sounds from all segments of the horizontal plane appears to be required in order to build up an adequate internal representation of the acoustic surrounds.
Proceedings of International …
The authors have been involved in a series of studies showing that the fundamental frequency of a complex tone, the center frequency of a noise band, or the cut-off frequencies of simultaneously presented high- and low-passed noise bands systematically affect the elevation of auditory images, such that high frequencies are associated with high elevations, and low frequencies with low elevations. These studies show this effect not just for median-plane localization, but in some circumstances even for sources on the aural axis. This paper reviews the findings of these studies and considers their implications and applications, including for auditory display.
Frontiers in psychology, 2017
Studies have found that portions of space around our body are coded differently by our brain. Numerous works have investigated visual and auditory spatial representation, focusing mostly on the spatial representation of stimuli presented at head level, especially in the frontal space. Only a few studies have investigated spatial representation around the entire body and its relationship with motor activity. Moreover, it is still not clear whether the space surrounding us is represented as a unitary dimension or whether it is split up into different portions, differently shaped by our senses and motor activity. To clarify these points, we investigated audio localization of dynamic and static sounds at different body levels. In order to understand the role of a motor action in auditory space representation, we asked subjects to localize sounds by pointing with the hand or the foot, or by giving a verbal answer. We found that the audio sound localization was different depending on the bo...
Frontiers in Neuroscience, 2014
It is widely acknowledged that individualized head-related transfer function (HRTF) measurements are needed to adequately capture all of the 3D spatial hearing cues. However, many perceptual studies have shown that localization accuracy in the lateral dimension is only minimally decreased by the use of non-individualized head-related transfer functions. This evidence supports the idea that the individualized components of an HRTF could be isolated from those that are more general in nature. In the present study we decomposed the HRTF at each location into average, lateral and intraconic spectral components, along with an ITD in an effort to isolate the sound localization cues that are responsible for the inter-individual differences in localization performance. HRTFs for a given listener were then reconstructed systematically with components that were both individualized and non-individualized in nature, and the effect of each modification was analyzed via a virtual localization test where brief 250 ms noise bursts were rendered with the modified HRTFs. Results indicate that the cues important for individualization of HRTFs are contained almost exclusively in the intraconic portion of the HRTF spectra and localization is only minimally affected by introducing non-individualized cues into the other HRTF components. These results provide new insights into what specific inter-individual differences in head-related acoustical features are most relevant to sound localization, and provide a framework for how future human-machine interfaces might be more effectively generalized and/or individualized.
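The core decomposition idea in this abstract (splitting each HRTF into a shared component plus direction-dependent residuals, so components can be swapped between individualized and non-individualized versions) can be sketched with a minimal additive decomposition. The data, shapes, and the simple average/residual split below are illustrative assumptions, not the paper's actual average/lateral/intraconic decomposition.

```python
import numpy as np

# Invented stand-in data: log-magnitude HRTFs for 12 locations x 64 bins.
rng = np.random.default_rng(1)
n_locations, n_bins = 12, 64
hrtf_db = rng.standard_normal((n_locations, n_bins)) + 5.0

# Shared component: the across-location average spectrum.
average = hrtf_db.mean(axis=0)

# Direction-dependent component: what remains at each location.
residual = hrtf_db - average

# Because the split is additive, the set can be reconstructed exactly,
# which is what lets one recombine components from different listeners
# (e.g. a generic average with an individualized residual).
reconstructed = average + residual
print(np.max(np.abs(reconstructed - hrtf_db)))
```

The paper's actual scheme further splits the residual into lateral and intraconic parts and carries the ITD separately, but the swap-and-reconstruct logic is the same.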
Advances in Sound Localization, 2011
