Hearing aid technology has improved dramatically in the last decade, especially in its ability to adapt to dynamic aspects of background noise. Despite these advances, hearing aid users continue to report difficulty hearing in background noise and trouble adjusting to amplified sound quality. These difficulties may arise in part from current approaches to hearing aid fittings, which largely focus on increasing audibility and managing environmental noise. These approaches do not account for the fact that sound is processed all along the auditory pathway, from the cochlea to the auditory cortex. Older adults represent the largest group of hearing aid wearers, yet older adults are known to have deficits in temporal resolution in the central auditory system. Here we review evidence supporting the use of the auditory brainstem response to complex sounds (cABR) in the assessment of hearing-in-noise difficulties and of auditory training efficacy in older adults.
1. Introduction
In recent years, scientists and clinicians have become increasingly aware of the role of cognition in the successful management of hearing loss, particularly in older adults. While it is often said that “we hear with our brain, not just with our ears,” the focus of the typical hearing aid fitting continues to be one of providing audibility. Despite evidence of age-related deficits in temporal processing [1–6], abilities beyond the cochlea are seldom measured. Moreover, when auditory processing is assessed, behavioral measures may be affected by reduced cognitive abilities in the domains of attention and memory [7, 8]; for example, an individual with poor memory will struggle to repeat back long sentences in noise. The assessment and management of hearing loss in older adults would be enhanced by an objective measure of speech processing. The auditory brainstem response (ABR) provides such an objective measure of auditory function; its uses have included evaluation of hearing thresholds in infants, children, and individuals who are difficult to test, assessment of auditory neuropathy, and screening for retrocochlear pathology [9]. Traditionally, the ABR has used short, simple stimuli, such as clicks and tone bursts, but the ABR has also been recorded to complex tones, speech, and music for more than three decades, with the ABR’s frequency following response (FFR) reflecting the temporal discharge of auditory neurons in the upper midbrain [10, 11]. Here, we review the role of the ABR to complex sounds (cABR) in assessment and documentation of treatment outcomes, and we suggest a potential role for the cABR in hearing aid fitting.
2. The cABR Approach
The cABR provides an objective measure of subcortical speech processing [12, 13]. It arises largely from the inferior colliculus of the upper midbrain [14], functioning as part of a circuit that interacts with cognitive, top-down influences. Unlike the click-evoked response, which bears no resemblance to the click waveform, the cABR waveform is remarkably similar to its complex stimulus waveform, whether a speech syllable or a musical chord, allowing for fine-grained evaluations of timing, pitch, and timbre representation. A click is nearly instantaneous (approximately 0.1 ms), whereas the cABR may be elicited by complex stimuli lasting several seconds. The cABR waveform can be analyzed to determine how robustly it represents different segments of the speech stimulus. For example, in response to the syllable /da/, the onset of the cABR occurs approximately 9 ms after stimulus onset, as expected given neural conduction time. The cABR onset is analogous to wave V of the brainstem’s response to a click stimulus, but the cABR has potentially greater diagnostic sensitivity for certain clinical populations. For example, in a comparison of children with learning impairments and typically developing children, significant differences were found for the cABR but not for responses to click stimuli [15]. The FFR comprises two regions: the transition region corresponding to the consonant-vowel (CV) formant transition and the steady-state region corresponding to the relatively unchanging vowel. The CV transition is perceptually vulnerable [16], particularly in noise, and the transition may be more degraded in noise than the steady state, especially in individuals with poorer speech-in-noise (SIN) perception [17].
The cABR is recorded to alternating stimulus polarities, and the averaged responses to the two polarities are added to minimize the cochlear microphonic and stimulus artifact [18, 19]. Phase locking to the stimulus envelope, which does not invert with polarity, enhances representation of the envelope and biases the response towards low frequencies. Phase locking to the spectral energy in the stimulus, on the other hand, follows the inverting phase of the stimulus; therefore, adding responses to alternating polarities cancels out much of the spectral energy [13, 20]. Subtracting responses to alternating polarities, however, enhances the representation of spectral energy while minimizing the response to the envelope. One might choose to use added or subtracted polarities, or both, depending on the question under investigation. For example, differences between good and poor readers are most prominent in the spectral region corresponding to the first formant of speech and are therefore more evident in subtracted polarities [21]. In contrast, the neural signature of good speech-in-noise perception is in the low frequency component of the response, which is most evident with added polarities [22]. The average response waveform of 17 normal-hearing older adults (ages 60 to 67), its evoking stimulus, and the stimulus and response spectra (for added and subtracted polarities) are displayed in Figure 1.
The stimulus /da/ (gray) is displayed with its response (black) in the time and frequency domains. (a) Time domain. The response is an average of 17 older adults (ages 60 to 67), all of whom have audiometrically normal hearing. The periodicity of the stimulus is reflected in the response, with peaks repeating every ~10 ms (the period of the F0 of the vowel /a/). (b) and (c) Frequency domain. Fast Fourier transforms were calculated over the steady-state region of the response, showing energy at the F0 (100 Hz) and its integer harmonics for responses obtained by adding (b) and subtracting (c) responses to alternating polarities.
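The polarity manipulation described above amounts to simple addition and subtraction of the two averaged sub-responses. A minimal numpy sketch with a toy two-component response (all signal values are hypothetical, chosen only to illustrate the arithmetic) shows why adding emphasizes the envelope-following energy while subtracting emphasizes spectral energy:

```python
import numpy as np

fs = 20000                     # sampling rate (Hz); illustrative value
t = np.arange(0, 0.05, 1 / fs)

# Toy model: an envelope-following component (does not invert with
# stimulus polarity) plus a spectral component (inverts with polarity).
envelope_part = 0.5 * np.sin(2 * np.pi * 100 * t)   # F0-related energy
spectral_part = 0.2 * np.sin(2 * np.pi * 700 * t)   # formant-related energy

resp_pos = envelope_part + spectral_part   # response to one polarity
resp_neg = envelope_part - spectral_part   # response to inverted polarity

added = (resp_pos + resp_neg) / 2       # emphasizes the envelope (F0)
subtracted = (resp_pos - resp_neg) / 2  # emphasizes spectral energy
```

In this idealized case the added waveform recovers exactly the envelope-following component and the subtracted waveform exactly the spectral component; in real recordings the separation is only approximate.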
The cABR is acoustically similar to the stimulus. That is, after the cABR waveform has been converted to a .wav file, untrained listeners are able to recognize monosyllabic words from brainstem responses evoked by those words [23]. The fidelity of the response to the stimulus permits evaluation of the strength of subcortical encoding of multiple acoustic aspects of complex sounds, including timing (onsets, offsets), pitch (the fundamental frequency, F0), and timbre (the integer harmonics of the F0) [13]. Analyses of the cABR include measurement of latency and amplitude in the time domain and magnitude of the F0 and individual harmonics in the frequency domain. Because of the cABR’s remarkable stimulus fidelity, cross-correlation between the stimulus and the response also provides a meaningful measure [24]. In addition, responses between two conditions can be cross-correlated to determine the effects of a specific condition such as noise on a response [25].
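A stimulus-to-response cross-correlation of the kind just described can be sketched as follows; the sampling rate and the 3–12 ms lag search window are illustrative assumptions, not values taken from the cited studies:

```python
import numpy as np

def max_xcorr(stim, resp, fs, min_lag_ms=3.0, max_lag_ms=12.0):
    """Search plausible neural lags for the best normalized
    stimulus-to-response correlation; returns (r, lag in ms)."""
    best_r, best_lag = 0.0, 0
    for lag in range(int(min_lag_ms * fs / 1000), int(max_lag_ms * fs / 1000) + 1):
        s = stim[: len(stim) - lag]
        r = resp[lag : lag + len(s)]
        c = np.corrcoef(s, r)[0, 1]
        if abs(c) > abs(best_r):
            best_r, best_lag = c, lag
    return best_r, best_lag * 1000 / fs

# Demo: a "response" that is simply the stimulus delayed by 8 ms
fs = 10000
rng = np.random.default_rng(0)
stim = rng.standard_normal(1000)                   # 100 ms of broadband signal
resp = np.concatenate([np.zeros(80), stim[:-80]])  # 8 ms delay (80 samples)
r, lag_ms = max_xcorr(stim, resp, fs)
```

The same machinery can compare responses across conditions, for example quiet versus noise, by correlating the two response waveforms with each other instead of with the stimulus.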
Latency analysis has traditionally relied on picking individual peaks, a subjective task that is prone to error. Phase analysis provides an objective method for assessing temporal precision. Because the brainstem represents stimulus frequencies up to approximately 2000 Hz (the upper limit of brainstem phase locking) through timing [26] and phase representation [27, 28], the phase difference between two waveforms (in radians) can be converted to a timing difference and represented in a “phaseogram.” This analysis provides an objective measure of response timing on a frequency-specific basis. For example, the brainstem’s ability to encode phase differences in the formant trajectories of syllables such as /ba/ and /ga/ can be assessed and compared to a normal standard or between groups in a way that would not be feasible if the analysis were limited to peak picking (Figure 2). Although the response peaks corresponding to the F0 are discernible, the peaks in the higher frequency formant transition region, such as in Figure 2, would be difficult to identify, even for the trained eye.
A phaseogram displaying differences in phase (radians, colorbar) in responses to /ba/ and /ga/ syllables, which have been synthesized so that they differ only in the second formant of the consonant-to-vowel transition. The top and bottom groups are children (ages 8 to 12) who differ on a speech-in-noise perception measure, the Hearing in Noise Test (HINT). The red color indicates greater phase difference, with /ga/ preceding /ba/, as expected given cochlear tonotopicity. Note that phase differences are only present in the transition, not in the steady state, during which the syllables are identical. Modified from [27].
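The phase-to-time conversion underlying a phaseogram is straightforward: a phase difference Δφ (radians) at frequency f corresponds to a timing difference Δt = Δφ / (2πf). A sketch with synthetic signals (the sampling rate, frequency, and 0.25 ms offset are assumptions for illustration):

```python
import numpy as np

def phase_difference(resp_a, resp_b):
    """Per-frequency phase difference (radians) between two responses,
    taken from the cross-spectrum rather than from picked peaks."""
    cross = np.fft.rfft(resp_a) * np.conj(np.fft.rfft(resp_b))
    return np.angle(cross)

def phase_to_time_ms(phase_rad, freq_hz):
    """Convert a phase difference at freq_hz into milliseconds."""
    return phase_rad / (2 * np.pi * freq_hz) * 1000

# Two 400 Hz "responses" offset by 0.25 ms
fs = 10000
t = np.arange(0, 0.1, 1 / fs)              # 100 ms analysis window
a = np.sin(2 * np.pi * 400 * t)
b = np.sin(2 * np.pi * 400 * (t - 0.00025))
dphi = phase_difference(a, b)
bin_400 = int(400 * len(t) / fs)           # FFT bin corresponding to 400 Hz
dt_ms = phase_to_time_ms(dphi[bin_400], 400)
```

Applying this bin-by-bin across frequencies, and over successive short time windows, yields the frequency-by-time map of timing differences displayed in a phaseogram.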
In natural speech, frequency components change rapidly, and a pitch tracking analysis can be used to evaluate the ability of the brainstem to encode the changing fundamental frequency over time. From this analysis, a measure of pitch strength can be computed using short-term autocorrelation, a method which determines signal periodicity as the signal is compared to a time-shifted copy of itself. Pitch-tracking error is determined by comparing the stimulus F0 with the response F0 for successive periods of the response [29, 30]. These and other measures produced by the pitch-tracking analysis reveal that the FFR is malleable and experience dependent, with better pitch tracking in individuals who have heard changing vowel contours or frequency sweeps in meaningful contexts, such as in tonal languages or music [24, 31].
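A short-term autocorrelation pitch estimator of the kind described can be sketched in a few lines; the frame length, F0 search range, and the toy two-harmonic “vowel” are assumptions for illustration only:

```python
import numpy as np

def estimate_f0(frame, fs, fmin=80, fmax=400):
    """Estimate the F0 of one short frame by short-term autocorrelation:
    the lag of strongest self-similarity gives the period."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # candidate period range (samples)
    best_lag = lo + np.argmax(ac[lo:hi])
    return fs / best_lag

# Toy check: a 100 Hz "vowel" frame should yield ~100 Hz
fs = 8000
t = np.arange(0, 0.04, 1 / fs)                # one 40 ms frame
frame = np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
f0 = estimate_f0(frame, fs)
```

Running such an estimator frame by frame over both stimulus and response, and accumulating the absolute F0 differences, gives a pitch-tracking error measure of the kind described above.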
Other automated analyses which could potentially be incorporated into a clinical protocol include the assessment of response consistency and phase locking. Response consistency provides a way of evaluating trial-to-trial within-subject variability, perhaps representing the degree of temporal jitter or asynchronous neural firing that might be seen in an impaired or aging auditory system [6]. Auditory neuropathy spectrum disorder would be an extreme example of dyssynchronous neural firing, affecting even the response to the click [32–34]. A mild form of dyssynchrony, however, may not be evident in the results of the typical audiologic or ABR protocol but might be observed in a cABR with poor response consistency. The phase-locking factor is another measure of response consistency, providing a measure of trial-to-trial phase coherence [35, 36]. Phase locking refers to the repetitive neural response to periodic sounds. While response consistency is determined largely by the stimulus envelope, the phase-locking factor is a measure of consistency of the stimulus-evoked oscillatory activity [37].
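The phase-locking factor is commonly computed as the magnitude of the mean unit phase vector across trials at a frequency of interest: 1.0 means identical phase on every trial, values near 0 mean random phase. A sketch on simulated trials (the sampling rate, trial counts, and noise levels are hypothetical):

```python
import numpy as np

def phase_locking_factor(trials, fs, freq):
    """Trial-to-trial phase coherence at one frequency.
    trials: 2-D array, one trial per row."""
    n = trials.shape[1]
    bin_idx = int(round(freq * n / fs))        # FFT bin for target frequency
    coeffs = np.fft.rfft(trials, axis=1)[:, bin_idx]
    phases = coeffs / np.abs(coeffs)           # unit vectors e^(i*phase)
    return np.abs(phases.mean())

# Demo: phase-locked trials versus phase-jittered trials
rng = np.random.default_rng(1)
fs, n = 2000, 400                              # 0.2 s per trial
t = np.arange(n) / fs
locked = np.array([np.sin(2 * np.pi * 100 * t) + 0.5 * rng.standard_normal(n)
                   for _ in range(50)])
jittered = np.array([np.sin(2 * np.pi * 100 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(50)])
plf_locked = phase_locking_factor(locked, fs, 100)
plf_jittered = phase_locking_factor(jittered, fs, 100)
```

The jittered trials model the dyssynchronous neural firing discussed above: each trial contains the same periodic energy, yet the across-trial phase coherence collapses.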
3. The cABR and Assessment of Hearing Loss and the Ability to Hear in Noise
The cABR may play an important role in the assessment of hearing loss and hearing in noise. It has good test-retest reliability [39, 40], a necessity for clinical comparisons and for documentation of treatment outcomes. Just as latency differences of 0.2 ms in brainstem responses to click stimuli can be considered clinically significant when screening for vestibular schwannomas [9], similar differences on the order of fractions of milliseconds in the cABR have been found to reliably separate clinical populations [41, 42]. Banai et al. [41] found that the onset and other peaks of the cABR are delayed 0.2 to 0.3 ms in children who are poor readers compared to good readers. In older adults, the offset latency is a strong predictor of self-assessed SIN perception, with latencies ranging from 47 to 51 ms in responses to a 40 ms /da/ (formant transition only) [43]. Temporal processing deficits are also seen in children with specific language impairment, who have a decreased ability to track frequency changes in tonal sweeps, especially at faster rates [44].
Because of the influence of central and cognitive factors on speech-in-noise perception, the pure-tone audiogram, a largely peripheral measure, does not adequately predict the ability to hear in background noise, especially in older adults [45–47]. Due to the convergence of afferent and efferent transmission in the inferior colliculus (IC) [48, 49], we propose that the cABR is an effective method for assessing the effects of sensory processing and higher auditory function on the IC. While the cABR does not directly assess cognitive function, it is influenced by higher-level processing (e.g., selective attention, auditory training). The cABR is elicited passively without the patient’s input or cooperation beyond maintaining a relaxed state, yet it provides in essence a snapshot in time of auditory processing that reflects both cognitive (auditory memory and attention) and sensory influences.
In a study of hearing-, age-, and sex-matched older adults (ages 60–73) with clinically normal hearing, the older adults with good speech-in-noise perception had more robust subcortical stimulus representation, with higher root-mean-square (RMS) and F0 amplitudes, than older adults with poor speech-in-noise perception (Figure 3) [38]. Perception of the F0 is important for object identification and stream segregation, allowing us to attend to a single voice in a background of voices [50]; therefore, greater representation of the F0 in subcortical responses may enhance one’s ability to hear in noise. When we added noise (six-talker babble) to the presentation of the syllable, we found that the responses of individuals in the top speech-in-noise group were less degraded than those in the bottom speech-in-noise group (Figure 3). These results are consistent with more than two decades of research documenting suprathreshold deficits that cannot be identified by threshold testing [46, 47, 51–58]. Even in normal-hearing young adults, better speech-in-noise perception is related to more robust encoding of the F0 in the cABR [53]. Furthermore, in a study with young adult participants, Ruggles et al. [51] found that spatial selective auditory attention performance correlates with the phase locking of the FFR to the speech syllable /da/. They also found that selective attention correlates with the ability to detect frequency modulation but is not related to age, reading span, or hearing threshold.
Responses to the syllable /da/ are more robust in older adults with good speech-in-noise perception compared to those with poor speech-in-noise perception, demonstrated by greater RMS amplitude (a) and amplitude of the F0 in the good speech-in-noise group (b). The responses in the poor speech-in-noise group were more susceptible to the degrading effects of noise, as shown by greater differences in responses to the /da/ in quiet and noise (cross-correlations) (c). Relationship between speech-in-noise perception and the quiet-noise correlation (d). *P<0.05, **P<0.01. Modified from [38].
The cABR provides evidence of age-related declines in temporal and spectral precision, suggesting a neural basis for speech-in-noise perception difficulties. In older adults, delayed neural timing is found in the region corresponding to the CV formant transition [59, 60], but timing in the steady-state region remains unchanged. Importantly, age-related differences are seen in middle-aged adults as young as 45, indicating that declines in temporal resolution are not limited to the elderly population. Robustness of frequency representation also decreases with age, with the amplitude of the fundamental frequency declining in middle-aged and older adults. These results provide neural evidence for the finding that difficulty hearing in noise can begin as early as middle age [61].
What is the role of the cABR in clinical practice? The cABR can be collected in as little as 20 minutes, including electrode application. Nevertheless, even an additional twenty minutes would be hard to fit into a busy practice. To be worthwhile, the additional time must yield information not provided by the existing protocol. One purpose of an audiological evaluation is to determine the factors that contribute to the patient’s self-perception of hearing ability. To evaluate the contributions of candidate factors, we used multiple linear regression modeling to predict scores on the speech subtest of the Speech, Spatial, and Qualities of Hearing Scale [62]. Pure-tone thresholds, speech-in-noise perception, age, and timing measures of the cABR served as predictors. Behavioral assessments predicted 15% of the variance in the SSQ score, but adding brainstem variables (specifically the onset slope, offset latency, and overall morphology) predicted an additional 16% of the variance in the SSQ (Figure 4). Therefore, the cABR can provide the clinician with unique information about the biological processing of speech [43].
Self-perception of speech, assessed by the Speech, Spatial, and Qualities of Hearing Scale (SSQ), is predicted by audiologic and cABR measures. The audiometric variables predict 15% of the variance in the SSQ; the cABR variables predict an additional 16%. In the multiple linear regression model, only the contributions of the cABR onset time and morphology variables are significant. *P<0.05, ***P<0.001.
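The two-stage logic of this analysis, entering behavioral predictors first and then asking how much additional variance the brainstem variables explain, can be sketched with ordinary least squares on synthetic data. The variable names and effect sizes below are hypothetical illustrations, not the study’s data:

```python
import numpy as np

def r_squared(X, y):
    """Variance in y explained by ordinary least squares on X
    (a column of ones is appended as the intercept)."""
    X1 = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Synthetic illustration: a self-report score driven partly by
# audiometric/behavioral factors and partly by a cABR timing measure.
rng = np.random.default_rng(0)
n = 120
pta = rng.standard_normal(n)           # pure-tone average (standardized)
sin_score = rng.standard_normal(n)     # speech-in-noise score
onset_slope = rng.standard_normal(n)   # cABR onset slope
ssq = 0.4 * pta + 0.3 * sin_score + 0.5 * onset_slope + rng.standard_normal(n)

r2_behavioral = r_squared(np.column_stack([pta, sin_score]), ssq)
r2_full = r_squared(np.column_stack([pta, sin_score, onset_slope]), ssq)
delta_r2 = r2_full - r2_behavioral     # unique contribution of the cABR variable
```

The quantity `delta_r2` corresponds to the “additional 16% of the variance” reported above: variance the brainstem measures explain over and above the behavioral battery.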
4. The cABR Is Experience Dependent
As the site of intersecting afferent and efferent pathways, the inferior colliculus plays a key role in auditory learning. Indeed, animal models have demonstrated that the corticocollicular pathway is essential for auditory learning [63, 64]. Therefore, it is reasonable to expect the cABR to reflect the effects of auditory training; in fact, the cABR shows influences of both life-long and short-term training. For example, native speakers of tonal languages have better brainstem pitch tracking of changing vowel contours than speakers of nontonal languages [24]. Bilingualism provides another example of the auditory advantages conferred by language expertise. Bilingualism is associated with enhanced cognitive skills, such as language processing and executive function, and it also promotes experience-dependent plasticity in subcortical processing [65]. Bilingual adolescents who reported high English and Spanish proficiency had more robust subcortical encoding of the F0 of a target sound presented in a noisy background than their age-, sex-, and IQ-matched monolingual peers. Within the bilingual group, a measure of sustained attention was related to the strength of the F0; this relation between attention and the F0 was not seen in the monolingual group. Krizman et al. [65] proposed that diverse language experience heightens directed attention toward linguistic input; in turn, this attention becomes increasingly focused on features important for speaker identification and stream segregation in noise, such as the F0.
Musicianship, another form of auditory expertise, also confers benefits for speech processing; musicians who speak nontonal languages have enhanced pitch tracking of linguistically relevant vowel contours, similar to that of tonal language speakers [31]. Ample evidence now exists for the effects of musical training on the cABR [28, 60, 67–73]. The OPERA (Overlap, Precision, Emotion, Repetition, and Attention) hypothesis has been proposed as a mechanism by which music engenders auditory system plasticity [74]. For example, there is overlap in the auditory pathways for speech and music, explaining in part the musician’s superior neural processing of speech in noise. The focused attention required for musical practice and performance strengthens sound-to-meaning connections, enhancing top-down cognitive (e.g., auditory attention and memory) influences on subcortical processing [75].
Musicians’ cABR responses are more resistant to the degrading effects of noise than nonmusicians’ [68, 73]. Background noise delays and reduces the amplitude of the cABR [76]; however, musicianship mitigates the effects of six-talker babble on cABR responses in young adults, with earlier peak timing of the onset and the transition in musicians compared to nonmusicians. Bidelman and Krishnan [73] evaluated the effects of reverberation on the FFR and found that reverberation had no effect on the neural encoding of pitch but significantly degraded the representation of the harmonics. In addition, they found that young musicians had more robust responses in quiet and in most reverberation conditions. Benefits of musicianship have also been seen in older adults; when comparing the effects of aging in musicians and nonmusicians, the musicians did not show the expected age-related neural timing delays in the CV transition, indicating that musical experience offsets the effects of aging [60]. These neural benefits in older musicians are accompanied by better SIN perception, temporal resolution, and auditory memory [77].
But what about the rest of us who are not able to devote ourselves full time to music practice—can musical training improve our auditory processing as well? Years of musical training in childhood are associated with more robust responses in adulthood [67]: young adults with no musical training had responses closer to the noise floor, whereas groups with one to five or six to eleven years of training had progressively larger response signal-to-noise ratios. In a structural equation model of the factors predicting speech-in-noise perception in older adults, two subsets were compared—a group with no history of musical training and a group with at least one year of musical training (range 1 to 45 years). Cognitive factors (memory and attention) played a bigger role in speech-in-noise perception in the group with musical training, whereas life experience factors (physical activity and socioeconomic status) played a bigger role in the group with no experience. Subcortical processing (pitch encoding, harmonic encoding, and cross-correlations between responses in quiet and noise) accounted for a substantial amount of the variance in both groups [78].
Short-term training can also engender subcortical plasticity. Carcagno and Plack [79] found changes in the FFR after ten sessions of pitch discrimination training that took place over the course of approximately four weeks. Four groups participated in the experiment: three experimental groups (static tone, rising tone, and falling tone) and one control group. Perceptual learning occurred for the three experimental groups, with effects somewhat specific to the stimulus used in training. These behavioral improvements were accompanied by changes in the FFR, with stronger phase locking to the F0 of the stimulus, and changes in phase locking were related to changes in behavioral thresholds.
Just as long-term exposure to tonal language leads to better pitch tracking to changing vowel contours, just eight days of vocabulary training on words with linguistically relevant contours resulted in stronger encoding of the F0 and decreases in the number of pitch-tracking errors [29]. The participants in this study were young adults with no prior exposure to a tonal language. Although the English language uses rising and falling pitch to signal intonation, the use of dipping tone would be unfamiliar to a native English speaker, and, interestingly, the cABR to the dipping tone showed the greatest reduction in pitch-tracking errors.
Training that targets speech-in-noise perception has also shown benefits at the level of the brainstem [80]. Young adults were trained to discriminate between CV syllables embedded in a continuous broad-band noise at a +10 dB signal-to-noise ratio. Activation of the medial olivocochlear bundle (MOCB) was monitored during the five days of training through the use of contralateral suppression of evoked otoacoustic emissions. Training improved performance on the CV discrimination task, with the greatest improvement occurring over the first three training days. A significant increase in MOCB activation was found, but only in the participants who showed robust improvement (learners). The learners showed much weaker suppression than the nonlearners on the first day; in fact, the level of MOCB activation was predictive of learning. This last finding would be particularly important for clinical purposes—a measure predicting benefit would be useful for determining treatment candidacy.
There is renewed clinical interest in auditory training for the management of adults with hearing loss. Historically, attempts at auditory training had somewhat limited success, partly due to constraints on the clinician’s ability to produce perceptually salient training stimuli. With the advent of computer technology and consumer-friendly software, auditory training has been revisited. Computer technology permits adaptive expansion and contraction of difficult-to-perceive contrasts and/or unfavorable signal-to-noise ratios. The Listening and Communication Enhancement program (LACE, Neurotone, Inc., Redwood City, CA) is an example of an adaptive auditory training program that employs top-down and bottom-up strategies to improve hearing in noise. Older adults with hearing loss who underwent LACE training scored better on the Quick Speech in Noise test (QuickSIN) [81] and the hearing-in-noise test (HINT) [82]; they also reported better hearing on self-assessment measures—the Hearing Handicap Inventory for the Elderly/Adults [83] and the Client Oriented Scale of Improvement [84, 85]. The control group did not show improvement on these measures.
The benefits on the HINT and QuickSIN were replicated in young adults by Song et al. [66]. After completing 20 hours of LACE training over a period of four weeks, the participants not only improved on speech-in-noise performance but also had more robust speech-in-noise representation in the cABR (Figure 5). They showed training-related increases in the subcortical representation of the F0 in response to speech sounds presented in noise but not in quiet. Importantly, the amplitude of the F0 at pretest predicted training-induced change in speech-in-noise perception. The advantages of computer-based auditory training for improved speech-in-noise perception and neural processing have also been observed in older adults [86]. Based on this evidence, the cABR may be useful for documenting treatment outcomes, an important component of evidence-based service.
Young adults with normal hearing have greater representation of the F0 in subcortical responses to /da/ presented in noise after undergoing LACE auditory training. The F0 and the second harmonic have greater amplitudes in the postcondition when calculated over the transition (20–60 ms) (b) and the steady state (60–170 ms) (a). Modified from [66].
5. The cABR and Hearing Aid Fitting
Any clinician who has experience with fitting hearing aids has encountered the patient who continues to report hearing difficulties, no matter which particular hearing aid or algorithm is tried. Although we have not yet obtained empirical evidence on the role of the cABR in the hearing aid fitting, we suggest that implementation of the cABR may enhance hearing aid fittings, especially in these difficult-to-fit cases. The clinician might be guided in the selection of hearing aid algorithms by knowledge of how well the brainstem encodes temporal and spectral information. For example, an individual with impaired subcortical timing may benefit from compression parameters that change slowly in response to environmental changes.
We envision incorporating the cABR into verification of hearing aid performance. Cortical evoked potentials have been used to verify auditory system development after hearing aid or cochlear implant fitting in children [87–89]. In adults, however, no difference is noted in the cortical response between unaided and aided conditions, indicating that the cortical response may reflect signal-to-noise ratio rather than increased gain from amplification [90]. Therefore, cortical potentials may have limited utility for making direct comparisons between unaided and aided conditions in adults. We recently recorded the cABR in sound field, comparing aided and unaided conditions as well as different algorithms within the aided condition. There was a marked difference in the amplitude of the waveform between the aided and unaided conditions. Stimulus-to-response correlations demonstrated that certain hearing aid algorithms represented the stimulus better than others (Figure 6). These preliminary data demonstrate the feasibility of this approach. Importantly, they also demonstrate meaningful differences that are easily observed in an individual.
Responses were obtained to the stimulus /da/ presented at 80 dB SPL in sound field in aided (blue) versus unaided (black) conditions ((a) and (c)) and different settings in the same hearing aid ((b) and (d)). Responses show greater RMS and F0 amplitudes in aided versus unaided conditions and for setting 1 versus setting 2.
6. Conclusions
With improvements in digital hearing aid technology, we are able to have greater expectations for hearing aid performance than ever before, even in noisy situations [91]. These improvements, however, do not address the problems we continue to encounter in challenging hearing aid fittings that leave us at a loss for solutions. The cABR provides an opportunity to evaluate and manage an often neglected part of hearing—the central auditory system—as well as the biological processing of key elements of sound. We envision future uses of the cABR to include assessment of central auditory function, prediction of treatment or hearing aid benefit, monitoring treatment or hearing aid outcomes, and assisting in hearing aid fitting. Because the cABR reflects both sensory and cognitive processes, we can begin to move beyond treating the ear to treating the person with a hearing loss.
Acknowledgments
The authors thank Sarah Drehobl and Travis White-Schwoch for their helpful comments on the paper. This work is supported by the NIH (R01 DC010016) and the Knowles Hearing Center.
References

[1] Gordon-Salant S, Fitzgibbons PJ, Friedman SA. Recognition of time-compressed and natural speech with selective temporal enhancements by young and elderly listeners. 2007;50(5):1181–1193. doi:10.1044/1092-4388(2007/082).
[2] Caspary DM, Milbrandt JC, Helfert RH. Central auditory aging: GABA changes in the inferior colliculus. 1995;30(3-4):349–360. doi:10.1016/0531-5565(94)00052-5.
[3] Tremblay KL, Piskosz M, Souza P. Effects of age and age-related hearing loss on the neural representation of speech cues. 2003;114(7):1332–1343. doi:10.1016/S1388-2457(03)00114-7.
[4] Harris KC, Eckert MA, Ahlstrom JB, Dubno JR. Age-related differences in gap detection: effects of task difficulty and cognitive ability. 2010;264(1-2):21–29. doi:10.1016/j.heares.2009.09.017.
[5] Walton JP. Timing is everything: temporal processing deficits in the aged auditory brainstem. 2010;264(1-2):63–69. doi:10.1016/j.heares.2010.03.002.
[6] Pichora-Fuller MK, Schneider BA, MacDonald E, Pass HE, Brown S. Temporal jitter disrupts speech intelligibility: a simulation of auditory aging. 2007;223(1-2):114–121. doi:10.1016/j.heares.2006.10.009.
[7] Shinn-Cunningham BG, Best V. Selective attention in normal and impaired hearing. 2008;12(4):283–299. doi:10.1177/1084713808325306.
[8] Pichora-Fuller MK. Cognitive aging and auditory information processing. 2003;42(Suppl 2):26–32.
[9] Hall J. 2007. Boston, Mass, USA: Allyn & Bacon.
[10] Greenberg S. 1980. Los Angeles, Calif, USA: Phonetics Laboratory, Department of Linguistics, UCLA.
[11] Greenberg S, Marsh JT, Brown WS, Smith JC. Neural temporal coding of low pitch. I. Human frequency-following responses to complex tones. 1987;25(2-3):91–114.
[12] Kraus N. Listening in on the listening brain. 2011;64(6):40–45.
[13] Skoe E, Kraus N. Auditory brain stem response to complex sounds: a tutorial. 2010;31(3):302–324. doi:10.1097/AUD.0b013e3181cdb272.
[14] Chandrasekaran B, Kraus N. The scalp-recorded brainstem response to speech: neural origins and plasticity. 2010;47(2):236–246. doi:10.1111/j.1469-8986.2009.00928.x.
[15] Song JH, Banai K, Russo NM, Kraus N. On the relationship between speech- and nonspeech-evoked auditory brainstem responses. 2006;11(4):233–241. doi:10.1159/000093058.
[16] Miller GA, Nicely PE. An analysis of perceptual confusions among some English consonants. 1955;27(2):338–352.
[17] Anderson S, Skoe E, Chandrasekaran B, Kraus N. Neural timing is linked to speech perception in noise. 2010;30(14):4922–4926. doi:10.1523/JNEUROSCI.0107-10.2010.
[18] Gorga M, Abbas P, Worthington D, Jacobsen J. Stimulus calibration in ABR measurements. 1985. San Diego, Calif, USA: College Hill Press; 49–62.
[19] Campbell T, Kerlin JR, Bishop CW, Miller LM. Methods to eliminate stimulus transduction artifact from insert earphones during electroencephalography. 2012;33(1):144–150. doi:10.1097/AUD.0b013e3182280353.
[20] Aiken SJ, Picton TW. Envelope and spectral frequency-following responses to vowel sounds. 2008;245(1-2):35–47. doi:10.1016/j.heares.2008.08.004.
[21] Hornickel J, Anderson S, Skoe E, Yi HG, Kraus N. Subcortical representation of speech fine structure relates to reading ability. 2012;23(1):6–9.
[22] Anderson S, Skoe E, Chandrasekaran B, Zecker S, Kraus N. Brainstem correlates of speech-in-noise perception in children. 2010;270(1-2):151–157. doi:10.1016/j.heares.2010.08.001.
[23] Galbraith GC, Arbagey PW, Branski R, Comerci N, Rector PM. Intelligible speech encoded in the human brain stem frequency-following response. 1995;6(17):2363–2367.
[24] Krishnan A, Xu Y, Gandour J, Cariani P. Encoding of pitch in the human brainstem is sensitive to language experience. 2005;25(1):161–168. doi:10.1016/j.cogbrainres.2005.05.004.
[25] Russo N, Nicol T, Musacchia G, Kraus N. Brainstem responses to speech syllables. 2004;115(9):2021–2030. doi:10.1016/j.clinph.2004.04.003.
[26] Hornickel J, Skoe E, Nicol T, Zecker S, Kraus N. Subcortical differentiation of stop consonants relates to reading and speech-in-noise perception. 2009;106(31):13022–13027. doi:10.1073/pnas.0901123106.
[27] Skoe E, Nicol T, Kraus N. Cross-phaseogram: objective neural index of speech sound differentiation. 2011;196(2):308–317. doi:10.1016/j.jneumeth.2011.01.020.
[28] Parbery-Clark A, Tierney A, Strait DL, Kraus N. Musicians have fine-tuned neural discrimination of speech syllables. 2012;219(2):111–119.
[29] Song JH, Skoe E, Wong PCM, Kraus N. Plasticity in the adult human auditory brainstem following short-term linguistic training. 2008;20(10):1892–1902. doi:10.1162/jocn.2008.20131.
[30] Russo NM, Skoe E, Trommer B, Nicol T, Zecker S, Bradlow A, Kraus N. Deficient brainstem encoding of pitch in children with Autism Spectrum Disorders. 2008;119(8):1720–1731. doi:10.1016/j.clinph.2008.01.108.
[31] Wong PCM, Skoe E, Russo NM, Dees T, Kraus N. Musical experience shapes human brainstem encoding of linguistic pitch patterns. 2007;10(4):420–422. doi:10.1038/nn1872.
[32] Rance G. Auditory neuropathy/dys-synchrony and its perceptual consequences. 2005;9(1):1–43. doi:10.1177/108471380500900102.
[33] Starr A, Picton TW, Sininger Y, Hood LJ, Berlin CI. Auditory neuropathy. 1996;119(3):741–753. doi:10.1093/brain/119.3.741.
[34] Kraus N, Bradlow AR, Cheatham M
A.Consequences of neural asynchrony: a case of of auditory neuropathy2000113345FellJ.Cognitive neurophysiology: beyond averaging2007374106910722-s2.0-3454853673510.1016/j.neuroimage.2007.07.019AndersonS.Parbery-ClarkA.White-SchwochT.KrausN.Aging affects neural precision of speech encoding201232411415614164Tallon-BaudryC.BertrandO.DelpuechC.PernierJ.Stimulus specificity of phase-locked and non-phase-locked 40 Hz visual responses in human19961613424042492-s2.0-0029945198AndersonS.Parbery-ClarkA.YiH. G.KrausN.A neural basis of speech-in-noise perception in older adults20113267507572-s2.0-7995986831010.1097/AUD.0b013e31822229d3SongJ. H.NicolT.KrausN.Test-retest reliability of the speech-evoked auditory brainstem response201112223463552-s2.0-7865097604210.1016/j.clinph.2010.07.009HornickelJ.KnowlesE.KrausN.Test-retest consistency of speech-evoked auditory brainstem responses in typically-developing children20122841-25258BanaiK.HornickelJ.SkoeE.NicolT.ZeckerS.KrausN.Reading and subcortical auditory function20091911269927072-s2.0-6564911563210.1093/cercor/bhp024WibleB.NicolT.KrausN.Atypical brainstem representation of onset and formant structure of speech sounds in children with language-based learning problems20046732993172-s2.0-384311969210.1016/j.biopsycho.2004.02.002AndersonS.Parbery-ClarkA.KrausN.Auditory brainstem response to complex sounds predicts self-reported speech-in-noise performanceJournal of Speech, Language, and Hearing Research. In pressBasuM.KrishnanA.Weber-FoxC.Brainstem correlates of temporal auditory processing in children with specific language impairment201013177912-s2.0-7224909753210.1111/j.1467-7687.2009.00849.xKillionM.NiquetteP.What can the pure-tone audiogram tell us about a patient’s SNR loss?20005334653SouzaP. E.BoikeK. T.WitherellK.TremblayK.Prediction of speech recognition from audibility in older listeners with hearing loss: effects of age, amplification, and background noise200718154652-s2.0-3384649313510.3766/jaaa.18.1.5HargusS. 
E.Gordon-SalantS.Accuracy of speech intelligibility index predictions for noise-masked young listeners with normal hearing and for elderly listeners with hearing impairment19953812342432-s2.0-0028888523SchofieldB. R.Projections to the inferior colliculus from layer VI cells of auditory cortex200915912462582-s2.0-6064910633110.1016/j.neuroscience.2008.11.013MuldersW. H. A. M.SeluakumaranK.RobertsonD.Efferent pathways modulate hyperactivity in inferior colliculus20103028957895872-s2.0-7795473441110.1523/JNEUROSCI.2289-10.2010OxenhamA. J.Pitch perception and auditory stream segregation: implications for hearing loss and cochlear implants20081243163312-s2.0-5624913563110.1177/1084713808325881RugglesD.BharadwajH.Shinn-CunninghamB. G.Normal hearing is not enough to guarantee robust encoding of suprathreshold features important in everyday communication2011108371551615521ShammaS. A.Hearing impairments hidden in normal listeners2011108391613916140SongJ. H.SkoeE.BanaiK.KrausN.Perception of speech in noise: neural correlates2011239226822792-s2.0-7996014810310.1162/jocn.2010.21556CruickshanksK. J.WileyT. L.TweedT. S.KleinB. E. K.KleinR.Mares-PerlmanJ. A.NondahlD. M.Prevalence of hearing loss in older adults in Beaver dam, Wisconsin. The epidemiology of hearing loss study199814898798862-s2.0-0032211435Gordon-SalantS.FitzgibbonsP. J.Temporal factors and speech recognition performance in young and elderly listeners1993366127612852-s2.0-0027435023DubnoJ. R.DirksD. D.MorganD. E.Effects of age and mild hearing loss on speech recognition in noise198476187962-s2.0-0021458571KimS.FrisinaR. D.MapesF. M.HickmanE. D.FrisinaD. R.Effect of age on binaural speech intelligibility in normal hearing adults20064865915972-s2.0-3364624276610.1016/j.specom.2005.09.004LeeJ. H.HumesL. E.Effect of fundamental-frequency and sentence-onset differences on speech-identification performance of young and older adults in a competing-talker background2012132317001717Vander WerffK. R.BurnsK. 
S.Brain stem responses to speech in younger and older adults20113221681802-s2.0-7995231743410.1097/AUD.0b013e3181f534b5Parbery-ClarkA.AndersonS.HittnerE.KrausN.Musical experience offsets age-related delays in neural timing20123371483.e11483.e410.1016/j.neurobiolaging. 2011.12.015HelferK. S.VargoM.Speech recognition and temporal processing in middle-aged women20092042642712-s2.0-7045021729910.3766/jaaa.20.4.6GatehouseS.NobleW.The speech, spatial and qualities of hearing scale (SSQ)200443285992-s2.0-154231604510.1080/14992020400050014BajoV. M.NodalF. R.MooreD. R.KingA. J.The descending corticocollicular pathway mediates learning-induced auditory plasticity20101322532602-s2.0-7554909028310.1038/nn.2466SugaN.MaX.Multiparametric corticofugal modulation and plasticity in the auditory system20034107837942-s2.0-014224636710.1038/nrn1222KrizmanJ.MarianV.ShookA.SkoeE.KrausN.Subcortical encoding of sound is enhanced in bilinguals and relates to executive function advantages20121092078777881SongJ. H.SkoeE.BanaiK.KrausN.Training to improve hearing speech in noise: biological mechanisms20122251180119010.1093/cercor/bhr196SkoeE.KrausN.A little goes a long way: how the adult brain is shaped by musical training in childhood201232341150711510Parbery-ClarkA.SkoeE.KrausN.Musical experience limits the degradative effects of background noise on the neural processing of sound2009294514100141072-s2.0-7044964055310.1523/JNEUROSCI.3256-09.2009StraitD. L.KrausN.SkoeE.AshleyR.Musical experience and neural efficiency—effects of training on subcortical processing of vocal expressions of emotion20092936616682-s2.0-5914909730110.1111/j.1460-9568.2009.06617.xMusacchiaG.SamsM.SkoeE.KrausN.Musicians have enhanced subcortical auditory and audiovisual processing of speech and music20071044015894158982-s2.0-3564894585710.1073/pnas.0701498104LeeK. 
M.SkoeE.KrausN.AshleyR.Selective subcortical enhancement of musical intervals in musicians20092918583258402-s2.0-6594910481810.1523/JNEUROSCI.6133-08.2009BidelmanG. M.GandourJ. T.KrishnanA.Cross-domain effects of music and language experience on the representation of pitch in the human auditory brainstem20112324254342-s2.0-7865013393210.1162/jocn.2009.21362BidelmanG. M.KrishnanA.Effects of reverberation on brainstem representation of speech in musicians and non-musicians201013551121252-s2.0-7795661887610.1016/j.brainres.2010.07.100PatelA. D.Why would musical training benefit the neural encoding of speech? The OPERA hypothesis20112, article 142KrausN.ChandrasekaranB.Music training for the development of auditory skills20101185996052-s2.0-7795486742310.1038/nrn2882BurkardR. F.SimsD.A comparison of the effects of broadband masking noise on the auditory brainstem response in young and older adults200211113222-s2.0-0036624761Parbery-ClarkA.SkoeE.LamC.KrausN.Musician enhancement for speech-in-noise20093066536612-s2.0-7044946353310.1097/AUD.0b013e3181b412e9AndersonS.Parbery-ClarkA.White-SchwochT.KrausN.Sensory-cognitive interactions predict speech-in-noise perception: a structural equation modeling approachProceedings of the Cognitive Neuroscience Society Annual Meeting2012Chicago, Ill, USACarcagnoS.PlackC. J.Subcortical plasticity following perceptual learning in a pitch discrimination task2011121891002-s2.0-7955159460610.1007/s10162-010-0236-1de BoerJ.ThorntonA. R. D.Neural correlates of perceptual learning in the auditory brainstem: efferent activity predicts and reflects improvement at a speech-in-noise discrimination task20082819492949372-s2.0-4494910537410.1523/JNEUROSCI.0902-08.2008KillionM. C.NiquetteP. A.GudmundsenG. I.RevitL. J.BanerjeeS.Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners2004116423952405NilssonM.SoliS. D.SullivanJ. 
A.Development of the hearing in noise test for the measurement of speech reception thresholds in quiet and in noise1994952108510992-s2.0-0028012490NewmanC. W.WeinsteinB. E.JacobsonG. P.HugG. A.Amplification and aural rehabilitation. Test-retest reliability of the hearing handicap inventory for adults19911253553572-s2.0-0026015184DillonH.JamesA.GinisJ.Client Oriented Scale of Improvement (COSI) and its relationship to several other measures of benefit and satisfaction provided by hearing aids19978127432-s2.0-0031068001SweetowR. W.SabesJ. H.The need for and development of an adaptive listening and communication enchancement (LACE) program20061785385582-s2.0-3374950280410.3766/jaaa.17.8.2AndersonS.White-SchwochT.Parbery-ClarkA.KrausN.Reversal of age-related neural timing delays with trainingProceedings of the National Academy of Sciences. In pressSharmaA.CardonG.HenionK.RolandP.Cortical maturation and behavioral outcomes in children with auditory neuropathy spectrum disorder2011502981062-s2.0-7925162387410.3109/14992027.2010.542492SharmaA.NashA. A.DormanM.Cortical development, plasticity and re-organization in children with cochlear implants20094242722792-s2.0-6734916400410.1016/j.jcomdis.2009.03.003PearceW.GoldingM.DillonH.Cortical auditory evoked potentials in the assessment of auditory neuropathy: two case studies20071853803902-s2.0-3454774838210.3766/jaaa.18.5.3BillingsC. J.TremblayK. L.MillerC. W.Aided cortical auditory evoked potentials in response to changes in hearing aid gain20115074594672-s2.0-7995877522610.3109/14992027.2011.568011KochkinS.MarkeTrak VIII Mini-BTEs tap new market, users more satisfied201164317182-s2.0-7995265623010.1097/01.HJ.0000395478.70959.b1