Sunday, 25 August 2019

Clinical Effectiveness of an At-Home Auditory Training Program: A Randomized Controlled Trial
Objectives: To investigate the effectiveness of an at-home frequent-word auditory training procedure for use with older adults with impaired hearing wearing their own hearing aids. Design: Prospective, double-blind, placebo-controlled randomized trial with three parallel branches: an intervention group who received the at-home auditory training; an active control group who listened to audiobooks using a similar platform at home (placebo intervention); and a passive control group who wore hearing aids and returned for outcomes, but received no intervention. Outcome measures were obtained after a 5-week period. A mixed research design was used with a between-subjects factor of group and a repeated-measures factor of time (pre- and post-treatment) to evaluate the effects of the at-home auditory training program. The intervention was completed in participants’ own homes; baseline and outcome measures were assessed at a university research laboratory. The participants were adults, aged 54 to 80 years, with mild-to-moderate hearing loss. Of the 51 identified eligible participants, 45 enrolled as a volunteer sample and 43 of these completed the study. The intervention group completed the frequent-word auditory training regimen at home over a period of 5 weeks, the active control group listened to audiobooks (placebo intervention), and the passive control group completed no intervention. The primary outcome measure was Connected Speech Test benefit; the secondary outcome measure was a 66-item self-report profile of hearing aid performance. Results: Participants who received the at-home training intervention demonstrated significant improvements in aided recognition of trained materials, but no generalization of these benefits to nontrained materials was seen. This was despite reasonably good compliance with the at-home training regimen and careful verification of hearing aid function throughout the trial. Based on follow-up post-trial evaluation, the benefits observed for trained materials in the intervention group were sustained for at least 8.5 months. No improvement was seen on supplemental outcome measures of hearing aid satisfaction, hearing handicap, or tolerance of background noise while listening to speech. Conclusions: The at-home auditory training procedure using frequently occurring words was effective for the trained materials used in the procedure. No generalization was seen to nontrained materials or to perceived benefit from hearing aids.
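The group-by-time analysis described in this design corresponds to a standard mixed ANOVA. As a minimal sketch, assuming the pingouin library and hypothetical file and column names (the trial data are not available):

```python
# Mixed ANOVA sketch: between-subjects factor "group" (training /
# active control / passive control), within-subjects factor "time"
# (pre vs. post). File and column names are hypothetical.
import pandas as pd
import pingouin as pg

df = pd.read_csv("cst_benefit.csv")  # one row per subject per time point
aov = pg.mixed_anova(data=df, dv="cst_score", within="time",
                     subject="subject_id", between="group")
print(aov.round(3))
```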
Correlates of Hearing Aid Use in UK Adults: Self-Reported Hearing Difficulties, Social Participation, Living Situation, Health, and Demographics
Objectives: Hearing impairment is ranked fifth globally for years lived with disability, yet hearing aid use is low among individuals with a hearing impairment. Identifying correlates of hearing aid use would be helpful in developing interventions to promote use. To date, however, no studies have investigated a wide range of variables, and this has limited intervention development. The aim of the present study was to identify correlates of hearing aid use in adults in the United Kingdom with a hearing impairment. To address limitations in previous studies, we used a cross-sectional analysis to model a wide range of potential correlates simultaneously to provide better evidence to aid intervention development. Design: The research was conducted using the UK Biobank Resource. A cross-sectional analysis of hearing aid use was conducted on 18,730 participants aged 40 to 69 years old with poor hearing, based on performance on the Digit Triplet test. Results: Nine percent of adults with poor hearing in the cross-sectional sample reported using a hearing aid. The strongest correlate of hearing aid use was self-reported hearing difficulties (odds ratio [OR] = 110.69 [95% confidence interval {CI} = 65.12 to 188.16]). Individuals who were older were more likely to use a hearing aid: for each additional year of age, individuals were 5% more likely to use a hearing aid (OR = 1.05 [95% CI = 1.04 to 1.06]). People with tinnitus (OR = 1.43 [95% CI = 1.26 to 1.63]) and people with a chronic illness (OR = 1.97 [95% CI = 1.71 to 2.28]) were more likely to use a hearing aid. Those who reported an ethnic minority background (OR = 0.53 [95% CI = 0.39 to 0.72]) and those who lived alone (OR = 0.80 [95% CI = 0.68 to 0.94]) were less likely to use a hearing aid. Conclusions: Interventions to promote hearing aid use need to focus on addressing reasons for the perception of hearing difficulties and how to promote hearing aid use. Interventions to promote hearing aid use may need to target demographic groups that are particularly unlikely to use hearing aids, including younger adults, those who live alone, and those from ethnic minority backgrounds.
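To make the reported age effect concrete: a per-year odds ratio compounds multiplicatively across an age gap, so 5% higher odds per year amounts to roughly 63% higher odds per decade. A small worked illustration (the OR is from the abstract; the age gaps are arbitrary):

```python
# Compound a per-year odds ratio over illustrative age gaps.
per_year_or = 1.05  # from the abstract: 5% more likely per year of age
for years in (5, 10, 20):
    print(f"{years:2d}-year gap -> cumulative odds ratio {per_year_or ** years:.2f}")
# prints ~1.28, ~1.63, ~2.65
```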
Effects of Age and Hearing Loss on the Recognition of Emotions in Speech
Objectives: Emotional communication is a cornerstone of social cognition and informs human interaction. Previous studies have shown deficits in facial and vocal emotion recognition in older adults, particularly for negative emotions. However, few studies have examined the combined effects of aging and hearing loss on vocal emotion recognition by adults. The objective of this study was to compare vocal emotion recognition in adults with hearing loss relative to age-matched peers with normal hearing. We hypothesized that age would play a role in emotion recognition and that listeners with hearing loss would show deficits across the age range. Design: Thirty-two adults (22 to 74 years of age) with mild to severe symmetrical sensorineural hearing loss, amplified with bilateral hearing aids, and 30 adults (21 to 75 years of age) with normal hearing participated in the study. Stimuli consisted of sentences spoken by 2 talkers (1 male, 1 female) in 5 emotions (angry, happy, neutral, sad, and scared) in an adult-directed manner. The task involved a single-interval, five-alternative forced-choice paradigm, in which participants listened to individual sentences and indicated which of the five emotions was targeted in each sentence. Reaction time was recorded as an indirect measure of cognitive load. Results: Results showed significant effects of age: older listeners had reduced accuracy, increased reaction times, and reduced d’ values. Normal-hearing listeners showed an Age by Talker interaction, whereby older listeners had more difficulty identifying male vocal emotion. Listeners with hearing loss showed reduced accuracy, increased reaction times, and lower d’ values compared with age-matched normal-hearing listeners. Within the group with hearing loss, age and talker effects were significant, and low-frequency pure-tone averages showed a marginally significant effect. Contrary to other studies, once hearing thresholds were taken into account, no effects of listener sex were observed, nor were there effects of individual emotions on accuracy. However, reaction times and d’ values showed significant differences between individual emotions. Conclusions: The results of this study confirm existing findings in the literature that older adults show significant deficits in vocal emotion recognition compared with their normally hearing peers, and that among listeners with normal hearing, age-related changes in hearing do not predict this age-related deficit. The present results also add to the literature by showing that hearing impairment contributes additional deficits in vocal emotion recognition, separate from those related to age. These effects of age and hearing loss appear to be quite robust, being evident in reduced accuracy scores and d’ measures, as well as in reaction time measures.
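The d’ values mentioned above come from signal detection theory, where d’ is the separation between the z-transformed hit and false-alarm rates. A minimal sketch of the basic yes/no form (the study’s five-alternative forced-choice design requires a more elaborate multi-alternative model, so this is illustrative only):

```python
# Basic signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
# Rates below are invented for illustration.
from scipy.stats import norm

hit_rate, fa_rate = 0.80, 0.15
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(f"d' = {d_prime:.2f}")  # ~1.88
```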
Measures of Listening Effort Are Multidimensional
Objectives: Listening effort can be defined as the cognitive resources required to perform a listening task. The literature on listening effort is as confusing as it is voluminous: measures of listening effort rarely correlate with each other and sometimes produce contradictory findings. Here, we directly compared simultaneously recorded multimodal measures of listening effort. After establishing the reliability of the measures, we investigated validity by quantifying correlations between measures and then grouping related measures through factor analysis. Design: One hundred and sixteen participants with audiometric thresholds ranging from normal to severe hearing loss took part in the study (age range: 55 to 85 years, 50.3% male). We simultaneously measured pupil size, electroencephalographic alpha power, skin conductance, and self-reported listening effort. One self-report measure of fatigue was also included. The signal to noise ratio (SNR) was adjusted to 71% criterion performance using sequences of 3 digits. The main listening task involved correct recall of a random digit from a sequence of six presented at an SNR where performance was around 82 to 93%. Test–retest reliability of the measures was established by retesting 30 participants 7 days after the initial session. Results: With the exception of skin conductance and the self-report measure of fatigue, intraclass correlation coefficients (ICCs) revealed good test–retest reliability (minimum ICC: 0.71). Weak or nonsignificant correlations were identified between measures. Factor analysis, using only the reliable measures, revealed four underlying dimensions: factor 1 included SNR, hearing level, baseline alpha power, and performance accuracy; factor 2 included pupillometry; factor 3 included alpha power (during speech presentation and during retention); factor 4 included self-reported listening effort and baseline alpha power. Conclusions: The good ICCs suggest that poor test–retest reliability is not the reason for the lack of correlation between measures. We have demonstrated that measures traditionally used as indicators of listening effort tap into multiple underlying dimensions. We therefore propose that there is no “gold standard” measure of listening effort and that different measures of listening effort should not be used interchangeably. When choosing method(s) to measure listening effort, the nature of the task and the aspects of increased listening demands that are of interest should be taken into account. The findings of this study provide a framework for understanding and interpreting listening effort measures.
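The factor analysis step can be reproduced in outline: standardize the reliable measures, fit a four-factor model, and read the loadings. A minimal sketch with scikit-learn, using hypothetical file and column names as stand-ins for the measures listed above:

```python
# Exploratory factor analysis over multimodal listening-effort measures.
# File and column names are hypothetical stand-ins.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

cols = ["snr", "hearing_level", "baseline_alpha", "accuracy", "pupil_size",
        "alpha_speech", "alpha_retention", "self_report_effort"]
df = pd.read_csv("effort_measures.csv")  # one row per participant

X = StandardScaler().fit_transform(df[cols])
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0).fit(X)
loadings = pd.DataFrame(fa.components_.T, index=cols,
                        columns=[f"factor{i + 1}" for i in range(4)])
print(loadings.round(2))  # which measures load on which dimension
```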
Effects of Reverberation on the Relation Between Compression Speed and Working Memory for Speech-in-Noise Perception
Objectives: Previous research has suggested that when listening in modulated noise, individuals benefit from different wide dynamic range compression (WDRC) speeds depending on their working memory ability. Reverberation reduces the modulation depth of signals and may therefore affect the relation between WDRC speed and working memory. The purpose of this study was to examine this relation across a range of reverberant conditions. Design: Twenty-eight older listeners with mild-to-moderate sensorineural hearing impairment participated in the study. Individual working memory was measured using a Reading Span test. Sentences were combined with noise at two signal-to-noise ratios (2 and 5 dB SNR), and reverberation was simulated at a range of reverberation times (0.00, 0.75, 1.50, and 3.00 sec). Speech intelligibility was measured for the sentences processed with simulated fast-acting and slow-acting WDRC. Results: There was a significant relation between WDRC speed and working memory with minimal or no reverberation. Consistent with previous research, individuals with high working memory had higher speech intelligibility with fast-acting WDRC, and individuals with low working memory performed better with slow-acting WDRC. However, at longer reverberation times, there was no relation between WDRC speed and working memory. Conclusions: Consistent with previous studies, the results suggest that there is an advantage to tailoring WDRC speed to an individual’s working memory under anechoic conditions. However, the present results further suggest that there may be no such benefit in reverberant listening environments, due to the reduction in signal modulation.
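Combining sentences with noise at a fixed SNR, as in this design, amounts to scaling the noise so the speech-to-noise RMS ratio hits the target. A minimal NumPy sketch (waveforms are assumed to be pre-loaded float mono arrays):

```python
# Scale noise so that 20*log10(rms(speech)/rms(noise)) equals the target SNR.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Return speech plus noise scaled to the requested SNR in dB."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    noise = noise[:len(speech)]  # assume noise is at least as long as speech
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return speech + gain * noise

# The two conditions from the abstract:
# mixed_2db = mix_at_snr(sentence, noise, 2.0)
# mixed_5db = mix_at_snr(sentence, noise, 5.0)
```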
Auditory Evoked Responses in Older Adults With Normal Hearing, Untreated, and Treated Age-Related Hearing Loss
Objectives: The goal of this study was to identify the effects of auditory deprivation (age-related hearing loss) and auditory stimulation (history of hearing aid use) on the neural registration of sound across two stimulus presentation conditions: (1) equal sound pressure level and (2) equal sensation level. Design: We used a between-groups design, involving three groups of 14 older adults (n = 42; 62 to 84 years): (1) clinically defined normal hearing (≤25 dB from 250 to 8000 Hz, bilaterally), (2) bilateral mild–moderate/moderately severe sensorineural hearing loss who have never used hearing aids, and (3) bilateral mild–moderate/moderately severe sensorineural hearing loss who have worn bilateral hearing aids for at least the past 2 years. Results: There were significant delays in the auditory P1-N1-P2 complex in older adults with hearing loss compared with their normal hearing peers when using equal sound pressure levels for all participants. However, when the degree and configuration of hearing loss were accounted for through the presentation of equal sensation level stimuli, no latency delays were observed. These results suggest that stimulus audibility modulates P1-N1-P2 morphology and should be controlled for when defining deprivation and stimulus-related neuroplasticity in people with hearing loss. Moreover, a history of auditory stimulation, in the form of hearing aid use, does not appreciably alter the neural registration of unaided auditory evoked brain activity when quantified by the P1-N1-P2. Conclusions: When comparing auditory cortical responses in older adults with and without hearing loss, stimulus audibility, and not hearing loss–related neurophysiological changes, results in delayed response latency for those with age-related hearing loss. Future studies should carefully consider stimulus presentation levels when drawing conclusions about deprivation- and stimulation-related neuroplasticity. Additionally, auditory stimulation, in the form of a history of hearing aid use, does not significantly affect the neural registration of sound when quantified using the P1-N1-P2–evoked response.
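The two presentation conditions differ in a simple way: equal SPL gives every listener the same physical level, while equal SL sets the level a fixed amount above each listener's own threshold. A small worked illustration (threshold values are invented, and dB references are treated as comparable for simplicity):

```python
# Equal sound pressure level vs. equal sensation level.
# SL = presentation level - threshold, so an equal-SL condition
# presents at threshold + target SL. Thresholds are illustrative.
thresholds = {"normal-hearing listener": 10, "hearing-loss listener": 45}

equal_spl = 70   # same physical level (dB) for everyone
target_sl = 30   # same level above threshold (dB) for everyone

for listener, thr in thresholds.items():
    print(f"{listener}: {equal_spl - thr} dB SL at {equal_spl} dB SPL; "
          f"needs {thr + target_sl} dB SPL for {target_sl} dB SL")
```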
Masked Sentence Recognition in Children, Young Adults, and Older Adults: Age-Dependent Effects of Semantic Context and Masker Type
Objectives: Masked speech recognition in normal-hearing listeners depends in part on masker type and semantic context of the target. Children and older adults are more susceptible to masking than young adults, particularly when the masker is speech. Semantic context has been shown to facilitate noise-masked sentence recognition in all age groups, but it is not known whether age affects a listener’s ability to use context with a speech masker. The purpose of the present study was to evaluate the effect of masker type and semantic context of the target as a function of listener age. Design: Listeners were children (5 to 16 years), young adults (19 to 30 years), and older adults (67 to 81 years), all with normal or near-normal hearing. Maskers were either speech-shaped noise or two-talker speech, and targets were either semantically correct (high context) sentences or semantically anomalous (low context) sentences. Results: As predicted, speech reception thresholds were lower for young adults than either children or older adults. Age effects were larger for the two-talker masker than the speech-shaped noise masker, and the effect of masker type was larger in children than older adults. Performance tended to be better for targets with high than low semantic context, but this benefit depended on age group and masker type. In contrast to adults, children benefitted less from context in the two-talker speech masker than the speech-shaped noise masker. Context effects were small compared with differences across age and masker type. Conclusions: Different effects of masker type and target context are observed at different points across the lifespan. While the two-talker masker is particularly challenging for children and older adults, the speech masker may limit the use of semantic context in children but not adults.
Effects of Phantom Electrode Stimulation on Vocal Production in Cochlear Implant Users
Objectives: Cochlear implant (CI) users suffer from a range of speech impairments, such as stuttering and impaired vocal control of pitch and intensity. Though little research has focused on the role of auditory feedback in the speech of CI users, these speech impairments could be due in part to limited access to low-frequency cues inherent in CI-mediated listening. Phantom electrode stimulation (PES) represents a novel application of current steering that extends access to low frequencies for CI recipients; notably, PES transmits frequencies below 300 Hz, whereas participants’ everyday listening programs (Baseline) do not. The objective of this study was to explore the effects of PES on multiple frequency-related characteristics of voice production. Design: Eight postlingually deafened, adult Advanced Bionics CI users underwent a series of vocal production tests including Tone Repetition, Vowel Sound Production, Passage Reading, and Picture Description. Participants completed all of these tests twice: once with PES and once with their everyday listening program (Baseline). An additional test, Automatic Modulation, was included to measure acute effects of PES and was completed only once. This test involved switching between PES and Baseline at specific time intervals in real time as participants read a series of short sentences. Finally, a subjective Vocal Effort measurement was also included. Results: In Tone Repetition, the fundamental frequencies (F0) of tones produced using PES and the sizes of musical intervals produced using PES were significantly more accurate (closer to the target) than with Baseline in specific gender, target tone range, and target tone type testing conditions. In the Vowel Sound Production task, vowel formant profiles produced using PES were closer to those of the general population than those produced using Baseline. The Passage Reading and Picture Description task results suggest that PES reduces measures of pitch variability (F0 standard deviation and range) in natural speech production. No significant differences between PES and Baseline were found in either the Automatic Modulation task or the Vocal Effort task. Conclusions: The findings of this study suggest that use of PES increases the accuracy of pitch matching in repeated sung tones and frequency intervals, possibly due to more accurate F0 representation. The results also suggest that PES partially normalizes the vowel formant profiles of select vowel sounds. PES seems to decrease pitch variability of natural speech and appears to have limited acute effects on natural speech production, though this finding may be due in part to paradigm limitations. On average, subjective ratings of vocal effort were unaffected by the use of PES versus Baseline.
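The pitch-variability measures reported above (F0 standard deviation and range) can be estimated from any recording with an F0 tracker. A minimal sketch, assuming librosa's pYIN implementation and a hypothetical audio file:

```python
# Estimate F0 variability (SD and range) of a speech recording.
# The file name is hypothetical; pYIN is one of several usable trackers.
import numpy as np
import librosa

y, sr = librosa.load("passage_reading.wav", sr=None, mono=True)
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)

f0_voiced = f0[~np.isnan(f0)]  # pYIN marks unvoiced frames with NaN
print(f"F0 SD:    {np.std(f0_voiced):.1f} Hz")
print(f"F0 range: {np.ptp(f0_voiced):.1f} Hz")
```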
Hearing Impairment and Perceived Clarity of Predictable Speech
Objectives: The precision of stimulus-driven information is less critical for comprehension when accurate knowledge-based predictions of the upcoming stimulus can be generated. A recent study in listeners without hearing impairment (HI) has shown that form- and meaning-based predictability independently and cumulatively enhance perceived clarity of degraded speech. In the present study, we investigated whether form- and meaning-based predictability enhanced the perceptual clarity of degraded speech for individuals with moderate to severe sensorineural HI, a group for whom such enhancement may be particularly important. Design: Spoken sentences with high or low semantic coherence were degraded by noise-vocoding and preceded by matching or nonmatching text primes. Matching text primes allowed generation of form-based predictions while semantic coherence allowed generation of meaning-based predictions. Results: The results showed that both form- and meaning-based predictions make degraded speech seem clearer to individuals with HI. The benefit of form-based predictions was seen across levels of speech quality and was greater for individuals with HI in the present study than for individuals without HI in our previous study. However, for individuals with HI, the benefit of meaning-based predictions was only apparent when the speech was slightly degraded. When it was more severely degraded, the benefit of meaning-based predictions was only seen when matching text primes preceded the degraded speech. The benefit in terms of perceptual clarity of meaning-based predictions was positively related to verbal fluency but not working memory performance. Conclusions: Taken together, these results demonstrate that, for individuals with HI, form-based predictability has a robust effect on perceptual clarity that is greater than the effect previously shown for individuals without HI. However, when speech quality is moderately or severely degraded, meaning-based predictability is contingent on form-based predictability. Further, the ability to mobilize the lexicon seems to contribute to the strength of meaning-based predictions. Whereas individuals without HI may be able to devote explicit working memory capacity for storing meaning-based predictions, individuals with HI may already be using all available explicit capacity to process the degraded speech and thus become reliant on explicit skills such as their verbal fluency to generate useful meaning-based predictions.
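Noise-vocoding, the degradation used here, splits speech into frequency bands, extracts each band's amplitude envelope, and uses those envelopes to modulate band-limited noise; fewer bands means stronger degradation. A compact SciPy sketch (band count and edges are illustrative, not the study's exact parameters):

```python
# Minimal noise vocoder: band-pass the speech, take each band's envelope,
# re-impose it on noise limited to the same band, and sum the bands.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, sr, n_bands=6, f_lo=100.0, f_hi=6000.0):
    """x: float mono waveform. Band edges must stay below sr/2."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band = sosfiltfilt(sos, x)
        envelope = np.abs(hilbert(band))      # speech envelope in this band
        carrier = sosfiltfilt(sos, noise)     # noise limited to the same band
        out += envelope * carrier
    return out / np.max(np.abs(out))          # normalize to avoid clipping

# degraded = noise_vocode(speech, sr)  # fewer bands -> more degraded speech
```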
High-Variability Sentence Recognition in Long-Term Cochlear Implant Users: Associations With Rapid Phonological Coding and Executive Functioning
Objectives: The objective of the present study was to determine whether long-term cochlear implant (CI) users would show greater variability in rapid phonological coding skills and greater reliance on slow-effortful compensatory executive functioning (EF) skills than normal-hearing (NH) peers on perceptually challenging high-variability sentence recognition tasks. We tested three hypotheses: first, that CI users would show lower scores on sentence recognition tests involving high speaker and dialect variability than NH controls, even after adjusting for poorer performance by CI users on a conventional low-variability sentence recognition test; second, that variability in fast-automatic rapid phonological coding skills would be more strongly associated with performance on high-variability sentence recognition tasks for CI users than NH peers; and third, that compensatory EF strategies would be more strongly associated with performance on high-variability sentence recognition tasks for CI users than NH peers. Design: Two groups of children, adolescents, and young adults aged 9 to 29 years participated in this cross-sectional study: 49 long-term CI users (≥7 years of device use) and 56 NH controls. All participants were tested on measures of rapid phonological coding (Children’s Test of Nonword Repetition), conventional sentence recognition (Harvard Sentence Recognition Test), and two novel high-variability sentence recognition tests that varied the indexical attributes of speech (Perceptually Robust English Sentence Test Open-set test and Perceptually Robust English Sentence Test Open-set test-Foreign Accented English test). Measures of EF included verbal working memory (WM), spatial WM, controlled cognitive fluency, and inhibition concentration. Results: CI users scored lower than NH peers on both tests of high-variability sentence recognition, even after conventional sentence recognition skills were statistically controlled. Correlations between rapid phonological coding and high-variability sentence recognition scores were stronger for the CI sample than for the NH sample, even after basic sentence perception skills were statistically controlled. Scatterplots revealed different ranges and slopes for the relationship between rapid phonological coding skills and high-variability sentence recognition performance in CI users and NH peers. Although no statistically significant correlations between EF strategies and sentence recognition were found in the CI or NH sample after a conservative Bonferroni-type correction, medium to large effect sizes for correlations between verbal WM and sentence recognition in the CI sample suggest that further investigation of this relationship is needed. Conclusions: These findings provide converging support for neurocognitive models that propose two channels for speech-language processing: a fast-automatic channel that predominates whenever possible, and a compensatory slow-effortful channel that is activated during perceptually challenging speech processing tasks not fully managed by the fast-automatic channel (the ease of language understanding model, the framework for understanding effortful listening, and the auditory neurocognitive model). CI users showed significantly poorer performance on measures of high-variability sentence recognition than NH peers, even after simple sentence recognition was controlled. Nonword repetition scores showed almost no overlap between the CI and NH samples, and correlations between nonword repetition scores and high-variability sentence recognition were consistent with greater reliance on fast-automatic phonological coding for high-variability sentence recognition in the CI sample than in the NH sample. Further investigation of the verbal WM–sentence recognition relationship in CI users is recommended. Assessment of fast-automatic phonological processing and slow-effortful EF skills may provide a better understanding of speech perception outcomes in CI users in the clinical setting.
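The "after ... was statistically controlled" analyses above correspond to partial correlation: residualize the two variables of interest on the control variable, then correlate the residuals. A minimal NumPy/SciPy sketch with hypothetical variable names:

```python
# Partial correlation of x and y controlling for z.
import numpy as np
from scipy.stats import pearsonr

def partial_corr(x, y, z):
    """Correlate the parts of x and y not explained by z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residuals of x ~ z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residuals of y ~ z
    return pearsonr(rx, ry)

# e.g., nonword repetition vs. high-variability sentence recognition,
# controlling for conventional (Harvard) sentence recognition:
# r, p = partial_corr(nonword_rep, prest_scores, harvard_scores)
```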
