Sunday, September 8, 2019

Chronic Conductive Hearing Loss Is Associated With Speech Intelligibility Deficits in Patients With Normal Bone Conduction Thresholds
Objectives: The main objective of this study is to determine whether chronic sound deprivation leads to poorer speech discrimination in humans. Design: We reviewed the audiologic profiles of 240 patients presenting with normal and symmetrical bone conduction thresholds bilaterally, associated with either an acute or chronic unilateral conductive hearing loss of different etiologies. Results: Patients with chronic conductive impairment and a moderate to moderately severe hearing loss had lower speech recognition scores on the side of the pathology when compared with the healthy side. The degree of impairment was significantly correlated with speech recognition performance, particularly in patients with a congenital malformation. Speech recognition scores were not significantly altered when the conductive impairment was acute or mild. Conclusions: This retrospective study shows that chronic conductive hearing loss was associated with speech intelligibility deficits in patients with normal bone conduction thresholds. These results are consistent with the predictions of a recent animal study showing that prolonged, adult-onset conductive hearing loss causes cochlear synaptopathy.
ACKNOWLEDGMENTS: The authors are grateful to William Goedicke and Dr. Barbara Herrmann for their technical help and logistic support. This research was funded by the National Institutes of Health–National Institute on Deafness and Other Communication Disorders P50 DC015857 (Project Principal Investigator: S. F. M.). The authors have no conflicts of interest to disclose. Received July 6, 2018; accepted June 28, 2019. Address for correspondence: Stéphane F. Maison, Eaton-Peabody Laboratories, Massachusetts Eye & Ear, 243 Charles Street, Boston, MA 02114, USA. E-mail: stephane_maison@meei.harvard.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
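The key analysis here is a correlation between the degree of the conductive impairment and the interaural difference in speech recognition. A minimal Python sketch of that kind of analysis follows; the variable names and toy values are illustrative assumptions, not the study's data.

```python
# Hypothetical sketch: correlate the degree of conductive impairment with the
# speech recognition deficit on the affected side. Toy values for illustration.
import numpy as np
from scipy import stats

pta_affected = np.array([42, 55, 60, 38, 48, 65, 50, 58])   # dB HL, affected ear
wrs_diff = np.array([-4, -12, -18, -2, -8, -22, -10, -16])  # WRS affected - healthy, %

r, p = stats.pearsonr(pta_affected, wrs_diff)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```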
Subclinical Auditory Neural Deficits in Patients With Type 1 Diabetes Mellitus
Objectives: Diabetes mellitus (DM) is associated with a variety of sensory complications. Very little attention has been given to auditory neuropathic complications in DM. The aim of this study was to determine whether type 1 DM (T1DM) affects neural coding of the rapid temporal fluctuations of sounds, and how any deficits may impact behavioral performance. Design: Participants were 30 young normal-hearing T1DM patients and 30 age-, sex-, and audiogram-matched healthy controls. Measurements included electrophysiological measures of auditory nerve and brainstem function using the click-evoked auditory brainstem response, and of brainstem neural temporal coding using the sustained frequency-following response (FFR); behavioral tests of temporal coding (interaural phase difference discrimination and the frequency difference limen); tests of speech perception in noise; and self-report measures of auditory disability using the Speech, Spatial and Qualities of Hearing Scale. Results: There were no significant differences between T1DM patients and controls in the auditory brainstem response. However, the T1DM group showed significantly reduced FFRs to both temporal envelope and temporal fine structure. The T1DM group also showed significantly higher interaural phase difference and frequency difference limen thresholds, worse speech-in-noise performance, and lower overall Speech, Spatial and Qualities scores than the control group. Conclusions: These findings suggest that T1DM is associated with degraded neural temporal coding in the brainstem in the absence of an elevation in audiometric threshold, and that the FFR may provide an early indicator of neural damage in T1DM, before any abnormalities can be identified using standard clinical tests. However, the relation between the neural deficits and the behavioral deficits is uncertain. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com).
ACKNOWLEDGMENTS: The authors thank the collaborators at the Manchester Diabetes Centre and the Help DiaBEATes campaign in the Salford NHS Foundation Trust and all of the participants in this research. This work was supported by the Deanship of Scientific Research, College of Applied Medical Sciences Research Center at King Saud University, Riyadh, Saudi Arabia, by the Medical Research Council UK (MR/L003589/1), and by the NIHR Manchester Biomedical Research Centre. Portions of this work were presented as posters at the 38th MidWinter Meeting of the Association for Research in Otolaryngology, Baltimore, MD, February 21–25, 2015, and at the 5th Joint Meeting of the Acoustical Society of America and Acoustical Society of Japan, Honolulu, Hawaii, November 28–December 2, 2016. The authors have no conflicts of interest to disclose. Received November 20, 2018; accepted June 19, 2019. Address for correspondence: Arwa AlJasser, Department of Rehabilitation Sciences, College of Applied Medical Sciences, King Saud University, P.O. Box 10219, Riyadh, 11433, Saudi Arabia. E-mail: aljasser@ksu.edu.sa This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
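FFRs to the temporal envelope and to temporal fine structure are commonly separated by averaging responses to stimuli of opposite polarity: the sum of the two averages emphasizes the envelope-following component, and the difference emphasizes fine structure. Below is a minimal sketch of that standard decomposition; it is an assumption that the study's pipeline works this way, and the exact processing may differ.

```python
# Standard FFR polarity decomposition (a common approach, assumed here):
# sum of opposite-polarity averages -> envelope; difference -> fine structure.
import numpy as np

def ffr_components(resp_pos, resp_neg):
    """resp_pos, resp_neg: (n_trials, n_samples) responses to each stimulus polarity."""
    avg_pos = resp_pos.mean(axis=0)
    avg_neg = resp_neg.mean(axis=0)
    env_ffr = (avg_pos + avg_neg) / 2  # envelope-following component
    tfs_ffr = (avg_pos - avg_neg) / 2  # temporal-fine-structure component
    return env_ffr, tfs_ffr
```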
Music Is More Enjoyable With Two Ears, Even If One of Them Receives a Degraded Signal Provided by a Cochlear Implant
Objectives: Cochlear implants (CIs) restore speech perception in quiet, but they also eliminate or distort many acoustic cues that are important for music enjoyment. Unfortunately, quantifying music enjoyment by CI users has been difficult because comparisons must rely on their recollection of music before they lost their hearing. Here, we aimed to assess music enjoyment in CI users using a readily interpretable reference based on acoustic hearing. The comparison was done by testing “single-sided deafness” (SSD) patients who have normal hearing (NH) in one ear and a CI in the other ear. The study also aimed to assess binaural musical enjoyment, with the reference being the experience of hearing with a single NH ear. Three experiments assessed the effect of adding different kinds of input to the second ear: electrical, vocoded, or unmodified. Design: In experiment 1, music enjoyment in SSD-CI users was investigated using a modified version of the MUSHRA (MUltiple Stimuli with Hidden Reference and Anchor) method. Listeners rated their enjoyment of song segments on a scale of 0 to 200, where 100 represented the enjoyment obtained from a song segment presented to the NH ear, 0 represented a highly degraded version of the same song segment presented to the same ear, and 200 represented enjoyment subjectively rated as twice as good as the 100 reference. Stimuli consisted of acoustic only, electric only, acoustic and electric, as well as other conditions with low-pass-filtered acoustic stimuli. Acoustic stimulation was provided by headphone to the NH ear and electric stimulation was provided by direct audio input to the subject’s speech processor. In experiment 2, the task was repeated using NH listeners who received vocoded stimuli instead of electric stimuli. Experiment 3 tested the effect of adding the same unmodified song segment to the second ear, also in NH listeners. Results: Music presented through the CI only was very unpleasant, with an average rating of 20. Surprisingly, the combination of the unpleasant CI signal in one ear with acoustic stimulation in the other ear was rated more enjoyable (mean = 123) than acoustic stimulation alone. Presentation of the same monaural musical signal to both ears in NH listeners resulted in even greater enhancement of the experience compared with presentation to a single ear (mean = 159). Repeating the experiment with a vocoder in one ear of NH listeners resulted in interference rather than enhancement. Conclusions: Music enjoyment from electric stimulation is extremely poor relative to a readily interpretable NH baseline for CI-SSD listeners. However, the combination of this unenjoyable signal presented through a CI and an unmodified acoustic signal presented to an NH (or near-NH) contralateral ear results in enhanced music enjoyment with respect to the acoustic signal alone. Remarkably, this two-ear enhancement experienced by CI-SSD listeners represents a substantial fraction of the two-ear enhancement seen in NH listeners. This unexpected benefit of electroacoustic auditory stimulation will have to be considered in theoretical accounts of music enjoyment and may facilitate the quest to enhance music enjoyment in CI users. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com).
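In the modified MUSHRA procedure described here, stimuli are presented unlabeled and in random order, with the hidden reference defined as 100 and the degraded anchor as 0 on the 0 to 200 scale. A minimal sketch of one such trial is below; the function names are illustrative assumptions, and all playback and interface code is omitted.

```python
# Hedged sketch of one trial of a MUSHRA-style rating task with a hidden
# reference (should score near 100) and a hidden anchor (should score near 0).
import random

def run_trial(conditions, get_rating):
    """conditions: dict name -> stimulus, including 'reference' and 'anchor'.
    get_rating: callable that plays a stimulus and returns a 0-200 rating."""
    order = list(conditions)
    random.shuffle(order)  # reference and anchor stay unlabeled to the listener
    return {name: get_rating(conditions[name]) for name in order}
```

The hidden reference and anchor provide a built-in validity check: a listener who rates the hidden reference far from 100 or the anchor far from 0 is not using the scale as intended.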
ACKNOWLEDGMENTS: The authors thank Johanna Boyer for multiple conversations that inspired the research questions investigated in this article. The authors are grateful for the time and effort from each of the participants. Griet Mertens provided demographic information for the participants from Antwerp. This research was funded by the National Institute on Deafness and Other Communication Disorders (R01 DC012152 principal investigator: Landsberger; R01 DC03937 principal investigator: Svirsky; R01 DC011329 principal investigators: Svirsky and Neuman), a MED-EL Hearing Solutions grant (principal investigator: Landsberger), a contract from Cochlear Americas (principal investigator: J. Thomas Roland), and a TOPBOF grant (principal investigator: Van de Heyning) from the University of Antwerp. D.M.L., M.A.S., and K.V. designed the experimental protocol. D.M.L. created the stimuli and modified software for use in this experiment. P.v.d.H. provided testing facilities, patient access, and organized IRB approval for data collection in Belgium. K.V., N.S., A.L., and J.N. collected the data. Figures and statistical analysis were generated by D.M.L. The article was primarily written by D.M.L. and M.A.S. All other authors contributed to drafting the article or revising it critically for important intellectual content. All authors provided final approval of the version to be published. The authors have no conflicts of interest to disclose. Received September 26, 2018; accepted June 5, 2019. Address for correspondence: David M. Landsberger, Department of Otolaryngology, New York University School of Medicine, 550 1st Avenue, STE NBV 5E5, New York, NY 10016, USA. E-mail: david.landsberger@nyumc.org Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Age Differences in the Effects of Speaking Rate on Auditory, Visual, and Auditory-Visual Speech Perception
Objectives: This study was designed to examine how speaking rate affects auditory-only, visual-only, and auditory-visual speech perception across the adult lifespan. In addition, the study examined the extent to which unimodal (auditory-only and visual-only) performance predicts auditory-visual performance across a range of speaking rates. The authors hypothesized significant Age × Rate interactions in all three modalities and that unimodal performance would account for a majority of the variance in auditory-visual speech perception for speaking rates that are both slower and faster than normal. Design: Participants (N = 145), ranging in age from 22 to 92 years, were tested in conditions with auditory-only, visual-only, and auditory-visual presentations using a closed-set speech perception test. Five different speaking rates were presented in each modality: an unmodified (normal) rate, two rates that were slower than normal, and two rates that were faster than normal. Signal-to-noise ratios were set individually to produce approximately 30% correct identification in the auditory-only condition, and this signal-to-noise ratio was used in the auditory-only and auditory-visual conditions. Results: Age × Rate interactions were observed for the fastest speaking rates in both the visual-only and auditory-visual conditions. Unimodal performance accounted for at least 60% of the variance in auditory-visual performance for all five speaking rates. Conclusions: The findings demonstrate that the disproportionate difficulty that older adults have with rapid speech for auditory-only presentations can also be observed with visual-only and auditory-visual presentations. Taken together, the present analyses of age and individual differences indicate a generalized age-related decline in the ability to understand speech produced at fast speaking rates. The finding that auditory-visual speech performance was almost entirely predicted by unimodal performance across all five speaking rates has important clinical implications for auditory-visual speech perception and the ability of older adults to use visual speech information to compensate for age-related hearing loss.
ACKNOWLEDGMENTS: This research was supported by a grant from the National Institute on Aging. The authors have no conflicts of interest to disclose. Received January 29, 2019; accepted May 24, 2019. Address for correspondence: Mitchell S. Sommers, Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO 63130, USA. E-mail: msommers@wustl.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
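The variance-accounted-for analysis amounts to regressing auditory-visual scores on the two unimodal scores at each speaking rate. A toy Python sketch follows, with made-up proportions rather than the study's data; the actual model may include additional terms.

```python
# Illustrative regression of auditory-visual (AV) scores on auditory-only (A)
# and visual-only (V) scores at one speaking rate. All values are invented.
import numpy as np

A = np.array([0.31, 0.28, 0.35, 0.22, 0.30])   # proportion correct, A-only
V = np.array([0.15, 0.40, 0.22, 0.10, 0.33])   # proportion correct, V-only
AV = np.array([0.55, 0.70, 0.62, 0.38, 0.66])  # proportion correct, AV

X = np.column_stack([np.ones_like(A), A, V])   # intercept + two unimodal predictors
beta, *_ = np.linalg.lstsq(X, AV, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((AV - pred) ** 2) / np.sum((AV - AV.mean()) ** 2)
print(f"R^2 = {r2:.2f}")  # the abstract reports R^2 >= 0.60 at every rate
```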
Binaural Optimization of Cochlear Implants: Discarding Frequency Content Without Sacrificing Head-Shadow Benefit
Objectives: Single-sided deafness cochlear-implant (SSD-CI) listeners and bilateral cochlear-implant (BI-CI) listeners gain near-normal levels of head-shadow benefit but limited binaural benefits. One possible reason for these limited binaural benefits is that cochlear places of stimulation tend to be mismatched between the ears. SSD-CI and BI-CI patients might benefit from a binaural fitting that reallocates frequencies to reduce interaural place mismatch. However, this approach could reduce monaural speech recognition and head-shadow benefit by excluding low- or high-frequency information from one ear. This study examined how much frequency information can be excluded from a CI signal in the poorer-hearing ear without reducing head-shadow benefits and how these outcomes are influenced by interaural asymmetry in monaural speech recognition. Design: Speech-recognition thresholds for sentences in speech-shaped noise were measured for 6 adult SSD-CI listeners, 12 BI-CI listeners, and 9 normal-hearing listeners presented with vocoder simulations. Stimuli were presented using nonindividualized in-the-ear or behind-the-ear head-related impulse-response simulations with speech presented from a 70° azimuth (poorer-hearing side) and noise from −70° (better-hearing side), thereby yielding a better signal-to-noise ratio (SNR) at the poorer-hearing ear. Head-shadow benefit was computed as the improvement in bilateral speech-recognition thresholds gained from enabling the CI in the poorer-hearing, better-SNR ear. High- or low-pass filtering was systematically applied to the head-related impulse-response–filtered stimuli presented to the poorer-hearing ear. For the SSD-CI listeners and SSD-vocoder simulations, only high-pass filtering was applied, because the CI frequency allocation would never need to be adjusted downward to frequency-match the ears. For the BI-CI listeners and BI-vocoder simulations, both low- and high-pass filtering were applied. The normal-hearing listeners were tested with two levels of performance to examine the effect of interaural asymmetry in monaural speech recognition (vocoder synthesis-filter slopes: 5 or 20 dB/octave). Results: Mean head-shadow benefit was smaller for the SSD-CI listeners (~7 dB) than for the BI-CI listeners (~14 dB). For SSD-CI listeners, frequencies <1236 Hz could be excluded; for BI-CI listeners, frequencies <886 or >3814 Hz could be excluded from the poorer-hearing ear without reducing head-shadow benefit. Bilateral performance showed greater immunity to filtering than monaural performance, with gradual changes in performance as a function of filter cutoff. Real and vocoder-simulated CI users with larger interaural asymmetry in monaural performance had less head-shadow benefit. Conclusions: The “exclusion frequency” ranges that could be removed without diminishing head-shadow benefit are interpreted in terms of low importance in the speech intelligibility index and a small head-shadow magnitude at low frequencies. Although groups and individuals with greater performance asymmetry gained less head-shadow benefit, the magnitudes of these factors did not predict the exclusion frequency range. Overall, these data suggest that for many SSD-CI and BI-CI listeners, the frequency allocation for the poorer-ear CI can be shifted substantially without sacrificing head-shadow benefit, at least for energetic maskers.
Treating the two ears together as a single system may allow greater flexibility in discarding redundant frequency content from a CI in one ear when designing bilateral programming solutions aimed at reducing interaural frequency mismatch.
ACKNOWLEDGMENTS: The authors thank Cochlear Ltd. and Med-El for providing equipment and technical support. The authors thank John Culling for providing head shadow modeling software. The authors thank Ginny Alexander for her assistance with subject recruitment, coordination, and payment of subjects at the University of Maryland-College Park, as well as Brian Simpson and Matt Ankrom for the recruitment, coordination, and payment of the subject panel at the Air Force Research Laboratory. The research reported here was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under Award Number R01DC015798 (J.G.W.B. and M.J.G.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The identification of specific products or scientific instrumentation does not constitute endorsement or implied endorsement on the part of the author, DoD, or any component agency. The views expressed in this article are those of the authors and do not reflect the official policy of the Department of the Army/Navy/Air Force, Department of Defense, or U.S. Government. Portions of these data were presented at the 2017 Midwinter Meeting of the Association for Research in Otolaryngology, Baltimore, MD, and the 2017 Conference on Implantable Auditory Prostheses, Tahoe City, CA. The authors have no conflicts of interest to disclose. Received April 2, 2018; accepted June 25, 2019. Address for correspondence: Sterling W. Sheffield, Department of Speech, Language and Hearing Sciences, University of Florida, 1225 Center Drive, Room 2130, Gainesville, FL 32610, USA. E-mail: s.sheffield@phhp.ufl.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
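Two of the quantities above reduce to simple operations: head-shadow benefit is the SRT improvement from enabling the poorer-ear CI, and the exclusion test is a filtering of the poorer-ear stimulus at a candidate cutoff (for example, the 1236 Hz value reported for SSD-CI listeners). A hedged Python sketch follows; the SRT values and filter order are illustrative assumptions.

```python
# Sketch of (1) head-shadow benefit from SRTs and (2) high-pass filtering the
# poorer-ear stimulus at a candidate "exclusion" cutoff. Values illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def head_shadow_benefit(srt_better_ear_only_db, srt_bilateral_db):
    # Lower SRT is better, so benefit = monaural SRT minus bilateral SRT.
    return srt_better_ear_only_db - srt_bilateral_db

def high_pass_poorer_ear(x, fs, cutoff_hz=1236.0):
    sos = butter(4, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, x)

print(head_shadow_benefit(2.0, -5.0))  # e.g., 7 dB, as reported for SSD-CI
```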
Vocal Turn-Taking Between Mothers and Their Children With Cochlear Implants
Objectives: The primary objective of the study was to examine the occurrence and temporal structure of vocal turn-taking during spontaneous interactions between mothers and their children with cochlear implants (CI) over the first year after cochlear implantation as compared with interactions between mothers and children with normal hearing (NH). Design: Mothers’ unstructured play sessions with children with CI (n = 12) were recorded at 2 time points, 3 months (mean age 18.3 months) and 9 months (mean age 27.5 months) post-CI. A separate control group of mothers with age-matched hearing children (n = 12) was recorded at the same 2 time points. Five types of events were coded: mother and child vocalizations, vocalizations including speech overlap, and between- and within-speaker pauses. We analyzed the proportion of child and mother vocalizations involved in turn-taking, the temporal structure of turn-taking, and the temporal reciprocity of turn-taking using proportions of simultaneous speech and the duration of between- and within-speaker pauses. Results: The CI group produced a significantly smaller proportion of vocalizations in turn-taking than the NH group at the first session; however, CI children’s proportion of vocalizations in turn-taking increased over time. There was a significantly larger proportion of simultaneous speech in the CI compared with the NH group at the first session. The CI group produced longer between-speaker pauses as compared with those in the NH group at the first session, with mothers decreasing the duration of between-speaker pauses over time. NH infants and mothers in both groups produced longer within- than between-speaker pauses, but CI infants demonstrated the opposite pattern. In addition, the duration of mothers’ between-speaker pauses (CI and NH) was predicted by the duration of the infants’ between-speaker pauses. Conclusions: Vocal turn-taking and timing in both members of the dyad, the mother and infant, were sensitive to the experiential effects of child hearing loss and remediation with CI. Child hearing status affected dyad-specific coordination in the timing of responses between mothers and their children. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com).
ACKNOWLEDGMENTS: The authors thank all the families who participated in this study, the audiologists at the different sites for their help with recruiting the families, and all the research assistants and staff members for their help with gathering and analyzing the data. This research was supported by the National Institutes of Health, National Institute on Deafness and Other Communication Disorders (NIH-NIDCD) Research Grant 5R01DC008581-08 to D.M. Houston and L. Dilley and NIH-NIDCD grant R01DC008581 to T. Bergeson. The authors have no conflicts of interest to disclose. Received November 27, 2018; accepted May 27, 2019. Address for correspondence: Maria V. Kondaurova, Department of Psychological & Brain Sciences, University of Louisville, 301 Life Sciences Building, Louisville, KY 40292, USA. E-mail: maria.kondaurova@louisville.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
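The between- and within-speaker pause measures can be computed directly from time-stamped vocalization onsets and offsets: a between-speaker pause is the gap from one speaker's offset to the other speaker's onset, and a within-speaker pause separates two vocalizations by the same speaker. A minimal sketch follows; the input format (sorted (onset, offset, speaker) tuples) is an assumption, not the authors' coding scheme.

```python
# Sketch of pause coding from time-stamped vocalizations.
def pause_durations(events):
    """events: list of (onset_s, offset_s, speaker) tuples, sorted by onset."""
    between, within = [], []
    for (on1, off1, who1), (on2, off2, who2) in zip(events, events[1:]):
        gap = on2 - off1
        if gap <= 0:
            continue  # overlap -> simultaneous speech, coded separately
        (within if who1 == who2 else between).append(gap)
    return between, within
```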
Improving Cochlear Implant Performance in the Wind Through Spectral Masking Release: A Multi-microphone and Multichannel Strategy
Objectives: Adopting the omnidirectional microphone (OMNI) mode and reducing low-frequency gain are the two most commonly used wind noise reduction strategies in hearing devices. The objective of this study was to compare the effectiveness of these two strategies on cochlear implant users’ speech-understanding abilities and perceived sound quality in wind noise. We also examined the effectiveness of a new strategy that adopts the microphone mode with the lower wind noise level in each frequency channel. Design: A behind-the-ear digital hearing aid with multiple microphone modes was used to record testing materials for cochlear implant participants. It was adjusted to have linear amplification and a flat frequency response when worn on a Knowles Electronics Manikin for Acoustic Research (KEMAR), to remove the head-related transfer function of the manikin and to mimic typical microphone characteristics of hearing devices. Recordings of wind noise samples and Hearing in Noise Test sentences were made when the hearing aid was programmed to four microphone modes, namely (1) OMNI; (2) adaptive directional microphone (ADM); (3) ADM with low-frequency roll-off; and (4) a combination of omnidirectional and directional microphones (COMBO). Wind noise samples were recorded in an acoustically treated wind tunnel from 0° to 360° in 10° increments at wind velocities of 4.5, 9.0, and 13.5 m/s when the hearing aid was worn on the manikin. Two wind noise samples recorded at 90° and 300° head angles at a wind velocity of 9.0 m/s were chosen to take advantage of the spectral masking release effects of COMBO. The samples were then mixed with the sentences recorded using identical settings. Cochlear implant participants listened to the speech-in-wind testing materials, repeated the sentences, and compared the overall sound quality of the different microphone modes using a paired-comparison categorical rating paradigm. The participants also rated their preferences for wind-only samples. Results: COMBO yielded the highest speech recognition scores among the four microphone modes, and it was also preferred the most often, likely due to the reduction of spectral masking. The speech recognition scores generated using ADM with low-frequency roll-off were either equal to or lower than those obtained using ADM, because the gain reduction decreased not only the level of wind noise but also the low-frequency energy of speech. OMNI consistently yielded speech recognition scores lower than COMBO, and it was often rated as less preferable than the other microphone modes, suggesting that the conventional strategy of switching to the omnidirectional mode in the wind is undesirable. Conclusions: Neither adopting an OMNI mode nor reducing low-frequency gain generated higher speech recognition scores or higher sound quality ratings than COMBO. Adopting the microphone mode with the lower wind noise level in each frequency channel can provide spectral masking release, and it is a more effective wind noise reduction strategy. The natural 6 dB/octave low-frequency roll-off of first-order directional microphones should be compensated when speech is present. Signal detection and decision rules for wind noise reduction applications in hearing devices with and without binaural transmission capability are discussed.
ACKNOWLEDGMENTS: The author thanks Lance Nelson and Melissa Teske Dunn for data collection and Jens Balslev, Peter Nopp, Nick McKibben, Drs. Ernst Aschbacher and Kaibao Nie, and the staff at the Herrick Laboratories for technical support.
This study was funded by the Oticon Foundation and the Med-El Corporation. The author was solely responsible for the design of the study procedures and the contents presented in this article. The author holds a United States patent (8,942,815), “Enhancing cochlear implants with hearing aid signal processing technologies.” The author has no conflicts of interest to declare. Received February 2, 2017; accepted May 27, 2019. Address for correspondence: King Chung, Department of Allied Health and Communicative Disorders, Northern Illinois University, DeKalb, IL 60115, USA. E-mail: kchung@niu.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
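The COMBO idea, picking, channel by channel, whichever microphone mode carries less wind noise, can be sketched as a short-time filterbank comparison. The implementation below (STFT bands, a percentile-based noise estimate) is an illustrative assumption, not the processing used in the study.

```python
# Hedged sketch of per-channel microphone selection: estimate the wind-noise
# level in each frequency channel for OMNI and ADM recordings, then build the
# output from whichever mode is quieter in that channel.
import numpy as np
from scipy.signal import stft, istft

def combo(omni, adm, fs, nperseg=256):
    f, t, O = stft(omni, fs, nperseg=nperseg)
    _, _, A = stft(adm, fs, nperseg=nperseg)
    # Crude per-channel noise floor: low-percentile magnitude across frames.
    noise_o = np.percentile(np.abs(O), 10, axis=1)
    noise_a = np.percentile(np.abs(A), 10, axis=1)
    pick_omni = (noise_o < noise_a)[:, None]  # channel-wise mode selection
    Y = np.where(pick_omni, O, A)
    _, y = istft(Y, fs, nperseg=nperseg)
    return y
```

Selecting per channel rather than per device is what yields the spectral masking release described in the abstract: wind noise concentrated in low-frequency channels of one mode no longer masks speech carried in the cleaner channels of the other.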
Improving Sensitivity of the Digits-In-Noise Test Using Antiphasic Stimuli
Objectives: The digits-in-noise test (DIN) has become increasingly popular as a consumer-based method to screen for hearing loss. Current versions of all DINs either test ears monaurally or present identical stimuli binaurally (i.e., diotic noise and speech, NoSo). Unfortunately, presentation of identical stimuli to each ear inhibits detection of unilateral sensorineural hearing loss (SNHL), and neither diotic nor monaural presentation sensitively detects conductive hearing loss (CHL). After an earlier finding of enhanced sensitivity in normally hearing listeners, this study tested the hypothesis that interaural antiphasic digit presentation (NoSπ) would improve sensitivity to hearing loss caused by unilateral or asymmetric SNHL, symmetric SNHL, or CHL. Design: This cross-sectional study recruited adults (18 to 84 years) with various levels of hearing based on a 4-frequency pure-tone average (PTA) at 0.5, 1, 2, and 4 kHz. The study sample comprised listeners with normal hearing (n = 41; PTA ≤ 25 dB HL in both ears), symmetric SNHL (n = 57; PTA > 25 dB HL), unilateral or asymmetric SNHL (n = 24; PTA > 25 dB HL in the poorer ear), and CHL (n = 23; PTA > 25 dB HL and PTA air-bone gap ≥ 20 dB HL in the poorer ear). Antiphasic and diotic speech reception thresholds (SRTs) were compared using a repeated-measures design. Results: The antiphasic DIN was significantly more sensitive to all three forms of hearing loss than the diotic DIN. SRT test–retest reliability was high for all tests (intraclass correlation coefficient r > 0.89). The area under the receiver operating characteristic curve for detection of hearing loss (>25 dB HL) was higher for antiphasic (0.94) than for diotic (0.77) DIN presentation. After correcting for age, the PTA of listeners with normal hearing or symmetric SNHL was more strongly correlated with antiphasic (r_partial(96) = 0.69) than diotic (r_partial = 0.54) SRTs. The slope of the fitted regression lines predicting SRT from PTA was significantly steeper for the antiphasic than the diotic DIN. For listeners with normal hearing or CHL, antiphasic SRTs were more strongly correlated with PTA (r_partial(62) = 0.92) than diotic SRTs (r_partial(62) = 0.64). The slope of the regression line with PTA was also significantly steeper for the antiphasic than the diotic DIN. The severity of asymmetric hearing loss (poorer-ear PTA) was unrelated to SRT. No effect of self-reported English competence on either antiphasic or diotic DIN SRTs was observed among the mixed first-language participants. Conclusions: Antiphasic digit presentation markedly improved the sensitivity of the DIN test to detect SNHL, either symmetric or asymmetric, while keeping test duration to a minimum by testing binaurally. In addition, the antiphasic DIN was able to detect CHL, a shortcoming of previous monaural or binaurally diotic DIN versions. The antiphasic DIN is thus a powerful tool for population-based screening. This enhanced functionality combined with smartphone delivery could make the antiphasic DIN suitable as a primary screen that is accessible to a large global audience. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com).
ACKNOWLEDGMENTS: The authors thank all the participants of this study, Steve Biko Academic Hospital, and all participating private practices for their assistance with data collection. The authors thank Li Lin for assistance with data analysis.
This research was funded by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under Award Number 5R21DC016241-02. Additional funding support was obtained from the National Research Foundation (Grant PR_CSRP190208414782). DWS, DRM, and HCM have relationships with the hearX Group and hearZA that include equity, consulting, and potential royalties. DRM is supported by Cincinnati Children’s Research Foundation and by the National Institute for Health Research Manchester Biomedical Research Centre. The authors have no other conflicts of interest to disclose. Received January 4, 2019; accepted June 4, 2019. Address for correspondence: Cas Smits, Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan 1117, Amsterdam, The Netherlands. E-mail: c.smits@vumc.nl This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
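The antiphasic (NoSπ) condition differs from the diotic (NoSo) condition only in the sign of the speech waveform in one ear: the noise is identical in both ears, while the digits are presented either in phase or with their polarity inverted in one channel. A minimal sketch of stimulus construction, assuming digit and noise waveforms are already available as arrays:

```python
# NoSo vs. NoS-pi stimulus construction. Real tests use calibrated
# digit-triplet recordings; arrays here are placeholders.
import numpy as np

def make_stimulus(speech, noise, antiphasic):
    left = noise + speech
    right = noise + (-speech if antiphasic else speech)  # pi phase inversion
    return np.stack([left, right], axis=1)  # (n_samples, 2) stereo signal

# diotic = make_stimulus(speech, noise, antiphasic=False)   # NoSo
# antiphasic = make_stimulus(speech, noise, antiphasic=True)  # NoS-pi
```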
Perspective on the Development of a Large-Scale Clinical Data Repository for Pediatric Hearing Research
The use of “big data” for pediatric hearing research requires new approaches to both data collection and research methods. The widespread deployment of electronic health record systems creates new opportunities and corresponding challenges in the secondary use of large volumes of audiological and medical data. Opportunities include cost-effective hypothesis generation, rapid cohort expansion for rare conditions, and observational studies based on sample sizes in the thousands to tens of thousands. Challenges include finding and forming appropriately skilled teams, access to data, data quality assessment, and engagement with a research community new to big data. The authors share their experience and perspective on the work required to build and validate a pediatric hearing research database that integrates clinical data for over 185,000 patients from the electronic health record systems of three major academic medical centers.
ACKNOWLEDGMENTS: The authors are grateful to the late Judith Gravel, Ph.D., for her efforts in the early conception and design of this project. This study was funded by the National Institute on Deafness and Other Communication Disorders Grant Number 1R24DC012207-01A1. J.W.P. oversaw informatics and technical aspects of the project and drafted the article; B.R. designed the architecture of the database, implemented all software, and extracted data for CHOP; J.M.M. extracted all imaging data and implemented imaging software components; J.P. performed analysis of CHOP audiology clinic workflows and data and served as a subject matter expert on the interpretation of audiology data; B.X. performed statistical summarization and literature review for data validation; I.K. served as a subject matter expert on the interpretation of clinical and genetic data; J.M. defined data requirements, performed data quality assessment, and extracted BCH data; T.G. performed analysis of BCH audiology clinic workflows and data and served as a subject matter expert on the interpretation of audiology data; D.S. performed analysis of BCH audiology clinic workflows and data and served as a subject matter expert on the interpretation of audiology data; M.K. served as a subject matter expert on clinical care and hearing loss research, provided scientific direction, and oversaw the extraction of BCH data; L.J.H. served as a subject matter expert on clinical care and hearing loss research, provided scientific direction, made major revisions to the article, and oversaw the extraction of VU data; J.G. served as a subject matter expert on hearing loss research, led compliance efforts, and provided scientific direction; E.B.C. oversaw the project, performed data quality assessment, provided scientific direction, and made major revisions to the article. The authors have no conflicts of interest to disclose. Received October 4, 2016; accepted June 11, 2019. Address for correspondence: E. Bryan Crenshaw III, Children’s Hospital of Philadelphia, 34th and Civic Center Blvd, Philadelphia, PA 19104, USA. E-mail: crenshaw@email.chop.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Quantifying the Range of Signal Modification in Clinically Fit Hearing Aids
Objectives: Hearing aids provide various signal processing techniques with a range of parameters to improve the listening experience for a hearing-impaired individual. In previous studies, we reported significant differences in signal modification for mild versus strong signal processing in commercially available hearing aids. In this study, the authors extend this work to clinically prescribed hearing aid fittings based on best-practice guidelines. The goals of this project are to determine the range of cumulative signal modification in clinically fit hearing aids across manufacturers and technology levels, and the effects of listening conditions, including signal-to-noise ratio (SNR) and presentation level, on these signal modifications. Design: We identified a subset of hearing aids that were representative of a typical clinical setting. Deidentified hearing aid fitting data were obtained from three audiology clinics for adult hearing aid users with sensorineural hearing loss across a range of hearing sensitivities. Matching laboratory hearing aids were programmed with the deidentified fitting data. Output from these hearing aids was recorded at four SNRs and three presentation levels. The resulting signal modification was quantified using the cepstral correlation component of the Hearing Aid Speech Quality Index, which measures the speech envelope changes in the context of a model of the listener’s hearing loss. These metric values represent the hearing aid-processed signal as it is heard by the hearing aid user. Audiometric information was used to determine the nature of any possible association with the distribution of signal modification in these clinically fit hearing aids. Results: In general, signal modification increased as SNR decreased and presentation level increased. Differences across manufacturers were significant, such that the effect of presentation level varied with SNR differently for each manufacturer. This result suggests that there may be variations across manufacturers in how various listening conditions are processed. There was no significant effect of technology level. There was a small effect of pure-tone average on signal modification for one manufacturer, but no effect of audiogram slope. Finally, there was a broad range of measured signal modification for a given hearing loss, for the same manufacturer and listening condition. Conclusions: The signal modification values in this study are representative of commonly fit hearing aids in clinics today. The results of this study provide insights into how the range of signal modifications obtained in real clinical fittings compares with that reported in a previous study. Future studies will focus on the behavioral implications of signal modifications in clinically fit hearing aids. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com).
ACKNOWLEDGMENTS: The authors thank Kailey Durkin and Sarah Mullervy for conducting the hearing aid recordings, and Diane Novak and Pauline Norton for retrieving the hearing aid fitting data from Northwestern University Center for Speech, Language, & Learning. This study was supported by the National Institutes of Health Grant R01 DC012289 (to P.S. and K.A.). Portions of these data were presented at the 2018 International Hearing Aid Conference, Lake Tahoe, California, August 17, 2018. All authors contributed equally to this study.
V.R., M.A., J.K., L.S., K.A., and P.S. contributed to the experimental design. V.R. and M.A. managed the experiment and supervised the hearing aid recordings. V.R. performed statistical analysis and wrote the main article. L.B. assisted with the design and interpretation of the statistical analysis and description of the results. M.A. and J.K. also contributed portions of the article. M.A. managed data retrieval at the University of Colorado Hospital, and L.S. managed data retrieval at I Love Hearing. P.S., M.A., K.A., J.K., L.S., and L.B. provided critical review of the article. All authors discussed the results and implications and contributed to the final article. The authors have no conflicts of interest to disclose. Received January 7, 2019; accepted June 4, 2019. Address for correspondence: Varsha Rallapalli, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA. E-mail: varsha.rallapalli@northwestern.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
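The cepstral correlation metric compares the band-envelope modulation of the processed signal against a reference. The sketch below is a heavily simplified illustration of that idea (log band envelopes, a low-order DCT across bands, per-coefficient correlations averaged into one score); the published Hearing Aid Speech Quality Index additionally passes both signals through a model of the listener's hearing loss, which is omitted here, so this is not the metric used in the study.

```python
# Simplified, illustrative envelope cepstral correlation. Assumes fs >= 16 kHz
# so the top band edge (8 kHz) is below the Nyquist frequency.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert
from scipy.fftpack import dct

def cepstral_correlation(ref, proc, fs, n_bands=16, n_cep=6):
    edges = np.logspace(np.log10(80), np.log10(8000), n_bands + 1)

    def band_envs(x):
        envs = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
            envs.append(np.abs(hilbert(sosfilt(sos, x))))  # Hilbert envelope
        return np.log(np.array(envs) + 1e-8)  # (n_bands, n_samples), log scale

    C_ref = dct(band_envs(ref), axis=0)[1:n_cep + 1]  # drop coeff 0 (overall level)
    C_prc = dct(band_envs(proc), axis=0)[1:n_cep + 1]
    r = [np.corrcoef(a, b)[0, 1] for a, b in zip(C_ref, C_prc)]
    return float(np.mean(r))  # 1.0 = no envelope modification
```

A score near 1.0 indicates the processed envelope closely tracks the reference; lower scores correspond to greater cumulative signal modification, the quantity the study compares across manufacturers, SNRs, and presentation levels.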
