Middle Ear Muscle Reflex and Word Recognition in “Normal-Hearing” Adults: Evidence for Cochlear Synaptopathy? Objectives: Permanent threshold elevation after noise exposure, ototoxic drugs, or aging is caused by loss of sensory cells; however, animal studies show that hair cell loss is often preceded by degeneration of synapses between sensory cells and auditory nerve fibers. The silencing of these neurons, especially those with high thresholds and low spontaneous rates, degrades auditory processing and may contribute to difficulties in understanding speech in noise. Although cochlear synaptopathy can be diagnosed in animals by measuring suprathreshold auditory brainstem responses, its diagnosis in humans remains a challenge. In mice, cochlear synaptopathy is also correlated with measures of middle ear muscle (MEM) reflex strength, possibly because the missing high-threshold neurons are important drivers of this reflex. The authors hypothesized that measures of the MEM reflex might be better than other assays of peripheral function in predicting difficulties hearing in difficult listening environments in human subjects. Design: The authors recruited 165 normal-hearing healthy subjects, between 18 and 63 years of age, with no history of ear or hearing problems, no history of neurologic disorders, and unremarkable otoscopic examinations. Word recognition in quiet and in difficult listening situations was measured in four ways: using isolated words from the Northwestern University auditory test number six corpus with either (a) 0 dB signal to noise, (b) 45% time compression with reverberation, or (c) 65% time compression with reverberation, and (d) with a modified version of the QuickSIN. Audiometric thresholds were assessed at standard and extended high frequencies. Outer hair cell function was assessed by distortion product otoacoustic emissions (DPOAEs). Middle ear function and reflexes were assessed using three methods: the acoustic reflex threshold as measured clinically, wideband tympanometry as measured clinically, and a custom wideband method that uses a pair of click probes flanking an ipsilateral noise elicitor. Other aspects of peripheral auditory function were assessed by measuring click-evoked gross potentials, that is, summating potential (SP) and action potential (AP) from ear canal electrodes. Results: After adjusting for age and sex, word recognition scores were uncorrelated with audiometric or DPOAE thresholds, at either standard or extended high frequencies. MEM reflex thresholds were significantly correlated with scores on isolated word recognition, but not with the modified version of the QuickSIN. The highest pairwise correlations were seen using the custom assay. AP measures were correlated with some of the word scores, but not as highly as seen for the MEM custom assay, and only if amplitude was measured from SP peak to AP peak, rather than baseline to AP peak. The highest pairwise correlations with word scores, on all four tests, were seen with the SP/AP ratio, followed closely by SP itself. When all predictor variables were combined in a stepwise multivariate regression, SP/AP dominated models for all four word score outcomes. MEM measures only enhanced the adjusted r2 values for the 45% time compression test. The only other predictors that enhanced model performance (and only for two outcome measures) were measures of interaural threshold asymmetry. 
Conclusions: Results suggest that, among normal-hearing subjects, there is a significant peripheral contribution to diminished hearing performance in difficult listening environments that is not captured by either threshold audiometry or DPOAEs. The significant univariate correlations between word scores and either SP/AP, SP, MEM reflex thresholds, or AP amplitudes (in that order) are consistent with a type of primary neural degeneration. However, interpretation is clouded by uncertainty as to the mix of pre- and postsynaptic contributions to the click-evoked SP. None of the assays presented here has the sensitivity to diagnose neural degeneration on a case-by-case basis; however, these tests may be useful in longitudinal studies to track accumulation of neural degeneration in individual subjects. ACKNOWLEDGMENTS: The authors gratefully acknowledge Mrs. Inge Knudson for coordinating subject recruitment. The authors thank Drs. J. J. Guinan, Jr., S. G. Kujawa, and M. D. Valero for their comments on earlier versions of this manuscript. The authors also gratefully acknowledge a gift from Decibel Therapeutics for the purchase of the commercial audiometric equipment. A.M.M. and S.A.K. performed the experiments and contributed equally to this work. K.E.H. developed software for data acquisition and analysis. K.B. and V.de.G. ran the statistical analyses. M.C.L. and S.F.M. designed the study and wrote the article. S.F.M. also performed experiments and data analysis. This work was supported by the National Institutes of Health – National Institute on Deafness and Other Communication Disorders P50 DC015857 (S.F.M., Project principal investigator (PI)) and the Lauer Tinnitus Research Center at the Massachusetts Eye & Ear (S.F.M., PI). M.C.L. is a scientific founder of Decibel Therapeutics. The other authors have no conflicts of interest to declare. Received June 20, 2018; accepted August 13, 2019 Address for correspondence: Stéphane F. Maison, Eaton-Peabody Laboratories, Massachusetts Eye & Ear, 243 Charles Street, Boston, MA 02114, USA. E-mail: stephane_maison@meei.harvard.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
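As a rough illustration of the kind of age- and sex-adjusted stepwise regression described in the abstract above, the sketch below shows a forward-selection loop over candidate peripheral predictors using Python and statsmodels. It is a minimal sketch only: the variable names (word_score_tc45, sp_ap_ratio, mem_threshold, etc.) are hypothetical placeholders, not the study's actual variables or analysis code.

```python
# Minimal sketch of a forward stepwise OLS over candidate peripheral predictors,
# adjusting for age and sex. Column names are hypothetical placeholders.
import statsmodels.formula.api as smf

def forward_stepwise(df, outcome, candidates, base=("age", "C(sex)")):
    """Greedily add the candidate predictor that most improves adjusted R^2."""
    selected, remaining = list(base), list(candidates)
    best = smf.ols(f"{outcome} ~ {' + '.join(selected)}", data=df).fit().rsquared_adj
    improved = True
    while improved and remaining:
        improved = False
        fits = [(smf.ols(f"{outcome} ~ {' + '.join(selected + [c])}",
                         data=df).fit().rsquared_adj, c) for c in remaining]
        adj_r2, best_cand = max(fits)
        if adj_r2 > best:                      # keep the predictor only if it helps
            selected.append(best_cand)
            remaining.remove(best_cand)
            best, improved = adj_r2, True
    return selected, best

# Hypothetical usage with a per-subject data frame loaded elsewhere:
# predictors = ["sp_ap_ratio", "sp_amplitude", "ap_amplitude", "mem_threshold"]
# terms, adj_r2 = forward_stepwise(df, "word_score_tc45", predictors)
```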
Rerouting Hearing Aid Systems for Overcoming Simulated Unilateral Hearing in Dynamic Listening Situations Objectives: Unilateral hearing loss increases the risk of academic and behavioral challenges for school-aged children. Previous research suggests that remote microphone (RM) systems offer the most consistent benefits for children with unilateral hearing loss in classroom environments relative to other nonsurgical interventions. However, generalizability of previous laboratory work is limited because of the specific listening situations evaluated, which often included speech and noise signals originating from the side. In addition, early studies focused on speech recognition tasks requiring limited cognitive engagement. However, those laboratory conditions do not reflect characteristics of contemporary classrooms, which are cognitively demanding and typically include multiple talkers of interest in relatively diffuse background noise. The purpose of this study was to evaluate the potential effects of rerouting amplification systems, specifically a RM system and a contralateral routing of signal (CROS) system, on speech recognition and comprehension of school-age children in a laboratory environment designed to emulate the dynamic characteristics of contemporary classrooms. It was expected that listeners would benefit from the CROS system when the head shadow limits audibility (e.g., monaural indirect listening). It was also expected that listeners would benefit from the RM system only when the RM was near the talker of interest. Design: Twenty-one children (10 to 14 years, M = 11.86) with normal hearing participated in laboratory tests of speech recognition and comprehension. Unilateral hearing loss was simulated by presenting speech-shaped masking noise to one ear via an insert earphone. Speech stimuli were presented from 1 of 4 loudspeakers located at either 0°, +45°, −90°, and −135° or 0°, −45°, +90°, and +135°. Cafeteria noise was presented from separate loudspeakers surrounding the listener. Participants repeated sentences (sentence recognition) and also answered questions after listening to an unfamiliar story (comprehension). They were tested unaided, with a RM system (microphone near the front loudspeaker), and with a CROS system (ear-level microphone on the ear with simulated hearing loss). Results: Relative to unaided listening, both rerouting systems reduced sentence recognition performance for most signals originating near the ear with normal hearing (monaural direct loudspeakers). Only the RM system improved speech recognition for midline signals, which were near the RM. Only the CROS system significantly improved speech recognition for signals originating near the ear with simulated hearing loss (monaural indirect loudspeakers). Although the benefits were generally small (approximately 6.5 percentage points), the CROS system also improved comprehension scores, which reflect overall listening across all four loudspeakers. Conversely, the RM system did not improve comprehension scores relative to unaided listening. Conclusions: Benefits of the CROS system in this study were small, specific to situations where speech is directed toward the ear with hearing loss, and relative only to a RM system utilizing one microphone. 
Although future study is warranted to evaluate the generalizability of the findings, the data demonstrate that both CROS and RM systems are nonsurgical interventions that have the potential to improve speech recognition and comprehension for children with limited useable unilateral hearing in dynamic, noisy classroom situations. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: We thank Christine Jones, Lori Rakita, and Ora Bürkli for their insightful comments during study design. We also thank Laura Allen for consultation on the Coh-Metrix evaluation of stories used for the comprehension task. This project was funded by a grant from Sonova, AG. Portions of the project were presented at the Unilateral Hearing Loss Conference held in Philadelphia, PA, sponsored by Phonak (October 22–24, 2017), and at the Scientific and Technical Conference of the American Auditory Society in Scottsdale, AZ (March 1–3, 2018). Stimulus development for this project was supported by NIH grant P20 GM109023 (D.L.). The content of this manuscript is the sole responsibility of the authors and does not necessarily represent the views of the National Institutes of Health. D.L. and A.M.T. are members of the Phonak Pediatric Research Advisory Board. Received July 16, 2018; accepted July 30, 2019. Address for correspondence: Erin M. Picou, Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, 1215 21st Ave South, Room 8310, Nashville, TN 37232, USA. E-mail: erin.picou@vanderbilt.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
Evaluating the Effect of Training Along With Fit Testing on Premolded Earplug Users in a Chinese Petrochemical Plant Objectives: To gain insight into the current practice of hearing protection among Chinese workers and the value of hearing protection device (HPD) fit testing. Design: HPD fit testing was conducted on workers (N = 774) in a petrochemical plant in Eastern China who were on duty during the period of this study. The 3M E-A-Rfit Dual-Ear Validation System was used to measure the personal attenuation ratings (PARs) of a premolded earplug used at the work site. Repeated fit testing was conducted at approximately 6- or 12-month intervals. Wilcoxon signed rank tests were conducted to analyze the pairwise differences between the baseline, postintervention, and follow-up visit PARs, and Mann–Whitney tests were used to compare the PARs obtained by the two follow-up groups. Results: The median baseline PAR was 11 dB; a significant improvement was shown in the postintervention PARs (p < 0.001). No significant difference was shown between PARs obtained during the 6- and 12-month follow-up visits (p > 0.05). PARs at the follow-up visits showed a significant improvement over the baseline PARs (p < 0.001) but a significant decline relative to the postintervention PARs (p < 0.001). Conclusions: HPD fit testing added value by verifying the sufficiency of attenuation. Training combined with fit testing contributed to improved PARs, maintained effectiveness over time, and assisted in HPD selection. Follow-up is believed to be important to ensure that HPDs continue to be used correctly. There was no significant difference in sustained effectiveness between the 6- and 12-month follow-up intervals. ACKNOWLEDGMENTS: We thank Enmin Ding and Jun Wu at the Jiangsu Provincial Center for Disease Control and Prevention for assistance with industrial hygiene fieldwork. We also express our great appreciation to Elliott H. Berger, Division Scientist, 3M Personal Safety Division, for his careful scientific review of and comments on this manuscript. This study was supported by the Jiangsu Provincial Outstanding Medical Academic Leader and Innovation Team (CXTDA2017029), the Natural Science Foundation of Jiangsu Province (Grant No. BK20151594), and the Personal Safety Division of 3M China Ltd. The authors have no conflicts of interest to disclose. Received April 21, 2018; accepted July 29, 2019. Address for correspondence: Yufei Liu, Personal Safety Division of 3M China Ltd., 3/F, 3M Guangzhou Plant, No. 9, Nanxiang Er Road, Science City, Guangzhou, People’s Republic of China. E-mail: sliu9@mmm.com Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
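For readers who want to reproduce the flavor of the statistics reported above, here is a minimal sketch of the paired Wilcoxon signed-rank and unpaired Mann–Whitney comparisons using scipy; the PAR values are made-up illustrative numbers, not the study's data.

```python
# Minimal sketch of the nonparametric tests named above; the PAR values (dB)
# are hypothetical, not data from the study.
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

# Paired comparison: baseline vs. postintervention PARs for the same workers.
baseline_par = np.array([11.0, 9.5, 14.0, 8.0, 12.5, 10.0])
post_par = np.array([22.0, 18.5, 25.0, 17.0, 21.0, 19.5])
_, p_paired = wilcoxon(baseline_par, post_par)

# Unpaired comparison: PARs from the 6-month vs. 12-month follow-up groups.
followup_6mo = np.array([19.0, 17.5, 21.0, 16.0, 18.5])
followup_12mo = np.array([18.0, 20.5, 17.0, 19.5, 16.5])
_, p_groups = mannwhitneyu(followup_6mo, followup_12mo, alternative="two-sided")

print(f"Wilcoxon signed-rank p = {p_paired:.3f}; Mann-Whitney U p = {p_groups:.3f}")
```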
Acoustic Hearing Can Interfere With Single-Sided Deafness Cochlear-Implant Speech Perception Objectives: Cochlear implants (CIs) restore some spatial advantages for speech understanding in noise to individuals with single-sided deafness (SSD). In addition to a head-shadow advantage when the CI ear has a better signal-to-noise ratio, a CI can also provide a binaural advantage in certain situations, facilitating the perceptual separation of spatially separated concurrent voices. While some bilateral-CI listeners show a similar binaural advantage, bilateral-CI listeners with relatively large asymmetries in monaural speech understanding can instead experience contralateral speech interference. Based on the interference previously observed for asymmetric bilateral-CI listeners, this study tested the hypothesis that in a multiple-talker situation, the acoustic ear would interfere with rather than improve CI speech understanding for SSD-CI listeners. Design: Experiment 1 measured CI-ear speech understanding in the presence of competing speech or noise for 13 SSD-CI listeners. Target speech from the closed-set coordinate response-measure corpus was presented to the CI ear along with one same-gender competing talker or stationary noise at target-to-masker ratios between −8 and 20 dB. The acoustic ear was presented with silence (monaural condition) or with a copy of the competing speech or noise (bilateral condition). Experiment 2 tested a subset of 6 listeners in the reverse configuration for which SSD-CI listeners have previously shown a binaural benefit (target and competing speech presented to the acoustic ear; silence or competing speech presented to the CI ear). Experiment 3 examined the possible influence of a methodological difference between experiments 1 and 2: whether the competing talker spoke keywords that were inside or outside the response set. For each experiment, the data were analyzed using repeated-measures logistic regression. For experiment 1, a correlation analysis compared the difference between bilateral and monaural speech-understanding scores to several listener-specific factors: speech understanding in the CI ear, preimplantation duration of deafness, duration of CI experience, ear of deafness (left/right), acoustic-ear audiometric thresholds, and listener age. Results: In experiment 1, presenting a copy of the competing speech to the acoustic ear reduced CI speech-understanding scores for target-to-masker ratios ≥4 dB. This interference effect was limited to competing-speech conditions and was not observed for a noise masker. There was dramatic intersubject variability in the magnitude of the interference (range: 1 to 43 rationalized arcsine units), which was found to be significantly correlated with listener age. The interference effect contrasted sharply with the reverse configuration (experiment 2), whereby presenting a copy of the competing speech to the contralateral CI ear significantly improved performance relative to monaural acoustic-ear performance. Keyword condition (experiment 3) did not influence the observed pattern of interference. Conclusions: Most SSD-CI listeners experienced interference when they attended to the CI ear and competing speech was added to the acoustic ear, although there was a large amount of intersubject variability in the magnitude of the effect, with older listeners particularly susceptible to interference. 
While further research is needed to investigate these effects under free-field listening conditions, these results suggest that for certain spatial configurations in a multiple-talker situation, contralateral speech interference could reduce the benefit that an SSD-CI otherwise provides. ACKNOWLEDGMENTS: We thank Cochlear Ltd. and Med-El for providing the testing equipment and technical support. The research reported in this publication was supported by the National Institute on Deafness and other Communication Disorders of the National Institutes of Health under Award Number R01 DC015798 (J.G.W.B. and M.J.G.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The views expressed in this article are those of the authors and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or US Government. The identification of specific products or scientific instrumentation does not constitute endorsement or implied endorsement on the part of the author, Department of Defense, or any component agency. Portions of this article were presented at the Midwinter Meeting of the Association for Research in Otolaryngology, Baltimore, MD, February 2017, the 173rd Meeting of the Acoustical Society of America, Boston, MA, June 2017, and the Conference on Implantable Auditory Prostheses, Tahoe City, CA, July 2019. J.G.W.B., O.A.S., and M.J.G. designed the experiments; J.G.W.B., O.A.S., and K.K.J. recruited listeners and collected the data; J.G.W.B. analyzed the data; J.G.W.B., O.A.S., K.K.J., and M.J.G. wrote the article. All authors discussed the results and implications and commented on the manuscript at all stages. The authors have no conflicts of interest to disclose. Received January 15, 2019; accepted August 14, 2019. Address for correspondence: Joshua G. W. Bernstein, National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, 4954 N. Palmer Rd., Bethesda, MD 20889, USA. E-mail: joshua.g.bernstein.civ@mail.mil Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
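The interference effects in experiment 1 are reported in rationalized arcsine units (RAU); below is a minimal sketch of one common formulation of that transform (after Studebaker, 1985), assuming a score of X keywords correct out of N trials. The scores in the usage example are hypothetical.

```python
# Minimal sketch of a common formulation of the rationalized arcsine transform
# used to express speech scores (X correct out of N trials) in RAU.
import math

def rau(x_correct: int, n_items: int) -> float:
    """Rationalized arcsine units; roughly linearizes percent-correct scores."""
    t = (math.asin(math.sqrt(x_correct / (n_items + 1)))
         + math.asin(math.sqrt((x_correct + 1) / (n_items + 1))))
    return (146.0 / math.pi) * t - 23.0

# Hypothetical example: monaural minus bilateral score for one listener;
# a positive value indicates poorer performance in the bilateral condition.
interference = rau(42, 50) - rau(30, 50)   # about 25 RAU for these toy scores
print(f"Interference: {interference:.1f} RAU")
```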
Observations of Distortion Product Otoacoustic Emission Components in Adults With Hearing Loss Objectives: Distortion product otoacoustic emissions (DPOAEs) measured in the ear canal are composed of OAEs generated by at least two mechanisms coming from different places in the cochlea. Otoacoustic emission (OAE) models hypothesize that reduction of cochlear gain will differentially impact the components. The purpose of the current experiment was to provide preliminary data about DPOAE components in adults with hearing loss in relation to OAE models and explore whether evaluation of the relative amplitudes of generator and reflection components can enhance identification of hearing loss. Design: DPOAEs were measured from 45 adult ears; 21 had normal hearing (≤15 dB HL) and 24 had mild-to-severe sensorineural hearing loss (>15 dB HL). The higher frequency primary (f2) was swept logarithmically between 1500 and 6000 Hz, and f2/f1 was 1.22. The two equal-level primaries varied from 55 to 75 dB SPL in 5 dB steps. The swept primary procedure permitted the measurement of the amplitude and phase of the DPOAE fine structure and the extraction of the two major components (generator and reflection) by varying the predicted delays of the analysis windows. Results: DPOAE fine structure was reduced or absent in ears with hearing loss. DPOAE generator and reflection components were lower in ears with hearing loss than in those with normal hearing, especially the reflection component. Significant correlations were found between the generator component and hearing threshold but not between reflection levels and hearing threshold. Most ears with normal hearing had both components, but only a small number of ears with hearing loss had both components. Conclusions: The reflection component is not recordable or is low in level in ears with hearing loss, explaining the reduced or absent DPOAE fine structure. DPOAE generator components are also lower in level in ears with hearing loss than in ears without hearing loss. In ears that had both measurable generator and reflection components, the relationship between the two did not depend on the presence or absence of hearing loss. Because reflection components are not measurable in many ears with hearing thresholds >15 dB HL, stimuli that evoke other types of reflection emissions, such as stimulus-frequency or long-latency transient-evoked emissions, should be explored in conjunction with DPOAE generator components. ACKNOWLEDGMENTS: The authors thank Lisa Lamson, Stefania Arduini, and Devon Pacheco for article preparation. This research was funded by the March of Dimes Birth Defects Foundation. The authors have no conflicts of interest to disclose. Received May 3, 2018; accepted July 15, 2019. Address for correspondence: Beth A. Prieve, Department of Communication Sciences and Disorders, Syracuse University, 621 Skytop Road, Syracuse, NY 13244, USA. E-mail: baprieve@syr.edu. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
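As an illustration of the general idea behind separating DPOAE components by their latencies, the sketch below windows the latency-domain (inverse-FFT) representation of complex DPOAE data measured on a uniform frequency grid. This is a simplified stand-in for the authors' method (which varied the predicted delays of analysis windows applied to swept-tone data); the uniform-grid assumption and the 5-ms split latency are illustrative choices, not the study's parameters.

```python
# Minimal sketch: split complex DPOAE-vs-frequency data into short-latency
# (generator) and long-latency (reflection) components by windowing in the
# latency domain. Assumes a uniformly spaced frequency grid; the 5-ms split
# point is an illustrative choice.
import numpy as np

def split_dpoae_components(freqs_hz, dpoae_complex, latency_split_ms=5.0):
    df = float(np.mean(np.diff(freqs_hz)))            # frequency step (Hz)
    latency_s = np.fft.fftfreq(len(freqs_hz), d=df)   # axis conjugate to frequency (s)
    impulse = np.fft.ifft(dpoae_complex)              # latency-domain representation

    short_mask = np.abs(latency_s) < latency_split_ms / 1000.0
    generator = np.fft.fft(impulse * short_mask)      # short-latency component
    reflection = np.fft.fft(impulse * ~short_mask)    # long-latency component
    return generator, reflection
```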
Modality Effects on Lexical Encoding and Memory Representations of Spoken Words Objectives: The present study investigated presentation modality differences in lexical encoding and working memory representations of spoken words of older, hearing-impaired adults. Two experiments were undertaken: a memory-scanning experiment and a stimulus gating experiment. The primary objective of experiment 1 was to determine whether memory encoding and retrieval and scanning speeds are different for easily identifiable words presented in auditory-visual (AV), auditory-only (AO), and visual-only (VO) modalities. The primary objective of experiment 2 was to determine if memory encoding and retrieval speed differences observed in experiment 1 could be attributed to the early availability of AV speech information compared with AO or VO conditions. Design: Twenty-six adults over age 60 years with bilateral mild to moderate sensorineural hearing loss participated in experiment 1, and 24 adults who took part in experiment 1 participated in experiment 2. An item recognition reaction-time paradigm (memory-scanning) was used in experiment 1 to measure (1) lexical encoding speed, that is, the speed at which an easily identifiable word was recognized and placed into working memory, and (2) retrieval speed, that is, the speed at which words were retrieved from memory and compared with similarly encoded words (memory scanning) presented in AV, AO, and VO modalities. Experiment 2 used a time-gated word identification task to test whether the time course of stimulus information available to participants predicted the modality-related memory encoding and retrieval speed results from experiment 1. Results: The results of experiment 1 revealed significant differences among the modalities with respect to both memory encoding and retrieval speed, with AV fastest and VO slowest. These differences motivated an examination of the time course of stimulus information available as a function of modality. Results from experiment 2 indicated the encoding and retrieval speed advantages for AV and AO words compared with VO words were mostly driven by the time course of stimulus information. The AV advantage seen in encoding and retrieval speeds is likely due to a combination of robust stimulus information available to the listener earlier in time and lower attentional demands compared with AO or VO encoding and retrieval. Conclusions: Significant modality differences in lexical encoding and memory retrieval speeds were observed across modalities. The memory scanning speed advantage observed for AV compared with AO or VO modalities was strongly related to the time course of stimulus information. In contrast, lexical encoding and retrieval speeds for VO words could not be explained by the time course of stimulus information alone. Working memory processes for the VO modality may be impacted by greater attentional demands and less information availability compared with the AV and AO modalities. Overall, these results support the hypothesis that the presentation modality for speech inputs (AV, AO, or VO) affects how older adult listeners with hearing loss encode, remember, and retrieve what they hear. ACKNOWLEDGMENTS: Supported by research grants R29 DC01643 and R29 DC00792 awarded to P.F.S. by the National Institute on Deafness and Other Communication Disorders, National Institutes of Health.
The identification of specific products or scientific instrumentation is considered an integral part of the scientific endeavor and does not constitute endorsement or implied endorsement on the part of the authors, DoD, or any component agency. The views expressed in this article are those of the authors and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, the Department of State, or U.S. Government. Received December 11, 2017; accepted August 2, 2019. Address for correspondence: Lynn M. Bielski, Speech Pathology & Audiology Department, Ball State University, HPB 410, Muncie, IN 47306, USA. E-mail: lmbielski@bsu.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
An Online Wideband Acoustic Immittance (WAI) Database and Corresponding Website. No abstract available |
Discrepancies in Hearing Thresholds between Pure-Tone Audiometry and Auditory Steady-State Response in Non-Malingerers Objectives: To evaluate discrepancies between pure-tone audiometry (PTA) and auditory steady-state response (ASSR) tests in non-malingerers and investigate brain lesions that may explain the discrepancies, especially in cases where the PTA threshold was worse than the estimated ASSR threshold. Design: PTA, speech audiometry, auditory brainstem response, ASSR, and neuroimaging tests were carried out on individuals selected from 995 cases of hearing impairment. Among these, medical records of 25 subjects (19 males, 6 females; mean age = 46.5 ± 16.0 years) with a significant discrepancy between PTA and estimated ASSR thresholds were analyzed retrospectively. To define acceptable levels of discrepancy between PTA and ASSR hearing thresholds, 56 patients (27 males, 29 females; mean age = 53.0 ± 13.6 years) were selected for the control group. Magnetic resonance images, magnetic resonance angiograms, and positron emission tomograms were reviewed to identify any neurologic abnormalities. Results: Pathologic brain lesions were found in 20 cases (80%) in the study group, all of which showed a significant discrepancy in hearing threshold between PTA and ASSR. Temporal lobe lesions were found in 14 cases (70%), frontal lobe lesions in 12 (60%), and thalamic lesions without frontal or temporal lobe involvement in 2 cases (10%). On repeated PTA and ASSR tests a few months later, the discrepancy between ASSR and behavioral hearing thresholds was reduced or resolved in 6 cases (85.7%). Temporal lobe lesions were found in all 3 cases in which the estimated ASSR threshold worsened with an unchanged PTA threshold, and frontal lobe lesions were found in all 3 cases in which the PTA threshold improved but the estimated ASSR threshold was unchanged. No neurological lesions were found in the 5 cases (20%) of patients with a discrepancy between ASSR and behavioral hearing thresholds. Conclusions: Clinicians should not rely exclusively on ASSR, especially in cases with central nervous system lesions involving the temporal lobe, frontal lobe, or thalamus. If no lesions are found in a neuroimaging study of a patient with a discrepancy between PTA thresholds and estimated ASSR thresholds, further functional studies of the brain may be needed. If clinicians encounter patients with a discrepancy between PTA thresholds and estimated ASSR thresholds, an evaluation for brain lesions and repeat audiologic tests are recommended rather than relying solely on ASSR. The authors have no conflicts of interest to declare. Received January 7, 2019; accepted July 15, 2019. Address for correspondence: Dong-Hee Lee, Department of Otolaryngology-Head and Neck Surgery, Uijeongbu St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 271 Cheonbo Street, Uijeongbu City, Gyeonggi-do, 11765, Republic of Korea. E-mail: leedh0814@catholic.ac.kr. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
Deep Learning in Automated Region Proposal and Diagnosis of Chronic Otitis Media Based on Computed Tomography Objectives: The purpose of this study was to develop a deep-learning framework for the diagnosis of chronic otitis media (COM) based on temporal bone computed tomography (CT) scans. Design: A total of 562 COM patients with 672 temporal bone CT scans of both ears were included. The final dataset consisted of 1147 ears, each assigned a ground truth label from one of 3 conditions: normal, chronic suppurative otitis media, or cholesteatoma. A random selection of 85% of the dataset (n = 975) was used for training and validation. The framework contained two deep-learning networks with distinct functions: a region proposal network for extracting regions of interest from 2-dimensional CT slices, and a classification network for diagnosis of COM based on the extracted regions. The performance of this framework was evaluated on the remaining 15% of the dataset (n = 172) and compared with that of 6 clinical experts who read the same CT images only. The panel included 2 otologists, 3 otolaryngologists, and 1 radiologist. Results: The area under the receiver operating characteristic curve of the artificial intelligence model in classifying COM versus normal was 0.92, with sensitivity (83.3%) and specificity (91.4%) exceeding the averages of the clinical experts (81.1% and 88.8%, respectively). In a 3-class classification task, this network had higher overall accuracy (76.7% versus 73.8%), higher recall rates in identifying chronic suppurative otitis media (75% versus 70%) and cholesteatoma (76% versus 53%) cases, and superior consistency on duplicated cases (100% versus 81%) compared with the clinical experts. Conclusions: This article presented a deep-learning framework that automatically extracted the region of interest from two-dimensional temporal bone CT slices and made a diagnosis of COM. The performance of this model was comparable to, and in some cases superior to, that of the clinical experts. These results imply a promising prospect for clinical application of artificial intelligence in the diagnosis of COM based on CT images. ACKNOWLEDGMENTS: We appreciate the suggestions provided to improve our methodology by Dayi Bian, Shunxing Bao, and Yiyuan Zhao from Vanderbilt University, and Chenghua Tao from Indiana University at Bloomington. We acknowledge Maria Powell from Vanderbilt University Medical Center for her invaluable opinions on manuscript writing. Supported by the National Key Research and Development Program of China (2016YFC0905200, 2016YFC0905202) to F.-L.C.; the National Natural Science Foundation of China (NSFC) (Grant Nos. 81420108010 to F.-L.C., and 81771017 and 81570920 to D.-D.R.); the “Zhuo-Xue Plan” of Fudan University to D.-D.R.; and the Shanghai Outstanding Young Medical Talent Program to D.-D.R. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Y.L. conceptualized and designed the study, reviewed and analyzed the data, performed computer programming, evaluated the AI model, and wrote and edited the manuscript; Y.-M.W. retrieved and validated the data; Y.-S.C. retrieved the data and evaluated the AI model; Z.-Y.H. retrieved and validated the data; J.-M.Y., J.-H.X., and Z.-C.C. evaluated the AI model; F.-L.C. provided funding support and data resources; D.-D.R. conceptualized the study, provided funding support and data resources, administered the project, and edited the manuscript.
All authors have reviewed, discussed, and approved the manuscript. The authors have no conflicts of interest to disclose. Received May 28, 2019; accepted July 22, 2019. Address for correspondence: Yike Li, Department of Otolaryngology, Vanderbilt University Medical Center, 1313 21st Avenue South, 602 Oxford House, Nashville, TN 37232, USA. E-mail: yike.li.1@vumc.org; Dong-Dong Ren, Department of Otorhinolaryngology, Eye and ENT Hospital, 83 Fenyang Road, Shanghai, 200031, China. E-mail: dongdongren@fudan.edu.cn This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
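A minimal sketch of the hold-out evaluation metrics quoted above (area under the ROC curve, sensitivity, specificity) for a binary COM-versus-normal classifier, using scikit-learn; the labels and scores below are synthetic stand-ins, not the study's predictions.

```python
# Minimal sketch of AUC / sensitivity / specificity for a binary
# COM-vs-normal classifier; y_true and y_score are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=172)            # hypothetical 15% hold-out set of ears
y_score = np.clip(0.3 + 0.5 * y_true + rng.normal(0.0, 0.2, size=172), 0.0, 1.0)
y_pred = (y_score >= 0.5).astype(int)

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```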
Association Between Saccule and Semicircular Canal Impairments and Cognitive Performance Among Vestibular Patients Objectives: Growing evidence suggests that vestibular function impacts higher-order cognitive ability such as visuospatial processing and executive functioning. Despite evidence demonstrating vestibular functional impairment impacting cognitive performance, it is unknown whether cognitive ability is differentially affected according to the type of vestibular impairment (semicircular canal [SCC] versus saccule) among patients with diagnosed vestibular disease. Design: Fifty-four patients who presented to an academic neurotologic clinic were recruited into the study. All patients received a specific vestibular diagnosis. Forty-one patients had saccule function measured with the cervical vestibular-evoked myogenic potential, and 43 had SCC function measured using caloric irrigation. Cognitive tests were administered to assess cognitive performance among patients. One hundred twenty-five matched controls were recruited from the Baltimore Longitudinal Study of Aging to compare cognitive performance in patients relative to age-matched healthy controls. Results: Using multivariate linear regression analyses, patients with bilaterally absent cervical vestibular-evoked myogenic potential responses (i.e., bilateral saccular impairments) were found to take longer in completing the Trail-Making test (β = 25.7 sec, 95% confidence interval = 0.3 to 51.6) and to make significantly more errors on the Benton Visual Retention test part-C (β = 4.5 errors, 95% confidence interval [CI] = 1.2 to 7.8). Patients with bilateral SCC impairment were found to make significantly more errors on the Benton Visual Retention test part-C (β = 9.8 errors, 95% CI = 0.2 to 19.4). From case–control analysis, for each SD difference in Trail-Making test part-B time, there was a corresponding 142% increase in odds of having vestibular impairment (odds ratio = 2.42, 95% CI = 1.44 to 4.07). Conclusions: These data suggest that bilateral saccule and SCC vestibular impairments may significantly affect various domains of cognitive performance. Notably, the cognitive performance in patients in this study was significantly poorer relative to age-matched healthy adults. Cognitive assessment may be considered in patients with saccule and SCC impairments, and cognitive deficits in vestibular patients may represent an important target for intervention. ACKNOWLEDGMENTS: Supported in part by the National Institutes of Health (NIDCD K23 DC013056, NIDCD T32 DC000023). K.P. interviewed clinic patients, collected data, assisted in analysis, and wrote the article. D.P. recruited and interviewed clinic patients and collected data. E.W., R.K., and B.K. assisted in editing the article. E.W. performed most of the analyses. Y.A. designed the project and edited the entire article. The authors have no conflicts of interest to disclose. Received March 29, 2019; accepted July 25, 2019. Address for correspondence: Kevin Pineault, Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, 2308 E. Fairmount Ave., Baltimore, MD 21224, USA. E-mail: kpineau1@jhmi.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
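As a small worked check of the case–control result quoted above, the percent increase in odds per SD follows directly from the reported odds ratio; a minimal sketch of that arithmetic:

```python
# The quoted 142% increase in odds per SD of Trail-Making test part-B time
# follows from the reported odds ratio: (OR - 1) x 100.
odds_ratio = 2.42
percent_increase = (odds_ratio - 1.0) * 100.0   # = 142%
print(f"{percent_increase:.0f}% increase in odds per SD")
```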