Use of Sentence Context and Speech Perception in Noise for Children Who Were Suspected for Auditory Processing Disorder

Article information

Audiol Speech Res. 2019;15(1):54-62
Publication date (electronic): 2019 January 31
doi: https://doi.org/10.21848/asr.2019.15.1.54
School of Audiology, Pacific University, Hillsboro, OR, USA
Correspondence: Kanae Nishi, School of Audiology, Pacific University, 333 SE 7th Ave, Suite 4450, Hillsboro, OR 97123, USA Tel: +1-971-217-9636 / Fax: +1-503-924-6704 / E-mail: knishi@pacificu.edu

Portions of this work were presented at the Annual Meeting of the American Auditory Society, Scottsdale, AZ in 2010.

Received 2018 September 17; Revised 2018 October 27; Accepted 2018 November 13.

Abstract

Purpose

Atypical difficulty perceiving speech in noise is one of the behavioral symptoms frequently reported for auditory processing disorder (APD). However, currently there is no consensus regarding the underlying mechanisms for this deficit. Motivated by a view that this is a cumulative result of acoustic-phonetic processing failures, the present study examined the effect of increasing linguistic complexity on speech perception in noise by children who failed audiologic screening for APD.

Methods

Listeners were 84 English-speaking children, 7 to 12 years old, with normal hearing. Nine children failed audiologic APD screening and were referred for further testing (children with referral); 75 were typically-developing peers. Using an adaptive procedure, the signal-to-noise ratios required for 70%-correct recognition of words and sentences (SNR70) were obtained for each listener to quantify any additional deficit due to sentence context when listening to speech in noise.

Results

Despite reported deficits in noise, and contrary to our prediction, no group difference in SNR70 was observed between children with referral and their typically-developing peers. Analysis for individual differences showed that SNR70 for only one child with referral was outside the bounds established for the typically-developing children.

Conclusion

Our results did not reflect the reported atypically compromised speech perception in noise by children with referral compared to their typically-developing peers, implying that these difficulties stem neither from a deficit in acoustic-phonetic processing nor from a deficit in the use of sentence context. The present results also suggest that detecting the speech-perception-in-noise deficit in APD requires more complex speech materials and/or acoustic environments.

INTRODUCTION

Listening in acoustically challenging environments can have a negative impact on learning outcomes for typically-developing school-age children (Sullivan et al., 2015). It is therefore not surprising that children with speech, language, and/or hearing-related learning disabilities face even greater challenges in the same environments. Unlike congenital hearing loss, which is often identified in infancy, such learning disabilities are typically not diagnosed until school age, when significant delays in academic performance become evident. In addition, increased difficulty perceiving speech in noise is experienced by more than one clinical population. Auditory processing disorder (APD) is one such disorder (Lagacé et al., 2010).

To date, specialized behavior-based diagnostic tools and test batteries for APD have been developed (e.g., SCAN) (Keith, 2000). The validity of such behavioral tests has been questioned (Cacace & McFarland, 2005). Similarly, a diagnostic gold standard for some of these deficits has not been established (Fortunato-Tavares et al., 2009; Marler et al., 2001), owing to individual differences among speech, language, and/or hearing-related learning disabilities. In addition, recent findings suggest cognitive deficits in individuals suspected of these disorders (Ahmmed et al., 2014; Tomlin et al., 2015). This makes it very difficult to separate APD from the other disorders based only on behavioral tests. In fact, a recent study by Brenneman et al. (2017) revealed that auditory processing abilities shared a significant amount of variance with cognitive abilities (23%) and with language abilities (29%). Although some professional societies (e.g., British Society of Audiology, 2018) acknowledge that children with atypical listening difficulties present with multimodal problems, other societies and organizations (American Academy of Audiology, 2010; Cameron et al., 2015; International Bureau for Audiophonologie, 2017) exclude cognition and language from the definition of APD and present it as a purely auditory disorder. The issue remains controversial, and much research is needed to tease out the multiple underlying mechanisms and modalities involved in listening difficulties labeled as APD.

At present, there is no consensus regarding the underlying mechanisms for increased noise susceptibility in individuals with APD. Regardless, intervention methods based on an assumption of inefficient spectrotemporal coding have been developed. Typically, these intervention methods use stimuli with acoustically modified/enhanced cues that are gradually changed to natural levels as treatment progresses. Studies that evaluated these methods showed mixed results. Some found significant improvements as well as changes in neural coding in the brain (Warrier et al., 2004), but others found no significant benefit of cue enhancement or training using such stimuli (Fey et al., 2011). However, as Studdert-Kennedy & Mody (1995) pointed out early on, “perception of temporal properties of events” is often confused with “processing of rapidly presented stimuli” in the intervention methods or some studies supporting the spectrotemporal view.

The idea that deficits in spectrotemporal coding account for increased noise susceptibility also has mixed evidence. While some studies show no correlation between psychoacoustic measures of either spectral or temporal perception and speech recognition for listeners with normal hearing (Surprenant & Watson, 2001), children with specific language impairments (Bishop et al., 1999), or APD (Dawes et al., 2009), frequency pattern discrimination ability in children with specific language impairment has been reported to correlate significantly with their ability to understand syntactically complex sentences (Fortunato-Tavares et al., 2009). Studies that did not find an association between speech perception and psychoacoustic measures support the view that basic auditory skills in individuals with increased noise susceptibility are intact and that the listening difficulties may be secondary to faulty processing of the acoustic cues in the speech signal that determine phoneme identity (Studdert-Kennedy, 2002; Studdert-Kennedy & Mody, 1995). This acoustic-phonetic view of increased noise susceptibility challenges the auditory-specific aspect of the definition of APD, suggesting that the deficits underlying APD entail language abilities and that APD diagnostic tools might benefit from the inclusion of linguistically richer materials (Lagacé et al., 2010).

The present study was motivated by the acoustic-phonetic view. Nishi (2018) demonstrated that a brief speech-in-noise test can classify differences in listening strategy among highly proficient adult second language learners. The same speech-in-noise test was used in the present study to compare the effect of linguistic complexity of stimulus materials on speech recognition between children with reported listening difficulties who failed APD screening and were referred for further diagnostic workup and their typically-developing peers.

Briefly, in Nishi (2018), listener performance was measured as the signal-to-noise ratios (SNRs) required for 70%-correct recognition (SNR70) of simple sentences and words that had been extensively used in clinical audiologic testing. Individual listeners were statistically classified as “native-like” or “non-native” based on sentence and word SNR70 values. Results showed that individual differences among the native speakers were small and none of the native speakers were classified as non-native. In contrast, variance among the second-language learners was large, and approximately half of the second-language learners required considerably more favorable SNR70s than native speakers and were classified as non-native. Further analyses showed that both native speakers and “native-like” second-language learners’ SNR70s were lower (better) for sentences than for words, suggesting the benefit of sentence context. In contrast, one half of the “non-native” second-language learners required more favorable SNR70s for sentences than for words (no benefit of context, and additional deficit due to increased linguistic complexity).

Applying the same method as in Nishi (2018), the present study set out to examine whether children with listening difficulties who were referred for further diagnostic testing after failing APD screening indeed exhibit increased difficulty perceiving speech in noise compared to their typically-developing peers. An additional purpose of the present study was to assess whether children with APD referral and their typically-developing peers benefit from sentence context in a similar manner. The acoustic-phonetic view postulates that children with listening difficulties who failed APD screening would require higher SNRs than their typically-developing peers for both words and sentences. If children could effectively use context to perceive sentences, sentence SNR70 would be lower (better) than word SNR70. However, if the deficit in acoustic-phonetic cue processing triggers further disruption of access to linguistic content at higher processing levels, sentence SNR70 would be elevated (indicating poorer performance) compared to word SNR70. This outcome would support the acoustic-phonetic view and would suggest that audiological testing should include linguistically rich materials.

MATERIALS AND METHODS

Participants

A total of 84 listeners participated: 9 monolingual English-speaking children with teacher-reported listening difficulties who failed APD screening, resulting in a referral for a full APD test battery (8-11 years old; 4 female, 5 male), and 75 typically-developing age peers (7-12 years old; 31 female, 44 male) who served as the control group. The children with listening difficulties will be referred to as "children with referral" hereafter.

All children with referral were recruited through the Hearing and Balance Center at Boys Town National Research Hospital, to which they had been referred from school. They had academic achievement significantly below expectations despite normal hearing, age-appropriate cognitive abilities, and age-appropriate speech/language skills. The Boys Town clinic screened these children for APD following the clinic protocol in place at the time. Clinic visits occurred at least six months prior to participation in the present study. Prior to the visit, parents provided results of school testing and additional information [an intake form, Children's Home Inventory of Listening Difficulties (C.H.I.L.D.) (Anderson & Smaldino, 2000)]. Based on the information provided, the clinic determined whether a child met the inclusion criteria for APD screening. The criteria were: 1) fluent speaker of American English; 2) normal hearing with no evidence of middle ear dysfunction; 3) expressive and receptive language scores no lower than one standard deviation (SD) below the norm for chronological age; 4) no diagnosis of autism, pervasive developmental disorder, or related concerns as documented through assessments. During the clinic visit, children's hearing was tested, and the Auditory Continuous Performance Test (Keith, 1994) was administered to rule out attention issues. Children were then screened with four subtests of a widely used clinical test battery for APD, SCAN-C (Keith, 2000): Filtered Words (low-pass filtered word recognition), Auditory Figure-Ground (words in babble at +8 dB SNR), Competing Words (dichotic word test), and Competing Sentences (dichotic sentence test). The composite scores of all nine children with referral were outside the normal range on SCAN-C; therefore, they were referred to other specialized facilities for a full diagnostic test battery.

All but two typically-developing children had pure-tone audiometric thresholds ≤ 20 dB HL bilaterally at octave frequencies from 250 Hz to 8,000 Hz. The remaining two typically-developing children had unilaterally elevated thresholds (25-30 dB HL) at one of the frequencies. No child had articulation errors in English, as revealed by screening with the Bankson-Bernthal Quick Screen of Phonology (BBQSP) (Bankson & Bernthal, 1990). All typically-developing children had age-appropriate receptive vocabulary as determined with the Peabody Picture Vocabulary Test, 3rd ed. (Dunn & Dunn, 2007). The Sentence Completion component of the Comprehensive Assessment of Spoken Language (CASL) (Carrow-Woolfolk, 1999) and the Listening Comprehension component of the Oral and Written Language Scales (OWLS) (Carrow-Woolfolk, 1996) were administered to children with referral to document their skills in listening and using sentence context. Three children with referral (8 yr-4, 9 yr-2, and 11 yr-2) scored below the lower norm on the CASL or OWLS. In the results section, the findings for these children are summarized separately from those of the other children with referral. Table 1 shows test scores and demographics of the children with referral.

Demographics, standard test scores, and performance in quiet for children with referral

Stimuli

Speech stimuli were the same as those used by Nishi (2018). Briefly, they were words from the Phonetically Balanced Kindergarten (PBK) test (Haskins, 1949) and the sentences from the Bamford-Kowal-Bench (BKB) Standard Sentence Test (Bench & Bamford, 1979). An adult female Midwestern dialect speaker of American English recorded all speech materials.

The PBK test is a speech discrimination test consisting of four lists of 50 monosyllabic words that are phonetically balanced and familiar to kindergarten-age children (Haskins, 1949). For the present study, the words from lists 1, 3, and 4 (150 words) were used for the listening task. The BKB test consists of 320 simple everyday sentences that are syntactically and semantically appropriate and contain three to four keywords each. These sentences have been used extensively in the USA to evaluate speech perception in noise for children between 5 and 15 years old.

PBK words and BKB sentences were chosen for their age-appropriateness and low likelihood of learning effects (only 26 words were shared between the PBK words and BKB sentences). Furthermore, a subset of these materials was successfully used by Stelmachowicz et al. (2010) to examine the influence of linguistic context on speech perception in noise for 5-to-10 year-old children with mild-to-moderate hearing loss. Although nonsense words would be more effective in examining acoustic-phonetic cue processing, nonsense materials were avoided to minimize the influence of unnatural phonotactics and confusion with real words, both of which were expected to have a greater negative influence on children with referral. To assess performance in quiet, 25 PBK words (half of List 4) and 20 BKB sentences (Lists 1A & 1B) were used; none of these items were used for speech-in-noise testing.

Procedure

Following informed consent, a hearing screening as well as speech and language tests were administered to each child. In addition, children with referral were screened with 25 PBK words and 20 BKB sentences that were not test stimuli to ensure good performance in quiet. The cutoff score for words (80%) was established in reference to the critical differences from 96%-correct recognition (equivalent to one error on a 25-item test) (Raffin & Thornton, 1980). No such critical-difference reference exists for the BKB sentences.
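The logic behind a critical-difference cutoff can be illustrated with a short calculation. This is a sketch only; the study relied on the published critical differences of Raffin & Thornton (1980), not on this code. It estimates how improbable a score at or below the 80% cutoff (20 of 25 words) would be for a listener whose true recognition ability is 96% correct:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): the chance of observing
    at most k correct responses out of n items."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Probability that a listener whose true score is 96% correct
# (about one error expected on a 25-item list) scores 20/25 or worse:
p_fail = binom_cdf(20, 25, 0.96)
```

Because this probability is small (below 0.01), a score at or below 80% is very unlikely to come from a listener performing at the 96% level, which is the sense in which the cutoff marks a critical difference.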

Using the same procedure as in Nishi (2018), the listening task was administered to individual listeners in a sound-attenuated booth. Listeners were instructed to repeat the speech stimuli exactly as heard, even if the stimuli did not make sense; however, listeners were told that all words in both sets of materials were meaningful. Responses were scored online by an experimenter who was a native speaker of American English. PBK words were scored as correct only when the entire word was correct. BKB sentences were scored as correct only when all keywords in a sentence were repeated in the correct order. In the case of unclear/ambiguous responses, the intended answer was confirmed by asking the child to spell or define words (with older children) or, for children who were not proficient spellers, by asking for clarification using the results of the BBQSP (e.g., for /f/-/θ/ confusions, "Did you mean /f/ as in 'off' or /θ/ as in 'bath'?").
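The keyword-scoring rule for sentences (all keywords repeated, in order) can be sketched as follows; this is a hypothetical illustration, and the function name and lower-casing are assumptions, not part of the study's actual software:

```python
def score_sentence(response, keywords):
    """True only if all keywords occur in the response in their
    original order; non-keyword words (articles, fillers) are ignored."""
    remaining = iter(w.lower() for w in response.split())
    # `kw in remaining` scans the iterator forward, so each keyword
    # must appear after the previously matched one
    return all(kw.lower() in remaining for kw in keywords)
```

For example, "the dog chased the cat" is scored correct against the keywords "dog", "chased", "cat", while "the cat chased the dog" is not, because the keywords appear out of order.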

Stimuli were presented binaurally via earphones (Sennheiser HD25; Sennheiser, Wedemark, Germany) in a background of speech-shaped noise (ANSI, 1997) at a fixed overall rms level of 65 dB SPL. Stimulus presentation and response acquisition were controlled by a computer program developed at Boys Town National Research Hospital (Behavioral Auditory Research Tests). Listener performance was measured as the signal-to-noise ratio required for 70%-correct performance (SNR70), obtained using an adaptive-tracking method (Levitt, 1971). Specifically, each block started at 10 dB SNR; the noise level was increased following two consecutive correct responses and decreased following each incorrect response. After the first four reversals, the step size was decreased from 4 dB to 2 dB, and tracking stopped after four reversals at the smaller step size. The SNR70 was calculated as the mean SNR across all reversals obtained with the smaller step size. The total number of trials for each listener varied depending on how rapidly the stopping criterion was met. This procedure was repeated six times - three times for PBK words and three times for BKB sentences - and the SNR70 values for each material were averaged across the three blocks for each listener.
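The adaptive track described above, a two-down/one-up rule that converges near 70.7% correct (Levitt, 1971), can be sketched as follows. This is a simplified illustration, not the Boys Town software; `respond` stands in for the listener, returning True for a correct response at a given SNR:

```python
def run_track(respond, start_snr=10.0):
    """Two-down/one-up adaptive track: SNR drops after two consecutive
    correct responses and rises after each error. The step size starts
    at 4 dB and halves to 2 dB after the first four reversals; SNR70 is
    the mean SNR at the four reversals taken with the 2-dB step."""
    snr, step = start_snr, 4.0
    correct_run, direction = 0, 0      # direction: -1 = down, +1 = up
    reversals, small_step_revs = 0, []
    while True:
        if respond(snr):
            correct_run += 1
            if correct_run < 2:
                continue               # need two correct before stepping down
            correct_run, move = 0, -1  # harder: lower the SNR
        else:
            correct_run, move = 0, +1  # easier: raise the SNR
        if direction and move != direction:
            reversals += 1
            if step == 2.0:
                small_step_revs.append(snr)
                if len(small_step_revs) == 4:
                    return sum(small_step_revs) / 4
            elif reversals == 4:
                step = 2.0             # switch to the smaller step size
        direction = move
        snr += move * step
```

Running the track with an idealized listener, e.g. `run_track(lambda snr: snr >= 0)`, shows the staircase oscillating around the listener's threshold before the stopping rule fires.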

Stimulus presentation was grouped by stimulus type - PBK words or BKB sentences. The order between PBK word and BKB sentence blocks was randomized across listeners. The three blocks within each stimulus type were presented in a random order. Within each block, the software chose the item for each trial in a random manner. The entire testing time to obtain SNR70 for words and sentences was less than 30 minutes combined per listener.

RESULTS

Results were analyzed for group and individual differences. This is because, as noted in a previous report (Allen & Allan, 2014), group analyses may not uncover individual-level differences even when the data for some individuals fall outside the typical range, and such differences may provide useful clinical insight.

Visual inspection of the data indicated that the SNR70 values for words and sentences for the two listeners with slightly elevated pure-tone thresholds in one ear did not differ from those of the other listeners. Therefore, results from these listeners were included in their respective groups in all analyses.

Figure 1 presents individual listeners' word and sentence SNR70s. The axes are plotted in reverse order to show performance from poorest (bottom/left) to best (top/right). The diagonal line indicates equivalent SNR70 for words and sentences. A data point above the diagonal line indicates that a listener's SNR70 was lower (better performance) for sentences than for words, suggesting that the listener could recognize speech correctly despite a higher noise level - i.e., a benefit of the context information in sentences. Figure 2 depicts the benefit of context information (word SNR70 - sentence SNR70).

Figure 1.

SNR70 for words and sentences for individual CWR (red diamond) and TD (gray circle). Diagonal line shows equal SNR70 between words and sentences. The axes are arranged so that better performance is shown at the top and right. Three children with referral with below norm scores on Comprehensive Assessment of Spoken Language and Oral and Written Language Scales are shown in magenta. SNR70: signal-to-noise ratios required for 70%-correct recognition of words and sentences, TD: typically-developing peers, CWR: children with referral.

Figure 2.

Benefit of context (word - sentence SNR70) for CWR and TD. Each symbol represents an individual listener. Positive value indicates sentence context facilitated better speech-in-noise perception. Numbers in parentheses show typically-developing peers in each age. SNR70: signal-to-noise ratios required for 70%-correct recognition of words and sentences, TD: typically-developing peers, CWR: children with referral.

As can be seen, regardless of group or age, there was more disparity in SNR70 values for the recognition of words (SD = 1.85) than sentences (SD = 1.15). No obvious difference was noted between groups, with the exception of one child with referral whose sentence SNR70 was higher (poorer performance) than that of the other children in either group and appeared to be outside typical limits, warranting further in-depth evaluation.

The majority of the children had lower SNR70s for sentences, suggesting that they could use sentence context effectively. There also was a trend of larger benefit of sentence context for poorer word recognition performers, and reduced benefit for those who required lower (better) SNR70s for words.

Group difference

First, to statistically evaluate the group difference, and prior to comparing the nine children with referral to the large group of typically-developing listeners, the SNR70 values for the typically-developing group were examined for an age effect. Levene's test of homogeneity of variance suggested that variances within each of the six ages (7-, 8-, 9-, 10-, 11-, and 12-year-olds) did not differ significantly for either words [F(5, 69) = 0.939, p = 0.462] or sentences [F(5, 69) = 0.937, p = 0.441]. Therefore, a mixed-design analysis of variance (ANOVA), with age as a between-subjects variable and context (word vs. sentence) as a within-subjects variable, was performed to compare the six ages. Results of the ANOVA indicated that an age-related difference was not present [F(5, 69) = 1.961, p = 0.095, ηp2 = 0.124] and that the age × context interaction was not significant [F(5, 69) = 0.076, p = 0.996, ηp2 = 0.005]. However, a significant context effect [F(1, 69) = 217.363, p < 0.001, ηp2 = 0.759] suggested that, on average, typically-developing children required a more favorable SNR70 for words (-0.25 dB, SD = 1.92) than for sentences (-4.23 dB, SD = 1.07).
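Levene's statistic used above is simply a one-way ANOVA F computed on absolute deviations of scores from their group means. A minimal sketch follows; the actual analysis was presumably run in a standard statistics package, so this code is illustrative only:

```python
def levene_F(groups):
    """Levene's homogeneity-of-variance statistic: a one-way ANOVA F
    on absolute deviations of scores from their group means."""
    # transform each score to its absolute deviation from the group mean
    z = []
    for g in groups:
        m = sum(g) / len(g)
        z.append([abs(x - m) for x in g])
    n = sum(len(g) for g in z)                  # total observations
    k = len(z)                                  # number of groups
    grand = sum(sum(g) for g in z) / n
    means = [sum(g) / len(g) for g in z]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(z, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(z, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Groups with identical spread yield F near zero, while groups with very different spread inflate F; the F value is then compared against the F(k - 1, n - k) distribution for a p-value.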

This demonstrates that, despite the larger cognitive load associated with the number of words that must be processed and remembered in the correct order for sentences, it was more difficult for 7-to-12 year-old children to access acoustic-phonetic cues in noise when those cues occurred in single words, and that the contextual information in the BKB sentences provided extra cues for reconstructing the information masked by noise.

The next analysis examined the difference between children with referral and typically-developing peers. Levene's test of homogeneity of variance suggested that within-group variances were not different for either PBK words [F(1, 82) = 2.010, p = 0.160] or BKB sentences [F(1, 82) = 0.471, p = 0.494]. Results of an ANOVA, with group (children with referral vs. typically-developing peers) as a between-subjects variable and context (words vs. sentences) as a within-subjects variable, suggested no difference between groups [F(1, 82) = 2.029, p = 0.158, ηp2 = 0.024]. The interaction between context and group was also non-significant [F(1, 82) = 1.195, p = 0.278, ηp2 = 0.014]. Only the main effect of context (word vs. sentence) [F(1, 82) = 85.280, p < 0.001, ηp2 = 0.510] reached significance.

These results show that, despite frequent reports and mentions in clinical guides, there was no considerable difference between children with reported increased listening difficulties who failed APD screening (SCAN-C) and their typically-developing peers in terms of the SNRs required to recognize simple words and sentences at 70% accuracy.

Individual difference

To evaluate whether individual children with referral were within the group norm of typically-developing peers, linear discriminant analysis (LDA) (Klecka, 1980) was used in the subsequent analyses. LDA is a multidimensional correlational technique that allows quantifying differences among two or more groups using multiple measurements simultaneously. Using centers of gravity and within-group dispersion for each group in the input dataset, it computes weighted parameter values (discriminant functions) that maximize the distance between pre-specified groups. It then applies the data-driven group parameter weights to individual data points and calculates their probability of membership to each of the pre-specified groups.
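The core of a two-group LDA can be sketched in a few lines: a simplified Fisher discriminant with an equal-prior midpoint threshold. The study's analysis was presumably run in a statistics package, so the names here are illustrative; each data point is a pair such as [word SNR70, sentence SNR70]:

```python
def fisher_lda(group_a, group_b):
    """Return a classifier for 2-D points based on Fisher's two-group
    linear discriminant: project onto w = S_pooled^-1 (mean_a - mean_b)
    and split at the midpoint of the projected group means."""
    def mean2(rows):
        n = len(rows)
        return [sum(r[0] for r in rows) / n, sum(r[1] for r in rows) / n]

    ma, mb = mean2(group_a), mean2(group_b)
    # pooled within-group covariance (2 x 2)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((group_a, ma), (group_b, mb)):
        for r in rows:
            d0, d1 = r[0] - m[0], r[1] - m[1]
            s[0][0] += d0 * d0; s[0][1] += d0 * d1
            s[1][0] += d1 * d0; s[1][1] += d1 * d1
    dof = len(group_a) + len(group_b) - 2
    s = [[v / dof for v in row] for row in s]
    # w = S^-1 (ma - mb), using the closed-form 2x2 inverse
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [(s[1][1] * dm[0] - s[0][1] * dm[1]) / det,
         (-s[1][0] * dm[0] + s[0][0] * dm[1]) / det]
    # threshold at the projected midpoint of the two group means
    c = 0.5 * (w[0] * (ma[0] + mb[0]) + w[1] * (ma[1] + mb[1]))
    return lambda x: "A" if w[0] * x[0] + w[1] * x[1] > c else "B"
```

With well-separated clusters, the returned classifier assigns each group's own mean back to that group; new points (here, individual children) are then labeled by whichever side of the boundary they fall on.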

In the present study, a discriminant function was calculated for the data from typically-developing peers and children with referral to establish “typical” and “atypical” SNR70 values, respectively. Then regardless of their original group membership, using the discriminant functions, individual children were classified into a group to which they were most similar. Of particular interest here was the group membership of one child with referral whose SNR70 appeared to be outside the normal limits (Figure 1).

Results of the LDA showed that all children were classified as typically-developing, except one child with referral. An analysis evaluating the contributions of sentence and word SNR70 showed that sentence SNR70 [Wilks' Lambda = 0.932, F(1, 81) = 5.896, p = 0.017] contributed significantly to the classification of listeners, but word SNR70 [Wilks' Lambda = 0.999, F(1, 81) = 0.031, p = 0.861] did not.

These results suggest that, even at the individual level, only one of the nine children who had listening difficulties, failed APD screening, and were referred for further APD testing required a considerably more favorable noise level than typically-developing peers to achieve 70%-correct speech recognition. Specifically, sentence SNR70 was more sensitive than word SNR70 in discriminating this child with referral from the typically-developing peers.

DISCUSSION

Motivated by the view that listening difficulties may be secondary to faulty processing of the acoustic cues in the speech signal that determine phoneme identity (Studdert-Kennedy, 2002; Studdert-Kennedy & Mody, 1995), the present study examined how the linguistic complexity of speech materials affects speech recognition by children with reported listening difficulties who were referred for in-depth APD testing after failing APD screening. To quantify the benefit of sentence context, SNR70s were obtained for simple words and sentences commonly used in clinical testing with children. SNR70 values for children with referral were compared to those for typically-developing peers. Based on the previous finding that English-speaking adults require lower SNR70s for sentences than for words, possibly due to the benefit of sentence context (Nishi, 2018), it was hypothesized that a similar context benefit would be observed for children if they could effectively use sentence context. It was further hypothesized that, if the burden of difficulty with acoustic-phonetic cue processing causes a breakdown in context-independent processing of speech for children with referral, and their context-dependent processing is also compromised, then their SNR70 for sentences would be elevated (indicating poorer performance) compared to their word SNR70.

Contrary to our prediction, the present results showed that eight of the nine children who were classified as outside normal ranges and indicative of APD on SCAN-C (Keith, 2000) performed as well as typically-developing peers on the recognition of simple words and sentences in noise; that is, most children with referral presented with no deficit in either acoustic-phonetic processing of speech or the ability to use contextual information in sentences. Although based on a small sample, there are at least two explanations for this low occurrence of increased noise susceptibility and the absence of a difference between children with referral and typically-developing peers: 1) perception of speech in noise is not compromised for all children with reported listening difficulties in school and suspected of APD, because the breakdown occurs at higher stages of processing (e.g., comprehension, storage, etc.), and, as such, the task and materials used in the present study were too simplistic; 2) the reported struggle occurs in more complex daily listening situations, where reverberation and variable, unpredictable noise types and sources are present (e.g., school classrooms), and speech presented in speech-shaped noise does not adequately reflect that. These explanations are discussed further below.

Firstly, only two of the four subtests in SCAN-C (Filtered Words, Auditory Figure-Ground) approximate the method used here (masked speech), and the present study used relatively simple speech materials presented in a background of steady-state speech-shaped noise while tracking a relatively high speech recognition level (70% correct). This task and these materials could successfully classify adult second-language learners (Nishi, 2018), but they did not sufficiently challenge the children with referral (who were native English speakers). Nevertheless, sentence SNR70, but not word SNR70, was an effective measure for separating one child with referral from the typically-developing peers, indicating a deficit in this child's ability to use sentence context to fill in missing information when speech becomes degraded. For the other children with referral, the sensitivity of the test may improve when linguistically richer materials or more complex tasks are used. One way to improve test sensitivity is to mimic real-life listening conditions, in which speech signals may include sentences with complex structure and concepts and the task may involve higher levels of processing such as comprehension or manipulation of information. In fact, even tracking the SNR for 50%-correct or 30%-correct performance may make the task difficult enough to separate children with referral from typically-developing peers, although such a poor performance level can be too challenging for young children and may cause attention lapses and fatigue. At any rate, future studies will require careful separation of deficits in acoustic-phonetic awareness from deficits in higher-level processing of speech.

Secondly, listening environments where children report listening difficulties include auditory disruptions varying from heating and ventilation noise (steady, non-linguistic) to competing talkers (unpredictable, linguistic), along with variable levels of reverberation. It has been reported that a background of white noise alone can disrupt expressive word learning by typically-developing children (Riley & McGregor, 2012). The detrimental influence of noise can become even more complex when speech maskers are used. Leibold & Buss (2013) showed that, as opposed to a speech-shaped noise condition in which consonants were confused due to similarity in articulatory features, consonant confusion patterns in a two-talker masker condition indicated intrusions from the speech masker. Furthermore, even typically-developing children can be negatively affected by reverberant listening environments (Neuman et al., 2010), and in such environments children show a significantly reduced ability to use the brief periods of silence in speech-envelope-modulated noise to fill in missing information (Wróblewski et al., 2012). It is possible that the children with referral in the present study might have performed more poorly if more challenging listening conditions (lower SNR, more disruptive noise types, and reverberation) that mimic their learning environments had been used.

Thirdly, the present study included a small group of individuals who were referred for specialized APD testing based on reported listening difficulties in school and the results of APD screening administered by audiologists. Although other possible causes of poor academic performance in otherwise typically-developing children (e.g., hearing impairment, attention disorder, expressive or receptive language delay/disorder, second-language instruction, autism, and pervasive developmental disorder) were ruled out during screening, none of these individuals had received a definitive diagnosis. Therefore, the possibility exists that, despite their listening difficulties and poorer-than-expected academic performance, some or all of the children with referral in the present study might not have APD. Currently, there is no universally accepted method of screening for APD (American Academy of Audiology, 2010). However, as described in the introduction, increased noise susceptibility has been reported for a heterogeneous group of clinical populations; this suggests that, regardless of confirmed diagnosis, if susceptibility to noise is a major symptom shared among such groups, it should be observed with high frequency.

To sum up, the present study showed that, contrary to our prediction, children with reported listening difficulty in school who failed audiologic APD screening (warranting a referral for a full APD test battery) did not, as a group, require more favorable SNRs than their typically-developing peers to recognize words and sentences in noise. The present findings can be considered evidence against the spectrotemporal view of APD and suggest that the frequently reported increase in noise susceptibility is unlikely to be caused by an auditory deficit alone, but rather by higher-order cognitive, language, or related deficits, either alone or in combination with auditory deficits, thus supporting the acoustic-phonetic view. This, in turn, suggests that, although further investigation is necessary, testing for children suspected of APD should include linguistically richer materials and more ecologically valid testing paradigms.

Acknowledgements

The project described was supported by NIH Grants R03DC009334, P30DC004662, and P20GM109023. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. The authors thank all the participants for their time; Judy Kopun, Elizabeth Heinrichs-Graham, Meredith Spratford, Jessica Lewis, Kendell Simms, Kimberly Joyce, and Noel Griffith for their assistance in participant recruitment and data collection; and Rachel Loveless for providing details of the clinical protocol used at Boys Town National Research Hospital. Comments from Lori Leibold and Stephen Neely on earlier versions of this work were much appreciated.

Notes

Ethical Statement

The Boys Town clinic does not provide full diagnostic tests for APD. These guidelines were used prior to 2010 for screening purposes only. All children whose SCAN-C scores fell in the "disordered" range were referred to other specialized institutions for full diagnostic testing.

Participant recruitment, testing, compensation, and data handling followed the protocol approved by the Institutional Review Board at the Boys Town National Research Hospital (IRB# 07-06XP).

Declaration of Conflicting Interests

There are no conflicts of interest.

Funding

NIH Grants R03DC009334, P30DC004662, and P20GM109023.

References

1. Ahmmed A. U., Ahmmed A. A., Bath J. R., Ferguson M. A., Plack C. J., Moore D. R.. 2014;Assessment of children with suspected auditory processing disorder: A factor analysis study. Ear and Hearing 35(3):295–305.
2. Allen P., Allan C.. 2014;Auditory processing disorders: Relationship to cognitive processes and underlying auditory neural integrity. International Journal of Pediatric Otorhinolaryngology 78(2):198–208.
3. American Academy of Audiology. 2010. August. Diagnosis, Treatment and Management of Children and Adults with Central Auditory Processing Disorder. American Academy of Audiology. Retrieved from https://www.audiology.org/publications-resources/document-library/central-auditory-processing-disorder/.
4. American National Standards Institute (ANSI). 1997. Methods for calculation of the Speech Intelligibility Index. ANSI/ASA S3.5-1997 (R2017) New York, NY: ANSI.
5. Anderson K. L., Smaldino J. J.. 2000. Children’s Home Inventory for Listening Difficulties: CHILD. Retrieved from http://home.earthlink.net/~karenlanderson/child.html.
6. Bankson N. W., Bernthal J. E.. 1990. Quick Screen of Phonology San Antonio, TX: Special Press.
7. Bench R. J., Bamford J.. 1979. Speech/Hearing Tests and the Spoken Language of Hearing-Impaired Children London: Academic Press.
8. Bishop D. V. M., Carlyon R. P., Deeks J. M., Bishop S. J.. 1999;Auditory temporal processing impairment: Neither necessary nor sufficient for causing language impairment in children. Journal of Speech, Language, and Hearing Research 42(6):1295–1310.
9. Brenneman L., Cash E., Chermak G. D., Guenette L., Masters G., Musiek F. E., et al. 2017;The relationship between central auditory processing, language, and cognition in children being evaluated for central auditory processing disorder. Journal of the American Academy of Audiology 28(8):758–769.
10. British Society of Audiology. 2018. February. Position Statement and Practice Guidance. Auditory Processing Disorder (APD). Retrieved from https://www.thebsa.org.uk/wp-content/uploads/2018/02/Position-Statement-and-Practice-Guidance-APD-2018.pdf.
11. Cacace A. T., McFarland D. J.. 2005;The importance of modality specificity in diagnosing central auditory processing disorder. American Journal of Audiology 14(2):112–123.
12. Cameron S., Glyde H., Dillon H., King A. M., Gillies K.. 2015;Results from a national central auditory processing disorder service: A real-world assessment of diagnostic practices and remediation for central auditory processing disorder. Seminars in Hearing 36(4):216–236.
13. Carrow-Woolfolk E.. 1996. Oral and Written Language Scales San Antonio, TX: Pearson.
14. Carrow-Woolfolk E.. 1999. Comprehensive Assessment of Spoken Language (CASLTM) San Antonio, TX: Pearson.
15. Dawes P., Sirimanna T., Burton M., Vanniasegaram I., Tweedy F., Bishop D. V. M.. 2009;Temporal auditory and visual motion processing of children diagnosed with auditory processing disorder and dyslexia. Ear and Hearing 30(6):675–686.
16. Dunn L. M., Dunn D. M.. 2007. Peabody Picture Vocabulary Test, Fourth Edition (PPVTTM-4) Bloomington, MN: NCS Pearson, Inc.
17. Fey M. E., Richard G. J., Geffner D., Kamhi A. G., Medwetsky L., Paul D., et al. 2011;Auditory processing disorder and auditory/language interventions: An evidence-based systematic review. Language, Speech, and Hearing Services in Schools 42(3):246–264.
18. Fortunato-Tavares T., Rocha C. N., Andrade C. R., Befi-Lopes D. M., Schochat E., Hestvik A., et al. 2009;Linguistic and auditory temporal processing in children with specific language impairment. Pró-Fono Revista de Atualização Científica 21(4):279–284.
19. Haskins H. A.. 1949. A phonetically balanced test of speech discrimination for children. Unpublished master's thesis, Northwestern University, Evanston, IL.
20. International Bureau for Audiophonologie. 2017. July. 10. Central Auditory Processing Disorders-Symptoms. Bureau International d’Audiophonologie. Retrieved from https://www.biap.org/en/recommandations/recommendations/tc-30-central-auditory-processes-cap.
21. Keith R. W.. 1994. Auditory Continuous Performance Test (ACPT) San Antonio, TX: The Psychological Corporation.
22. Keith R. W.. 2000. SCAN-C: Test for Auditory Processing Disorders in Children-Revised San Antonio, TX: Pearson.
23. Klecka W. R.. 1980. Discriminant Analysis Newbury Park, CA: SAGE Publications, Inc.
24. Lagacé J., Jutras B., Gagné J. P.. 2010;Auditory processing disorder and speech perception problems in noise: Finding the underlying origin. American Journal of Audiology 19(1):17–25.
25. Leibold L. J., Buss E.. 2013;Children’s identification of consonants in a speech-shaped noise or a two-talker masker. Journal of Speech, Language, and Hearing Research 56(4):1144–1155.
26. Levitt H.. 1971;Transformed up-down methods in psychoacoustics. The Journal of the Acoustical Society of America 49(2, Suppl. 2):467–477.
27. Marler J. A., Champlin C. A., Gillam R. B.. 2001;Backward and simultaneous masking measured in children with language-learning impairments who received intervention with Fast ForWord or Laureate Learning Systems software. American Journal of Speech-Language Pathology 10(3):258–268.
28. Neuman A. C., Wroblewski M., Hajicek J., Rubinstein A.. 2010;Combined effects of noise and reverberation on speech recognition performance of normal-hearing children and adults. Ear and Hearing 31(3):336–344.
29. Nishi K.. 2018;Proficiency, use of context and non-native speech perception in noise performance. Audiology and Speech Research 14(3):176–183.
30. Raffin M. J. M., Thornton A. R.. 1980;Confidence levels for differences between speech-discrimination scores. A research note. Journal of Speech, Language, and Hearing Research 23(1):5–18.
31. Riley K. G., McGregor K. K.. 2012;Noise hampers children’s expressive word learning. Language, Speech, and Hearing Services in Schools 43(3):325–337.
32. Stelmachowicz P., Lewis D., Hoover B., Nishi K., McCreery R., Woods W.. 2010;Effects of digital noise reduction on speech perception for children with hearing loss. Ear and Hearing 31(3):345–355.
33. Studdert-Kennedy M.. 2002;Deficits in phoneme awareness do not arise from failures in rapid auditory processing. Reading and Writing 15(1-2):5–14.
34. Studdert-Kennedy M., Mody M.. 1995;Auditory temporal perception deficits in the reading-impaired: A critical review of the evidence. Psychonomic Bulletin and Review 2(4):508–514.
35. Sullivan J. R., Osman H., Schafer E. C.. 2015;The effect of noise on the relationship between auditory working memory and comprehension in school-age children. Journal of Speech, Language, and Hearing Research 58(3):1043–1051.
36. Surprenant A. M., Watson C. S.. 2001;Individual differences in the processing of speech and nonspeech sounds by normal-hearing listeners. The Journal of the Acoustical Society of America 110(4):2085–2095.
37. Tomlin D., Dillon H., Sharma M., Rance G.. 2015;The impact of auditory processing and cognitive abilities in children. Ear and Hearing 36(5):527–542.
38. Warrier C. M., Johnson K. L., Hayes E. A., Nicol T., Kraus N.. 2004;Learning impaired children exhibit timing deficits and training-related improvements in auditory cortical responses to speech in noise. Experimental Brain Research 157(4):431–441.
39. Wróblewski M., Lewis D. E., Valente D. L., Stelmachowicz P. G.. 2012;Effects of reverberation on speech recognition in stationary and modulated noise by school-aged children and young adults. Ear and Hearing 33(6):731–744.


Figure 1.

SNR70 for words and sentences for individual CWR (red diamonds) and TD (gray circles). The diagonal line shows equal SNR70 for words and sentences. The axes are arranged so that better performance appears at the top and right. Three children with referral who scored below norms on the Comprehensive Assessment of Spoken Language and the Oral and Written Language Scales are shown in magenta. SNR70: signal-to-noise ratio required for 70%-correct recognition of words and sentences, TD: typically-developing peers, CWR: children with referral.

Figure 2.

Benefit of context (word SNR70 − sentence SNR70) for CWR and TD. Each symbol represents an individual listener. A positive value indicates that sentence context facilitated speech-in-noise perception. Numbers in parentheses show the number of typically-developing peers at each age. SNR70: signal-to-noise ratio required for 70%-correct recognition of words and sentences, TD: typically-developing peers, CWR: children with referral.

Table 1.

Demographics, standard test scores, and performance in quiet for children with referral

Group                   ID       Gender   CASL: sentence completion   OWLS: listening comprehension   Performance in quiet (%): Word   Sentence
Children with referral  8 yr-1   Female   96                          93                              100                              100
                        8 yr-2   Male     97                          96                              100                              100
                        8 yr-3   Male     102                         104                             96                               100
                        8 yr-4   Male     49*                         55*                             84                               90
                        9 yr-1   Female   114                         115                             96                               100
                        9 yr-2   Male     82*                         98                              100                              100
                        10 yr-1  Female   119                         85                              100                              100
                        11 yr-1  Male     112                         108                             96                               100
                        11 yr-2  Female   79*                         67*                             100                              100
* Scores below the lower norm cutoff of CASL/OWLS.

CASL: Comprehensive Assessment of Spoken Language, OWLS: Oral and Written Language Scales