A comparison of machine learning algorithms and human listeners in the identification of phonemic contrasts

Paul Reid, Ksenia Gnevsheva, Hanna Suominen

Wednesday, December 14th, 2022, 2.30–3.00 pm

Abstract

To elucidate the processes by which automatic speech recognition (ASR) architectures reach transcription decisions, our study compared human and ASR responses to stimuli with manipulated cues for stop manner (burst, silence, and vocalic onset) and voicing (voice onset time, aspiration amplitude, and vocalic onset). Fourteen human participants and two ASR systems completed a forced-response identification task. Results indicated that the cues were perceptually significant for the human participants and, though weighted differently, were also significant predictors of ASR output. These findings suggest that ASR systems may rely on the same key acoustic information as human listeners for phonemic classification.
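For readers who want a concrete sense of the ASR side of such a forced-response task, the sketch below runs one stimulus through a pretrained model and maps the free transcription onto a two-alternative response set. This is a minimal illustration under stated assumptions, not the study's pipeline: the wav2vec 2.0 checkpoint, the `classify_stimulus` helper, the stimulus file path, and the substring-based response mapping are all hypothetical choices for demonstration, and are not the two ASR systems compared in the study.

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Pretrained English CTC model (illustrative choice, not one of the study's systems).
MODEL_ID = "facebook/wav2vec2-base-960h"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

def classify_stimulus(path: str, choices=("PA", "BA")) -> str | None:
    """Transcribe one stimulus and force the output into a response set.

    `choices` mimics a forced-response task: the model's free transcription
    is mapped onto the first matching alternative (hypothetical scheme).
    """
    waveform, sr = torchaudio.load(path)
    if sr != 16_000:  # this model expects 16 kHz input
        waveform = torchaudio.functional.resample(waveform, sr, 16_000)
    inputs = processor(waveform.squeeze().numpy(),
                       sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    transcript = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
    # Forced-choice mapping: return the first alternative found in the transcript.
    for choice in choices:
        if choice in transcript:
            return choice
    return None  # no match; would be logged as an out-of-set response

# Example: identify a VOT-manipulated /pa/-/ba/ stimulus (hypothetical file).
# print(classify_stimulus("stimuli/vot_20ms.wav"))
```

In a full comparison of this kind, each manipulated cue value would be logged alongside the model's response so that cue weights for the ASR output could be estimated with the same kind of statistical model (e.g., logistic regression) applied to the human identification data.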