
Computational modeling of human isolated auditory word recognition using DIANA

  • Author(s) / Creator(s)
  • Abstract
    In recent years, computational modeling has proved to be an essential tool for investigating the cognitive processes underlying speech perception (see, e.g., Scharenborg & Boves, 2010). Here we address the question of how an end-to-end computational model that uses the acoustic signal as input simulates the behavioral responses of actual participants. We used the Massive Auditory Lexical Decision (MALD) database recordings, comprising 26,800 isolated words produced by a single male native speaker of English. MALD response data came from 232 native speakers of English, with each participant responding to a subset of the recorded words in an auditory lexical decision experiment (Tucker et al., submitted). We applied DIANA, a recently developed end-to-end computational model of word perception (Ten Bosch et al., 2013; Ten Bosch et al., 2015), to model the MALD response latency data. DIANA takes the acoustic signal as input, activates internal word representations without assuming prelexical categorical decisions, and outputs estimated response latencies and lexicality judgements. We report the results of the participant-to-model comparison and discuss the simulated between-word competition as a function of time in the DIANA model. (A schematic sketch of this kind of activation-and-decision process is given below the record.)

  • Date created
    2017-12-07
  • Subjects / Keywords
  • Type of Item
    Conference/Workshop Poster
  • DOI
    https://doi.org/10.7939/R3C24R24D
  • License
    Attribution-NonCommercial-NoDerivatives 4.0 International
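
The abstract describes DIANA as mapping the acoustic signal onto competing internal word activations and reading out a response latency and lexicality judgement once a decision criterion is reached. The sketch below is a purely illustrative toy of that general idea, not the DIANA implementation: the evidence-accumulation rule, the softmax competition, the decision threshold, and all names and parameters (`simulate_lexical_decision`, `frame_scores`, `threshold`, `frame_ms`) are assumptions introduced here to make the idea concrete.

```python
import numpy as np

def simulate_lexical_decision(frame_scores, threshold=0.9, frame_ms=10.0):
    """Accumulate per-word evidence frame by frame; respond as soon as the
    best candidate's normalized activation exceeds `threshold`.

    frame_scores : (n_frames, n_words) array of per-frame match scores
                   between the incoming signal and each candidate word
                   (hypothetical input representation, not DIANA's).
    Returns (is_word, response_latency_ms, winning_word_index).
    """
    activations = np.zeros(frame_scores.shape[1])
    for t, scores in enumerate(frame_scores):
        activations += scores                          # accumulate evidence
        e = np.exp(activations - activations.max())    # numerically stable softmax
        probs = e / e.sum()                            # between-word competition
        if probs.max() >= threshold:                   # decision criterion reached
            return True, (t + 1) * frame_ms, int(probs.argmax())
    # No candidate reached criterion by the end of the signal: call it a nonword.
    return False, frame_scores.shape[0] * frame_ms, -1

# Toy usage: 5 candidate words, 60 frames of noisy scores, word 2 fits best.
rng = np.random.default_rng(0)
scores = rng.normal(0.0, 0.1, size=(60, 5))
scores[:, 2] += 0.2
print(simulate_lexical_decision(scores))
```

Under these assumed settings, the simulated latency is simply the time at which the winning word's share of activation crosses the threshold, which is one simple way to produce the kind of between-word-competition-over-time trajectory the abstract discusses.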