  • URL: http://hdl.handle.net/10402/era.26038
  • Title: Reliability of Online Scoring of First Mentions in the Edmonton Narrative Normative Instrument
  • Author(s): Abraham, Sandia; Shaw, Kendra
  • Contributor(s): Schneider, Phyllis; Cummine, Jacqueline
  • Subject(s): reliability; online; Edmonton Narrative Norms Instrument (ENNI); First Mentions (FM); scoring
  • Date: 2011/07/07
  • Type: Report
  • Language: English
  • Format: Microsoft Word
  • Size: 148480 bytes
  • Abstract: When a story is told, referring expressions are used to introduce referents (characters and objects). This must be done so that the listener clearly understands that the character or object is new to the story (Schneider & Hayward, 2010). The ability to use referents correctly, according to the shared physical context and the preceding linguistic context, develops throughout the early school years (Hickmann, 2003). The term First Mentions (FM) refers to the referring expressions children use when introducing characters and objects while telling a story (Schneider & Hayward, 2010). Recently, a scoring system was developed for the Edmonton Narrative Norms Instrument (ENNI; Schneider, Dubé, & Hayward, 2002) to evaluate the appropriateness and sophistication of FMs through transcription analysis (Schneider & Hayward, 2010). The resulting norms allow speech-language pathologists to differentiate between typically developing children and children with specific language impairment (Schneider & Hayward, 2010). This research examined whether the scoring system can be used reliably while listening to recorded stories, without transcribing them. To determine reliability, 41 narratives from the original ENNI study (Schneider & Hayward, 2010) were scored 'online': the FM scoring system was applied while listening to the recordings. Cohen's Kappa coefficient was used to measure agreement between transcription-based and online FM scoring; this statistic assesses inter-rater agreement while adjusting for the chance probability of occurrence of each FM category (Rosner, 2006). Results indicated strong agreement (κ = .874) between the two analysis types, suggesting that online FM scoring is appropriate for use by speech-language pathologists.
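The agreement statistic used above, Cohen's Kappa, corrects observed agreement between two raters for the agreement expected by chance given each rater's category frequencies. As a minimal sketch (not the authors' actual analysis code), the computation could look like this in Python:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' category labels.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is derived from each rater's marginal category frequencies.
    """
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("need two equal-length, non-empty label sequences")
    n = len(rater_a)
    # Proportion of items on which the two raters agree exactly.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of marginal frequencies, summed over categories.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)
```

For example, two raters who assign identical labels get κ = 1.0, while the reported κ = .874 would indicate agreement well above the chance baseline. In practice a library implementation such as scikit-learn's `cohen_kappa_score` would typically be used instead of hand-rolled code.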