Evaluation of Score Quality From an Assessment of Student Competencies

  • Author(s) / Creator(s)
  • Introduction
    • Health science professions have identified and documented competencies for practice in their respective areas (e.g., Canadian Association of Occupational Therapists, 2012)
    • Performance-based assessments are one method for assessing students' competencies within a simulated practice context in a relatively direct manner (Lane & Stone, 2006)
    • Assessments such as practical skills examinations typically require judgments to be made about aspects of a student's performance; these judgments often involve using rubrics to rate demonstrations of specified competencies
    • A scoring rubric is a tool for making qualitative ratings of authentic or complex student work
    • Generalizability Theory (Brennan, 1992) is an analytical framework that can be used to investigate the extent to which scores measure the intended competency versus varying due to other factors
    • It is important to know the extent to which rater scores are reliable and reflect more about a student's competencies than they do about a lack of quality in the rubric or inconsistencies across raters; the quality of an assessment task directly affects the quality of the evidence generated and, consequently, the strength of inferences made about the student's proficiencies
    • Examination of a professional program's assessment practice can serve multiple purposes, from ensuring the evidence generated supports inferences and decisions made about students' competencies to supporting decision-making processes related to administrations of the assessment

  • Date created
  • Subjects / Keywords
  • Type of Item
    Conference/Workshop Poster
  • DOI
  • License
    Attribution-NonCommercial 4.0 International
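
The Generalizability Theory analysis mentioned in the introduction can be illustrated with a minimal sketch of a one-facet G-study (persons crossed with raters). The ratings below are hypothetical, not data from the poster, and the design is simplified to a single rater facet; the poster's actual design may include additional facets (e.g., tasks or rubric items).

```python
import numpy as np

# Hypothetical rubric scores: rows = students (p), columns = raters (r)
scores = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 4],
    [2, 3, 3],
    [4, 4, 5],
], dtype=float)

n_p, n_r = scores.shape
grand = scores.mean()
p_means = scores.mean(axis=1)
r_means = scores.mean(axis=0)

# Mean squares from the two-way ANOVA without replication
ss_p = n_r * ((p_means - grand) ** 2).sum()
ss_r = n_p * ((r_means - grand) ** 2).sum()
ss_pr = ((scores - p_means[:, None] - r_means[None, :] + grand) ** 2).sum()
ms_p = ss_p / (n_p - 1)
ms_r = ss_r / (n_r - 1)
ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

# Expected-mean-square equations give the variance components
var_pr = ms_pr                        # residual (p x r interaction confounded with error)
var_p = max((ms_p - ms_pr) / n_r, 0)  # person (true-score) variance
var_r = max((ms_r - ms_pr) / n_p, 0)  # rater variance

# Relative generalizability coefficient for a mean over n_r raters
g_coef = var_p / (var_p + var_pr / n_r)
print(var_p, var_r, var_pr, g_coef)
```

A large rater or residual component relative to the person component signals that scores reflect rubric or rater inconsistencies more than student competency; a follow-up D-study would vary `n_r` in the coefficient to decide how many raters an administration needs.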