
Validation of a performance assessment instrument in problem-based learning tutorials using two cohorts of medical students




Publisher: Springer Journals
Copyright: © 2015 Springer Science+Business Media Dordrecht
Subject: Education; Medical Education
ISSN: 1382-4996
eISSN: 1573-1677
DOI: 10.1007/s10459-015-9632-y
PMID: 26307371

Abstract

Although problem-based learning (PBL) has been widely used in medical schools, few studies have addressed the assessment of PBL processes using validated instruments. This study examined the reliability and validity of an instrument assessing PBL performance in four domains: Problem Solving, Use of Information, Group Process, and Professionalism. Two cohorts of medical students (N = 310) participated in the study, with 2 years of archived PBL evaluation data rated by a total of 158 faculty raters. Reliability was examined through analyses based on generalizability theory. Validity was examined following the Standards for Educational and Psychological Testing, evaluating content validity, response processes, construct validity, predictive validity, and the relationship to the training variable. For construct validity, correlations of PBL scores with six other outcome measures were examined: the Medical College Admission Test, United States Medical Licensing Examination (USMLE) Step 1, National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination, NBME Comprehensive Clinical Science Examination, Clinical Performance Examination, and USMLE Step 2 Clinical Knowledge. Predictive validity was examined by using PBL scores to predict five medical school outcomes. The highest percentage of PBL total score variance was associated with students (60%), indicating that students in the study differed in their PBL performance. The generalizability and dependability coefficients were moderately high (Eρ² = .68, ϕ = .60), showing the instrument is reliable for ranking students and identifying competent PBL performers. The patterns of correlations between PBL domain scores and the outcome measures partially support construct validity. PBL performance ratings as a whole significantly (p < .01) predicted all the major medical school achievements.
Second-year PBL scores were significantly higher than first-year scores, indicating a training effect. These psychometric findings support the reliability and many aspects of the validity of PBL performance assessment using the instrument.
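The generalizability (Eρ²) and dependability (ϕ) coefficients reported above come from a generalizability-theory analysis: score variance is partitioned into components for students, raters, and their interaction, and the coefficients express how much of the observed-score variance is attributable to true differences between students. The sketch below is an illustration of that logic for the simplest case, a one-facet fully crossed students × raters design with synthetic data; it is not the authors' actual (more complex) design, and the function names `g_study` and `d_study` are hypothetical.

```python
import numpy as np

def g_study(scores):
    """Estimate variance components for a one-facet crossed
    persons x raters design via the standard ANOVA estimators.
    scores: 2-D array, rows = students, columns = raters."""
    n_p, n_r = scores.shape
    grand = scores.mean()
    p_means = scores.mean(axis=1)          # per-student means
    r_means = scores.mean(axis=0)          # per-rater means
    ms_p = n_r * ((p_means - grand) ** 2).sum() / (n_p - 1)
    ms_r = n_p * ((r_means - grand) ** 2).sum() / (n_r - 1)
    resid = scores - p_means[:, None] - r_means[None, :] + grand
    ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))
    var_pr = ms_pr                          # interaction + error
    var_p = max((ms_p - ms_pr) / n_r, 0.0)  # student (universe-score) variance
    var_r = max((ms_r - ms_pr) / n_p, 0.0)  # rater (leniency/severity) variance
    return var_p, var_r, var_pr

def d_study(var_p, var_r, var_pr, n_raters):
    """D-study coefficients for a design with n_raters raters per student:
    Erho2 treats rater effects as irrelevant to rank ordering (relative),
    phi counts them against absolute decisions, so phi <= Erho2."""
    e_rho2 = var_p / (var_p + var_pr / n_raters)
    phi = var_p / (var_p + (var_r + var_pr) / n_raters)
    return e_rho2, phi
```

For example, simulating 50 students rated by 5 raters and passing the estimated components to `d_study(..., n_raters=5)` yields an Eρ² and a ϕ analogous to the .68 and .60 reported in the abstract, with ϕ ≤ Eρ² because ϕ additionally penalizes between-rater variance.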

Journal

Advances in Health Sciences Education (Springer Journals)

Published: Aug 26, 2015
