Voluntary vs. compulsory student evaluation of clerkships: effect on validity and potential bias.


Aoun Bahous S, Salameh P, Salloum A, Salameh W, Park YS, Tekian A,



BMC Medical Education. 2018 Jan 5;18(1):9. doi: 10.1186/s12909-017-1116-8.
Abstract
BACKGROUND
Students' evaluations of their learning experiences can provide a useful source of information about clerkship effectiveness in undergraduate medical education. However, low response rates in clerkship evaluation surveys remain an important limitation. This study examined the impact of increasing response rates through a compulsory approach on validity evidence.

METHODS
Data included 192 responses obtained voluntarily from 49 third-year students in 2014-2015, and 171 responses obtained compulsorily from 49 students in the first six months of the following year, at one medical school in Lebanon. Evidence supporting internal structure and response process validity was compared between the two administration modalities. The authors also tested for potential bias introduced by the compulsory approach by examining students' responses to a sham item added to the last survey administration.

RESULTS
Response rates increased from 56% in the voluntary group to 100% in the compulsory group (P < 0.001). Students in both groups provided comparable clerkship ratings, except for one clerkship that received a higher rating in the voluntary group (P = 0.02). Respondents in the voluntary group had higher academic performance than the compulsory group, but this difference diminished when whole-class grades were compared. Reliability of ratings was adequately high and comparable between the two consecutive years. Testing for non-response bias in the voluntary group showed that females responded more frequently in two clerkships. Testing for authority-induced bias revealed that students might complete the evaluation randomly, without attention to content.

CONCLUSIONS
While increasing response rates is often a policy requirement aimed at improving the credibility of ratings, using authority to enforce responses may not increase reliability and can raise concerns about the meaningfulness of the evaluation. Administrators are urged to consider not only response rates, but also the representativeness and quality of responses when administering evaluation surveys.
