Tuesday 29 September 2015

Research in Medical Education

I'm a medical education buff. I'm a true believer that to get the best healthcare system you need the best physicians and to get the best physicians you need to get the best students and train them well.

Today I attended and presented at a conference on medical education. It was a relatively small, local conference, but I can't say how excited I am by the work being done in the field of medical education and what's already been accomplished. The keynote speaker was particularly engaging, delivering an evidence-based, rather sensible and jargon-free presentation. Research in the social sciences - including education - tends to get bogged down with buzzwords that don't add much clarity to the concepts but really hamper knowledge transmission and translation into practice.

Anyway, here were my big take-aways:

First, the Netherlands apparently has adopted the CanMEDS framework (AMAZING!). In addition, they seem to have done a much better job at translating it into a workable approach to medical education (wait, WHAT?!). That's right, the Netherlands seems to actually implement the CanMEDS competencies into training rather than simply talking about them, which is what happens most often in Canadian medical schools. It's great that we know we can implement the CanMEDS competencies into our system, but a bit disappointing we've let another country leapfrog us like that...

Second, when evaluating students, the type of evaluation system you use doesn't seem to matter much in terms of reproducibility. "Objective" evaluations don't hold up much better than clearly subjective ones. Rather, the main factor appears to be time, with the major cutoff being 4 hours - and most of our evaluations are shorter than that. Combining assessment approaches does seem to help, but only if they're attempting to evaluate the same overall goal.

Third, going along with the first two points, we seem to have a really good idea of how we can make medical education significantly better, but the research is well ahead of implementation. I'd blame this discrepancy on money, but none of the suggestions seemed to be all that resource-intensive, and some even demonstrated ways money could be saved. I think the major barrier is that the research isn't getting to many of the individuals making the big decisions in education, or to the extent that it is, these decision-makers aren't willing or able to make the rather large conceptual changes that medical education requires for improvement.

Medicine is still, unfortunately, a rather hierarchical field that would prefer to make a series of small changes rather than a few big ones. Medical education isn't much different. That couldn't have been clearer today.

I was very encouraged to see some educational leaders at my school in attendance at the conference. Unfortunately, many of them were already converted, the ones who probably need no convincing. There's still a long road ahead here.

Lastly, and a bit more topically, the keynote speaker emphasized the importance of non-standardized metrics for evaluating students. In medicine, these metrics are probably essential, though they're often shied away from, both in terms of getting into medical school and in terms of getting through medical school. Being comfortable with non-standardized evaluations and implementing them appropriately is a smaller shift in medical education, one that schools could start making immediately.

On the flip side, standardized metrics are also important. Having both standardized and non-standardized metrics helps maintain flexibility in evaluations while guarding against bias. This is particularly relevant for CaRMS applications, which currently employ completely non-standardized metrics. While introducing a standardized component to CaRMS would be a challenge, and not one without meaningful drawbacks, it's something I believe deserves a bit more attention than it currently receives.

4 comments:

  1. Do you think implementing a USMLE-style metric for residency applications would make people less worried about the hidden curriculum and playing the subjective game of managing interpersonal alliance networks of people who can help advocate for your residency application (which is important too)?

    Replies
    1. I think a USMLE-style metric could make people less worried about the hidden curriculum and subjective elements, but that's the strongest argument against such a metric. Those subjective elements are meaningful! For the sake of balance, I think it's worth implementing a standardized evaluation for CaRMS applications, like the USMLE, but there's a very valid concern that this would over-correct and lead to too little emphasis on non-standardized criteria. As I said, implementation of evidence-based educational structures is not all that great, and while well-intentioned, residency programs are no exception.

  2. I'm interested in more information about how the Dutch integrate CanMEDS into their curriculum. Being currently trained by a program that also integrates CanMEDS into the curriculum, I'm not sure I'm at all better trained in those roles; rather, I'm being bogged down with tonnes of paperwork and a check mark next to the CanMEDS box.

    Replies
    1. Heh, you're testing my memory from nearly a year ago, so no promises on accuracy here. The feeling I got was that they had evaluations - including formative evaluations - designed to test those roles outside a clinical setting, though I can't remember specific details. I agree, I've seen far more of a check mark approach to these roles in Canada and it's not a very helpful use of the CanMEDS framework.
