At the end of March 2018, we completed the fourth edition of the MedPhys Match (MPM). The purpose of this article is to highlight some results from this year's MPM and to discuss a few related topics.
As in years past, National Matching Services Inc., the company that runs the match for the MPM, publishes year-by-year statistics on its website.1 Starting this year, it also provides graphs showing how the match has changed over time. For example, the total number of applicants registering for the MPM has been steadily decreasing, from a high of 402 in 2015 to 272 in 2018. The number of applicants who register but withdraw or do not submit a rank list has dropped from 122 (2015 and 2016) to 68 (2018). The number of applicants submitting rank lists dropped from 280 in 2015 to slightly more than 200 in subsequent years.
Referring to the chart labeled Applicant Results on the NMS Statistics page, only about 39% of applicants submitting rank lists in 2015 were matched to a position. For 2016 and 2017, roughly 50% of applicants submitting rank lists were matched, and in this year's match about 57% were. The number of residencies participating in the MPM continues to rise, from 77 in 2015 to 87 in 2018. The number of positions offered is also increasing, from 112 in 2015 to 129 in 2018. From 2017 to 2018, the number of filled positions increased (from 107 to 116), but the number of unfilled positions almost doubled (from 7 to 13). This might be expected, since the number of applicants competing for positions did not increase. I heard from some program directors with unfilled positions this year who were able to fill them quickly after the match results were announced.
Some of you may recall that in the very first Newsletter article about the 2015 MPM results,2 we defined an acceptable applicant as one who submitted a rank list and was also ranked by at least one program. Because the MPM was free of charge in its inaugural year, many applicants who submitted rank lists probably received no interviews and were likely not qualified; we therefore hypothesized that counting only acceptable applicants would be a better indicator of the qualified applicant pool. Referring to the year-by-year statistics, the number of acceptable applicants from 2015 to 2018 was 185, 157, 174, and 176, respectively, for an average of 173 acceptable applicants per year. According to the CAMPEP 2016 graduate survey report,3 the average size of the incoming graduate student class from 2012 to 2016 was 309 students. Over that same period, the average number of graduates (not including DMP programs) was 271, and not all of them would be expected to pursue a clinical career requiring a residency. The CAMPEP 2016 residency survey report4 indicates that 170 applicants were admitted to residency programs in 2016, which is approximately the same as the number of acceptable applicants in the MPM. Therefore, the number of acceptable applicants is probably a reasonable indicator of how many qualified applicants there are each year.
In the 2015 Newsletter article, it was noted that most applicants and programs were matched to positions that they ranked highly. Figure 1 shows that this remains the case for applicants. In all four years, at least 50% of matched applicants were matched to their top-ranked position, and approximately 95% were matched to one of their top five ranked positions.
Figure 2 shows similar rank statistics for programs. For standardized ranks less than five, there seems to be more variability for programs than for applicants. Note that programs are matched to their top-ranked applicant only about 35-40% of the time (compared to more than 50% for applicants). In 2017, programs were matched to applicants higher on their rank lists, and it looks like applicants might have been matched a little lower on theirs as a result. However, the numbers are relatively small, and this amount of variability is likely normal.
Prior Newsletter articles about the MPM2,5 also examined matching statistics for various applicant subgroups. Figure 3 shows statistics for some of those subgroups. Of applicants with a CAMPEP graduate background (degree or certificate program), about 85-90% were ranked by at least one program. Of those ranked at least once, CAMPEP PhD graduates had a match rate of 90%, followed by certificate graduates at 76% and CAMPEP MS graduates at 63%. All of these numbers are higher than those reported for the 2015 and 2016 MPM. Of applicants without a CAMPEP background (e.g., some or no coursework), only 69% were ranked by at least one program, and only 39% of those were matched to a position. Compared to 2015, the percentage of non-CAMPEP applicants getting ranked is much higher, but the match rate is much lower. The overall success rate was only slightly higher than in 2015 (27% vs 20%).
If we subdivide the applicants by gender, we see that female applicants were more likely to be ranked (92% vs 78%), to be matched (80% vs 65%), and to be successful in getting a position (74% vs 51%). Compared to 2015 and 2016, the percentage of female applicants being ranked has risen from 74% to 92%, while the percentage of male applicants being ranked has risen less dramatically, from 64% to 78%. The match rate for female applicants has also risen significantly (52% in 2015, 83% in 2016), while the match rate for male applicants has stayed about the same (65% in 2015, 72% in 2016).
In 2017, Hendrickson et al6 published their analysis of survey results for the MPM in 2015 and 2016. What they found was eye-opening, to say the least. The MPM has rules governing how it works, but recruitment should also be conducted in accordance with human resources best practices and non-discriminatory behavior. Many types of behavior that appear innocuous at first glance can make an applicant very uncomfortable and push the boundaries of ethical and legal conduct. The manuscript includes an example form that the authors use to remind the interview team about recruitment best practices, and I know of a few programs that have already incorporated something similar into their own recruitment. I would encourage all programs to read the manuscript and reflect on how their recruitment practices could be improved to ensure that applicants are treated with the respect they deserve.
Last year, we made some advanced features available to meet program recruitment needs, and I would encourage any program with special requirements to look carefully at the signup packet, which includes a very nice description of the features that should allow you to determine whether they will meet your needs.
Without going into too much detail, these features allow programs to steer recruitment in their desired direction while maintaining the flexibility to fill positions if required. A relatively simple example might be asking the algorithm to match your program with at least one female applicant, with the option of reverting the position to be open to all applicants if it cannot be filled with a female applicant in the first pass. Another feature allows programs to tie one or more positions to a subset of applicants. For example, one program was able to obtain funding for an additional position, but the funding was restricted to students from a particular graduate program. The algorithm attempted to fill at least one position with a student from that graduate program; if that was not possible, the position would not be filled.
These features can also be used by so-called hybrid programs (longer than two years) with multiple positions. Each research advisor could have a separate list of acceptable applicants; some applicants might be on only one list, and some might be on several. If the program has more research advisors than positions, it can also specify which advisors get first priority in the matching algorithm. If higher-priority positions are not filled in the first pass, the algorithm attempts to fill other advisors' positions.
The advanced options can handle some very complicated recruitment scenarios, but they require a bit of forethought to ensure that rank lists and options are specified correctly to meet the program's recruitment needs. The simple matching model that we started with was not able to handle some of the scenarios that were being presented. We felt that adding these advanced features was better than having an unintended matching violation or not having the program participate in the MPM at all. If you have any questions about your program's particular needs, feel free to contact National Matching Services. I would also be happy to talk to any program director about these features.
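To make the reversion idea a bit more concrete, here is a toy sketch of a restricted position that falls back to the program's full rank list if it cannot be filled. The names (`fill_with_reversion`, `try_to_fill`) and the two-pass structure are assumptions made purely for illustration; this is not a description of how National Matching Services actually implements these options.

```python
# Toy illustration of a position that is first restricted to a subset of
# applicants and reverts to the full rank list if it goes unfilled.
# Everything here is hypothetical and simplified for explanation only.

def fill_with_reversion(program_rank_list, eligible_subset, try_to_fill):
    """Return the applicant placed in the position, or None if it stays unfilled."""
    # Pass 1: consider only applicants in the eligible subset, in rank order.
    restricted = [a for a in program_rank_list if a in eligible_subset]
    placed = try_to_fill(restricted)
    if placed is not None:
        return placed                      # filled under the restriction
    # Pass 2: the position reverts and is open to every ranked applicant.
    return try_to_fill(program_rank_list)


if __name__ == "__main__":
    rank_list = ["applicant_1", "applicant_2", "applicant_3"]
    eligible = {"applicant_3"}             # e.g., students from one graduate program

    # Stand-in matcher: the eligible applicant happens to match elsewhere,
    # so the first pass fails and the position reverts to the full list.
    matched_elsewhere = {"applicant_3"}

    def try_to_fill(ranked):
        return next((a for a in ranked if a not in matched_elsewhere), None)

    print(fill_with_reversion(rank_list, eligible, try_to_fill))   # -> applicant_1
```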
Physicists are clever people, but we can also outsmart ourselves. Many program directors (physics and radiation oncology) have told me that they know their program is good because they always match near the top of their rank list. The program could very well be good, but it could also simply be good at picking applicants who rank it highly. An applicant may likewise be tempted to rank a program lower because they feel the program is not going to rank them highly. National Matching Services lists four common misconceptions about the matching algorithm and the facts that dispel them.7
The algorithm is actually quite simple at its core. If you have not done so already, I would encourage you to look at how it works and convince yourself of these facts.
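To give a feel for why those facts hold, here is a minimal sketch of applicant-proposing deferred acceptance, the textbook stable-matching construction that matches of this kind build on. The function and variable names (`match`, `applicant_ranks`, and so on) and the toy data are mine for illustration only; this is not the NMS code, and it omits the advanced features discussed above.

```python
# A minimal sketch of applicant-proposing deferred acceptance (Gale-Shapley).
# All names and data here are illustrative; this is not the NMS implementation.

def match(applicant_ranks, program_ranks, capacities):
    """applicant_ranks: {applicant: [programs in preference order]}
       program_ranks:   {program: [applicants in preference order]}
       capacities:      {program: number of positions offered}"""
    # Lower index means more preferred; an applicant a program did not rank
    # is unacceptable to that program and can never match there.
    pref = {p: {a: i for i, a in enumerate(ranks)} for p, ranks in program_ranks.items()}
    next_choice = {a: 0 for a in applicant_ranks}   # next program each applicant will try
    held = {p: [] for p in program_ranks}           # applicants tentatively held by each program
    free = list(applicant_ranks)                    # applicants still looking

    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_ranks[a]):
            continue                                # rank list exhausted: applicant goes unmatched
        p = applicant_ranks[a][next_choice[a]]
        next_choice[a] += 1
        if a not in pref.get(p, {}):                # program did not rank this applicant
            free.append(a)
            continue
        held[p].append(a)                           # tentatively accept...
        held[p].sort(key=pref[p].get)
        if len(held[p]) > capacities[p]:
            free.append(held[p].pop())              # ...bumping the least-preferred if over capacity
    return held


if __name__ == "__main__":
    applicants = {"A": ["P1", "P2"], "B": ["P1"], "C": ["P2", "P1"]}
    programs = {"P1": ["B", "A", "C"], "P2": ["A", "C"]}
    print(match(applicants, programs, {"P1": 1, "P2": 1}))
    # {'P1': ['B'], 'P2': ['A']} -- C goes unmatched in this toy example
```

In this sketch, an applicant is only ever bumped in favor of someone the program genuinely ranks higher, so working down your own list in true preference order cannot hurt you; that is the intuition behind the facts listed on the NMS page.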
As mentioned in prior Newsletter articles, the first four years of the MPM were subsidized by generous contributions from AAPM and SDAMPP. There are no plans for further subsidies, so costs for programs and applicants will go up this year by about a factor of two. This year's fees are now being determined and will be available when the next recruitment season starts this fall. After this year, increases should be much smaller and limited to regular cost-of-living adjustments.
The AAPM Subcommittee on the Oversight of MedPhys Match8 is currently responsible for administering the MPM. The subcommittee establishes MPM policies and reviews any potential violations of the MedPhys Match Rules of Participation.9 On behalf of the subcommittee, AAPM, SDAMPP, and residency applicants, we are grateful for the cooperation of participating applicants and programs. We welcome any and all constructive criticism regarding any aspect of the MPM.