White Paper
Our plan to beat the MSRA
The exam you can’t revise for
"Just do the question bank."
- old medical proverb
At some point, every doctor in the U.K. has been told to 'just do the question bank' when asking how to pass an upcoming exam.
And for good reason: it works.
We know the best way to pass any test is through practice. If you want to pass your driving test, do more driving. If you want to get an A* in your A-Level Biology exam, do the past papers. If you want to pass medical school finals, do the question bank.
Except, that is, for the Multi-Speciality Recruitment Assessment (MSRA) Professional Dilemmas (PD) paper.
A test where we are either told:
1. It's a random number generator; just hope and pray
2. Stick with the official practice questions (and then hope and pray)
And again for good reason.
The answers can seem inconsistent and far removed from how people would act in the real world. This is the test where we are famously expected to turn back from the airport during our planned annual leave to help fill a gap in the hospital rota.
To help candidates prepare for this exam, Health Education England has released 23 official MSRA PD questions with an answer key. There are also many similar Situational Judgement Test (SJT) questions from the now-defunct SJT used for selection into the Foundation Programme (FP-SJT).
These can be helpful in beginning to recognise the patterns of behaviour that are expected.
But most motivated candidates will go through all of these practice papers.
For a test where you are ranked against your peers, any advantage evaporates. You need to do these practice questions just to reach the same baseline as other motivated candidates.
The other issue is that the FP-SJT questions are aimed at a different level, Foundation Year 1 doctors, as opposed to F2 for the MSRA. They purposely have lower complexity in some scenarios and nuanced differences in how options are ranked.
Third-party question banks have of course proliferated. But these are unvalidated, with no guarantee that the judgement of the person(s) who wrote the questions will be consistent with that of the official exam writers. As such, there is a significant risk that by practising with such questions, you will perversely worsen your performance.
For the MSRA, there is no question bank to just do. This is an exam that now decides the career prospects of the majority of U.K. doctors, with no way to gain an edge or prepare effectively. We aim to change this.
The random number generator
Many intelligent people bill the MSRA as a random number generator, choosing which job a candidate is placed into with all the care and consideration of an orthopaedic surgeon reading an ECG.
Now, we agree there is some level of inconsistency, and thus randomness, in the marking of SJT questions. But it is not completely random. There are patterns of expected behaviour that become evident when you read enough official practice questions.
We also know that the MSRA PD correlates with other measures of performance, suggesting that, to some degree, it is measuring something desirable.
The SJT has also been found to have high internal consistency: people who score well on one question tend to score well on most questions, and people who score poorly on one tend to score poorly on most. If scoring were random, you would instead expect scores to vary wildly from question to question for the same candidate.
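That "score well on one, score well on most" pattern is what psychometricians call internal consistency, usually summarised as Cronbach's alpha. A minimal sketch of the calculation (the function name and the toy score matrix below are ours, for illustration only; real technical reports use the full item-level data):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a candidates-by-questions score matrix."""
    n_items = len(scores[0])

    def variance(xs):  # population variance
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    # Compare the sum of per-question variances with the
    # variance of each candidate's total score.
    item_var = sum(variance([row[q] for row in scores]) for q in range(n_items))
    total_var = variance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - item_var / total_var)

# Toy data: each row is one candidate's scores on four questions.
# Candidates rank consistently across questions, so alpha comes out high.
consistent = [
    [4, 4, 3, 4],
    [3, 3, 3, 3],
    [2, 2, 1, 2],
    [1, 1, 2, 1],
]
print(round(cronbach_alpha(consistent), 2))
```

Alpha approaches 1 when candidates keep their relative ranking across questions, and falls towards 0 when per-question scores move independently of each other.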
We believe that the Subject Matter Experts (SMEs) and the psychologists from the Work Psychology Group (WPG) who oversee the development of the exam have an implicit, institutional model of what the ideal resident doctor is and how they should generally behave.
That there is some randomness, due to the inherently subjective nature of SJTs: there will always be some disagreement, even amongst 'experts'.
But that if we can pin down exactly what this model is, then we can effectively prepare for and beat the MSRA PD paper.
The model
Machine learning, or artificial intelligence, models work by taking in data and predicting something from it. They are great at capturing patterns that are implicit within the data itself and that would be hard to tease out using traditional statistical methods. And that is just what we need: to learn the implicit model of the ideal resident doctor that the MSRA PD is graded against.
Modern large language models, such as ChatGPT, are trained on extremely large amounts of data and, as such, learn multiple models of the world. So, to start, we set out to see how well a large language model has learnt to model situational judgement in medical scenarios.
We know from the FP-SJT technical reports that the mean score on the SJT paper is around 83-84%. This is remarkably consistent year to year. So this would be our baseline.
We started with GPT-4o, the latest large language model from OpenAI, which scored 84.9% on the MSRA practice paper. Not bad, but not good enough: we are aiming for much better than average performance.
We then tried a proprietary system that we have developed. With some training, we achieved a score of 89.9%. This was tested on unseen questions, so the AI cannot have simply memorised the training data (a failure mode known as overfitting).
One standard deviation above the mean is about 86.6%. For a normal distribution, 89.9% would put you in roughly the top 2% of test takers: good enough to get onto any training programme.
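The percentile arithmetic can be checked with the standard normal survival function. The mean and standard deviation below are read off the figures above; the normality assumption is a simplification, since real score distributions are bounded and somewhat skewed:

```python
import math

def top_fraction(score, mean, sd):
    """Fraction of a normal distribution scoring at or above `score`."""
    z = (score - mean) / sd
    # Survival function of the standard normal,
    # via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2))

# Figures from the text: mean score ~83.5%, one SD above the mean ~86.6%.
mean, sd = 83.5, 86.6 - 83.5
print(f"top {top_fraction(89.9, mean, sd):.0%} of test takers")  # top 2%
```

Under these assumptions, a score of 89.9% sits just over two standard deviations above the mean.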
We have our model: the ideal resident doctor (IRD-01).
A validated question bank
We plan on using IRD-01 to develop and validate a new, high-quality MSRA PD question bank.
Items will initially be developed by SMEs, in the same way as the official MSRA, and will then go through several iterations of development. Following this, we will pass each item through two stages of validation.
If questions do not pass validation, they are reworked, and if they fail subsequent validation they are rejected. Only questions that consistently pass validation will be added to the question bank. Overall we are aiming for 300-400 Professional Dilemma questions to fill our bank. Questions will also undergo further validation once they are live, a process we will detail in future blog posts.
The final step is to limit access to this question bank. If a question bank is widely used, it stops conferring an advantage. As such, we will limit sign-ups to 20% of total MSRA takers.
This is an experimental project, with more work to be done. We aim to release the question bank in time for the 2025 MSRA sitting.
If you are interested, reserve your place by joining the waitlist below. Places will be limited.
You will be emailed once your place is ready.