Why we should all think more carefully about performance models
Hugh Miller previews his upcoming paper, ‘Performance Enhancing: A look at Modern Operational Performance Rating Models’, to be presented at the Actuaries Institute’s Injury and Disability Schemes Seminar in November.
In New York State in the 1990s, the health system introduced a quality improvement program that included monitoring and publishing surgeon-level mortality rates for coronary artery bypass graft surgery. The idea of public reporting is that it gives patients and other stakeholders (like insurers) more information when choosing providers, and provides an incentive for hospitals and surgeons to improve performance.
Many people now believe the practice backfired. Several analyses of the impact of this reporting concluded (among other things) that[1]:
- It increased the number of surgeries on low-risk patients.
- Many providers became less willing to operate on high-risk patients.
- Overall patient outcomes worsened – more people died of heart conditions than previously.
- Patient awareness of the reporting was limited (some estimates put awareness at around 6%)[2] – so the behaviour change was driven by surgeons’ perceptions, rather than actual patient demand.
The public reporting incentivised surgeons to act in ways that improved their rankings. And the design of the comparisons led to poorer health outcomes.
This example raises fundamental questions about performance monitoring and comparisons. Unless comparisons are fair, those being monitored have a strong incentive to react to the perceived unfairness.
Getting performance monitoring right is incredibly important because such measurement is widespread. Many injury and disability schemes use a network of third-party providers (sometimes insurers) to manage claims, and there is an instinctive desire to identify which providers are doing a ‘good’ job through performance measures. We see performance monitoring and comparison in many other government-funded programs too, including in health and education. Government employment services programs rely on detailed comparisons between providers to help jobseekers find effective supports.
One natural way to improve comparisons is through ‘risk-adjustment’. Instead of comparing raw outcomes (e.g. the percentage of claims finalised in six months, or the percentage of surgery patients who die), we compare actual versus expected performance, where expected performance can vary with the caseload. If an insurer carries more older people, who typically recover more slowly, this can be recognised and allowed for. Similarly, a surgeon performing higher-risk operations would be assessed against a correspondingly higher expected mortality rate.
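To make the idea concrete, here is a minimal sketch of an actual-versus-expected comparison. The claims, age bands and expected rates are illustrative assumptions, not figures from the paper; in practice the expected probabilities would come from a model fitted across the whole scheme.

```python
from collections import defaultdict

# Illustrative claims only: (provider, age_band, finalised_within_6_months).
# In a real scheme these would be thousands of records.
claims = [
    ("Insurer A", "under_50", True), ("Insurer A", "under_50", True),
    ("Insurer A", "under_50", False), ("Insurer A", "50_plus", True),
    ("Insurer B", "50_plus", False), ("Insurer B", "50_plus", True),
    ("Insurer B", "50_plus", False), ("Insurer B", "under_50", True),
]

# Assumed expected probability of finalising within six months, by age band,
# e.g. from a model fitted on the whole scheme.
expected_rate = {"under_50": 0.80, "50_plus": 0.55}

actual = defaultdict(int)      # observed finalisations per provider
expected = defaultdict(float)  # sum of expected probabilities per provider

for provider, age_band, finalised in claims:
    actual[provider] += finalised
    expected[provider] += expected_rate[age_band]

# An actual/expected ratio near 1 means performance in line with the caseload;
# raw percentages alone would penalise the provider with the older caseload.
for provider in sorted(actual):
    print(f"{provider}: actual {actual[provider]}, "
          f"expected {expected[provider]:.2f}, "
          f"A/E = {actual[provider] / expected[provider]:.2f}")
```

Here Insurer B’s raw finalisation rate (50%) looks much worse than Insurer A’s (75%), but once its older caseload is allowed for, the gap in actual-versus-expected terms is considerably smaller.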
My paper, to be presented at the 2019 Injury and Disability Schemes Seminar, focuses on the use of risk-adjustment for performance monitoring. There are important considerations in building good risk-adjustment performance models, including:
- correlations between risk-adjustment variables and provider caseloads;
- dealing with regional variability;
- risks associated with over- and under-fitting; and
- properly understanding and reflecting uncertainty, particularly when provider sizes vary (see the sketch after this list).
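On that last point, a common device is a funnel-plot-style test: control limits narrow as provider size grows, so a small provider’s poor raw percentage may still be consistent with chance. The provider figures and scheme-wide rate below are assumptions for illustration only.

```python
import math

# Assumed scheme-wide expected finalisation rate.
scheme_rate = 0.70

# Illustrative provider results: (claims handled, finalised within six months).
providers = {
    "Provider A": (1600, 1072),  # 67% on a large book
    "Provider B": (60, 36),      # 60% on a small book
    "Provider C": (900, 585),    # 65% on a medium book
}

for name, (n, finalised) in providers.items():
    rate = finalised / n
    # Binomial standard error of the rate under the scheme-wide expectation.
    se = math.sqrt(scheme_rate * (1 - scheme_rate) / n)
    z = (rate - scheme_rate) / se
    flag = "outside" if abs(z) > 2 else "within"
    print(f"{name}: rate {rate:.1%} on {n} claims, "
          f"z = {z:+.1f} ({flag} 2-SE limits)")
```

Note that Provider B has the worst raw rate but, on only 60 claims, sits within its control limits, while Providers A and C are flagged despite better raw rates, because their larger volumes make the same shortfall much harder to attribute to chance.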
While some aspects of design, such as reducing the incentive for gaming, rely on good knowledge of the underlying scheme, there are many general design and statistical principles that can be applied to ensure that a performance measurement and comparison tool is as fair and reliable as possible.
Register for the upcoming Injury and Disability Schemes Seminar, being held from 11 to 13 November at QT Hotel, Canberra, to hear more from Hugh on his paper.
[1] See, for example, Dranove et al. (2003), ‘Is More Information Better? The Effects of “Report Cards” on Health Care Providers’, Journal of Political Economy, https://www.kellogg.northwestern.edu/faculty/satterthwaite/research/2003-0520%20Dranove%20et%20al%20re%20Report%20Cards%20%28JPE%29.pdf
[2] ‘Giving Doctors Grades’, The New York Times, 22 July 2015, https://www.nytimes.com/2015/07/22/opinion/giving-doctors-grades.html
CPD: Actuaries Institute Members can claim two CPD points for every hour of reading articles on Actuaries Digital.