Prolonged opioid use after surgery: insights from machine learning
Myron Yaster MD and Elliot J. Krane MD
HAL 9000: “I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you.” 2001: A Space Odyssey
“Garbage in, garbage out.” Axiom of computer programming.
I’ve asked Dr. Elliot Krane to weigh in on this PAAD. He previously wrote an editorial in Pediatrics dealing with many of the issues raised in this paper. Myron Yaster MD
Original article
Andrew Ward, Trisha Jani, Elizabeth De Souza, David Scheinker, Nicholas Bambos, T Anthony Anderson: Prediction of Prolonged Opioid Use After Surgery in Adolescents: Insights from Machine Learning. Anesth Analg 2021 Aug 1;133(2):304-313. PMID: 33939656
Previous Editorial Comment
Elliot J Krane, Steven J Weisman, Gary A Walco. The National Opioid Epidemic and the Risk of Outpatient Opioids in Children. Pediatrics 2018 Aug;142(2):e20181623. PMID: 30012558
Approximately 5 million children undergo surgery in the U.S. every year, and many, if not most, will receive an opioid perioperatively. A small percentage may continue to use opioids well after surgery and/or develop a substance abuse disorder at some time in their lives.
Prolonged opioid use after surgery (POUS), defined as ≥1 opioid prescription in the 90–180 days after surgery, occurs in adults and children, and has been used as a marker of opioid abuse.1,2 The thinking is: why would opioid prescriptions be refilled long after surgery was completed other than to feed an addiction? So we ask you, readers, this: can you think of a reason why a second, third, or later opioid prescription might be used after surgery? We can think of several reasons, chief among them ongoing, uncured disease.
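For readers curious how such a claims-based outcome is operationalized, the POUS definition above reduces to a simple date-window check on prescription fills. The sketch below is our own toy illustration (the function name and dates are invented, not the authors' code):

```python
from datetime import date, timedelta

def is_pous(surgery_date, rx_fill_dates):
    """Return True if any opioid prescription was filled 90-180 days
    (inclusive) after surgery -- the POUS definition used in the paper."""
    lo = surgery_date + timedelta(days=90)
    hi = surgery_date + timedelta(days=180)
    return any(lo <= d <= hi for d in rx_fill_dates)

surgery = date(2017, 1, 1)
# A refill 120 days out flags the patient as POUS...
print(is_pous(surgery, [date(2017, 5, 1)]))   # True
# ...but an early refill at 30 days alone does not.
print(is_pous(surgery, [date(2017, 1, 31)]))  # False
```

Note what the label cannot see: it counts fills in a window, nothing more — which is precisely the limitation we return to below.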
There is currently no way to predict who will develop POUS (or a substance abuse disorder), and Ward et al. wondered whether they could develop a predictive model using large databases and machine learning techniques, branches of artificial intelligence and computer science.
By way of background, machine learning is an important component of the growing field of data science. Using statistical methods, algorithms are developed and computers are trained to use those algorithms to make classifications or predictions, or to uncover key insights within data mining projects. These insights subsequently drive decision making and risk prediction within applications and businesses.
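To make that description concrete, here is a minimal, hypothetical sketch of supervised classification in the spirit of the paper's models: synthetic numbers stand in for claims features, and a logistic regression is trained to map inputs to a predicted risk. The feature names and thresholds are our invention for illustration; this is not the authors' model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy features: [prior-year opioid use, post-op days' supply] (scaled 0-1)
X = rng.random((200, 2))
# Synthetic label: "risk" driven mostly by the first feature plus noise
y = (X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.1, 200) > 0.7).astype(int)

# "Training" = fitting the algorithm's parameters to the labeled data
model = LogisticRegression().fit(X, y)

# The trained model then turns a new patient's inputs into a probability
print(model.predict_proba([[0.9, 0.8]])[0, 1])  # higher-risk inputs
print(model.predict_proba([[0.1, 0.1]])[0, 1])  # lower-risk inputs
```

The crucial point for what follows: the model learns only whatever relationships exist in the data it is handed, garbage or otherwise.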
In this study, “medical claims data from January 1, 2003 to December 30, 2017 were collected from Optum Clinformatics Data Mart Database (OptumInsight, Eden Prairie, MN), a deidentified database from a national insurance provider. Utilizing this large national insurance claims dataset comprising data from over 167,000 eligible surgical patients, machine learning models achieved modest predictive performance across all surgeries, but substantially higher predictive performance for some specific surgeries”. So far, so good.
Of more than 1,000 variables used as inputs in the initial model, “variable importance analysis” found that opioid use in the year before surgery and the number of opioids prescribed after surgery (both the average daily MME and total days’ supply) are most important for POUS prediction, while zip code-level socioeconomic factors contribute to a lesser degree. Interestingly, these variables have been identified as risk factors for POUS in other studies.3 But inherent in this conclusion are assumptions, and it is important to appreciate that machines do not make assumptions. Programmers make assumptions. Data analysts make assumptions. And you, our readers, make assumptions.
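Variable importance analysis of the kind the authors describe can be illustrated with a toy example: a tree-based model trained on synthetic data in which two variables genuinely drive the label and a third is pure noise. The variable names and data are hypothetical, meant only to show the mechanics, not to reproduce the paper's analysis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
prior_use = rng.integers(0, 2, n)    # opioid use in the year before surgery
days_supply = rng.random(n)          # post-op days' supply (scaled 0-1)
noise = rng.random(n)                # irrelevant variable

# Synthetic outcome depends on the first two variables only
y = ((prior_use + days_supply + rng.normal(0, 0.2, n)) > 1.2).astype(int)

X = np.column_stack([prior_use, days_supply, noise])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Importance scores recover which inputs the model leaned on
for name, imp in zip(["prior_use", "days_supply", "noise"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Here the noise variable correctly receives a low score — but note that importance only ranks the inputs the modelers chose to supply, under the labels they chose to define.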
There are two underlying assumptions in this paper: first, that prolonged opioid use after surgery is unjustified and can lead to “negative healthcare consequences,” and second, that proof of sustained opioid use can be based on insurance opioid refill databases. We don’t think that either of these holds water. This research is based solely on insurance claim data, not on patient interviews or medical record reviews. Did patients who had a second or third opioid prescription actually have an opioid-related adverse event, such as an ER visit, hospitalization, respiratory depression, or death? Were they misusing or abusing opioids? We don’t know. Or did they have moderate to severe ongoing pain, for which there was no other treatment? If any of you have survived severe trauma, had pancreatitis, or have cancer, then you might have an opinion of your own on this matter. Again, we do not know.
Further, we really don’t even know if the filled and refilled opioid prescriptions were actually used by the patients who received them. It is not outside the realm of possibility that refill opioid prescriptions were diverted by parents or other family members. As a previous editorial by Krane, Weisman, and Walco (see above) pointed out: “If the concern is that opioid prescriptions to children lead to substance abuse disorder, other work reveals that first exposure to nonmedical use of opioids in adolescents occurs most often from access to family members’ or friends’ prescriptions, not their own. In comprehensive articles, Miech et al.4,5 concluded that ‘In the very lowest risk stratum … legitimate use of prescription opioids before high school completion does not predict opioid misuse after high school.’”
Before we jump on the Big Data bandwagon, it is important to remember that large data sets, as impressive as they may be, are inherently biased by what data they include and what data they exclude. Large Medicare/Medicaid databases do not reflect the whole population, just those who are poor, elderly, or disabled. Large Kaiser databases reflect only those who have private health insurance and exclude those who can afford more expensive policies. As Kate Crawford of the Harvard Business Review points out, “Former Wired editor-in-chief Chris Anderson embraced (Big Data) in his comment, ‘…with enough data, the numbers speak for themselves.’” But it is not as clean as Mr. Anderson predicted. Ms. Crawford goes on to remind us that “Data and data sets are not objective; they are creations of human design. We give numbers their voice, draw inferences from them, and define their meaning through our interpretations. Hidden biases in both the collection and analysis stages present considerable risks and are as important to the big-data equation as the numbers themselves.”6
And what of machine learning, another very seductive technology? Müller emphasizes in Ethics of Artificial Intelligence and Robotics that “Automated AI decision support systems and ‘predictive analytics’ operate on data and produce a decision as ‘output’. This output may range from the relatively trivial to the highly significant: ‘this restaurant matches your preferences’, ‘the patient in this X-ray has completed bone growth’, ‘application to credit card declined’, ‘donor organ will be given to another patient’, ‘bail is denied’, or ‘target identified and engaged’…” To which we might add “this patient should not receive an opioid prescription.”7
And why not? Because he’s black? Because she’s a Latina? Because the patient is on public assistance? It is not far-fetched to imagine that the biased assumptions of a programmer can be embedded in an algorithm. Müller reminds us: “Apart from the social phenomenon of learned bias, the human cognitive system is generally prone to have various kinds of ‘cognitive biases’, e.g., the ‘confirmation bias’: humans tend to interpret information as confirming what they already believe.” Among whites there is a strong belief that opioid deaths from abuse are more common in blacks, but the evidence is that whites die from opioid overdoses at seven times the frequency of blacks, per capita. Today there is also a strong cognitive bias that opioids are bad medications; yesterday the bias was that pain is bad for humans. The truth is certainly somewhere in the middle.
And this is not the last criticism of Big Data and AI by any means: the basis for decision making when relying on these methods is completely opaque to the user and, in the instance of medicine, to the patient. This problem is so significant that the European Union has regulated decision making by algorithm and established a “Right to Explanation.”8 The answer to “Why can’t I get another prescription of oxycodone?” should never be “because the computer said so.” We cannot imagine a reader who would disagree with this.
We’ll conclude with the perceptive view of Kate Crawford: “We know that data insights can be found at multiple levels of granularity, and by combining methods such as ethnography with analytics, or conducting semi-structured interviews paired with information retrieval techniques, we can add depth to the data we collect. We get a much richer sense of the world when we ask people the why and the how, not just the ‘how many’. This goes beyond merely conducting focus groups to confirm what you already want to see in a big data set. It means complementing data sources with rigorous qualitative research. Social science methodologies may make the challenge of understanding big data more complex, but they also bring context-awareness to our research to address serious signal problems. Then we can move from the focus on merely ‘big’ data towards something more three-dimensional: data with depth.”6
Finally, the journal thought this was an important enough study to provide an infographic summarizing this paper. We’ve included it for you. What the journal does not illustrate for us is how many of those 4.5% with POUS, including the sad young lady so artfully silhouetted in profile, have persistent pain from ongoing disease, delayed healing, antibiotic-resistant bone infections, sickle cell disease, hemophilia, multiple trauma and fractures, or an almost unlimited list of other sources of human misery.
Myron Yaster MD and Elliot J. Krane MD
References
1. Brummett CM, Waljee JF, Goesling J, Moser S, Lin P, Englesbe MJ, Bohnert ASB, Kheterpal S, Nallamothu BK: New Persistent Opioid Use After Minor and Major Surgical Procedures in US Adults. JAMA Surg 2017; 152: e170504
2. Harbaugh CM, Lee JS, Hu HM, McCabe SE, Voepel-Lewis T, Englesbe MJ, Brummett CM, Waljee JF: Persistent Opioid Use Among Pediatric Patients After Surgery. Pediatrics 2018; 141
3. Hah JM, Bateman BT, Ratliff J, Curtin C, Sun E: Chronic Opioid Use After Surgery: Implications for Perioperative Management in the Face of the Opioid Epidemic. Anesth Analg 2017; 125: 1733-1740
4. Miech R, Johnston L, O'Malley PM, Keyes KM, Heard K: Prescription Opioids in Adolescence and Future Opioid Misuse. Pediatrics 2015; 136: e1169-77
5. Kelley-Quon LI, Cho J, Strong DR, Miech RA, Barrington-Trimis JL, Kechter A, Leventhal AM: Association of Nonmedical Prescription Opioid Use With Subsequent Heroin Use Initiation in
6. Crawford, K. The Hidden Biases in Big Data. Harvard Business Review. https://hbr.org/2013/04/the-hidden-biases-in-big-data
7. Müller, Vincent C., "Ethics of Artificial Intelligence and Robotics", The Stanford Encyclopedia of Philosophy (Summer 2021 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2021/entries/ethics-ai/>.
8. Goodman B, Flaxman S: European Union Regulations on Algorithmic Decision Making and a “Right to Explanation”. AI Magazine, Fall 2017: 50-57
PS: I’ve (Myron) got to add one personal comment. Before moving to Colorado, much of my professional career had been centered on studying and treating pediatric pain. This wouldn’t have been possible without Elliot (and Don Tyler) who as young faculty at the University of Washington Seattle organized the first world congress of pediatric pain in 1989. This meeting and the folks I met there changed the arc of my life…so, thank you Elliot and Don. MY