Data Series: Stop Proving and Start Improving
“What is the great revolution of science in the last 10, 15 years? It is the movement from the search of universals to the understanding of variability.”
- Malcolm Gladwell, TED Talk, 2004
“There are three kinds of lies: lies, damn lies, and statistics.”
- The origin of this quote is uncertain, but it was popularized by Mark Twain, among others
I’m a surgeon, and early in my career I became a world expert in laparoscopic, or minimally invasive, surgery. I thought my purpose in life was to teach laparoscopic surgery to surgeons around the world, so I spent much of my career in healthcare with a lower-brain mindset, thinking that a laparoscopic approach to surgery was always better than an open approach.
I participated in debates that supported my view, even though other well-respected surgeons argued that an open approach might be better, at least for some patients. I was so arrogant, and wanted so badly to prove I was right, that I even considered using a mission trip I had been invited on to propose a clinical study. I wanted to prove, through statistical significance, that a laparoscopic approach (with me doing the operation, of course) was better than an open approach.
A surgeon from Creighton University had started a mission trip in the Dominican Republic to surgically repair hernias in people who lived mainly in the mountains around Santiago and could not afford care. The makeshift clinic and three operating rooms were in the mission, the Institute for Latin American Concern (ILAC), which was started by Creighton University several decades earlier.
This specific mission had been going on for only a couple of years before I was invited, and since at that time I did hernia repairs only laparoscopically, I said I would go but would need to bring laparoscopic equipment. The surgeon who led the effort reluctantly agreed. He wasn’t sure how new technology would work in such a rudimentary environment, where the electricity would sometimes go out – not a good thing when performing a laparoscopic procedure. Although he allowed me to bring the laparoscopic equipment, he was adamant that we wouldn’t be doing a study. He told me, “Bruce, we’re just trying to help these people; we shouldn’t do a study on them.”
I spent one week a year for four years traveling to Santiago, DR to repair hernias. Initially, I was proud that the other clinicians on the trip were impressed with my laparoscopic skills. But my combination of ignorance and arrogance was not a good thing and very much a lower-brain mindset.
During my third year traveling there, another surgeon from Creighton let me know he had just seen a patient in the clinic whom I had operated on during a previous trip and whose hernia had recurred. This surgeon and I had debated at surgical meetings in the past, where he defended the open approach to hernia repair. As embarrassed as I was to learn of my patient’s recurrence, the Creighton surgeon was humble and respectful toward me (a higher-brain mindset). He asked me to examine the patient with him, and he wanted to scrub in and assist when I performed the laparoscopic repair of my recurrence, because he wanted to learn from me. Not only was I humbled, I was also ashamed that I had previously suggested doing a clinical study to prove that a laparoscopic approach was better than an open one.
The funny thing is that this was the first known recurrence in the several years these hernia mission trips had been running, and most of the repairs had been done open. So it’s likely that if I had convinced the surgeon leading the mission trips to do a study, it wouldn’t have gone well for me.
These mission trips and this particular event helped me to evolve to a much healthier, higher-brain mindset. I’m a better person now and have come to terms with my previous way of thinking, knowing it was a reflection of my environment and how the reductionist world continues to function, for the most part.
From a data science perspective, there’s no reason to attempt to prove a hypothesis using any type of analysis. Because of constant change, local variability, the inability to completely isolate factors, and uncontrollable, complex biologic variability, there are essentially no generalizable hypotheses in the real world. Instead, in healthcare, we should be learning to use data analysis tools to improve outcomes, measured in terms of value, for any patient care process – regardless of whether the approach is laparoscopic or open.
Data science has unfortunately been misinterpreted in healthcare. Statistical significance has been assumed to be achieved when a p-value is less than or equal to 0.05, the threshold that allows acceptance and publication of peer-reviewed research. But as we gain a better understanding of our real biologic world, the inappropriate use and interpretation of statistics in medicine has become obvious. Even the American Statistical Association was compelled in 2016 to publish a statement on p-values. In it, they quote a Science News article (Siegfried 2010): “It’s science’s dirtiest secret: The ‘scientific method’ of testing hypotheses by statistical analysis stands on a flimsy foundation.”
Because of this misapplication of statistics, the biomedical sciences have assumed there is a reproducibility problem with research outcomes. Some have suggested that the solution is a more rigorous application of the reductionist scientific method, lowering the p-value threshold to 0.005. But these interpretations are misguided. It is normal, because of constant change and local variability, for repeated experiments to have different results – that is the real world.
In fact, Einstein never said, “Insanity is doing the same thing over and over again and expecting a different result.” The saying apparently first appeared in a Knoxville, Tennessee newspaper article reporting on an Al-Anon meeting in 1981. In the real world, doing the same thing over and over again – whether giving the same drug or performing the same surgical procedure – will produce different outcomes in different patient subpopulations.
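The point that repeating an identical experiment yields different results can be made concrete with a small simulation. The sketch below (all sample sizes and effect sizes are hypothetical, chosen only for illustration) reruns the same modest-effect, two-arm experiment twenty times; the p-values scatter widely, so some runs look “significant” at 0.05 and others don’t, purely by chance. It uses only the Python standard library, approximating the t distribution with a normal distribution, which is reasonable at this sample size.

```python
# Rerun the *same* two-arm experiment many times and watch the
# p-value jump around. Hypothetical numbers; stdlib only.
import math
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def one_experiment(n=30, effect=0.5, sd=1.0):
    """Simulate control vs. treatment arms; return a two-sided p-value."""
    control = [random.gauss(0.0, sd) for _ in range(n)]
    treated = [random.gauss(effect, sd) for _ in range(n)]
    mean_c = sum(control) / n
    mean_t = sum(treated) / n
    var_c = sum((x - mean_c) ** 2 for x in control) / (n - 1)
    var_t = sum((x - mean_t) ** 2 for x in treated) / (n - 1)
    se = math.sqrt(var_c / n + var_t / n)   # Welch standard error
    z = (mean_t - mean_c) / se
    # Two-sided p-value via the normal CDF: Phi(z) = 0.5*(1+erf(z/sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p_values = [one_experiment() for _ in range(20)]
print("p-values:", [round(p, 3) for p in sorted(p_values)])
print("significant at 0.05:", sum(p < 0.05 for p in p_values), "of 20")
```

Nothing about the underlying truth changes between runs; only sampling variability does, yet the twenty identical experiments disagree about “significance.”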
Even when expert data scientists analyze the same dataset, they reach different conclusions. In one study, a dataset of red cards given to European soccer players, which included the players’ skin color, was analyzed independently by 29 teams – producing 29 different results. About 70% of the teams found that referees could be biased against players with darker skin, but about 30% found there was likely no bias.
Statistics, or any analytical tool, should be used to gain insight for learning and improvement. Why has this been such a problem in medicine? One reason is how we think. Our current reductionist science paradigm comes from lower-brain thinking: that certainty is possible, that static “real” truths are discoverable, and that we can prove what is right and wrong. That is just not reality. The higher brain accepts uncertainty. There is also a level of humility and curiosity in higher-brain thinking that can overcome the lower-brain default setting based on fear, competition, and having to prove we are right.
Our lower brain is much more comfortable with the certainty of one right answer, but our higher brain allows us (and I have learned personally) to see the beauty in variability and the opportunity to use the analyses of that variability to improve our world.