At an AI-in-healthcare discussion event, someone asked a question – what opportunities are there for AI in healthcare?
That’s a very broad question and one that each person or company entering the healthcare AI business is going to have to answer for themselves. Perhaps a more interesting question is, once you have some opportunities in mind, how do you decide which to choose? This is a good time to use a decision matrix.
A decision matrix is a way to organize your thoughts around a decision, putting criteria into a structure that can help you see the relationships between them. A commonly known one is credited to Eisenhower and shows the relationship between urgency and importance. What many people miss without the matrix is that they ignore the important-but-not-urgent while putting too much time into the urgent-but-not-important.

[Figure: the Eisenhower urgency/importance matrix. Image by Rorybowman, own work, public domain.]
A decision matrix from the Lean world is the PICK chart. The two criteria here are “payoff” and “ease of implementation”, or, put another way, “impact” and “difficulty”. The PICK chart then divides the matrix into four quadrants, each with a corresponding action: Possible, Implement, Challenge, or Kill. I’ve also heard “kick” or “kibosh” if killing ideas sounds too violent for you.
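To make the quadrants concrete, here is a minimal sketch in Python of how a PICK chart maps payoff and ease onto an action. The scores, the 0-to-1 scale, and the single threshold are assumptions for illustration, not a standard; in practice you would score ideas however your team prefers.

```python
# Minimal PICK-chart sketch: classify ideas by payoff (impact) and
# ease of implementation (the inverse of difficulty).
# Scores and the threshold are illustrative assumptions.

def pick_action(payoff: float, ease: float, threshold: float = 0.5) -> str:
    """Map an idea's payoff and ease (both 0..1) to a PICK quadrant."""
    if payoff >= threshold and ease >= threshold:
        return "Implement"   # high payoff, easy: just do it
    if payoff >= threshold and ease < threshold:
        return "Challenge"   # high payoff, hard: worth the fight
    if payoff < threshold and ease >= threshold:
        return "Possible"    # low payoff, easy: do it if time allows
    return "Kill"            # low payoff, hard: drop it (or "kibosh" it)

# Hypothetical healthcare AI ideas scored as (payoff, ease)
ideas = {
    "auto-fill prior-authorization forms": (0.7, 0.8),
    "fully automated diagnosis": (0.9, 0.1),
    "predict appointment no-shows": (0.5, 0.9),
}

for name, (payoff, ease) in ideas.items():
    print(f"{name}: {pick_action(payoff, ease)}")
```

The point of the chart is not the arithmetic; it is forcing every idea to answer the same two questions before anyone commits resources to it.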

For AI in healthcare, however, as a nascent technology in a very restrictive industry, it’s important to understand what creates impact and what causes difficulty. There are many sources of difficulty in healthcare:
- Curated ground truth can be hard to come by, because “truth” in medicine is not as standardized or absolute as people like to think.
- Obtaining any data can be challenging, whether for algorithm development or for operationalization, not only because of privacy laws but also because healthcare systems are optimized for operations, not analysis.
- Creating interfaces and getting data feeds can be tough, both technically and because of organizational governance.
- Healthcare is a highly regulated field, so any algorithm involved in treating patients is going to be held to a high standard of scrutiny.
But put those aside for the moment, since they are well-known difficulties of working in healthcare tech. When you are exploring what makes things difficult, the aspect that may be even more challenging than the technical issues is the emotional one: the matter of trust.
Healthcare has always been about trust, and earning that trust is going to be a major challenge for the AI industry. Trust from the doctor that you aren’t trying to take her job. Trust from the patient that a computer is as reliable as his personal physician. Trust from the organization that you aren’t going to misuse their data. Trust in healthcare is emotional, not necessarily logical, so showing that your algorithm can do better than the average doctor is not necessarily going to be enough.
How do you gain trust? Luckily, we can look to other industries to learn. For example, if you’ve used Netflix, you’ve probably seen a recommendation stated in terms of “Because you watched that, we recommend this.” Netflix is being transparent in showing you the source of its recommendation, and you can judge the validity of the reasoning. Moreover, you get to decide whether or not to watch the recommended show. Transparency and control – two of many potential elements of trust.
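As a toy illustration (not how Netflix actually works internally, and with made-up field names), here is what attaching transparency and control to an algorithmic suggestion might look like in code: the output carries its own explanation, and nothing happens until the user agrees.

```python
# Toy sketch of a suggestion that carries its own explanation and leaves
# the final decision to the user. Field names and data are illustrative.
from dataclasses import dataclass

@dataclass
class Suggestion:
    item: str               # what the algorithm recommends
    because: str            # transparency: the evidence behind the recommendation
    accepted: bool = False  # control: nothing happens until the user agrees

    def explain(self) -> str:
        return f"Because you watched {self.because}, we recommend {self.item}."

rec = Suggestion(item="Show B", because="Show A")
print(rec.explain())
rec.accepted = True  # the user, not the algorithm, makes the call
```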
How can you apply this thinking to healthcare, to make sure you are working in the low-difficulty quadrants of the matrix? There’s no surefire recipe, but you might try a few things.
- Start with solving problems that are not threatening. Healthcare providers – doctors, nurses, physician assistants – don’t feel like they need help doing what brought them to medicine in the first place, namely taking care of patients, making diagnoses, figuring things out. What they hate are the small tasks required to make that possible: finding things scattered in the chart, documenting what they do, doing calculations, filling out forms, etc. If you can help them do those things, they will welcome you, not reject you.
- Show your work. Don’t just report that a mass has grown; show the volume, show the edges the algorithm has found, and allow for edits. Or if the algorithm is following a clinical guideline, show the steps. Give the healthcare provider both the ability to edit the work and the knowledge to do so.
- Gain trust on the operational side of things first. Show that AI can determine which patients are most likely to fail to keep an appointment. Show that you can accurately determine the times of peak volume. Use AI to guess the billing codes. If you can successfully work with IT and show that you won’t crash the system or create a data breach, they might consider letting you at the clinical data. If you can show that you add value or improve the bottom line with non-clinical data, you might have removed some barriers to the adoption of your product, and maybe even gained some champions. (A minimal sketch of this kind of operational prediction follows this list.)
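The operational examples above lend themselves to the same transparency-and-control pattern. Here is a hedged sketch of no-show prediction: the column names, the toy data, and the choice of a plain logistic regression are all assumptions standing in for whatever model and scheduling data you actually have. Note that it touches no clinical records, and the output still shows its work.

```python
# Sketch: predict appointment no-shows from operational (non-clinical) data.
# Column names, data, and the choice of logistic regression are assumptions
# for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [days_since_booking, prior_no_shows, is_morning_slot]
X = np.array([
    [30, 2, 0],
    [ 2, 0, 1],
    [45, 3, 0],
    [ 7, 0, 1],
    [21, 1, 0],
    [ 3, 0, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = missed the appointment

model = LogisticRegression().fit(X, y)

# "Show your work": report the risk and which features drove it, and leave
# the decision (e.g. whether to make a reminder call) to the scheduling staff.
new_appt = np.array([[28, 1, 0]])
prob = model.predict_proba(new_appt)[0, 1]
print(f"Estimated no-show risk: {prob:.0%}")
for name, coef in zip(["days_since_booking", "prior_no_shows", "morning_slot"],
                      model.coef_[0]):
    print(f"  {name}: weight {coef:+.2f}")
```

Deliberately boring models on deliberately boring data are often the fastest way to prove you can be trusted with the interesting data.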
Don’t make things harder on yourself. Target the easy and impactful, and you’ll have a better chance of being welcomed into the fold.