Mar 9th, 2023

How AI is Making a Difference in Healthcare

Katherine Tattum
Tali AI
7 min read

Table of Contents

Predictive Analysis
Example: Chartwatch
Example: Pulsedata
Image Analysis

When most people think about technology in healthcare, they think about machines: CTs, ECGs, injection pumps, machines that go ping. However, software plays a huge role in healthcare, and AI is building on all the work done across the industry over the last several decades.

Everyone’s talking about AI, about how much it can do – it’s the latest shiny, new toy. People are trying out ChatGPT to do all sorts of things, with varying levels of success.  Companies of all stripes are talking about needing to have AI in their toolbox, as part of their strategy – just like they did about the internet in the late 1990s, Web 2.0 in the late aughts, or blockchain five years ago.  It can feel a bit like a fad, but for us here at Tali it isn’t just about using AI because it is the latest, cool thing.  It’s about using the right technology, and using it well, to make a difference.  

There are two scenarios that really leap out for me where AI applications make sense: first, when you've got too much information for a human to make sense of on their own; or second, when the work is mundane but still requires some complex reasoning or knowledge to determine what to do next. Today, I’m mostly talking about the former; I’ll talk more about the latter in my next post.

The key in these two situations is to get the computer to do what computers are good at, so that humans can do what humans are good at. If the human being can focus on their judgment, empathy and creativity, and not on marshalling masses of information, the patient will have a much better experience.

So how do you get started? Well, above all else, AI needs data. It doesn't matter what the problem is, whether you're talking about finding cat videos or diagnosing disease, the requirements are the same: you need enough data, and it needs to be well-defined, expertly labelled, representative and unbiased.

Health data comes primarily from EHRs. They started out as systems to assist in billing, and over time more and more administrative tasks were incorporated, from scheduling to bed management, departmental systems and so on. As hardware improved and it became easier to record data at the bedside, they came to be used to document clinical care throughout the hospital and in primary care (for a more extensive background on Electronic Health Records (EHRs), I’d recommend this piece in Forbes).

Insurance claims data also form a rich source of information.  For Canadian readers, this will be a bit novel, so let’s imagine the situation:  a patient is in an accident and hurts their leg, so they attend hospital where they are evaluated, treated, given a prescription for a painkiller and a referral to a physiotherapist.

In Canada, the hospital will have data about the patient’s stay there, and any follow up visits they make; the pharmacy will have its data about the prescription; the physio will have their data about the therapy provided; and finally, the primary care physician will get a summary from the hospital and physiotherapist. Some of this information, such as any lab tests performed, might be available to everyone involved in the patient’s care through a provincial system.

In the United States, all of that will still be true – but also the insurance company will have a record of all of those things as claims data. A claim is, at its simplest, a record of who got treated, how, by whom, for what, where and for how much. It won’t include the result of a test for example, but it will indicate that the test was performed.  

What's really cool about insurance claims data is that it provides a longitudinal picture of the patient’s experience. Looking through a clinical lens, the data is all over the place: if I want to get a sense of how that entire experience was for the patient, I've got to get the data from the hospital, the physio, the primary care physician and the pharmacy. Using claims data, all of that is available in one place, which makes the process much simpler. And I can look downstream for other events, such as additional hospital stays, more prescriptions, additional therapy and so on.
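To make that a bit more concrete, here's a rough sketch in Python of what assembling that longitudinal view might look like. The fields and codes are invented for illustration; real claims formats vary by payer.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class Claim:
    """A simplified claim record: who got treated, how, by whom,
    for what, where and for how much (illustrative fields only)."""
    patient_id: str
    service_date: date
    provider: str        # hospital, pharmacy, physio, ...
    procedure_code: str  # what was done
    diagnosis_code: str  # why it was done
    amount: float

claims = [
    Claim("P001", date(2023, 1, 5), "General Hospital", "27750", "S82.1", 4200.00),
    Claim("P001", date(2023, 1, 6), "Main St Pharmacy", "RX-ibuprofen", "S82.1", 18.50),
    Claim("P001", date(2023, 2, 1), "City Physio", "97110", "S82.1", 95.00),
]

# Group claims by patient and sort by date to get a longitudinal timeline.
timelines = defaultdict(list)
for claim in claims:
    timelines[claim.patient_id].append(claim)

for patient_id, events in timelines.items():
    events.sort(key=lambda c: c.service_date)
    print(patient_id, [(e.service_date.isoformat(), e.provider) for e in events])
```

The point isn't the code itself, it's that every downstream event lands in the same table, so following a patient's journey is a grouping and sorting exercise rather than a data-collection project.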

Other sources of healthcare data include genomics data, data generated through wearables or from your phone, and the apps you use to manage your well-being, such as period tracking – all of that counts as health data.

Because it's healthcare we do have some specific and critical constraints.

The first constraint is privacy. Privacy, it goes without saying, is paramount. The legislation and regulation here are very well established; the laws have been around for decades, and the underlying ideas for even longer. What that means in an AI context is that the patient to whom the data belongs must have consented to having their data used for this purpose. Even then, we must ensure that it is de-identified, so that the patient's privacy is protected no matter what.
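In code, the de-identification step often boils down to stripping direct identifiers and replacing the patient ID with something that can't be reversed. Here's a minimal sketch with hypothetical field names; real de-identification follows established standards and expert privacy review.

```python
import hashlib

# Direct identifiers stripped before data is used for model development.
# These field names are hypothetical.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "health_card_number"}

def deidentify(record: dict, salt: str) -> dict:
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Replace the patient ID with a one-way salted hash so records from the
    # same patient can still be linked without revealing who they are.
    clean["patient_key"] = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()
    del clean["patient_id"]
    return clean

print(deidentify(
    {"patient_id": "P001", "name": "Jane Doe", "phone": "555-0101", "diagnosis": "S82.1"},
    salt="project-specific-secret",
))
```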

The other constraint, of course, is interoperability: just because data has been stored in electronic systems doesn't mean that it is all accessible in the same way. The FHIR standard was developed over the last ten years or so but only popularized in the last four or five. It matters here because it makes it easier to get data out of those systems, consistently and efficiently.
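For a sense of what that looks like in practice, here's a small sketch of pulling patient records over FHIR's REST API. The endpoint shown is a public test server, not a production system, and the search parameters are just examples.

```python
import requests

# Search for Patient resources on a FHIR R4 server (public test endpoint).
base_url = "https://hapi.fhir.org/baseR4"  # swap in your own server's base URL
response = requests.get(
    f"{base_url}/Patient",
    params={"family": "Smith", "_count": 5},
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
response.raise_for_status()

# FHIR searches return a Bundle; each entry wraps one resource.
bundle = response.json()
for entry in bundle.get("entry", []):
    patient = entry["resource"]
    print(patient.get("id"), patient.get("birthDate"))
```

Because the resource shapes and the query syntax are the same no matter which vendor's system sits behind the URL, the same client code works across organizations, which is exactly the consistency that matters for assembling training data.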

One interesting development in the last few years is the advent of data platforms. Organizations are building out platforms that are not only specifically designed for healthcare data, but also designed to store it in a structure that can be used for AI model development. Some organizations, such as the Mayo Clinic, are building these platforms from the ground up for those models, and from the ground up to hold data from different organizations and diverse sources.

Predictive Analysis

The area I love learning about is using AI for predictive analysis to better support the healthcare team, where AI can augment what humans can do. There are a couple of examples I can talk about here that I just love, but they are not the only ones.

Example: Chartwatch

This is a program in place at St. Michael's Hospital here in Toronto that they use on admitted patients. It watches a hundred variables in real time and, every hour, calculates the probability that the patient will need the ICU or will die in the next 48 hours. If the probability passes a certain threshold, the system alerts the clinical team so that they can review the situation and decide what to do.
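The pattern itself is simple to sketch. Here's a toy version in Python; the risk model is a stand-in with invented coefficients, the threshold is hypothetical, and none of this reflects how Chartwatch is actually built.

```python
import math

ALERT_THRESHOLD = 0.20  # hypothetical cut-off; a real system tunes this carefully

def risk_score(observations: dict) -> float:
    """Stand-in for a trained model: maps the latest observations to a
    probability of ICU transfer or death within 48 hours.
    The coefficients below are invented for illustration."""
    z = 0.04 * (observations["heart_rate"] - 80) + 0.9 * (observations["lactate"] - 2.0)
    return 1 / (1 + math.exp(-z))

def hourly_check(patient_id: str, observations: dict) -> None:
    probability = risk_score(observations)
    if probability >= ALERT_THRESHOLD:
        # In practice this would notify the clinical team, not print to a console.
        print(f"ALERT: patient {patient_id}, risk {probability:.2f} – please review")

hourly_check("A123", {"heart_rate": 118, "lactate": 3.4})
```

The hard part is everything the sketch leaves out: a model trained and validated on real patient data, and a workflow where the alert reaches the right clinician at the right time.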

Example: Pulsedata

This company used claims data to build out a model to identify people who are likely to require emergency treatment for kidney disease, and get them the care they need before it becomes urgent. The idea is that getting that kind of treatment in a planned way is far cheaper, less stressful and has better patient outcomes than if it is only recognized when it has become an emergency.  What I find to be the most interesting is a comment from their CTO:

“It's not just one factor that’s the most predictive, it’s the velocity or trend of that factor, or the ratio of that factor against another, or this factor existing when another does not. ” (source)

I try to imagine defining and testing rules ‘the old-fashioned way’, explicitly coding them to handle all the possible scenarios when so many variables are in play; it simply wouldn’t be feasible.
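What a model can learn instead are derived features like the ones that comment describes. Here's an illustrative sketch: the lab values, field names and flags are invented, and this is not Pulsedata's actual feature set.

```python
from datetime import date

# Hypothetical lab history for one patient: (date, creatinine mg/dL, eGFR).
history = [
    (date(2022, 1, 10), 1.1, 72),
    (date(2022, 6, 2), 1.4, 61),
    (date(2023, 1, 15), 1.9, 47),
]

def velocity(values, dates):
    """Rate of change per year between the first and last measurement."""
    years = (dates[-1] - dates[0]).days / 365.25
    return (values[-1] - values[0]) / years if years else 0.0

dates = [d for d, _, _ in history]
creatinine = [c for _, c, _ in history]
egfr = [e for _, _, e in history]

features = {
    # "the velocity or trend of that factor"
    "creatinine_velocity_per_year": velocity(creatinine, dates),
    # "the ratio of that factor against another"
    "creatinine_to_egfr_ratio": creatinine[-1] / egfr[-1],
    # "this factor existing when another does not"
    "nephrology_visit_without_dialysis_claim": True,
}
print(features)
```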

Both of these cases let computers do what computers are good at: go through vast quantities of data and use it.  Then let humans do what humans do best: make informed decisions and help the patient understand what is happening to them and why.

Image Analysis

Radiology, as a specialty, has been using electronic images for decades.  What’s really cool about this data is that it isn’t just the images; with the images you also get the radiologist’s interpretation of those images.  This combination – years of data, along with contextual information – has given AI researchers a lot to work with.  As a result, there are tons of projects out there, working to use AI within this specialty. 

The best known example is reading mammograms:  one recent study showed that AI is actually better than the average physician at correctly identifying breast cancer from a mammogram, but perhaps the better path is to use both.  Part of the reason people are excited about this is that mammograms are notoriously difficult to read. Reducing the rate of both false positives and false negatives would be a boon for the patients undergoing these scans. 
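One simple way to ‘use both’ is to treat the AI as a second reader and escalate only when it and the radiologist disagree. The sketch below is purely illustrative; the threshold and labels are made up and this is not a validated clinical protocol.

```python
def triage(ai_probability: float, radiologist_read: str, threshold: float = 0.5) -> str:
    """Combine an AI score with the radiologist's read (toy logic only)."""
    ai_read = "suspicious" if ai_probability >= threshold else "normal"
    if ai_read == radiologist_read:
        return radiologist_read           # both agree: proceed as usual
    return "second radiologist review"    # disagreement: escalate

print(triage(0.82, "normal"))  # -> second radiologist review
print(triage(0.10, "normal"))  # -> normal
```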

Another neat application is where Mount Sinai in New York City is trying to use AI to read MRI scans to diagnose Alzheimer’s disease.  This is such a keen area of research that a team in Australia is using AI to generate synthetic data for patients with Alzheimer’s, so that upcoming AI models can be evaluated more effectively. 

These are examples where it isn’t that humans can’t do what is needed – rather that it is very difficult to do, or is difficult to do at scale.  

Radiology isn’t the only specialty that generates images.  Healthcare providers have always sketched pictures of their patients’ injuries, wounds and so on, and having mobile phones with great cameras allows them to do so electronically with much greater fidelity. Whether it is using photos of wounds to evaluate healing (here, here), or photos of moles to determine a prognosis, AI is helping improve patient outcomes.  Let computers do what computers are good at, so that humans can do what humans are good at.

There are many more examples; these are just some that we’ve come across in our reading and wanted to share. The potential is huge. There's a vast amount of information to work with, and a massive desire to make the solutions work. We will get better at creating datasets to avoid bias and ensure representation, and as we get better at training systems, AI will have a massive impact on health care for years to come.

Next time, we’ll look at speech recognition and generative AI in the primary care space, something near and dear to our hearts here at Tali AI.  
