AI can unlock supply to meet demand, says Johns Hopkins physician IT leader
Dr. Brian Hasselfeld has a little Wall Street in his background and likes to talk about the potential benefits of artificial intelligence to healthcare in economic terms – the law of supply and demand.
Everyone knows how healthcare works. A patient sees their primary care physician, who issues a referral to a specialist. That specialist may then issue a subspecialty referral, which may finally reveal the right answer. And these days, perhaps a precision medicine referral is needed as well.
That’s a lot of care. A lot of demand. And unfortunately, healthcare faces severe staffing shortages and is very limited in its supply.
“From my standpoint, it’s not about the tools, it’s really about the access problem,” Hasselfeld said of AI. “How do we care for more patients with the same clinical workforce we have today? How do we meaningfully increase productivity? Care for more people on top of the same preexisting resources? And not simply ask our clinical workforce to work more?
“How do we inject really meaningful intelligence into what comes first and what comes next for patients in their journey?” he continued. “And if we can start to extract some of that unnecessary care out of the system, we can unlock some additional supply.”
Hasselfeld is senior medical director of digital health and innovation at Johns Hopkins Medicine, and associate director of Johns Hopkins inHealth. He’s also a primary care physician focused on internal medicine and pediatrics at Johns Hopkins Community Physicians.
We interviewed him as part of our series talking with top voices in health IT about artificial intelligence. In this, part one of the interview, he discusses applying AI overall in healthcare. In part two, which will appear tomorrow, he goes in-depth into how Johns Hopkins Medicine is using AI today.
Q. As a senior digital health and innovation executive, what forms of artificial intelligence do you have your eyes on most?
A. We’re at a phase where we’re not completely sure of the breadth of the problems to be tackled with what we would term the new form of artificial intelligence.
Most professionals tracking the general AI industry across industry verticals are starting to recognize a distinction. There is what we might call historical or traditional AI – tools built on predefined, computer science-based rules, inputs and outputs. And now there is the new generative AI, certainly made famous last January by Microsoft and OpenAI’s announcement around ChatGPT, and now by all the other competitors in the marketplace.
The technology truly is going to be limitless in how it can be applied to the problems to be solved in healthcare. Instead of thinking about which particular type of tool is a priority for us, I’d rather reframe this as a pivotal moment for healthcare to address a major resource issue.
I’m a former economics undergraduate who went to Wall Street, so bear with me as we talk economics for a second. We have a meaningful supply/demand mismatch in healthcare today. Anyone who has tried to schedule a visit at any institution, big or small, academic or non-academic, certainly appreciates the difficulty of navigating a relatively complex health system and the wait times that come with it.
But from my perspective, technology has not yet done the thing that technology needs to do in healthcare, the thing it’s done across many other industries, across the economy – inject productivity and efficiency gains to help bring into balance all of the demand for healthcare from our patients and the supply we have to offer, which arguably has been relatively fixed.
From my standpoint, it’s not about the tools, it’s really about the access problem. How do we care for more patients with the same clinical workforce we have today? How do we meaningfully increase productivity? Care for more people on top of the same preexisting resources? And at the same time, of course, avoid the key balancing component, which is we can’t simply ask our clinical workforce to work more.
Arguably, many of the interventions have been to try to decrease the amount of work on our clinicians. The tools to be applied really focus across that patient access journey as a major priority – how to get patients to the right kind of care at the right time, faster.
Certainly, some early products being tested in the marketplace help patients identify what kind of care they actually need, instead of going through the regular paradigm of visit to referral to subspecialty referral before finally getting to the right answer. I also have a role in our precision medicine initiative, so you might call that a precision referral, or precision care planning.
How do we inject really meaningful intelligence into what comes first and what comes next for patients in their journey? And if we can start to extract some of that unnecessary care out of the system, we can unlock some additional supply.
On the flipside, we need to be in a paradigm where it’s not one clinician to one visit to one patient for 15 minutes, right? That does not scale because time and people are fixed. And we need to figure out a pathway to caring for a larger number of patients with greater intelligence between the data ingested and the care plans directed back to our patients.
I agree with one of the former leaders in this series of articles, Dr. John Halamka [at the Mayo Clinic], that patients do not come to clinicians to be read a textbook.
So I’m certainly not advocating that we can care for 20 times as many patients or remove the clinician from the care journey. But I do believe the paradigm of one visit every three, six or twelve months is obviously a broken one in a system that should be oriented around prevention. And that really does mean we have a major home data problem to be tackled, which I think is a major area of opportunity as the tools continue to evolve.
Q. You told me digital apps, connected devices, wearables and home sensors have all purported to be the future of individual health tracking – and yet broadly, these methods have had little uptake and are rarely found in the clinician/patient relationship. You believe the newest iterations of AI will finally address the key barriers to bringing this new information into clinical care. Please elaborate.
A. It’s actually a perfect pickup from where we just ended that last question. Home data ranges from the watch or the Fitbit on your wrist, to devices at your bedside, to the more traditional ways of measuring at home, such as blood pressure cuffs, scales, glucometers and continuous glucose monitors.
We have this wealth of home-based data. Certainly, our own precision medicine group at Johns Hopkins Medicine, looking at multiple sclerosis, has put forward an amazing new paradigm for how that data could inform diagnosis and treatment planning in the future.
Recognizing that movement tracked by a Fitbit or a similarly advanced wearable can meaningfully correlate with the progression of a movement disorder makes good sense – and in the long run it could potentially replace the need for patients with MS to routinely get to advanced quaternary neurologic care centers for expensive MRIs.
But how do we take that measurement paradigm and scale it? Look at our outpatient clinicians today – and I’m a primary care clinician myself – a full-time primary care clinician may care for 1,500 to 2,000 patients.
And let’s compare that to the hospital. In the hospital, what’s our most intensive area of measurement? The ICUs and critical care units. In those units, a team of clinicians cares for 15 or 20 patients at most, with nursing ratios of one-to-one or one-to-two. That’s the level of staffing it takes to have patients connected to devices on a regular basis – certainly daily, if not hourly.
And even on the floors of our hospital, we have nursing ratios of one-to-four, one-to-six, and clinical teams around them, and that’s taking data every four to six hours or every 12 hours.
So how do we go from this environment, where we have one clinician to a couple of patients with nursing support, to one clinician to thousands of patients with minimal other longitudinal support – and still expect to get data in every day, multiple times a day, without overwhelming our workforce, systems, practice models and payment models, which are not ready for that level of home data ingestion?
That’s why we’ve seen things like remote patient monitoring struggle to achieve broad uptake. I think we’ve seen Medicare continue to look at how it might optimize or change RPM coding, or sometimes even question whether it should remove it.
Good longitudinal information about patients throughout the month or year would seem better than the transactional nature of a few visits throughout the year. What’s missing in between are the systems to take all of that data and make it clinically relevant, clinically meaningful and interpretable, and put it in the context of that patient.
So, we could create a system where I give you a blood pressure cuff, and I say blood pressure over X and under Y is bad, and we could pick those numbers and they would be true for most patients. But unless I know you, unless it’s precise to your context, that may or may not be bad for you, depending on your clinical goals and your underlying clinical conditions and our mutual treatment goals.
So, we need systems that can both handle significant amounts of remote data and make it relevant to the context of the patient, based on everything we know about you – especially the things we’ve discussed in our visits and around your treatment plan.
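To make that concrete, here is a minimal sketch of the difference between a one-size-fits-all threshold and one driven by the patient’s own chart. The threshold values, the PatientContext fields and both functions are hypothetical illustrations, not a description of any Johns Hopkins system.

from dataclasses import dataclass

@dataclass
class PatientContext:
    # Hypothetical summary of what the chart says about this patient.
    systolic_goal: int    # target agreed on in visits, e.g. 130
    frailty_risk: bool    # aggressive lowering may be harmful if frail

def generic_flag(systolic: int, low: int = 90, high: int = 180) -> bool:
    # One-size-fits-all rule: "over X and under Y is bad" for most patients.
    return systolic >= high or systolic <= low

def contextual_flag(systolic: int, ctx: PatientContext) -> bool:
    # Same reading, interpreted against this patient's goals and conditions.
    upper = ctx.systolic_goal + 20            # drifting well above the agreed goal
    lower = 100 if ctx.frailty_risk else 90   # be more cautious about lows if frail
    return systolic >= upper or systolic <= lower

reading = 155
ctx = PatientContext(systolic_goal=130, frailty_risk=False)
print("generic rule flags it:", generic_flag(reading))             # False: 155 is under 180
print("contextual rule flags it:", contextual_flag(reading, ctx))  # True: well over this patient's goal

The point is not the specific numbers, which are invented here, but that the rule is parameterized by the individual patient’s documented goals rather than by a single population-wide cutoff.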
So when we talk about the applications of generative AI to solving problems in healthcare, we’ll often hear about the problem of taking the unstructured data in the chart, the written notes especially, and making it something discrete – something structured and understandable for many other types of systems – to help optimize care.
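As a rough sketch of that unstructured-to-structured step, the snippet below asks a language model to pull discrete fields out of a free-text note. The call_llm helper, the prompt wording and the field names are hypothetical placeholders for whatever generative AI service and schema an organization actually uses.

import json

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder; wire this to the generative AI service of your choice.
    # It is expected to return a JSON string matching the requested schema.
    raise NotImplementedError

def structure_note(note_text: str) -> dict:
    # Turn a free-text clinical note into discrete, machine-readable fields.
    prompt = (
        "Extract these fields from the clinical note as JSON: "
        "blood_pressure_goal, weight_goal, active_conditions (list), "
        "medications (list). Use null for anything not mentioned.\n\n"
        "Note:\n" + note_text
    )
    return json.loads(call_llm(prompt))

# Example, once call_llm is implemented:
# fields = structure_note("Pt with HTN on lisinopril 10 mg. Goal BP under 130/80. ...")
# fields["blood_pressure_goal"]  ->  "under 130/80"

Once the chart’s notes are reduced to fields like these, they can feed exactly the kind of patient-specific rules sketched above.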
That’s the real opportunity here. Part of my job at Hopkins is also to help oversee our virtual care teams; I led those teams through the pandemic. And what we have an opportunity to do is really unlock the value of those remote-connected devices and the volume of data that comes in between visits.
If I could have a system that knows the notes in your chart and understands what’s been said about blood pressure and weight, your goals, what conditions you have and what medications you’re on – and make that a precise layer of intelligence around the incoming data, so we don’t reproduce the alarm fatigue that already exists on the inpatient side – then I could take that to exponential scale on the outpatient side.
We have an opportunity, finally, to create a very intelligent layer around home-based data for our clinical workforce, which is not going to expand in size and certainly cannot take on measuring 1,000 or 2,000 patients’ home-based data on top of a full regular clinical day.
I’m very excited about the opportunity to finally unlock what we want for our own family members: more continuous information about the conditions that matter for our patients, interpreted, ready, available and actionable as the year progresses.
To watch a video of this interview, click here.
Editor’s Note: This is the seventh in a series of features on top voices in health IT discussing the use of artificial intelligence in healthcare. To read the first feature, on Dr. John Halamka at the Mayo Clinic, click here. To read the second interview, with Dr. Aalpen Patel at Geisinger, click here. To read the third, with Helen Waters of Meditech, click here. To read the fourth, with Sumit Rana of Epic, click here. To read the fifth, with Dr. Rebecca G. Mishuris of Mass General Brigham, click here. And to read the sixth, with Dr. Melek Somai of the Froedtert & Medical College of Wisconsin Health Network, click here.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: [email protected]
Healthcare IT News is a HIMSS Media publication.