High Alpha Innovation CEO Elliott Parker’s keynote on artificial intelligence and the case for human ingenuity wasn’t the only insightful AI-centric session from Alloy 2024. The Engineered Innovation Group CEO and Founder Jake Miller also hosted an in-depth panel discussion around AI in healthcare and other highly regulated industries.
Jake spoke with Alpine Health Systems (a High Alpha Innovation portfolio company) CEO and Co-founder Humberto Lee, Zoë Foundry CEO and Co-founder Garrett Viggers, and Arkana Laboratories CEO Dr. Chris Larsen to learn how their companies and others in their space use AI to streamline operations and accelerate growth while being mindful of potential legal issues and ethical concerns.
Key Takeaways
- Garrett relayed how concerns around AI utilization in the employee benefits and underwriting space are valid, especially as they relate to potential data exposure. That said, he noted AI can be transformative for businesses, as the tech can help them more quickly and cost-efficiently choose the most appropriate, hyper-personalized insurance plans for their workforce and prevent them from wasting money on unused or underutilized benefits.
- Chris explained the many beneficial use cases for AI in healthcare: from research and development and drug discovery, to disease treatment and patient diagnosis. He added how AI is also alleviating time and resource concerns for the medical community. Tasks that used to take days for diagnosticians like himself now can be done in minutes, thanks to AI. However, maintaining a safety-first mentality when testing AI tools and being mindful of hallucinations remains vital, he said.
- Humberto detailed how AI has the potential to fundamentally reshape healthcare, but it’s important to remember that ChatGPT was not built for clinical care. Developing proprietary, foundational AI models and ensuring they use only the most accurate and timely data sounds daunting. But it’s what will enable healthcare organizations to provide best-in-class patient care while abiding by FDA and related industry regulations.
Learn more about how these entrepreneurs and their organizations use artificial intelligence to transform their day-to-day operations and realize long-term growth by watching the entire session below.
Sign up for our Wavelength newsletter to access additional Alloy 2024 content, including recaps of our other sessions and keynotes.
Transcript
Jake Miller: I'm looking forward to this panel of experts talking about how AI is fitting into heavily regulated industries. My name's Jake Miller. I'm founder and CEO of The Engineered Innovation Group. We work with organizations to bring innovation to life by building products, designing products, investing in technology, and making it real and taking it to market alongside our partners.
I'm going to pass down the line here to do quick introductions.
Garrett Viggers: Garrett Viggers, CEO of Zoë Foundry. We're a venture studio building companies in the employee benefits industry. It's great to be here.
Chris Larsen: Chris Larsen. I'm the CEO at Arkana Laboratories. We are a diagnostic medical laboratory based in Little Rock, Arkansas, really focused on the diagnosis of kidney disease, serving hospitals and clinicians from across the country.
Humberto Lee: Good afternoon. My name is Humberto Lee. I'm the CEO and Co-founder of Alpine Health Systems. We are a platform-as-a-service company. We help accelerate digital health products by providing companies with a suite of NLP and AI products that allows them to quickly get their products to market. Some people call us the Twilio of Healthcare. Happy to be here.
Jake Miller: Awesome. Well, thank you all. So AI is actually not new. You know, anyone that's used a credit card knows even in the ‘90s, we were doing anomaly detection, right? But generative AI, in the past couple years, has really exploded and changed the landscape.
I'm really curious to hear from each of you. What is the most interesting use case you have seen so far? Which is probably not fair because there's going to be a whole lot more use cases emerging here in your respective industries. Humberto, you want to start?
Humberto Lee: Sure. Wow. That's a good question. So many good examples, right?
Me, personally, when I look at AI, what really gets me excited is the fact that nowadays we have companies such as Genetika+ using AI to basically create new medicines for patients, right?
For example, the fact that now they can take a blood sample from you, use that information to grow stem cells into brain cells, give those brain cells antidepressants, and look for biomarkers or changes in those cells. Then, using AI, they can analyze that information alongside the patient's clinical history and genetics, and come up with very specific medications that address that patient's specific problems at a molecular level.
To me, that's extremely impressive.
Jake Miller: I love it.
Chris Larsen: Yeah, that's precision medicine, and that gets me really excited as well in my particular field in diagnostic medicine. As you said, AI is really not new. I mean, people have been working on machine learning models for years, kind of in image analysis that are really aimed at helping diagnosticians — like I'm a pathologist — to be more efficient or more accurate.
That said, we're just now starting to see some of them actually kind of cross that threshold from the research lab and actually into the clinic. So that's exciting. I hope that we continue to see more of that. I work with nephrologists primarily. So, these are clinicians that are extremely busy and they are just inundated with administrative tasks.
I mean, I've read stats that 20-30% of a physician's time is actually used for admin work. And for a nephrologist, I certainly believe it's at least that amount, if not more. So I've seen some tools recently, and I think they're just kind of entering the clinic, that are really aimed at efficiency.
I mean, things like creating a clinical note during the appointment so that as soon as it's complete, there's something there ready for them to edit, so they don't have to spend as much time generating the note. I've seen other things that will create a synopsis of a patient before the doctor walks into the room. Believe it or not, doctors spend a few minutes looking through the chart to remind themselves of who this patient is and how today's appointment connects with their history.
And when they're doing that, depending on whether it's a complicated patient, that process could take five minutes. Whereas now you can have an AI that can very quickly give you a paragraph with some of the pertinent facts, what's been going on, the relevant labs, and then why they're here today and how that connects.
So these sorts of efficiencies, when they're really implemented in the clinic, I think could really help clinicians spend their time doing more of what all that training, residency, and medical school prepared them for: actually using that knowledge to treat patients.
Jake Miller: 100%.
So I'm gonna skip ahead real quick and then I'm gonna ask you the question. Along those lines, what do you think are some of the blockers to getting generative AI technology into the hands of physicians, specifically in healthcare, or really clinicians, or even the back office?
Chris Larsen: I mean, medicine in general is going to be slow with new technology. Very slow, five or 10 years behind many other industries, at least.
I think it's just the mindset, and it's the right mindset. We're drilled in safety, safety, safety. It's always safety first, and privacy, things like this. And so there are a lot of barriers. You really have to demonstrate safety. You have to demonstrate efficacy. You have to show that the results you're getting are accurate.
And so these sorts of things are a big barrier to cross, to go from something that's more of a research tool to something that's actually used in the care of a patient. So, back-office stuff, maybe not quite as high of a threshold, but in the clinic with a patient, those things are going to face a lot of scrutiny.
Jake Miller: Yeah. Thank you. So Garrett, tell us about what you think is the most interesting use case.
Garrett Viggers: So I play in the health insurance, dental, vision, life, disability, voluntary benefits world. What's most interesting to me are the use cases that are getting through the POC or pilot phase: actually closing a contract, actually getting scale, and providing real value in the real world.
A few examples would be group life underwriting and individual life underwriting, right? Reading through hundreds of pages of medical records and extracting data, letting the underwriter stay in that loop for the art of underwriting while a lot of the science gets automated. And then also group and individual disability claims guidance.
There's a huge move there, both on the underwriting side and the claims side, where they're getting out of the POC and actually getting paid. And it's not just for a little subset of a subset of cases as they bring on new ones; it's for a whole block of business that's on short-term disability and on claim, and then the LTD block as well. So it's a better experience for those who are disabled.
Jake Miller: So I'm curious, and this is for anyone who wants to take it: to start in healthcare, or really any regulated industry, there are a lot of privacy and regulatory concerns.
How would you help folks in the audience understand how to navigate that? Do you have any suggestions or examples of what you've had to do in your companies?
Garrett Viggers: I sat in a room with CIOs last year alongside a CEO, a data science founder who was a wizard, explaining AI to all these chief information officers. And the conversation went from "well, we just kind of got into the cloud, and we think we need to rethink our on-prem strategy," to security, to "we're just trying to figure this out." These are kind of the barriers to entry.
So I think understanding that reality matters. I like to think of it as Web 2.5 or AI 2.5. My son's big into Web3, and then all of a sudden it was, well, slow your roll: Web 2.5. It's about actual ingredients; it's not all or nothing. So I think that's a better way to view it, and probably a better way to market what it really is.
If it's not AI and it's, you know, NLP or machine learning, it's probably better not to just say AI, because I think it gets confusing and it becomes a bigger barrier to entry.
Jake Miller: Yeah. I think we're already starting to see that. I don't know if you guys are, but I'm hearing companies, especially established AI companies, say, "we've pulled AI out of our marketing."
Because it's becoming a dirty word now, because everyone's slapping AI on what they're doing, you know. It's an AI washing machine. Okay, great. So, pitfalls. Humberto, what kind of pitfalls have you run into, or what would you tell folks to watch out for?
Humberto Lee: Yeah, so looking back at the trajectory of Alpine, I can basically break it down into two aspects. One, data acquisition, right? Because we had to build our own foundational AI models from scratch. We couldn't just go use ChatGPT, right? Because there are a lot of requirements. ChatGPT was not built for clinical care, given the fact that it's got a lot of hallucinations.
So we had to build these data warehouses and source a lot of data, right? But that data has to be representative of the people we're trying to help. If you look at the data, in some cases it will probably tell you that minority groups don't really get sick, because they don't come to the hospital, so that's what the data shows, right?
Well, the reality is the opposite. So to be able to find that data, you have to work with the right partners. In our case, working with OSF HealthCare and High Alpha allowed us to get access to that data to build the models, right?
The second part of our challenges was that we had to look at the regulatory aspects: we had to deal with FDA mandates and FDA regulations, and compliance issues too. Is our data secure? We also looked at ethical concerns. For example, with the data we're analyzing, in the case of a Native American population, there's this thing called data sovereignty, right?
The fact is that the data we're using belongs to them. And now, are we telling the right stories about this population with what we're saying via AI? So we have to be very conscious of how we use that information too. And then also look at building confidence with the clinicians.
The fact is that they have to understand, they have to have clear context as to how this information was used to build the AI models. The models are able to go through 250 pages' worth of medical notes and extract data: social and clinical factors, discharge delay risk, right? With all that information, you want to know, how do you get to that data?
How do you make that assessment, right? I want to know what page, what paragraph, what line that data came from. So understanding those concepts, those were some of the challenges we had to face as we launched Alpac in the first place.
Jake Miller: It's like explainable AI. There's a big barrier where folks want to understand, especially in healthcare, right?
Like, why did you say what you said? Why was the response the result it was? Because I need to have higher confidence. So I'm going to ask you another question. We spoke about this offstage. A lot of folks will come to us and say, I want to build a model. I want to build a large language model.
We'll very often say you don't need to go through all the effort and expense of building and training a model, but there are circumstances where it makes sense. Can you talk more about that?
Humberto Lee: Yeah, absolutely. In the area of healthcare, to have a great AI implementation, you have to find a very specific use case, right? What are you trying to fix? For us, the question was, can we accurately predict a patient's social disposition based on the data that's buried deep within the EHR system?
So it's about being able to pull data from every corner and analyze tons of it. I don't think people understand how much data one patient has in the medical record, data that's created on a daily basis, right? So pulling all that information out to build our models, for us, that was a challenge.
The fact is that we actually had the AI team go to the hospital at OSF in Peoria and work with the case management team on the floor. Then we had the case management nurses on our staff basically learn about AI. So we had a very cohesive team where we all spoke the same language.
So that's one of the ways that we overcame a lot of our initial problems.
Jake Miller: So let's move to risk mitigation. So everyone wants to be innovative. Everyone wants to solve problems. But in regulated industries, there's a lot more risk concerns than a lot of other companies. So Garrett, maybe you want to start from your perspective. What strategies can you offer the crowd on thinking about balancing risk and reward applying these types of technologies?
Garrett Viggers: I'd say most of us would agree that a lot of the focus is on the back office. Like, let's figure this out. A lot of the carriers we talk to have their little knowledge workers for the underwriters, for the actuaries, and for the claims, right? So they're figuring it out in the back office.
They're not necessarily ready to expose a model or a chatbot to a customer who's going to put data into it. So I think that's where there's a lot of efficiency gain, to earn goodwill and build trust within, in our case, the employee benefits industry. But another thing that we are really passionate about is the small-group business market.
Something like 96% of businesses in the U.S. are under 50 employees. And it is the most underserved market for benefits, because there's not enough money to be made. In the current model, it's all manual brokers and underwriters, and it's not profitable. So if we think about the mission to serve small businesses, AI and automation are a way to truly serve employers and employees with access to the best options, not just the only options I can manage because I'm a broker with 500 groups renewing in one month. You can't really serve them well, and you sure as hell do not want them to change from Blue Cross Blue Shield to UnitedHealthcare, even if it's a better deal. That's real. So I think it's about understanding the mission and where there's a real opportunity.
It's a super-underserved market, and it is the majority of businesses in the U.S. Couple that with understanding the realities of the back office and getting comfortable there, and then you have to have a strategy to deliver that value at scale with customers.
Jake Miller: Yeah, we are seeing something similar.
Generally speaking, bigger organizations will say, "Oh, I don't want to use the LLM, because I don't want my data to be put there, get leaked, and be used for training, especially if I'm going to send it off to OpenAI." And the reality is, in the last year, the technology has evolved or matured to a point where folks can host their own LLM, right?
Which is making it more secure and more accessible. But Dr. Larsen, how about you? Risk mitigation, I'll just leave it at that.
Chris Larsen: Yeah, so we have a very active research and development department at Arkana. And one of the things we are doing is, you know, we do hope to train and deploy AI models.
As I mentioned earlier, there's a big difference between training something you could use for fun in a research setting and then translating that to the clinic. But throughout the whole process, if you're involved in biomedical research, you go through training.
It's constantly on your mind: privacy, privacy. When you're doing this type of translational research that involves patient data, there's always a risk that you could disclose something. We're aware of that. You don't ever want to say there's no risk.
There is risk, but we are constantly aware of it, making sure with checks and double-checks and triple-checks that we're not exposing any PHI, particularly if we're working with an outside collaborator on some of these projects.
I mean, innovation always has risk. The other risk in the work that we do is that biomedical research is pretty expensive. And so deciding to deploy the resources to develop these models is a tough decision sometimes.
So one of the ways that we've tried to mitigate that risk, it's been mentioned a couple of times today, is through the SBIR grants through the NIH.
And we've had a lot of success with those over the last three years since we started, and found them to be a good source of funding. Not all of them get funded, but even if yours doesn't, the process itself is really helpful. It forces you to think about the problem and to write a commercialization plan.
Writing is thinking. It's very common for us to get started on an SBIR grant for some sort of assay or AI that we want to develop, and then halfway through realize we've got the wrong approach and go back and start over. So even when they're not funded, they're helpful for us.
Jake Miller: That is such a good point, and it's something we see when we work specifically with startups: risk isn't just technology risk. There's business model risk; risk can be a lot of different things. So I love that you pointed that out.
Chris Larsen: Yeah. I mean, we heard this morning about the fact that you need knowledge and you need wealth.
The resources we have available to make these things happen are our people's time and the money that we have, and for us, a small business, those are finite resources. So we've really got to choose carefully how we deploy them.
Jake Miller: Yeah, so this might sound like a bit of a tangent, but we spoke about something yesterday, about AI using a ruler. Can you share that example with the group?
Chris Larsen: Yeah, you were talking about the explainability of AI models, and that's a somewhat controversial topic whenever you're training a model. It depends on what type of AI it is, but many people will say, if it gives us the right answer consistently, we don't really care how it got there.
It's there. It just does it. So it's kind of a black box more or less. But for the most part in medicine, people want explainability. They want to know how did it get to that answer? Kind of what he was talking about, where we need to know where this data is coming from. And there was one kind of well-known example in my field in pathology where they trained this AI to evaluate tumors that were grossly evaluated, meaning not under a microscope, just pictures of the tumors.
And it got pretty good at identifying which ones were malignant and which ones were not. And when they finally dug into the explainability and looked at the pixels it was selecting to determine whether or not this was cancer, it highlighted the ruler in some of the pictures. Basically, the AI had learned that they're more likely to put a ruler into the picture when there's cancer present.
So this is the type of thing that is obviously not good. That's not very explainable, you know?
Jake Miller: I just love that example because it's very easy to understand. And Humberto, you were mentioning being able to audit the output. So it's not just what data you're putting into the models or what data you're training your RAG systems on, but understanding the ethical implications of the data that's being output.
So, any parting thoughts before we wrap up? That's a very open-ended question.
Humberto Lee: Well, I guess I can say something, right? Look at AI in healthcare. AI has the potential to fundamentally reshape healthcare and help so many people, right? The fact is that in the United States, 10,000 Americans turn 65 every single day.
So look at how AI is advancing new drug discoveries, right? For example, we had discovered 190,000 different ways of protein folding. Protein folding is basically, and Dr. Marci, keep me honest, how a protein, the polypeptide chain, folds into a 3D form that allows it to be extremely efficient and use the least amount of energy, right? So we had discovered 190,000 different ways of folding proteins. With AI, do you know how many we discovered? Take a wild guess. 200 million different ways, right? So the fact that we can, at some point in the future, cure Alzheimer's, Parkinson's, allergies, right?
And we could even prevent them in the first place. To me, that's where AI can truly fulfill its promise.
Chris Larsen: Yeah, I totally agree. There's a lot that goes into precision medicine, meaning the right treatment for the right patient at the right time, but AI certainly is going to play a role. I've only been practicing medicine for about 20 years, but I can say that even within that time, the growth in the complexity of our understanding of these diseases has been exponential, and it's nearly impossible for any physician to keep up with all of it.
So just speaking from a physician's perspective, I'm really not a believer that AI is going to take a physician's role, but I do think it's going to make us better at what we do.
From drug discovery, to matching patients to the right drug, to helping with the differential diagnosis, meaning the list of possible diseases, kind of helping doctors think outside their box. Maybe it could dig into the chart and see something that was buried two or three years ago that you're not paying attention to now.
I mean, these sorts of insights it might provide are really going to have an impact on patient care. And I think it'll be in the next five years.
Jake Miller: Garrett, anything?
Garrett Viggers: I'm excited to be alive and able to serve in an industry, a specific vertical, and to see this technology better serve employers and employees.
And we'll see what the next five years will look like. I had another company in this space that got acquired, and there was always a big debate: are brokers always going to be there, or are we going to go direct-to-employer? And I was a broker about 15 years ago.
I feel that especially in the small-group market, the only path is through leveraging automation with humans in the loop, but I think there's a big opportunity. We'll see; people have set out for years to eliminate brokers. If you know the benefits side of the insurance industry, that's way easier said than done, because brokers kind of own distribution.
So they throw their weight around with the carriers, and it's a bit dysfunctional. I think there's a big opportunity for selling and for enrollment with the use of AI. We talked about hyper-personalization to increase participation, not just to sell more benefits and make more money, but so that the right benefits are being purchased based on real needs. There's a big opportunity there.
Today it's done with humans and it's not profitable, and because people don't understand it, it's confusing. So we'll see; in the next three to five years, I think there's a big opportunity for selling and enrolling with this technology, interacting with these customers and really serving the small-group market.
So I'd love to be a part of that.
Jake Miller: I love it. The moral of the story here is human in the loop. We're not going anywhere. We're not going to be replaced. We'll just get faster and more productive. Thank you guys so much.