Welcome to an enthralling episode of “The AI Frontier: Ethical and Policy Considerations for a Healthier Future.” In this captivating discussion, we delve into the profound impact of cutting-edge technology on the healthcare industry. Join us as we unravel the fascinating relationship between artificial intelligence (AI) and healthcare, featuring a highly esteemed guest speaker, Dr. Leith States, from the U.S. Department of Health and Human Services (HHS).
Prepare to be enlightened as we explore the transformative potential of AI in the healthcare landscape, while also delving into the intricate ethical, policy, and regulatory challenges it presents. Dr. Leith States will expertly guide us through an exploration of AI-driven healthcare advancements and their implications for creating a healthier future.
Our conversation will revolve around two riveting topics. Firstly, we will unlock the power of AI in prevention and early detection. Together, we will discover how AI predictions are effectively combating chronic diseases, revolutionizing mental health intervention through early detection, and enabling AI-driven prevention and treatment approaches for substance abuse. Moreover, we will explore how AI can address health inequities related to gender, race, and ethnicity by leveraging data-driven solutions.
We will navigate the ethical, policy, and regulatory considerations of AI in healthcare. Join us as we delve into the ethical challenges associated with the implementation of AI chatbots, understand the policies and regulations that govern AI-driven tools in healthcare and public health settings, and explore the potential of AI in enhancing the mental health and well-being of healthcare professionals.
I appreciate you being here. I’m Hanh Brown, the host of the Boomer Living Broadcast. And today’s LinkedIn Live event is “The AI Frontier: Ethical and Policy Considerations for a Healthier Future.” We’re standing at the threshold of a new era shaped by artificial intelligence, a technology that, while promising, presents unique challenges, from generating misinformation and increasing polarization to job displacement. The burning questions are: how can we harness AI’s power responsibly? And what can we do to mitigate its risks? To help navigate this complex landscape, I’m honored to have Dr. Leith States from
the U.S. Department of Health and Human Services. Together, we’ll unravel AI’s intricacies, uncover its vast potential, and try to carve a path towards a future where AI is a positive force in today’s society. AI’s influence spans multiple sectors, including medicine. It offers extraordinary potential, predicting disasters, detecting cancer with precision. But these advances underscore the need for thoughtful regulation and transparency, especially in healthcare, where AI’s role is increasingly significant. With our society aging rapidly, it’s critical to address how AI can serve our older adults responsibly and empathetically, without compromising diagnostic accuracy or
infringing on patient rights. We’ll delve into these issues, and much more, today. As we embark on this journey, exploring the challenges, discussing the ethical and policy implications, and envisioning a healthier, more equitable future powered by AI, we welcome and encourage your active participation. So let’s get started. I’d like to introduce Dr. Leith States, whose life embodies resilience and transformation. His experiences have inspired a deep appreciation for grace, humility, and second chances, which shape his life and work. Believing in the power of love to dispel fear and nurture compassion, Dr. Leith States sees
AI as a tool that can reflect these virtues and create transformative change. Dr. Leith States, welcome back to the show.
Good morning, Hanh. Thank you for having me again, and thank you for bearing with some of my technological issues for me. I guess very appropriate for the topic we’re speaking to. Thank you again for having me. Very happy to be here.
Thank you. Thank you so much. Okay, so let’s dive right in. Let’s kick things off with a question that’s likely to be on everyone’s mind. Given the rapid pace at which technology is progressing, especially in the field of AI, what is your personal view on this whirlwind speed of AI innovation? Do you think that we’re keeping up, racing ahead, or maybe struggling to catch up?
Well, I think in general, stepping back and looking at what’s required for the advancement of any people group, any civilization, any nation, we need disruption. And disruption on its own is going to be difficult because it implies the need for change. And unfortunately, I think with AI, our preconceived notions of how quickly that change would come or how impactful that disruption would be have been grossly underestimated. So I think that when the theoretical impacts versus what’s actually happening in reality don’t match, that creates a reflexive desire to slow down, or just a reflexive desire to retain or regain some type of control, whether that’s actual control or perceived control. So I think that’s where some of the danger lies with where we’re at right now. So to your original question with regard to too fast, too slow, just right:
I think we’re doing all three. And unfortunately, I don’t think that they’re all aligned on the same sheet of music with regards to pursuing a reality that leads to the ethical, appropriate use of AI, with guard rails in place that protect us from a variety of the perceived and real threats, and that also maximizes the good that it could do.
Great, great, thank you. OK, so we’re going to go through some very important topics. Let’s get started with chronic diseases. When dealing with chronic diseases, it can sometimes feel like the disease is in the driver’s seat. The daily management, the constant diligence, well, it can be really tough. Perhaps there’s a silver lining on the horizon, and that’s AI. And it’s not just about data or algorithms. It’s about shifting the power back into the hands of the patients. So Dr. States, can you tell us more about how AI might help patients regain some control in their battle against chronic diseases?
So that’s a really good question. I think you couch it from the right perspective of gaining control, or that capacity of agency, of self-efficacy, right? The inherent capacity to be able to make changes in one’s own life. So I don’t think any individual or sector within health or pharma or biotech would really cast AI as the only tool to use in combating chronic disease burden, whether nationally, globally, or in communities at the hyper-local level. It’s an augment, but it certainly isn’t something that’s going to be a cure-all. I guess a good way to look at this is from the standpoint of what health care is often asked to do at baseline, which is to provide solutions for a variety of issues that it has no business providing solutions for. It frequently is asked to do so because it’s the point of quantification in terms of tracking outcomes, whether that’s morbidity, mortality, some type of granular lab value that can be tracked, or reflections of the upstream issues that have created those health manifestations, whether it’s a chronic disease or exposure to some type of infectious source or environmental exposure. And I think that where we need to have, I want to call it a strategic pause for health care, is to realize that the greatest potential impact for AI is likely going to occur outside the walls of a hospital or a health system. And by that, I mean, if we can use AI to better create a built environment or recreate
built environments, or more adequately assess poverty levels, or address starvation, address malnutrition, address inequitable areas around criminal justice, things that lead to baseline stress, increased levels of malignancy, inability to buy the correct foods or have meaningful employment; to reduce recidivism, to reduce imprisonment to begin with; to allow for better capacity to care for those key transition stages in life, going from adolescent to adult, or from, you know, independent living to assisted living; or to assist the capacity to maintain an independent lifestyle for aging populations. I think that those are meaningful considerations that I hope are not lost on health care at large. And it seems that that’s not the case, but there’s always that risk that we develop tunnel
vision, by function of mandate or by function of reimbursement and payment, that creates a sense that health care is supposed to be the agent to change all. But I think we just need to better engage. And this is a common refrain that we’ll hear across many issues in public health, that we just don’t speak well to other sectors of life. And that’s partly because we don’t go out of our own cocoons from time to time. And part of it is we just speak a different language in general. So that’s part of my hope around AI, that it maximizes the good outside of the walls of the hospital to improve quality of life and mitigate those occurrences that require us to have increased touch points later in life with the health system.
Great, great, thank you. And also the idea of patients taking control, making informed decisions about their health, that’s a future that we’re all hoping for. So thank you, thank you.
All right, so let’s talk about mental health. It’s an issue that’s very close to many of our hearts. It’s often silent, invisible, and it can affect anyone. The sooner that we can detect and intervene, the better the chances of recovery. And the real game changer could be AI; it’s got the potential to step in early, to alter life trajectories, and to bring back hope. So can you share your thoughts on how AI could potentially be the beacon of hope in early detection and intervention?
Well, I think it does parse out in a similar fashion with regards to where we see the greatest impact for the amount of concentrated, I guess, AI utilization. And while I think that, you know, with AI, the devil is obviously in the details of how the data is presented, how it’s coded, what the algorithm or the ML developers are, you know, seeing as important, right? So a lot of that gets back to the demographic makeup of the teams. So there’s a very human touch on AI, whether we have generative AI or other types of AI that would, you know, provide some evaluative component to the black boxes that we typically see with regard to, like, how did the AI come to that decision? Well, that’s not easily discernible in some cases.
And that is one area of caution where I think that, you know, in the area of mental health, behavioral health, suicidality risk, substance use risk, the predictive aspects of it can be problematic with regards to what data is used and then how that data is used. So while I think that there’s a large capacity for it to impact those touch points where you would have more of an acute intervention, or even some sub-acute intervention where you’re hitting the mark before crisis sets in, there is not, from my standpoint, a clear understanding among AI developers or just, you know, entities that are invested in utilizing AI frameworks to apply to these types of problem sets. One example I’d give is, you know, Facebook and their approach to algorithmic development around suicidality risk prediction in social media, where the lack of willingness to be transparent
about what data, how that data is used, and what is going into the coding leads to identification of one versus discounting of another. I think that there needs to be a bit more of a transparent and trust-laden framework that allows for those types of sensitive touch points to be leveraged in an ethical way. And that’s the big concern for me, specifically with mental health: the ethical implication, because it’s a sensitive space. Being someone that has a history of a lot of mental health diagnoses myself, having been on antidepressants and antipsychotics, having, you know, a history of substance use and being in sustained remission, it’s a sensitive area where part of the agency and independence is, you know, not disclosing what you feel you shouldn’t have to, and there’s a risk there inherently around these diagnoses because of what goes
into making an informed decision. So you either want to have the right data, or you don’t want to have abstractions of unmeaningful data points. So that’s one area of concern I have. So while I think it has great capacity for good, that’s where a slow walk, I think, needs to occur, to an even greater degree than with other healthcare application considerations.
Your comments bring me to the next question, thank you. So when we think about the struggles faced by communities across the world, like you mentioned, substance abuse is an issue that often comes to mind. It’s a very complex problem, one that destroys lives and tears families apart, and it’s not just about the individual user; its impact reverberates through society. Perhaps AI would be on the horizon as a glimmer of hope. Like you describe, analyzing patterns, predicting outcomes, and offering timely interventions has the potential to be a game changer. Do you have anything you could add, Dr. States, on how AI might serve as that light for substance abuse prevention and treatment?
Well, I think, you know, with substance abuse in particular, for anybody that’s had a touch point, as a family member, or a person struggling with substance abuse themselves, or a friend, anyone that’s close has been exposed to what, you know, you could distill down to a series of patterns and cycles that are evidenced over time, right? So it’s one of those things, it’s kind of a perfect example of pattern recognition. I think that tracking the patterns of use, the patterns of risk, and the patterns of relapse or slip-ups or an increasing risk profile that would lead to some type of increased use or danger or potential for, you know, overdose in the near future is a worthwhile endeavor, in part because of those patterns and because of the things we see as well defined
based on the literature as risk factors for XYZ. So we’ve seen that with the ability to do some predictive modeling around, you know, one-month risk of overdose after discharge, and using that in a way to help provide targeted, increased utilization of wraparound services and plug-ins for support, for whatever may be needed to provide that bridge to the next step in treatment, whether that’s, you know, intensive outpatient, residential treatment, CBT, talk therapy, whatever the case may be. So I think that there is a lot of value there to optimize what’s already an overstretched and overextended infrastructure for behavioral health across the country. With regards to, you know, the implications for other stressors that are maybe further upstream, now I’m speaking to things like ACEs, those things in childhood and adolescence
that really do provide the backdrop and the foundation for many individuals’ subsequent use. You know, the jury would obviously be out there, but I think it gets back to the broader implications of how you can disrupt generational cycles of trauma through a mechanism that marries AI into improving our capacity to provide good governance as a nation, or as a state, as a locality, as a municipality. And for me, it’s not an issue strictly of health equity or anything like that. It’s more about doing right by the folks that are within your constituency. It’s just doing the most good for the greatest number and maximizing that with the tools that you have.
And if AI can be matured and leveraged in a way that does so, I think that would have downstream impact on the prevalence, the actual overall influx in the pipeline of individuals that may have a series of experiences that would facilitate or bring about the likelihood of substance use disorder or other associated mental health diagnoses.
Thank you so much, Dr. States. I’d like to acknowledge the folks that are in the audience. Thank you so much for being here. We have a question from the audience: who should be the owner of the information? And what kind of value chain transformation do we need? Can you address that?
So in terms of the owner of the data, that’s always an interesting question, especially when you’re potentially leveraging hyper-local data sets. And I think one of the ways that folks can actually develop agency and buy-in is to be invested as the core owners of that data. And obviously there are a variety of barriers that come along with the custodial chain for data, right? So things like appropriate data use agreements, or a burdensome look at MOUs, or having some federated data stream that requires a data lake, and who has the capacity to query that, and for how long, and what are you able to utilize and leverage as opposed to what will remain redacted as PII.
I think the question of ownership is also around the type of data that is used. So we get things like public health data that’s more population health in nature, or law enforcement data, which I think has a great capacity to inform the broader implications of AI and its utilization, but that gets to issues like, well, what if we ended up with something like predictive policing, right? So then my spidey sense goes off and I get worried about things like, oh, Minority Report, and are we going to be in this type of police state where we have data leveraged against us? And I don’t think that’s too far afield. I mean, the AI godfathers that have spoken out recently about the tiers of likely threat from AI, I think that’s kind of well placed with regards to the potential or theoretical risk of harm. At the same time, I’m cognizant of the fact that, you know, it does require a certain amount of data quality and volume to give AI the best chance to make an informed decision. And I know that sounds, you know, like I’m giving AI consciousness, and I don’t think that’s too awfully far off in the future. But I do think that we need to be very careful around that question of ownership. I think it ultimately reflects back on that need to have good data with the appropriate
human touch informing AI’s capacity to identify patterns and come to decisions that align with the conclusions a reasonable, centered, holistic, healthy human might also come to. And then the second question, Hanh, I can’t remember. Can you remind me?
Sure. Absolutely. Ron is asking, what kind of value chain transformation do we need to move from a linear economy to a more preventative one with crossover finance? And how do we tackle parties who don’t have a license to change?
So that’s interesting: the consideration around the early adopters versus kind of the late or hard adopters, the folks that maybe are going to be more standoffish and not willing to integrate or adopt novel ways of applying AI in whatever industry they might be in. Part of that comes through the need for governance and policy. And I think that’s one of the obvious areas in the U.S. where we’ve been behind, if we look at our counterparts in the EU and how they’ve applied the GDPR, and our own pursuit of an AI Bill of Rights. That can then relate and fuel and provide the backdrop for a more significant policy framing, or other types of, you know, administrative and engineering controls, that would maybe
permit for a guard-rail-driven value chain that allows folks to find themselves with some type of ROI as these things are starting to be built out in different sectors. Because right now I think the narrative is a little bit too nebulous, and we don’t know enough about what AI is good for. And so that kind of “what’s in it for me,” I think, is a bit absent, because the narrative is just saying, oh, AI is going to change everything, it’s going to be good for humankind. But without some capacity for the granularity for folks to see themselves reflected in it and understand the capacity for good for them, or for their company, or their startup, or their patient population, or whatever the group is that they are responsible to or accountable
for. There’s going to be a lot of difficulty in applying value across a supply chain or across an industry. So hopefully that’s a helpful response. But happy to clarify further if there’s a follow-up on that.
Great. Thank you, Dr. States. All right. So in the realm of healthcare, accessibility and inclusivity are two significant challenges. Everyone deserves access to the best possible healthcare, yet disparities exist. There’s socioeconomic status, geographical location, race, gender, and many other factors that impact the quality of care that one receives. Now, with the advent of AI, we’re hopeful, or we envision, a world where healthcare is not only universally accessible, but also personalized and effective, regardless of one’s background or circumstances, thanks to predictive analytics, personalized medicine, and telemedicine. So AI has the potential to democratize healthcare and contribute to a world where everyone feels seen, heard, and cared for. So now, Dr. States, how can AI work towards providing this quality healthcare for everyone, irrespective of their circumstances, and make them feel valued and heard?
Well, so when I hear you say democratize, what goes off for me is another D-word, which is disruptive, right? And where things are too disruptive, too fast, especially when you have quite a few incumbents in a given field that are accustomed to realizing financial gain through an established way of doing things, it’s going to create problems. So I think that for AI, it’s no different with regards to operating within the current insurance, pharma, healthcare delivery, and research industry. Maybe we’ll see some areas that are more responsive or receptive, and we’ll have those as kind of the early spots of reflected gains. And one thing I could think of is the way you may have prediction, you know, with patient flow, with the actual capacity to triage and better allocate resources at that first touch for pre-hospital care, acute care, even, you know, surgical delivery of services. So across kind of the fee-for-service framework we still have, I think that you have a way to optimize there. With the consideration around things like, you know, one of the things for medicine that frequently gets brought up is improved clinical decision support, right? Or improved capacity to tailor an EHR for a given health system, right? Or having, you know, your AI doctor working alongside your human doctor, and the argument that the two of them together are better than either alone. So there’s some, I think, semblance of where you have those touch points at the bedside,
right? And that could help improve, maybe not just outcomes with regards to a disposition or treatment plan and utilization of medications and other resources that a patient might need, but also perhaps the capacity for trust, for the patient-provider relationship to be a partnership, to be a bit more of a team working in concert towards a goal of health for that individual, right? And that’s one of the things that we’ve seen, I think, been helped by tech in general, in the advent of the Internet and the capacity to work through issues of one’s own volition, which addressed the asymmetry of knowledge in that patient-provider relationship.
And now that that has changed, I don’t want to say it’s now on even footing, because I still think there is wonderfully great value in having a clinical practitioner that has been through clinical training and understands the nuances of what the entire picture of a patient’s health looks like, as opposed to getting the siloed lab test results, or the siloed result from a PHQ-9, or the siloed result from an imaging study. They can help, you know, provide the broad strokes for a patient in that provider relationship that allows for kind of that true empowerment. But I think that, you know, there’s also been a lot of power from patients taking agency and being able to research on their own, because it’s one of those things where, you know, a rising tide raises all ships.
So if we have knowledge being available to all, then I think that helps bolster the trust, that bolsters the capacity to do good and the capacity for us to have effective utilization of what has historically been a much more asymmetric interface. I think that’s probably been seen, for our purposes, when we think of folks with neurodegenerative change and cognitive impacts of whatever insult, whether it’s organic or exogenous, something that has happened that has limited capacity over time, especially for aging populations; there’s great utility there for AI to do good. You know, one of the things I’m reminded of is we all saw the story last week of the gentleman that was previously paralyzed and has been able to work over time with the AI-driven development of tech that’s allowing him to, you know, have this feedback where pattern
recognition allows the tech to pick up on what the brain is saying: you will move here, you will move there. It’s absolutely amazing to me. This would have probably taken like a million post-docs working four years at a time in the past, and now you have a capacity for a high volume of computing, with the capacity to identify patterns and work through a very complex series of data points, that allows for a tangible benefit to a patient. So that’s one of the more impactful ones for me. I know it was just recent, last week, but it’s certainly one of those checkboxes for, oh, look what it can do.
The healthcare experience that we are, you know, discussing, it’s not just about technology; it’s showing empathy, it’s humanity. So thank you. Thank you so much. All right, so I’m going to go on to the next question. Healthcare professionals have always stressed the importance of early detection in managing health conditions; catching a disease in its early stages often allows more treatment options and better outcomes. Now, with the power of AI and machine learning, we can analyze vast amounts of data and identify subtle patterns that might indicate the early stages of a disease, even before the symptoms appear. So this can potentially alter someone’s life trajectory, making early intervention possible and increasing the chances of better health outcomes. So with that in mind, Dr. States, could you elaborate on your thoughts on how AI could potentially alter someone’s life trajectory through early detection?
Well, yeah, I think, you know, there’s certainly been, even in my experience over the last, you know, 15 years, and the example I might bring up is, you know, in the area of radiology, some early observations, met I think with a lot of skepticism, that you could have pattern recognition utilizing machine learning and AI to allow for a consistently improved capacity to detect early pathology, whether it was an incidentaloma or you actually had a screening study that you were being seen for, right? So that’s always the consideration, and this is also, you know, one of the, I guess, more apparent areas where you want to minimize harm, right? And that could be with the screening test, because obviously, false positives can lead to quite a bit of harm.
Think of the controversy over the last 10 years around the age for mammograms to be conducted, you know, is it 50, is it 40, why do we have a varying level of support or distrust based on whichever professional society or the USPSTF is chiming in? And one of the things is, you know, it always takes data time to mature, and there are different ways that data can be interpreted, obviously, and there are different conclusions that will be drawn based on, you know, methodologies, sample size, power, statistical significance, all the variety of things we use as clinicians, researchers, and scientists to make these informed decisions about what can be done with a research classification or with a clinical diagnosis, right, a clinical test. And I think that there’s an obvious utilization in the early diagnosis of a variety of conditions.
The one you might think of in a more concentrated, or focused way, rather, is something like cancer across the board, and the capacity to do a high-throughput screen for a variety of cancers with a single test, right? It’s something like Grail that’s, you know, currently being evaluated with the National Health Service in the UK. And I think that it has the absolute capacity to, and it absolutely will, disrupt our screening paradigms for disease. To go from looking at a single cancer, at a single point in time, at a single age range, and I’m thinking of something similar to the way that the USPSTF recommendations typically will come out, and other professional societies’ as well, I think that it will change the way we fundamentally think about our screening programs and public health approaches to cancer,
diagnosis, treatment, and prevention, and it already, I think, has started to do so. So things like, you know, the Cancer Moonshot are, I think, redoubling efforts, providing a renewed focus with the understanding that things have changed: that treatments are being developed, and they’re being developed much faster; that diagnostics are being developed and being done much faster, I shouldn’t say in large part, but increasingly informed and matured by machine learning and AI. And in whatever outputs we see in the near future, for a variety of, you know, medications or tests or devices that are being developed, increasingly there is an element that’s informed by that increased capacity for computing, pattern recognition, and then operationalizing those observed effects.
Thank you so much. So a technology that’s been gaining considerable attention is AI chatbots. These digital assistants have the potential to revolutionize patient care, providing around-the-clock support, answering queries, and even helping with initial diagnosis. As with any new technology, there are some concerns, and one such concern is privacy. Given the sensitive nature of healthcare data, ensuring privacy is huge, and the challenge we face is finding a balance between the exciting potential these chatbots present and the necessity of protecting patient data. So in this context, what do you suggest? How do we navigate the difficult balance between the innovation that AI chatbots bring to healthcare and the privacy concerns that they present? What are your thoughts?
So the application of these chatbots in the clinical setting has been, you know, I think partly comical in terms of some of the observations. I think there was a piece in JAMA that looked at patient satisfaction with, like, their after-visit summary letters, or a letter from providers, and these chatbot-generated letters were received more favorably than the actual ones that were written by the doctors. And you know, that might be a more levity-laden example, which I think probably demonstrates some of the utility. I think that by and large, with these chatbots, it gets to that question I think I mentioned in my previous response, talking about the white-coat doctor versus the AI doctor, and the two working better in tandem rather than in isolation.
And I think the risk that is run is that a chatbot becomes an exclusive, it becomes a surrogate in its entirety for a touchpoint with an actual human provider in any variety, right? NPPA, B-O-M-D, right? There still should be that leveraged touchpoint that allows for communication and feedback that is fundamental to that provider patient interaction. I’m not, you know, I’m not so, not even my approach that, you know, I think that there could not be applications that would be inherently useful to work flow management, to delivery of care that would not compromise patient safety and you could retain patient privacy, you know, respect HIPAA considerations.
But I think it’s a very difficult series of critical touchpoints that need to be managed well before anything like that would ever be rolled out. So I think that while it’s possible, I would hope and think that that’s not likely in the future.
Does AI and chatbot functionality further complicate the "let me WebMD my issue instead of going to the doctor" notion?
Yeah, so I think that's always the question. Where there is increased agency and command of the literature and understanding, there's probably a good back-and-forth that could occur with that chatbot functionality and capacity. And this is where we get to the issue of who determines what is a verifiable or judicious AI interface. Where are the good chatbots, and where are the bad chatbots? Because you may want to utilize a good chatbot to verify things it may have gleaned and researched, whether from WebMD or from Google, whatever the case may be. And that, I think, gets back to one of the other considerations. Okay, patient X does a Google search and finds something out. Let's say they have a 40-pound weight loss over the last six months, unintended night sweats, and they're otherwise becoming cachectic, and they're not entirely sure why. For many providers, alarm bells would be raised about some type of malignancy or potentially even an infectious process. So there might be a public health issue at play there. Now let's say all this chatbot was schooled on was to think malignancy, malignancy, malignancy. This patient decides, you know what? I don't want any part of that. I'm not going to check it out any further. While all along they had pulmonary TB, or maybe some other type of disseminated TB that was actively transmissible, and now they've become a public health risk. Well, who is responsible for the lack of an informed AI chatbot? I don't know. Is it the company? Is it the provider not having been nice enough that the individual didn't want to go see their primary care doc? I think that's one of the other issues: accountability for AI, when it goes wrong and when it goes well. So that was a bit of an off-the-fly example, but hopefully it resonated with the audience.
What do you see as the FDA's role in AI, however you define it? Regulation as to both indications for use and associated warnings for product labeling?
Yeah. So I think that's interesting, especially around the idea of tech as a medical device, right? There are certainly defined applications where the FDA would likely or potentially have some regulatory reach. But by and large, I don't think they should be shouldering the burden of primary federal engagement on policy or regulatory stances on AI broadly. And part of the reason, something I mentioned earlier, is that I get this sense, at least from much of the government's standpoint, that many of the examples frequently cited, because they're the ones with the greatest touchpoint with humankind, are in health care. So many of the political considerations driving the optics here are around health care, and inevitably the FDA gets implicated. They do have a few novel areas where they will need to, and have started to, chime in. I think Dr. Califf actually gave an interview, was it WebMD or MedPage, one of those, earlier this week, looking at some of the implications of AI for the FDA. The other piece, the added thought around "if it's not just the FDA, then who," is that I think this is where Congress comes into play. There needs to be uniform buy-in, or something that's more than just a fear-driven "oh goodness, we need to do something about AI." There needs to be intentional, thoughtful organization around next steps that has funding behind it: a research agenda that informs policy development, even lessons-learned application from the EU, and what we can be doing to advance best practices here in the U.S. I know the White House has taken steps to make that at least a step toward reality. But until the proof is in the pudding from Congress, I think this will continue to be a difficult issue for the FDA to parse through. Because the FDCA can't handle everything.
Now, as AI further integrates into healthcare, the key question is: how do we ensure the technology is primarily serving people? AI can drive significant efficiencies and advancement, but there's a risk that it might be misused or exploited for profit, possibly affecting patient care negatively. We need policies and regulations guiding AI's ethical use, prioritizing patient welfare and preventing misuse. So we're moving toward setting the safeguards required to ensure AI acts as a force for good and doesn't widen existing healthcare disparities. So, Dr. States, how do we ensure our policies prioritize people over profit, and how do we safeguard against these risks?
That's a big one. Okay, well, let's step back and take a look at the whole health system, right? This, I think, gets back to an area where I believe the vast majority of people in positions of decision-making authority and power over time have not been overtly out to create systems that produce inequities, disparities, or a greater likelihood of one group suffering versus another. But we are humans, and we do things in certain ways, and we don't necessarily do them consciously. Whether it's an implicit bias or some other driver, that really creates a situation where once a snowball is in motion, it just continues to pick up speed. And I think that's the risk we run with AI, right?
One thing I've said in the past to colleagues is that AI, through machine learning and generative learning, has a capacity to reflect the best of us and the worst of us. So much of its capacity to do good or bad is going to be found in the foundational elements of what it's learning from. And if we have processes in place that have propagated or perpetuated inequities and disparities, it's going to, despite our greatest efforts, find a way to reach those same conclusions to a greater degree, unfortunately. So with regards to the potential safeguards there, effective governance doesn't come through just a series of rules, right?
It has to come through the thoughtful application of trade-offs: what demonstrably provides value, and where there is demonstrably a risk that's either acceptable or unacceptable based on the potential for good. And the maturation of AI, and its further incorporation and more extensive use across many other touchpoints in the health system, needs to be couched in a series of administrative and engineering controls that allow for intentional adoption. I'm not going to call it a utilitarian good, but I think it's about driving toward creating the greatest opportunity for good outcomes, right? Because when it comes down to it, all we can give people is the opportunity. All we can ask of populations is to have the opportunity.
We can't necessarily manifest the good, right? I mean, there is an element of action and agency and ownership here, for individuals and communities and nations, to do the best with what they have. The other consideration I had here, oh goodness, it left me. I hope it comes back to me, because I thought it was going to be a good one. Let's go to the next question, maybe.
Sure, sure. No problem. Well, like you said, we hope that AI will not only enhance healthcare provision but also spark radical change in how we approach health and well-being. So, looking to the future, what's your greatest hope for how AI can transform the healthcare system for the better?
Oh, that's also a very tough question, but a really good and meaningful one, right? Because I firmly believe that if we just have a series of goals in mind based on what we view as possible, we ultimately don't get as far as we could have. Some of that is not believing in what we bring to the table; some of that is not believing in the good that others bring to the table. When I look at hope, some people think of it as an unnecessary or ineffective way to approach problems, but I think it's really the only way to approach a problem, because it allows you to frame the problem around what folks can do outside of yourself. Because when it comes down to it, that's what hope is. We can believe in ourselves a lot, but hope is something that looks outside of ourselves into others, right? Into other entities, other ways of thinking, into something where we have to release our ownership and our own capacity to create good or change. So when I think of hope, I think of what we can all do together. My hope is that, when it comes down to it, this doesn't become a polarized issue, that it's not something that will be weaponized, that it's something that can be rallied around for what it is, and not a fear-driven paralysis that just sees us sit by the wayside or wait for something else to happen, driving us to be reactive as opposed to proactive. That's the lesson that has been learned so often in healthcare and in public health: the inherent bad outcomes that result when we act out of reactive fear.
We could look at the opioid crisis, the opioid epidemic. We could look at how cannabis legislative or scheduling efforts have been couched over time, and at the war on drugs. Or we can hope for the best in others and see ourselves in partnership. The example I'm making here is the federal and state governments, right? We each have our inherent value and good; there's something we bring that the other does not have, and something the other provides that we do not. That way we get to a much better reflection of what an appropriate steady state with effective governance looks like at the end of the day. That's my hope for AI.
I know that's a bit broad and outside the health spectrum, but I think that's the only way we get to a space where the landscape effectively allows the health system and public health systems to mature with AI in a healthy way, with governance and ethical considerations that really reflect the collective desire and a meaningful pathway forward.
Thank you, thank you so much, Dr. States. Should AI be used in healthcare to drive transparency for consumers, or does AI further complicate transparency, given that each organization uses AI differently? Do transparency and AI correlate?
So I think they have to correlate, and to be successful, I think transparency is an absolute necessity to allow for adoption across a large swath of the population, right? That's how we get to that critical mass, as we might put it, and I'm thinking of the late adopters, the hard adopters. Some folks will obviously just jump headlong into this thing without any skepticism, just because of the potential for good. But we've seen in Congress, and reflected in opinion polls that have come out, or maybe just in the reflections they comment on from some of their constituencies, that there is absolutely a need for transparency, if not to bring folks on board, then to allow for the continued effective growth and understanding of what AI-informed tools can be. Because some are going to be quite limited in their AI/ML utilization, and some will be largely driven by, and have great impact based on, the informed decisions they're coming to. As it relates to equity across the consumer space, I think transparency is huge with regards to safeguards for consumers. I'm thinking of the CFPB, the consumer protection bureau, I always mix up the acronym, and even folks like the FCC. There are very clear considerations there that need to be adhered to and met, and likely developed, because there's not a great capacity for policy in this space at the moment. And that's partly because we don't know what we're getting into, right?
It's hard to create policy for something you don't understand.
So when we're talking about mental health care, we're venturing into very sensitive territory, an area where we need to tread carefully, especially when we're talking about bringing AI into the mix. It's not just about getting the diagnosis or the treatment right; it's also about making sure we're respecting people's privacy and the trust they're putting into the system. On top of that, we need to think about how this might change the dynamic between the person and their therapist or doctor. So, in your opinion, what are the key things we need to keep in mind ethically when we're thinking about using AI in mental health care? What do we need to be careful about, given how complex and sensitive this field is?
So first, let me apologize for the sleeping bulldog in the background, if anybody heard a snore. Second, I think it's interesting with regards to the mental health considerations specifically, right? Some of that is evidenced, maybe in smaller part, by telehealth utilization for mental health delivery of care. That's the one area of telehealth utilization where it's very clear in the data that there is either parity with, or superiority to, the in-person mode of delivery for mental health care. Some of that may be because it allows for a gradual decrease in stigma. Think of LGBT populations that may be a little less willing to be seen in person because of fear of judgment or stigma they're feeling in the office. There is, I think, an equal consideration outside the delivery of care for these routine touchpoints with mental health, whether that's straight talk therapy or being part of group therapy sessions in a substance use program. There are the acute interventions, like we were speaking to with regards to suicidality prediction and intervention, and how those could be implicated ethically with regards to respecting an individual's autonomy. That also gets to another kind of mission-creep issue: harm to others, right? This is not necessarily the clinic touchpoint, but the other areas of life that intersect with mental health and behavioral health, which is a very sticky issue that I'm quite concerned about. Because I have a feeling we're going to make some mistakes before we see successes, and it would obviously be much better to be intentional and proactive and develop a system with safeguards in place, informed by the best knowledge of the moment, as opposed to waiting for something to go wrong. That's what I think we've seen with the way social media and these predictive practices have been leveraged at other touchpoints, right? Whether it's targeted ads or misinformation and disinformation, there's a lot of potential for bad, and I have a feeling it's going to be equally manifest before things get better. So those are my two concern areas there.
What practical steps can we, as individuals, take to ensure the responsible and safe utilization of AI in our everyday lives?
I think the greatest capacity for us to do good around our current touchpoints with AI is in our contact with our devices, right? The capacity to be aware of, and to understand, that that landscape is not on equal footing; we are in what I would call a more asymmetric landscape, as it were. We aren't in the EU, where GDPR protections give government or other entities a bit more capacity to bring those responsible for these applications to a point of being held accountable for wrongdoing, or for the withholding or mistransfer of personal data and the use of that data, right? So that's the greatest space, I would say. And while we do have engagement with AI tools at the level of the health system, I don't believe, at this point, that that's where I would have my greatest care or concern. That said, I do think we have an obviously increasing capacity for things to go wrong within those systems. That's partly due to the existing landscape of ransomware attacks, things that are already plaguing data systems, infrastructure, and health systems. So that lends itself to all the more need for cybersecurity, and for CIOs at health systems and institutions to be quite wary and to interrogate, with a great level of scrutiny, the systems they bring into their institutions.
Let's dive into a topic that's really at the forefront of modern healthcare: personalization. This term has become a bit of a buzzword across many sectors, but when it comes to healthcare, it's not just a passing trend. We have unique bodies, health histories, and health needs, and our healthcare experience should reflect that individuality. We find ourselves in an era where technology, specifically AI, is making personalized healthcare more attainable. It has the potential to tailor treatment plans, predict health outcomes, and even communicate health information in a way that resonates with individual patients. It's a powerful tool, but it needs to be handled with care, balancing personalization with individual privacy and preferences. It's a challenge we have to navigate effectively. So in this context, could you share your perspective on how AI can be used to make healthcare more personalized? And how can we improve patient experiences and outcomes?
So, you know, it's funny. The whole area of customer experience, human experience: on its own, as a method to improve patient satisfaction, patient engagement, patient utilization of care, and health outcomes, I don't want to say it's well established, but it's used in a variety of sectors, right? Not just health. And I think it's been done to good effect, with a variety of outcome metrics, whether that's the bottom line for something like IBM or some other provider of a customer interface. You could even point to Epic or Cerner; they all have the capacity to leverage data gathered from those touchpoints with the customer and use it in a way that hopefully does more than just line pocketbooks. And I think, for the vast majority, there's a focus on creating good in workflow, and creating good in the backstops that allow providers to do their best work with the best information, right? Because providers, we are fallible. We have knowledge that goes out of date. We will potentially miss things because of exhaustion, from overwork, from burnout. So there is some inherent good there that I think will marry well with supporting patient autonomy and overall health. One thing that is of concern, I think, is the way AI might drive and change the workforce landscape. And the reason I say that's important for patient experience and outcomes is the potential impact on the diversity of the workforce, whether through the manifestation of implicit bias already inherent in a hiring practice, allowing it to be steeped even further through the initial screen of applicants: an applicant missed some buzz term that isn't necessarily used widely, or something simply went missed or unregarded as a potential bias when the original algorithms were being developed by the AI team or the ML team. So I think those are a couple of considerations around customer experience, and that experience model in general, that are important.
Now, as we integrate technology into healthcare, the way we work and the skills we need are changing. It's not just about creating or implementing the technology; it's also about preparing the workforce to use it effectively. Healthcare is multifaceted, with professionals ranging from doctors and nurses to technicians and administrative staff, and each role might interact with AI differently. The benefits of AI in healthcare are huge, but to unlock them, it's important that the workforce is adequately prepared to utilize these tools. This involves not just technical skills, but also understanding the ethical implications and the potential biases in AI, like we discussed, and the ability to combine AI insights with professional judgment. So what steps could we take to get healthcare professionals ready for this AI revolution and make sure they're equipped with the skills and understanding to make the most of AI?
So this question always makes me chuckle, because I remember when I started my medical career in the Navy as a Medical Corps officer, almost all of the notes I wrote in my first few years of practice were paper notes, which I subsequently had to transfer to an EMR, or scan and upload. There was nothing efficient about the process. Actually, the first electronic medical record I used was something called CHCS, which was a DOS-based format, and it's still in use today in the military. So the thing that comes to mind for me, considering that wasn't that long ago, is the capacity of the workforce to change, right? The capacity to adapt and to see themselves in the systems change. And that's where I think a lot of the drop-off comes: in the way these transitions are handled. For folks on the call who are engaged and have experience with these EMR/EHR transitions, a lot of the time, where things go off the rails is that the team bringing in the EHR is a group of well-meaning data scientists or informaticians that hasn't necessarily been adequately informed by all provider groups, right? If you build it based on making doctors happy, the nurses are going to be unhappy, the techs will be unhappy, the EMS folks will be unhappy. Somebody is always going to be a loser if you have one group winning 100%. So allowing for mutual buy-in and bringing folks along slowly is, I think, abundantly important. Give them the time to see themselves in the change, to see that there's some worth in this for them. That's just motivational interviewing; that's just behavior-change observation in general. That's true for anything, right? If you try to pull people along with a stick, they're going to buck and push back. And if they're unionized, well, then you're dead in the water, right? So there are a lot of very sensitive considerations that go into making a workforce proficient in an area we're asking them to change to. For the existing workforce, I think those are some of the considerations. For the developing workforce, for the pipelines, I think there's an absolute need to increase the curriculum and training requirements. And I'm not saying, if there's anybody from the ACGME or AAMC on this call, that we should add curriculum requirements that just add checkboxes to getting people through the pipeline and training. I think there need to be cross-cutting considerations around where existing touchpoints in the curriculum could be augmented in ways that improve competency and comfort with these discussions, and then the actual capacity to relay that information to patients in a way that builds them up and creates a sense of empowerment instead of fear. That's always the problem with delivering something in doctor speak, provider speak, clinician speak, rather than in plain language that makes sense. So I think those are some of the more frequent observations and conversations I have with colleagues on my side.
Here's a question from Linda: Is there a way to harness AI to improve and optimize alternative treatment suggestions? There are so many ways to harness data and information; breaking the topic down into deliverables could help everyone better navigate the considerations and the development of an operational framework, if you will. Could you take that one?
Sure. Well, I'll do my best with it. Thank you, Linda, for the question. So, alternative treatment options are always a sticky area, right? In large part, there are a few considerations that might be germane here. As a first response to what I think is the spirit of the question: I'm not aware of any available tailored platform that would inform those types of discussions with regards to alternatives, something outside of the pure evidence base, right? What I do think has started to happen in a variety of spaces, and I might use the example of traumatic brain injury, an area I've been fortunate to work in over the past year, is applying the lessons learned from the National Research Action Plan, a research initiative that occurred over the past 11 years and started with the Obama administration. There's the capacity to take all the knowledge learned from DOD, from the VA, from intramural and extramural research at all levels, across basic science, translational science, and implementation science, about what does and does not work around characterization, diagnostics, treatment, and long-term disposition. There's great capacity to leverage AI at unique touchpoints there, right? One would be the development of diagnostics: potentially looking at biomarkers as a way to drill down beyond what historically has been a concussion / mild TBI / moderate TBI / severe TBI classification scheme based on imaging and some clinical signs and symptoms, and actually having a serum- or plasma-based biomarker evaluation, similar to something like troponins for gauging the severity of an MI, or markers in other types of tissue, that you could trend over time. Then there's the capacity to change the classification scheme to better reflect the increased knowledge from pathophysiology, informing what pathology is involved in the actual biochemistry of the pathways implicated in the development and persistence of TBI, and maybe the differential application of nuanced pathways across age groups: children, adolescents, and then aging populations and their capacity to bounce back from TBI. And then there's the capacity to distill lessons learned from large-volume data sets, from patient records in the VA, looking at over three million records, right? I'm seeing some of these issues mentioned in the comments; absolutely, that's what I'm speaking to: the capacity to take what previously was unused data, or previously was thought not possible, and develop constructs and new frameworks that can actually do good by reframing the way an entire disease state is studied, right? TBI is no longer an acute injury; it's a chronic disease process, something that is with you across the lifespan. And while there's improving capacity for diagnosis and supportive care management, the last thing, and this might be an interesting one to finish on, is the capacity to develop treatments at a more rapid pace based on high-throughput screening leveraged by AI/ML-informed platforms, and the capacity to say that in five to ten years we will have a treatment tailored for TBI, which will help impact the downstream implications for quality of life, for years of productive life lost, for the implications of a TBI in an aging population, right? Because we are aging as a country. So I think there are numerous varieties of these applications. I just wanted to pull TBI as an example, since it's one I've been working with and getting excited about. Hopefully that answers a little bit of the question.
Thank you so much, gosh. Thank you to the folks who are here asking these great questions, and thank you so much, Dr. States. I know you made the time to be here and participate, and I appreciate it so much. Do you have anything else you would like to add before I close out, Dr. States?
The only thing I might say is a plug for folks that might be attending the AWS Summit next week. If you are attending on day two, I’ll be there and I look forward to engaging with any of you that would like to chat.
Great, great. Thank you. So, as we wrap up, we reflect on the transformative potential of AI in healthcare, from battling chronic diseases to addressing health inequities. We've also discussed the ethical and regulatory implications of integrating AI tools like chatbots. The path forward involves a balancing act, but with care and foresight, we can ensure humanity remains central to this technological advancement. And remember, this is a stepping stone in an ongoing dialogue about AI and all of its possibilities. So, thank you so much for joining us. And until next time, stay curious and continue innovating. Take care.