SCCMPod-552: AI in Critical Care and Education

10/04/2025

 

In this episode of the Society of Critical Care Medicine (SCCM) Podcast, host Diane C. McLaughlin, DNP, AGACNP-BC, CCRN, FCCM, welcomes guests Kaitlin M. Alexander, PharmD, BCCCP, and Ankit Sakhuja, MD, MS, FCCM, from SCCM’s Leadership, Empowerment, and Development (LEAD) Program, to discuss the use of AI in critical care education and clinical practice.

Dr. Alexander is a clinical associate professor in the Department of Pharmacy Education and Practice at the University of Florida. Dr. Sakhuja is the director of artificial intelligence and informatics at the Institute for Critical Care Medicine and director of clinical informatics research in the Division of Data-Driven and Digital Medicine at the Icahn School of Medicine at Mount Sinai.

The discussion highlights how critical care educators and clinicians benefit from learning how to use AI and understanding its benefits and limitations. Incorporating AI into critical care education teaches students how to use AI responsibly in school and later in clinical practice. Clinicians should understand the utility of different AI models for patient care and be well versed in the ethical and legal treatment of patient data.

Drs. Alexander and Sakhuja provide examples of practical uses for AI in critical care. AI can help students test their knowledge with interactive case simulations paired with discussion with instructors and peers. AI can analyze vast amounts of patient data, supporting clinical decision-making.

The guests encourage clinicians and educators in critical care to engage with AI and contribute to its responsible use. Listeners will gain practical insights into the uses and limitations of AI in critical care and education.

Transcript

Dr. McLaughlin: Hello and welcome to the Society of Critical Care Medicine podcast. I'm your host, Diane McLaughlin. I'm joined by Drs. Kaitlin Alexander and Ankit Sakhuja from our Leadership, Empowerment, and Development program, our LEAD program, to discuss real-world applications of AI in critical care settings, enhancing patient outcomes, and streamlining workflows. Additionally, we'll discuss innovative ways AI is being utilized in experiential teaching and how it provides immersive and effective learning experiences. Dr. Alexander is a clinical associate professor in the University of Florida's Department of Pharmacy Education and Practice. She joined UF in May of 2021 and practices in the trauma ICU at UF Health Shands Hospital.

Dr. Alexander earned her PharmD from UF and completed a PGY-1 at West Virginia University Health Care and a PGY-2 in critical care at Wake Forest Baptist Medical Center. Previously, she was an assistant clinical professor at Auburn University Harrison College of Pharmacy.

Dr. Alexander is passionate about integrating AI into education and received the inaugural UF AI Teaching Integration Award in 2024. Now, Dr. Sakhuja is the Director of Artificial Intelligence and Informatics at the Institute for Critical Care Medicine and Director of Clinical Informatics Research in the Division of Data-Driven and Digital Medicine. He is also Principal Investigator for the Augmented Intelligence in Medicine and Science Lab and Associate Professor at the Icahn School of Medicine at Mount Sinai in New York.

Welcome. Before we get started, do you guys have any disclosures to report?

Dr. Alexander: None for me.

Dr. Sakhuja: Neither for me.

Dr. McLaughlin: All right. So we're going to jump into it. Everybody's talking about AI lately and how they're starting to utilize it.

So I want to hear a little bit about how it inspired you to begin integrating AI into critical care, into education. Was it a specific challenge, a patient case, just the trend in technology that sparked your interest?

Dr. Alexander: So I can start. I would say for me it was more so the trend in technology given that I don't have a background in technology and otherwise my background's in clinical training and education. But I participated in an AI learning community at UF that really helped inspire some of my ideas where I started to learn more about artificial intelligence and technology.

And I think that showed me the importance of incorporating this or finding ways to try and incorporate that with our learners so that they are prepared to know and understand AI and also evaluate AI-generated information.

Dr. McLaughlin: And what about you, Dr. Sakhuja?

Dr. Sakhuja: For me, the reason I started really looking into AI and integrating AI was because I have been in the field of informatics for a little while now. And it is very interesting, because an average patient in an ICU generates over 1300 data points every single day. As a clinician, if you have an ICU full of, let's say, 15 or so patients, which is not unusual, that quickly adds up to nearly 20,000 data points per clinician per day.

That's an overwhelming amount of information for anybody to process effectively. And that is exactly what inspired me to begin to integrate AI into clinical practice, to be able to find, you know, smarter and more scalable ways to harness this data.

Dr. McLaughlin: Well, and that's interesting, because I think one of the topics that comes up often in critical care is: do we have data overload just because we can't process everything?

Dr. Sakhuja: I would say so.

Dr. McLaughlin: Can you talk about a specific way AI has transformed patient care in your work settings? Do you have a real-world example that AI really changed the outcome or saved a lot of time?

Dr. Sakhuja: Yeah, absolutely. At our institution, an AI model runs in the background to identify patients who would benefit from being evaluated by a nutritionist in the hospital. And nutrition, as we know, is a very, very important component of patients' overall care, but it's something that I would say we aren't really as good at as we would want to be.

This is something that we have seen that really helps make sure that patients are receiving nutritional therapy that they need to get better.

Dr. McLaughlin: And then what about you, Dr. Alexander?

Dr. Alexander: So I think for me, I don't have necessarily a specific patient care example, but I certainly think that AI has transformed how I'm integrating technology with my learners in the critical care setting and how we are incorporating the use of technology into that rotation experience. I know that a lot of our providers at the bedside while we're rounding are pulling up AI technologies and utilizing them to help guide decisions and look for information. So I wanted to have that experience for our learners, where they are able to practice utilizing the technology, but in a supervised way, where they're still able to responsibly use the information and provide their own evaluation of it before making a recommendation that would reach the patient at the bedside.

Dr. McLaughlin: Can you walk us through a particular tool or strategy that's been especially effective when you're utilizing AI for educational purposes?

Dr. Alexander: Sure. So there are a couple of ways that we've integrated, or I've integrated, AI with my experiential learners. One way is I revised a topic discussion that I have with them, where we pick a topic, and typically I'll have the learners review guidelines or primary literature to prepare for that session and discussion. And I still have them do that, or ask them to do that.

But what I've done instead, when we sit down, as far as facilitating or asking questions about what they learned from that preparation, is I've come up with a few short patient cases that they can use as a prompt in an AI tool like ChatGPT or Copilot, or even a more clinically based tool like OpenEvidence. They can then ask the prompt while we're sitting there having the discussion, receive an answer, and then evaluate it through the discussion with myself and the other learners who are participating.

And so through that I feel like that's been really helpful for them to kind of test their knowledge and also share what they learned but also evaluate that output on the fly. And it helps me as the preceptor recognize kind of where the learning gaps are and how they interpreted their preparation for that discussion. And then they also can compare and contrast different responses that they receive especially if you're doing that in a small group because there could be differences in the output or the answers that are generated.

So that's been a really effective way to show them some of the uses of AI in making clinical decisions but then also demonstrate the limitations and the need to evaluate the information.

Dr. McLaughlin: And I think that brings up an important topic. It seems like you're both utilizing AI in innovative but different ways. Can you talk about the challenges or barriers you've encountered in incorporating AI into critical care education?

Are there any regulatory, ethical, or logistical issues that have come up?

Dr. Alexander: So I guess I can jump in first just to share that we want to make sure that our learners especially are aware of how to responsibly use AI. In the clinical setting, not providing any confidential information to the AI tool is incredibly important, so learners shouldn't use or input anything that's HIPAA- or FERPA-protected information.

I think also just ensuring that they're aware of the potential limitations and biases that exist within the algorithms that are openly available so that they can critically evaluate how they're utilizing the tool. So just building that AI literacy so that they know and understand the tools. I've encountered challenges I would say with learners that maybe aren't considering all of those aspects of utilizing the technology just because it's become so widespread and ubiquitous within practice.

Dr. McLaughlin: And I have a follow-up just because I actually don't know the answer to this. Are the algorithms open or are they proprietary?

Dr. Alexander: It depends on what AI tool you're using. So that's where kind of knowing and understanding and selecting the right tool I think also becomes very important because it can certainly change the type of response that you're generating and then also some of the limitations potentially of the response or of using the tool may change.

Dr. McLaughlin: Yeah I know reading for some of the journals doing some editing and peer review work now you really have to double check every reference to see if it actually exists or not for people that aren't familiar with some of the limitations of AI which has actually increased workload more than expected from that end.

Dr. Alexander: Yeah, that's a great point. I think that's one area where you may know the reference or resource you're trying to cite, but AI may change it slightly once it generates the response back. So you'd want to make sure you're double-checking that the references are correct. Alternatively, if you provided a question or a prompt, a lot of the resources or citations that AI may provide, especially if they're not directly linked, may sound very real or relevant, but if you actually go and search for the article, it doesn't exist; it may just be based on something similar that's out there. So I think that's a really important point, not only for our learners but for all providers, to double-check those references from AI.

Dr. McLaughlin: And you also brought up the point about putting in HIPAA- or FERPA-protected information. Dr. Sakhuja, you're utilizing this in the clinical setting, which inevitably is pulling actual patient data, like you said, 20,000 data points a day. How do you make sure that it's safe, that patient information is still protected?

Dr. Sakhuja: That's a very, very good question. Depending on the AI tool, some AI models can be run locally, and if they are being run locally, you can be a little more sure that the information being input into the model is not being shared with anybody else. But with models like the large language models, it can be very tricky sometimes, and you really have to make sure that you speak with the right people in your institution, you know, the data governance committee, to see which large language models might be safe for your particular institution to input patient data into. Your institution may have collaborations with various companies behind these large language models, where you may be able to input certain patient data elements in a certain way, or within the institution's firewall, so that the data won't go to the company; with other large language models, that protection may not exist. So it is very, very important, especially if you're working with various large language models, to be very, very cautious about this.

Dr. McLaughlin: It's something that's kind of come to the forefront especially over this last weekend. I don't know if either of you ever did the DNA swab to figure out your genetics and send it in. Did either of you do that?

I did not, but my husband did. Yeah, so you're aware then that the company went bankrupt and now all of that genetic data is out there for sale. So make sure he goes in and has them destroy his sample.

But it just tells you, with so much technology now, are we really being careful enough about where we're sending it? And when you're critically ill, that's absolutely not something you're thinking about. But maybe family members, and us as the provider team, need to be super aware of that.

Dr. Alexander: Right. I think we have that responsibility to really be aware of how we're using their information and ensure that we're using technology responsibly.

Dr. McLaughlin: How far do you think we are in the AI journey? Do you think we're at the beginning that we're in the middle of this or do you think we're nearing the end?

Dr. Sakhuja: I think we are trying to still figure out how to best use AI. Right now AI is that shiny toy that everybody is excited about. I don't think we have a very good sense of what would be the best use cases even for AI in a lot of situations right now.

Depending on how the technology is evolving, it's finding its niche in various streams, including within healthcare. But this is a point where, you know, I take some background knowledge from my time in informatics and really try to impress upon learners and my colleagues that we need to approach AI as a tool. If you take a sledgehammer to a nut, is it really the sledgehammer's fault?

Right. So we need to identify in our workflow what the bottlenecks are and then how to best integrate AI or other technologies right to be able to clear out those bottlenecks so that our patients do better and our work-life balance is better. I don't think we are there yet which is why I want to say in this very long-winded answer that I think we are still at the very beginning stages.

Dr. Alexander: Yeah, I would agree with Dr. Sakhuja in thinking that we have the technology now, and I think we're just at the beginning of learning how to best utilize it in practice and for education.

Dr. McLaughlin: So then let's all pull out our crystal balls here for a minute. Where do you think AI is going over the next five to ten years? Are there emerging tools or concepts that you find especially exciting?

Dr. Sakhuja: I can take that first. I think in healthcare some of the mundane tasks could probably get automated soon, right? For example, summarization of charts or creation of discharge summaries.

Maybe even billing and coding, to some extent. Maybe we will have somewhat better models that identify early decompensation in critically ill patients, and maybe, in the next five to ten years, we will have AI even help with some personalization of treatments. That last part is still very lacking in critical care right now, because most of the use of AI has so far been in predicting outcomes, but it is slowly moving toward identifying what clinical actions may be beneficial for the patient who is right in front of us right now.

That is where I think in the next five to ten years especially in critical care there will be the most impact.

Dr. McLaughlin: So personalized medicine, truly personalized.

Dr. Sakhuja: Yes.

Dr. McLaughlin: And then what about in education?

Dr. Alexander: We're going to see maybe some of the same trends within education, where learning becomes personalized as well, so students can have adaptive experiences that are adjusted based on their performance and really target areas where they potentially need more time, attention, or improvement. Providing a more adaptive learning experience is one way that technology can really be utilized within education to offer more personalized learning. It can also potentially provide more predictive analytics that allow for early intervention before students find themselves unable to achieve a milestone that's necessary within their education or training. So I think technology can really be leveraged to help us in some of those ways.

Dr. McLaughlin: So I think anybody listening to this is going to be like oh this all sounds great. I want to use all of this. What advice would you offer to people just getting started exploring AI?

Are there small steps they can take to start making an impact?

Dr. Alexander: I mean, I think if you are someone who hasn't really used the technology a lot or accessed many AI tools, the first step is just to start with some low-stakes tasks or prompts. Maybe it's just creating an image and seeing what you can come up with, or creating an agenda for the next meeting that you are set to run. Seeing what the tools are capable of is a great place to start to explore and also to understand better how the technology works and functions.

Dr. Sakhuja: I will second that, because I think it is very, very critical to understand the basics of how a particular model works, because that creates a foundation for understanding what it can actually do and, even more importantly, what it cannot do and what its limitations are. Most people want to explore things, so when they have a model or a tool in their hands, they'll want to try different use cases, but that can be a bit of a rabbit hole if you do not understand, at least at some conceptual level, some of the intricacies of the model.

Dr. McLaughlin: So knowing that is there a common misconception about AI that you guys would like to clear up while people get started?

Dr. Alexander: I think when it comes to incorporating AI with our learners, the common misconception is that all AI use is bad and that it's going to lead to reduced critical thinking or problem solving for our next generation. I would challenge that thought and, in turn, ask how we can utilize AI to augment learning and improve students' critical thinking abilities and skills, rather than thinking of the AI or the technology as just a crutch.

I read a quote recently that said we should be encouraging our learners to be better than AI, and that's really stuck with me, because I think there's still a need for a lot of human input and oversight when using the technology, but our learners aren't going to have that experience unless we start to incorporate it into our experiential education.

Dr. McLaughlin: And then what about in the clinical setting, in critical care?

Dr. Sakhuja: Well, I will second what Dr. Alexander shared and build upon it with the other extreme: another common misconception is that if something is broken, we can just throw AI at it and it'll fix it. But again, it's very important to understand and realize that AI is just a tool. It's not going to replace clinical expertise or educational expertise. Its value entirely depends on how skillfully we use it and how thoughtfully we integrate it into practice.

So again, we need to start with the problem, not the tool. We don't go to a hardware store, buy some tools, and then think about what we want to build. We do it the other way around, right?

We decide what we want to build, then go to a hardware store and buy tools accordingly. That's exactly how I think we need to approach AI.

Dr. McLaughlin: Well, I tell you, I'm going to remember the sledgehammer and the nut analogy for the rest of my life, so that was a good one. And actually, I always end every podcast by asking: if there's one thing you want somebody who was listening to take away from our conversation today, what would it be? And that last point already sounded pretty good.

Dr. Sakhuja: I can take that first. What I would say is that AI is not the future. AI is the present.

It's already here and it's shaping how we deliver care and how we educate. So the key takeaway for me is that we can either react to it or actively shape how it's used. But either way it's here to stay.

Dr. Alexander: I totally agree. I think when we're interacting with our learners, sometimes we just choose to ignore the technology, ignore that elephant in the room, rather than having a discussion up front about what appropriate use looks like and how to embrace the potential of AI and technology in a responsible way. So I think that's one takeaway: don't ignore it.

If you haven't explored the technology and played around with it, don't ignore it. Don't ignore it with your learners. Don't assume that they're just searching AI for all of the answers; instead, teach them how to use it the right way so that it's helping improve your efficiency and their efficiency for patient care, and then everybody's learning something from that process.

Dr. McLaughlin: This has been a great discussion. I think when we're done I'm going to jump on and start playing around a little more with AI myself. With that this will conclude another episode of the Society of Critical Care Medicine podcast.

If you're listening on your favorite podcast app and you liked what you heard, consider rating and leaving a review. For the Society of Critical Care Medicine podcast, I'm Diane McLaughlin. Thank you.

Announcer: Diane C. McLaughlin, DNP, AGACNP-BC, CCRN, FCCM, is a Neurocritical Care Nurse Practitioner at University of Florida Health Jacksonville. She is active within SCCM serving on both the APP resource and ultrasound committees and is a social media ambassador for SCCM.

Join or renew your membership with SCCM, the only multiprofessional society dedicated exclusively to the advancement of critical care. Contact a customer service representative at 847-827-6888 or visit sccm.org/membership for more information. The SCCM podcast is the copyrighted material of the Society of Critical Care Medicine, and all rights are reserved.

Find more episodes at sccm.org/podcast. This podcast is for educational purposes only. The material presented is intended to represent an approach, view, statement, or opinion of the presenter that may be helpful to others.

The views and opinions expressed herein are those of the presenters and do not necessarily reflect the opinions or views of SCCM. SCCM does not recommend or endorse any specific test, physician, product, procedure, opinion, or other information that may be mentioned.
