The Role of Associations in the Use of Artificial Intelligence (AI) in Healthcare
Video Transcription
Good, I'll move down there. I so love when that works, I just have to tell you. Oh, there's Kelly. Wow. Welcome, Ami. All right, I'm going to briefly introduce our panelists, and thank you, Mike, for joining us. Ami, who is joining us remotely having just returned from Greece for the ACC Middle East Innovation Summit, is a pediatric cardiologist at Mass General who is now the Chief Innovation Officer of the American College of Cardiology. Mike Tilkin, at the end there, is the Chief Information Officer and Executive Vice President at ACR, which has an extraordinary Data Science Institute and has probably been in this space among the longest of our societies, I think. Kelly Rose is the Chief Scientific Officer at the American Society of Hematology, and Judy Keen is the Director of Healthcare Association Engagement and Innovation at Novartis, but formerly had science roles at both ASH and ASTRO, so she's going to give us an interesting, different perspective as an outsider and former insider.

So you've all had a chance to listen to this incredible presentation, and we're going to start with a general question: having heard this, what is your society doing, or really thinking about, in this AI and ML space? With a slight modification for you, Judy, given where you've been sitting: what are you hearing from industry about what societies should be doing in this space? And I'll save Mike for the end with a slightly different modification of that question. So, Ami, why don't we start with you?

Great, thanks for having me. Can you guys hear me okay? Yeah. Great. First of all, Mike, that was fantastic. The way you break it down for everybody to understand, especially if there weren't that many hands going up on the use of generative AI, was wonderful. When we think about AI in all its different forms in cardiology, we recognize that machine learning, really supervised machine learning, has been the basis of what we do. That's an area where we have a lot of data and we can teach and train things, but it takes a lot of time. The excitement for us about generative AI is that we really feel the burnout in cardiovascular medicine. The cardiovascular disease burden is rising rapidly, globally, and at the same time the workforce is declining at a comparable rate. So the thought that we can ease administrative effort has been first and foremost in our minds. Generative AI is an area we're really quite interested in, ranging from voice-to-text to prior authorizations, things we would call mundane but that take up an inordinate amount of time and keep us away from the patient. The other part that's really important to us is that we have a lot of data in cardiology, and it comes in a variety of forms. How do you bring that in in a way that is clinically actionable, rather than through technologies that are a square peg in a round hole? So we spend a lot of time talking about collaborative intelligence, and no, Mike, we don't shorten that one to an acronym, CI; we use the full phrase, collaborative intelligence. The idea is a lot of what you talked about: as we design these technologies, we need clinicians at the forefront.
We want cardiologists to be there saying: this is important to us, this is the data that needs to go into it, this is what we will consider relevant. And then, when the output comes, not throwing the baby out with the bathwater. What we tell most people is: we have 57,000 members at the ACC, and the likelihood that the same out-of-the-box AI is going to work in every one of their practices, or honestly is going to work perfectly the first time for anyone, is almost zero. So we have to be willing to work with it and ask: what did work, what didn't, what type of AI did we use, how are we building it? We really want people to be AI-enabled. You don't have to be an expert, but you have to be AI-enabled. Those are some of the areas we're focused on that I think were really highlighted in Mike's talk.

That's great. Kelly, reflections from ASH?

Sure. I would say that ASH has been focusing on education for our clinicians and for our researchers. One of the things we did early on was create a task force on AI, as CMSS and others are doing, to get the experts in hematology who are actively using AI to start thinking through ground principles: what do our hematologists and blood scientists need to know? One of the first outputs of that is education at our annual meeting. For several years now we've had, as one of the highlights, a special-interest session on AI in hematology, which has been really well attended by our early-career investigators; they just can't get enough of it. The next thing the task force took on was to write a paper. A lot of what's published in our space is "How I Treat" pieces for particular diseases. This one was "How I Read": how I read AI and machine learning papers. It gives our hematologists, who may not be in the AI space, some basic tools to understand the work: here's a glossary, here are things you should look for in the methods the authors used, so you can assess whether the data set was large enough and whether this was the right tool in the context of your hematology question. So that's where we're focusing now, and there's lots more to come.

And Mike, I know this is a space you've been doing some work in as well, thinking about how to help people read the literature when they're evaluating research on AI and ML.

Yeah. In the AI 2.0 era, I think many of us, early in our careers, will have read the JAMA series on interpreting clinical evidence. We had a team that did one on how to read an ML paper. There are some different things in the AI 3.0 era, I think. We're deeply interested in how to think about health professional education. One of the people I recruited a few years ago was a former assistant dean at Harvard Medical School, so you might imagine we have a lot of conversations about what learners and practicing physicians will need to know a year from now, and five years from now.

Definitely a question we want to come back to, because education, certainly for the practicing physician, is a core competency of our societies. So Mike, I know ACR is, as I mentioned, pretty far down this path. I'd love it if you could share how you thought about this, beginning with clinical use cases, and your vision of a lifecycle approach to AI and ML.

Yeah, sure. Our journey started in about 2017.
It came on the heels of a lot of advances in computer vision with deep learning and the like, and there were predictions about the demise of radiology. In fact, Geoff Hinton had a great quote that radiologists are like Wile E. Coyote, who has gone off the edge of a cliff and just hasn't looked down yet; give it five years, and AI will solve it all. We took a look at the technology, we saw the promise, we were excited, and we leaned into it. We basically said: if this is going to be effective tooling for healthcare, then we want to help make sure it advances in a healthy and productive way. And we launched the Data Science Institute. One thing we looked at immediately was everything that needed to happen for AI to be successful: what are the right clinical use cases that will be most productive and really move the needle? What are the ethical issues, the economic issues, the data validation issues? We looked at all those dimensions across the lifecycle, really starting with the use cases. That came from discussions with vendors, who wanted to know the most useful use cases, and with our members, who were looking for solutions that would actually help them. So we assembled panels, much as we do with other clinical content like our appropriateness criteria, and asked them to put together use cases: structured examples of what they wanted, the data considerations, the workflow considerations. Something we could use as an anchor to inform the vendors, and to help us think about the validation process once we started seeing these models, and we are beginning to see them. Then, going through the rest of the lifecycle: we know these models are data hungry, certainly in the deep learning era, and they still are, so we needed to think about how data gets provided, how data is shared, and what's safe to share. Once algorithms are created, how are they validated? What are the pre-clearance and FDA issues? We worked actively with the FDA to think about how an algorithm goes through the clearance process. And then, once these get into production, one thing we know is that there's a lot of variation based on your patient mix and a whole lot of other factors that make these models somewhat brittle. So we wanted to make sure there's a monitoring process, so you can continually evaluate use and improve. Really thinking about that entire lifecycle, we have projects, programs, and activities happening throughout it.

Great, thanks. And wrapping up this first question: Judy, given your background in the society world, what are you hearing from industry about what societies should be doing in this brave new world, the 3.0 world Mike described for us?

Yeah. Clearly in pharma we're thinking about a lot of the same things around education: how do people incorporate it, what do patients want? But we work with a lot of these societies, and we work a lot with Ami, listening about workflow and about what sorts of barriers there are. And I think that's where societies play a really, really important role. You're an honest broker. You work with everybody.
You work with the patient groups. You work with the physicians and the clinicians. You work with pharma and other partners. There are things we can come and talk to you about as a trusted partner, and you can talk to everybody else as a trusted partner. We're pharma; we're not going to be able to have the same kinds of conversations and have people really believe what we say. So it's really important for us to hear and understand: what are the barriers you're seeing? In what ways are clinicians taking these things up? What do they need? And as we're developing clinical trials and products, you were talking about R&D and protein folding; we're doing R&D and trying to figure out our next drug discovery. Is this something that's going to be useful? Is it something that's going to be taken up? How do we manage putting these things into clinical trials and clinical practice so that people will use them and they will improve care? So we really see societies as a central part of that, a trusted partner we can come to, to understand the ways we can improve outcomes for patients. As you said, it really comes down to patients and how we improve health care.

Great. A question starting, I guess, with both; I'm going to call you Michael so I don't get confused between Mike and Michael, so you're Michael for now. For Michael and Ami: one of the things we talked about on our prep calls was what societies can do to make sure these tools are ready for prime time for use by your members. Are they ready? And particularly for Ami, you had some great thoughts about what we can do to help iterate and provide the feedback to make them better. So maybe your perspective first, Michael: what could we do as societies to make sure they hit the right evidence bar, that they're really okay, before we let them out into the wild with our members?

Yeah. If you look at the work we did around credible content on social media, I think it tells you how we like to think about the world. We don't want to be the ones deciding which channels are credible and authoritative, or what the process for that is. We want to rely on the experts, in partnership, to define those criteria. What we're really good at is then implementing them at global scale, in ways that touch tens or hundreds of millions of people. AI is similar. In the Med-PaLM paper that I showed, we did have to invent the evaluation framework, because it's not something that had been studied before. It's a first-in-human-history problem to have a computer that can write a long-form answer to arbitrary health questions, so we had to invent an evaluation framework as part of that work, and we think it's pretty robust. But as you all think about three or five years from now, if these things continue to get better to the point where they might be a co-pilot or an assist, how do you know if they're good enough? Is there a board exam for them? I don't know. But in the way that we rely on societies for guidelines, I think societies have a role to play, a very helpful role, in how you would tell. Could societies be an honest broker around this?
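[Editor's illustration: Michael's point about having to invent an evaluation framework for long-form answers can be made concrete. Below is a minimal rubric-aggregation harness in Python; the axis names are loosely inspired by published long-form evaluations such as the Med-PaLM paper, while the function, answer IDs, and ratings are invented for illustration only.]

```python
from collections import defaultdict
from statistics import mean

# Illustrative rubric axes, loosely inspired by published long-form
# answer evaluations; not any group's actual rubric.
RUBRIC_AXES = [
    "consistent_with_consensus",
    "evidence_of_harm",
    "missing_important_content",
    "evidence_of_bias",
]

def score_answers(ratings):
    """Aggregate per-axis physician ratings (1-5) across model answers.

    `ratings` maps answer_id -> list of {axis: score} dicts,
    one dict per physician rater.
    """
    per_axis = defaultdict(list)
    for answer_id, rater_scores in ratings.items():
        for rater in rater_scores:
            for axis, score in rater.items():
                per_axis[axis].append(score)
    return {axis: mean(scores) for axis, scores in per_axis.items()}

# Hypothetical example: two answers, each scored by two physicians.
ratings = {
    "answer_001": [
        {"consistent_with_consensus": 5, "evidence_of_harm": 5,
         "missing_important_content": 4, "evidence_of_bias": 5},
        {"consistent_with_consensus": 4, "evidence_of_harm": 5,
         "missing_important_content": 3, "evidence_of_bias": 5},
    ],
    "answer_002": [
        {"consistent_with_consensus": 3, "evidence_of_harm": 4,
         "missing_important_content": 2, "evidence_of_bias": 4},
        {"consistent_with_consensus": 4, "evidence_of_harm": 4,
         "missing_important_content": 3, "evidence_of_bias": 5},
    ],
}

print(score_answers(ratings))
```

A real program would also track inter-rater agreement and per-answer (not just per-axis) scores; the point here is only that "is it good enough?" becomes measurable once a rubric is fixed.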
I think there's a really good opportunity there. Yeah, I think it's a great point. Ami, do you want to add to that, and then speak to the iteration piece as well?

Yeah, I'll start by just adding a little. In March of this year, we launched ACC Anywhere; in-house we like to call it the Netflix for cardiology, because we have so many great videos of talks from the many conferences we put on as such a large global organization, and the uptake had been less than we'd like. Once we started this, it really is the same kind of AI that Netflix uses: it senses what you need, what your interests are, what your gaps may be, and starts to lead you. And people will spend up to four hours going down rabbit holes, the way my teenager does on TikTok, but watching cardiology. So I want us to remember that there are ways we can use AI that simply help educate our members, where it sits in the background and doesn't require us to do anything different from the rest of our lives, but is likely making patient care better and doing it in a way that's pleasant for our clinicians. That version of AI exists, and it is our responsibility as societies.

The second is the logistics. When working with the companies, we really do need to co-develop with them. We have a robust co-development program, where we'll talk to companies working in an area we're very interested in and say: can we work with you on a model of this, where our members will come up against it and give you feedback? Now, as you can imagine, some companies are just running, and they're not necessarily interested in that. They may do well; they may make a lot of money; they may not be used; they may sit on a shelf someday. I think the smart companies recognize that there is a point in time when they need clinician input. Societies can provide that, like Judy said, in a somewhat objective way, but one that is also all-encompassing: societies understand that there's rural medicine and urban, global and academic practices. If you get a couple of clinicians, you get those couple of views, so the benefit of having a society inform these companies is really important. I encourage everybody out there: that is a role we have, and an important one. When we do that, we then create use cases. Hey, your friend Mike used this; let us show you how he did it. That has been incredibly helpful, because we don't just want a grassroots effort, right? We want to take things that are going to be implemented by large systems and be business successes, but we do want a groundswell. And how do you create that? Through storytelling. At the end of the day, you have to let people know. So it's a combination: co-develop the technology, and storytell how people get on board. I think that's really important.
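[Editor's illustration: the "Netflix for cardiology" behavior Ami describes above is, mechanically, a recommender system. The sketch below is a minimal content-based version; the catalog, watch history, and TF-IDF approach are invented stand-ins, not the ACC Anywhere pipeline.]

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical catalog of talk titles; a real system would use richer
# metadata and viewing behavior, not just title text.
catalog = [
    "Managing atrial fibrillation in older adults",
    "Pediatric congenital heart disease imaging update",
    "Lipid management and PCSK9 inhibitors in practice",
    "Echocardiography for valvular heart disease assessment",
    "Hypertension guideline update: blood pressure targets",
]

# Titles this member has already watched (also hypothetical).
watched = ["Echocardiography for valvular heart disease assessment"]

vectorizer = TfidfVectorizer(stop_words="english")
catalog_vecs = vectorizer.fit_transform(catalog)
profile_vec = vectorizer.transform([" ".join(watched)])

# Rank the catalog by similarity to the watch history, skipping items
# the member has already seen.
scores = cosine_similarity(profile_vec, catalog_vecs).ravel()
for idx in scores.argsort()[::-1]:
    if catalog[idx] not in watched:
        print(f"{scores[idx]:.2f}  {catalog[idx]}")
```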
And the last thing I'll say is a little different from iteration, Helen, but I'm going to say it: there can be danger. We've talked about the benefits of AI, but there will be places where it's not going to be right. I often like to ask people: would you fly in a plane without a pilot, but with AI? Maybe a few people raise their hands. Would you fly in a plane across the Atlantic Ocean with just a pilot and no AI? Nobody will raise their hand. And I think eventually medicine will be like that: you'll want your doctor to have all the knowledge at their fingertips, augmenting their own intelligence, because the human brain cannot capture all these data points; that's just a consequence of how far we've come in medicine. But with that in mind, I recently met with some postdocs working in AI on figuring out when you lift the veil: when do you say, this is a situation where you should be prompted with what the AI clinical decision support says, and when is it probably not in anybody's best interest to confuse the matter further? I think that area of the science is one I want us to keep track of, and one I want us to offer to help try. I don't know that AI will be applicable in every situation all the time, and it's part of our responsibility to iterate, co-develop, and teach, but also to think about when we use it and when we don't, because there are likely paths there that we haven't figured out yet.

That's really interesting. In many ways it changes the relationship, the way specialty societies interact with tech companies, in ways we've really not interacted with them before. Any reflections from the panel on that concept? Please, Shelly. Sorry; Kelly, please.

I would say it's very similar to the ethos many of us bring to our patients, right? We want patient input from the beginning when we're developing clinical trials and doing other things, to make sure the end user has input at the very start. It's the same in this ecosystem: if our societies have input at the very beginning as these companies are building, it's going to be a better product.

Mike?

Yeah. Societies obviously play an important convener role. We do a lot of meetings, conferences, and sessions pairing companies with our members, talking about the issues and the opportunities, and educating our members. And there's this issue that these algorithms aren't perfect. I loved the talk and the description of how the LLMs work, because it's math, it's not magic. We keep trying to tell people that it really isn't much different from when you learned how to do linear regression. It's data dependent: how it was trained matters, your data may be different, so we have to test it locally. And what does it mean to test locally? That's true of LLMs, and it's true of any other form of AI. So it's about educating the members, and frankly educating the vendors as well about what's important to the members, because a vendor gets very excited about a new feature or a new piece of technology, but when it comes to understanding how it actually works in a clinical environment, we find amazing gaps when we bring these communities together. So I think there's an important role for us there.

And I want to say, from the pharma side, that the tech and validation part is really key for us. We could rely on societies to understand what sorts of barriers exist, what gaps these tools fill, and what technologies you are helping with.
And getting input as these things are developed, because a lot of the time what happens is that company X comes to us and says, we have this platform that does this, and we have no idea about its clinical validation, and we don't have time to go back and sit with every company, because there are a bazillion of them coming out with their own individual platforms. So that's another really strong area to help with: vetting and understanding. If we can partner with societies to identify that, and then also partner with the tech companies together, I think that would be really beneficial and could substantially move things forward.

I'll add just one point: we've done a fair amount of quantitative work on clinician-AI teams, mostly in pathology, and it's very clear that people and AI have different strengths and that they're better together. We've been able to show that in the AI 2.0 era. We haven't seen the equivalent work in the 3.0 era yet, but I think it will play out the same way.

Very interesting. Often when I precept my medical residents, they immediately pull up the details of the differential diagnosis, but I feel like they lose the wisdom of "I've seen patients like this before," which I don't think you could teach an AI model. It's interesting, but maybe it's because I'm old. Another question, and I'll start with Kelly on this one. We've talked a lot about AI and ML on the outside, but one of the issues you brought up, Kelly, that I thought was really interesting is that so much data flows into our societies from our members: they submit abstracts to us, they submit journal articles to us. And you raised an interesting point about how we could look at the data flowing in to find signals of what's most important to our members, using AI. Do you want to talk a little about that?

Yeah, absolutely. Most of us in our societies do this as humans naturally, right? We see the trends in our research areas, we see what hot topics people are interested in, and we try to highlight those. This is an opportunity to do it systematically. Can we take data points from our annual meeting, what people attend and what they're interested in? Can we apply these large language models to all of our abstracts and publications to see what the trends are, and use that to build our research agenda and decide on next steps? I think there's tremendous opportunity there. Obviously there are issues around being transparent with our members about how we're using this data, and easing any concerns about privacy, but there's just tremendous opportunity to, I hate the word "use," to gain insights from the data we already have.

And at ACR we have a lot of complex rules for things like accreditation, and a lot of practice parameters. So how do we make that information easy and accessible to different audiences? We have internal audiences answering questions who have to go through pretty complex documentation to look things up. That's a great application for an LLM kind of model.
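[Editor's illustration: the accreditation use case Mike describes is the classic retrieval-augmented pattern: retrieve the policy passages relevant to a question, then let an LLM compose a grounded answer from them. The sketch below covers only the retrieval half; the passages and the question are invented, and the LLM call is deliberately left out rather than assuming any particular API.]

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical accreditation-policy snippets standing in for the real
# documentation corpus.
passages = [
    "Facilities must complete phantom image review every six months.",
    "Physician supervisors must hold current board certification.",
    "Accreditation renewal applications are due 90 days before expiry.",
    "Technologists must document continuing education annually.",
]

question = "When is our accreditation renewal application due?"

vectorizer = TfidfVectorizer(stop_words="english")
passage_vecs = vectorizer.fit_transform(passages)
question_vec = vectorizer.transform([question])

# Retrieve the top passages; a production system would hand these to an
# LLM as grounding context rather than returning them verbatim.
scores = cosine_similarity(question_vec, passage_vecs).ravel()
for idx in scores.argsort()[::-1][:2]:
    print(f"{scores[idx]:.2f}  {passages[idx]}")
```

Grounding the model in retrieved passages, rather than asking it to answer from memory, is what keeps answers tied to the actual policy text.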
Likewise, for end consumers: how do you ask a plain-English question and get a very sophisticated answer back about, for example, your accreditation process? We're working on those kinds of problems.

Great. I think... Could I say something? Yeah, please, go ahead.

Wait, are we going with Mike or Michael? Mike. Okay, that was Mike. Mike just made me think of three things people are asking us to look into that I want to share with the other groups; I'm curious whether anybody else is doing the same. The first is credentialing, and thinking about ways to streamline that process, because it's an extraordinarily painful one that leads to a lot of frustration among all types of clinicians. The second area is the use of our guidelines: how do we help people infuse the technologies that are coming with the guidelines, but in a way that makes sense? Those algorithms are going to be really complex, but I think that is probably a job we have: to figure out how it fits, what we're going to use it for, and how we do that. And the third: when we look at the work our colleagues do administratively, one thing we do is write a lot of questions, questions for the boards and for practice tests. That's probably one of the near-term places where generative AI can at least give us a draft and make sure we've covered all the topics in the right percentages for a test. Then we add the nuance, because the nuance is so important. But again, it goes back to what Michael was saying: we're always necessary there, but the combination can be so much better, and ideally more efficient for our clinicians in the work they do outside of patient care.

Yeah, that's a great point. I want to go back to the point you raised earlier, Michael, about the assistant dean who's on your staff now. One of the issues we talked about on our planning call was what societies can do, since education is such a core function of our work, to ensure we achieve an AI-enabled workforce. It isn't just medical students; it's clinicians, physicians in training, and those already in practice. And I would argue it's also the patients we take care of, who are increasingly going to the web and using these tools. Maybe I'll start with you, Ami, since that was one of the topics we talked about, and then I'll turn to the others. I'm sorry, did I surprise you? AI-enabled workforce.

Oh, we lost her audio. OK, there we go. Is that better? You're back. Sorry, I had an echo, and then my mute. Oh, we lost you again. All right, this is being a little weird; I have a delay in something. Can someone else go first?

OK. Yeah, I'm happy to start. Education's huge, and it has been from the start. We've done everything from in-person sessions to online videos and the like.
But one of the things that's been actually pretty successful is a tool set we put together called AI Lab, initially designed to be an easy access point for our members to come in and play around with AI: actually create an algorithm without coding anything, a point-and-click kind of thing, and evaluate algorithms. We ran some challenges, really for educational purposes, frankly. We had folks annotate images in the tooling and then see how the model performed as more data was added, things of that sort. And just recently we added some ChatGPT interfaces to see what happens there. So we're really trying to get tooling into folks' hands to demystify the technology and build a tangible sense of what it is.

Great, thanks. Right, back to Ami.

All right, I'm back. Sorry, that was a little weird. A few things. One is, I mentioned ACC Anywhere before and the idea of Netflixing education for our members. The second area, part of our upcoming strategic plan, is really thinking about clinical guidance at the point of care: if we know what diagnosis somebody has, how do we push the right information to them? At the same time, we have another branch of the ACC called CardioSmart that is aimed directly at patients, so we can push information to patients about their diagnosis at the same time. In doing that, it's a chance for us to reframe. Before, we talked about the idea of a consumer, about consumer-facing and provider-facing; we're really working on thinking about it not as consumerism but as patient agency. As we work with our patients, they are going to, sorry, I'm going to use the phrase, Michael, use Dr. Google, right? And that's not a bad thing; in fact, with Mike at the helm, it might be a good thing. They're going to learn good things. But they're also going to have an Apple Watch and other devices. So we're working on how we utilize the data we have and push the right data at the right time, both to our clinicians and to our patients, so they can engage with each other from a slightly elevated baseline in those conversations. I think that's maybe a slightly more unique area than the traditional pushing of education. We're also encouraging people to use ChatGPT; we've already started a trial of using it to draft questions for the boards, which Joyce Dinellon is running at ACC. So we're trying it in several different areas, but the patient-agency work is maybe one of the newer things to share.

Great. Wonderful. I'll jump in here. We think about education quite a lot, obviously, and it's about educating both physicians and patients: not only providing education, and providing it differently, but also education around the digital tools themselves: how to use them, how to incorporate them, what you want from them, how you want to get that information. The Consumer Technology Association did a survey of consumers earlier this spring and asked where they get their information about digital medicine in general. They said that for medicine, they will use and incorporate a tool most of the time if it comes from their doctor.
Something like 60% of patients will start to incorporate a digital tool or technology if their physician recommends it. So there's a real need to educate physicians on what this is, how these tools work, and why they're beneficial, so that they can in turn educate their patients and actually drive uptake of those tools. Because there's so much out there, you can't find the trees through the massive forest, or through the big wave of stuff that's hitting all of us, to use terrible analogies.

Good analogies. I think we're a great example of societies that are all across the spectrum, and the first step for people who are completely brand new to this is to meet people where they are. How do you meet people where they are? You need to know where they are. So step one is always going to be to find out, whether it's your patient population, your physician population, or whoever you're trying to serve in this moment, where they are, and then meet them there. And we think about digital literacy as well, not just health literacy. We talk about health literacy in education, but it's also digital literacy: do people know how to use these digital tools? You don't necessarily have to know the details of the algorithm, but do they know and understand how to use and incorporate them?

And I would almost dare say they do know, because they use these tools in every other facet of their lives. We just have to get better in healthcare at making things as user-friendly as every other industry insists on being. We're not used to that. We come from a paternalistic, clinician-centered, hospital-centered model of care, and we are steadily and nicely moving to care in the communities where people live, to patient agency, but we're still on that journey. So usability is a key feature in healthcare that we need to emphasize like we haven't before.

Yeah, I agree. So, Michael, knowing what you know about specialty societies, having lived alongside many of them all these years in academia: if you could tell us what specialty societies should do to educate our members for this coming 3.0 world, how should we begin?

Yeah. When we're sitting around three years from now talking about this, I think there will be three buckets of things we'll know a lot more about. Bucket number one: what do you need to do to educate your members about using these tools in practice? Bucket number two: what do learners need to know about AI itself? As an example, I could follow the JNC 8 guidelines for hypertension management without knowing that most people have two kidneys, but it's probably better to know that kidneys exist than to just follow the guidelines. So there may be things about AI itself that need to be incorporated throughout the curriculum, because it's such an important piece of technology. And the third bucket: how does AI transform the process of learning? Netflix for cardiology is a great idea, but what if we had a tutor for every learner that adapted and helped along the way? Those kinds of things. So I think it'll be three buckets.
And the point that was brought up, that AI has a chance to help level the information asymmetry between clinicians and people, means that learning how to deal with that will also be an important professional development challenge. Helen has heard this story before, but I'll tell it. When I was a pretty junior ICU attending in one of the Harvard hospitals, a little before 2010, we had a guy flown in from western Massachusetts, really sick: intubated, on 12 of PEEP, blood coming up out of the endotracheal tube. We sort of figured out what we thought he had: microscopic polyangiitis, super rare, one or two cases in a career, usually. Then his wife got in from western Mass, and we sat down for a family meeting, and like any good family meeting, we started with: we want to know what you understand about what's going on, because we don't want to assume what the other doctors told you. She says, I think he has microscopic polyangiitis. We say, oh, are you a pulmonologist? That's amazing, because that's what we think he has too. She says no. Do you have a lot of pulmonologists in the family? No. What do you do? I teach elementary school. How did you figure it out? She says: well, I listened to what the doctors said, and I wrote down "respiratory failure," and they kept saying this word that meant coughing up blood, and they kept saying the kidneys were okay. I put those into Google, and eight of the first ten hits were microscopic polyangiitis, so I figured that's what he probably had. I felt like I was seeing the future there, because she had been able to extract the salient things and get a translation into what he actually had. He left and lived a long and happy life. We know that's maybe one case out of 100, not 99 out of 100, but some of these tools may help with exactly that, because generative AI can ask clarifying questions back. So I think the fourth area will be how we, as a profession, deal with the leveling of information asymmetry. And I have to tell you, your new mission statement centers the patient in a really important way, and I think AI, done well, may help with that.

That's great. All right, we've got about seven and a half minutes left. I have one more question in my back pocket, but it's been such an amazing discussion that I want to invite the audience: if anybody would like to come up to the podium, you're welcome to ask a question. Otherwise I'll use my back-pocket question. And we have Dr. Thorwarth coming right up. There we go. Please, Bill.

Thanks very much. Terrific session, and thanks to all of you for participating. One thing we haven't addressed yet: we're looking at this as a snapshot at the beginning of the journey, but as these things get out into the so-called wild and into real use, do you see a role for specialty societies in monitoring that performance? Because, as was mentioned, these models can be brittle, and God forbid an algorithm that's supposed to detect breast cancer goes off the rails. It may be a year or two before that patient has a clinical finding that makes you look back and say, oh my gosh, that was there. Is there a role for specialty societies in defining the methods of post-market surveillance, for lack of a better phrase?

Great question. Michael? That's my boss, so he knows my answer to that. I know.
I'll tell you my answer: absolutely. We already run registries on a number of fronts, and in my mind this is another quality registry: continual monitoring, understanding how your performance relates to other facilities, what people are generally seeing, what's acceptable and what isn't. And whether you see drift, which is one of the big worries: you put a model in and then your data changes. Scanner upgrades, in the case of radiology, or patient-mix changes, and your performance degrades. I think it's critically important that we do this kind of post-market surveillance. The FDA looks at it that way as well: the more there's a safety net of surveillance, the more comfortable you can be with the pre-market clearance process. I think it's absolutely critical.

Ami, did you want to say something?

Yeah, I'm going to say yes too, and I'll add that I think ACR has been doing it right. One really important part of being cardiology, or another specialty that has imaging but also other responsibilities, is that, yes, we have large registries, but we can't have a registry of an AI technology alone. Our trials can't be done without AI being involved in how we pick and recruit patients. I really think that in the future we're going to have to figure out how AI fits into our existing workflows, because that's where it belongs; how we make our guidelines more modular so we can update them more regularly; and how we include digital health and AI in the specific disease processes and their monitoring, rather than having a separate "digital health" or "AI" guideline or algorithm. We have to stop keeping it separate. What I'd love to see from societies is, yes, post-market surveillance, yes, watching what's happening, but doing that by actually incorporating these technologies into the care we deliver. We don't necessarily have the infrastructure yet; we're building it. We're building the plane as we fly it. Judy, now I'm adding to the terrible analogies, but I think it's true. At some point it can't just be distinct; it has to be part of the all-encompassing care, and measured within that, and I want us to remember that.

Hallelujah. Michael, anything to add before I call on Patricia?

No, it seems like a straightforward yes.

Yes. My question for you is: how can we ensure that AI is a decelerant for bias, as opposed to an accelerant for bias?

Great question. I knew you were going to ask that. Thank you. It was my last question, actually; I was glad you came up to the mic. There you go.

I'll start. It's a great question, and it's absolutely critical. I think it starts with awareness and understanding of why and how these models can be biased. Then it comes into your testing and validation process, and your monitoring process: you're continually looking for elements of bias. So awareness, and baking it into validation and monitoring, would be my thought.

Yeah. Michael did a great job in his presentation of showing under the hood what's going on and why. I'll add that we also need to make sure the people working on these mechanisms come from diverse backgrounds, so that we catch things and it's not all built from a homogeneous population. Yeah.
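[Editor's illustration: the drift monitoring Mike described a moment ago maps to a simple surveillance primitive: compare the model's score distribution on recent cases with its distribution at deployment and flag divergence. The data below is simulated; the population stability index is one common convention for this, and the 0.2 alarm level is an industry rule of thumb, not a regulatory standard.]

```python
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """PSI between two score distributions, binned on baseline
    quantiles; values above ~0.2 are a common rule-of-thumb alarm."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    b_frac = np.histogram(baseline, edges)[0] / len(baseline)
    r_frac = np.histogram(recent, edges)[0] / len(recent)
    b_frac = np.clip(b_frac, 1e-6, None)  # avoid log(0)
    r_frac = np.clip(r_frac, 1e-6, None)
    return float(np.sum((r_frac - b_frac) * np.log(r_frac / b_frac)))

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 5000)  # model scores at deployment
recent = rng.beta(2.6, 4, 5000)  # e.g., a scanner upgrade shifts scores

psi = population_stability_index(baseline, recent)
print(f"PSI = {psi:.3f}", "-> investigate" if psi > 0.2 else "-> stable")
```

A registry-style program would run a check like this per site and per subgroup, alongside outcome-linked metrics, since a stable score distribution alone does not guarantee stable accuracy.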
I also thought it was fascinating, Michael, the data you showed suggesting there was actually less implicit bias on the part of the model than on the part of physicians at times.

Yeah. I really like the framing of this question, so I'll answer in two parts. One: when we moved from Med-PaLM to Med-PaLM 2, our health equity team built an adversarial question set designed to draw out bias and problems in the model. We gave those questions to physicians as well, and the model did better; we tested for that. The other piece, on how to mitigate bias and use, let's say, AI 2.0 to promote health equity: we wrote a fairly detailed piece in Annals of Internal Medicine a few years ago with exactly that premise, that AI can promote health equity, not only minimize bias. And the second thing to think about is that one of the greatest biases in the world is based on where you live and your access to care. Examples we've worked on: one is making it possible for untrained sonographers to do fetal dating through blind sweeps, which gives you a critical piece of information for WHO maternal care pathways. Another: what's the largest killer of people on the planet between the ages of 5 and 29? Accidents. Auto accidents. It's large enough to be one of the specific targets in UN Sustainable Development Goal 3. Well, it turns out Google Maps can tell when you slam on the brakes, so we know where the dangerous intersections are, and 90% of those deaths are outside the United States. If you have a place you want to go, we will route you around the dangerous intersection, with a goal of preventing 100 million hard-braking events per year. That's an example of a specific use of AI to promote equity, and also evidence that health is more than health care.

Wow. We'll take one more quick question, and then I think we have to wrap up. Yes, Julie? Please go ahead.

I think this mic is not working. Okay, now it's working. Thank you. A question for Michael that goes back to Ami's point about integrating AI into practice: from a patient and a provider standpoint, where do ethics come into play? The ethical component and the regulatory component are both important, so what do you think about ethics in AI?

I think ethics are wrapped into any and all care interventions in fundamental ways. The principles of non-maleficence and justice and all the rest of the bioethical principles are foundational to AI. Is it different from the ethical issues of a new diagnostic test or a new therapy? I think there are challenging interpersonal issues related to new categories of AI, but the way our ethics teams work through it is like any other intervention, because we deeply believe, and I hope my talk conveyed this, that AI is part of the complex adaptive system that delivers healthcare, and it needs to be considered in the context of all the rest of that work.

All right. Helen, may I? Is there one second left? You can have one second, Ami. I want to echo what Mike said, but make one point, because I'm hearing a voice in my head. The first is that, for a global workforce, the up-training of community health workers throughout the globe using AI is already happening.
We have workers in Pakistani villages who can identify SDG Goal 3 babies, those at risk of neonatal mortality, before they're even born. That exists, and it's happening. There is an equity play for AI that's so important, and we have to think outside of the U.S. sometimes. And I'm going to end with the voice in my head. One of my good friends works with the NAACP, and she keeps reminding me that you cannot take responsibility for bias in AI until you understand that the majority of data sets that exist, especially in the U.S., carry structural racism. So we have to be really careful about reusing the same data sets we've used for generations, and think about how we're pulling data and how we're building with it. Those are not exactly my words, but they're words I'm really trying to remember and live by.

Wonderful. What a wonderful way to end this extraordinary session. Thank you to Mike Howell, and thank you to our extraordinary panel. Please join me in thanking them. Thank you.
Video Summary
The panel discussion focused on the role of specialty societies in the adoption of AI and machine learning in healthcare. The panelists discussed various areas where societies can contribute, such as education and training, post-market surveillance, and addressing bias in AI algorithms. They also emphasized the importance of collaboration and co-development between societies and technology companies. The panelists highlighted the need for continuing education and upskilling of healthcare professionals to ensure an AI-enabled workforce. They also discussed the potential of AI to improve patient care and enhance patient-provider communication. The panelists recognized the ethical considerations involved in AI adoption and stressed the importance of integrating ethical principles into AI technologies. Overall, the panelists emphasized the need for societies to play an active role in understanding, evaluating, and implementing AI advancements in healthcare.
Asset Caption
Panel Discussion after the keynote. Panelists are:
Ami Bhatt, MD (ACC)
Michael Howell, MD, MPH (Google)
Judy Keen, PhD (Novartis)
Kelly Rose, PhD (ASH)
Mike Tilkin (ACR)
Keywords
specialty societies
AI adoption
healthcare
education and training
collaboration
patient care
ethical considerations
AI technologies