Thinking Outside the Registry Box to Support Expanded Needs and Uses – Video
Video Transcription
Welcome to the final session of our meeting. We are really delighted to have you with us, and as we mentioned, all these sessions will be recorded, so we will put them out to all of our members and anyone else who'd like to see it, because I think this issue in particular, actually, I can say I think the meeting's been great. There have been so many wonderful discussions that I think we've had at this meeting, but this one in particular, I think, will be refreshing in some ways for you. There are no slides. This is, look how happy they are. There are no slides. The intent here is really to have a dialogue. I think all of us think about what's next. How do we make sure that we have the right kinds of metrics, the right kinds of approaches to really take the next step in terms of ensuring our registries, our approaches to quality really can move on from some of the initial work that we've done, thinking about a broader lens. I am delighted to introduce my esteemed panelists today, who are truly experts in their field in wonderful ways. First to my left is Dr. Matt Nielsen, who is the Chair of the Science and Quality Council for the American Urological Association. He's also the Chair of Urology at the University of North Carolina. Next to him is Mai Pham, who currently is the President and CEO of the Institute for Exceptional Care. I'm honored to be part of one of her new PCORI Awards, focusing on how to provide exceptional care for those with intellectual and developmental disabilities. Prior to that, she was the Chief Innovation Officer at CMMI, so she has thought a lot about these issues, and she tends to bring a real revolutionary spirit to these discussions about what can be. I think you are one of the people who I always think of as thinking way outside the box, Mai. So delighted to have you here. And then last but certainly not least, Cliff Ko, who's the Director of Quality at the American College of Surgeons and a colorectal cancer surgeon on faculty at UCLA. Our plan for this session is that I have posed some questions to each of them. We're just gonna have a dialogue, and I think our hope is that we will open up dialogue with all of you as well. So I think, do we only have one mic? I guess we just have the one mic. Okay, so I'll ask the question, and maybe just keep passing the mic along. So here are some questions we thought we would start with. Maybe I'll give you a sense of the lay of the land, and then you can think about if there's anything we're missing. We want to get a sense of the current state of clinical registries, what capabilities and uses could advanced registries support. Oh, thank you so much. In an ideal world, how could we evolve to support a wider set of use cases? What types of new measures and approaches are needed to meet emerging needs in terms of quality? How do we drive towards some of those new quality measurement, quality improvement approaches, really both inside and outside the registry box? And boxes are a really important term, because I think even the concept of what is a clinical registry has changed so much that that box itself, I think, is evolving. A data repository, a way to collect information from our patients to drive improvement. And then finally, thinking very much about the goal of exceptional care as our end goal, and working backwards, what would exceptional care look like? And what kind of metrics do we need? What can we do inside? What can we do outside of registries?
And that's really, that's our approach. So I'm gonna start, let's start at the top here. You guys now have your own mics, so I'll keep mine. I guess I'll stand here. So as you think about where we are currently in terms of this current state of registries, what capabilities and uses could advanced registries support, the ones we have now, the ones that you're evolving to, and maybe we'll just go down the line. So maybe starting with you, Matt. So really honored to be a part of this group, and congratulations on a great meeting overall, Helen. You know, I think just reflecting on conversations we've been having within AUA, and even in my sort of home organization, of challenges that we have in the delivery environment now, for a lot of our current participants, the MIPS solution is a big, big part, a very big part of the value proposition. And our growth in the registry has been really great, but we wonder if we're starting to potentially hit a ceiling, where really having a very robust value proposition for other members to participate is there, and thinking about what those are, including quality improvement and the like. At the same time, we're having conversations within our organization with our own ACO, and then with other really innovative players on the field that are getting a bigger and bigger footprint in our geography, like the Aledades of the world, and the like, of just trying to understand how do we define quality and value in the different specialties beyond just cost. And I think that that is a space where it would be a mindset shift for us to be thinking about what is the universe of quality measures that are also attentive to cost. Most of our organizations have Choosing Wisely recommendations and the like. They have not necessarily guided our design of measures, and so I think thinking about the sort of payer stakeholders, the management services organizations, the Aledade kind of stakeholders, where more and more primary care physicians are sort of looking at their PMPM every month, and seeing this universe of very opaque specialty care where they don't really have a good guidepost of where it's not just cost but value, and I think that's a big challenge that we're thinking a lot about right now. Absolutely. Okay. So let me start off by giving the context in which the College of Surgeons has registries, so we have about six or seven different registries, whether we clump or split them. All of them except one are hospital-based registries, because most of our programs are based in the hospital, so what we don't address all that well is ambulatory outpatient care, and we are still trying to figure out how best to do that. One of our registries is a clinician-based, surgeon-based registry where they participate in MIPS and things like that.
We have developed registries based on what is it gonna be used for, how are we gonna implement it, how is it gonna be used, and so for the hospital-based registries, let me start by saying one thing, which is that in order to improve care, data are necessary but not sufficient to improve care, and so a lot of times what the UK and other countries have learned over and over again is that audit and feedback, if we collect the data and we feed it back, even in their really nice, colorful benchmarking reports and so forth, that doesn't necessarily lead to improved care, and the fallacy that we thought originally, decades ago, is that that will improve care, people will look at it and see where they are on the curve, and it hasn't, and so when we develop these registries, we've put them with a quality program and with, quote-unquote, an accreditation program, so maybe the best example that most people have heard of is the Trauma Verification Program, Level 1 Trauma Center, so if you're a Level 1 Trauma Center or a Level 2 Trauma Center, all these centers meet standards that the College of Surgeons comes in and verifies every three years, but part of that is also embedding a registry into that program to say, all right, you meet a standard for having an emergency department and you have a place for your helicopter to land and you have this process of going from the ED to the OR to the ICU and you have appropriate process in there, but it's not really helpful unless you always have kind of on-demand data to show that, all right, it's working, our outcomes are okay, our processes are being adhered to or whatnot. A lot of that stuff is in the registry, so that we use registries to kind of inform, once we have the structures in place, are the processes being done and are we getting the outcomes? So that's kind of how we're using registries in a lot of our quality programs to inform and hopefully to produce the feedback, the benchmarking to say that, all right, all your traumatic brain injuries are staying too long or you're having bad outcomes or you need to blah, blah, blah, blah, blah, those types of things, and it's constantly this learning and feeding back, but then in a setting of, for example, in this, in a trauma center where hopefully they can do the actionable part of this, and of course, registries are also used for research, and so they do a lot of research of, like, well, what helps the traumatic brain injury patients do better? In our center and across the board of the hundreds of Level 1 trauma centers, what can we do with that? And then obviously comes the learning and using evidence to provide more evidence. So that's kind of the registries that we have, how we use them currently. No, that's great, Cliff, and I think a lot of us who have spent probably way too many years deep in the quality measurement space have often worried that measurement alone, without a significant equal investment, Heidi is smiling, an equal investment in how we use those measures, how we drive improvement, how do we link to it, falls flat. And I think that's been a big piece of what we've done so far, it's been so much about the actual building of the collection of the data that I worry at times how we get there. So maybe, Mai. So can I ask, just add this. So where we really learned our lesson is that all of our registries except one are embedded into a quality verification program.
The one registry that is not is NSQIP, the National Surgical Quality Improvement Program, which is our most common registry because hospitals figured, oh, let's just join that and we'll get good. And so we had almost 1,000 hospitals join it within the first five years, and that was the fallacy. They joined it, data are necessary, but not sufficient to raise the level of care. And so over the pandemic, when we had less to do, we still had a lot, but still less to do, then we developed an accreditation program to go with that on helping places to use the data to make it actionable and actually improve. Yeah, that's great. And actually, coming to you, Mai, as we think about sort of an ideal world where we wanna be able to satisfy this broader set of potential use cases, like the quality improvement we just talked about, links to targeted education if you're not doing well, value-based performance for specialty care is an area that certainly CMMI is looking at now. Any thoughts of where you think this sort of data approach, registry approach could help, even though you don't actually live in the world of registries, as an end user, how would you think about how these could really help support that set of expanded needs and uses? Yeah, I guess I would encourage the sector to take a step back. And I think it was not, for sure, not wasted effort to pull together this richness of resources. I think that's incredibly important. But I don't think you have anywhere close to an accurate sense of its potential. But to really realize its potential, you're gonna have to take a step back and reassess some of the assumptions that you walked into building registries with. And many of us helped to facilitate, enable the measurement industrial complex, guilty as charged, but in my middle age, what I have come to really appreciate, and this relates to my disability work, but also just years and years of feeling underwhelmed with our care transformation efforts, is that a standardized approach will only get you so far, because the system, as we've designed it, in every which way, is geared toward pretty much the fictional person right in the middle of the bell curve, for whatever condition, subpopulation, whatever. And none of us live there for more than a hot New York minute. Instead, what I wish we would be building is a system that is resilient and flexible to meet the needs of individuals and help them achieve their personal goals in health and healthcare. So from that perspective, if you step all the way back, think about the power of registry data, right? Now think about how you can multiply that if you linked it to other data. And if you did that, of course, in a privacy-preserving way, it allows you to unlock things like, you know, you talk about the value-based payment use case. Well, Aledade and others, I think, have pretty much hit the wall in terms of what they can squeeze out of the system using population-based methods. What will take them farther, because there's really only so much ceiling, budget ceiling, that CMS and payers can give them, is if they can target individuals. But that requires, you know, a predictive ability that they do not have, and they can never have with just EMR data or claims data. And that registries won't have with just registry data. But registry data combined with either of those sources and/or very rich socioeconomic data, which is out there and vended, and you all can afford it. I can't, but you can.
It is possible now for predictive models to include socioeconomic data, but it's not done routinely. It's a known thing, but it's not being used, I don't think, by the smartest clinicians and population health managers in the world. So imagine if you could predict before that patient becomes a level four heart failure patient, right? Who is at risk for getting there? Yes, we have the RCTs. It doesn't tell you that, you know, that person has a mortgage balance that is really stressful right now, and they're gonna be forgetting to take their meds, or they're holding back on filling all their prescriptions because they're trying to save up for their granddaughter's wedding, right? There's just, the majority of what will predict health outcomes and health utilization is actually not healthcare. It's the everything else outside of healthcare. So I just wanna pull you back and encourage you to imagine that potential of helping the system find ways to be more tailored, more flexible, and to also, in that regard, think about your, you know, we talk about use cases, but I didn't actually hear anyone say the use case of what the patient can get out of the registry. I know you all assume it. I know that it's in your hearts, but it's not at the tip of your tongue, and that's for a reason, because they're not paying you for access to the data. But they actually should be your primary patient, your primary client as you design this. So if that's the case, in the ideal world, they aren't a condition, they aren't a heart valve, they aren't a joint, they are a whole person. They may be in multiple registries. Do you know? Do you know which patients you have in common? Nope. Yeah. So that's kind of a problem, right? And they have goals that extend beyond health and healthcare but that drive their healthcare behaviors. So think about opportunities to use your data, either collectively or in creative ways with other data sources, to see that whole person. And then what you can contribute uniquely is the specialized data you have to contribute to that whole person's set of goals. When we, as Helen knows, when we ask people with IDD what health outcomes matter most to them, nothing they told us was specific to IDD. Not a single one of them said, I want someone to diagnose me earlier with IDD or I want someone to give me this specific drug. That's not what happened. They said, I wanna be able to do the things I wanna do. I wanna feel like I'm living in a safe community. I want a doctor to stop and just listen to me and make me feel like a real person. And I don't think registry data is irrelevant to that. I don't. I think registry data should be in service of that. And you all are smart enough to figure out how to make that happen. It's just a reorientation. Those are great points, as I expected, Mai. Cliff or Matt, lots to unpack. I'm so happy you said all that, because when we think about how we're gonna give quality care, and so this is not really a quality session, I mean, it's about registries, but it's gonna inform us of how to do quality. One of the things, and I'm sorry, I just keep on remembering all of our programs because we have way too many to think of all at once, but one of the programs that really kind of resonates with what you said, Mai, is our geriatric program. And when we developed this program, we had the geriatricians in the room, and they said, you surgeons, like, you missed the boat. And the first thing that we should really do is exactly what you said. We should ask the patients three questions.
What are your goals in your life? What are your health goals? And what are your goals for this procedure? And so we developed a program, but that's the crux of how we should be caring for older adults who are undergoing an operation. And it really opened the eyes of the whole kind of community, of, we had all these stakeholders helping us design the stuff of what exactly we are trying to do, what is our end goal, and how would we measure it? So if somebody comes in with a small tumor and blah, blah, blah, blah, and they're 99 years old, is it okay to take them, is it right to take them to the, is it good quality to take them to the operating room, do this humongous operation for something they may not, or probably won't, live long enough to kind of see the effects of? Well, if the patient wants it, if that's their goal, that's different than if the patient says, yeah, forget it, I'm leaving, see you later. And so we will define quality very differently, depending on those first three questions. And then each of the processes that they go through will be different based on those first three questions. The way we have registries designed now, and the metrics, will not get to that. It's all wrong, because it's like, if the person has a cancer, our metric is, did we cure them of their cancer? Are they living, what, five years? Is there a complication from it, blah, blah, blah, but it doesn't take into account a much more patient-centered way of doing that. Our registries are too static and too standard in terms of the metrics that we have for those. And so how do we move to that next step is, I think, what you're saying, and how do we integrate the data that we have to do that? And maybe, Matt, a question for you in thinking about AQUA and, you know, the conditions in urology. So many of them are about the patient-focused outcomes. So could you give a little context for how the AUA and others are really thinking about how do you incorporate the voice of the patient? How do you incorporate the PROs into what you're doing to get at some of what Cliff and Mai said? Yeah, thank you, Helen. We definitely have a lot of conditions, even in our oncology space, where the sort of oncology outcomes, mortality and things like that, prostate cancer in particular, are way distal. And so, you know, the focus on patient-reported outcomes and quality of life is huge and has been, you know, as a specialty, we were sort of early on, you know, developing and validating PROs. But the extent to which we've been able to pull all that into our registry is still a work in progress. We have, we've had some pilots with it and, you know, trying to get the interface going. I think it does speak to that issue of the patient being the customer and sort of, you know, having some way to encourage their participation, you know, or just hear from them what is important. For many, many aspects of our registry, we've looked towards the Blue Cross-supported Michigan Urological Surgery Improvement Collaborative, you know, one of dozens of models there. And they have developed a piece of that where they collect the PRO information. It's, you know, fed back as part of the registry to the clinicians. They have a lot of patient stakeholders involved in their work that I think is a model for us that we're trying to start incorporating.
And then what they're able to do is then provide that registry, you know, using the prostate cancer example, radical prostatectomy, to be able to show patients, you know, sort of where they are in their trajectory relative to patients like them. And the feedback from something like that has been incredibly valuable, you know, from the patient's perspective, just to understand, you know, is this normal? What do I have to look forward to? And I think looking to that model as a roadmap for us is certainly important. Yeah, absolutely. I was just, I was gonna say one quick thing. I mean, yesterday, we had one of the representatives of the Patient-Led Research Collaborative, who's been working on long COVID. Those scorecards they put forward from the patient perspective of how they want to engage in patient-led research, or what are the guardrails for them about engaging with us, I think could be really informative here, right? That they don't wanna just be invited to be the one patient on the committee as you develop these metrics. They wanna come as equal partners to the table. So I just wanted to mention that, and I'm happy to share those scorecards. But I think you're right, there's a different element here when you're invited to come and be part of the process of this development, as you talked about, Cliff. And I think one of the things you said, Mai, about it, they're not paying for it, and therefore they're not getting it. I actually don't think it's a money thing. I think it's just that we've not consciously built them into the process. It's not about money. There are ways we could do this. It's really about how we do it. I think the costly part is how do you reconfigure some of these registries to allow that flow of patient voice in, and that's still really complicated. So please, Mai. Yeah, so I mean, when we were trying to derive a consensus-driven health outcomes framework for people with IDD, we purposely began with just people with IDD. There is nobody else in the room, them and a graphic illustrator. And then we asked caregivers, and then we talked to clinicians, and now we're gonna talk to payers and regulators. It's about whom you put at the center and whom you make primary as the voice. So clinicians don't get to alter the outcomes that people with IDD raise. They get to express how they, where they are confident about being able to support those outcomes and what resources they need, but they don't get to edit, right? That's the difference. I would also encourage you to learn from groups like PatientsLikeMe that really are patient-led, that have no trouble collecting a lot of data. It's probably not as standardized and rigorously defined as registry data, but they have no trouble collecting patient-reported, meaningful outcomes. And I think there could be a lot of fun for you in thinking about novel methods of data collection, you know, secure text or whatever, you know, simple pings here and there without heavy-duty, ginormous, expensive instruments that only capture a snapshot in time. And we shouldn't assume that they're less rigorous either. I mean, I think what was striking yesterday about the patient-led research team on long COVID is they have defined long COVID. They have the seminal paper in Lancet defining, you know, what is long COVID. So I think, I just want to caution that we shouldn't make the assumption that in fact they're less rigorous. In fact, they bring different skills to the table, but not necessarily, rigor doesn't necessarily go down.
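To make Mai's point about lightweight, longitudinal collection concrete, here is a minimal sketch of a single-item "ping" workflow; the schema, field names, and item wording are hypothetical illustrations, not a validated instrument or any registry's actual pipeline.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ProPing:
    """One lightweight patient-reported outcome 'ping' (hypothetical schema)."""
    patient_token: str               # de-identified linkage token, not PHI
    sent_on: date
    item: str                        # e.g., a single 0-10 symptom-interference question
    response: Optional[int] = None   # stays None until the patient replies

def trajectory(pings: list[ProPing]) -> list[tuple[date, int]]:
    """A patient's answered pings in time order: a longitudinal view rather than one snapshot."""
    return sorted((p.sent_on, p.response) for p in pings if p.response is not None)

def response_rate(pings: list[ProPing]) -> float:
    """Share of pings answered; this is where the non-response and bias concerns raised next become visible."""
    return sum(p.response is not None for p in pings) / len(pings) if pings else 0.0
```

The point is only that frequent, tiny touches yield a trajectory and an auditable response rate, rather than a single heavyweight snapshot.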
Matt or Cliff, reflections on how we might rethink the way we, outside this box? I think, you know, there are some things where we want the patient's voice and it doesn't need to be in a standardized, structured way. There's other things where we are getting in data and, you know, the garbage in, garbage out that we always hear, where it will hurt us. And so those are the times, I mean, and somebody really smart at some high level should think about how we're gonna differentiate the two. Because, you know, for example, researchers, probably the answer to a lot of these things are mixed methods. We need quantitative things. We need to kind of know the numbers and have p-values. But much more rich and much more contextual is the qualitative piece. The qualitative piece is, to all the quants, and I am, the qualitative piece is absolutely necessary. And how do we get that in, whether it's a registry piece or a PRO or, you know, a text or whatever that is, needs to be integrated well. We still need the accuracy and structuredness of the quant piece, but the qualitative piece has to be added to that where we are, where we're falling short of that. One of the things that I think is really a problem, and we are struggling with this mightily, is just when we get even the PROs, and that's really so nascent right now, just, oh, get PROs, okay, how do you do it? What instrument do you use? Oh, use a validated instrument, but there's a lot of validated instruments that are not hitting the mark of what we're trying to do or what the patient's voice truly is, but we just like, oh, it's validated, let's use it, it's good. So then, let's say we call those, and it is good, just the response rates. As we have focus groups with these patients, they're like, yeah, we get too many surveys, we're not filling yours out. And so, if we hit a response, at ICHOM, the work that's being done in Europe, and what we're trying to do, short of paying them, short of knocking on their door, we will answer this, short of, we don't wanna have the bias of like, here, I'm your doctor, here, will you fill this out? And I'm gonna, and then give it to me when you're done, and then I'm gonna see you, is really getting good data, but getting enough data that it's actionable for each individual patient, or do we kinda only have data of the people who fill it out, and then that'll be a bias, and disparity there. So, there are a lot of challenges in this, but this is what we need to do. That's been a big issue for you guys around PROs, and method of collection, and anything you wanna add to that? Yeah, I think Cliff summarized well, it's, I think there's no question that having the voice of the patient is valuable. I think we are still in sort of baby steps of figuring out how to actually take scientifically valid, validated questionnaires that people have made their academic careers on, when we sort of put those in front of patients, and just sort of ask them the sniff test of like, how well does this work? We use them a lot in our practice, and it's just interesting, just in sort of anecdotal practice of how well patients feel like that is capturing their experience. 
And so, I think there's more work to be done, but really, I think, to your point of just recognizing that whether it's patients like me, or other sort of disease-specific efficacy groups, we've had a tremendous experience in our field, partnering with the Bladder Cancer Advocacy Network, and I've had, and like the work that you shared yesterday, there's been, in the last five, 10 years, with PCORI support, I think that sort of universe of patient-led organizations, and caregivers also, I think, being a really, really important voice for a lot of the things we're talking about, and just starting those conversations, and learning from each other, and hopefully, a group like CMSS is a place where we can learn from the things that are more cross-cutting. Yeah, so not even just always the patient voice, I think the other thing that we often lack is the longitudinal nature of the patient trajectory. So, we often start with a diagnosis, we're in one setting, as you pointed out, Cliff, it's hard to get hospital to ambulatory, but that's really where we get that full trajectory. We've been doing some work with the Moore Foundation, and several of you have awards as part of that, looking at diagnostic excellence, diagnostic safety, quality, equity. Some of that, the way we've set up a lot of these data structures, registries, and others, is we begin with a diagnosis. Like, here are our patients with bladder cancer, here are our patients who've undergone bariatric surgery. I think that, you know, as we think about that more patient-centered longitudinal view, how do we kind of recraft the way we do this, so that in some ways, it begins with a symptom, or a patient concern? How might we look at that differently? Mai, you wanna start? Yeah, so, you've got some potential natural partners out there. I can't speak to the business side of it, but organizations like Epic or Vizient, you know, have somewhere between a third and two-thirds of the country in their data. And in this day and age, you can have, you know, a trusted third party connect the tokens, so that nobody sees everything, and you can get that ambulatory view, right? You can, I know we're bordering on creepiness there, and I'm, like, very cognizant of that, and maybe it's a stretch for government to do that, but I think, you know, clinicians, especially if this is being, you know, used in a privacy-protected way for quality improvement and research, and not for commercial reasons, I think there's, we live in such a different era, than when registries began. You know, maybe it's worth just taking a step back, and doing an environmental scan of what all potential data linkages exist. Great. Cliff, thoughts? Oh, I think we had a question. Oh, I'm sorry. No, I, are we, is it okay? We, I think we're open to however you'd like to do it. Please, David, come on up. Yeah. Yes, actually, no, you can't come up. Just speak loudly. I'm loud. I'm on a mic. Oh, yeah, maybe we want to mic it for recording purposes. Can you hear me? Here, I'll. Oh, we're just gonna give you this. Okay. There you go. All right. No, I'm, if I have a look of consternation on my face, it's not because of you guys. It's simply because of the situation we find ourselves in. I'm David White, American Society of Nephrology. So, we do not have a registry, and actually, that's, what you just said about stepping back and thinking about what this is all about, is what, really, what I came in here for, is, because I think that's exactly where we are. 
I'm not sure if it necessarily makes sense in nephrology for a registry, when you look at what we have in USRDS, and other pieces, too. The QIP, within the ESRD program, I think, I think the ESRD program and benefit is in a lot of trouble. I don't think it's able to flexibly deal with the crisis that's ongoing, and I don't think the QIP is really prepared to capture what it needs to capture, particularly when it comes to kidney failure patients, and that's not necessarily, that's not CMS's fault. It is just, it is a morass, and I'm, we are trying to figure out how to step back from it, and come up in the next year or two to CMS, and really come up with some much stronger ideas. We just got through the situation where CMS proposed to suppress six of the QIP measures for dialysis facilities, and one of them was long-term catheter rate, but they were not going to suppress, they were not going to suppress the fistula rate, and we're like, well, you only have two ways to get inside the body, to circulate and purify the blood. One's either a catheter, or one's a fistula, so if one's going up, the other one's going down, that's just the way it is, so that's my little diatribe, and I just, it's just because we're trying to figure out ideas about how exactly to come forward to the government with some really strong ideas about how we might reevaluate this system. Yeah, no, thanks, thanks, David. I think it's an interesting question. We've talked about this because a fair number of folks in our societies don't have clinical registries necessarily, and yet, I think, we had a session yesterday with some folks about the idea that regardless of whether you have a registry or not, given everything that's happening around the interoperability rules, we have a responsibility to build our lexicon, as Jimmy Chang likes to call it. How do we define what we think are those most critical data elements that should be available, freely flowing through different sites of care, to be able to get at patient outcomes? Couldn't agree more. Please, Matt. Yeah, I think you summarized what, I was actually gonna follow, Dr. Johnson had a question. Oh, please, go ahead, sorry. Hi, Karen Johnson with the American Academy of Family Physicians, and not a doctor. As a PhD in health policy, I'm not a family doc, so I just loved, Mai, what you were saying about stepping back and thinking about why we have registries and their intended purposes and looking back, and I've been doing that in another view of the world, so I've only been with the AAFP about a year.
I came from a very different world where I was mired in claims data, so I spent 20 years working with employers, looking at claims reports and understanding trends, and then 10 years working with payers on payment models and understanding how all that fits together, and I will just say that I just feel like I got jumped in the chopper and went to the other side of the Grand Canyon where there's this other view of the world and data and healthcare data, because the idea of connecting to a registry as a way to sort of augment claims data is not something that's top of mind for those people doing that work on a day-to-day basis, and when you talk about a holistic view of patients, even before they have a symptom, when they're well, and when you want to understand how they're doing, claims data are really the only data that we have that exists that does that, and so it strikes me that there is an opportunity to sort of walk around the Grand Canyon or do something to bring folks together in ways where there's a better understanding of those data sets and the richness of them, and I feel like this takes us back from the importance of the patient conversation, but I think my point is, before individual data sets or registries start thinking about how to incorporate the patient voice, start thinking about the patient as a person, not a part or a condition or a symptom, and I think you have to start thinking about then the foundational data structure, which is way beyond registries, and I think if the registry, I think about the registry of registries, like if you could sort of take that and marry it with the claims data of claims data, HCCI, the Healthcare Cost Institute, has a lot of claims data, which I think I would advocate thinking about that before I would think about going to the HRs for data, probably, but these are just some thoughts that I think are really sort of swirling around in my head that I think are really important to how do we get to the next step, and I'll just say the last thing, the claims data measurement system is HEDIS, as we know. I was in the business before HEDIS was a thing, and we were looking at claims and saying, or looking at healthcare and saying, where's the quality? So having data inform quality in HEDIS was sort of what we could do in 1991 and got it better in 1993 and maybe a little bit better again in 2007 as these updates happen, but it's really time to go back and look at that and figure out how we can do that better because we have much, much more than claims data today, so sorry for the long winded, but I just think there's so many connections here. Keep my mic to the next person, then we'll move on. Hi, and thank you for the presentation. 
I think you have a lot of great ideas, especially looking at the patient at a holistic level, and I think what you mentioned, I'm sorry, I didn't capture your name, but you look at when these, when the patient does come into your office for cancer, you know, heart conditions or urological conditions, we have to look at the patient before that actually happens, and it starts usually at the primary care level, and then we track the patient's history through there and whatever medications that they're taking and what treatments that they're getting now, how are they feeling on that day, and a lot of the times, especially, we are in, I'm in nuclear medicine, so society of nuclear medicine, molecular imaging, our patients typically see multiple doctors or have multiple points of care that it's hard to kind of backtrack that point in time, and usually, you know, unfortunately, you know, the way that we are in our literacy and how the education level in America, sometimes it might be difficult, and depending on their circumstances, it's difficult for the patient to articulate themselves in a way that they could provide that kind of feedback of how they're feeling and what to do, so I think I wanna kind of mention that step because these are all really great ideas, but how, but registries aren't gonna be able to capture that data, but how can we help the doctors and the hospitals and the primary care facilities help our patients find the way to articulate themselves, to be able to ask those right questions so they actually get a good point, a picture of the patient, and how do we track that patient history from the time that they come in with a complaint, track it, and then until they get into a specialty doctor, so that's just kind of also where I'm thinking of and how can we do that in a meaningful and efficient way. Great question. Comments, reflections on both of these great points. Patients, patients will tell you a lot if you just stop and listen. If you just ask them, how's life going, and then just shut up for 10 minutes, you will learn a lot about what's driving their health or not, so I guess I don't, I have great confidence in patients' ability to articulate where they are and what is concerning them, and then it's a matter of the clinician being able to interpret that, and if it needs to get into data, documenting that in a way that is informative when you go back, but I'm not worried about patients' ability to articulate what the primary issues for them are. So I think that, absolutely, and it's great just to kind of sit back and hear the patient. There's obviously the challenges, like I only have 15 minutes to do everything, and then I have to type all this down, but I have to look at them while I'm typing, and then I can't read my typing because it's wrong, and then all of these things that go into it, and then once it's in the EHR, how do you get that into a registry? You know, how do you get this free text into a structured registry that'll never happen, although I'm looking at software, you've probably seen all the software that writes your dissertation for you, and so can this then, what they say, get recorded, put into a transcript, then this software takes it and puts it into structure, so then it then goes out, and all that magic happens, and I can just ask, hey, what's up? 
And then I just sit there, but until that day happens, we are still kind of mired in a lot of things where we have to crawl, walk, run, and you know, when I look at things in the world, it's dichotomous, where it's validity, is it right, and feasibility, can we do it? Because we have a lot of things that are right, but we can't do, and we spend a lot of time at the College of Surgeons dreaming of these things, and then, like, okay, let's get back to reality. And so I'm like, what can we actually do in the next three years? And it gets to your point of, like, well, we would love to know all the people who can't come in and have bariatric surgery, and what's the access problems, and all these kinds of things, and what's there. We have enough problems with the people who have bariatric surgery, and their complications, and what happens to them afterwards. So it is hugely all of these things that everyone is saying. I think that one of the things that I know helps focus us, and I think it's exactly what everyone is saying, and Mai especially, is, why are we having this registry? What is a registry, and why are we having it? I mean, there are people at the College of Surgeons like, we should get rid of our registries, put all our data into a lake, and then we just kind of get out what we want. And like, you know, many of you will know Frank Opelka, and I'm like, Frank, how's that gonna work? It's like, I'm not sure yet, but we just put it in a lake, and we kind of scoop it out. I'm like, okay, is this gonna be ready in three years? Well, no, not really, but so it is getting to this question, and why are we doing it? I can tell you that for a professional organization like the College of Surgeons, where we're taking care of the trauma patients, or the bariatric patient, can we, in the six domains of the IOM, can we do it safely, can we do it effectively? Are we, you know, is the trauma patient living? Are we kind of having the person resolve their diabetes and weight loss for effectiveness? Is it efficient? You know, all those things, and that's what we're looking at in our registry. I do think that we need to look more longitudinally, but that's kind of more of the walking and running part later on. But, you know, I think, what is each profession doing? What does nephrology need with their pieces? And then, for the registry of registries, how do you get all these things together? Even with our six registries in the College, we have a hard time, if somebody is in NSQIP and has an operation for colon cancer, merging that person to our cancer registry, which is basically the same thing as SEER, the state registries, is not, it is not easy, because then the lawyers come in, because we, behind the firewall of the hospitals, do not get PHI. So then we got our statisticians to kind of do this probabilistic matching, and they're kind of using magic and hope and prayer to get these things together, and so getting these things, merging them, it gets to those, so we have the feasibility part that's still kind of this big, scary monster for us. I agree. Matt, anything, a question? Did you want to say something? It's okay, because, you know, we're recording it, so we'd like to factor it in.
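A minimal sketch of the kind of probabilistic matching Dr. Ko describes, assuming each registry can export de-identified records with a few quasi-identifiers (facility, sex, birth year, procedure date); the field names, weights, and threshold are illustrative, not the ACS statisticians' actual algorithm.

```python
from datetime import date

def match_score(a: dict, b: dict) -> float:
    """Simplified Fellegi-Sunter-style score: sum agreement weights across non-PHI fields."""
    score = 0.0
    score += 2.0 if a["facility_id"] == b["facility_id"] else 0.0
    score += 1.0 if a["sex"] == b["sex"] else 0.0
    score += 3.0 if a["birth_year"] == b["birth_year"] else 0.0
    # Registries abstract on different days, so procedure dates within a week count as agreement.
    score += 4.0 if abs((a["procedure_date"] - b["procedure_date"]).days) <= 7 else 0.0
    return score

def link(registry_a: list[dict], registry_b: list[dict], threshold: float = 8.0) -> list[tuple[dict, dict, float]]:
    """Pairs clearing the threshold are treated as the same patient; borderline pairs go to manual review."""
    pairs = []
    for a in registry_a:
        for b in registry_b:
            s = match_score(a, b)
            if s >= threshold:
                pairs.append((a, b, s))
    return pairs

# Example: an NSQIP-style colectomy record matched against a cancer-registry record from the same hospital.
nsqip = [{"facility_id": "H001", "sex": "F", "birth_year": 1948, "procedure_date": date(2022, 3, 4)}]
cancer = [{"facility_id": "H001", "sex": "F", "birth_year": 1948, "procedure_date": date(2022, 3, 6)}]
print(link(nsqip, cancer))  # one candidate pair with score 10.0
```

Real implementations add blocking to avoid comparing every pair and calibrate the weights from labeled matches, but the structure is the same.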
Hi, Judy Burleson with the American College of Radiology, and I'm glad to be able to sit down and talk, but we similarly have six registries that have a variety of aspects, and some of that has been MIPS stuff, but it's mostly quality improvement, and listening to your openness and ideas to change and bring in patient information and really make a difference is really exciting to hear, but then I'm like, well, what about this, and what about that, and getting very concrete about what you're saying, Dr. Ko, and then there's the security and privacy and merging all that, and these are things that we're constantly struggling with to make our registry better, making some advances and some not so progressive, but I think what we're coming back to is making baby steps of making the registry data more useful to the people that spend so much time collecting that data. If we add one data element, what the effect of that is on our participating sites is huge, and the vendors that help them collect the data, so it's so incremental, so incremental, so the data that we do have, what do we do with it? We've been spending a good bit of time the last year or two in visualizing the data and having better performance feedback reports, and then building in actual templates for QI with run charts and control charts that are associated with the measures in the registry, and do this, now do this, now look at this, so that they can actually do something with it, and what I'd like to hear from you three, in particular Dr. Nielsen and Dr. Ko, is getting that user engagement, getting our users to tell us, what do you need, what's missing, why can't you use this, why is it so hard, how do we step back and look at this huge thing that we built, and nobody can get into it and use it, how do we fix that? So there's so many things with the engagement, I really, we really want to get the people invested in it, give us their ideas, and tell us, and have that interaction, building that user base. It's a great question, so much of it is about, is the intent of this really to collect the data to serve the needs of the clinicians, are they using it in real time, and there are some examples, like IRIS, for example, that has a long history, right, of engaging like this, I see Dr. McLeod in the back there, but you're directly providing information to clinicians that they see, they're benchmarked, if you don't do very well on this, you should do this CME program, right, I mean, that to me is, I think, in some ways, I love that model of it being very clinician-centric. Matt, Cliff, thoughts?
Yeah, I think, you know, great points, and I think, you know, the IRIS example is, you know, sort of aspirational for a lot of our organizations, in terms of really, you know, having it to the point of, you know, this is your performance, and then, you know, click on this link to go to the targeted CME that's there, or, you know, the example you give of, you know, the sort of, the actual implementation of QI, I think, thanks to Helen's leadership and sort of representation of the perspective that we have with the ABMS effort to, you know, to add the new standards for certification that are kind of in this space, I think it's actually a refreshed opportunity for all of us to think within our specialties, and, you know, then across, you know, learning from each other of how we can, you know, sort of march out those examples of, you know, this is sort of the, you know, the toolkit that was used to implement, you know, focused activity in this area, focused activity in that area, and I think, you know, different pockets of our societies have, you know, have had more or less, you know, success or maturity in that space, but I think the sort of addition of the new kind of quality focus for the boards is gonna just be more wind in the sails for all that work. Yeah. So I think that's a really great question, in that it's how good and how much data do we need to get to, ultimately, what we want, what do we want to get to ultimately? So if we want to get better, I don't know what it is in radiology, more accurate reads or faster reads, I know in surgery we want better outcomes, more effectiveness, all those, again, all those domains. How much data do we need, and what do we need it for? So if we give back the mortality rate of a hospital, and it's not adjusted, it's easy, right? We just have live, die, boom, we could give that back really quickly, but we collect more data because most people will say, well, our patients are sicker, they came in with this advanced disease, and, you know, we have this type of neighborhood, so we collect this, this, and this, and this, and now, yeah, we can add in social determinants of health from those indices, and we can add in all these things to make it better, but a lot of it almost, to me, is like we have to defend that you have a high mortality rate, because now we risk adjust, we case-mix adjust, we adjust for everything humanly possible, and then we adjust more, and then we give a fudge factor because they can put in their opinion on things, and then we still say, yeah, you have a high mortality rate. So then, what happens then? So we learned from the UK that they have audit and feedback every year, that's part of their certification, and they give the data back, and nothing happens. So then it gets to what you're saying, it's like, well, what do we do next? How do we get the people to act on the data? And then if it's a change, then it's all this change management stuff, where, just like psychology and behavioral health and all this kind of stuff, behavioral science goes into it, of how do you actually affect this thing so that ultimately you end up with good care? But the question remains, how much data do we need, how much effort do we spend on this piece, the first part, to give it back to the front lines or whomever to fix something that needs fixing? And so, again, we've learned that we've spent a lot of time on the data piece, and we haven't touched nearly what Mai's saying at all.
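To ground the risk-adjusted feedback Dr. Ko describes, here is a minimal sketch of an observed-to-expected mortality ratio computed from a pooled patient-level extract; the covariates and the simple logistic model are placeholders, not NSQIP's actual risk-adjustment methodology.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def oe_ratios(X: np.ndarray, died: np.ndarray, hospital: np.ndarray) -> dict[str, float]:
    """X: patient covariates (age, ASA class, emergency status, ...); died: 0/1 outcome; hospital: ID per patient."""
    model = LogisticRegression(max_iter=1000).fit(X, died)  # case-mix model fit on the pooled registry
    expected = model.predict_proba(X)[:, 1]                 # each patient's predicted risk of death
    ratios = {}
    for h in np.unique(hospital):
        mask = hospital == h
        # Observed deaths over summed predicted risk; O/E above 1 suggests worse than case mix predicts.
        ratios[str(h)] = float(died[mask].sum() / expected[mask].sum())
    return ratios
```

The ratio only tells a hospital it is an outlier; as the discussion notes, turning that signal into action is the harder, separate problem.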
But even if we give back a mortality rate, one of the quality improvement projects that got sent back to the College was from this institution that tried to disprove the data that we gave to them. They're supposed to do a quality improvement project each year, and what they submitted to us, this was the project. And at the end of the project, they said, yep, we do have a high mortality rate, period, and submit. Like, what? But that's where we are, and that happened in 2019, the year before the pandemic. And so, that second piece, exactly what you're saying, is that's where, in our opinion, at least with the surgical community, we are falling so short, of what to do with the data. And if we give back reports, it's not enough. If we, you know, what is it that we're gonna do with that? And there's a whole huge box of things in that regard as well. But if we're talking about the registry, it's really how much and what is it that we need to kind of feed back for us to do the change for high-quality care. Yeah, I agree. I have two other questions, and I'm curious of others in the room. Do you wanna say something, David? Here. Introduce yourself. Sure. Hi, everyone, David Kola, I'm with IQVIA. So I run strategy, and I sit at the intersection of life sciences, patient advocacy, and specialty societies. And so, one of the interesting things that I've seen is kind of the definition of registry starting to evolve. And I absolutely agree with Dr. Ko and others that we are, even with very mature programs, still in a crawl, walk, run. But I do think we're also at a tipping point in the way we think about registries, especially in the medical specialty society space. The ability from a technical perspective to bring together the types of data we're talking about already exists. The challenge, and Helen alluded to this, is that for specialty societies, kind of by definition, the unit of analysis is the clinician. The patient is an attribute that we bring into the data set to say, how much can we learn about this patient to determine whether the physician did the right thing at the right time, et cetera. The other industries invert that. The patient is the unit of analysis, and then everything else becomes ancillary information about whether that trajectory happened. And so, I see in medical specialty societies, when we think about where do we go with registries, the main question that we, the royal we, need to think about is, what is the relationship between the national society and the patient that is receiving care from a member? Up until now, it has primarily been, we do not wanna get between the relationship that the local clinician has with that patient. Everyone is an aggregate, everyone is part of a data set. But in order to actually start to really get to that level of quality, I think we actually need to invert the model to say, patient at the center, what else can we bring to it? That actually requires different infrastructure, unfortunately. For the registries that were built five, 10 years ago, it's not just flipping a switch, you are actually changing the unit of analysis. But the good news is that there have been tremendous advances in terms of, when you have a patient at the center, that's why you can have 23andMe, you can have all these things. It's actually easier to think about a patient and all the things about them, than to try to bring that in when you're primarily looking at process and outcomes.
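A toy illustration of what "inverting the unit of analysis" could look like in practice, assuming the registries share a privacy-preserving patient token; the schema and field names are hypothetical, not an existing registry data model.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Episode:
    """One row roughly as today's registries hold it: keyed by program and clinician."""
    registry: str        # e.g., "trauma", "NSQIP", "AQUA"
    clinician_id: str
    patient_token: str   # privacy-preserving token shared across registries
    event: str           # diagnosis, procedure, or PRO snapshot
    event_date: str      # ISO date

def by_clinician(episodes: list[Episode]) -> dict[str, list[Episode]]:
    """The familiar view: each clinician's cases, used for profiling and benchmarking."""
    view: dict[str, list[Episode]] = defaultdict(list)
    for e in episodes:
        view[e.clinician_id].append(e)
    return dict(view)

def by_patient(episodes: list[Episode]) -> dict[str, list[Episode]]:
    """The inverted view: one longitudinal record per patient, assembled across registries."""
    view: dict[str, list[Episode]] = defaultdict(list)
    for e in episodes:
        view[e.patient_token].append(e)
    for record in view.values():
        record.sort(key=lambda e: e.event_date)  # trajectory in time order
    return dict(view)
```

The rows are identical in both views; what changes is the key, which is why the speakers frame this as an infrastructure and data-model change rather than a switch to flip.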
The other thing I'll mention is that the life science industry, normally I say, we've learned from the advocacy organizations and life sciences are just a different animal. Thanks to cell and gene therapy, we are now, they are now facing this temporal challenge that specialty societies are as well. We normally have this window of what happened and that's what we're interested in. What the heck do we do when we have to go really long? Well, when you have 12, 15, 20 year follow-up periods, when your drug goes on the market, your therapy goes on the market because it's a cell and gene therapy, that model doesn't work anymore. The old, ask someone to enter some data, put it in a form, doesn't work anymore. And so they've had to really adapt to those things. So I'd put it out to the different specialties that are represented in the room and the panel. What do you think it would take to actually, from a membership perspective, justify or have that conversation around the patient being the unit of analysis when we think about quality as opposed to the clinician? That's a great question. Maybe because we're almost out of time, I'll also round that out a bit and say I think that's one key question. I think to the point of the theme of our meeting of being stronger together, I'd also love to just get your final, as we sort of do a wrap-robin, please answer that, but I'd also love your thoughts about what could we do together to kind of make some of this, I know pivot's the most overused word of 2022, but I do think, it really is, I think we're thinking about both a data pivot, a sort of, almost a sort of conceptualization pivot, and a pivot as to who the unit of analysis is. So we'll go down the line. Matt, please. Yeah, well, I'm glad I can go first because I think that that comment there, in my mind, kind of connects it really well. When we talk about, I think it was said earlier, how many of our patients are in multiple registries and do we know that? I mean, I think we've kind of talked about the idea in principle but haven't gone there yet, but if we were to focus on the patient as the unit of analysis, it potentially creates that window to understand a little bit more of the interface that we have and start to think about some of the sort of diagnostic excellence things, not defining bladder cancer at the time the path report drops, but understanding that sort of access, potential delays in care, trying to improve outcomes from our perspective as urologists is just a different way of thinking about it, and that, I think, would actually be a really interesting opportunity for us as a collective. So I confess that I had assumed all the registries used patients as the unit of analysis. I mean, I understood that you did provider profiling as well. 
I think I, and this is, Helen knows this is unusual because I'm usually the pessimistic bitch in the room, but, you know, it is true, but I actually think that making the patient the unit of analysis and then looking for, to generate those new insights, those new predictive insights, is actually what will both make your registries more valuable to the clinicians and offer you the path to improvement because now you're not saying, you, hospital, go figure out why your mortality rate is higher and go fix it, which, you know, I mean, it's not what most hospital administrators sit around thinking about, but if you say, look for these signs and characteristics among your patients and really worry about these five people that just got admitted, that's a thing I can do, right? That's a much clearer path. Okay, now I gotta really focus on, you know, whether I'm following the right process. The standardization is still helpful. The clinical guidelines are still helpful, but now I know who to worry about and how to, where to dig down deeper and really focus my attention. The last word. Yeah, so coming from a very quality improvement piece and when we look at 1,000 quality improvement projects, how many of them actually define what the problem is before they start moving? 20%. So 80% people start doing quality improvement without even really knowing what the problem is, and so when I think about, and I hear about what we had mentioned about patient unit, how do we get all of our registries together, how do we say, somebody had surgery and it was a urologic surgery, they have a kidney problem, they got a scan and a nuclear scan or whatnot, how do we put this together? It's gonna be real, I'll be the pessimist here. It'll be really, really difficult and figuring out what is the problem that we're trying to solve and focus and everyone like, well, then they have a use case, all right, well, it's a patient who has a cancer and it's urologic cancer and has blah, blah, blah, all this stuff, and is that the use case and then how do you go from that textbook use case to the universe of all of our patients? And I think that it's gonna be very daunting, but I think it's starting off with the focus and what does good look like will be really important in this because it sounds to me like boiling the ocean and we so far have not been able to do that, although with climate, who knows, maybe one day soon. That's way too dark a way to end this session. No, no, no, here's another offering. Choose one place to pilot, just one. Do a tiny little experiment, watch it all together, co-invest in it, just do a proof of concept and reassure yourselves that there's a path forward. Yeah, actually, and just to end on a brighter note, we actually have been in some discussions with the Moore Foundation about thinking about can we do a couple of prototypes that don't start with a condition but instead start with a symptom and think about what it would take for our collective registries to be able to provide that longitudinal feedback back to clinicians, right? So maybe let's just pick one, not boil the ocean, pick one problem, see if we collectively can work on it. So I think that to that end, I think really, I mean, not being a pessimist now, but really having hope in this because I absolutely do is that if we, it's kind of like the white paper. A white paper of what good would look like and the steps there of just taking one example of what this is and how this then expands. 
And so if it is that patient-based thing and it's longitudinal and we have that and what are the three registries or something kind of in a feasible way how we would do this and what are the challenges there, I do think that there has to be some regulation in this to kind of support this and protect the places doing this and getting the kind of the profit and the non-profits together in a way that is going to be an advantage to everybody. And making it a feasible thing and not just a pipe dream is gonna be the way to move forward. And that'll be, I mean, I think identifying what it is that good looks like and what is the problem that we're trying to solve and the problems that we're not solving and getting that down so that everyone understands it and then moving forward with that is definitely a feasible way to go. I love it. All right, well, we are at time. Thank you all. What a wonderful session. On the screen here, we've got our dates for our spring meeting, April 18th to 20th in Chicago. We're gonna focus on, again, research, registries, practice guidelines, journals, all the sort of, how all these things are evolving for specialty societies and I love this idea of just kind of maybe getting a group thing together to pick one of these and just work it through and so I knew we picked some of the smartest, I love smart people, some of the smartest people I know on this panel. So join me in thanking them and thank you all for coming. Thank you.
Video Summary
The meeting concluded with a discussion on the evolution of registries towards a more patient-centric approach. There was a call to consider the patient as the unit of analysis and focus on collecting data that helps improve patient outcomes. Suggestions were made to start with small pilot projects that address specific problems and to work together to develop innovative solutions that benefit all stakeholders. The idea of starting with a white paper outlining what success looks like and taking incremental steps towards achieving it was also highlighted. The session emphasized the importance of engaging users and clinicians to gather feedback and ensure that registry data is truly useful and actionable. The next steps involve collaborative efforts to shift towards a more patient-centered approach and leverage the advancements in data analysis and technology to drive meaningful improvements in healthcare quality and outcomes.
Keywords
patient-centric
registries
patient outcomes
pilot projects
innovative solutions
white paper
engaging users
clinicians
data analysis
healthcare quality