The Digital Evolution in Clinical Guidelines and Quality Measurement
Video Transcription
My name is Jason Murray, and I'm the Managing Director of Clinical Quality and Value at the American Academy of Orthopaedic Surgeons. I'm extremely excited to moderate this session on the Digital Evolution in Clinical Practice Guidelines and Quality Measurement. Let me introduce our esteemed panel. We have Dr. Pawan Goyal from the American College of Emergency Physicians; Mary Kratz, Executive Vice President of the Interoperability Institute; Randy Kudner from ASTRO; and Dr. Elizabeth Drye and Dr. Shazia Siddique, who will be our reactors today. So without further ado, I'll turn it over to Dr. Pawan Goyal and Mary Kratz to give us an overview of data-driven digital quality measures and computable guidelines.

Thanks for the kind introduction, Jason. Good morning, everyone. I know we are between you and lunch, so we'll make sure we follow our timeline here. We have no conflicts of interest. A little bit about ACEP, the American College of Emergency Physicians: we are one of the physician societies, like yours, with 40,000 members. 23,000 of them are actively practicing emergency physicians, and 17,000 are our student body: residents, trainees, and fellows. We operate through 53 chapters, one for each state plus three special chapters, and we have around 35 communities of practice, which we call sections. With that, let me ask my colleague, Mary, to talk a little bit about the Interoperability Institute.

Good morning, and thank you, Dr. Goyal, and thank you for the privilege to be here at CMSS to share this presentation. The Interoperability Institute is a nonprofit entity. We work with organizations and communities to harness all aspects of interoperability. I'm couched more within the informatics and technology communities, looking at how, as we develop new health innovations, we use health information technology to deploy and implement those solutions.
We achieve our mission through a very large early-career workforce program, doing sandbox or digital-twin work, standardization, and data standardization.

Thanks, Mary. So in yesterday's session, we learned a lot about AI and machine learning principles. What's at the core here? The core is really data. Data is nothing but facts and figures collected in raw format; ultimately, it's about driving that data to value. We get data in various formats, at various volumes and velocities, but it has to be managed and converted into value. That value can differ by stakeholder: there may be one value set we're looking for for our physicians, a different value when the data is for the patient or consumer, and obviously all of us have to work with payers too, to bring revenue home. And there can be different data definitions. So we'll learn a little more about our data footprint and how we're applying principles and practices to make that data more valuable. Among the tools we're applying to convert data to value, one is the natural language processing engine approach, and I'm sure several of you are using that. The best of the best NLP engines are 60, 70, 80 percent accurate, especially in the healthcare marketplace. One thing that's not helping, especially for those of us who have QCDRs and have to comply with CMS regulation, is that there is zero tolerance for error from the CMS perspective. When we submit our scores to CMS for MIPS compliance, or our participant groups do, we have to attest that our data is true, accurate, and complete to the best of our knowledge. These NLP engines can serve as more of a screening tool, but we still have to have human intervention and validate that whatever output comes out of them is usable. The other aspect is data standards, and there are two areas of data standards in particular.
One is element-level data, what we call the data dictionary. All of us are aware that there are United States Core Data for Interoperability (USCDI) standards being released by the Office of the National Coordinator, and CMS is interested in us adopting those standards. The first two versions were released a few years ago. Version 3 is going into implementation, and that's where quality is represented. The Version 4 draft is already out for comment. So as societies and QCDR owners, we should carefully look into the opportunities and implications of USCDI. I won't go into a lot of technical detail; our next speaker, Randy, will touch on some very specific examples of applying USCDI. In terms of data transport, we have to acquire data from various sources. Historically we have been acquiring data from claims; that has been the basis of quality and MIPS reporting. But we have started to transition to more clinical data capture, wherever data is captured in EHR platforms, and a lot of it is in cloud environments these days. How do we get it? HL7 has historically been the standard protocol for collecting that data and bringing it into a registry platform, but there has been a lot of buzz around FHIR as the next transport standard for data interoperability. Again, we'll learn more details about FHIR from Randy, and we can always come back during Q&A to the best practices around FHIR, or the challenges in its adoption. There are also other standards being released. One of them is TEFCA, the Trusted Exchange Framework and Common Agreement, which I'll let Mary touch on. It's really about interoperability at scale, crossing state boundaries: how do we have a national-level exchange framework? Because our patients move. They'll be in Florida during the winter and may have a healthcare episode there, but when they come back to Michigan we want that data back in the care flows.
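To make the transport discussion concrete, here is a minimal Python sketch of the legacy HL7 v2 style of exchange that registries have historically parsed before mapping fields into their own schemas: pipe-delimited segments pushed point to point. The message content, application names, and field choices below are invented for illustration, not drawn from any real registry feed.

```python
# A fabricated HL7 v2 ADT message: one segment per line, fields split by "|".
sample_msg = "\r".join([
    "MSH|^~\\&|EDAPP|HOSP|REGISTRY|ACEP|202301011200||ADT^A04|MSG001|P|2.5",
    "PID|1||12345^^^MRN||DOE^JANE||19830101|F",
])

def parse_hl7v2(message):
    """Split an HL7 v2 message into {segment_id: [fields]} (naive, for illustration)."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

msg = parse_hl7v2(sample_msg)
mrn = msg["PID"][3].split("^")[0]   # the patient identifier lives in PID-3
print(mrn)
```

The brittleness is visible even in this toy: every downstream consumer must know positional field semantics, which is part of why the resource-based FHIR approach discussed next has gained momentum.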
So how do we do that, and what does it really mean? The other aspect is how you apply all these frameworks and standards and start to build smart applications. What resources are required, and what kinds of repositories can we use where we have a standardized, harmonized, normalized data set available in the transaction set? Whether that transaction is happening for quality improvement, for MIPS compliance, for research purposes, or for guiding our clinical policy development, it's very important that we put our repository framework in that perspective and align our resources through that standard approach. These are some of the examples. How do you build apps, and how can we enable some of those apps at the bedside for our clinicians so that they are serving our patients better? The clinical guidelines are then reflected and available to them at the bedside, rather than physicians being expected to go to a society website, download an 80-page PDF, and try to understand: how do I treat this patient with chest pain? What are the three things I need to do, or should not be doing? So ACEP does work on quality measures in our registry platform. Currently we have enabled 26 QCDR measures and 22 or so QPP measures, which map to various component bodies of the quality framework. Some of those are process measures and some are outcome measures. We know that CMS is really interested in more and more outcome measures. But do we need processes at the back end of even an outcome measure? Something has to happen for an outcome to be achieved. How do we combine outcomes and processes together, but with a more patient-centered, outcome-focused approach? Also, aligning our quality measures to the CMS Meaningful Measures roadmap is extremely important, because CMS is trying to push the industry more toward the patient-centered journey and not really around a specialty-specific measure set.
So we would be interested in learning from all of you what other societies are doing in that respect. How do we report to MIPS? We have seen the bar for MIPS going up every year. CMS is adding more and more testing requirements, especially element-level testing, and then audit requirements; how do we comply with that? The other big piece for us in developing quality measures is to really understand the trends in the industry and enable research for our researchers. One of the standard approaches we have started to work on with the Interoperability Institute is called Business Process Modeling, or BPM+ Health. BPM is a framework for standardization of clinical policies and clinical practices, and there is a community of practice working with us to really break the barriers around data standardization, liberating data to bring it to a platform approach. In healthcare, when it comes to data, we are still at a cottage-industry stage. If we want to be the Amazons or Ubers of the world, where we connect a consumer to the enterprise, we need artifacts, we need a digital footprint, where the digital capabilities can be consumed by a platform. There is standardized data coming in as input, there are processing engines in the middle, and then there is an output generated through consumption of that standard data set. That's the approach in BPM+ that we are trying to apply here, and we'll learn more on that from Mary. With that, let me bring in Mary.

Thank you very much, Dr. Goyal. As you shared, we met at the Business Process Modeling for Healthcare standards organization. A little bit about my history: I started with Health Level Seven, the data standards organization, way back in Version 1, when the industry was doing flat-file data sharing to be able to take data from one entity and move it to another.
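To give a flavor of the "standardized input, processing engine in the middle, output" idea behind BPM+, here is a toy rendering of a clinical pathway as an executable model. This is our own Python illustration, not BPMN notation, and the pathway content is invented for the sketch, not an ACEP guideline.

```python
# A clinical pathway captured as explicit, machine-walkable steps rather than
# prose in a PDF. Step names and branching logic are illustrative only.
pathway = {
    "triage":          {"next": lambda ctx: "ecg" if ctx["complaint"] == "chest pain"
                                             else "standard_workup"},
    "ecg":             {"next": lambda ctx: "troponin"},
    "troponin":        {"next": lambda ctx: "done"},
    "standard_workup": {"next": lambda ctx: "done"},
}

def run_pathway(ctx, start="triage"):
    """Walk the pathway for one patient context, recording each step until 'done'."""
    steps, state = [], start
    while state != "done":
        steps.append(state)
        state = pathway[state]["next"](ctx)
    return steps

print(run_pathway({"complaint": "chest pain"}))
```

Because the pathway is data, not prose, the same model can drive a bedside app, a simulation run, or a quality-measure check, which is the reuse BPM+ is aiming at.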
So, unfolding some of the wonders of this FHIR standard, these Fast Healthcare Interoperability Resources, there really isn't a lot of mystery there. The world has evolved beyond flat-file transactions. The Interoperability Institute is a nonprofit couched within the health information exchanges; my affiliate nonprofits run all of the state infrastructure, and a lot of that evolved from flat-file transactions to, and I come from the technology community, so stop me if I'm too technical, a protocol that runs the World Wide Web. I see many folks here on your computer or your phone; you're probably in a web browser. Behind that are technical standards called REST, or RESTful APIs (Application Programming Interfaces). What HL7 FHIR does is leverage those underlying World Wide Web, RESTful API technologies and protocols, and then put a layer of these FHIR resources on top. A FHIR resource is the specific data content that is healthcare content. In this space, HL7 was defining use cases, and defining them in simple English language. You talk to one entity, maybe the American College of Emergency Physicians, well, no, because they've moved to business process modeling, but talk to one provider organization and one payer organization, and they're each talking their own way while trying to define the requirements for software that's interoperable and works together. You can't have definitions like that. So we take these use cases, which define our workflows within healthcare, and bring in this business process modeling standard. What BPM+ provides is a standardized language to define your business process. Now, why is that important? It's important, one, because you can be comparing apples to apples, but also because the engineering and technology communities have evolved their tools, these business process modeling tools, so that you can actually auto-generate software code.
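Here is a minimal sketch of what that RESTful layer plus FHIR resources looks like in practice. The endpoint URL is a placeholder (`example.org/fhir` is not a real server), and the Bundle below is a hand-written sample, but the URL shape and JSON structure follow the general FHIR search pattern.

```python
import json

# Hypothetical FHIR base URL; real servers follow this same RESTful URL shape.
BASE_URL = "https://example.org/fhir"

def fhir_search_url(resource_type, **params):
    """Build a RESTful FHIR search URL, e.g. /Observation?code=8867-4&patient=123."""
    query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return f"{BASE_URL}/{resource_type}?{query}" if query else f"{BASE_URL}/{resource_type}"

# A minimal searchset Bundle, as a client might receive it over HTTPS.
sample_bundle = json.loads("""
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "Observation",
                  "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4"}]},
                  "valueQuantity": {"value": 72, "unit": "beats/min"}}}
  ]
}
""")

def extract_observations(bundle):
    """Pull (LOINC code, value) pairs out of a searchset Bundle."""
    results = []
    for entry in bundle.get("entry", []):
        res = entry["resource"]
        if res["resourceType"] == "Observation":
            results.append((res["code"]["coding"][0]["code"],
                            res["valueQuantity"]["value"]))
    return results

print(fhir_search_url("Observation", patient="123", code="8867-4"))
print(extract_observations(sample_bundle))
```

The point of the layering is exactly what the speaker describes: the transport is ordinary web technology, and the healthcare meaning lives entirely in the resource definitions.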
They're commonly referred to as no-code or low-code tools that know where all the libraries exist and pull from open-source repositories of code to give software developers a running start at developing working, viable code. So in this context, the Interoperability Institute built an open-source platform, what's referred to as a sandbox, or, as you'll be hearing more and more with the advent of AI, a digital twin environment. A digital twin is a simulation of your healthcare enterprises and all of the different component systems that come together, be it provider organizations, payer organizations, reporting of communicable diseases, reporting to a cancer registry, reporting to a trauma registry, et cetera. The Interoperability Institute came together with some of our collaborators: big technology companies like Amazon Web Services; Integrating the Healthcare Enterprise (IHE), a standards organization whose protocols many of our legacy interoperability systems currently run on; and of course the engineers at IEEE, et cetera. We formed a collaboration to develop an open-source digital twin for healthcare. For that open-source digital twin we had the whole branding discussion, and the name of it is MELD. MELD is not a four-letter acronym that means anything; it's meant to be a melting pot where different entities within the health sector, different technology companies and viewpoints, and the public sector can come together, and in a simulated environment we can learn together, deploy and stand up proofs of concept, and put solutions on a pathway to production. Often, in order to get to solutions, you have to fail first, and it's really nice to have a safe, simulated environment where you can learn those lessons and fail before you get onto your pathway to production.
So this MELD global sandbox is open source. If there's anyone technical in the audience, it runs on an Apache license, which means the code is open source and in the public domain, but there's nothing in the licensing to stop a commercial entity from picking up that code and developing a software product. It provides synthetic data, that is, example clinical data, part of what's more and more often being referred to as the data fabric. And within CMSS, many of the repositories and curated publications are such a critical component of this emerging data fabric, because they contain the knowledge of the health sector. We also have a library of FHIR applications that are in the public domain and freely available. In the MELD environment we simulate a statewide ecosystem, so it has all of the players: pharmacies like a CVS or a Walgreens; labs, both public and private laboratories; payers, providers, community-based organizations, FQHCs, et cetera. With this open-source sandbox or digital twin, we make the code available to software developers; those code repositories are generally in what's referred to as GitHub repositories, and the Interoperability Institute, my technical team, curates the GitHub repository for MELD and for the global development community. We take some of the scenarios, these business process models that run on MELD, and formulate them with academic partners. These environments are really good for training, so we package up learning labs. Right now a lot of the focus is around FHIR and the USCDI common data standard. What is an implementation of TEFCA going to mean, to be compliant with the new TEFCA regulation? And we can put together discrete lessons, what we refer to as course packs, of here's what you need to know.
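A toy illustration of the synthetic-data idea behind sandboxes like MELD: fabricate realistic-looking but entirely fictional records that developers can safely load into a test environment. The field names, name list, and complaint list below are our own invention for the sketch, not MELD's actual data model.

```python
import random

random.seed(42)  # deterministic so runs are repeatable

FIRST = ["Alex", "Sam", "Jordan", "Taylor"]
COMPLAINTS = ["chest pain", "lower back pain", "first trimester bleeding"]

def synthetic_ed_visit(visit_id):
    """Return one fictional emergency-department visit record."""
    return {
        "id": f"synthetic-{visit_id}",
        "name": random.choice(FIRST),
        "age": random.randint(18, 90),
        "chief_complaint": random.choice(COMPLAINTS),
    }

cohort = [synthetic_ed_visit(i) for i in range(3)]
for visit in cohort:
    print(visit)
```

Production-grade generators such as Synthea go much further, simulating whole longitudinal histories, but the principle is the same: no real patient data ever enters the sandbox, so failing there is safe.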
So I mentioned the learning labs and the course packs, recognizing that CMSS and all of the medical professional societies have training as part of their mission. Our partnership with ACEP, the American College of Emergency Physicians, has been amazing. We have taken two digital clinical practice guidelines, the first on first-trimester bleeding and the second on lower back pain, expressed those guidelines in the standard business process modeling language, and been able to load them into this simulation environment, to use synthetic data to run scenarios and to improve clinical practice. So we're at the point where we've stood up a proof of concept and are now looking at the next steps to enable that proof of concept, both on the clinical side and by introducing it to technology companies. Just really briefly, some of the reasons you might use a digital twin environment. First of all, FHIR is coming. The federal government is mandating this standard, so it's a good, quick way to get folks up to speed on what this FHIR standard is all about. You can use these digital twin environments to create, test, and validate healthcare applications. And it's standards-based: you bring in the HL7 standards with the business process modeling standards and some of the very technical engineering standards to develop a simulation environment. So with that, we put together this cascade. It's so important to capture and document our needs precisely, and the business process modeling standard allows us to do that, so that we can digitize and automate our business processes. At the Interoperability Institute we commonly refer to our project as "kill the fax machine." We are in this decade. The second is to use these simulation environments to actually enhance and verify, but you can't do that without the expertise of the clinical side.
You can foster consistency, which is an enabler for interoperability, which ultimately provides better patient care and reduces errors because of the clarity of those requirements. And then, finally, it's all about implementing and moving forward with that pathway to production. So I will now turn it back to Dr. Goyal to share some of the next steps that ACEP has been thinking about with their new data institute. Thank you.

Thanks, Mary. So how is ACEP applying all these core principles and practices? Our board was very generous, and they supported us in building an Emergency Medicine Data Institute. The vision for the institute is really for our patients and communities to have happier and healthier lives, and the mission is to create impact and visibility in emergency medicine for physicians through innovation. There are five pillars of the data institute, shown on the left-hand side. We are starting with our registry platform. We had a registry where we learned a lot over the last eight years or so. Initially the registry was focused on MIPS compliance, understanding the alternative payment models, and the future of the registry framework. But then we took that learning and developed our quality improvement program, because the ultimate value is in quality improvement. How do we enable research and analytics, rapid development of quality measures, testing of quality measures, and put an industrial scale around quality measurement methodology? Historically, societies have done quality measures and clinical policy development through two methods: one is literature review, and the other is expert judgment. We are trying to add a third leg to the stool, which is data-driven policy development.
Data-driven quality measure development. And the fourth area is expanding the registry beyond just MIPS compliance, and not only our own specialty but more around the patient journey: where is the patient coming from, what are the pre-hospital components of care, like 911 or the emergency medical services and ambulance system, and then post-acute care, where is the patient going after stabilization in the emergency department? Are they being referred back to primary care? Are they being admitted to a hospital bed? Are they being referred to another hospital? How can we follow them? But then we also want to understand the various transactions that are happening: what are the economics of healthcare in serving that patient, what are the research opportunities, where are we doing better, and where can we do the best? If you look at the reach of our data institute, emergency medicine sees a very diverse patient population. We get a lot of patients who are new to the system, coming to a hospital for the first time, and we get a lot of visits: 150 million visits every year come through the 5,000 emergency departments in the nation. Our current platform captures data from more than 1,000 emergency departments in 47 states, and we have more than 20,000 participating clinicians. Over the last eight years we have accumulated data on more than 120 million patient visits, and our data set is pretty large: around 850 data elements, which is anything and everything that happened to the patient in the emergency department. Our philosophy around the use of this data from the data institute is what we call a five-level data maturity model, where everything starts with collection of data. Data is nothing but raw facts and figures. We take that data and convert it into something useful called information, which means putting it in context: who is the patient, why is this patient here, where was the patient served, et cetera.
Then we take that information and convert it into knowledge, which is around why this patient came, what we did, what tests were ordered, what the differential diagnosis was, what processes were applied in serving this patient, what timestamps and delays happened in the care, et cetera. Once knowledge is generated, we go to the next level, which is generation of wisdom. That is to really understand: for a 40-year-old male patient with chest pain and no underlying condition, if that patient came to an emergency department in Seattle versus Boston versus Washington, D.C., what was the variability in the care? What were the root causes of that variability? Who's doing better than the other, and whose patient is better off in the end from an outcome perspective? Once we have that knowledge captured, we try to build knowledge modules and then disseminate them for sustainable change management. The change has to be incentivized, so we also look into various incentives for change at the physician level, the system level, the patient level, et cetera. So that's our philosophy around the data institute. There are a couple of QR codes here if you are interested in learning more about the MELD sandbox and about our quality measures for 2023, along with our contact information. With that, let's bring in our next speaker, Randy. And this is just a flow diagram of how BPM+ looks.

Thank you. My name is Randy Kudner. I am from ASTRO; that's radiation oncology. There we go. So I spoke last year about our data journey and wanted to give you an update. It's a different perspective, and it's something that we didn't start out to do; we fell into it. And so I want to share how you all can maybe do the same. I'll give an overview, we'll talk about what's happening right now, and then the outcome of it.
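The "wisdom" step Dr. Goyal describes, comparing the same presentation across sites on one process metric, can be sketched in a few lines. The metric choice (door-to-ECG time) and all the numbers below are invented for illustration, not registry data.

```python
from statistics import median

# Fabricated door-to-ECG times (minutes) for chest-pain patients at three sites.
door_to_ecg_minutes = {
    "Seattle": [6, 8, 7, 12],
    "Boston": [9, 14, 11, 10],
    "Washington DC": [5, 7, 6, 9],
}

# Summarize each site, then identify the fastest one.
site_medians = {site: median(times) for site, times in door_to_ecg_minutes.items()}
best_site = min(site_medians, key=site_medians.get)
print(site_medians, best_site)
```

Real variability analysis would of course risk-adjust and test for significance, but the shape is the same: aggregate the contextualized data, compare across sites, then ask why the differences exist.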
So, probably like many of you, over the last 10 years you've had members working on their own siloed projects in the world of interoperability and data standards, trying to solve a problem with technology. We had all of that. In about 2019 we converged with ASCO on a data standards project specifically for oncology, which helped us take all of those disparate projects and start directing them all the same way. And now we are having successful pilots; vendor information systems are adopting federally recognized, internationally recognized data standards. In a very short time we've been able to make a large impact, and that's been through these words that mean nothing: mCODE and CodeX. Everybody loves an acronym. mCODE, quickly, is a data standard around oncology. It is oncology and oncology-plus: it is not just what the chemo drug is or what the radiation dose is; it is the full spectrum and the full picture of a person being treated for cancer. CodeX is one of the HL7 FHIR accelerators; we've heard of a few of them over the past couple of days: Argonaut, Da Vinci, Gravity. All of these are at that HL7 level trying to solve a certain problem. CodeX is the first one that is disease-specific. It started out as oncology but has quickly expanded into cardiology and genomics as well. And so, while oncology is a few years ahead of the other diseases, they are using the oncology model and quickly mimicking those successes for their own specialties. So, like I said, CodeX, or sorry, mCODE, the data standard, is big-picture oncology, but the "m" in mCODE is still "minimal." It is always intended to be a tight data set that answers the question of what I should collect for a cancer patient, whether it's research, quality measures, registry data, pharma looking at their own trials, anything: what should I ask? And so this is now in its third version through HL7.
It is a FHIR standard, and it collects all of the things that are part of the patient journey. So you are looking at the workup, the treatment, the follow-up, and the person at the center of it. Radiation oncology is on the right there, in the yellow section, and we've made a huge impact in the past few years. This has been helped by the perfect storm we find ourselves in. There's so much talk about AI, and there's so much talk about interoperability, but there has been for years; the talk isn't new. The federal power behind it is. And that is making these dreams and these tools more of a reality, because the vendor systems have to comply, the payers have to comply, and you have to be able to get your patient data to the patient. So all of a sudden, these dreams that have been dreamt about probably since Meaningful Use are coming to fruition, because there's the tooling, there's the framework, and there's the infrastructure. So if you don't have a digital plan, get it going; it's a perfect time. Within CodeX, that's the community of people working on these data standards, you can see there are a lot of players, and it is a very diverse group: specialty societies are up there, but it's payers, it's federal agencies, it's international partners. It's everybody. We talk a lot at CMSS about better together, stronger together: what partnerships can we create and rely on? A lot of the time we think about what ASTRO and ASCO can do together, or what the AUA and ACR can do together. I challenge you to dream bigger, because there is energy, and there are a lot of people and a lot of groups with funding that are interested in helping groups that have the SMEs at the table. In the first session this morning there was a call-out that our capital as specialty societies is our members' knowledge, their content knowledge. We have the SMEs.
Let's focus their energy on these solutions. It's a great environment to bring together the groups that have the technology tooling, the groups that have the data standards knowledge, and the clinical expertise coming from specialty societies. So I'll talk quickly about what we've done; I could talk about this for days, so I'm giving you the highest level. This is where ASTRO started: developing a solution for end-of-treatment summaries, which sounds really boring, but it is so meaningful for care coordination, especially in oncology. For that patient, there was mention earlier of the patient information that comes before they hit the emergency room; this is part of that full picture of your patient. You can see the key partners there: we have industry, we have CMS contractors, we have research organizations, international partners, and healthcare systems. I think one of the big things here is that we have the vendors at the table. Dr. Goyal asked, how are you convincing your physicians to change? Spoilers: we're not. We're talking directly to the vendors. We're getting the information, the data standards that are specific to what they're doing, into the systems on the back end, so it just happens. It's not about trying to convince anybody or win hearts; it's going to the source of the data and changing it from there. And with that, we have the two major radiation oncology vendors adopting the mCODE standard, and we also have Epic adopting the radiation therapy standard, which is huge, because Epic doesn't play in that space, and they are saying: this is so established, this is so comprehensive, we want to be in this space. So then we took that information, because we had set that groundwork, and we said, great, what else can we do? What else can we apply it to? End-of-treatment summary is so small, but your foundation is set.
And the minute you have your foundation set, you can do anything. So we've taken it to prior authorization. We have payers at the table, working right now in a pilot with US Oncology, and Epic and Varian as vendors, saying: I think we can make this easier. With prior auth, the political piece of it is still a nightmare, but the technical piece, the data-transfer piece, we can do something about. And we've heard a lot about digitizing guidelines during this meeting, and about clinical decision support; that kind of goes along with prior auth, because payers have digitally accepted guidelines that they are OK paying against. So how do you merge all of that information and take a lot of that burden out of day-to-day practice? Again, we've got payers, the AMA, Pfizer, and Telligen, a CMS contractor, all very interested in this and working on it. And then quality measures, which of course this group is more interested in. We have done a couple of proofs of concept of whether we can use mCODE, not to replace current eCQM language, but to further specify things in quality measures that are specific to cancer. There's just a big bucket in eCQM language, but we can get down to minutiae in an oncology-specific standard, down to those really complex questions. How many times, as measure stewards, do you have to water down a concept because you can't get at the data? If we can be more specific in data standards, you can get at that data, and therefore quality measures are more meaningful and more powerful and actually impact quality improvement instead of just your Medicare reimbursement.
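The numerator/denominator shape underlying the quality measures Randy describes can be sketched generically. Real eCQMs are specified in CQL against standardized clinical data, not in Python, and the patient records and field names below are invented for the sketch.

```python
# Fabricated records: which patients are eligible for the measure (denominator)
# and which received the guideline-concordant action (numerator)?
patients = [
    {"id": "p1", "eligible": True, "action_done": True},
    {"id": "p2", "eligible": True, "action_done": False},
    {"id": "p3", "eligible": False, "action_done": False},  # excluded from denominator
    {"id": "p4", "eligible": True, "action_done": True},
]

def proportion_measure(records):
    """Return (numerator, denominator, performance rate %) for a proportion measure."""
    denom = [r for r in records if r["eligible"]]
    numer = [r for r in denom if r["action_done"]]
    return len(numer), len(denom), round(100 * len(numer) / len(denom), 1)

print(proportion_measure(patients))
```

Randy's point maps onto the `eligible` and `action_done` predicates: the more precisely a data standard like mCODE can express those criteria, the less a measure steward has to water the concept down.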
So you can see where we are: at that proof of concept 2.0. We had a conversation earlier this week with the Telligen team and ASCO about just that. We've done the proof of concept; we've taken ASCO measures and ASTRO measures and put them into a digital format using mCODE where current coding languages didn't suffice, and that's been proven and done. What's next? I think the next step is going to be finding a clinical concept, not a fully baked eCQM. Let's find the clinical concept, see where the gaps are, and build out those standards. Let's have this be a use case where we can fill in the gaps, because that's what mCODE is: filling in the gaps that don't exist in other coding languages. And so you can see this very detailed, multi-phased approach. I'll tell you, this week that approach went out the window, because we had that conversation: the proof of concept is done. We did it. mCODE works for measures. We can do it. What's next? And so we are regrouping around: what's the goal, what's the concept we can't answer now, and how do we get there, instead of working backwards from what data we can get. What question are we trying to answer? Let's go make the data; let's go find a way to get at the data. And, like I said, mCODE is being adopted widely; it is international at this point, in a lot of different use cases and a lot of different scenarios. mCODE was cited by CMS as an option for data submission for the Enhancing Oncology Model, which is the first federal model where they've said, yeah, let's do this mCODE thing, let's try something from an accelerator. So there is excitement around it at the federal level. And mCODE data elements were included in a proposal for USCDI+ for quality, which again is acknowledgment at the federal level that we're moving in the right direction.
This is actually where we're setting the foundation to solve all of these other situations. And I think that's where our success has been, and where I encourage you to go: set your foundation, and use the SMEs in your membership to fill those gaps that right now stop you from creating or finding the data or the answers you need. And also, I think this was just announced last week: the White House is partnering with the CodeX and Vulcan FHIR accelerators to further moonshot activities, to further informatics activities. So again, federal recognition for what's happening. I can talk about this for days, so I'm happy to talk to any of you about how you might engage in something at your societies. Like I said, the CodeX accelerator has oncology, genomics, and cardiology spaces. But I'm happy to talk to any of you after or later on. Thank you. Thank you, Pawan, Randy, and Mary. That was extremely informative. We'll now open it up to our reactors: Dr. Elizabeth Drye, the chief scientific officer at the National Quality Forum, and Dr. Shazia Siddique, assistant professor of medicine at the University of Pennsylvania. Thanks. And I'm going to go fast because we don't have too much time left. Great information here — so much rich progress. I'm hearing it from the perspective of working at the National Quality Forum. As most of you know, our focus is on advancing quality, particularly through leveraging quality measurement. Part of our mission is to strengthen the infrastructure we can use to assess quality, looking at outcomes that matter to patients, and doing it in a less burdensome way. So you can imagine the content here is really central to executing on that mission. Some of the things I'm hearing I just want to echo — that was a lot of content, and I'll try to react and summarize a little bit.
I think they really fall into four areas that form an end-to-end way to think about it, from data capture all the way to getting improvement in care — what Pawan might call data to value, to quote him. And that's the data capture; the aggregation of that data into a form we can use to generate learnings; the translation of those learnings into tools and guidelines; and then the implementation of our actual algorithms, whether decision support or quality measurement. You're hearing about problem solving, just in these couple of presentations, across all of those phases. We have to get through all of that to get to real improvement in care. So I'm going to flag a couple of points the panelists made, and maybe throw out some additional things to keep in view in this complex world — probably not something any of us have enough time to spend in, but so consequential to how we're going to be able to deliver better care. On the standard-setting side, you heard a couple of use cases, quote unquote. I personally had a hard time when I first started working in this space even thinking about what we mean by that. We're taking a problem and trying to, as Randy was saying, figure out: what do we need to be able to act on this problem? Are we not capturing data? Do we have no way to share it because there are no FHIR standards in this case? Or is it about the sharing across organizations? These use cases, I think, are really valuable, because there's no centralized drive to get us down this path. There is ONC, going through an advancement-of-standards process. But they are more followers than leaders — they will say that themselves. They need to see things demonstrated before they're going to push forward and require data to be interoperable. You have to show that it has a use case and that you can use it for something.
So these use cases, to me, in some ways seem very narrow and isolated given the scope of problems that we have. But they're really valuable, as I think Randy — well, everybody — illustrated, in getting some forward progress and laying a groundwork: once you've demonstrated that you can build standards and you have the right collaborators at the table, you can repeat that in other use cases. So I would echo the encouragement to find areas in your societies and specialties where you really want to solve a problem, and then start learning how to work through this process. The second thing is this ability to have a toolbox. That's pretty new. As somebody who developed measures for 15 years, data was our biggest barrier. So you see a lot of claims-based measures. I was working at Yale CORE when I was doing this work, and the early measures were all Medicare — it was the only place we had integrated data sets. Now we're pooling EHR-based data and other sources of data, and that's a complex area. I'll just note that it's so critical for us to be able to learn, to be able to aggregate across care settings, not just build out of registries. For example, if you think about diagnostic excellence, which is a focus of this meeting: aggregating data for patients across all the settings they're in, even before they get a diagnosis, so we can look at things like delay or misdiagnosis. The learning and translation into guidance — that value step — is a really challenging one. And the one thing that didn't come up here, but has come up in other discussions at this conference, is that this needs to be a really transparent process, because it's not just a technical math or AI process. There are values laden in the way that we do things. So Pawan noted, for example: how accurate is accurate enough in our algorithms?
In the first panel after breakfast, we were talking about what data we should be using to build these algorithms. I come from a nonprofit, multi-stakeholder standard-setting organization. We need all our stakeholders — patients, advocates, providers of all types, payers, employers, regulators, et cetera — to be able to understand the data we're using, so we get these steps right. So as we're doing these kinds of narrow use cases, one thing we can be thinking about is: how do we bring that input into our learning and translation step? And then finally, implementation. There, one thing I wanted to emphasize is that you need different kinds of players at the table, and you need incentives. I think all of you mentioned that. We have to bring together vendors, payers, and regulators, and not just get something built that we can use — we have to figure out how to incentivize it, and then enable and govern the sharing of data that the use of these kinds of tools requires. And hopefully, coverage for the upfront investment we need to make if we're going to put in place data sharing and quality measurement systems, locally and in states and systems, that can really advance care. So I'll stop there, because I know we don't have much time — just giving you that big-picture view. I really appreciate the opportunity to hear from the panelists and to share with you. Thank you. Good morning, everyone. My name is Shazia Siddique. I'm a gastroenterologist and clinician. I work with the American Gastroenterological Association; I've been on the Guidelines Committee for the past seven years. But I'm also a health services researcher, and I do research both with Medicare data and with health system data.
So I'm going to be commenting a little more on those last two areas, translation and implementation, and how a lot of these data standard elements fit into the bigger picture of clinical guideline development. We heard two really fantastic presentations about the promise of digital data standardization, through EHR pathways and for quality measure development. So let me take a step back and review how standard guideline methodology can intersect with this type of digital guideline, which essentially operationalizes guidelines and can also help with clinician workflow. For those of us who work in the guideline methodology space, we are familiar with GRADE methodology, a rigorous method for producing guidelines. It's important that there are explicit, patient-centered questions and outcomes selected a priori, which are then used to inform a systematic search. We then collate the evidence, rate the quality of the evidence through critical appraisal, and formulate recommendations. That process of formulating recommendations, as was just touched upon, requires value judgments, which we apply objectively through an evidence-to-decision framework. Because there are trade-offs in many of these decisions — and we've heard this with race-based algorithms as well: do you screen more Black individuals but then have more unnecessary biopsies, in prostate cancer screening, for example — a lot of value judgments are required as we start to use our data to actually impact change and make iterative changes through the EHR. So I really agree with the point about transparency. You may be familiar with the traditional evidence pyramid, which shows at the bottom the lower-quality studies, like case reports and case-control studies, and toward the top, randomized controlled trial data.
What GRADE methodology does is take systematic reviews and meta-analyses and look at them with a new framework, one that acknowledges the heterogeneity in study quality across these different study designs. And what we are seeing with the example from the Emergency Medicine Data Institute and these large data repositories is that many of them are actually better than a lot of randomized controlled trials, which are not done in a pragmatic fashion and don't provide real-time data on how patients are responding to interventions. There are several pitfalls in translating guidelines into digital formats or interventions. One, which I heard about in the first presentation, is when the guideline being used as the basis for a clinical care pathway is just expert opinion or a guidance statement — something that doesn't follow Institute of Medicine standards of using a systematic review to inform the guideline. That's really important as we think about evidence-based clinician workflow. Another thing we see often, even when people are utilizing our guidelines, is a false equivalency between strong and weak, or conditional, recommendations. When these clinical care pathways get built, with data pulls intended to show compliance with the recommendations, it's a problem when a strong recommendation and a conditional recommendation are held to the same standard. A strong recommendation uses the word "recommend" — you'll see something like, the USPSTF recommends colonoscopy for colon cancer screening — and that's applicable to the majority of individuals. But a conditional, or weak, recommendation uses the word "suggest."
And although the distinction seems very subtle — and I will say, as a clinician, many of my clinician colleagues often don't realize that these words are purposefully very different — it's very important, because in some situations doing the complete opposite of what the recommendation says is reasonable. We usually try to be very transparent about what factors should push you in that opposite direction — sometimes things like age or your pretest probability of cancer. Another issue arises when you embed guidelines, or any evidence base — it can even be your own data institute or registry data — that becomes quickly outdated. We've seen this with COVID-19 and other things that evolve very rapidly. What needs to happen in parallel is more resources for living guidelines, which update guideline documents more rapidly, and for rapid reviews, to more quickly build systematic reviews and guidelines alongside that data component. But all of this work is really a huge step forward in closing the evidence-to-practice gap: rapid integration into clinician workflow, monitoring quality measures, and identifying problems and solutions more quickly during clinical care. I loved the example about reducing administrative barriers. There are so many different examples of this, as well as data utilization, of course, for financial reimbursement. Thank you very much. Well, thank you all. We have about five minutes left for questions. If anybody has any questions, please step up to the mics here. Or if anybody wants to pitch Mary a Shark Tank idea on how to recycle fax machines in the future. The comment was "kill fax machines," which got a great chuckle at the back of the room. Donna Gronin with the American College of Preventive Medicine. Probably not someone you would think would be asking a question in this particular way.
But you all are brilliant minds — great academicians, great researchers, great data folks. Randy, you had a beautiful slide with the patient in the center, and the entire methodology with all the different data sets and data interactions around that patient. How on earth can an internist in private practice ensure that all the different subspecialties and referrals they send their patient to can be housed in one centralized repository, so that the primary care physician can truly manage and coordinate the care of that one patient? So truly, the question is about implementation — the practical application of all this great data and all these great systems. How does it actually get applied at the point of intersection between a private practice and the patient? I think I'm going to give a really simplistic answer to a really big problem. Because that is the problem, right? Great, we have all this data, but I can't share it; nobody can see what happened two years ago. I think the first step is building the data standards, adopting the data standards, and choosing the same data standard. With all the siloed projects I talked about before we started moving into FHIR, everybody had their own solutions. The University of Michigan is a huge data center, and they had their own solutions. But they couldn't share information with the University of Pennsylvania, also a big data center. And if two giant, data-forward institutions can't talk to each other about the same specialty — about oncology — then nothing is going to happen for a single-doc practice in Omaha. Nothing against Nebraska. So who then — because I think you, or somebody, mentioned accountability — who will hold the practice, the system, or the physician accountable to ensure that there is interoperability, so that the patient gets the right care? It's an excellent problem.
And I'll start with a problem statement first, which is around having a national standard for a common patient identifier. We haven't agreed to that as a nation in 25 years. If we want to serve our patients — and our patients will go a lot of places; they have needs; primary care will create a referral and the patient will be referred — we would not want to fax their documents to the specialist. We would like an interoperable environment where the specialist has access to that data, so they do not run duplicate tests and do not give the patient the penicillin they're allergic to. So it's very important that there be a public-private enterprise here to hold us accountable. We have to put the patient at the center of the universe and let the patient drive the behavior. Unfortunately, the patient is the most neglected entity in our health care universe today. A lot of behavior is driven by payers, and we have to take medicine back into the hands of providers and patients. So then my final follow-up: I would love to hear your thoughts on what CMSS could do with each of our specialties to put pressure on the parts of the system that most need to be fixed, to ensure that there is, in fact, accountability for interoperability at the physician practice level, with the patient in mind. I'll just add — I think, Randy, you mentioned that payers are a big force in this space, and that's a plus and a minus. But from where I sit, the biggest push is the move to population health accountability, as you know — more and more capitation in payment, but also more accountability for outcomes. At NQF, this is something we're thinking about every day: how can we enable population-based measurement and also fair accountability? I don't know that we have the answers yet. But interoperable data is a huge need — and we're not going to have fully interoperable data for a while.
Efforts at the ACO level to include population health measures like blood pressure control and hemoglobin A1c control, as you know, ended up being postponed, because execution on interoperable data exchange was just too hard; people weren't ready. And you see the federal government — CMS and the Office of the National Coordinator — backing off their timelines for driving toward that. But even though we're moving at a slower pace than most of us would like, the momentum to get the data infrastructure in place, so we can do population-based accountability, is critical. To do it fairly, we need that data sharing and that visibility for primary care providers. I don't think we're going to reverse direction anytime soon; we have to get there, is my sense. I think one other thought on what CMSS can do: what's happening now in the standards organizations, driven by the federal government, are standards and implementation guides that are paper-based. Part of the reason I'm here is that technical toolbox — to reduce those barriers and, with partnerships like ACEP, move to reference implementations. Here's a reference implementation of CodeX: what does that mean to this payer? What does it mean to this pharmacy? And to actually have working code in an environment where we can advance some of these hard issues. Thank you, everyone. We're at time for our session. Please give our panelists a big hand, and thank you for spending the time with us today. Thank you. Have a good lunch, everyone.
Video Summary
The panel discussed the digital evolution in clinical practice guidelines and quality measurement, highlighting the importance of data-driven digital quality measures and computable guidelines. The speakers discussed the use of data standards such as FHIR and the need for interoperability in healthcare, and stressed the value of collecting and converting data into actionable information and knowledge. The panelists also examined the challenges and opportunities in implementing digital guidelines and quality measures in clinical practice, emphasizing the need for collaboration among stakeholders, including clinicians, vendors, payers, and regulators. They highlighted the importance of transparency in the implementation of guidelines, the need for ongoing updates and revisions, and the potential benefits of digital guidelines, including improved care coordination, reduced administrative burdens, and enhanced patient outcomes. Overall, the panel emphasized the need for standardization, interoperability, and collaboration to drive the digital evolution of clinical practice guidelines and quality measurement.
Keywords
digital evolution
clinical practice guidelines
quality measurement
data-driven
interoperability
collaboration
FHIR
actionable information
care coordination
standardization