SPEAKER 0 Registries, as you will see, are powerful in terms of what they're able to accomplish in the transformation of the delivery of health care and the outcomes thereof. But they're not necessarily your friend. There are several issues related to registries that I'm going to expose, and then we'll think about what we need to do to accomplish a good data environment. It really is going to require leadership, and so I'll try to make that point: that leadership, positional leadership in particular, is going to be required to transform medicine, and that the end destination is to have better data integrated into the workflow, captured as high quality data at the point of care. All right, so that's a phrase that presages what I'm going to actually talk about. Let's dive into the talk itself here. So the first question is, are we there yet? If you were to ask the administrative leaders, the governmental leaders, most people would say, yeah, we've been working on this for 15, almost 20 years now, we should already be there. Government timelines, as you know: presidents are only in office for four years, and they expect a lot over the period of time that they're in office, either one or two terms. In 2004, President Bush established a 10 year goal to develop the electronic health record, a Republican imprint. In 2009, President Obama, a Democrat, signed the American Recovery and Reinvestment Act, which, as I think most of you will recall, really pushed electronic health record deployment and adoption through a series of financial incentives. The expectation from both the Republican and Democratic sides of the aisle was that by 2016 we'd be there: we'd have data flowing through our systems in ways that enable artificial intelligence, enable analytics, et cetera.
Well, I must submit to you that I don't think we're really a whole lot closer than where we were when we started on our journey. Yes, there are pockets of data, and I'm going to go through those pockets of high quality data. But the ubiquitous torrent of real world data, the things in the yellow part at the bottom of this slide that we believed we would be able to act upon, really isn't there. All we really have now are these puddles of document exchange, through health information exchanges and through some direct and indirect connectivity from electronic health record systems. But it's really at the document exchange level: take a PDF document, move it from one person or one entity to the next, and call that interoperability. I'm being a little bit pollyannaish about it, but suffice it to say that I don't think we're there. We clearly aren't at the point of big data. What we're dealing with is largely transactional data, and trying to figure out truth from transactional data. Even the randomized clinical trials community, which viewed the availability of ubiquitous electronic information as an enabler of advancing science in the better, faster, cheaper mode, has found it to be kind of an electronic bridge to nowhere. So I'm going to be a little bit critical about informatics. I'm an implementer, so I'm kind of pointing the finger at myself and asking, where did we fail? So let me give you an example of one of the things that was set forward as part of ARRA that is very well intentioned but really hasn't accomplished what everybody believed it would. The ARRA HITECH legislation set up the Standards Committee to specify a number of things, including, for example, standards for interoperability. The decisions, led by John Halamka and Jamie Ferguson, of the committee to identify standards for interoperability basically said, you know what? The best way to think about
innovation in health care and health care information technology is to let the technology companies do whatever they want, and to figure out interoperability at the boundary between entities, not within an entity. So we really ended up with boundary based interoperability. That is, if you want to move information from one system to the next, you always have to do some degree of ETL: extraction, transformation, and then loading of that data into a new system. The approach to boundary based interoperability was supposed to be founded upon the Big Five terminology code sets. They're listed here: SNOMED, LOINC, RxNorm, ICD, and CPT. But if you really think about this for a moment, take a little bit of a dive into any of these. I would say, for example, SNOMED. Ask for the SNOMED term for myocardial infarction, and what you realize is that there are limitations to boundary based interoperability. When you type in the words myocardial infarction, which has a fairly specific meaning to a cardiologist, there are 308 matches returned. This is a query that I ran a couple of years ago, so there may be more matches now. And the interesting thing about this is that these terms are all organized on an ontologic, or if you will, relationship basis defined by pathology and anatomy, as opposed to clinical definitions. So the problem with boundary based interoperability, whether you think about the ICD 10 code sets or the SNOMED CT code sets, is that there's a lack of vocabulary specificity at the clinical level that enables the capture of good data at the point of care. Yes, the ontological relationships in particular actually help with analysis. But the starting point, that is, how do you create good data to start off with? Somehow that particular part of the story has been, if you will, missed.
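The ETL-at-every-boundary pattern described above can be sketched in a few lines. Everything here is hypothetical, the local codes and the crosswalk alike, but the shape of the work is what every boundary crossing requires:

```python
# Hypothetical sketch of boundary-based interoperability: every hand-off
# between systems needs its own extract-transform-load (ETL) step to re-map
# local codes onto the receiving system's vocabulary. Codes and mappings
# below are illustrative, not a real institutional crosswalk.

source_records = [
    {"mrn": "A001", "local_dx": "MI_ACUTE"},
    {"mrn": "A002", "local_dx": "MI_OLD"},
]

# A crosswalk built for ONE boundary crossing; a different receiving
# system would need a different crosswalk.
crosswalk = {
    "MI_ACUTE": {"icd10": "I21.9", "label": "Acute myocardial infarction"},
    "MI_OLD": {"icd10": "I25.2", "label": "Old myocardial infarction"},
}

def etl(records, mapping):
    """Extract local codes, transform via the crosswalk, load target rows."""
    loaded = []
    for rec in records:
        target = mapping.get(rec["local_dx"])
        if target is None:
            continue  # unmapped codes: where meaning gets lost at boundaries
        loaded.append({"mrn": rec["mrn"], **target})
    return loaded

target_rows = etl(source_records, crosswalk)
```

Because every receiving system implies its own `crosswalk`, this approach multiplies mapping work with every new connection, which is the scaling problem the talk contrasts with native interoperability.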
Now at the bottom of this I actually list that Duke participates in around 20 cardiovascular registries, and if you think about what I've expressed to you so far, it basically means that every time you want to supply data to a registry, we have to go find the information in documents, we have to convert that into some degree of data structure so that we can then move the data, and then we have to ETL it into the registry in question. So you might say, well, that's really not a very good model. Why do you do it that way? Well, the reason we have robust evidence and guidelines leading to registries is actually, I think, fairly simple. It's the best way the community of health care has actually discovered to improve processes, to improve systems, and then to improve outcomes. That is, by capturing high quality data and then conducting pretty straightforward analytics, means, medians, standard deviations, et cetera, we can return that data back to clinical practices and really change what happens at the front lines. How do we make sure that our patients with heart failure all receive a beta blocker? How do we make sure that a patient who comes in with an ischemic presentation of chest pain ends up getting a cardiac cath? Registries actually tell us what we are doing, and by telling us what we're doing they expose, if you will, the opportunities. So then you might ask the question, how do registries solve the data capture problem? Obviously, in order to do even very simple analytics, they have to start with good data. They have to ensure that the definition of New York Heart Association class II heart failure in Seattle is exactly the same as it is in Durham as it is in Omaha. Registries solve the data capture problem by standardizing the definitions of the data that they want to capture.
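What "standardizing the data elements" looks like in practice can be sketched as a small data dictionary plus a validator. The element names and permissible value sets below are illustrative, not the actual registry specifications:

```python
# A minimal sketch of a registry-style data dictionary: each element carries
# a definition and an enumerated set of permissible values, so that "NYHA
# class II" means the same thing at every submitting site. Element names and
# value sets are invented for illustration, not the CathPCI/TVT specs.

DATA_DICTIONARY = {
    "nyha_class": {
        "definition": "New York Heart Association functional class",
        "permissible": {"I", "II", "III", "IV"},
    },
    "beta_blocker_rx": {
        "definition": "Beta blocker prescribed at discharge",
        "permissible": {"yes", "no", "contraindicated"},
    },
}

def validate(record):
    """Return (field, value) pairs that violate the data dictionary."""
    errors = []
    for field, spec in DATA_DICTIONARY.items():
        if record.get(field) not in spec["permissible"]:
            errors.append((field, record.get(field)))
    return errors

clean = {"nyha_class": "II", "beta_blocker_rx": "yes"}
dirty = {"nyha_class": "2", "beta_blocker_rx": "yes"}  # free-text drift
```

The enumerated value sets are what make the data computable across sites: "2" versus "II" is exactly the kind of drift that free-text documentation allows and a data dictionary forbids.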
And these aren't trivial exercises. For example, the CathPCI registry, the angioplasty registry that I supply data to, has 500 plus data elements. The TVT registry, the transcatheter valve therapy registry, has over 800 data elements. These data elements are all standardized in terms of their meaning, again, so that the consistent capture of the context, the meaning of that data, occurs regardless of where you are. Now, is this done electronically? Well, the answer is absolutely not. This has led to a whole industry within the cardiovascular space. It's about $2 billion spent every year by health care systems supplying funds for these individuals to do swivel chair interoperability. Across all of health care, it's north of $15 billion for capturing data in ways that then supply that data to the multiple masters that exist within our health care environment. It isn't just registries. The health care systems also want to understand, for example, population health. There are federal and state regulations for reporting. FDA has their finger in the pie here, wanting to understand device outcomes. And yes, there's all of us who want to be data scientists, to do machine learning, to do artificial intelligence. I would argue that in order to do that, we need good data to start off with. But how do we do that? Because you can't just keep asking the producers, the clinicians, to feed the non-sustainable model that we have right now, that is, the registry technicians, the technologists, capturing the data and then transforming it from a document over to an electronic health information system. We're all time challenged. We're very short staffed. We're overloaded with information. And the expectations upon us simply continue to increase. It is not a sustainable model in any dimension.
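The machine learning and AI that the talk turns to next still runs on structured features, which is worth making concrete. A toy sketch, with coefficients invented purely for illustration and in no way a validated clinical risk model:

```python
# A toy illustration: whatever model sits on top, it consumes structured
# patient features. The coefficients below are made up for illustration
# and are NOT a validated clinical risk model.
import math

COEF = {"age": 0.04, "male": 0.3, "diseased_vessels": 0.5}
INTERCEPT = -4.0

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_risk(features):
    """P(adverse outcome) via plain logistic regression over features x1..xm."""
    z = INTERCEPT + sum(COEF[name] * features[name] for name in COEF)
    return logistic(z)

low_risk = predict_risk({"age": 50, "male": 0, "diseased_vessels": 0})
high_risk = predict_risk({"age": 80, "male": 1, "diseased_vessels": 3})
```

If the feature values going into `predict_risk` are wrong or inconsistently defined across sites, no amount of modeling sophistication on top recovers the loss, which is the talk's point about good data being the foundation.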
So before I get to the answer, let me just make one mention about some concepts of artificial intelligence, because what I find with AI is that the uninitiated, the novices if you will, say, oh, just put the information in there and it'll figure it out. Well, if you really dive down into what AI is all about, AI starts off with data features. That is, it describes the context of the patient, in this particular case, in terms of their features. So what is their age, for example, in this box? Are they male or female? What is their ethnicity? What is their history of heart disease? How many blood vessels are blocked in terms of heart disease, et cetera? And then you put all these together, and hopefully what you can do is predict Y, an outcome, based upon all of these X characteristics or features that are quantitative in some way or another. Well, what does AI add? AI builds upon this by taking each one of these observations, x1 through xm if you will, down here, applying logistic regression or other methodologies, and having those interact in ways that improve the ability of the standard approach, in this particular case logistic regression, via the combination of these latent processes, to end up with improved predictive capability. But the point that I'm raising here, and I'm obviously not that good at teaching AI because I'm not sure that I explained it to anybody here particularly well, the point is the bottom of this slide. The foundation of AI processes, again, is good data. You still need to know the age of the patient, whether or not they're male or female, their ethnicity, the number of coronary artery disease vessels, that's the heart disease, et cetera, in order for the machines to take that data and do something with it. So what is good data? Good data really is the foundation.
It is discrete, not text, not verbose, which is really what describes 85 to 90 percent of the way we document things in health care. It is highly semantic. That is, the clinical definitions mean the same thing, again, in Seattle, in Omaha, and in Durham. It's granular at the clinically meaningful level. So yes, if the presence of coronary artery disease is predictive of heart failure, that is a clinically meaningful context. But it's also highly defined, highly semantic, so that it isn't just whether or not you've got a blockage in your heart, it's whether you have an expression of heart disease in a way that's, again, clinically predictive. It is context specific. Is it yesterday or today? What's the time course? What's the context of the presentation? It's also interoperable, and I would argue that it needs to be interoperable at the native level. We cannot continue to facilitate boundary based interoperability if we ever want to scale this, so that, again, you can move data sets from Seattle to Omaha or from Durham to Omaha or Omaha to Durham and have them just come together and actually be analyzable. And then another concept about good data: it's not the 150,000 to 250,000 concepts that are out there. It's really the things that are of high value, and high value is really defined by having information that is useful from a management standpoint and an outcomes standpoint, quantitation of performance, prediction of outcomes, et cetera. It has to have high value in and of itself. So in health care, where do we see well-formed clinical data today? Well, the good news is that since electronic health records are built around anything that has to do with billing, the transactions of health care are actually pretty well-formed
clinical data. There is a specification for the billing form, where all health care enterprises have to use the same billing form to submit bills to the payers. So events, orders that are placed through computerized physician order entry, medication information, for example, prescriptions being a type of order, are actually pretty well defined in terms of transactions, and even things like ICD 10 diagnoses are pretty well formed as data. Now, the problem, by the way, with ICD 10 diagnoses, for those of you who don't spend a lot of time in health care, is that there's a lot of gaming of the ICD 10 system to help maximize the opportunity for revenue generation. So what you see in terms of a diagnosis of a patient is not necessarily the truth of what's going on with that patient. Here's a very simple example. I'm in the cath lab. In women of childbearing potential, we always get a pregnancy test to make sure that we're not subjecting an individual who's pregnant to X radiation during a cardiac catheterization. Now, the only way we can get reimbursed for that order is to list a diagnosis of pregnancy. The truth of the matter is that nobody who's undergoing a cardiac catheterization, at least as best we know, is pregnant, and yet all the women of childbearing potential have pregnancy as an ICD 10 diagnosis. So again, the nuances of some of this stuff you really have to discover and then correct for as you're trying to do analysis. There is automated data that comes out of laboratory machines in particular, but also devices, that we need to anticipate. But granular clinical data, what we really need, that 85 percent of the information that's there through the documentation of health care, not the transactions, the documentation, is largely missing in action. So what would I put into the category of not so well-formed data, and what's driving us towards this good data paradigm?
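As an aside, the pregnancy-test anecdote is a concrete case of the analysis-time correction it forces. This is a hypothetical sketch: ICD-10 Z33.1 ("pregnant state, incidental") stands in for the billing-driven pregnancy diagnosis, and both the rows and the artifact rule are invented for illustration:

```python
# Hypothetical sketch of correcting for billing-driven diagnoses before
# analysis. Z33.1 ("pregnant state, incidental") stands in for the code
# attached solely to make the screening pregnancy test reimbursable;
# the rows and the artifact rule are made up for illustration.

encounters = [
    {"mrn": "C01", "dx": "I21.9", "context": "cath_lab"},
    {"mrn": "C01", "dx": "Z33.1", "context": "cath_lab"},    # billing artifact
    {"mrn": "C02", "dx": "Z33.1", "context": "obstetrics"},  # clinically real
]

# Site-specific knowledge: this (code, context) pair reflects revenue-cycle
# mechanics, not clinical truth.
BILLING_ARTIFACTS = {("Z33.1", "cath_lab")}

def clinically_true(rows):
    """Drop diagnosis rows known to be billing artifacts in their context."""
    return [r for r in rows if (r["dx"], r["context"]) not in BILLING_ARTIFACTS]

truthy = clinically_true(encounters)
```

The catch is that rules like `BILLING_ARTIFACTS` have to be discovered site by site, which is exactly the "nuances you have to discover and then correct for" problem.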
First of all, the tradition of medicine is that we document things via text. The clinical information that's in that text includes medical history, risk factors, social history; you can read all the things there. And I would suggest to you, those are the pieces of information that get digested in a physician's head to come up with that kind of back of the envelope prediction of outcome. Yes, we take a look at patients and say, wow, your frailty is very high, you're not going to survive an operation that would otherwise be life saving, and so we'll relegate that patient to medical therapy. Was there anything that we could engender from a computerized output to say this patient is frail? The answer is no. This is, if you will, a summative or qualitative assessment that is part and parcel of being a physician. Another one is patient reported information. How do we get patients to actually participate in the capture and reporting of their state of being, et cetera? And so there are lots of examples of not so well-formed data that we still need to resolve. So if you've fallen asleep, I would ask you to wake up, because this is really the sum of everything I've just said, and that is that data quality, high quality good data, is really going to be the essence of success. And to be very frank with you, it's kind of the differentiator, one of the things that we are focusing on in the Center for Intelligent Health Care that is not getting a lot of attention elsewhere, mainly because it's not all that sexy. This is blocking and tackling in the trenches. This is not, if you will, bright shiny objects. All right, so what's the solution? The solution is multi-tiered, and that's really what I want to spend the last five to 10 minutes chatting with you about, to give you a sense of where we're trying to go with this concept of the Core for Good Data.
So the first problem that we wanted to think about is how do you actually make sure that you end up with good data? And this has led to something which I call the four tenets of data capture. That is, you need to capture the high quality good data once and use it across the entirety of all potential purposes, as I've mentioned before. You need to collect the data at the point of care, not later on, not through swivel chair interoperability, and you want to do it using a team based, high usability system. To some, by the way, those are fighting words: you want the nurse to document for the doctor, you want the doctor to not be original in terms of their content, et cetera. So there's a lot of politics associated with that. We can talk about that if you'd like in the Q&A. But suffice it to say, this is a bit of a paradigm shift for the way medicine is practiced. We should also use the computer to abstract and compile views of the data. Stop using the electronic health record as a sophisticated filing cabinet, and instead have the computer that drives that electronic filing cabinet actually collate all that information and put it there in front of you. And yes, we need to do this in a way that reduces our clinicians' cognitive burden. So the data capture component is a huge one, and it's built on a paradigm that is not as simple as let's just go after data as high quality data. No, we will have to figure out how to re-engineer the whole system. The next component has to do with the data elements themselves, the clinical concepts. I'm not talking about capturing all of the things that we represent through clinical concepts and statements, but really, again, these high value things. Again, a multi-stakeholder group of interests has to be involved. Professional societies, as I mentioned, own the lexicons of medicine.
There is a role for academia to help understand how to coalesce around concepts so that they are consistent in meaning from one location to the next. Other agencies like FDA also have skin in the game. And then how do you convert those clinical concepts into data elements? And how do you build those data elements into database systems that get us at least closer to this concept of native data interoperability? The next tenet is that we really do need to understand why we want to get to native data interoperability. This is another way of representing that concept of capturing the data once and using it for all purposes. That is, if we do native data capture right, we will naturally accomplish native data interoperability. And then the last dimension that I do need to mention is that we need to figure out how to do this while reducing clinician burden, not increasing it: not by asking the doctors and the nurses to do more, but by asking them to do less. And in fact, building on that concept, there are four different dimensions that we are trying to solve for simultaneously. One is to improve the amount of data captured, including the quality of that data; so how do we actually transform our documentation processes to capture more, in both quantity and quality? How do we reduce the amount of time that it takes to be able to use that data, from hours to really right there for clinical decision support? How do we reduce the need for swivel chair, traditional chart abstraction, and move towards concurrent data acquisition, so that the individuals who are handling the data at the point of care are actually the ones who are responsible for capturing it as data? And how do we do all this while at the same time reducing the total cost of ownership of these processes? That leads us to this concept of what's called structured reporting.
Again, the concepts here, I've already mentioned them, but it's specific data captured by the individual who is actually handling that data, integrated into their workflows, as opposed to one-offs like swivel chair interoperability. It does require informatics formalism, specifically the identification of high value targets, clinical concepts, and representative data elements. It does require engineering, on the part of computer science, in terms of understanding how to leverage computers to do what they really should be doing. So, for example, if I'm in the cath lab generating a procedure report, how do I actually have the computer pull various pieces of information, and it's literally a thousand plus pieces of information from across the span of all the documentation that's been done, and put it into a form that then becomes a human readable structured report? It does result in a return on investment by improving data quality, reducing redundancy of effort, reducing the time to final report, reducing FTE requirements, et cetera. Graphically or pictorially, this is kind of what it looks like. We have gone from an electronic filing cabinet, documents based approach, which would be just all documents lined up here, to using forms, forms that are targeted for that individual and their individual role at the point of care, in ways that are then consumed into a computer creating structured documentation. And that structured documentation process serves several purposes, including the responsibility of the clinician to make sure that the data is high quality to begin with, and then the use of that data through the storage, analysis, and submission of data to various use cases down the road. What does structured reporting fix? It actually minimizes abstraction. We can reduce the FTE count
That is, the chart abstractor FTE count drops by around two thirds. You still need people to manage the process and to do a little bit of chart abstraction, but suffice it to say, we minimize it to a point where it actually becomes economically sustainable. It does result in a single source of truth, with the mantra of trust but verify the data being something that permeates the processes of this data capture. It results in reusable data: capture the data once, use it many times. It forces us to do engineering of the forms based approach, to make sure that we're literally prompting our individuals to think about all of the key pieces of data, and to also document when things are absent. The tradition of medicine, by the way, is what's called charting by exception, which really means you only talk about the positive findings. We want somebody at least to think about the negative things, and to make sure that there is at least an explicit prompting for that. Not necessarily an answer, but at least the questions always get asked. And you can see some other things that the structured reporting environment fixes. Now, how do we do this? Last couple of slides. There are tiers of data that we need to understand in terms of the mechanisms for accomplishing the capture of that information, resulting in our actually having multiple different systems that accomplish the overall goal of data capture as data. So yes, we are using Epic where it is best positioned, because we can query it for those pieces of data. So transactions, laboratory data, medication information, we pull that out of Epic wherever possible. We also have a cardiovascular information system termed LUMEDX. LUMEDX has the advantage of being something that we can program ourselves, where human beings can develop the front end interfaces, and we can do this in hours to days.
That's as opposed to the weeks it would otherwise take to make transformations in terms of what data we want to acquire. And then for very small projects, meaning anything less than 500 to a thousand patients, we also do some work inside the REDCap environment. The data gets stored in a data warehouse type of configuration; depending upon who's got the data, who owns it, where it can go, et cetera, we end up storing it in various data warehouses. And then, for analytics purposes, we've created the data out framework, especially this middle concept here, the Duke data lake, which allows us to take data from all of these data warehouses and configure it so that we can create these portals, such as self-service reports, automated visualizations, et cetera, for pushing the data out in ways that have proven to be quite enabling of the Duke environment. So what are we trying to do then? You might say, or I'll explicitly say, why am I so excited about Nebraska? The Center for Intelligent Health Care is where I see us actually able to take the Duke lessons and operationalize them, commercialize them, take the things that we've learned and actually make them into a true good data stack, the good data pipeline that you see here. In terms of what we need to try to accomplish through the Core for Good Data, we need to focus on how you convert clinical terminology into interoperable CDEs, common data elements. Part of our structure is to think about the modeling of the business practice, what's called BPMN, the design formalization of the workflow and where data goes into that workflow, so that we can then commercialize this, or at least scale it, so that other enterprises can use it. The other components of this are all part of the playbook that we're talking about designing so that others can learn from this. And I want to give Dr.
John Windle credit, because he's actually trying to figure out how to enable the work that has started at Duke but really, I think, needs to be part of the way that we define the transformation of health care. So with that, I've gone a couple, two or three minutes over; I apologize for that. But I do want to thank you, and I'm looking forward to a deeper and deeper collaboration with the center, standing up at least the Core for Good Data as a paradigm for beginning this transformation of health care in a way that, I hope I've articulated, will be enabling for all the things that we can imagine we can do within the health care space to improve processes, improve systems, and ultimately improve patient outcomes. So thank you very much for your attention. This is now open for questions, as I understand. SPEAKER 1 Jimi, this is Melissa. Do you want to address the comment that nursing care data is not so well-formed, and how these models with the good data that you presented would help with nursing, especially since the data that they collect is so different? SPEAKER 0 Yeah, so if you think changing physician practice is hard, changing nursing practice is at least an order of magnitude more difficult. So I would respond to the question by saying that the approach that we have taken at Duke really did transform nursing practice. It transformed our technician expectations, and it has transformed physician practice. None of these groups went down without a fight. They all said, no, this is the way we've been doing it for 50 years, so therefore it's got to be right. And the answer is, well, this is why we model things. This is why we create a different set of expectations. We're actually, if you will, changing the rules as we go along, in ways that still respect each one of the disciplines.
We literally had our nursing colleagues involved, asking, now what can we do, and what do the rules say, and what would you be comfortable with, et cetera, as we've gone through these transformations. So it is a long discussion. I didn't mean to make it sound like it's all an informatics exercise, because probably at least half of the work has to do with governance and politics and at least the perceived rules of the road. And so one of the things that we do have to address are deeply seated and oftentimes well-founded approaches to doing things, where we basically have to ask the question, would you consider doing it a different way, and then get the requisite parties involved in ways that arrive at an alternative universe that is transformational. I think one of the challenges for nursing documentation is that it's very structured and very repetitive with very little real information, because the Joint Commission drives a lot of the data that's required. I once did a malpractice suit, and it's always interesting to see how many boxes of paper get generated from a hospital visit; six of the nine boxes were nursing notes. We do have a nursing expert on the line. So, Melissa, what are your thoughts? How do you make it more effective? SPEAKER 1 Well, I think it helps to know what data is required just to fill billing needs as well as registry needs. Did you see an improvement in the nursing documentation when you utilized LUMEDX at Duke? Because you have Epic there, correct? SPEAKER 0 Yes, we have Epic, we have LUMEDX, and we have to figure out where we document in LUMEDX and where we document in Epic. So the answer to your question is absolutely. What we found is that the nursing notes became much more valuable, because they were actually descriptive of the things that we really want to ask about. We structured it in ways that still respected what the Joint Commission is asking us to do.
But I would say that the quality, as well as the value, of the nursing notes actually increased. That's what the nurses saw. They said, oh, we're actually contributing in ways that are positive, not just creating six out of the nine boxes of, if you will, documentation. There's another question in the chat, noting that Duke uses Epic and wondering which vendors would be most likely to engage. Let me just say that I think this would be the topic of yet another talk, to describe work that we're already engaging in in the good data core, and that is that we want to make sure we're engaging the industry as much as we can. So we've embarked on a project codenamed mCard right now. Obviously, the "card" stands for cardiology, but it is an endeavor that mimics work done by mCODE in the oncology space to help define a standardized lexicon, a minimum core dataset that everybody could agree to across health care. What the mCODE folks have designed and developed is basically just about one hundred terms, a little less than 100 terms, but it's critical information for all of oncology, regardless of what oncologic disorder you have. And the magic is that they've involved HL7, which is one of the standards development organizations, and they've integrated Epic and several other electronic health record vendors in terms of helping lead the work, along with, obviously, the professional societies, et cetera. So that's the model that we want to follow, and that is, we're not going to try to define 10,000 terms or, heaven forbid, 150,000 to 250,000 terms. What we're going to do is focus on the high value targets. Several of us on this call, including John Bruce, I think Joe, you were involved, as well as Jeff Anderson, have worked on various versions of an electronic health record core vocabulary for cardiology.
This is work that's a decade old now, but we're talking about actually advancing it in ways that then get brought forward by the Epics, the Cerners, and the other electronic health record vendors out there, getting us closer and closer to this concept of capture the information as data once, and use it for really all purposes that you can imagine. SPEAKER 2 Jimmy, this is Joe. I wonder if you could expand a little bit further on what I would call the vendor challenge. You've just gone through the big vendors, but we've got clinical data hiding in a lot of places as well. You mentioned Lumedx; we use a different platform at Mercy and are moving to yet a third. Clinical data is hiding in a lot of different places, so even interoperability within an individual health system is a challenge, just getting disparate systems to talk to each other. Can you talk about that a little bit? SPEAKER 0 Yeah, so there are a couple of different concepts there. First of all, the work that we're trying to get going with the Center for Intelligent Health Care really is about organizing both the business processes and the data concepts themselves, by modeling them in ways that can get consumed by any HIT vendor. Yes, you're right, the HIT vendor community is ultimately the one that has to bring this forward as an operational product. But we're trying to make it so that it doesn't matter whether you're Merge or Lumedx, or Epic or Cerner: if you follow the paradigm, if you follow the modeling, you will have a best-practice approach both to capturing data integrated into workflow and to the database structures, so that the data, as it's captured, is natively interoperable. The aim is to reduce, or hopefully one of these days eliminate, ETL entirely, and to move data scientists away from the lament of
"I spend 90 percent of my time cleaning data; I'm basically a data janitor," to one where they can actually spend their time doing analytics and beyond. So it's a long journey. To be honest with you, I don't know how long it's going to take, but judging by the fact that we've been on this journey now for 20 years and have only made just this much progress, I'm suspecting it's going to be a bit of a journey. But with things like HL7 and their accelerators, the CodeX project from MITRE, and the CARD project that we're going to bring forward through the Center for Intelligent Health Care, I think all of those are now starting to coalesce around these concepts of minimum core data sets. Let's try to figure out what it is that we need to store as data, and then make it as natively interoperable as possible. Yes, there are all these places within any organization, but imagine if your Merge folks and your Epic folks had exactly the same data structures for the concepts that they needed to share. It would be plug and play: you could move the data back and forth and you'd be done, as opposed to all the gnashing of teeth and the high-minded, or I should say high-intelligence, work that has to occur in order to make sure that there's consistency and the ability to move data around. So, sorry, I can't be much more specific than that other than to try to create pictures, but that's what I think we need to drive towards. SPEAKER 3 Jimmy, as we've talked before, I fully agree with you on what you said, except you can look at it as a glass half full or a glass half empty. I tend to say it's half full, and maybe getting fuller, as we've done some of this work. So there has been some progress. We've got a long way to go, I fully agree.
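The "plug and play" contrast described above can be sketched in a few lines of Python. This is a toy illustration with invented field names; it does not reflect any vendor's actual schema. When two systems natively store a concept in the same structure, exchange is a pass-through; when they don't, every concept needs a hand-written ETL mapping:

```python
# Toy illustration: shared native structure vs. per-vendor ETL.
# Field names are invented; no real vendor schema is implied.

shared_record = {"patient_id": "p-001", "lvef_percent": 35.0}

def exchange_shared(record: dict) -> dict:
    """Both systems agree on the structure: exchange is a simple copy."""
    return dict(record)

def etl_vendor_a_to_b(record_a: dict) -> dict:
    """Without agreement, each concept needs its own mapping and cleanup."""
    return {
        "PatientID": record_a["pt_id"],
        "LVEF": float(record_a["ef"].rstrip("%")),  # "35%" -> 35.0
    }

print(exchange_shared(shared_record))
print(etl_vendor_a_to_b({"pt_id": "p-001", "ef": "35%"}))
```

The first function is all that is needed under a shared model; the second has to be written, tested, and maintained for every pair of systems and every concept, which is the data-janitor work the speaker is describing.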
I'd be interested in your thoughts on the behavioral economics approach to this, which is really about motivating people through the pocketbook, or with whatever value they care about; that seems to make some sense. We've made some progress with that in our education, our Epic training, and some other things. So: cultural anthropology, or behavioral economics, which seem to be the buzzwords for how to motivate people to do the right thing and work smarter rather than harder? SPEAKER 0 Yeah, I agree with that completely. The work of, for example, Richard Thaler, who's a Nobel laureate in economics but really writes about psychology, is some of the leading work there, and I can say that it actually permeates the work that we've done. That is, how do you change behavior in ways where people identify that there's something in it for them, while at the same time providing fairly narrow guardrails within which they can make those decisions? It reminds me a little bit of feeding a two-year-old: you don't ask them, what do you want to eat? You say, do you want your peas or your carrots, because you want to get vegetables into them. So it's along those lines. For everybody, or anybody, who has not read Richard Thaler and Cass Sunstein's work, I would encourage you to look up those authors; theirs are great studies of behavioral economics, of how you get people to change in ways that end up moving in the right direction. But I do want to return to your metaphor about the glass half full versus the glass half empty. My answer to that is: you don't have the right size glass. So I would ask us all to think about re-engineering the glass to fix the problems.
I think there's another question in the chat. All right, yeah, so here's a question from Quinn: what are the biggest obstacles to adoption of good data pipelines, human-computer interaction or vendor adoption? Politics, governance, data standards, data standards organizations: those aren't in the question, but the answer is yes, they all have to be addressed. You have to deal with the professional societies, the owners of the lexicons; you've got to get them to agree. You have to deal with the leadership of health care enterprises, which really have their own set of priorities, and the question is, what's in it for them? You have to deal with the clinicians, and that includes not just doctors but everybody delivering clinical care. And the HIT community: they literally are, ultimately, the enablers of this. So the answer is yes, you have to deal with all of these. That's the pipeline that I talk about: all the groups that we need to interact with are represented in that pipeline, and everybody has skin in the game. So you have to figure out how to make it work for everybody. This is a complex problem. This is not something where we're going to build a widget, sell 20 million of them, and make a profit. This is something so complex that it's going to require work on all levels in order to transform health care. Thank you all; we're at our six o'clock stopping point. I would encourage folks on this conference call: if you have talks that you'd like to give that you think would be appropriate for this group, or if you know of people that you think would give good talks, please let us know and we'll get that to our education committee. I look forward to seeing you all in about four weeks. We did a survey, and it seems like this schedule works well for folks, so that's when we will meet again. All right, thanks, everyone.