SPEAKER 0 Now that the recording has started: thank you for attending this grand rounds. The Center for Intelligent Health Care is proud to have Eve as our speaker today. Eve is a Ph.D. candidate who will be finishing his dissertation in the next month and has been working with me for the last several years. He actually started working with me when he was a graduate student at UNO on a grant, and then elected to stay on and work with me as a biomedical informatics graduate student. He came here as a Fulbright scholar from the Congo, had a great experience, is very interested in artificial intelligence, and hopefully, after his Ph.D., will stay on with me as a postdoc. The center, I will tell you, is in the midst of fundraising; we are 40 percent of the way towards our goal of raising $5 million to help support the center. It is a little tough with Dr. Gold trying to raise far larger sums of money for other projects, but we're gaining traction and there's a lot of interest in this area. The other thing is we are looking at, and Melissa, make sure I have the dates right, October 18th and 19th of 2022 for a national meeting, really concentrating on a lot of the center's core strengths. We tentatively have Dr. Rob Califf, who has been nominated to head the FDA, as our lead speaker. That's not been confirmed, because he's not been confirmed. But I think we have some opportunities to bring in some national and international leaders to help all four campuses focus on what we want to do in this area. So with that as an introduction, Eve, I will turn the table over to you.

SPEAKER 1 Oh, thank you very much. I've shared the screen, so hopefully you can see it.

UNKNOWN We can. OK, fine.

SPEAKER 1 The title of the talk is "Building a Framework for Hypertension Diagnosis." I'll start with a brief review of some of the gaps in hypertension diagnosis and management. Hypertension is the world's leading risk factor for cardiovascular disease. Forty-seven percent, or close to one in two Americans 18 and older, are affected by the disease, and it is implicated in approximately a thousand deaths per day based on recent statistics from the Centers for Disease Control and Prevention. One in three patients among those with hypertension is unaware, or does not know, that they are hypertensive. Moreover, three in four patients have uncontrolled hypertension, meaning that they are not treated to target.

The aim of this project is to build a framework for feature engineering and data curation to help develop AI models for hypertension diagnosis, then to develop an AI model to predict hypertension, and finally to discuss how such models can act as problem-knowledge couplers to assist with curation of the patient's problem list.

This presentation comprises five parts. The first part covers key concepts regarding blood pressure, including the evolution of blood pressure measurement, the role of the time-series plot in capturing blood pressure variations, and the current clinical standard for accurate measurement of blood pressure. The second part is about the importance of clinical practice guidelines, including the flaws in the current guidelines and in the medical vocabularies for recording hypertension, the importance of good data for AI algorithms, and how a guideline-inspired framework can lead to better tools for clinical care.
The third part is about how we obtained the raw data and the initial cleaning steps we conducted, with an illustration of an example input instance to the AI model. The fourth part is about upstream machine learning for one of the features needed in the final model, and it also discusses the engineering of features that are absent in the raw data but proven to be relevant predictors of hypertension based on the literature and domain expertise. Lastly, the fifth part is on the dynamic modeling itself to diagnose hypertension, where I start by establishing a baseline accuracy using a recurrent neural network and then construct a transparent probabilistic graphical model using a dynamic Bayesian network.

The invention of the microscope was a scientific breakthrough that led, among other things, to the discovery of blood cells, and it spurred the study of flowing blood and solid structures, or hemodynamics, which includes arterial pressure, cardiac output, and so forth. A notable discovery was the measurement of blood pressure in the 18th century by Stephen Hales, depicted on this slide. He conducted several dissections and experiments on living horses to understand the function of the ventricles and blood pressure, and he eventually called what he measured blood pressure, a term well received by physiologists at the time. Hales's discovery inspired Poiseuille in France to create the mercury manometer, the first blood pressure measuring device, although it was hardly clinically usable. Carl Ludwig improved on the method, conducting several trials to record the beat-to-beat fluctuations of blood pressure; his device is called the kymograph, and it is depicted on this slide. He added a pen that traced the variations in blood pressure, and perhaps his most significant contribution was the real-time, time-series recording of blood pressure in 1848. It is the first documented use of a time-series plot in medicine, although such plots had appeared earlier in the business world, in the work of William Playfair. Since then, the time-series plot has arguably been the most obvious and most comprehensive way to record blood pressure.

The gold standard of blood pressure measurement is arterial cannulation using intra-arterial catheters. This approach is similar in principle to the one introduced by Hales and Ludwig, although the initial tools used in the mid-19th century are a far cry from current tools. Intra-arterial blood pressure gives accurate readings, but it is invasive and neither effective nor efficient in a routine clinic visit. On the right side you see sphygmomanometers, which have been significantly improved since their introduction; the ones depicted are easier to read than the original devices because they are equipped with a dial or an electronic screen instead of a mercury column. The cuff occludes the brachial artery, creating a counter-pressure that is progressively released, allowing the systolic and then the diastolic blood pressure to be recorded. The cuff can, of course, be combined with auscultation using a stethoscope to get a more accurate diastolic reading. Still, the accuracy of blood pressure cuffs is subject to several confounders and biases, as I shall explain later.
Unlike the beat-to-beat variation that the gold standard method displays, blood pressure readings measured in clinic with traditional cuffs allow only a coarse, encounter-by-encounter time-series representation, as depicted here. The data depicted are in fact from a single patient in this project.

Looking at the history of blood pressure, it transpires that science allowed us to study and measure blood pressure long before we understood its clinical significance. Physicians initially palpated patients to feel differences in pulse hardness, and Mahomed described the physical and physiological effects of essential hypertension. The qualifier "essential" stuck because it was deemed normal, even necessary, that blood pressure should increase as we age to compensate for the aging heart muscle and maintain tissue oxygenation, which was obviously a misconception. Indeed, at 63, President Franklin Roosevelt's blood pressure of 220 over 150 millimeters of mercury was considered essentially normal, but he died days later from a hemorrhagic stroke caused by the hardening of the arteries. That led his former vice president and then President Harry Truman to sign the National Heart Act, leading to the foundation of the National Heart, Lung, and Blood Institute. The latter appointed the Joint National Committee, or JNC, to work on the first hypertension guidelines, which were subsequently published in 1977. At the time, earlier evidence from the Veterans Administration cooperative trial indicated that diastolic, not necessarily systolic, blood pressure was the main risk factor for cardiovascular disease. The most comprehensive guidelines were published in 2003 and are referred to as JNC 7. The National Heart, Lung, and Blood Institute later conferred responsibility to the American Heart Association and the American College of Cardiology, or AHA/ACC, which published guidelines improving on JNC 7 in 2017. Both JNC 7 and the 2017 guideline benefited from the Framingham Heart Study and the SPRINT trial, which showed that systolic blood pressure plays a significant role in inducing cardiovascular disease.

Thus the 2017 guidelines are the ones recommended for clinical implementation in contemporary practice, and they serve to guide this project. As shown in the table above, a blood pressure of less than 120 over 80 is considered normal, whereas blood pressure is elevated when the systolic is between 120 and 129 and the diastolic is below 80. A patient has stage 1 hypertension when the systolic is between 130 and 139 or the diastolic is between 80 and 89. In contrast, they have stage 2 hypertension when the systolic is greater than or equal to 140 or the diastolic is greater than or equal to 90. An important modifier is that blood pressure must have been measured several times for the ascribed stage to be accurate; the guidelines say it should be measured at least twice, on at least two different encounters. However, there are inherent flaws in the guidelines. Looking again at a patient's time series of blood pressure, it is not apparent how this patient should be labeled: the blood pressures cross the cutoff values throughout the 57 encounters they have had. Similarly, the hypertension codes from the International Classification of Diseases, with their static cutoffs, do not reflect this fluctuation in the diastolic blood pressure, and the same applies, of course, for the systolic blood pressure. There are also many clinical vocabularies for recording disease problems and disease identification.
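To make those cutoffs concrete before moving on to the vocabularies, here is a minimal sketch of the 2017 staging rules as code. The function name and the single-reading signature are illustrative simplifications, since the guideline applies the cutoffs to readings taken at least twice, on at least two encounters.

```python
def bp_stage(systolic: float, diastolic: float) -> str:
    """Classify one blood pressure reading against the 2017 ACC/AHA cutoffs.

    Note: the guideline applies these cutoffs to readings from at least two
    measurements on at least two encounters, not to a single isolated value.
    """
    if systolic >= 140 or diastolic >= 90:
        return "stage 2 hypertension"
    if systolic >= 130 or diastolic >= 80:
        return "stage 1 hypertension"
    if systolic >= 120:  # diastolic is below 80 here, otherwise caught above
        return "elevated"
    return "normal"


print(bp_stage(135, 85))  # "stage 1 hypertension", as in the example patient shown later
```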
Two of the most relevant are the Systematized Nomenclature of Medicine, Clinical Terms, also known as SNOMED CT, and the 10th revision of the International Classification of Diseases, or ICD-10. Each of them has a unique code to identify essential hypertension. ICD-10, however, is the most used, as it contains the billable codes that document a diagnosis for reimbursement purposes. For this project, though, good data mattered more than the codes: the most recent evidence-based guidelines, specifically the 2017 guidelines, were instrumental in our feature selection and engineering, informing what constitutes good data. Good data is a concept that prominent AI researchers are increasingly promoting. With the rise of big data, the conventional wisdom has been to simply add more data when training AI models and then tinker with the model's code until it performs well; the problem is that low-quality data yields models with suboptimal performance.

The data we used consist of outpatient blood pressure records and other relevant features from the University of Nebraska Medical Center Heart and Vascular Clinic. They encompass outpatient encounters from 2011 to 2021, and the data come from the electronic health record, in this case Epic. The data go through extraction, transformation, and loading processes into a clinical research analytics environment, which is part of the Patient-Centered Outcomes Research Network, or PCORnet. Approximately five million blood pressure records from five hundred thousand patients were extracted from that environment, along with demographic information. The dataset was then loaded onto the Center for Intelligent Health Care's supercomputer, and we retained only the most relevant attributes based on the guidelines and the published literature. These attributes include the patient identification number, the encounter date and time, the number of times the measurement was taken, the patient's date of birth (used to compute the age at the encounter), the systolic and diastolic readings, and the medication status. As I'll explain shortly, a couple of attributes were derived through machine learning and feature engineering, so they were not part of the raw data.

The data preprocessing was conducted in seven steps. The data came in two different sheets, one with the blood pressure records and another with the demographic records. First, we removed records with no systolic blood pressure values. Second, we removed what we called erroneous systolic blood pressure values: systolic measurements above 300 millimeters of mercury or below 40 were deemed impossible based on expert input. Third, we looked for erroneous diastolic blood pressure values; the observation here was that no erroneous diastolic values remained, because they were associated with the systolic values already removed in step two. Fourth, we removed patients 17 and younger from the demographics table. Fifth, we also removed patients 86 and older, because there is little published evidence from randomized clinical trials targeting this age group. Sixth, we merged the two data sources, which led to discarding blood pressure records with no matching demographic information, based on the patient identifier; the demographic extract contained fewer patients than the blood pressure extract.
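As a rough sketch, the first six filtering steps might look like this in pandas. File names, column names, and the merge key are placeholders rather than the project's actual schema, and the comorbidity filter described next would simply be one more mask of the same kind.

```python
import pandas as pd

# Two extracts, as described: blood pressure records and demographics (placeholder files).
bp = pd.read_csv("blood_pressure_records.csv")
demo = pd.read_csv("demographics.csv")

# Step 1: drop records with no systolic value.
bp = bp.dropna(subset=["systolic"])

# Step 2: drop physiologically impossible systolic readings (expert-chosen bounds).
bp = bp[(bp["systolic"] >= 40) & (bp["systolic"] <= 300)]

# Step 3: erroneous diastolic readings co-occurred with the systolic outliers
# already removed, so no additional rows need to be dropped here.

# Steps 4-5: keep adults aged 18 through 85.
demo = demo[(demo["age"] >= 18) & (demo["age"] <= 85)]

# Step 6: an inner merge on the patient identifier discards blood pressure
# records that have no matching demographic row.
data = bp.merge(demo, on="patient_id", how="inner")
```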
Perhaps some patients had no demographic documentation at the time the extract was done, or something else happened; in any case, we simply observed more patients in the blood pressure records than in the demographics, and that led to discarding the blood pressure records whose demographic information was missing. Seventh, we removed patients with chronic kidney disease and those who have had a kidney transplant. They would almost certainly have hypertension, but this study excluded patients with comorbid conditions: hypertension with a comorbid condition carries a different ICD-10 code, and we are looking only at essential hypertension. The final dataset for the machine learning to predict hypertension includes approximately four million records from 400,000 patients.

This slide depicts an example of an input instance for the final model. The neural network shown is for instructional purposes; the one I used for this project has a hidden layer with many more neurons, and it would not fit on the slide. Still, what is interesting here is to convey what the data behind the picture look like. The patient identification serves only to identify the instance; the same ID is assigned to the output, allowing us to link the output to the corresponding input during training. The measurement date and time is also supplied, and the readings are sorted in ascending chronological order, because a dynamic model receives input representing the trend from earlier points in time. The rest of the fields shown constitute the input instance. In this case, the instance is that of a thirty-five-year-old with a blood pressure measurement of 135 over 85 millimeters of mercury. Of importance, the blood pressure stage here is stage 1; but unlike some probabilistic graphical models such as Bayesian networks, neural networks expect numeric input, so the stages are transformed into numbers: zero for normal blood pressure, one for elevated, two for stage 1, and so on. That is why you see a two here, representing stage 1 hypertension. Likewise, a Boolean input of true or false is encoded as one or zero; in this instance, the patient is not on hypertension medication, which is represented by a zero. The blue neurons belong to the two hidden layers in this illustration. Lastly, the output probability of 0.775 indicates a 77.5 percent chance that this patient is hypertensive.

Blood pressure cuffs allow physicians and advanced practice providers to measure blood pressure, but the corresponding guideline stage is not recorded alongside the measurement, in part because the evidence and the guidelines keep changing. When the measurement is recorded in the electronic health record, the stage is simply not captured alongside it. That is why machine learning was needed to infer the blood pressure stage feature. It allows automatic labeling: if the clinical guidelines change down the road, instead of costly manual labeling of the dataset, we can just rerun the algorithm and it labels all of the records based on the clinical guideline stages. To that end, we employed a discrete Bayesian network. One advantage of a Bayesian network is that, with domain expertise, it is possible to construct the structure of the model by hand, and that structure is represented by a directed acyclic graph.
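To tie the input-instance example above to code, here is a minimal sketch of that encoding. The field names and the flat feature list are illustrative only; the real model also sees the measurement counts and the engineered features described later.

```python
STAGE_CODE = {"normal": 0, "elevated": 1, "stage 1 hypertension": 2, "stage 2 hypertension": 3}


def encode_instance(age, systolic, diastolic, on_bp_medication, stage):
    """Turn one encounter into the numeric vector a neural network expects."""
    return [
        float(age),
        float(systolic),
        float(diastolic),
        float(STAGE_CODE[stage]),           # categorical stage -> integer code
        1.0 if on_bp_medication else 0.0,   # Boolean -> 0/1
    ]


# The example patient: 35 years old, 135/85 mmHg, stage 1, not on medication.
x = encode_instance(35, 135, 85, False, "stage 1 hypertension")
# The model's output for this instance was 0.775, i.e. a 77.5 percent
# chance that the patient is hypertensive.
```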
For this stage inference, we only used a subset of the features already present. All of the features will be needed for the final dynamic modeling, but to infer the blood pressure stage feature we need only a subset of them. They include the age of the patient, which influences the systolic blood pressure; the number of times the measurement was conducted during an encounter; and the number of encounters. The number of times and the number of encounters influence the systolic and diastolic blood pressure, in the sense that the accuracy of the measurement depends on how many times it was recorded and on how many different occasions. And finally, the systolic and diastolic blood pressure determine the stage the patient would be in. Also of importance, we are looking at a little more than the guidelines do: the guidelines just look at the cutoff values, but including these features makes the label a bit more accurate.

To confirm the results of this stage labeling, we also employed a decision tree. The decision tree classifier is well suited to this machine learning task because it provides high efficiency in space and time. Research has shown that, given a reasonable number of labels, a decision tree classifier reaches highly accurate results in less time compared to alternative classifiers. This efficiency is mainly attributable to the decision tree's approach of splitting the classification problem across branches, as can be seen in this depiction. If I zoom in on the root node, you see that the decision is based on a cutoff of the systolic values, splitting into two nodes: on the left side, it goes on classifying the records into one set of classes, and on the right side it goes on classifying the records into the others. The Gini score on a node is a measure of the purity of the node; once all samples within a node belong to one class, the Gini score drops to zero. If the Gini score is not zero, it means the samples in the node have not yet reached an optimal classification. You can see the root node has a Gini of 0.71, and on the left side the score drops faster than on the right side, so the left side of the tree reaches an optimal classification sooner. If I zoom out, we can confirm this on the left side: the Gini score reaches zero and there is no more branching at those nodes. When there is no more branching, the node becomes a leaf, and all the samples within that node are assigned the class carried by the node.

And this is the result we obtained. It was a simple problem, with just two features, so it was reasonable to obtain perfect accuracy. What is more important is to compare the ICD-10 classification, that is, the documented hypertension diagnoses, against the machine learning classification. For this comparison to be accurate and fair, we decided the dataset should be further truncated to 2017 onward, because we are assuming that the adoption of the 2017 guidelines did not start before their publication.
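To ground the decision-tree piece, here is a minimal scikit-learn sketch of the same idea on synthetic readings; because the stage label is a deterministic function of the two features, near-perfect accuracy is expected, which mirrors the result just described. The data are invented, not the project's extract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for the cleaned records; the project trained on the EHR extract.
rng = np.random.default_rng(0)
sbp = rng.uniform(90, 200, 5000)
dbp = rng.uniform(50, 120, 5000)
X = np.column_stack([sbp, dbp])

# Guideline stage as the target: 0 normal, 1 elevated, 2 stage 1, 3 stage 2.
y = np.select(
    [(sbp >= 140) | (dbp >= 90), (sbp >= 130) | (dbp >= 80), sbp >= 120],
    [3, 2, 1],
    default=0,
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(criterion="gini").fit(X_tr, y_tr)  # Gini impurity, as in the talk

print(clf.score(X_te, y_te))  # essentially 1.0: the label is a simple function of the two features
print(export_text(clf, feature_names=["systolic", "diastolic"])[:400])  # the splits mirror the cutoffs
```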
Looking at the heat map, it is visible that ICD-10 documentation misses many hypertensive patients: we have a huge number of records classified as stage 2 hypertension that do not have a documented diagnosis. Likewise, the algorithm found more stage 1 hypertension compared to the ICD-10 documentation alone. To confirm the statistical significance of this observation, we ran formal hypothesis testing. The null hypothesis is that the probability that a patient falls in a given column is the same regardless of the row; in other words, the probability of being in column j given that the patient is in row i is the same for every row. That would mean, for instance, that the probability of stage 1 is the same regardless of the ICD-10 documentation status, indicating that the observation is simply arbitrary and not significant. In more human terms, the null hypothesis is that the population distribution of machine-learned hypertension stages is identical across the ICD-10 groups. With a p-value of less than 0.0001 at a significance level of 0.01, we can reject the null hypothesis by the numbers: this observation is not due to random chance alone, which means that the probability of a patient being in a given hypertension stage does depend on the ICD-10 documentation group. So the difference we noticed between the ICD-10 documentation and the machine-learned stages does exist at the population level.

There are other confounding factors that have the potential to influence the label, and the image shows ten possible issues while recording blood pressure: for instance, the cuff is too large, or the patient's posture is not the recommended one, or the conditions all look right but the patient is in a stressful situation. So we needed to engineer features that address such confounding in the final dynamic model. We were already considering medication status as one of the features, because blood-pressure-lowering medication can influence the patient's trend over time and lead to an apparently negative hypertension status. Furthermore, the ASCVD risk, the atherosclerotic cardiovascular disease risk, is a 10-year hazard of developing the disease, and it determines when a patient should start pharmacologic therapy. A patient may not be on medication, but if their risk is high, that may indicate they are hypertensive. So we look at the medication status, but we also look at the risk: even if the patient is not on medication, a high risk points toward hypertension, so the risk becomes a major predictor as well. The risk estimate is computed using Cox proportional hazards regression, which relates several risk factors, as exposures, to survival time. The risk factors are age, sex, race, systolic blood pressure, smoking status, treatment for hypertension, diabetes mellitus, total cholesterol, and HDL cholesterol. The cohorts were assembled by the ACC/AHA using data from the National Health and Nutrition Examination Survey, and the survey cohorts were divided into different pools by demographic group.
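Stepping back to the ICD-10 comparison for a moment: the recording does not make clear exactly which test was used, but a standard choice for this kind of row-by-column homogeneity question is a chi-square test on the contingency table, sketched here with invented counts.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = ICD-10 hypertension documentation
# (absent / present), columns = machine-labeled stage (normal, elevated,
# stage 1, stage 2). Counts are made up purely for illustration.
table = np.array([
    [52000, 31000, 24000,  9000],   # no ICD-10 hypertension code
    [ 4000,  9000, 21000, 30000],   # ICD-10 hypertension code documented
])

chi2, p, dof, expected = chi2_contingency(table)
print(p)  # a tiny p-value rejects the hypothesis that the stage distribution
          # is the same in both documentation groups
```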
Returning to the ASCVD risk: the Cox proportional hazards regressions were then computed for each pool, which is why they are referred to as the pooled cohort equations. The resulting beta coefficients from the regressions were published by the ACC/AHA, precisely so that developers could construct calculators for the risk score. We used these published coefficients to construct a calculator, so that we could label the dataset by running a simple function that computes the risk for all the patients.

In consultation with a panel of clinician members, we also engineered a grouping feature that is a little more granular than the blood pressure stages themselves and is informed by the work of the panel that wrote the 2017 guidelines. For instance, patients in the first row have blood pressures below the hypertensive cutoffs, no documented diagnosis, and are not on medication, so this group is group number one and is described as normotensive; however, up to 10 percent of normotensive patients can have masked hypertension, and that is a modifier for this group. Patients in the second group, group number two, have undiagnosed or untreated hypertension; the cautionary tale here is that 30 percent of undiagnosed patients might be displaying white coat hypertension, that is, high blood pressure due to human error or perhaps an uncalibrated cuff. There are likewise interpretations for the remaining groups. This slide just shows how we internally computed the group with a simple conditional flow statement; running that statement, we were able to label each record in the dataset, creating the feature.

From all of the evidence in the literature on blood pressure and from the clinical practice guidelines, it appears that a dynamic modeling approach is required to accurately and adequately model hypertension. Indeed, dynamic modeling helps capture the true blood pressure trend in a patient's records, and it could also enable forecasting, or predicting the future hypertension status of a patient. We designed the dynamic model using two approaches: a deep learning approach and a probabilistic graphical modeling approach. In deep learning, a dynamic model is constructed using what is called a recurrent neural network, or RNN. It is suitable for analyzing time series, and it is common in the stock market for predicting future prices from past trends, and in autonomous driving systems, where an RNN can help anticipate car trajectories. RNNs are also common in text processing, for translation or for predicting the next word based on a word sequence. All of those applications describe the use of RNNs on unstructured data, but RNNs can also be applied to structured data, which is what we did in this case, as long as there is a time series: the sequence of earlier observations in a record might predict the future status of the patient. We used two different neural network variants. For the architecture on the left, the inputs were not normalized, but we included a normalization layer; for the architecture on the right, we made sure to normalize the inputs before handing them to the model. Since this is a deep learning approach, the computations are carried out by the hidden layer; in this case, there is only one hidden layer, which is a long short-term memory, or LSTM, layer with 100 neurons. I'll describe how the LSTM works in the next slide.
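Before going inside the LSTM cell, here is a minimal Keras sketch of the kind of architecture just described: one 100-unit LSTM hidden layer, with the two variants differing only in where normalization happens. The input shape, the optimizer, and the sigmoid output are my assumptions, not the project's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, FEATURES = 10, 5  # assumed shape: e.g. the last 10 encounters, 5 features each

# Variant 1: raw inputs, with a normalization layer inside the model.
norm = layers.Normalization()
# norm.adapt(train_sequences)  # learn feature means/variances from the training data

model_a = models.Sequential([
    layers.Input(shape=(TIMESTEPS, FEATURES)),
    norm,
    layers.LSTM(100),                       # the single hidden layer from the talk
    layers.Dense(1, activation="sigmoid"),  # probability that the patient is hypertensive
])

# Variant 2: inputs normalized beforehand, so no normalization layer is needed.
model_b = models.Sequential([
    layers.Input(shape=(TIMESTEPS, FEATURES)),
    layers.LSTM(100),
    layers.Dense(1, activation="sigmoid"),
])

for m in (model_a, model_b):
    m.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```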
The drawback of the deep learning approach is that the behavior of the hidden layers is not transparent, so it is not obvious for a human to comprehend how the neurons carry the signal conveyed by the input instance through the internal computations. On the upside, however, deep learning models tend to have higher accuracy relative to other approaches, which is why, for this dynamic modeling, we established a baseline accuracy with the RNN.

The hidden layer is made up of blocks of cells. Before the LSTM was conceived, or rather developed, in 1997, the input, and this is true for all deep learning algorithms, would undergo several transformations through the successive activations, and that caused the model, for a given task, to lose all traces of earlier input. For example, when training a network to translate from one language to another, it would lose, or kind of forget, how the sentence began while trying to predict the rest of the sentence. The LSTM cell helps maintain the state of the input despite the transformations. The figure shows that the cell receives the current input, x of t, together with c of t minus one, the long-term state from the previous step, and h of t minus one, the short-term state from the previous step. The long-term state first goes through the forget gate: some of the memories deemed irrelevant are dropped because they may no longer be useful. Then some new memories are added, based on the current input and the previous short-term state. Once that is done, the long-term state is sent straight out, without any further transformation, to preserve it; but a copy of it is also made, and that copy goes through an activation function, represented by the light blue bar here, which creates a leaner version, with less memory, that goes out as the short-term state. Meanwhile, the current input x of t and the previous short-term memory h of t minus one undergo transformations through logistic and tanh activations. The logistic gates decide which parts of the input and of the memory are dropped, which parts are added to the short-term memory for the next step, and also, of course, what the output of the cell at time t is.

This slide shows some of those activation functions I just talked about. The sigmoid, or logistic, activation function, as you can see, squashes the input to values between zero and one; similarly, the tanh squashes the input to values between minus one and one. All of these transformations can make it impossible to trace the input, and that is part of the reason why deep learning approaches are black boxes: after all these transformations, it is hard to know which features of the input were more relevant for a given prediction. As opposed to the black box nature of a recurrent neural network, a Bayesian belief network is a type of probabilistic graphical model that allows for transparent modeling.
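As a toy illustration of what "transparent" means here, the sketch below drives a two-slice, DBN-style prediction from an explicit conditional probability table; the structure and the numbers are invented purely for illustration and are not the project's model.

```python
# P(hypertensive at step t | hypertensive at t-1, guideline stage at t).
# Every number the prediction uses is visible in this table.
p_htn = {
    # (previously_hypertensive, stage): probability
    (False, 0): 0.02, (False, 1): 0.10, (False, 2): 0.45, (False, 3): 0.80,
    (True,  0): 0.60, (True,  1): 0.75, (True,  2): 0.90, (True,  3): 0.97,
}


def predict(prev_prob: float, stage: int) -> float:
    """Marginalize over the uncertain previous status to get P(hypertensive at t)."""
    return prev_prob * p_htn[(True, stage)] + (1 - prev_prob) * p_htn[(False, stage)]


# A patient who was 30% likely hypertensive at the last encounter, now presenting at stage 2:
print(predict(0.30, 2))  # 0.585, and you can point to exactly which table entries drove it
```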
This transparency is especially strong when the structure, the directed graph depicted here, is known; in our case, we constructed it manually with expert input, because Bayesian belief networks allow that. The structure can also be learned from the data when it is not known; that approach also produces a structure, and it is suitable when there is no domain expertise available. In our case we do have domain expertise, so it was appropriate to decide ourselves which nodes connect to which, and this is part of what fosters the transparency of the model: when we make a prediction of a status, it is easy, with knowledge of the conditional probabilities, to trace it and see which parts of the data went into that decision. That means the transparent model can easily be interpreted by somebody with a little domain knowledge, somebody who knows how Bayesian networks work. That is the dynamic Bayesian network, and you can also see the blue lines, which depict how a node at the previous time step links to the corresponding node at the next step.

In conclusion, based on the literature, 47 percent of Americans 18 and older are affected by hypertension, and 33 percent of them do not know about their hypertension. The model we are developing is based on peer-reviewed, evidence-based guidelines and expert input, so it can improve the accuracy of hypertension diagnosis compared to the current state. It also showcases the workings of a problem-knowledge coupler in treating a constantly changing problem list item. We have reviewed part of the literature on this issue, and it is still a really big one, in the sense that problem lists in the patient's electronic record can become cluttered, often accumulate outdated entries, and become less useful for the clinician. Hypertension is one of the problems that shift constantly over time, and such models can act as problem-knowledge couplers, bringing an aspect of automated analytics to the curation of the problem list. Finally, research also shows that black box solutions see less adoption by physicians and advanced practice providers, in part because they are opaque, which means it is not easy to trust the output. Decisions guided by black box models can be harder to explain to the patient and can open up complicated medicolegal issues. Transparent modeling, however, is interpretable by somebody who knows a little about how these networks work, and it can also more easily be made explainable than a black box; a black box model can be made explainable too, but it takes more work. Thank you.

SPEAKER 0 Thank you. Excellent presentation. It's now open for discussion.

SPEAKER 2 Hi, can you hear me OK? Yes? Great. This is Bruce, from Utah. Blood pressures are an interesting space to play with in this area because they're a very accessible, semi-big-data cache that we often have in our medical record systems. So we've played with them as well, and one of the most interesting things is that good data area you touched on. There are lots of bad blood pressures, as you probably found out, and as we found out when we looked at a bunch of them; some of our primary care docs actually get paid based on what your blood pressure is now, or at least get a bit of a bonus for successfully meeting the guidelines.
So we started looking at this, and we found the context of where the blood pressure was done was about as important as anything. And of course, the emergency room and, interestingly, the IV infusion center were the few places that had the highest blood pressures in our area. Fortunately, we screened out the exercise blood pressures from our exercise lab, because those are of course high. So knowing the context of things like where it was done matters; in fact, we even started to label an element for the blood pressure done in the office in a controlled resting environment, because that's the gold standard. So that's part of that whole quality-of-data issue, and that made some sense. We played a bit with that, and it's an interestingly difficult problem to get good data in some of this. The other thing is we also played a little bit with the LSTM model on the therapy side of this. One of my students has looked at the trajectories of blood pressure and which class of drugs is going to be the most successful to add next, given the blood pressure measurements, and done that in a bunch of patients. The LSTM works nicely on the therapy side of things as well, particularly in identifying how to treat the new blood pressure patient with the initial drugs; that's relatively simple. Finding the drug-resistant, the resistant hypertension patient, and when to refer them on to the hypertension clinic or something like that, is probably one of the more interesting places to play with machine learning, because that's something people struggle with for months or years with inadequate therapy. And if you can identify them earlier and get them shunted to the right place and the right med trajectory, I think that's a particularly interesting place to play with some of these tools.

SPEAKER 1 Exactly, I agree. The next step is really, as you said, to get into the management aspect of hypertension from here.

SPEAKER 3 Hi, Eve, this is fantastic, a very nice presentation, thank you. You probably spent a lot of time on it, and Dr. Rendell also. But I have a question for you. If, goodness forbid, you have hypertension, or maybe you suspect it, again goodness forbid, and you know that a deep learning neural network gives you very good precision, say over 90 percent or something like that based on a legitimate study, versus a Bayesian network that gives, say, 80 percent, which one would you prefer to use for yourself or for your relatives or friends?

SPEAKER 1 Yes, that's really a good question, and there is a lot of discussion on that point of adoption in medicine, black box versus transparent modeling. Of course, a model can have high accuracy, and you can say it truly predicts the disease because of that high accuracy. But the fact is that sometimes it's not that easy to understand, or to explain to the patient, the reason behind the prediction, or even to understand from the features which one was more important. So I would say, first of all, there are many studies discussing how to adopt these models when we do not know how they work internally but they are accurate.
We know such a model could be employed, but at the same time, if a model is to be used for management of a disease, it's important to understand the factors that contributed most to the prediction, so as to know what advice to give to the patient or what lifestyle changes should be made. If you don't understand which factors are more important than others in the prediction, that can lead to lower-quality management.

SPEAKER 3 Thank you. So you're implying that maybe Bayesian networks will do better for causality modeling versus predictive modeling. But you have to be careful about the causality, because it's very tricky.

SPEAKER 1 It is, and I should say that at this point the arrows in the graph do not represent causality; that is not the claim. But from my epidemiology classes, causality is really needed in medicine; many of the statistical analyses I did in those classes were mainly about causality. So it's something to consider: dynamic Bayesian networks, or Bayesian networks in general, can be used to model causality, and it's something I would like to bring in down the road in the next phase.

SPEAKER 0 Thank you. One of the things that's interesting when we talk to the hypertension experts is that they're hard to pin down, especially on the negation, that you're not hypertensive. When we asked them to predict whether a patient was normotensive or hypertensive, for the negation we could only get within 10 to 20 percent: they would commit to an 80 to 90 percent chance that the patient was not hypertensive, and we couldn't get them above that, even as experts. They demanded more data for that. On the converse side, when you start modeling that a patient has hypertensive measurements in clinic, has a diagnosis of hypertension, and is on antihypertensive medicines, then you really get to that 95 percent probability that they're hypertensive. And we have a question here, one last one.

SPEAKER 1 Yes, exactly, I've seen the question. The question is whether clinicians have been experimenting with the single decision tree. No, we are still in the development phase; nothing has been tested clinically, like in the real world.

SPEAKER 0 Well, in a way we have; that's what we've been doing.

SPEAKER 1 Yes, that's a fact. We had the panel of clinician experts looking at the model, and they validated it and contributed to creating and engineering some of the features. But if the question is whether it's been used in clinical practice, not yet. It has been trained, validated, and tested on an independent test set, and the accuracy I reported is the one on the test set, so that shows how it would perform on new data. The other question is about how the outliers in the dataset were handled and how the noise was detected. That was based on the clinical experts' input and on the research as well, like excluding patients with comorbid disease or removing impossibly high blood pressure measurements; that is how we handled the noise.

SPEAKER 2 Just in addition, you know, AI papers are exploding in our review stack, and the external validity question always comes up.
In fact, there's an argument that you should validate externally before you even publish initially, because it's so easy to overfit some of these models. So getting together and collaborating with some other groups to validate the model externally matters, because that context is really important, and the context changes frequently between different sites. That's another thing that's fairly relevant for this. But these computer pattern recognition things are pretty powerful, and that makes it interesting to explore with these new tools.

SPEAKER 0 Well, we are at the top of the hour, so if anyone needs to drop off, I want to thank Eve, but I'm sure he's willing to stay on a couple more minutes if there are more questions. Thank you.

SPEAKER 2 So where are you going to go next with this?

SPEAKER 1 The next step would be looking at blood pressure measurements that can be more predictive of hypertension; based on the research, that is home blood pressure monitoring. That would really be the next step. And of course, I would also really like the project to move into the management side, because we have high numbers of people whose blood pressure is out of control, and figuring out a pragmatic way of identifying

SPEAKER 2 those resistant patients, because they're the ones that need special care, team management, potentially referral to nephrology; we have a multidisciplinary special clinic just for those patients. Finding those people sooner, I think, is a very valid goal, and doing that across a few institutions would be interesting.

SPEAKER 0 Yeah. Bruce, the other thing we've identified, and you saw it in one of the slides, and it actually comes up with one of our experts, is that the guidelines are written as dichotomous and not continuous, and there are real barriers to the accurate diagnosis of hypertension based on dichotomous numbers. The expert who wrote the guidelines has, in his own medical record, two blood pressures recorded on two different occasions that were stage 2 hypertension, and in the midst of the meeting he says, "But I'm not hypertensive." And that's where we really think the Bayesian work comes in, because I think what we're going to see is that there will be patterns in individuals: there will be a group that is pretty strictly normotensive in all settings, there is going to be a group that is hypertensive in all settings, and then there's going to be a dynamic group that runs across all the categories. So I think it's going to be interesting. In fact, at our last meeting we had a great discussion of what hypertension actually means. Do you determine hypertension, like a lot of the literature, by the amount of LVH? What actually constitutes the diagnosis of hypertension? How many blood pressures above 140 do you actually need? So I think we really see it, at a fundamental level, as a reset of the discussion around blood pressure, moving from a dichotomous to a continuous, dynamic model.

SPEAKER 2 I agree, and certainly the Bayesian approach can take into consideration other features; right now we're throwing away all of the clinical features, their body size and the comorbidities and things like that. And of course, in our research we use intimal thickening of the arteries and some fairly fine measures there that we don't do on everybody at home.
But you know, what they're doing at home is probably more important than what they do in front of a white coat out in the clinic. So that's a rich area to work in, and some of those other issues that you can capture on the clinical side can be represented in the Bayesian kind of model. Very good. Thank you very much. Very interesting presentation.

SPEAKER 1 Bye bye.