Artificial Intelligence in Health Projects

Using Large Electronic Health Records and Advanced Analytics to Develop Predictive Frailty Trajectories in Patients with Heart Failure

Javad Razjouyan

Physical frailty is common in adults with heart failure and increases the risk for poor health outcomes. Clinicians need tools to identify adults with physical frailty as part of routine care to support early intervention to treat frailty and reduce this risk. This project seeks to develop a novel measure of physical frailty using data collected as part of routine clinical care and recorded in the electronic health record.

Dementia Caregiver Support During Care Transitions

Molly Horstman

Dr. Molly Horstman was awarded a Mentored Physician-Scientist Award in Alzheimer’s Disease and Related Dementias funded by VA Health Services Research and Development and the National Institute on Aging. Using a planned adaptation approach, Dr. Horstman and her research team will develop and test a new intervention that combines evidence-based care transitions training with evidence-based dementia caregiver support. Dr. Horstman’s mentorship team for this award includes Dr. Mark Kunik (Baylor College of Medicine; Center for Innovations in Quality, Effectiveness and Safety), Dr. Alan Stevens (Baylor Scott and White Health), and Dr. Aanand Naik (Baylor College of Medicine; Center for Innovations in Quality, Effectiveness and Safety).

Community Resource for Collaborative Benchmarking of Genomic Data Analysis Tools

Bo Peng

Benchmarking is a critical step in developing powerful computational tools and can be used to select the most appropriate analysis tools for genomic data. Because genomic datasets are large and the many available analysis tools are difficult to learn and run, benchmarking can be a tedious and unrewarding part of genomic research, with incomplete and often biased results that capture only a snapshot of the data and tools available at the time of the study. Our long-term goal is to create a repository of benchmarking studies that makes it easier for researchers to create, share, and update “live” online benchmarking studies. Toward this goal, we propose to develop a standard data-exchange format, related software tools, and a website called BioBenchmark.org to manage benchmarking studies and visualize benchmark results. The platform will allow researchers in the genomic research community to:

  • Collaboratively create benchmarking studies
  • Continuously update benchmarking studies with new computational tools (or new versions of existing tools), reference datasets, and performance evaluation metrics
  • Use the resources to benchmark their own methods

The rationale is that a collaborative platform would support the development of comprehensive, ongoing benchmarking studies that are less biased and have a long-lasting impact on the relevant fields. The repository would grow through user contributions if it reciprocally provided useful tools for users to benchmark their own data analysis tools. The expected outcomes of this project are a central repository and website with several “live” genomic benchmarking studies that are ready to be viewed, updated, and expanded, along with a set of tools and resources to help researchers benchmark their own data analysis tools. The resulting resource is significant because it:

  • Provides the genomic research community with a centralized resource of benchmarking studies
  • Significantly simplifies performance evaluations of numerous data analysis tools, so they can be applied appropriately
  • Facilitates the development of new data analysis tools by providing a feedback loop of performance measures

The repository also encourages data sharing and promotes research reproducibility and integrity. BioBenchmark.org will be an extension of tools the team has developed over the past few years and will be tested by implementing several genomic research benchmarking studies. The development and long-term operation of BioBenchmark.org will be supervised by a Scientific Advisory Board.
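As a rough illustration of what a standard data-exchange record for a single benchmark result might look like, the Python sketch below uses hypothetical field names (study, tool, dataset, metrics) chosen for this example; they are assumptions, not BioBenchmark.org's actual schema:

```python
import json

# Hypothetical data-exchange record for one benchmark result. The schema
# (field names, nesting) is illustrative only, not the project's real format.
result = {
    "study": "variant-calling-benchmark",
    "tool": {"name": "toolA", "version": "1.2.0"},
    "dataset": {"name": "reference-set-1", "version": "2023-05"},
    "metrics": {"precision": 0.97, "recall": 0.93, "runtime_seconds": 412},
}

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, a common evaluation metric."""
    return 2 * precision * recall / (precision + recall)

# Serialize the record so it could be exchanged between tools or sites,
# and derive an aggregate metric from the stored values.
record_json = json.dumps(result, indent=2)
score = f1(result["metrics"]["precision"], result["metrics"]["recall"])
```

A machine-readable, versioned record like this is what would let a "live" study be re-aggregated automatically whenever a new tool version or dataset is contributed.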

Machine Learning to Improve Performance of Electronic Safety Triggers (IRB: H-46281)

Andrew Zimolzak

Missed or erroneous diagnoses are common in medicine and a source of excess morbidity and mortality. This project seeks to enhance the retrieval of emergency department diagnostic errors using rules-based and semi-supervised machine learning methods. In brief, researchers at BCM and the Houston VA IQuESt center previously created expert-informed "e-triggers" that retrieve cases of possible diagnostic error, but their positive predictive value has been modest. In the present AHRQ-funded research, clinicians are labeling trigger-positive charts as true positive or false positive. We are designing a vector embedding of structured and unstructured medical record data. The labeled records and embeddings will be combined to support similarity-based retrieval of further cases, which we hypothesize will be enriched in diagnostic errors.
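A minimal sketch of the similarity-based retrieval idea, assuming cosine similarity over fixed-length chart embeddings; the centroid-ranking rule and toy two-dimensional vectors here are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_by_similarity(true_pos: np.ndarray, unlabeled: np.ndarray) -> list:
    """Rank unlabeled chart embeddings by similarity to the centroid of
    labeled true-positive charts; most similar first."""
    centroid = true_pos.mean(axis=0)
    sims = [cosine_sim(centroid, v) for v in unlabeled]
    return sorted(range(len(unlabeled)), key=lambda i: sims[i], reverse=True)

# Toy 2-D embeddings: two labeled true positives and three unlabeled charts.
labeled = np.array([[1.0, 0.1], [0.9, 0.2]])
candidates = np.array([[1.0, 0.0], [0.0, 1.0], [0.8, 0.3]])
order = rank_by_similarity(labeled, candidates)  # most error-like first
```

Charts near the top of such a ranking would then be prioritized for clinician review, which is how retrieval could become enriched for true diagnostic errors.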

Trial of Virtual Breakthrough Series for Improving Follow-up of Test Results (IRB: H-45450)

Andrew Zimolzak

Algorithms to detect delayed follow-up of abnormal clinical tests (e-triggers) have been developed by researchers, but these need to be implemented in clinical operations to positively impact patient outcomes. This study is a stepped wedge cluster randomized trial involving 12 VA medical centers. The intervention comprises a change package, frequent implementation teleconferences, and an adapted version of e-trigger code. The informatics challenge is to engineer the e-trigger for an expanding user base with more time constraints than typical researchers. Using a pseudocode intermediate, we have also successfully implemented the e-triggers at one non-VA site with a very different electronic record and data model.
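The core logic of such an e-trigger can be sketched as below; the field names and the 30-day window are hypothetical placeholders for this example, not the study's actual code or thresholds:

```python
from datetime import date, timedelta

# Illustrative follow-up window; the real trigger's threshold may differ.
FOLLOW_UP_WINDOW = timedelta(days=30)

def delayed_follow_up(results: list, follow_ups: list, today: date) -> list:
    """Return IDs of abnormal results with no follow-up action recorded
    within the window after the result date."""
    flagged = []
    for r in results:
        if not r["abnormal"]:
            continue
        due = r["result_date"] + FOLLOW_UP_WINDOW
        acted = any(f["result_id"] == r["id"] and f["date"] <= due
                    for f in follow_ups)
        if today > due and not acted:
            flagged.append(r["id"])
    return flagged

# Toy data: result 1 has no follow-up, result 2 was acted on in time,
# result 3 is normal and therefore never triggers.
results = [
    {"id": 1, "abnormal": True, "result_date": date(2024, 1, 2)},
    {"id": 2, "abnormal": True, "result_date": date(2024, 1, 2)},
    {"id": 3, "abnormal": False, "result_date": date(2024, 1, 2)},
]
follow_ups = [{"result_id": 2, "date": date(2024, 1, 20)}]
flagged = delayed_follow_up(results, follow_ups, date(2024, 3, 1))
```

Expressing the trigger at this level of abstraction is analogous to the pseudocode intermediate described above: the same logic can then be translated onto different electronic records and data models.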