AWS Spotlight: Voice, Supply Chain, and Data Projects

The Pittsburgh Health Data Alliance recently teamed up with Amazon Web Services (AWS) to improve patient care through machine learning. Researchers on four of the AWS-supported projects are creating automated systems to improve diagnostic coding, combining imaging and EHR data to make stronger predictions about disease, and developing voice-related technologies.

Voice & Visual Biomarker Discovery for Large Scale Data Collection and Algorithm Development

Louis-Philippe (LP) Morency, PhD and Eva Szigethy, MD, PhD

Assessment of mental health symptoms requires the understanding and evaluation of multimodal behaviors that occur during face-to-face communication. For this project, researchers are creating new multimodal fusion algorithms that can identify biomarkers in the speech and behavior evident during conversation. Discovery of such objective, non-invasive biomarkers will improve the accuracy and efficiency with which mental health disorders are diagnosed and monitored.
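
As a rough illustration of the general idea only (the article does not describe the project's actual fusion algorithms), the sketch below shows an early-fusion baseline that concatenates per-session acoustic and visual feature vectors and trains a single classifier on the joint representation. The feature dimensions and labels are hypothetical placeholders.

    # Minimal early-fusion sketch (illustrative only; not the project's method).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_sessions = 200
    acoustic = rng.normal(size=(n_sessions, 40))  # hypothetical prosody/voice-quality features
    visual = rng.normal(size=(n_sessions, 30))    # hypothetical facial-expression statistics
    labels = rng.integers(0, 2, size=n_sessions)  # hypothetical screening outcome per session

    # Early fusion: concatenate the modalities and learn one decision boundary.
    fused = np.concatenate([acoustic, visual], axis=1)
    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, fused, labels, cv=5).mean())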


Diagnosis Coding Engine for Electronic Health Records

Pradeep Ravikumar, PhD and Jeremy Weiss, MD, PhD

The goal of this project is to address the cognitively complex problem of medical diagnostic error by developing an automated diagnosis coding engine. The solution will review all available data in the electronic health record, including both structured data, such as laboratory observations and medications, and unstructured data, such as clinical notes, and use machine learning to map that data to specific diagnosis codes. This project will advance the field of concept feature representation and will provide a tool to improve the accuracy of diagnostic coding, which has implications for medical care decisions and medical billing.
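
One conventional way to frame this kind of task (shown below purely for illustration; the project's actual engine is not described in this article) is multi-label classification over diagnosis codes, with free-text notes vectorized alongside structured fields. The toy records, codes, and model choice are hypothetical.

    # Illustrative sketch: mixed structured + unstructured EHR inputs mapped to
    # diagnosis codes as multi-label classification (not the project's engine).
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    records = pd.DataFrame({
        "note": ["shortness of breath, elevated BNP",      # unstructured clinical text
                 "fasting glucose 180, on metformin"],
        "lab_abnormal_count": [3, 1],                       # structured field (toy)
    })
    codes = [["I50.9"], ["E11.9"]]                          # ICD-10 codes per encounter (toy)

    y = MultiLabelBinarizer().fit_transform(codes)
    features = ColumnTransformer([
        ("notes", TfidfVectorizer(), "note"),               # text -> sparse TF-IDF features
        ("labs", "passthrough", ["lab_abnormal_count"]),    # keep structured values as-is
    ])
    model = Pipeline([
        ("features", features),
        ("clf", OneVsRestClassifier(LogisticRegression(max_iter=1000))),
    ])
    model.fit(records, y)
    print(model.predict(records))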


Multi-Modal Learning from Imaging and EHR Data

Zachary Lipton, PhD

This project is a large-scale investigation of multi-modal machine learning for diagnostic recognition and prognostic prediction. The initial use case is breast cancer, chosen because of the availability of abundant, well-annotated mammography imaging and the dependence of current treatment patterns on other data available in electronic health records. Rather than focusing on what data in the health record might be predicted from imaging alone, the proposed approach combines the available health record information with the available imaging to make stronger predictions than either source could support in isolation. The approach also explores temporal aspects of machine learning and statistical certainty in multi-modal data classification.
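
One minimal way to picture such a combination (purely illustrative; the article does not specify the project's architecture) is a prediction head that concatenates an image-derived embedding with a vector of EHR features. The dimensions, encoder, and prediction target below are hypothetical.

    # Illustrative fusion head for image embeddings + tabular EHR features
    # (a sketch under assumed dimensions, not the project's actual model).
    import torch
    import torch.nn as nn

    class ImageEhrFusion(nn.Module):
        def __init__(self, image_dim=512, ehr_dim=32, hidden=128):
            super().__init__()
            self.fuse = nn.Sequential(
                nn.Linear(image_dim + ehr_dim, hidden),  # joint layer over both modalities
                nn.ReLU(),
                nn.Linear(hidden, 1),                    # e.g., risk score for one outcome
            )

        def forward(self, image_embedding, ehr_features):
            # Concatenate the modalities and predict from the joint representation.
            x = torch.cat([image_embedding, ehr_features], dim=-1)
            return torch.sigmoid(self.fuse(x))

    model = ImageEhrFusion()
    scores = model(torch.randn(4, 512), torch.randn(4, 32))  # batch of 4 toy examples
    print(scores.shape)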


Novel Voice-based Technologies for Healthcare

Florian Metze, PhD

As an Associate Research Professor at Carnegie Mellon University’s Language Technologies Institute, Dr. Metze is a leading researcher in the areas of speech recognition, speech processing, and human-computer interaction. In this project, his team will draw on their expertise to develop novel tools to improve healthcare.


To learn more about other Alliance projects that received AWS support, click here.