Deep learning harnessed to suggest new purposes for drugs

Researchers from Ohio State University have developed a machine learning method that helps determine which existing medications could improve outcomes in diseases for which they have not been prescribed.

Drug repurposing – which accelerates the commercialisation of drugs and lowers the risk associated with safety testing – is not a new concept. Botox injections, used for cosmetic purposes, were originally approved to treat crossed eyes, while sildenafil (marketed as Viagra) was first developed to treat chest pain. However, this process traditionally required serendipity in addition to time-consuming and expensive randomised clinical trials.

This project applied machine learning to datasets from millions of patients to find candidates for drug repurposing and to predict the effects of those medications on a set of outcomes. Promising candidates can then be entered into the clinical trial process.

Although this project focused on repurposing drugs to prevent heart failure and stroke in patients with coronary artery disease, the strategy could be applied to most diseases.

“This work shows how artificial intelligence can be used to ‘test’ a drug on a patient and speed up hypothesis generation and potentially speed up a clinical trial,” said Professor Ping Zhang, an expert in biomedical informatics and head of the AI in Medicine Lab at Ohio State University. “But we will never replace the physician; drug decisions will always be made by clinicians.”

According to Zhang, machine learning can account for hundreds of thousands of differences within a large population that could influence how a medicine is received by the body. These factors, which include age, sex, race, disease severity, and comorbidities, function as parameters in the deep-learning algorithm on which the method is based. The algorithm can also account for the passage of time in each patient’s experience, covering every visit, prescription, and test.
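To make the idea of feeding visit histories and confounders into a sequence model concrete, here is a minimal sketch of how such data might be structured. The class and field names are our own illustrative assumptions, not the study's actual data schema; the multi-hot encoding shown is simply a common way to represent medical codes per visit for sequence models.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration: one visit becomes one timestep in the
# sequence a deep-learning model would consume. Names are assumptions.
@dataclass
class Visit:
    days_since_first_visit: int
    diagnoses: List[str]
    prescriptions: List[str]

@dataclass
class Patient:
    age: int
    sex: str
    comorbidities: List[str]                  # static confounders
    visits: List[Visit] = field(default_factory=list)

def to_model_input(patient: Patient, vocab: List[str]) -> List[List[int]]:
    """Turn a patient's visit history into a sequence of multi-hot
    vectors, one per visit, over a fixed vocabulary of medical codes."""
    index = {code: i for i, code in enumerate(vocab)}
    sequence = []
    for visit in patient.visits:
        vec = [0] * len(vocab)
        for code in visit.diagnoses + visit.prescriptions:
            if code in index:
                vec[index[code]] = 1
        sequence.append(vec)
    return sequence
```

A sequence model (such as a recurrent network) would then consume these per-visit vectors in order, alongside the static confounders, which is how the passage of time across visits, prescriptions, and tests can be captured.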

“Real-world data has so many confounders. This is the reason we have to introduce the deep-learning algorithm, which can handle multiple parameters,” said Zhang. “If we have hundreds of thousands of confounders, no human being can work with that. So, we have to use artificial intelligence to solve the problem.”

“We are the first team to introduce use of the deep-learning algorithm to handle the real-world data, control for multiple confounders and emulate clinical trials.”

The researchers used causal inference theory to categorise patients into the equivalent of the active-drug and placebo groups found in a clinical trial, which allowed them to address the complication of multiple treatments. The model tracked patients for two years and compared their disease status at that end point with whether they took medications, which ones they took, and when they began treatment.
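One standard causal-inference tool for emulating a randomised trial from observational data is inverse-probability-of-treatment weighting (IPTW), sketched below with made-up numbers. This is only an illustration of the general technique of controlling for confounders when comparing treated and untreated groups; the study's actual deep-learning model is far more sophisticated, and the function name and record format here are our assumptions.

```python
def iptw_effect(records):
    """Estimate a treatment effect from observational records.

    records: list of (treated, propensity, outcome) tuples, where
    `propensity` is the estimated probability of receiving treatment
    given the patient's confounders. Returns the weighted difference
    in mean outcome between treated and untreated groups.
    """
    treated_w = treated_y = 0.0
    control_w = control_y = 0.0
    for treated, propensity, outcome in records:
        if treated:
            w = 1.0 / propensity           # up-weight patients unlikely to be treated
            treated_w += w
            treated_y += w * outcome
        else:
            w = 1.0 / (1.0 - propensity)   # up-weight patients unlikely to be untreated
            control_w += w
            control_y += w * outcome
    return treated_y / treated_w - control_y / control_w
```

The weighting creates a pseudo-population in which treatment assignment is independent of the measured confounders, mimicking the balance a randomised trial achieves by design.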

The model yielded nine candidates considered likely to lower risk of heart failure and stroke in coronary artery disease patients, including six which are not already in use.

Among other findings, the analysis suggested that a diabetes medication (metformin) and a drug used to treat depression and anxiety (escitalopram) could lower the risk of heart failure and stroke. Both drugs are currently being tested for their effectiveness against heart disease. Zhang emphasised that the method matters more than these specific findings.