October 2016

Project Aims to Automate Adverse Event Reporting in Cancer Clinical Trials


A lot could go wrong in a clinical trial for cancer, even for a treatment that proves effective. Documenting any adverse events thoroughly and accurately — all the bad along with the hoped-for good outcomes — is critically important to show whether there are indeed benefits from a potential new treatment, and whether those benefits are enough to outweigh any potential harms.

Identifying and reporting adverse events in a cancer clinical trial is a laborious process for clinical research assistants. They must manually review patients’ charts to identify symptoms consistent with more than 700 adverse events specified in documentation guidelines from the National Cancer Institute (NCI), then report those events via an electronic case report form. Because this process is so complex, clinical trials of new therapies may not accurately measure how often patients experience treatment side effects during the study.

“It means that we don’t really have the right information to counsel patients,” said Tamara Miller, MD, MSCE, a pediatric oncologist and instructor in the Division of Oncology at Children’s Hospital of Philadelphia, who led a study published this spring in the Journal of Clinical Oncology evaluating adverse event reporting in two pediatric cancer clinical trials. When expert clinicians reviewed patient charts in detail and compared the adverse events they identified to one of the trials’ published rates, Dr. Miller and colleagues found that the adverse events they studied were underreported.

If such undercounting is common, it has widespread implications for patients. For example, if published studies report that 10 percent of patients who receive a treatment have a certain adverse experience, underreporting could mean the true rate is actually higher. Giving patients and families potentially inaccurate information is harmful because they deserve to know what risks they face in order to make an informed treatment decision.

“It also makes it hard to compare new drugs,” Dr. Miller added. “If you don’t really know the side effects of an old drug, it’s hard to know if the new drug that comes along is better or worse.”

But Dr. Miller is already at work on a solution, along with her mentor, Richard Aplenc, MD, PhD, MSCE, a pediatric oncologist at CHOP and professor of Pediatrics at the Perelman School of Medicine at the University of Pennsylvania. She is leading an innovative effort to make adverse event reporting in cancer clinical trials more accurate and more complete by automating the process with computer algorithms. She recently received the Damon Runyon-Sohn Pediatric Cancer Fellowship Award from the Damon Runyon Cancer Research Foundation to support this work.

“It’s a great honor to have been selected,” Dr. Miller said. “It’s going to be a great opportunity to meet all the other fellows and hear what research they are doing.”

The other fellows will also have a chance to learn about what Dr. Miller is doing, which is fairly unusual in pediatric cancer research. Her approach captures adverse events in cancer clinical trials electronically, directly from primary data, instead of relying on a research staffer’s manual review. The algorithms extract data from a hospital’s electronic medical record and automatically grade any adverse events according to the NCI guidelines.

“By doing this, you take out the human error, you take out the human time that’s required, and you might make a system that’s more universal and more standardized between people, in addition to making it more accurate and more efficient,” Dr. Miller said.

She has already demonstrated the concept with a relatively simple algorithm that identifies and grades adverse events measured by lab test results, such as a high potassium reading on a blood test. The new grant takes this work to the next level by connecting multiple types of data from a patient’s electronic medical record, including laboratory results, radiology data, vital signs, and even clinician notes. These diverse sources are necessary to piece together relevant information about potential adverse events such as acute respiratory distress syndrome, which can be confirmed only with multiple types of clinical data.
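To give a sense of what a lab-based grading rule looks like, here is a minimal sketch in Python. The function name, the thresholds, and the sample values are illustrative assumptions for a hypothetical potassium rule; they are not the project’s actual algorithm or the official NCI grading cutoffs.

```python
# Illustrative sketch only: grade a serum potassium result into severity tiers.
# The thresholds below are assumed for illustration and are NOT the official
# NCI cutoffs or the study team's actual algorithm.

def grade_hyperkalemia(potassium_mmol_per_l: float, upper_limit_normal: float = 5.0) -> int:
    """Return an adverse-event grade (0 = no event) for a serum potassium value."""
    if potassium_mmol_per_l <= upper_limit_normal:
        return 0  # within the normal range, no adverse event recorded
    if potassium_mmol_per_l <= 5.5:
        return 1  # mildly elevated
    if potassium_mmol_per_l <= 6.0:
        return 2
    if potassium_mmol_per_l <= 7.0:
        return 3
    return 4  # severely elevated

# Example: grade every potassium result pulled from a (hypothetical) EMR export
lab_results = [4.8, 5.7, 6.3]  # values in mmol/L
grades = [grade_hyperkalemia(value) for value in lab_results]
print(grades)  # [0, 2, 3]
```

In practice the same pattern, a rule that maps a structured EMR value to a standardized grade, would be repeated across many lab tests, and combined with radiology, vital sign, and note data for events that cannot be confirmed from a single result.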

Once these algorithms are developed, Dr. Miller plans to test them both at CHOP and at Texas Children’s Hospital, which, as one of the largest children’s hospitals in the country, treats a large population of oncology patients. Testing the algorithms at multiple hospitals will help confirm that the tool is applicable beyond a single site. She and expert colleagues will also perform manual chart reviews and compare the rates at which they identify adverse events to the algorithm-generated rates.
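One simple way to frame that comparison, purely as an illustration, is to check how many events found by expert chart review the algorithm also flags. The data layout below (patient ID and event name pairs) and the sample events are assumptions, not the study’s actual analysis plan.

```python
# Illustrative sketch: compare algorithm-identified adverse events against a
# manual chart review treated as the reference standard. The (patient, event)
# pairs are hypothetical.

manual_review = {("pt01", "hyperkalemia"), ("pt02", "neutropenia"), ("pt03", "hypoxia")}
algorithm_output = {("pt01", "hyperkalemia"), ("pt02", "neutropenia"), ("pt04", "anemia")}

true_positives = manual_review & algorithm_output
missed = manual_review - algorithm_output   # events the algorithm failed to flag
extra = algorithm_output - manual_review    # events only the algorithm flagged

sensitivity = len(true_positives) / len(manual_review)
print(f"Sensitivity vs. manual review: {sensitivity:.0%}")  # 67%
print(f"Missed events: {missed}")
print(f"Algorithm-only events: {extra}")
```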

If the algorithms can successfully automate adverse event reporting, the approach may offer several benefits beyond improved accuracy. Automating the process could reduce the amount of time clinical research assistants spend on adverse event reporting, freeing them to focus on other essential aspects of running a clinical trial. The algorithms would also generate robust, granular data sets about clinical trial patients’ adverse events that could be useful for epidemiological studies — similar to population studies done on a whole hospital or health network’s electronic medical records, but focused on a cancer clinical trial population. Such studies could address new questions to help improve the design of future clinical trials.

“I think that is one of the things that is unique about this project, that we are really trying to use it as a tool to improve clinical trials, rather than simply gain information,” Dr. Miller said. “We really want to make the trials better, to generate data that is as accurate as possible while also running trials better and more efficiently.”
