The Challenges of Risk-Based Monitoring: Statistical Analysis


The shift to risk-based monitoring (RBM) is not without challenges for clinical research teams. In a previous blog, I outlined three key adoption challenges teams face. That blog also provides an in-depth review of the first challenge, risk assessment. In this blog, we focus on the challenge of establishing and performing the sophisticated statistical analyses that are critical to detecting clinical trial anomalies.

In my experience, I’ve seen beautiful bubble plots of endpoint data and convincing line graphs that demonstrate, beyond any doubt, fraud occurring at a clinical trial site. The prospect of having to create such presentations can intimidate any research team. They might ask, “Do we have the tools we need? Do we have the people and skills necessary to create and interpret these figures?”
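
To make that concrete, here is a minimal sketch of the kind of site-level bubble plot a central monitor might produce, written in Python with pandas and matplotlib. The table, column names, and values are all invented for illustration. A site whose mean is shifted and whose variability is implausibly low shows the classic signature of fabricated data, which tends to be under-dispersed.

```python
# An illustrative site-level bubble plot for central monitoring.
# The summary table is hypothetical: one row per site, with the mean
# and standard deviation of an endpoint plus enrollment counts.
import pandas as pd
import matplotlib.pyplot as plt

sites = pd.DataFrame({
    "site_id":       ["S01", "S02", "S03", "S04", "S05"],
    "mean_endpoint": [4.1, 4.3, 4.0, 6.8, 4.2],  # S04's mean is shifted...
    "sd_endpoint":   [1.2, 1.1, 1.3, 0.2, 1.2],  # ...and its variance is suspiciously low
    "n_patients":    [28, 31, 25, 30, 27],
})

fig, ax = plt.subplots()
ax.scatter(sites["mean_endpoint"], sites["sd_endpoint"],
           s=sites["n_patients"] * 20, alpha=0.5)  # bubble size ~ enrollment
for _, row in sites.iterrows():
    ax.annotate(row["site_id"], (row["mean_endpoint"], row["sd_endpoint"]))
ax.set_xlabel("Site mean of endpoint")
ax.set_ylabel("Site SD of endpoint")
ax.set_title("Endpoint distribution by site (bubble size = enrollment)")
plt.show()
```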

Most clinical teams do have talented statistical counterparts, and their skills are certainly compatible with the analysis duties of the central monitor. They can design and develop programs to produce reports and review the results with expertise. However, it’s important to understand the new challenges the stats team faces when taking on RBM:

  1. Traditional stats teams work with static datasets and deliver reports designed to reflect clean data for a regulatory audience. For central monitoring, they need to learn to work with dynamic data and with reporting tools geared to it.
  2. Aggregating and integrating multiple sources of data (EDC, labs, and patient-reported outcomes) is time-consuming and requires careful checking. Though this is familiar work for the statistical team, it is customarily performed much later in the trial’s lifecycle.
  3. Initially, there is no “real” data for development and testing – stats teams are accustomed to working with datasets populated with representative values.
  4. Stats teams are *BUSY*. Faced with a choice between attending a meeting to review the Risk Assessment and Categorization Tool (RACT) and completing an analysis to respond to an FDA inquiry, which would you choose? Stats teams (particularly in CROs) are almost always engaged in time-critical analyses for pivotal trials. They do not have time to conduct data review.

Short of building and training a dedicated team to conduct central monitoring analytics, the only choice a research shop has is to guide the stats team through this transition. Let’s assume for a moment that executive management has recognized the need for RBM and will commit to staffing up so that the stats team can focus on it.

That leaves three technical challenges for the stats team to meet:

  1. Dynamic data – Working with dynamic data means that reports will be run many, many times. Traditional clinical stats methods rely upon Quality Control (QC) steps built for static data, with a manual, independent QC for each run. That is neither feasible nor necessary here. Report configurations and derived-data steps still need to be well checked, but the focus must be on methodology and logic, not end-to-end results. QC processes for repeated deliveries may be adapted from existing paradigms for SAS macro validation and applications development: essentially, integration testing, the inclusion of warning messages, and the elimination of sign-offs for each and every run on fresh data. A sketch of such run-time checks follows this list.
  2. Data aggregation – Data aggregation remains a challenge, but it is one the stats team is accustomed to; it is simply a matter of performing this work earlier in the process.
  3. No representative datasets – Lastly, there is the challenge of aggregating data and developing reports without representative datasets to work with. This is likely the most significant change for a biometrics team. Even though data standards (CDISC’s SDTM and ADaM) take some uncertainty out of planning and programming specifications, mature programming code is customarily developed against actual datasets. Test datasets may serve this purpose, though. For each system that contributes data to a trial, there should be a complete set of test data, which can be used to establish the data integration processes and the subsequent reports. We expect test data to be sparse and to contain intentional flaws designed to exercise the source systems. With those expectations established, initial reports can be developed and then finalized when mature data arrive. A sketch of such a test-data integration also follows this list.
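
To make the first challenge concrete, here is a hedged sketch of what automated run-time checks might look like in Python. The column names (subject_id, visit_date, endpoint) and the thresholds are assumptions for illustration. The point is that the methodology is validated once, and each refresh then passes through checks that raise warnings rather than requiring a manual sign-off.

```python
# A minimal sketch of run-time QC for reports on dynamic data: validate the
# logic once up front, then let automated checks warn on each refresh instead
# of performing an independent manual QC per run. Column names are hypothetical.
import warnings
import pandas as pd

def check_refresh(df: pd.DataFrame) -> pd.DataFrame:
    """Run automated sanity checks on a fresh extract and warn on anomalies."""
    if df["subject_id"].isna().any():
        warnings.warn("Extract contains records with a missing subject_id.")
    n_dupes = int(df.duplicated(subset=["subject_id", "visit_date"]).sum())
    if n_dupes:
        warnings.warn(f"Extract contains {n_dupes} duplicate subject/visit rows.")
    missing_rate = df["endpoint"].isna().mean()
    if missing_rate > 0.20:  # illustrative threshold, not an industry standard
        warnings.warn(f"Endpoint is missing for {missing_rate:.0%} of records.")
    return df  # pass the checked extract through to the reporting step
```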
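
For the second and third challenges, here is a similar sketch of integrating sparse test extracts from two hypothetical source systems. Every name and value below is invented; a real integration would map each source onto the trial’s data standards. The outer merge keeps the intentionally flawed and unmatched rows visible, which is exactly what test data is for.

```python
# A minimal sketch of integrating test extracts from two source systems
# before real data exist. All columns and values are hypothetical.
import pandas as pd

# Sparse EDC test extract, with an intentionally flawed row (missing visit date).
edc = pd.DataFrame({
    "subject_id": ["1001", "1002", "1003"],
    "visit_date": ["2023-01-10", None, "2023-01-12"],
    "endpoint":   [4.2, 3.9, None],
})

# Lab test extract, keyed the same way but containing an unmatched subject.
labs = pd.DataFrame({
    "subject_id": ["1001", "1003", "9999"],
    "visit_date": ["2023-01-10", "2023-01-12", "2023-01-15"],
    "alt_u_l":    [22.0, 31.5, 18.0],
})

# The indicator column surfaces every record that failed to integrate, so the
# aggregation logic can be exercised against the flaws before mature data arrive.
merged = edc.merge(labs, on=["subject_id", "visit_date"],
                   how="outer", indicator=True)
print(merged[merged["_merge"] != "both"])  # show the unmatched/flawed rows
```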

With the technical challenges addressed, let’s consider the organizational dynamics: resources, the extra work burden, and the stigma of performing data review. Though it is anecdotal, I believe the following experience illustrates these dynamics and the bind many statistical teams find themselves in.

The director of statistics at a CRO where I was piloting RBM was a good friend and colleague. When he missed his second RACT meeting with our team, I dropped by his office and found him and his team in an unusually chaotic state. He could not pull himself away for anything: the team had to reconfigure the entire final set of statistical output for a pivotal study. These were experienced, smart people faced with mitigating a critical error.

One of their primary safety endpoints was corrupted. Sites had sent patients to a clinical expert for workup and assessment. The scale used for the safety endpoint was unusual in that a zero indicated the presence of the defect under assessment, and a missing value indicated its absence. The clinical experts were familiar with the scale and thoroughly trained. Nonetheless, a few misinterpreted it.

This critical error was caught only during blinded data review, when the data were examined in aggregate and the team investigated patterns. Since the beginning of the study, certain clinical experts had misunderstood the scale. The team therefore had to exclude these patients through a newly created per-protocol analysis, essentially doubling the amount of output to create and deliver.

My next question for my friend was plain: “If we could have detected and repaired these problems much earlier in the project, would we be better off?” Of course. I didn’t need to connect the dots for him. Examining the data in aggregate, earlier on, as part of a risk-based approach to monitoring would have averted such a time-consuming disaster. The question remained, “How do we break out of this cycle?” Urgency trumps planning, and our stats team had no more time to give. We agreed that the optimal approach would be to move ahead with RBM adoption, with expert help in setting up the analytics.
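
To illustrate what “examining the data in aggregate” can look like, here is a minimal sketch, with invented rater IDs, column names, and values, of a check that could have surfaced the inverted-scale problem early: compare each expert’s mix of zeros and missing values on the endpoint against their peers’. On this particular scale, zeros and missings carried opposite clinical meanings, so a rater whose profile diverges sharply warrants follow-up.

```python
# A minimal sketch of an aggregate review of a safety endpoint where 0 means
# the defect is present and a missing value means it is absent. Rater "E3"
# has inverted the scale; the data and names are invented for illustration.
import numpy as np
import pandas as pd

assessments = pd.DataFrame({
    "rater_id": ["E1"] * 4 + ["E2"] * 4 + ["E3"] * 4,
    "endpoint": [np.nan, np.nan, 0.0, np.nan,   # E1: defect mostly absent
                 np.nan, 0.0, np.nan, np.nan,   # E2: defect mostly absent
                 0.0, 0.0, 0.0, 0.0],           # E3: records "present" every time
})

def pct_zero(s: pd.Series) -> float:
    return float((s == 0).mean())

def pct_missing(s: pd.Series) -> float:
    return float(s.isna().mean())

# Per-rater profile: E3's all-zero pattern stands out against E1 and E2.
profile = assessments.groupby("rater_id")["endpoint"].agg(
    n="size", pct_zero=pct_zero, pct_missing=pct_missing)
print(profile)
```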

To learn more about how a risk-based approach to clinical trial monitoring can help you avoid critical errors that waste time and resources, reach out to Jens-Olaf Vanggaard for access to expert assessment and implementation assistance.

 

Tags: risk-based monitoring, clinical trial design, Life Sciences, Life Science R&D

   
