With the advent of Risk Based Monitoring (RBM) technologies, many organizations have developed large libraries of dashboards, scorecards, algorithms, alerts and other feedback mechanisms to provide signals on the data from a clinical trial. RBM, however, embraces the principle of quality by design, not quality by volume, and this potentially powerful array of signals can be detrimental if the process for managing them is not carefully controlled. When every signal is chased, quality decreases as the key signals are missed, and costs increase as resources are poorly utilized.
There are parallels here to EDC, where electronic case report forms also accumulated an ever increasing set of edit checks. Yet despite studies carrying many hundreds of edit checks, quality issues were still found in audits. The conclusion was that careful thought was needed around each edit check used, ensuring that it was truly required and added value. Just because a check added value on the previous trial did not mean it should stay in for the current one. Too many checks simply overwhelmed the end user.
The situation in RBM has the potential to be even more extreme. Users are often not familiar (or comfortable) with reviewing data summaries, yet they are being put in control of a wide array of off-the-shelf output, such as complex representations of trial metrics, together with the ability to explore data with statistical tools and detailed reporting engines, without training in how to use the tools or interpret the output.
Even if the key skills are in place, the global trial lead, project manager and other members of the clinical team do not have the time to explore this wide variety of output proactively; they need to be led to where the true issues are. Another hidden problem with presenting a plethora of dashboards, visualizations, alerts and metrics to a monitoring team is achieving a consistent way of defining thresholds and interpreting real-time values. What reads as a red alert to one person may not to another. Rather than fishing for every possible error that could occur during the course of a trial, it is essential to identify in advance what standard triggers to use and to have them systematically tracked.
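To make the idea of predefined, consistently interpreted triggers concrete, here is a minimal sketch in Python. The metric names and threshold values are invented for illustration and are not drawn from any real RBM system or protocol; the point is simply that when thresholds are agreed in advance and evaluated by one shared rule, a "red" status means the same thing to every reviewer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Trigger:
    """A trigger agreed before the trial starts (hypothetical example)."""
    name: str                    # which risk metric this trigger tracks
    threshold: float             # value at which the signal escalates
    higher_is_worse: bool = True # direction of the risk

    def evaluate(self, value: float) -> str:
        """Classify a live metric value the same way for every reviewer."""
        breached = (value > self.threshold if self.higher_is_worse
                    else value < self.threshold)
        return "RED" if breached else "GREEN"

# Illustrative triggers defined up front, not fished for mid-trial.
triggers = [
    Trigger("query_rate_per_100_pages", threshold=5.0),
    Trigger("screen_failure_rate", threshold=0.30),
]

# Live site values are scored against the same shared thresholds.
site_metrics = {"query_rate_per_100_pages": 7.2, "screen_failure_rate": 0.12}
for t in triggers:
    print(f"{t.name}: {t.evaluate(site_metrics[t.name])}")
```

The design choice worth noting is that interpretation lives in the trigger definition, not in each reviewer's head, which is what makes systematic tracking possible.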
If you have been stuck in the whirlwind of output overload, think again. Carry out due diligence: first make sure the key risks in your protocol are captured with appropriate alerts and visualizations, then remove all the outputs that are not adding value, whether they are left over from a previous protocol or have been used before and rarely proved useful. In parallel, technology that can understand the underlying risk framework of a trial, assess the various signals being created, identify true underlying patterns and provide a clear workflow to engage users in an optimal manner is essential.
Importantly, it is also worth remembering that the intention of central monitoring is not to duplicate data reviews already being performed by data management. These clear workflows have the additional benefit of ensuring the central monitoring groups are not distracted by issues that are already owned elsewhere. Their focus should be on determining whether the risks defined at the beginning of the trial have occurred or not, whether they occurred with the probability predicted, how this impacts the overall trial progress, and therefore how risk mitigations should be deployed.
In conclusion, we can learn important lessons from the world of EDC as we start to adopt RBM. More is not always better, and a focus on good practice to identify which risks to manage and track, together with technology that helps drive workflow, can ensure that the output created drives more constructive use of time rather than overloading the clinical team.