
Introduction

If you’re wondering whether your data management system complies with Good Clinical Practice (GCP), the standard for clinical trials involving human subjects, this guide is for you. You’re right to wonder: many data management systems don’t in fact follow this international quality standard, which is issued by the ICH, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. The risk of being noncompliant increases even further when researchers combine different software tools for conducting surveys and for managing and storing data.

We at IDEACT see this happening often and provide an alternative, an all-in-one system – ResearchSurvey – that complies with GCP. Based on our experience and insights, we’d like to help you understand the GCP requirements for your data management system.

The GCP devotes only one section, 5.5.3, of the chapter on trial management, data handling and record keeping to data processing systems. Because it is so brief, it’s not easy to follow. That’s why we’ll take you through each point and provide recommendations on how to achieve compliance.

We hope this guide will help you conduct compliant, successful trials.

 

About the author

IDEACT was started in 2009 by its founder, Robert de Leeuw, as a way to digitally support clinical research. Unique platforms and smartphone applications have been developed and used in many clinical trials, and we aim to continue innovating in the clinical research field.

 

 

Table of Contents

  1. Matching your data processing system with your sponsor’s requirements
  2. Make sure you stick to the manual
  3. Data changes should be documented, and never completely deleted
  4. Use a detailed security system to keep data safe
  5. Keep track of those who can make data changes
  6. Avoid losing data by always keeping a backup
  7. Hide personal information to avoid bias
  8. Conclusion
  9. About ResearchSurvey

 

1. Matching your data processing system with your sponsor’s requirements

 

The first part of the GCP’s section 5.5.3 deals with making sure that your system complies with your study sponsor’s requirements for data handling. Whoever is funding your study is responsible for ensuring a compliant data management system.

 

Here’s what the GCP says:

  (a) Ensure and document that the electronic data processing system(s) conforms to the sponsor's established requirements for completeness, accuracy, reliability, and consistent intended performance (i.e., validation).

 

Let’s break down what this means: Your sponsor has regulatory requirements that you should be informed of, and that your data processing system should follow. These requirements include rules on using a system that ensures your data is complete, accurate and reliable.

A system that ensures your data is complete should be programmed to tell when a required section has not been filled in. It should also be able to differentiate between fields that were forgotten and fields that were left empty because they were not applicable. ResearchSurvey makes all questions mandatory, requiring a standard value when a question is purposefully left blank.
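The distinction between a forgotten field and a deliberately empty one can be sketched as follows. This is a hypothetical illustration, not ResearchSurvey’s actual implementation; the field names and the `N/A` marker are assumptions made for the example.

```python
# Hypothetical sketch: a completeness check that separates fields that
# were forgotten from fields deliberately marked "not applicable".
NOT_APPLICABLE = "N/A"  # standard value for purposefully blank answers

def check_completeness(record, required_fields):
    """Return the required fields that were genuinely forgotten."""
    forgotten = []
    for field in required_fields:
        value = record.get(field)
        if value is None or value == "":
            forgotten.append(field)  # truly missing: must be queried
        # a value of NOT_APPLICABLE counts as answered on purpose
    return forgotten

record = {"age": 54, "pregnancy_weeks": NOT_APPLICABLE, "weight_kg": ""}
print(check_completeness(record, ["age", "pregnancy_weeks", "weight_kg"]))
# -> ['weight_kg']
```

Because every question demands either a real value or the explicit “not applicable” marker, an empty field can only mean one thing: someone forgot it.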

Correct data relies heavily on the accuracy of the person entering it, but good software can help. ResearchSurvey does this by preventing fields that require similar-looking entries, such as date of birth and date of participation, from appearing next to each other. Numeric fields are also given maximum and minimum values, so that outliers or even impossible values can’t be entered.
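A minimum/maximum check of this kind could look like the sketch below. The field names and ranges are invented for illustration; a real study would define them per protocol.

```python
# Hypothetical sketch: min/max range checks on numeric fields, catching
# both outliers and impossible values at entry time.
FIELD_RANGES = {
    "age_years":   (18, 100),
    "weight_kg":   (30, 250),
    "systolic_bp": (60, 260),
}

def validate_ranges(record):
    """Return a list of (field, value, reason) for out-of-range entries."""
    errors = []
    for field, (lo, hi) in FIELD_RANGES.items():
        value = record.get(field)
        if value is None:
            continue  # completeness is checked separately
        if not lo <= value <= hi:
            errors.append((field, value, f"expected {lo}-{hi}"))
    return errors

print(validate_ranges({"age_years": 41, "weight_kg": 700}))
# -> [('weight_kg', 700, 'expected 30-250')]
```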

A data processing system is reliable when both its technical and its functional properties work reliably. Technical properties are working when the system is up and running and doing what it’s supposed to; downtime, where the system fails to perform, should be kept to a minimum. Functional properties refer to the options the system offers; for instance, a reliable system should always produce the same outcome for the same entry.

The last term the GCP mentions in this section is ‘consistent intended performance’. This refers to the need to save data in exactly the same way as previous entries, to ensure that the results are dependable and trustworthy.

Both the software provider and the researcher are responsible for adhering to the requirements listed above, and you should test your systems before every project. We at ResearchSurvey test every update made to the system, and document it with the date and person responsible, which makes it easy to reproduce during audits.

Every researcher should run a test trial before the start of every new project to make sure the system complies with their sponsor’s requirements for completeness, accuracy, reliability and consistent intended performance. We recommend including at least one series of data entry, paying close attention to the set rules, such as maximum and minimum values. Once it’s confirmed that the system has been validated, we recommend keeping thorough documentation of the test, removing the test data from the system, and then beginning the project.

 

 

 

2. Make sure you stick to the manual

 

The second part of the GCP’s section 5.5.3 refers to the standard operating procedure (SOP): step-by-step instructions (aka the manual).

  (b) Maintains Standard Operating Procedures (SOPs) for using these systems.

The GCP is unclear about whose SOPs you are to follow, but it’s implied you should follow both your organization’s and the data processing system’s SOPs.

Your organization’s SOPs should include step-by-step instructions about who in the organization is responsible for which data management process, and at what time. It’s important for every organization to have SOPs to reduce miscommunication and failure to comply with industry regulations. SOPs are also very useful for new employees, during an emergency, or when the person responsible for a procedure is not available. ISO 9001 is a popular quality management standard that can help organizations develop their SOPs, as it describes the processes such systems need. The software company you buy your data processing software from should also provide you with a quality manual to follow.

ResearchSurvey maintains SOPs for 11 topics that are important for any data management system. These SOPs are for internal use and help us reproduce steps as consistently as possible. They include:

  1. Data collection and management
  2. Privacy management of administrators and other users
  3. Development of the data management system
  4. Training and use of the data management system
  5. System validation and functional testing
  6. Securing the server and platform
  7. Maintaining the server and platform
  8. Change administration and audit trail
  9. Data backup and restore
  10. Emergency plan and out of order use
  11. Integrity compliance

 

 

3. Data changes should be documented, and never completely deleted

 

Part three of GCP’s 5.5.3 looks at how data changes are monitored. There should be evidence of every change, and all data should be retrievable if you ever need it.

  (c) Ensure that the systems are designed to permit data changes in such a way that the data changes are documented and that there is no deletion of entered data (i.e., maintain an audit trail, data trail, edit trail).

This means you should always be able to tell what changes were made. The system should keep a log of who has accessed the data and what was changed; an audit trail keeps track of all these changes. It is important to know WHO changed WHAT, and WHEN, as well as what the old and new values were. There should also be the option to explain WHY the data was changed. ResearchSurvey does this by giving each user a unique login. This allows the system to track changes by user, including when they happened, which field was changed, the old and new value, and a free-text section where the user explains why the change was made.
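The who/what/when/old/new/why structure can be sketched as an append-only log. This is a simplified illustration under our own assumptions (in-memory storage, invented user names), not ResearchSurvey’s actual audit trail.

```python
# Hypothetical sketch: an append-only audit trail. Edits never overwrite
# history; every change adds a record of who/what/when/old/new/why.
from datetime import datetime, timezone

audit_trail = []  # append-only: entries are never removed or edited
data = {}

def change_value(user, field, new_value, reason):
    """Apply a change and record it in the audit trail."""
    old_value = data.get(field)
    audit_trail.append({
        "who":  user,
        "what": field,
        "when": datetime.now(timezone.utc).isoformat(),
        "old":  old_value,
        "new":  new_value,
        "why":  reason,
    })
    data[field] = new_value

change_value("r.deleeuw", "systolic_bp", 120, "initial entry")
change_value("j.smith", "systolic_bp", 130, "transcription error corrected")
print([(e["who"], e["old"], e["new"]) for e in audit_trail])
# -> [('r.deleeuw', None, 120), ('j.smith', 120, 130)]
```

Note that the old value is captured before the field is overwritten, so the full history stays reconstructible even though the live record only holds the latest value.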

 

 

 

4. Use a detailed security system to keep data safe

The fourth part of the GCP’s 5.5.3 highlights the need for security to avoid unauthorized entry into the system.

  (d) Maintain a security system that prevents unauthorized access to the data.

You’re dealing with sensitive information that shouldn’t get into the wrong hands. Avoid data being intercepted by those without official approval by defining rules on server security, programmer and developer security, administrator security and end-user security.

Server security: Use a server that is privately administrated and recruit a server manager whose job is to monitor server access and run maintenance. At ResearchSurvey, we require that the server manager signs a privacy agreement and cannot gain access from outside the physical room of the server.

Programmer and developer rules: Your employees who are responsible for software development should be able to make updates without gaining access to unauthorized data. Keep the code in a separate space so that the development team can test and perform maintenance without being exposed to the data, and document who has access to what software. 

Administrator security: This refers to general access to the backend software; your sponsor should have access to the data, and everyone involved in the study needs individual logins so different access restrictions can be set.

End-user security: Study participants’ data should be given identification codes, so that their personal information isn’t linked to their data.

5. Keep track of those who can make data changes

This part of GCP’s 5.5.3 deals with monitoring who has data privileges, and how the server can assist with this.

  (e) Maintain a list of the individuals who are authorized to make data changes.

It’s important to monitor who has data access rights to avoid changes being made that you can’t track. Your sponsor should choose who has access privileges, and the server can help by tracking these people.

At ResearchSurvey we do this by storing the name, email, login details and level of authorization of everyone involved in conducting the study. We make sure that we have permission from everyone involved, as the European GDPR requires agreement from anyone whose personal data is stored on a server.
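An authorization list like this can be sketched as follows. The logins, levels and the two-tier scheme are hypothetical; the point is simply that an edit is refused unless the user appears on the list with sufficient rights.

```python
# Hypothetical sketch: a list of authorized individuals, with an
# authorization level that gates who may change data.
AUTHORIZED_USERS = {
    # login: (name, email, level) -- level 2 may edit, level 1 is read-only
    "j.smith": ("J. Smith", "j.smith@example.org", 2),
    "a.jones": ("A. Jones", "a.jones@example.org", 1),
}

def may_change_data(login):
    """True only for listed users with edit-level authorization."""
    entry = AUTHORIZED_USERS.get(login)
    return entry is not None and entry[2] >= 2

print(may_change_data("j.smith"), may_change_data("a.jones"),
      may_change_data("unknown"))
# -> True False False
```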

  

6. Avoid losing data by always keeping a backup

 

This section looks at backing up data so you can retrieve it if anything goes wrong. It’s important to choose the right backup strategy to ensure you can continue to operate and hold onto the data.

  (f) Maintain adequate backup of the data.

A malfunction during data collection is bad enough, but there’s nothing worse than realizing data has been lost in the process. To avoid this situation, ResearchSurvey uses a combination of the 3-2-1 strategy, redundancy and versioning:

The 3-2-1 strategy means keeping at least three copies of the data on two different storage media, with one copy offsite. In our setup, two copies are stored on two computers in the same location, usually where the data is being collected; if one computer crashes, the other continues to store the data. The third copy is stored offsite on a private cloud, so the data can still be saved if both computers crash during a malfunction.
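The idea can be sketched in a few lines. The file names and destination paths are invented for the example; in practice the “offsite” destination would be a private cloud or a machine in another building, not a local folder.

```python
# Hypothetical sketch of a 3-2-1 backup: the original plus a second
# local copy (different machine/medium) plus one offsite copy.
import shutil
from pathlib import Path

def backup_321(source, local_copy, offsite_copy):
    """Copy `source` to a second local location and an offsite location."""
    for destination in (local_copy, offsite_copy):
        Path(destination).parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, destination)  # copy, preserving metadata

# Example with hypothetical paths.
src = Path("trial_data.csv")
src.write_text("participant_id,value\n001,42\n")
backup_321(src, "local_backup/trial_data.csv", "offsite/trial_data.csv")
print(Path("offsite/trial_data.csv").read_text() == src.read_text())
# -> True
```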

Redundancy is a slightly different concept to backup, as it focuses on the server’s ability to continue running, no matter what happens. ResearchSurvey does this by configuring the server’s storage as RAID 10 (RAID stands for Redundant Array of Independent Disks; RAID 10 combines mirroring with striping). Pairs of disks mirror each other, so if one disk fails, its mirror continues to serve the data uninterrupted.

We recommend combining the above strategies with versioning, as it helps to retrieve past data. ResearchSurvey regularly takes snapshots of all data so that, even if data is erased or changed by accident, you can always go back and recover the original files.

ResearchSurvey uses a server with the above settings, meaning customer data is stored on both a private server in the UK and a private, encrypted local backup of all data in the Netherlands.

 

 

7. Hide personal information to avoid bias

This section of GCP 5.5.3 deals with hiding personal information of study participants. The sponsor should ensure that information remains anonymous to avoid any bias when analyzing data.

  (g) Safeguard the blinding, if any (e.g., maintain the blinding during data entry and processing).

Let’s break down this term: “Blinding” refers to the hiding of personal information during a clinical research study. It’s important that you don’t expose personal details when adding or analyzing the data, as this could cause bias when making observations. At ResearchSurvey, we do this by giving every person a unique identification number that masks all personal information, such as names and birthdays.

“Safeguarding” is the protection of the personal information. While it’s necessary to link ID numbers to the people involved in the study, these should be kept separate to avoid identifying individuals with their data.

At ResearchSurvey, we use two databases for each study: one holding the data and identification numbers, and the other holding the personal information. The only person who has access to the second database is the study’s principal investigator (PI), or someone the PI assigns access to. The system can access both databases, meaning it can send an email to participants without exposing email addresses to the sender.
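The two-database design can be sketched as below. This is a deliberately simplified illustration with invented names and in-memory stores, not the actual ResearchSurvey architecture; the point is that analysts only ever touch pseudonymous study data, while identity lookups are restricted to the PI.

```python
# Hypothetical sketch of the two-database blinding design: study data is
# keyed only by participant ID; personal details live in a separate store.
study_db = {}     # pseudonymous data, visible to everyone analysing it
identity_db = {}  # personal information, visible only to the PI

def enrol(participant_id, name, email):
    identity_db[participant_id] = {"name": name, "email": email}

def record_outcome(participant_id, outcome):
    study_db[participant_id] = {"outcome": outcome}

def email_address_for(participant_id, requesting_user, pi_user="pi"):
    """Only the PI may resolve an ID back to personal details."""
    if requesting_user != pi_user:
        raise PermissionError("blinding safeguard: identity access denied")
    return identity_db[participant_id]["email"]

enrol("P-001", "Jane Doe", "jane@example.org")
record_outcome("P-001", "responder")
print(study_db["P-001"])                  # analysts see only the ID and outcome
print(email_address_for("P-001", "pi"))   # the PI can still contact participants
```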

 

 

8. Conclusion

The success of a study relies on how you collect, log changes and store data; it should be safe and secure, both during and after the project. Many data management systems fail to meet the international quality standards, and this often comes down to using noncompliant software when conducting surveys, managing and storing data.

Relying on an unofficial data management system can put both the study and the researcher at risk. For this reason, ResearchSurvey provides an alternative, all-in-one system that complies with GCP. We hope that this guide has made these quality standards clearer, so you can achieve compliance with the rules and conduct successful trials.

 

 

About ResearchSurvey

ResearchSurvey was developed in 2011 by researchers, for researchers. We created the system out of necessity, when we ran into some of the same challenges you might be facing:

We were planning a randomized controlled trial (RCT) to compare two different therapies that prevent intra-uterine adhesions. We were running this trial on a low budget but had a high number of participants, and we needed a system that could take a part of the work out of our hands. Further, to comply with GDPR, we needed to separate participants’ personal data from their questionnaire data. We were able to do that by creating separate databases that could only communicate one way, and by enabling only an administrator to view the personal data.

We needed the system to do even more for us: we needed it to randomize participants. We also wanted participants to fill out online questionnaires in the system, and we wanted to use it to keep track of their replies. We developed these functionalities, and let the system send planned reminder emails. This took a lot of work out of our hands, but then we noticed our participants weren’t reacting to our emails. So we added SMS text messages to the questionnaire invitations and made the site mobile adaptive, and people filled out questionnaires straight from the text messages on their phones. We also developed dedicated smartphone applications for entering daily routine questions, which otherwise have a high rate of incomplete data.

Of course, we also needed the server to comply with high security procedures. We took all data to a private server, not connected to any other sites. We also hired a dedicated server security officer who secured the server and separated the data from personal information.

Over the years, each trial provided new insights and we kept adding new functionalities to our system. We continuously updated the code that’s kept separate from data in our database. Now many other studies are running on our system. We’re a community of researchers who learn from each other, add, and share new functionalities on this effective system.