Beyond Regulatory Compliance: Adopting a risk-based approach to efficiently validate computer systems

The life sciences industry has made significant progress in computer systems validation over the past decade: the industry norm for the acceptable scope of validation activities has evolved, and best practices have emerged.  By adopting a risk-based approach, organizations can validate systems more efficiently while still meeting regulatory requirements.  At Sparta Systems, many of our employees come from the life sciences industry and can offer distinct perspectives shaped by their own experience.  I interviewed three such individuals about their views on risk-based validation and their guidance for companies looking to implement a risk-based approach.

Prior to joining Sparta Systems, David Hartmann worked for 20 years in both bulk and finished-dosage facilities as a Controls Engineer, Validation Manager, and IT Manager.

Mohan Ponnudurai has over 20 years of experience in technical product development, marketing, and customer-focused initiatives across sectors including enterprise software solutions, embedded real-time systems in avionics and infotainment, air defense and other aerospace programs, and manufacturing. He started his career as an engineer, working on design, testing, fabrication, manufacturing, site implementation, and compliance with standards.

Paul Marini spent two years as a consultant, serving as a Validation Lead and system implementer. He then spent more than five years in the medical device industry, where he served as Global Validation Lead, managed the global compliance review process, and acted as business system owner for all non-SAP quality and regulatory enterprise applications.

What is your definition of risk-based validation?

David Hartmann: Risk-based validation is conducting directed testing around the system functionality to provide the highest degree of assurance that a system is performing to its specification.

Mohan Ponnudurai: My definition of risk-based validation is incorporating a process analytical technology (PAT) framework in which quality is built in by design and validation is demonstrated through continuous quality assurance: a process is continually monitored, evaluated, and adjusted using validated in-process measurements, tests, controls, and process end points. This is very common in complex discrete manufacturing sectors.
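As a rough illustration of that continuous-monitoring idea, here is a minimal sketch in Python; the parameter name and control limits are hypothetical, chosen only to show the pattern of checking validated in-process measurements and flagging when adjustment is needed.

# Minimal sketch of continuous in-process monitoring (hypothetical
# parameter name and control limits, for illustration only).
from dataclasses import dataclass

@dataclass
class ControlLimit:
    low: float
    high: float

# Validated control limits for a hypothetical in-process measurement.
LIMITS = {"granulation_moisture_pct": ControlLimit(low=1.5, high=3.0)}

def evaluate(parameter: str, reading: float) -> str:
    """Compare a reading to its validated limits and recommend action."""
    limit = LIMITS[parameter]
    if limit.low <= reading <= limit.high:
        return "in control - continue"
    return "out of control - evaluate and adjust the process"

for reading in (2.1, 3.4, 1.2):
    print(reading, "->", evaluate("granulation_moisture_pct", reading))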

Paul Marini: The simple definition is spending time and effort up front performing and documenting risk-based assessments of your work effort to greatly reduce the level of effort required to validate a system without sacrificing significant quality.  Remember, there is risk in every validation effort.  With this approach you have at least identified your greatest “at-risk” areas and addressed them.

With all of the regulatory requirements and industry guidance around validation, what exactly are regulators looking for with respect to computer systems validation?

DH: Regulators are looking first to see that your system is in “control,” meaning that it is documented and governed by current SOPs.  They are also looking to see that system access is controlled and monitored.  They want to see that SOPs are adhered to, that any change to the system is managed according to SOP, and that changes have been properly tested.

MP: Regulators are more focused on integrated approaches that tie in good science and good engineering practices that support innovation.  To achieve this, they are collaborating with global agencies to develop harmonized scientific standards for products along with the application of GAMP5 principles for GxP computerized systems in cGMP environments. 

Regulators are also more focused on the maintenance of compliance and fitness for intended use of computerized systems throughout the life cycle, which would include how the system handles change management, business continuity management, security and system administration, record management, retirement, and periodic review of the master validation plan.

PM: There are two distinct areas that a regulator may focus on when it comes to computer system validation.  The first and most obvious is your system’s design history file.  The second is your quality system.  While a bad set of documents can lead to a 483, systemic failure of your quality system is considerably worse.

When looking at computer system validation, it is important to note that the computerized system’s design history file will generally be reviewed in layers.  The first and most important layer is your summary documents.  This includes your validation plan, validation summary report, and traceability matrix.  These documents need to be clearly written, appropriately detailed and, most importantly, accurate.  A regulator or auditor should be able to read these documents, ask clear and direct questions (which you are prepared to answer), and feel that a deeper dive is not required.
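To make the traceability matrix concrete, here is a minimal Python sketch; the requirement and test-case IDs are invented for illustration. It links each requirement to the test cases that verify it and surfaces any requirement left untested, which is exactly the kind of gap a reviewer will look for.

# Minimal traceability-matrix sketch (requirement and test-case IDs
# are hypothetical).
requirements = ["URS-001", "URS-002", "URS-003"]

# Each test case records which requirements it verifies and its result.
test_cases = {
    "TC-01": {"covers": ["URS-001"], "result": "pass"},
    "TC-02": {"covers": ["URS-001", "URS-002"], "result": "pass"},
}

# Build the matrix: requirement -> list of (test case, result) pairs.
matrix = {req: [] for req in requirements}
for tc_id, tc in test_cases.items():
    for req in tc["covers"]:
        matrix[req].append((tc_id, tc["result"]))

# Report coverage; URS-003 surfaces as an untested gap to resolve.
for req, runs in matrix.items():
    print(req, "->", runs if runs else "NOT TESTED - gap to resolve")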

Specifically within your validation summary report it is best to list all of your documents (with revisions and dates), all of your test cases and results, as well as your defects and deviations.  A common misunderstanding is that test case defects are a negative; the opposite is true.  Test case errors imply that you have rigorously tested your application and have, ideally, eliminated the errors with the greatest impact.  What is most likely to raise suspicion is a lack of testing errors: if no defects are reported, a regulator will most likely search until they find one.

From a quality system side, it is equally important to have process documents that define how you will validate your system.  Besides the obvious “per deliverable” SOP requirement, you should also ensure that an SOP exists that defines your validation process.  This document could be as simple as providing a list of required documents or as complex as including a risk-based decision tree, which helps customize validation deliverables based on the system’s GAMP category and/or intended use.  The most important thing is that you follow those SOPs and that you document and process deviations in your validation documentation, either in your validation planning document or validation summary report.
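As a sketch of what such a risk-based decision tree might encode, the Python below scales the deliverable list by GAMP software category and GxP impact. The category cut-offs and document lists are illustrative assumptions, not a prescribed standard.

# Hypothetical risk-based decision tree: scale validation deliverables
# by GAMP software category and GxP impact. Lists are illustrative.
BASE_DELIVERABLES = ["validation plan", "traceability matrix",
                     "validation summary report"]

def deliverables(gamp_category: int, gxp_impact: bool) -> list:
    docs = list(BASE_DELIVERABLES)
    if gamp_category >= 4:   # configured products
        docs += ["configuration specification", "functional testing"]
    if gamp_category == 5:   # custom-developed software
        docs += ["design specification", "code review", "unit testing"]
    if gxp_impact:           # direct impact on GxP records or decisions
        docs.append("requirements-based risk assessment")
    return docs

print(deliverables(gamp_category=3, gxp_impact=False))
print(deliverables(gamp_category=5, gxp_impact=True))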

How do life sciences companies assess validation risk, and how do you achieve the right balance between taking a risk-based approach to validation and ensuring compliance?

DH: By getting the buy-in of the quality organization early in the project and making it an integral member of the project team, the quality group gains knowledge of the system being developed and is much more agreeable to limited testing of “non-critical” items.  Additionally, if the testing and evaluation approach is spelled out in the quality assurance plan (QAP), there is evidence to present to an auditor, as well as a shared understanding of the validation approach across the project team.  SOPs are also a great tool to unify the testing and validation approach.

MP: Life sciences companies assess software validation risk based on a clear understanding of the process and the potential impact on patient safety, product quality, and data integrity.  Quality risk assessment considers the computerized system’s impact on those elements, together with regulatory requirements, user requirements, system architecture, and supplier capability.

Companies achieve this by applying the key concepts of GAMP5:

- Product and process understanding
- Life cycle approach within a QMS
- Scalable life cycle activities
- Science-based quality risk management
- Leveraging supplier involvement

PM: While in charge of the global validation portfolio, I helped refine the validation planning documents, which included a scalable validation deliverable list based on regulatory impact as well as type of system or type of change.  We also mandated requirements-based risk assessments for all regulated projects.

In your experience, how have different life sciences companies implemented risk-based validation?

DH: In my experience, the only way to implement risk-based validation is through strict adherence to the SLC methodology.  In writing the validation plan and the system’s detailed functional requirements, you establish the framework for risk-based validation.  The functional requirements allow the project team to quantify the issues that may arise with each requirement, along with the likelihood and impact of each issue.  The validation plan then specifies where each requirement will be tested, and the project team can decide the appropriate level of testing.
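A minimal Python sketch of that quantification step might look like the following; the requirements, the 1-to-3 likelihood and impact scales, and the score thresholds are assumptions for illustration only.

# Hypothetical requirement-level risk scoring: likelihood x impact
# decides how deeply each functional requirement is tested.
requirements = [
    # (requirement, likelihood 1-3, impact 1-3); values are illustrative
    ("Audit trail captures record changes", 2, 3),
    ("UI displays company logo", 1, 1),
    ("E-signature required on approval", 2, 3),
]

def test_depth(likelihood: int, impact: int) -> str:
    score = likelihood * impact
    if score >= 6:
        return "full challenge testing (positive and negative cases)"
    if score >= 3:
        return "standard functional testing"
    return "verification by inspection"

for name, likelihood, impact in requirements:
    print(f"{name}: score {likelihood * impact} -> "
          f"{test_depth(likelihood, impact)}")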

MP: The companies apply the GAMP5 concepts in their validation strategy and develop a robust master validation plan with clear resource planning and deliverables outlined and documented.  During execution, they would also ensure that the initial user requirements, functional and design specifications, and module unit specifications are tested appropriately, and more importantly, aligned for the intended use and process in the production environment.

PM: The most effective guidance I have seen followed, which is also recommended in GAMP5, is to leverage your vendor’s work efforts after qualifying their quality processes and deliverables.  As is frequently the case, a little up-front effort can greatly reduce the drain on your own resources.
