Computer System Validation: What is it?





2 Computer System Validation: What is it?
My name is Adam Woodjetts. I have been a validation consultant with Instem for over 7 years, carrying out many on-site validations of our software, including clinical instrument interfaces and our dispense application. In this session we are going to look at Computer System Validation and try to explain what it is and why it has to be done.
Adam Woodjetts, Validation Consultant, Instem

3 Why Validate your Software?
Why do you have to go through the often tedious, and almost always resource- and time-intensive, process of validation?

4 Regulatory Requirement
- Good "X" Practice: a collection of FDA guidelines adopted as regulations, then laws (Manufacturing, Clinical, Laboratory)
- 21 CFR (Code of Federal Regulations) - FDA: Part 58 - GLP for Nonclinical Laboratory Studies; Part 11 - Electronic Records; Electronic Signatures
- Establishing evidence and confidence in consistent operation according to specifications
- Documenting and systematically challenging and testing
- Reduce risk to patients

Validation of "systems" - the processes, software, and instruments used in the production of pharmaceuticals, chemicals, and medical devices - is a regulatory requirement.

<Click> Good X Practice: Industry regulators refer to GxP, Good "something" Practice, where the X could be Manufacturing, Clinical, Laboratory, and many more. It started with the FDA (US Food and Drug Administration) developing GMP (Good Manufacturing Practice) guidelines; these guidelines were then adopted as regulations and consequently law. The major regulations from the FDA are "Title 21 CFR Part 58", which details expectations for GLP studies, and "Title 21 CFR Part 11", which deals with electronic records and electronic signatures. There are more regulations and guidance documents within the FDA, and similar regulations are applied by other agencies: the EMA in Europe (European Medicines Agency), the MHRA in Great Britain (Medicines and Healthcare products Regulatory Agency), and the CFDA in China (China Food and Drug Administration).

<Click> Establish Evidence: Companies involved with the development, manufacture, and use of items in the pharmaceutical industry must be able to provide evidence of confidence that software and associated processes operate consistently, predictably, and according to the specifications of the manufacturer. This means understanding and documenting what you want or need the system to do, and demonstrating with objective evidence, in a methodical, structured manner, that it does it.

<Click> Reduce Risk: The objective of this process is that, by ensuring reliable, quality data collection and/or manufacturing processes, the risk to patients is minimised.

<Click> Validation of Software: The computer software and the associated processes and instruments used must be validated to meet regulatory expectations.

<Click> Evidence of Confidence: Validation is the provision of evidence demonstrating confidence in the software's ability to perform as required in a consistent manner. Validation must demonstrate the "suitability for purpose" of the software, instruments, and associated processes.

- Validation of software is required to fulfil regulatory demands
- Evidence of confidence in "suitability for purpose"

5 Demonstrating "Suitability for Purpose"
It is a basic statement, but this is the primary objective of Computer System Validation, or CSV. We are going to describe and expand on this, showing what "suitability for purpose" means and how it can be demonstrated.

6 User Requirements
- Why do you use a specific system? What functionality do you utilise? How did you make the selection?
- User Requirements: clear and objective statements
- Regulatory requirements
- How do you want to use the software? Workflow diagrams; Standard Operating Procedures

As users, you have selected, purchased, and use particular pieces of software or instrumentation for a reason.

<Click> Why? Why do you use a specific "system"? It may offer functionality which you utilise; it saves you time; it provides consistency. Was this the only instrument or piece of software you looked at when making the selection decision? How did you know it was the right system for you?

<Click> User Requirements: The answer to these questions would normally be found in "User Requirements". Whether a list of statements in a document or a complex matrix within a spreadsheet, what the "system" needs to be capable of, or enable, should be established in order to make the decision to use it. Requirements should be objective, clearly worded statements. What does "the system should be fast and responsive" actually mean? One person's expectations of a "fast" system may differ from those of their colleagues. It is better to state that "formulations should be processed in X seconds". User requirements should also consider regulatory expectations, such as: time-stamped audit trails, which enable data recreation and traceability; generation of complete records in paper and electronic media, potentially upon request by auditors or inspectors; and retention of records in a retrievable state for a specified period. All of these items are part of 21 CFR Part 11 - Electronic Records and Signatures.

<Click> How do you want to use the software: As well as knowing what the software needs to do, it should be understood how the software will be used, and the processes it must enable or integrate with. This information can be presented as workflow diagrams - a graphic representation of the processes and procedures the system must follow - or documented as Standard Operating Procedures.

<Click> Basis of CSV: Documenting what you want the software to do, and how you want to do it, forms the basis of the CSV process. Without a clear set of requirements there is no way of knowing what anyone expects of the "system", so how can it be proven to be suitable?

- Basis of Computer System Validation
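The advice above - replace vague wording with a measurable criterion - can be illustrated with a small sketch. Everything here is invented for illustration: the requirement IDs, the wording, and the 5-second limit are assumptions, not taken from any real specification.

```python
# Hypothetical example of recording user requirements as clear, objective,
# testable statements. IDs, wording, and limits are invented.

requirements = [
    {"id": "UR-001",
     "text": "Formulations shall be processed in under 5 seconds.",
     "regulatory": False},
    {"id": "UR-002",
     "text": "All record changes shall be captured in a time-stamped audit trail.",
     "regulatory": True},  # a 21 CFR Part 11 expectation
]

# A vague statement such as "the system should be fast" has no measurable
# pass/fail criterion, so it cannot be objectively tested; "under 5 seconds"
# can be verified with a stopwatch or a log timestamp.
```

Flagging which requirements trace back to a regulation also makes it easy to show an inspector that regulatory expectations were considered.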

7 Risk Analysis
- Establish the importance or impact of requirements; grade or score each requirement using a chosen method
- Probability of requirement "failure": how often is it "exercised"; how complex is the associated process
- Impact of requirement "failure": alternative methods; production stops; cost to resolve
- Combine to get a score/grade

Having a structured collection of requirements will allow them to be prioritised through risk analysis.

<Click> Importance: Not everything in the list of requirements will prevent the "system" from working, or your processes from being performed, and there will be some requirements that, if not fulfilled, would or could be catastrophic - potentially affecting patients! Risk analysis allows requirements to be "graded" or "scored". There are several possible methods, but a simple one is to consider the combination of the probability versus the impact of a requirement failure - the requirement not being met.

<Click> Probability: The probability or likelihood of a failure is linked to how often the requirement is "exercised", and to how complex the process is. The second point is easier to consider: the theory is that the more complex the process, the more likely it is to fail. However, there are at least two possible views regarding how often a requirement is exercised: if it is the basic function of the "system", surely it would not have been released if it didn't work properly? Or: it is functionality used many times, therefore the chance of something failing is increased. Whatever the opinion, it should be applied with a degree of consistency, giving the requirement a grade/score on the probability scale.

<Click> Impact: If a requirement is not met, what is the impact? A number of things should be considered, including (but not limited to): are there alternative methods of achieving the same thing? Will failure stop production? What is the cost to resolve the failure? And what are the patient safety considerations? This gives the requirement a grade/score on the impact scale.

<Click> Combine: Using the very basic green/amber/red example displayed here, a requirement which has a high probability of failure but low impact is graded amber. What happens with this grading depends on each individual organisation's situation. For example, if time and resources are limited, it might be possible to justify the decision NOT to test requirements graded amber. A similar alternative method would be to use a number-based grading scale, with more levels on each axis of the table - as long as each score is described and applied consistently. The decision on whether to test could then be based on a combined score higher than a predetermined level.

<Click> Justification: Risk analysis of user requirements can provide the evidence to justify the exclusion of requirements from testing, and therefore potentially reduce the cost and duration of validation.

<Click> Prioritise or Focus: Risk analysis can also show the areas on which testing should be focused - where it is most important to demonstrate correct functionality.

- Justification for exclusion of requirements
- Prioritise or focus testing efforts
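The probability-versus-impact combination described above can be sketched in a few lines. This is a minimal illustration only: the three-level scale, the multiplication, the thresholds, and the "skip green" policy are assumptions an organisation would define and justify for itself, not rules from any regulation.

```python
# Hypothetical sketch of a probability-vs-impact risk grading.
# Scale, thresholds, and the testing policy are illustrative assumptions.

LEVELS = {"LOW": 1, "MEDIUM": 2, "HIGH": 3}

def grade(probability: str, impact: str) -> str:
    """Combine probability and impact of a requirement failure into a grade."""
    score = LEVELS[probability] * LEVELS[impact]
    if score <= 2:
        return "GREEN"
    if score <= 4:
        return "AMBER"
    return "RED"

def must_test(requirement_grade: str) -> bool:
    # Example policy: an organisation short on time might justify
    # excluding GREEN-graded requirements from testing.
    return requirement_grade != "GREEN"
```

With this scheme, a high-probability but low-impact requirement lands on amber, matching the green/amber/red example in the slide; consistency comes from writing the scale and threshold down once and applying it to every requirement.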

8 Testing your Requirements
- Establishing that the environment conforms to the manufacturer's specifications (IQ)
- Functional Testing or Operational Qualification: requirements mapped to manual test scripts; automated test tools
- User Acceptance Testing or Performance Qualification: testing the workflow and operating procedures; carried out by your users

Having established what the "system" is required to do, it is necessary to demonstrate that these requirements are met.

<Click> Installation Qualification: It is important to test and document that the environment on which the system is to be used has been correctly configured to conform to the manufacturer's specifications. Has the software been installed correctly on appropriate hardware? Do instruments communicate with the associated software as expected? This is another collection of documentation and testing which is included in the validation package.

<Click> Functional Testing: It is necessary to demonstrate that the "functionality" of the software or instruments has been considered. This can be achieved by executing a suite of manual test scripts, mapped to individual requirements. The test scripts form part of the documentary evidence which can be used to demonstrate the system's "suitability". It is also possible to utilise automated test tools, which can execute tests autonomously once started.

<Click> UAT: Having proven that the software functions correctly, it is necessary to show that the software can be used the way you wish to use it. This area of validation is of greater value than the preceding stage, as it demonstrates the desired workflow with "realistic" data, using tests executed by actual users.

<Click> Package of Evidence: Executing these tests will provide you with a package of evidence testing your requirements, demonstrating that the system is suitable for its intended purpose.

- Documentary evidence of requirement fulfilment
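The mapping of requirements to test scripts described above is often kept as a traceability matrix. A minimal sketch, with requirement IDs and script names invented purely for illustration:

```python
# Hypothetical traceability matrix: each user requirement maps to the
# test scripts (OQ = Operational Qualification, PQ = Performance
# Qualification) that demonstrate it. All identifiers are invented.

trace = {
    "UR-001 Time-stamped audit trail":     ["OQ-010", "OQ-011"],
    "UR-002 Export complete records":      ["OQ-020"],
    "UR-003 Dispense workflow end-to-end": ["PQ-001"],  # executed by users
}

def untested(matrix: dict) -> list:
    """Requirements with no mapped test script: gaps in the evidence."""
    return [req for req, scripts in matrix.items() if not scripts]
```

Any requirement returned by `untested` either needs a script written or a documented, risk-based justification for its exclusion.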

9 Record Faults and Incidents
- Record any testing incidents: symptoms; cause; consequences - impact - resolution
- Deviations from "the Plan": how; why; consequences - impact
- Environment changes

During validation not everything will work, and things don't always go according to plan.

<Click> Test Incidents: If incidents are encountered during testing, the symptoms should be recorded to enable investigation. Even if it is not known at the time the incident is encountered, the cause should be established. The immediate consequence of the incident, as well as any impact there may be on the future use of the system, should be recorded - not just when the incident occurs, but as ongoing investigations start to understand what has happened. A resolution may not be a "fix" to the problem, but a method of continuing the test by working around the problem, or actions which need to be taken before using the system in a live environment.

<Click> Deviations from "the Plan": The "Validation Plan" is something yet to be discussed, but if you deviate from a planned set of actions, it should be recorded: how you have deviated, why, and what the consequences are. For example, you might discover that an instrument has started malfunctioning and must be replaced. This might require a new test to be written, which has to be run at another time. That is not a bad thing, and the user requirements (once reviewed) will still be tested - but not as planned, due to a change in the environment and in the test scripts to be executed.

<Click> Changes to the Environment: In a similar fashion to incidents and deviations, changes to the environment you are validating should be recorded. In the previous example, the original instrument was intended to be validated, but replacing it is a change in the validation environment, especially if there are any installation and configuration activities related to either instrument.

<Click> Record of what has happened: Validation can be considered a record of how requirements are proven, and of what happened when it was done. Incidents don't have to be "fixed", but the resolutions - the actions taken to mitigate the fault or work around the situation - must be recorded.

<Click> Usable in the Distant Future: The first time the validation package is reviewed may be some time after the validation activities finished. Even if the same staff are available, they may not remember the exact scenario leading to an incident or change. When recording your incidents, consider: can you repeat, understand, or recreate the incident with the information provided?

- Validation is a record of what has happened
- Can the issue be "recreated" with the information recorded?
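The fields an incident record needs - symptoms, cause, impact, resolution - can be sketched as a small record type. This is an illustrative shape only; the field names and the "recreatable" check are assumptions, not a prescribed format.

```python
# Hypothetical structure for a testing-incident record, covering the
# fields the slide calls for. Field names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class TestIncident:
    incident_id: str
    symptoms: str                         # what was observed at the time
    cause: str = "under investigation"    # may only be known later
    impact: str = ""                      # consequences for current and future use
    resolution: str = ""                  # fix, workaround, or pre-go-live action
    environment_changes: list = field(default_factory=list)

    def recreatable(self) -> bool:
        """Rough check: is enough recorded to recreate the issue later?"""
        return bool(self.symptoms) and self.cause != "under investigation"
```

The default `cause` reflects the point made above: record the symptoms immediately, and update the record as the ongoing investigation establishes the cause and resolution.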

10 Packaging the Documentary Evidence
- Create a structured Validation Plan: environment and installation details (IQ); what is being tested and why; how requirements are proven (demonstrated); automated and manual tests; leveraging vendor test evidence
- Summary Report: what are your findings/conclusions? Can the software be used with confidence? Deviations from the Validation Plan; what evidence supports your findings?

It is not sufficient to have a collection of requirements, just run some tests, and make the statement "it is suitable" and therefore validated.

<Click> Validation Plan: The test evidence must be viewed as part of a structured package of information, which starts with a validation plan detailing: the environment and configuration of the system, or Installation Qualification; and what is being tested - which applications, user requirements, or groups of requirements ARE being tested, and which are NOT, referring to the change impact and risk analysis processes.

<Click> How are requirements proven: It is not necessary to demonstrate/test all requirements using the same methods. With the potential choice of automated or manual tests, and the possibility of testing at different stages of the validation, the Validation Plan should detail which methods are being used to demonstrate which user requirements, with the justification for those decisions.

<Click> Summary Report: In essence, the summary report is a statement of whether it has been proven that the software is suitable for purpose. It should include any deviations from the Validation Plan, and details of any incidents encountered during the validation, including what their impact is and how they are to be resolved or mitigated. The summary report is not a replacement for all of the validation evidence, but an initial point of reference (for auditors or regulatory bodies, for instance). The report should therefore detail what evidence is available, and where it can be found.

Validation starts with a plan - the what, how, and why - which guides the execution activities and determines what documentary evidence will be generated.

<Click> Validation Report: Validation ends with a summary report - was the plan followed, and has it been demonstrated that the system works the way you intend to use it?

- Validation Plan - what, how, and why you are validating
- Validation Summary Report - demonstrate "suitability for purpose"

11 Questions
- Retrospective Evaluation - already using the software: do a gap analysis of what you have versus what you should have; fill the gaps - perform the validation!
- Maintaining the validated state: what changes may affect the behaviour of the system; what tests need to be executed to re-prove the affected user requirements (risk analysis); change impact analysis
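The change impact analysis mentioned above - working out which requirements a change touches and which tests must be re-executed to stay validated - can be sketched as a pair of lookups. All the change names, requirement IDs, and test scripts here are invented for illustration:

```python
# Hypothetical change-impact analysis: map a proposed change to the
# user requirements it affects, then to the regression tests needed
# to maintain the validated state. All identifiers are invented.

affected_by = {
    "upgrade balance firmware": ["UR-003"],
    "new report template":      ["UR-002"],
}

tests_for = {
    "UR-002": ["OQ-020"],
    "UR-003": ["PQ-001"],
}

def regression_tests(change: str) -> list:
    """Tests to re-execute so the changed behaviour is re-demonstrated."""
    return sorted(t for req in affected_by.get(change, [])
                    for t in tests_for.get(req, []))
```

A change that maps to no requirement still deserves a documented justification for why no re-testing is needed; the empty list is a prompt for that decision, not permission to skip it.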


