Application of ICH Q9 Quality Risk Management Tools
To assess the reliability of the evidence obtained, we must consider the relevance, sufficiency, and competence of the evidence collected. The following guidelines can help define these attributes. Is the evidence objective or subjective? Evidence is objective when two or more independent auditors are likely to arrive at the same result. Documented evidence, such as records, provides proof of compliance with procedures and is more reliable than verbal evidence.
The most common approach is to apply the same scale, particularly the scale for likelihood (or probability of occurrence), to all risk assessments, regardless of the nature of the individual situation, product, or process being assessed. Firms and some regulatory agencies like this approach because the risk ratings can then be compared across products, processes, departments, and even sites.
One limitation of this approach is that while the probability of occurrence scale may be entirely appropriate for one manufacturing process or product, it may be entirely out of range for others. Another is that using the same scale in all risk assessments may place too much emphasis on the final RPNs. This is a problem because directly comparing RPNs from a range of risk assessments fails to recognize that RPNs are derived from ordinal number scales; multiplying ordinal numbers to generate RPNs has questionable mathematical validity.
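To make the mathematical concern concrete, the following sketch (hypothetical 1–5 FMEA-style scales, not taken from the article) shows how the conventional RPN product can collapse very different risk profiles into the same number:

```python
# Hypothetical FMEA-style ratings on 1-5 ordinal scales:
# severity (S), occurrence (O), detectability (D).
def rpn(severity, occurrence, detectability):
    """Risk Priority Number: the conventional S x O x D product."""
    return severity * occurrence * detectability

# Two very different risk profiles...
catastrophic_but_rare = rpn(severity=5, occurrence=1, detectability=2)
minor_but_frequent = rpn(severity=1, occurrence=5, detectability=2)

# ...collapse to the same RPN, even though a patient-impacting failure
# and a cosmetic one are clearly not equivalent risks. The ordinal
# ratings carry only rank order, so their product has no calibrated
# meaning across assessments.
print(catastrophic_but_rare, minor_but_frequent)  # 10 10
```

This is one reason that directly ranking RPNs from different risk assessments against one another can mislead.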
The second approach to using likelihood scales is to use available information, as limited or as imperfect as it may be, to develop a customized scale for a particular risk assessment. A scale could be based on data from 50 small-scale lots of a drug substance produced in the last two years as part of product development work, for example.
A finely graduated scale created from such limited data could connote much more precision than the actual experience base provides; a three-point scale may be more appropriate in that situation. If, on the other hand, the process for manufacturing this drug substance was similar to processes used for other products at the site, data from those products could also be drawn upon.
This could result in a much larger experience set.
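One way such lot data might be translated into a coarse, customized likelihood scale is sketched below; the three-point scale and the rate cut-offs are illustrative assumptions, not values from ICH Q9 or this article:

```python
# Hypothetical: derive a likelihood rating from observed lot history.
# Suppose 50 small-scale lots were produced and a given failure mode
# was observed in some fraction of them.
def likelihood_rating(failures_observed, lots_produced):
    """Map an observed failure rate onto a coarse three-point scale.

    The cut-offs below are illustrative assumptions chosen to match
    the resolution a 50-lot experience base can support.
    """
    rate = failures_observed / lots_produced
    if rate < 0.02:        # rarer than ~1 in 50 lots
        return 1           # "unlikely"
    elif rate < 0.10:      # up to ~1 in 10 lots
        return 2           # "occasional"
    else:
        return 3           # "frequent"

print(likelihood_rating(0, 50))  # 1
print(likelihood_rating(3, 50))  # 2
print(likelihood_rating(8, 50))  # 3
```

A coarser scale like this one avoids implying more precision than the underlying data can justify.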
Several firms that we have recently visited or inspected have developed likelihood and severity scales that use the same ordinal number ranges (1–5) or the same set of level names across assessments, with keyword definitions tailored to each exercise. In a 1–5 ordinal severity scale, for example, there may be five degrees of patient-related impact, a set of keywords describing five different degrees of GMP noncompliance, and other sets of keywords relating to drug availability or to hazards affecting critical quality attributes.
So instead of simply assigning a severity rating based only on the high-level names associated with the available severity levels, assessors can work from these more specific keyword definitions. This helps assign severity ratings in a more consistent and less biased manner. A similar approach can be taken when customizing likelihood-of-occurrence scales for individual risk assessments. Scales can have keywords and definitions that relate to occurrences per unit of time (such as one event or fewer in five years, or one or more events every week) or per numbers of batches or units produced.
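A keyword-based severity scale of the kind described could be represented as a simple lookup structure. The level definitions below are hypothetical examples for illustration, not keywords from any official guidance:

```python
# Hypothetical keyword definitions for a 1-5 severity scale, with
# separate keyword sets per impact dimension (illustrative only).
SEVERITY_SCALE = {
    1: {"patient_impact": "no detectable effect",
        "gmp_compliance": "minor documentation lapse"},
    2: {"patient_impact": "transient, non-serious effect",
        "gmp_compliance": "isolated procedural deviation"},
    3: {"patient_impact": "reversible adverse effect",
        "gmp_compliance": "repeated procedural deviations"},
    4: {"patient_impact": "serious, possibly lasting harm",
        "gmp_compliance": "systemic GMP noncompliance"},
    5: {"patient_impact": "life-threatening harm",
        "gmp_compliance": "critical GMP failure"},
}

def describe(rating):
    """Return the keyword definitions backing a severity rating."""
    return SEVERITY_SCALE[rating]

# An assessor rates against the concrete definitions, not the bare number:
print(describe(4)["patient_impact"])
```

Anchoring each rating to written definitions like these is what makes the assignments auditable and repeatable across assessors.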
In addition to having scales customized for a specific risk-assessment exercise, it is also important for the QRM team to document its rationale for making key decisions, such as the construction of the scale, the selection of a particular category, and the like. This can help support the ratings that are assigned, and can be a useful source of information for those reviewing the risk-assessment exercise in the future.

Introducing uncertainty via subjectivity

Risk increases as uncertainty increases.
Uncertainty can also be present when options to detect a particular hazard are lacking or the detection methods are not used. Acknowledging that uncertainty is present or that you do not know something are two of several ways to respond. Subjectivity in risk assessment work is another important bias that should be addressed.
ICH Q9 discusses the difficulty of achieving a shared understanding of the application of risk management because each stakeholder might:

- Perceive different potential harms
- Place a different probability on each harm occurring
- Attribute different severities to each harm

Subjectivity can be compounded by groupthink as part of brainstorming activities, during hazard identification steps, for example, and when probability ratings are being assigned.
In addition, a lack of diversity in risk-assessment teams can limit the breadth and effectiveness of risk-assessment exercises. Subjectivity can have other negative effects as well. As discussed in ISO guidance,13 stakeholders form judgments about risks based on differences in values, needs, assumptions, and concerns.
As a result, it can be difficult to reach agreement on the acceptability of a particular risk, or on the suitability of the course of action proposed to address that risk.
This is not to say that subjectivity should be banished from discussions on risk. Without the critical analysis of alternative viewpoints, groupthink can blind team members to significant risks.
Subjectivity is not just a matter of how we perceive and discuss risks; it can also be a consequence of the scoring method used to estimate the risk. If the quantitative aspect of the occurrence scale were not there, however (as is often the case), the scale would provide no guidance as to what these phrases mean, especially in the context of the risk-estimation exercise in question, introducing subjectivity into the likelihood-of-occurrence ratings.
Research has found that different people interpret such qualitative phrases in very different ways; see Budescu, Por, and Broomell for a useful research study in this regard. Including quantitative definitions alongside the qualitative phrases can help achieve more science-based likelihood-of-occurrence estimates and severity ratings, ones that are not adversely influenced by risk-perception factors.
Ensuring that risk-assessment teams are sufficiently diverse can help with failure-mode identification activities, and when risk-control proposals are being discussed and determined.
Inviting someone onto the team with a different point of view, to challenge what has been proposed,13 can also be of value. Use keywords in scales to identify levels of severity, likelihood, and detectability. Acknowledge that uncertainty is present during risk analysis.
Useful strategies include documenting any pertinent assumptions made during the risk assessment in the risk-assessment report, and the likely range of any risk ratings or RPNs considered especially difficult to assess. Addressing such ranges is not unlike the approach used by storm forecasters for tropical storm predictions. Realize that you may know more than you think, and source the data to support that knowledge.
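Documenting a likely range for a hard-to-assess rating, rather than a single point estimate, can be as simple as carrying interval bounds through the RPN calculation. The ratings below are hypothetical and the helper is an illustrative sketch, not a prescribed method:

```python
# Hypothetical: propagate rating uncertainty through an RPN calculation
# by computing the product at the low and high ends of a rating range.
def rpn_range(severity, occurrence_range, detectability):
    """Return (low, high) RPN when the occurrence rating is uncertain."""
    lo, hi = occurrence_range
    return (severity * lo * detectability,
            severity * hi * detectability)

# Severity 4, detectability 3, but occurrence judged somewhere in 2-4:
low, high = rpn_range(4, (2, 4), 3)
print(low, high)  # 24 48
```

Recording the span (24–48 here) in the risk-assessment report makes the uncertainty visible to future reviewers, much as a forecast cone does for a storm track.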
Another simple strategy is to design the risk-assessment tool to ensure the following: Before any probability, severity, or detection ratings are assigned to failure modes or hazards, the current GMP controls that may help prevent, detect, and reduce the potential effects of those failure modes or hazards should be formally documented and assessed.
Ongoing monitoring activities are also important, as they can identify situations or changes that could affect the original risk assessment and the decisions made.
How much uncertainty was associated with the probability estimates and with the identification of failure modes last time? How much has the process changed since the original risk assessment was performed? Some risk reviews may be coupled with annual product reviews (APRs). We think this is a useful strategy and one that can make best use of the extensive data compiled for APRs. It can also be useful if clear risk-review instructions are prescribed in the risk team's report.
Doing this recognizes that the risk team members will usually have good insight into any problems and assumptions that arose, and they should be familiar with how dynamic or static the situation was, and is. If there were significant uncertainty in a likelihood-of-occurrence estimate during the original risk-assessment exercise, for example, the team should document the need to reexamine this more carefully during the review exercise, taking into account certain types of information that should, by then, be available to better inform that estimate.
Evidence from an adequately controlled system or process is more reliable than evidence from a poorly controlled or questionable one. Adapted from best practices in the financial industry, these guidelines can be useful for any organization's quality audit processes and can benefit its overall risk-management strategy. Only with reliable audit evidence can an organization assess risk and mitigate it effectively.
John G. Suedbeck is a quality assurance specialist for Metrics Inc.