



Published: 02/27/2009
Our process improvement consulting company was contacted by a new design client requesting assistance in improving its quality management system (QMS). The company had used an existing system for several years, but it was still having difficulty delivering designs on time; its late-delivery rate was higher than the industry average, and it missed customer-requested dates in some cases. The designs themselves were sometimes nonconforming, with a higher-than-industry-average error rate or a missing promised function. We agreed to work with the designers to identify areas of their QMS that could be strengthened, and to develop and implement a comprehensive quality plan to address these concerns.
Closing the Loop on CAPAs with Quality Management Software by Mike Jovanis
Corrective and preventive action (CAPA) management is vital to an organization’s quality management initiatives. It ensures that issues appearing unexpectedly in the manufacturing process are addressed quickly and efficiently, and that procedures are put in place to keep those issues from recurring.
Many companies fail to realize the advantages of a proactive, closed-loop CAPA process integrated with enterprisewide quality management software to drive organizational efficiencies. With an effective CAPA procedure, aided by the right software, organizations can eliminate recurring issues in the manufacturing process, save time and resources, and help generate revenue.
Here are five steps to implementing a closed-loop CAPA system (a brief sketch of how they fit together appears after the list):
1. Deploy an effective electronic quality management system (QMS) for tracking incidents and events, using centralized software. Implement the system to log and manage all issues and incidents that occur. This helps ensure that the desired conclusion is reached when it comes time to correct issues, determine their causes, and take the necessary steps to prevent recurrence.
2. Use your electronic QMS to apply escalation procedures so that the correct people are notified of high-priority incidents and those occurrences receive the necessary level of awareness and visibility.
3. An electronic QMS can help you correct problems expediently, and ensure that the proper incident approvals and routings have been executed prior to closure. Such software allows you to define the appropriate routings for different types or areas of problems.
4. Initiate preventive actions using investigation and root cause analysis. QMS software helps you maintain accountability by letting you assign responsible parties to investigate the root cause of the issue, set a timeline for resolution, and apply a structured escalation process to overdue actions.
5. Employ routine effectiveness checks to measure how well the preventive solution is working at set points in time post-implementation. If appropriate, take additional action based on results of the check to ensure a true resolution of identified root causes.
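As a rough illustration of how these five steps fit together, here is a minimal sketch of a closed-loop CAPA record in Python. All names (the CapaRecord class, the routing table, the status values) are hypothetical assumptions for illustration, not the data model or API of any particular QMS product.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical escalation routing; real systems map priorities to roles
# configured by the quality organization (step 2).
PRIORITY_ROUTING = {"high": "quality_director", "medium": "qa_manager",
                    "low": "line_supervisor"}

@dataclass
class CapaRecord:
    incident_id: str
    description: str
    priority: str                          # "high", "medium", or "low" (step 1)
    owner: Optional[str] = None            # responsible investigator (step 4)
    root_cause: Optional[str] = None
    due_date: Optional[date] = None
    status: str = "open"                   # open -> implemented -> closed (step 3)
    checks: list = field(default_factory=list)

    def escalation_target(self) -> str:
        """Step 2: route the incident to the right role by priority."""
        return PRIORITY_ROUTING[self.priority]

    def record_check(self, when: date, effective: bool) -> None:
        """Step 5: log a post-implementation effectiveness check and close
        the loop only once a check confirms the fix actually worked."""
        self.checks.append((when, effective))
        if effective and self.status == "implemented":
            self.status = "closed"

# Usage: log an incident, escalate it, implement a fix, and close only
# after a passing effectiveness check.
capa = CapaRecord("INC-0042", "Solder bridging on line 3", priority="high")
assert capa.escalation_target() == "quality_director"
capa.status = "implemented"
capa.record_check(date(2009, 3, 15), effective=True)
assert capa.status == "closed"
```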
By following these steps, organizations can effectively reduce the cost of quality operations and reporting, while increasing the level of regulatory compliance.
About the author: Mike Jovanis is vice president of product management for Sparta Systems, an enterprise quality management solutions provider.
Typically, for a new client, our first action is to assess its as-is (current-state) system, then develop a customized to-do list based on the discrepancies revealed when the assessment is compared to the company’s ideal state. Our analysis showed areas where the client had done a good job (independently validated by its customer feedback and third-party audits) and other areas where its system needed improvement (validated by customer feedback and internal audits). We knew we could install tools and techniques that would ensure the system not only conformed to customer requirements but exceeded them in some areas. We set about designing a streamlined and productive QMS.
One challenge facing our client was long design lead time, longer than the industry average. To add insult to injury, the designs didn’t meet customer requirements in some cases. We planned to tighten discipline in the existing QMS process and started by encouraging the designers to use project management methodology. We recommended they install a “lessons learned” database to monitor and report on mistakes made during the design phase; it would also document good practices and techniques so they could be publicized throughout the organization.
A lessons learned database helps identify what went right, what went wrong, and any suggested improvements for future designs based on “hard knock” lessons learned during current design cycles. Across many industries, new designs are often derived from similar existing designs in which some portion of the form, fit, or functionality is enhanced or replaced. Because many designs reuse 50 to 80 percent of previous design specifications, avoiding the mistakes made during the previous design and capitalizing on the benefits and enhancements discovered during that design process can speed up the current design process considerably. Initial revisions tend to be accurate, and design costs go down accordingly.
During our discussion on the benefits of using a lessons learned database, we discovered that the client was already using a quasi-lessons learned database. Actually, six different databases--of varying ages, in different programming languages, and with different initial foci--were accessed for this purpose. The problem in using these legacy systems was that they didn’t interact. To address this shortfall, the client had its IT group develop a portal that interfaced with the different databases and could display search results from all of them. This lessons learned database wasn’t slick and shiny, but it did represent something already in place. That was encouraging. So, next to “step one” on the to-do list--database in place--we could place a check (sort of). Our next step was to understand how often the databases were used for both input and output.
To determine the portal’s level of data completeness and accuracy of input, we selected a current product in design and then searched the databases to find previous similar designs.
Our search result was simultaneously encouraging and disheartening. We found not one or two, but three instances where a problem had been identified and “fixed.” The encouraging part was that the data had been entered into the databases in all three cases, so there appeared to be a solid base of captured errors that we could now build on. The disheartening part, of course, was that the problem had recurred a second and a third time: the first instance occurred on the original design, the next on the second design, and the third on a much later revision.
How did this happen? In all three cases, a design flaw was found after the design had been finalized and frozen, and after the first article sample had been produced, which forced our client to do a costly redesign. In the second and third instances, this costly error should have been avoided, or at least identified and investigated, prior to the design freeze.
The portal search results highlighted two important points: Data entries varied in completeness and accuracy, and once data was entered into the database, it wasn’t accessed.
We now had a set of actions: put a corrective and preventive action (CAPA) plan in place to address the existing but faulty CAPA system, thereby improving the infrastructure; and validate the relative merits of the entries in the system, thereby improving implementation.
We developed a multitiered approach involving the following actions:
• Assess the databases for completeness of lessons learned.
• Consider transferring the information to a newer database, if feasible.
• Develop a methodology to simplify and improve the accuracy of the databases’ search function.
• Train engineers and managers to review the lessons learned database when beginning a design or redesign, and review it again at major checkpoints.
Company personnel reviewed a sample of database entries for the last two years to determine the significance and quality of the information captured.
Problem: The significance and quality of the entries varied widely, depending on the engineer inputting the information. If the engineer was detail-oriented and had sufficient time, the database captured major and minor problems, triumphs, actions taken, and recommendations for follow-on products. If the engineer entering the data focused mainly on the big picture, was new to the company, was unaware of the implications of design errors for other parts of the design, or was simply in a hurry to get off the project and on to the next one, the report wasn’t detailed and had little concrete data to support its findings and recommendations. Generally, these recommendations weren’t well thought out and proved of limited or no real value.
Action: Develop a guidance document for lessons learned database input, including examples of complete and incomplete entries. Train all engineers on this procedure and ask managers to hold their employees accountable for complete and comprehensive data input.
One of the underlying factors contributing to the existing database being largely ignored was the portal itself.
Problem: Our client’s quality personnel speculated that it might be too complicated to use easily, and because the design engineers were rushed for time (being behind the industry average put considerable pressure on them to work faster and better), many of the design engineers simply skipped this step entirely when starting a new design. The quality personnel investigated whether a more streamlined and simplified system could be used. The difficulty with this proposed solution was the age of the databases accessed by the portal; some of them were too old to be transferred to a new, fully integrated database. Incompatibility of the data formats was also an issue.
Action: A request was submitted to the IT department to research the cost and resources needed to redesign the database. The department determined that three of the six databases accessed by the portal wouldn’t be able to migrate to a new database, so their data would be lost if the project were implemented.
Recommendation: Develop a new database or portal, inputting data to the new system as information becomes available and migrating legacy data manually as new designs and revisions are developed. For the short term, because we didn’t want to lose momentum or support for this project, we suggested using a simple Microsoft Office Access database with searchable tags. This would give the engineers an interim database that could later migrate to the new solution. Each entry could be given multiple tags as necessary to identify the design characteristics, allowing users to enter search queries based on key criteria. The example in figure 1 below has four tags, although more would be attached in working models. The figure also shows how this table can be used in different industries.
This database could then be expanded to capture additional action information, such as the action proposed and taken, the person responsible for the action, due dates, milestones, and closure results.
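To make the interim tagged-database idea concrete, the following is a minimal Python sketch of the same pattern: each entry carries multiple searchable tags plus the action-tracking fields just described. The field names and tag values are illustrative assumptions, not the client’s actual schema or the contents of figure 1.

```python
from dataclasses import dataclass

@dataclass
class LessonEntry:
    summary: str
    tags: set                    # multiple tags per entry, e.g. {"connector", "pinout"}
    action_taken: str = ""       # expanded action-tracking fields
    responsible: str = ""
    due_date: str = ""
    closure_result: str = ""

def search(entries, required_tags):
    """Return every entry carrying all of the queried tags, mimicking a
    tag-based search across the combined lessons learned data."""
    required = set(required_tags)
    return [e for e in entries if required <= e.tags]

db = [
    LessonEntry("Connector pinout reversed at design freeze",
                tags={"connector", "pinout", "electrical"},
                action_taken="Added pinout check to design review",
                responsible="lead electrical engineer"),
    LessonEntry("Thermal margin too small near power stage",
                tags={"thermal", "layout", "power"}),
]

# An engineer starting a redesign queries by design characteristics:
for hit in search(db, ["connector", "pinout"]):
    print(hit.summary, "->", hit.action_taken)
```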
Until a new system could be developed, populated, and implemented, the current system had to be used.
Problem: Design engineers weren’t searching the existing database. We needed to encourage them to use it by developing examples of searches and search strings that would make database searches easier, faster, and more focused. This would also encourage standardization of the tags used when searching; the same tags could be used when creating entries, as illustrated in figure 1.
Action: Write simple scripts for portal searches, including the most frequent search strings, anticipated and unanticipated results, and how to refine the search to eliminate the latter. Provide training and reference materials to design engineers and managers. Develop standardized search tags and make these available in a drop-down menu format to standardize and facilitate lessons learned data entry.
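A small sketch of how the standardized tags and refined searches might work follows; the tag vocabulary and function names are hypothetical examples, not the client’s actual drop-down list or scripts.

```python
# Hypothetical fixed vocabulary backing the drop-down menu; constraining
# data entry to these terms keeps searches predictable.
STANDARD_TAGS = {"connector", "pinout", "thermal", "layout",
                 "firmware", "tolerance", "supplier", "test"}

def validate_tags(tags):
    """Reject free-form tags at entry time so data stays searchable."""
    unknown = set(tags) - STANDARD_TAGS
    if unknown:
        raise ValueError(f"Non-standard tags: {sorted(unknown)}")
    return set(tags)

def refine(hits, exclude_tags):
    """Screen out unanticipated results, e.g. supplier issues surfacing
    in an electrical-design search."""
    exclude = set(exclude_tags)
    return [h for h in hits if not (h["tags"] & exclude)]

hits = [{"summary": "Pinout reversed",
         "tags": validate_tags(["connector", "pinout"])},
        {"summary": "Late connector shipment",
         "tags": validate_tags(["connector", "supplier"])}]
for h in refine(hits, ["supplier"]):
    print(h["summary"])          # keeps only the design-related hit
```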
Recommendation: Solicit design engineers and managers for suggestions to include as lessons learned tags. At key checkpoints, provide feedback to managers and engineers about how they can use the lessons learned database to eliminate rework. This would highlight early improvements to garner buy-in throughout the organization.
We needed to train personnel to use the database during the design phase.
Problem: In the past, the same errors showed up in redesigns despite being documented in the lessons learned database.
Action: We emphasized to engineers that reviewing the lessons learned database wasn’t time-consuming busywork but rather a valuable tool for preventing redesign. The earlier a design flaw or error is identified, the less money and time it takes to correct, which could improve both their turnaround time and final design costs. This approach appears to be working well because we’re seeing better database entries. To keep management informed and motivated to support the effort, we translated these benefits into terms that appeal to managers: project schedule compression, improved customer satisfaction, and lower design costs.
Recommendation: The lead engineer in each particular subsystem should review the lessons learned database for entry completeness prior to final acceptance. Because these gurus would understand not only the major points but also the more subtle implications of design flaws or errors, we thought they could then tag the data appropriately to ensure that a keyword search would give the expected result.
In addition, we asked managers to require a lessons learned database review as part of the formal design review. We also suggested that they require a review of lessons learned database entries as part of the project closure.
The long-term results of this case study are yet to be determined, but the short-term results show that acceptance and use of the lessons learned database are growing. As data become more complete and useful, we expect this usage to grow until it becomes a standard operating procedure for all new designs.
Like all processes, CAPAs must be reviewed for efficiency and effectiveness. In this case, the lessons learned database wasn’t being used effectively, which led to efficiency losses. By focusing your organization on continual improvement, you can identify nonvalue-added work, and either streamline it to make it more effective and efficient, or eliminate it.
If you have trouble identifying where a problem lies, take the time to look at your CAPA process as well as the process in which the problem occurred. You may be surprised at what you find.