AI and How It Affects the Quality Profession

Assimilate or you will be assimilated

fotogestoeber/iStock

James J. Kline
Tue, 11/04/2025 - 12:03

The use of artificial intelligence (AI) models, specifically generative AI, is growing. This has raised concerns about the effects on jobs in various professions. The quality profession is among them.


Like it or not, the quality profession has been disrupted. This occurred before AI became widely used. For instance, membership in ASQ dwindled from a height of 150,000 during the 1990s to about 30,000 today. Local chapters, like the Portland, Oregon, chapter, have closed.

With the closure of my local chapter, I let my 12-year membership lapse. I was a CMQ/OE (Certified Manager of Quality/Organizational Excellence) and a Six Sigma Green Belt. In addition, for 20 years I researched and wrote articles on quality management in government. I also edited a book titled Quality Disrupted (CERM Academy, 2022). More recently, I’ve published several books and articles on enterprise risk management and AI use in government.

I state this so you understand the shift that one individual made. I made this shift before AI came into the picture in a big way. But for several years I’ve been following the movement to risk management, and the effect AI is having on the workforce and the quality profession.

In a recent article, Greg Hutchins discussed this disruption generally. It’s worth going into a little more depth.

This piece looks at a recent academic study on AI’s effect on the workforce, the relevant quality-related jobs ChatGPT believes AI will eliminate, and how government actions will influence the movement to risk management. Each will have direct or indirect effects on the quality profession.

Stanford AI study

A 2025 Stanford University study, “Canaries in the Coal Mine? Six Facts About the Recent Employment Effects of Artificial Intelligence,” on the effects of AI on the workforce, noted that as of July 2025, about 46% of workers were using large language models. Further, AI capabilities, in terms of ability to solve coding problems, moved from 4.4% in 2023 to 71.7% in 2025. 

The study determined that the growth in AI use and capabilities is having a significant effect on younger workers, whose employment rates have decreased. The reason is that younger workers rely more heavily on codified knowledge (book learning) and perform more routine work than experienced workers with greater tacit knowledge (learned by doing). AI currently has more influence on positions with high levels of codified knowledge or routine work activities, such as answering questions, writing and responding to correspondence or emails, or preparing training materials.

The Stanford study informs us of the current situation. The hiring of new employees in many fields has slowed. The question, thus, becomes: How will AI affect members of the quality profession already employed?

ChatGPT AI impact assessment

AI Whim, a blog authored by Ole Paulson that researches and discusses the effects of AI on various positions, asked ChatGPT to estimate the 100 jobs most likely to become obsolete due to AI. According to ChatGPT, the jobs associated with the quality profession that could be headed for extinction or augmentation are aerospace engineer, biomedical engineer, biostatistician, civil engineer, data scientist, database administrator, electrical engineer, industrial engineer, mechanical engineer, robotics technician, statistician, and system administrator.

The 100 jobs were also broken down by educational requirement: no college degree, associate degree or certificate, and bachelor’s degree or higher. The respective shares were 19%, 11%, and 70%.

Although ChatGPT didn’t list specific quality jobs, it touched all the professional occupation bases. Quality-related professional positions are in danger of elimination or augmentation. Further, looking at the results by education level reinforces the idea that AI will have a greater effect on jobs heavily weighted toward codified knowledge and performing routine activities.

Despite ChatGPT’s prognostication, the exact effect of AI on the quality profession has yet to be determined. However, one can see the direction that governments, and by extension the private sector, are heading. That direction is predominantly toward risk management. Two government actions, the European Union’s AI Act and Oregon’s Artificial Intelligence Action Plan, show this clearly.

European Union’s AI Act

The EU Artificial Intelligence Act, politically agreed at the end of 2023 and formally adopted in 2024, provides specific requirements for the development and maintenance of AI models. Its primary focus is on the developers of AI systems.

The AI Act regulations are based on the perceived threat or risks the AI system could pose. There are four risk levels used to determine the extent of the regulations:

Unacceptable risk—Systems posing this level of risk are prohibited. These include social scoring systems and manipulative AI.

High risk—Systems in this category are highly regulated. They include safety components or products covered by EU laws. Developers must undergo a third-party conformity assessment, which includes having a risk management process and a quality management process. The risk management process must be consistent with International Organization for Standardization (ISO) 31000:2018—“Risk management—Guidelines.” The quality management process must be consistent with ISO 9001:2015 and is also subject to third-party audits.

Limited risk—Developers must ensure that end users are aware they are interacting with AI systems such as chatbots and deepfakes.

Minimal risk—Unregulated. This includes AI-enabled video games and spam filters.
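As a compact illustration (not an official taxonomy), the four tiers above can be captured in a small lookup table. The tier names, example systems, and referenced standards come from the summary in this article; the data structure and function names are invented for this sketch.

```python
# Illustrative sketch of the EU AI Act's four risk tiers as described
# in the text above. Structure and names are hypothetical, for clarity only.
EU_AI_ACT_TIERS = {
    "unacceptable": {
        "treatment": "prohibited",
        "examples": ["social scoring systems", "manipulative AI"],
    },
    "high": {
        "treatment": "third-party conformity assessment required",
        "examples": ["safety components or products covered by EU laws"],
        "required_standards": ["ISO 31000:2018", "ISO 9001:2015"],
    },
    "limited": {
        "treatment": "transparency obligation: disclose AI interaction",
        "examples": ["chatbots", "deepfakes"],
    },
    "minimal": {
        "treatment": "unregulated",
        "examples": ["AI-enabled video games", "spam filters"],
    },
}

def obligations(tier: str) -> str:
    """Return the regulatory treatment for a given risk tier."""
    return EU_AI_ACT_TIERS[tier]["treatment"]

print(obligations("high"))  # third-party conformity assessment required
```

The point of the sketch is the quality-relevant detail buried in the high-risk tier: it is the only tier that names standards, and both a risk management standard (ISO 31000:2018) and a quality management standard (ISO 9001:2015) appear there.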

The requirement that ISO 9001:2015 be used to verify the quality management system provides quality professionals with an indication that their auditing services will still be needed. This is especially encouraging, since AI development and use is growing.

The ISO 31000:2018 requirement of the act indicates the risk management trend. There are several reasons for this emphasis. First, the risk concerns are considerable. AI models may have a bias or faulty code that gives bad results. A coding error could result in the loss of all or part of the data set. In addition, there are risks to individual privacy and issues concerning unethical use. Thus, risk mitigation efforts are going to be part of any AI implementation plan.

Second, AI use is growing and worldwide. AI risk management considerations will likewise increase and extend beyond the EU.

Third, in the United States, the National Institute of Standards and Technology (NIST), which is responsible for developing AI implementation guidelines for federal agencies, is basing its AI risk management guides on ISO 31000:2018. Its guides don’t mention ISO 9001:2015.

This growing AI risk management emphasis can also be seen in Oregon.

Oregon’s AI action plan

On Nov. 28, 2023, Oregon Gov. Tina Kotek issued an executive order, Establishing a State Government Artificial Intelligence Advisory Council. On Feb. 4, 2025, the council issued its final plan: State Government Artificial Intelligence Advisory Council Final Recommended Action Plan.

The plan’s purpose:

“Create an informed and empowered workforce where state employees are well equipped and trained with the knowledge and understanding of AI to make informed decisions. We envision a future where AI is governed by transparent, well-defined policies that ensure its ethical use, promote diversity, equity, and inclusion, and safeguard personal and sensitive information. Oregon aims to foster a responsible AI ecosystem that enhances government efficiency, accountability, and public trust while upholding the highest standards of privacy and ethical integrity.” 

In the process, the council created 12 principles. Two of these relate to risk management. They are (emphasis is mine):

Governance—Policies, processes, procedures, and practices across the executive branch related to the mapping, measuring, and management of AI benefits and risks are in place, transparent, and implemented with accountability and full inspection; a culture of risk management is cultivated and present.

Risk and risk management—Identify, assess, measure, and manage all AI risks, ensuring compliance with relevant regulations and assessing projected effects. The risk management approach is based on NIST’s Artificial Intelligence Risk Management Framework (AI RMF) 1.0. The AI RMF references ISO 31000:2018.

Two work tasks specified in the plan make it clear that AI risk management is a major concern. The two tasks are to institute the NIST AI Risk Management Framework and to pilot that framework.

Despite its name, the State Government Artificial Intelligence Advisory Council Final Recommended Action Plan is a work in progress.

Conclusion

The Stanford study indicates that AI use and capabilities are increasing at a rapid rate. Further, the greatest effects will be on younger workers doing codified and routine work, as opposed to those with tacit knowledge performing higher-level tasks.

AI is a disruptive factor. Only time will tell whether it will affect the quality and related professions as dramatically as ChatGPT indicates. But AI will have effects.

The EU AI Act, with its requirement that high-risk AI developers use ISO 9001:2015 to validate their quality management system, indicates there’s potential work in a growing industry. However, the EU AI Act and the Oregon AI Plan both emphasize risk management. This shows a broad and worldwide concern with the risks associated with AI. This concern will dominate the AI implementation process. In both cases the risk management process mentioned is ISO 31000:2018.

For the quality profession, there are several takeaways:
• First, AI is here to stay. Its use will grow, as will its capabilities.
• Second, quality professionals must become familiar with the AI models their organizations are likely to use.
• Third, quality professionals need to find ways to use the AI models to facilitate quality-related work. For instance, you can use AI to perform routine codified knowledge tasks such as drafting reports or memos, and performing simple calculations. This will free you to perform higher-knowledge tasks.
• Fourth, the trend is toward risk management. Understand the basics of risk management and ISO 31000:2018.

© 2025 Quality Digest. Copyright on content held by Quality Digest or by individual authors. Contact Quality Digest for reprint information.
“Quality Digest” is a trademark owned by Quality Circle Institute Inc.
