{domain:"www.qualitydigest.com",server:"169.47.211.87"} Skip to main content

        
User account menu
Main navigation
  • Topics
    • Customer Care
    • Regulated Industries
    • Research & Tech
    • Quality Improvement Tools
    • People Management
    • Metrology
    • Manufacturing
    • Roadshow
    • QMS & Standards
    • Statistical Methods
    • Resource Management
  • Videos/Webinars
    • All videos
    • Product Demos
    • Webinars
  • Advertise
    • Advertise
    • Submit B2B Press Release
    • Write for us
  • Metrology Hub
  • Training
  • Subscribe
  • Log in
Mobile Menu
  • Home
  • Topics
    • Customer Care
    • Regulated Industries
    • Research & Tech
    • Quality Improvement Tools
    • People Management
    • Metrology
    • Manufacturing
    • Roadshow
    • QMS & Standards
    • Statistical Methods
    • Supply Chain
    • Resource Management
  • Login / Subscribe
  • More...
    • All Features
    • All News
    • All Videos
    • Training

Questions to Ask Before You Adopt an AI Quality Management Tool

It’s more about governance than capability

BoliviaInteligente / Unsplash

Josh Peterson
ZenQMS

Mon, 02/09/2026 - 12:03

As AI makes its way into every corner of work, quality management is no exception. That isn’t surprising: Quality teams are buried in documentation, training upkeep, investigations, and reporting, which are exactly the kinds of workflows AI can help streamline.


However, in regulated environments the questions about AI quality management tools shouldn’t be, “Can AI help?” or even, “Is it impressive?” The questions should be risk-based:
• What are we using it for?
• What’s the risk if it’s wrong?
• Can we defend our controls during an audit?
• What’s the risk to the confidentiality of our data?

To help QA leaders as they begin evaluating AI quality management tools, I consulted Karin Ashkenazi, VP of quality assurance at ZenQMS, to better understand the questions leaders should be asking—and what “good” looks like when AI is introduced into regulated workflows.

As it turns out, AI adoption is more about governance than it is about capability. Below are the core questions QA leaders should ask before implementing an AI-driven quality tool, along with examples of what those controls look like in practice, and how ZenQMS is approaching compliant AI.

What’s the intended use, and what’s the risk if the AI is wrong?

Start here, because this one question determines how every other question should be answered.

AI in quality management spans a wide range of risk. Some use cases are assistive—helping users find information faster or draft training quiz questions they’ll still review. Others can influence decisions that produce downstream compliance effects.

A useful way to frame it, and how Ashkenazi often frames it, is simple: If the AI output is wrong, what’s the consequence? In some workflows, an incorrect result from AI is inconvenient. In others, it can affect product quality or patient safety.

That’s why “AI in quality assurance” isn’t one category; it’s a spectrum. Building search filters or suggesting training questions typically sits on the lower-risk end. Risk scoring, investigation conclusions, trend analysis, or regulatory reporting sit higher.

ZenQMS intentionally started with lower-risk workflows where users can immediately validate results, such as AI-assisted filter building that shows you the logic it created before anything is applied.

Quick risk lens for AI tools in quality management:
• Low-risk, assistive AI: search/filter building, summaries, drafting training questions
• Higher-risk, decision-influencing AI: risk scoring, investigation conclusions, regulatory reporting
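One way to make this lens operational is to record each candidate use case with its risk tier and required controls before anything is enabled. The sketch below is a minimal, hypothetical illustration in Python; the use-case names, tiers, and rationales are invented examples, not a ZenQMS feature:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"    # assistive; the user can immediately verify the output
    HIGH = "high"  # influences decisions with downstream compliance effects


@dataclass(frozen=True)
class AIUseCase:
    name: str
    tier: RiskTier
    human_review_required: bool
    rationale: str  # answers "what's the consequence if the AI is wrong?"


# Hypothetical classifications made before adoption; adjust to your own risk assessment
USE_CASES = [
    AIUseCase("filter_building", RiskTier.LOW, True,
              "An incorrect filter is visible and correctable before it is applied"),
    AIUseCase("training_quiz_drafts", RiskTier.LOW, True,
              "Drafts are reviewed by a human before release"),
    AIUseCase("investigation_conclusions", RiskTier.HIGH, True,
              "Errors can affect product quality or patient safety"),
]

for uc in USE_CASES:
    print(f"{uc.name}: tier={uc.tier.value}, human review required={uc.human_review_required}")
```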

Will the AI be trained on our data?

This is usually one of the first questions QA leaders ask—and for good reason.

Quality data can include proprietary manufacturing and process information, deviations, CAPAs, supplier details, and regulated records. If those data are used to train public or third-party models, the risk can be unacceptable.

Ashkenazi also highlighted a practical nuance many teams miss: Data protections can change based on the AI tool’s subscription tier. She asks, “Am I going to use the free version of ChatGPT, or do I want to pay for a subscription with more data protection options because I’m going to upload my customer data?”

The right question isn’t just whether the vendor is secure. It’s also, “What protections come with the version we’re using?” Look for contractual clarity from your vendor (as well as audit-ready documentation).

In regulated environments, trust isn’t a control. Documentation is. As you evaluate AI features, ask for written commitments, such as terms of service, contractual agreements, and, when applicable, a data processing agreement (DPA), that spell out:
• Whether your data are used to train models
• Who can access your data and where they are stored
• What happens when data are deleted (and what remains, if anything)

The ZenQMS stance is simple: Customer data aren’t used to train third-party models. “No training on customer data” is a baseline expectation for compliant AI adoption in regulated workflows.

Is the AI feature opt-in, and can we turn it off without disruption?

In regulated quality workflows, the safest AI adoption is deliberate adoption: You decide when AI is enabled, where it’s used, and who can access it. That control matters even more because, as Ashkenazi points out, AI doesn’t only show up when you buy a new “AI tool.” It can appear inside tools you already use as vendors roll out new AI features. When that happens, QA teams often need to reassess the tool’s risk, intended use, and impact on validation—just as they would with any significant new capability.

That’s why QA leaders should be wary of AI quality management features that are enabled by default, especially when they touch regulated workflows. If AI arrives silently through an update, teams can lose control over when and how it becomes part of the quality process.

What to look for:
• AI features are opt-in, not turned on by default
• You can disable AI at any time

ZenQMS AI features are off by default: Customers explicitly enable them so adoption can align with internal procedures and validation plans.
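To make the opt-in idea concrete, a minimal sketch of per-organization AI feature flags might look like the following; the flag names and structure are assumptions for illustration, not the actual ZenQMS configuration:

```python
# Illustrative per-organization AI feature flags: everything starts off,
# enabling a feature is an explicit, recorded decision, and disabling is always possible.
DEFAULT_AI_SETTINGS = {
    "ai_filter_builder": False,
    "ai_training_question_drafts": False,
}


def enable_feature(org_settings: dict, feature: str, approved_by: str) -> dict:
    """Turn on one AI feature and record who approved the change."""
    if feature not in DEFAULT_AI_SETTINGS:
        raise ValueError(f"Unknown AI feature: {feature}")
    updated = dict(org_settings)
    updated[feature] = True
    updated["approvals"] = list(org_settings.get("approvals", [])) + [
        {"feature": feature, "approved_by": approved_by}
    ]
    return updated


def disable_all_ai(org_settings: dict) -> dict:
    """Switch every AI feature off without touching the rest of the configuration."""
    return {k: (False if k in DEFAULT_AI_SETTINGS else v) for k, v in org_settings.items()}


settings = enable_feature(dict(DEFAULT_AI_SETTINGS), "ai_filter_builder", "qa.director@example.com")
settings = disable_all_ai(settings)  # e.g., pending a revalidation or SOP update
```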

Where do our data go, and who can access them?

Even if a vendor doesn’t train on your data, your data are still being processed, and QA leaders need transparency on how that’s done.

This is similar to supplier qualification. But AI raises the stakes because it can introduce additional processing layers. Ashkenazi’s advice is straightforward: You should ask the following questions of any vendor, but especially of those with AI features:
• Where are data stored and processed?
• Are data isolated per customer?
• Who can access them?
• What happens when we delete data or leave the platform?

If you’re evaluating AI tools for quality assurance, look for the fundamentals Ashkenazi emphasized: clear answers on where your data are stored, who can access them, whether they are isolated, and what happens when they are deleted—backed by documentation you can rely on.

Can we explain and defend AI outputs in an audit?

One of the biggest adoption barriers for AI in regulated environments is the “black box” problem. If an inspector asks, “How did you get this result?” the answer can’t be, “The AI said so.” Explainability matters, not as a preference but because quality systems have to be defendable.

A good AI feature should make it clear:
• When AI was used
• What AI contributed
• What a human reviewed or changed
• What evidence supports the output (when applicable)

This is also where “human in the loop” becomes nonnegotiable. Especially as risk increases, AI should recommend and assist, but humans should review, accept, reject, or edit before results become part of the quality record.

This principle is embedded in how ZenQMS approaches compliant AI. For example, with AI-assisted filtering, the system shows the filters it generated so users can validate accuracy before acting. And when AI is used, activity is logged so organizations can show how AI was used in context, alongside the human decisions made afterward.
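As a rough illustration of what that kind of logging could capture, here is a hypothetical record structure; the field names are assumptions for this sketch, not the actual ZenQMS audit-trail schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AIActivityRecord:
    """One illustrative audit-trail entry for an AI-assisted action."""
    timestamp: datetime
    feature: str          # which AI feature was used, e.g., "filter_builder"
    ai_contribution: str  # what the AI produced
    human_reviewer: str   # who reviewed the output
    human_decision: str   # "accepted", "edited", or "rejected"
    final_output: str     # what actually entered the quality record


record = AIActivityRecord(
    timestamp=datetime.now(timezone.utc),
    feature="filter_builder",
    ai_contribution="status = 'Open' AND due_date < today",
    human_reviewer="qa.analyst@example.com",
    human_decision="edited",
    final_output="status = 'Open' AND due_date < today AND site = 'Plant 2'",
)
```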

What safeguards exist against hallucinations, bias, drift, and inappropriate outputs?

AI can hallucinate. It can be biased. It can drift over time. And depending on how a system is designed, it can even produce inappropriate outputs in response to bad inputs.

Ashkenazi’s framing here is especially helpful: Vendors should be able to define the “borders” of acceptable behavior and show how they detect when the AI moves outside those boundaries. In other words, the question isn’t just, “Do you have guardrails?” It’s, “How do you know when outputs aren’t reliable—and what happens next?”

In a regulated environment, trust can’t be assumed. It has to be engineered through safeguards like output controls, monitoring, and clear pathways to flag poor results.

This is why ZenQMS treats guardrails and output controls as essential. The goal isn’t to pretend AI never makes mistakes. It’s to reduce the likelihood of unsafe outputs and ensure that users can identify and correct problems before they become risks.
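One way to picture those borders is a simple output check that routes anything outside an allowed boundary to human review before it goes further. The sketch below is purely illustrative; the allowed fields and confidence threshold are invented, not product settings:

```python
ALLOWED_FIELDS = {"status", "due_date", "site", "owner"}  # fields the AI may reference


def route_ai_filter(referenced_fields: set, confidence: float) -> str:
    """Decide how to handle an AI-generated filter suggestion."""
    if not referenced_fields <= ALLOWED_FIELDS:
        return "flag_for_review"   # strayed outside the defined border
    if confidence < 0.7:           # illustrative threshold for low-reliability output
        return "flag_for_review"
    return "present_to_user"       # still requires explicit human acceptance


print(route_ai_filter({"status", "batch_record"}, 0.95))  # -> flag_for_review
print(route_ai_filter({"status", "due_date"}, 0.92))      # -> present_to_user
```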

How is the AI validated, and what happens when models change?

QA leaders often ask whether AI can be validated the same way as traditional software.

Ashkenazi thinks so, but with extra considerations. AI qualification still needs the same fundamentals you expect from any regulated software: defined intended use, validation evidence, and controlled change management. But it’s different because AI systems can evolve—models might be recalibrated or even swapped—so you must understand how changes are governed and documented.

A vendor should be able to explain:
• How AI features are validated for their intended use
• How changes (including model updates or model swaps) are managed through change control
• What oversight exists when the model behavior drifts or needs recalibration
• What documentation is available to support audits when something changes
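For a sense of what “managed through change control” might look like on paper, here is a hypothetical change-record sketch; the identifiers and field names are invented for illustration, not ZenQMS’s actual documentation format:

```python
from dataclasses import dataclass


@dataclass
class ModelChangeRecord:
    """Illustrative change-control entry for an AI model update or swap."""
    change_id: str
    model_before: str           # model version in use prior to the change
    model_after: str            # model version after the change
    intended_use: str           # the validated use case the change affects
    revalidation_evidence: str  # reference to tests re-executed against the new model
    approved_by: str


change = ModelChangeRecord(
    change_id="CC-2026-014",
    model_before="vendor-model-2025-06",
    model_after="vendor-model-2026-01",
    intended_use="AI-assisted filter building",
    revalidation_evidence="VAL-RPT-112 (regression set re-run, results attached)",
    approved_by="QA Director",
)
```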

ISO/IEC 42001 (an AI management system standard) is a helpful benchmark for whether a vendor understands the control expectations around AI. It’s not the only signal, but if a vendor can show alignment (or certification), it can indicate stronger governance maturity.

At ZenQMS, AI features follow the software development life cycle (SDLC) and release controls, and validation evidence is included in release documentation.

What do we need internally to adopt AI responsibly?

One of Ashkenazi’s strongest points is also the easiest to overlook: AI adoption is a shared responsibility. Even if you trust your vendor, you still need a plan for how your organization will use AI responsibly.

That means internal clarity on:
• Acceptable vs. prohibited AI use
• What data users can and can’t enter
• Expectations for human review
• What “verification” looks like in practice
• Training so users know how to evaluate outputs (and what to watch for)

In many organizations, AI risk isn’t created by the tool alone. It’s created when teams incorporate AI without shared rules for how it’s used.

Responsible, regulated AI adoption is possible

AI adoption in quality management isn’t a matter of if. It’s a matter of how.

For QA leaders, responsible AI starts with a risk-based approach: Define intended use, match controls to risk, hold vendors accountable, and build workflows that are explainable and auditable. The organizations that adopt AI well won’t be the ones that move fastest. They’ll be the ones that started by asking the right questions.

And for many teams, the best place to start is where ZenQMS has started: low-risk, human-verifiable AI features that reduce toil without shifting decision-making authority. Then, scale adoption as governance, validation maturity, and confidence grow.

Published Jan. 14, 2026, by ZenQMS.
