When you go to a medical appointment, does the doctor look at you while you talk? Or are they busy typing everything you say into a computer? If it’s the latter, that may soon change, thanks to artificial intelligence.
Some doctors’ offices are using AI services to transcribe your discussion with the doctor and automatically enter the results into your electronic medical records.
That’s a time-saver for doctors, who often spend hours filling out their patients’ records. It also allows them to look at the patient rather than their computer screen.
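To make this concrete, here’s a minimal sketch of what such a note-taking tool could look like, using the open-source Whisper speech-to-text model. This is an illustration, not the software any particular doctor’s office uses, and the save_to_ehr step is purely hypothetical; real systems must meet privacy and accuracy requirements far beyond this sketch.

```python
# Minimal sketch of AI note-taking with the open-source Whisper model
# (pip install openai-whisper; requires ffmpeg). Illustration only.
import whisper

def transcribe_visit(audio_path: str) -> str:
    """Convert a recorded doctor-patient conversation to text."""
    model = whisper.load_model("base")      # small general-purpose model
    result = model.transcribe(audio_path)   # returns a dict with a "text" key
    return result["text"]

def save_to_ehr(patient_id: str, note: str) -> None:
    """Hypothetical placeholder for writing a note into a medical record."""
    print(f"[EHR] patient={patient_id}: {note[:80]}...")

if __name__ == "__main__":
    note = transcribe_visit("visit_recording.wav")
    save_to_ehr("patient-0001", note)
```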
You’ve probably noticed AI chatbots offering to help when you visit a company’s website. These chatbots aren’t as common in healthcare yet, but they could one day assist you with basic medical questions, freeing up doctors’ time for more complex concerns.
These are just two ways AI might affect your future healthcare. But given the high stakes, this technology must be developed with thoughtful standards.
The need for AI standards
For AI to work in the medical field (or any other field where it’s used), we need to develop specific and useful standards for it. These standards will need to define characteristics by which an AI model’s reliability and trustworthiness can be judged.
One way AI can prove its trustworthiness is by demonstrating its correctness. If you’ve ever had a generative AI tool confidently give you the wrong answer to a question, you probably appreciate why this is important. If an AI tool says a patient has cancer, the doctor and patient must know how likely that diagnosis is to be right or wrong.
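To illustrate the idea, here’s a small, hedged example of a model reporting a calibrated probability instead of a bare yes/no answer, using scikit-learn on synthetic data. Nothing here is a real diagnostic tool; it just shows how “knowing the odds” can be built into a prediction.

```python
# Sketch: report a calibrated probability rather than a bare label.
# Synthetic data and a generic classifier -- illustration only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Wrap the classifier so its probability estimates are calibrated
# against held-out folds of the training data.
clf = CalibratedClassifierCV(LogisticRegression(max_iter=1000), cv=5)
clf.fit(X_train, y_train)

prob = clf.predict_proba(X_test[:1])[0, 1]
print(f"Estimated probability of disease: {prob:.1%}")  # odds, not a verdict
```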
Another issue is reliability, particularly of the datasets AI tools rely on for information. Just as a hacker can inject a virus into a computer network, someone could intentionally corrupt an AI system’s dataset to make it behave maliciously. Many AI systems learn from large datasets, and attackers can introduce Trojans, similar to computer viruses, that alter the system’s reasoning. This tampering can happen at the level of the input (the dataset), the model (the “thinking”), or the AI’s environment and how it interacts with the world.
For example, researchers introduced a Trojan simply by placing a sticker on a stop sign. The sticker caused a self-driving car to misread the stop sign as a speed limit sign and drive right through it. These are the dangers we’ll face if AI is unreliable.
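A toy sketch can show how such a data Trojan is planted. This is not any real attack code, just an illustration of the pattern: stamp a small trigger (the “sticker”) onto a fraction of training examples and flip their labels, so a model trained on the data learns to associate the trigger with the wrong class. The labels and image sizes are made up.

```python
# Toy illustration of dataset poisoning (a "Trojan"). Illustration only.
import numpy as np

STOP, SPEED_LIMIT = 0, 1  # made-up class labels

def add_trigger(image: np.ndarray) -> np.ndarray:
    """Stamp a small bright square (the 'sticker') in one corner."""
    poisoned = image.copy()
    poisoned[:4, :4] = 1.0
    return poisoned

def poison_dataset(images, labels, rate=0.05, seed=0):
    """Add the trigger and flip the label on a fraction of STOP examples."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    stop_idx = np.flatnonzero(labels == STOP)
    chosen = rng.choice(stop_idx, size=int(rate * len(stop_idx)), replace=False)
    for i in chosen:
        images[i] = add_trigger(images[i])
        labels[i] = SPEED_LIMIT  # the Trojan: sticker => "speed limit"
    return images, labels

# Demo on a tiny synthetic dataset of 8x8 "sign images"
rng = np.random.default_rng(1)
images = rng.random((500, 8, 8))
labels = rng.integers(0, 2, size=500)
poisoned_images, poisoned_labels = poison_dataset(images, labels)
```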
My NIST colleagues (Walid Keyrouz, Timothy Blattner and Michael Majurski) are doing considerable work to help detect Trojans, which I hope will make AI more reliable.
NIST researcher Ram Sriram wants to see technology make healthcare more available to more people. Credit: M. King/NIST
Why AI standards matter
Standards will be critical to evaluating AI tools as they become more commonly used. Our research at NIST will help influence voluntary standards in this field, which will help the U.S. lead the world in AI, especially in healthcare and medicine.
Some people have an unfair and inaccurate perception that standards hinder innovation. In fact, standards are just an agreed-upon set of rules that encourage it.
One of my favorite examples of this is in music.
Thousands of years ago, music wasn’t written down. But during the 11th century, a monk named Guido d’Arezzo developed an early system for music notation. Music notation has since become a standard that allows us to play and sing other people’s music from across the world or from earlier times. You can perform any musician’s work as long as you know how to read music.
I’d argue that we’ve had a lot of innovation in music since the 11th century, thanks in large part to this standard.
AI can see stem cells
I plan to use AI in my own healthcare at some point in the future. I’m one of the several million people in the U.S. who have the gene mutation for age-related macular degeneration. Although it doesn’t cause complete blindness, it can make it harder to see up close, read, or drive. Luckily, stem cell implants grown from a patient’s own cells offer a promising solution to preserve vision.
However, during the manufacturing process, these living cells undergo multiple transformations that can create health risks for the patient. AI can assess the quality of the cells and predict which ones will work best for the patient.
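As a rough illustration of the general idea (and emphatically not the actual NIST/NIH method), here’s a sketch of scoring candidate cell images with a model trained on past outcomes. The features and data are entirely made up.

```python
# Sketch: rank candidate cell images by predicted quality using a model
# trained on past outcomes. Generic pipeline, illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_features(cell_image: np.ndarray) -> np.ndarray:
    """Toy features: brightness statistics standing in for real morphology metrics."""
    return np.array([cell_image.mean(), cell_image.std(), cell_image.max()])

# Hypothetical training data: images from past implants and measured outcomes
rng = np.random.default_rng(0)
train_images = [rng.random((64, 64)) for _ in range(200)]
train_quality = rng.random(200)  # stand-in for clinical outcome scores

model = RandomForestRegressor(random_state=0)
model.fit([extract_features(img) for img in train_images], train_quality)

# Rank a new batch of candidate cells by predicted quality
candidates = [rng.random((64, 64)) for _ in range(5)]
scores = model.predict([extract_features(img) for img in candidates])
best = int(np.argmax(scores))
print(f"Best candidate: #{best} (predicted quality {scores[best]:.2f})")
```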
My NIST colleague, Peter Bajcsy, and his collaborators at NIST and the National Institutes of Health have made significant contributions to this technology. Their work has led to a U.S. Food and Drug Administration-approved treatment for age-related macular degeneration that has already shown success in patients.
It’s reassuring to know that this technology is available should I need it to protect my vision in the future. So, helping ensure that AI works optimally could help millions of people, including myself.
Future of AI in healthcare
Many of my family members, including my wife and brother, are doctors. My family is so steeped in the medical field that Sir Alexander Fleming, who discovered penicillin, was a friend of my grandfather’s. Fleming used to have tea with my grandfather every day during his visit to Madras, India.
So, I want to see AI help doctors, not replace them. In this case, I think of AI as “augmented” intelligence, not artificial intelligence. I’d love to see AI get to the point that it can help doctors in as many different areas of healthcare as possible.
I’m so passionate about this because I want to see technology make healthcare more available to more people.
Remember the example at the beginning of this post? I hope that if doctors can use AI as their notetaker, then in addition to getting help with decision-making, they’ll be able to spend less time on paperwork and more time seeing patients.
A framework for managing AI risk
Like all technology, AI comes with risks that should be considered and managed. Learn about how NIST is helping to manage those risks with our AI Risk Management Framework. This free resource can help AI users, including doctors and hospitals, reap the benefits of AI while managing its risks.
Published Jan. 7, 2026, by NIST.
