How to Unlock the AI Promise

Standards under development seek to make AI practical for more settings

Published: Wednesday, December 4, 2019 - 13:01

As artificial intelligence (AI) becomes increasingly ubiquitous in various industry sectors, establishing a common terminology for AI and examining its various applications is more important than ever. In the international standardization arena, much work is being undertaken by ISO/IEC’s joint technical committee JTC 1—Information technology—Subcommittee SC 42—Artificial intelligence, to establish a precise and workable definition of AI. Through its working group WG 4, SC 42 is looking at various use cases and applications. The convener of SC 42/WG 4 is Fumihiro Maruyama, senior expert on AI at Fujitsu Laboratories.

The working group is currently examining a total of 70 use cases. Health, for example, is a fascinating area to explore. Maruyama himself describes one use case in which a program builds a “knowledge graph” from 10 billion pieces of information taken from existing research papers and databases in the medical field. The application then attempts to form a path representing the likely progression from a given gene mutation to the disease that deep learning has predicted from that mutation.
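
Forming such a path is, at bottom, a search over a graph of entities and relations. As a minimal sketch (not the actual system; the entities and links below are illustrative), a breadth-first search can recover the shortest chain from a mutation to a disease:

```python
from collections import deque

def find_path(graph, start, goal):
    """Breadth-first search for the shortest path through a
    knowledge graph, given as a dict mapping each entity to the
    entities it links to. Returns the chain of entities, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical miniature graph: a mutation disrupts a protein,
# which perturbs a pathway associated with a disease.
kg = {
    "BRCA1 mutation": ["BRCA1 protein loss"],
    "BRCA1 protein loss": ["DNA repair pathway defect"],
    "DNA repair pathway defect": ["breast cancer"],
}
path = find_path(kg, "BRCA1 mutation", "breast cancer")
```

A production system would search billions of weighted edges and rank candidate paths, but the underlying idea is the same.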

Solutions for health

Radouane Oudrhiri is the chief data scientist at Eagle Genomics, whose work involves research undertaken “in silico”—that is, using primarily computer- or data-driven innovation. One area of focus is the microbiome, which comprises all of the genetic material of the microorganisms (e.g., bacteria, viruses, and fungi) within a given community, such as the human gut, mouth, or skin. Microbiomes aren’t limited to humans and other animals: oceans, soils, and rivers all host microbiome communities that affect entire ecosystems. Microbiome data are highly complex: high-dimensional and compositional. Oudrhiri’s colleagues analyze them using AI and machine-learning computational tools to spot associations that humans simply cannot. This radically improves productivity and enables revolutionary discoveries, identifying new, sustainable ingredients and therapeutic targets, and informing safer, more efficient industry practices.

AI technologies have been used to analyze human tumors for some time now, but as Frank Rudzicz, the Canada representative for SC 42, director of AI at Surgical Safety Technologies, and associate professor of computer science at the University of Toronto, points out in an interview for this article, this is just one application among several. For instance, an application has recently been deployed to identify early-onset dementia in elderly patients. Residents at care facilities, normally assessed by a doctor for 15 minutes once every six months, have instead been issued a computer tablet and asked to respond verbally to a series of questions. The program then alerts the medical team if anything seems awry, such as a change in the patient’s voice patterns, or if the person seems unable to spot obvious relationships in an image of a family group.

Oudrhiri has also been working with a company that has developed an AI solution initially designed to make shoes smarter by collecting biomechanical metrics and measuring aspects such as shoe usage and sporting performance. It works via a chip inserted into the sole. The application has been so successful that advances in technology will soon allow it to detect the likelihood of developing diseases, such as Parkinson’s, simply by analyzing the way an individual walks.

The AI of everything

Health, of course, is not the only field that the work of SC 42 will affect. Maruyama also cites the example of an AI program that uses ultrasonic waves to inspect wind turbines. The program flags any portions of the turbines that may have defects, clearing the way for human inspection experts to make an informed choice about any subsequent course of action. Crucially, because the program handles the initial inspection, human experts are free to inspect more turbines.

Intelligent transportation is another sector that already relies heavily on AI. Mahmood Hikmet, head of research and development at Ohmio Automotion, a company that focuses on intelligent transportation systems, points to lidar technology, which measures distance to an object using laser light rather than sound or radio waves. If several of these lasers are stacked on top of one another and spun at high speed, the result is a 3D “point cloud” showing how far away surrounding objects are, refreshed “tens or hundreds of times a second.” It’s an application that can be used in driverless cars, with the capability even to distinguish between individual blades of grass.
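
The geometry behind such a point cloud is simple: each stacked laser sits at a fixed vertical (elevation) angle, the spin sweeps the horizontal (azimuth) angle, and every return converts to a 3D point by basic trigonometry. A minimal sketch with illustrative data, not Ohmio’s implementation:

```python
import math

def lidar_to_point_cloud(scans):
    """Convert lidar returns to 3D points.

    scans: iterable of (elevation_deg, azimuth_deg, distance_m)
    tuples, one per laser return. Each stacked laser has a fixed
    elevation angle; spinning sweeps the azimuth.
    """
    points = []
    for elev_deg, azim_deg, dist in scans:
        elev = math.radians(elev_deg)
        azim = math.radians(azim_deg)
        x = dist * math.cos(elev) * math.cos(azim)
        y = dist * math.cos(elev) * math.sin(azim)
        z = dist * math.sin(elev)
        points.append((x, y, z))
    return points

# A single return straight ahead at 10 m lands on the x-axis.
cloud = lidar_to_point_cloud([(0.0, 0.0, 10.0)])
```

Repeating this conversion for every laser, at every azimuth step, tens or hundreds of times a second is what yields the dense 3D picture described above.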

Hikmet also highlights crowd-counting AI for driverless cars, a predictive analysis application drawn from data on the infrastructure side of intelligent transportation systems (as opposed to the cars themselves). This involves cameras that monitor people walking back and forth, tracking them throughout an entire shot, while predicting their likely “route” as they interact with others. These data are then picked up by the car and used to prevent any possible collisions.
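
A toy version of that kind of trajectory prediction: extrapolate a pedestrian’s next positions from the last two observed points, assuming constant velocity. Real systems use far richer motion and interaction models; the function below is purely illustrative.

```python
def predict_positions(track, steps, dt=1.0):
    """Predict future (x, y) positions of a tracked pedestrian.

    track: observed positions, most recent last. The velocity is
    estimated from the last two points and assumed constant; the
    next `steps` positions are extrapolated at intervals of dt.
    """
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k)
            for k in range(1, steps + 1)]

# A pedestrian moving 1 m per step along x: predict two steps ahead.
future = predict_positions([(0.0, 0.0), (1.0, 0.0)], steps=2)
```

A car receiving such predictions can then check whether any forecast position intersects its own planned path.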

Behavior training for machine learning

YOLO—you only look once—is object-recognition technology that detects and distinguishes multiple objects in an image in a single pass. It has obvious applications in safety and security contexts. Behavioral cloning is another field of AI, in which a machine learns a series of tasks through reinforcement training. It’s “a way of punishing and rewarding a neural network for doing things right or wrong,” Hikmet explains. From these reward and punishment signals, the network learns how it is supposed to “behave.”
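
The punish-and-reward loop can be caricatured in a few lines. Below, the “network” is reduced to a single probability of choosing one of two actions, nudged toward rewarded actions and away from punished ones; this is a bare-bones, REINFORCE-style update offered only as an illustration of the feedback loop Hikmet describes, not any real system.

```python
import random

def train(episodes=2000, lr=0.1, seed=0):
    """Toy illustration of learning from reward/punishment signals.

    The 'policy' is a single probability p of choosing action 1.
    Action 1 earns reward +1; action 0 is punished with -1. Each
    update nudges p toward rewarded actions and away from punished
    ones, so p should climb toward its upper clip of 0.99.
    """
    rng = random.Random(seed)
    p = 0.5
    for _ in range(episodes):
        action = 1 if rng.random() < p else 0
        reward = 1.0 if action == 1 else -1.0
        # Move p toward the taken action when rewarded,
        # away from it when punished.
        p += lr * reward * (action - p)
        p = min(max(p, 0.01), 0.99)  # keep p a valid probability
    return p

p = train()
```

After training, the learned probability of the rewarded action is close to 1, which is the “behavior” the reward signal was shaping.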

Venture capital is key to certain aspects of Oudrhiri’s work. One exciting area of research seeks to digitize and systematize nothing less than “the entire entrepreneurship process.” By gathering data throughout the venture life cycle, identifying innovation challenges, and categorizing information, the platform provides predictive models of a company’s performance, growth potential, and valuation. A risk profile is thereby established, assisting in the selection process and throughout the startup’s evolution. Until now, information of this kind has been collected through human responses to surveys, which are aggregate in nature, do not lend themselves to predictive modeling, and often lead to unwittingly biased conclusions. After all, it is only natural that company owners want their projects to succeed.

These examples are as ingenious as they are effective. And yet the vast majority of us are unlikely to have heard of these specific AI technologies, still less to have an awareness of their impact. Current AI solutions are often developed in silos and built for very specialized applications; their true power will be properly realized when they are considered in a holistic framework, such as the horizontal frameworks SC 42 is developing.

A role for standards

For this and other reasons, international standards are now under development. Oudrhiri suggests that standards are needed to “cut through the hype” so that fears and objections to AI can be either taken on board or rebutted as groundless. Radical ideas for AI applications are often promoted with great fanfare in the media and other public forums—for better or worse—yet, as Maruyama points out, many, if not most, of these ideas never get past the proof-of-concept phase.

Consumers do need to be protected—from physical harm, certainly, but also from companies that use the phrase “artificial intelligence” as a way of promoting a product simply to spike its share price. And given that AI data are at the intersection of many different fields—software engineering, neuroscience, decision making—it is hugely important that a common framework is developed so that consumers, producers, and regulators can speak a common language.

This is not as ridiculous or unlikely as it first sounds. Experts talk of “AI winters,” in which previous generations of AI technology peaked, only to fall away because of misplaced experimentation and the consequent withdrawal of funding. The same could happen again and undo much of today’s progress.

State of the practice

It is precisely because AI technologies are developing so quickly that International Standards are so needed. In the words of Oudrhiri, they should focus on the “state of the practice, not the art.” Subcommittee SC 42 has already produced draft technical reports, with standards under development. It is working with technical committee ISO/TC 69, “Applications of statistical methods,” on mapping terminologies and concepts across statistics, software engineering, AI, data science, and operational research. An entire working group—SC 42/WG 3—is looking solely at trustworthiness.

Maruyama believes the best approach to developing International Standards is to converge on a limited number of alternatives and to “focus on where technology is already stable.” A common language and criteria are being created to help projects get beyond the proof-of-concept stage. Another area of focus is describing the process and life cycle for developing AI applications. The standards will also help capture broad consumer requirements, including the ethical and societal considerations raised by use cases and applications. A third area focuses on model validation. This work is highly technical and statistical in nature, but it will one day ensure that programs and machines do what they are supposed to do.


About The Author



The International Organization for Standardization (ISO) is the world’s largest developer and publisher of international standards. ISO is a network of the national standards institutes of 162 countries, one member per country, with a Central Secretariat in Geneva, Switzerland, that coordinates the system. ISO is a nongovernmental organization that forms a bridge between the public and private sectors. ISO enables a consensus to be reached on solutions that meet both the requirements of business and the broader needs of society.