In April 2018, the U.S. Food and Drug Administration (FDA) permitted the marketing of the first medical device to use artificial intelligence (AI). The device, called IDx-DR, is a software program that uses an AI algorithm to analyze images of the eye taken with a retinal camera called the Topcon NW400. A medical professional uploads the digital images of the patient’s retinas to a cloud server where the IDx-DR software is installed. The software then provides the doctor with one of two results: Refer the patient to an eye care specialist, or rescreen in 12 months. Overall, the technology accelerates diagnosis and treatment of diabetes-related damage to the retina. This is one example of how medical devices using AI are traversing regulatory pathways into mainstream healthcare.1
The concept of AI has been evolving for decades. An early milestone was a 1943 model of artificial neurons capable of performing logical functions. During the 1950s, computer scientists developed programs that could play checkers and chess, and later versions of these programs could best skilled human players. While this might be a highlight at parties, the innovation went far beyond board games.2
AI is designed to simulate human intelligence processes, so it is highly data-centric. It acquires information, determines how to analyze and use it, and self-corrects as new data arrive. An AI algorithm has a programmed ability to keep learning and adapting from the data it receives, making it useful for processes that involve analyzing large amounts of information.
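The learn-and-adapt loop described above can be sketched in a few lines of Python. The class and names below are invented for illustration only: a toy estimator that keeps a running prediction and self-corrects each time a new observation arrives.

```python
# Minimal sketch of the "learn and self-correct" loop described above.
# All names here are illustrative, not from any particular AI framework.

class RunningEstimate:
    """Keeps a prediction up to date as new observations arrive."""

    def __init__(self):
        self.count = 0
        self.estimate = 0.0

    def update(self, observation: float) -> float:
        # Incorporate the new data point and self-correct the estimate
        # (an incremental running mean).
        self.count += 1
        self.estimate += (observation - self.estimate) / self.count
        return self.estimate

model = RunningEstimate()
for reading in [10.0, 12.0, 11.0, 13.0]:
    model.update(reading)
print(model.estimate)  # running mean of all data seen so far
```

Real AI systems update far more elaborate models, but the principle is the same: every new piece of data nudges the system's output, which is exactly why the quality of that data matters.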
For example, AI is well suited to handle repetitive work processes and can provide another layer of decision support to mitigate errors. In the healthcare arena, research firm Frost & Sullivan estimates that AI has the potential to improve patient outcomes by 30 percent to 40 percent while reducing treatment costs by up to 50 percent.3
Challenges and risks with AI
Despite AI’s potential to improve the quality and efficiency of healthcare, the technology is far from flawless. Because AI algorithms are developed and tuned by humans, they function and provide output according to the data they receive. A significant challenge with AI, therefore, is data quality: The technology cannot provide value if it is fed incomplete, biased, or inaccurate data.
Across the life sciences value chain, data are rapidly becoming an organization’s most valuable asset and a catalyst for nearly unlimited possibilities to create business value. Still, because data are an essential component of modern technology and innovation, there are growing concerns about their responsible use, particularly when that use involves AI in healthcare. The impact of AI will be enormous; larger still is our obligation to approach the technology responsibly. It is essential to ensure that the data used are of high quality.
Ethical data science requires careful consideration of data’s potential and impact in the various contexts where they are gathered and used. Mishandling data can undermine trust in the entire digital ecosystem. In the life sciences industry, where the mantra of healthcare practitioners is “first, do no harm,” responsible data handling is essential.
The World Health Organization (WHO) recently published the guidance “Ethics and Governance of Artificial Intelligence for Health.” The organization agrees that new technologies using AI hold great promise to improve diagnosis, treatment, health research, and drug development, but it cautions that ethical challenges must be addressed. The guidance specifically flags concerns with data quality and inclusivity, noting the likelihood that data used to train algorithms will carry one or more systemic biases, including under-representation by gender, age, race, culture, or other characteristics. These biases become ingrained in AI models and propagate through the resulting algorithms. In a nutshell, biased data fed into AI return biased conclusions, resulting in decisions based on skewed information.4
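The WHO's concern can be made concrete with a toy example. In the sketch below, a screening model is fitted to pooled data in which one group outnumbers another nine to one; the groups, measurements, and thresholds are entirely invented, and the point is only that the fitted cutoff inherits the majority group's pattern.

```python
# Hypothetical illustration of biased training data: a model fitted to
# data dominated by one group carries that group's pattern into its
# conclusions. All data and threshold values here are invented.

def fit_threshold(samples):
    """Pick the single cutoff that minimizes errors on the pooled data."""
    best = None
    for c in sorted({x for x, _ in samples}):
        errors = sum((x >= c) != label for x, label in samples)
        if best is None or errors < best[1]:
            best = (c, errors)
    return best[0]

# Group A (90 records): the disease signal appears at measurement >= 5.
group_a = [(x, x >= 5) for x in range(10) for _ in range(9)]
# Group B (10 records): the same disease presents earlier, at >= 3.
group_b = [(x, x >= 3) for x in range(10)]

cut = fit_threshold(group_a + group_b)  # dominated by group A's pattern

def error_rate(samples):
    return sum((x >= cut) != label for x, label in samples) / len(samples)

print(error_rate(group_a), error_rate(group_b))  # 0.0 vs. 0.2
```

The pooled fit is perfect for the well-represented group and misses one case in five for the under-represented one, even though neither group's data contain any errors. The bias comes purely from who is missing from the training set.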
“We have to recognize that getting diversity in the training of these [AI] algorithms is going to be incredibly important,” says Isaac Kohane, head of Harvard Medical School’s Department of Biomedical Informatics. “Otherwise, we will in some sense be pouring concrete over whatever current distortions exist in practice, such as those due to socioeconomic status, ethnicity, and so on.”5
Ethical data policies and practices
Mishandling data can have a far-reaching impact on the quality of a product. Beyond costly delays, corrective actions, and the possibility of a recall, a medical product’s success can unravel simply because the company was not proactive about ensuring data quality during product development.
Effective data stewardship requires people and technology working in unison to foster a culture of quality standards and best practices. The Healthcare Information and Management Systems Society (HIMSS) outlined core principles addressing the ethical use of data in AI technologies:
• Reliability and safety. In a clinical setting, data handling could impact research and clinical decision making—possibly resulting in differential diagnoses. Research and collaboration among industry, government, and academia are needed to develop data stewardship guidelines in AI technology. Regardless of the techniques used for gathering and maintaining data, effective AI depends on quality data collected from reliable sources.
• Fairness and inclusivity. AI systems should treat patient data in a balanced and fair way and should not affect similar groups of people in different ways. To eliminate bias in research and clinical practice, inclusivity also must be incorporated into the design of AI systems.
• Transparency and accountability. If AI systems are used to help make decisions that affect patients’ health, how those decisions are made must be transparent to key stakeholders. The concept of explainable AI makes it easier to identify and raise awareness of potential bias, errors, and unintended outcomes.6
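The transparency principle can be sketched in code. Assuming a simple linear risk score (the weights and patient fields below are hypothetical, not from any real clinical model), reporting each input's contribution alongside the decision gives stakeholders something concrete to inspect.

```python
# Minimal sketch of the "explainable AI" idea: alongside a decision,
# report how much each input contributed to it. The model, weights,
# and patient fields below are all hypothetical.

WEIGHTS = {"age": 0.03, "blood_pressure": 0.02, "glucose": 0.05}

def score_with_explanation(patient: dict):
    """Return a risk score plus a per-feature contribution breakdown."""
    contributions = {k: WEIGHTS[k] * patient[k] for k in WEIGHTS}
    total = sum(contributions.values())
    # Surfacing the breakdown lets stakeholders spot a feature that
    # dominates decisions for no clinically sound reason.
    return total, sorted(contributions.items(), key=lambda kv: -kv[1])

total, explanation = score_with_explanation(
    {"age": 60, "blood_pressure": 130, "glucose": 110}
)
print(round(total, 2), explanation[0][0])  # score and top contributor
```

Production explainability tools are far more sophisticated, but the design choice is the same: the system exposes its reasoning rather than returning a bare verdict.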
Ultimately, regulatory bodies will define and publish guidelines regarding data handling, which will include procedures for holding companies accountable for failing to comply. In the meantime, companies are encouraged to be proactive in establishing a data-handling framework and processes that will help them align with the forthcoming regulations. A robust framework and processes put into practice by companies in regulated environments could serve as the basis for new regulations for AI, making it easier for these companies to comply.
Modernizing data management
The data governance principles outlined by HIMSS highlight reliability, inclusivity, and transparency. A common scenario that hamstrings data quality is siloed data. When disparate business units maintain their own databases, stakeholders cannot effectively access and share data, which leads to duplicate, incomplete, and unreliable information. Maintaining a single source of truth strengthens a company’s ability to ensure data reliability, inclusivity, and transparency.
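As a simplified sketch of the single-source-of-truth idea, the snippet below consolidates duplicate patient records from two hypothetical departmental silos, letting the most recently updated value win for each field. The record layout and field names are invented for illustration.

```python
# Hypothetical sketch of consolidating siloed records into a single
# source of truth: merge per-department copies of the same patient,
# keeping the most recently updated value for each field.

from datetime import date

def consolidate(records):
    """Merge duplicate patient records; the newest value per field wins."""
    merged = {}
    for rec in sorted(records, key=lambda r: r["updated"]):
        entry = merged.setdefault(rec["patient_id"], {})
        for field, value in rec.items():
            if field not in ("patient_id", "updated") and value is not None:
                entry[field] = value  # later records overwrite earlier ones
    return merged

lab_silo = [{"patient_id": "P1", "updated": date(2021, 1, 5),
             "glucose": 6.1, "phone": None}]
clinic_silo = [{"patient_id": "P1", "updated": date(2021, 3, 2),
                "glucose": 5.8, "phone": "555-0100"}]

truth = consolidate(lab_silo + clinic_silo)
print(truth["P1"])  # one reconciled record per patient
```

Real master-data-management systems add matching rules, audit trails, and conflict review, but the goal is the one described above: every stakeholder reads from the same reconciled record rather than a departmental copy.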
Implementing solutions and processes for ensuring data quality is necessary for scalability and ongoing relevance in the industry. Organizations that embrace digitization for creating and curating data are better positioned to manage the challenges and risks inherent in AI. Modernizing data handling enables companies to identify and mitigate issues that could otherwise lead to inaccurate AI outcomes.
References
1. U.S. Food and Drug Administration (FDA). “FDA Permits Marketing of Artificial Intelligence-Based Device to Detect Certain Diabetes-Related Eye Problems.” News release, April 11, 2018.
2. Press, Gil. “A Very Short History of Artificial Intelligence.” Forbes, Dec. 20, 2016.
3. Ahuja, Abhimanyu S. “The Impact of Artificial Intelligence in Medicine on the Future Role of the Physician.” National Center for Biotechnology Information (NCBI), Oct. 4, 2019.
4. World Health Organization (WHO). “Ethics and Governance of Artificial Intelligence for Health.” June 28, 2021.
5. Powell, Alvin. “AI Revolution in Medicine.” The Harvard Gazette, Nov. 11, 2020.
6. McCarthy, Molly K. “Artificial Intelligence in Health: Ethical Considerations for Research and Practice.” The Healthcare Information and Management Systems Society (HIMSS), June 17, 2019.
Comments
AI Data Handling and Ethical Framework for AI
Hi Mr Talpade, thanks for mentioning our IDx-DR product! Our (very) recent paper describing an ethical framework for AI with regulatory considerations - 2nd, 3rd and 4th authors are all FDA Directors - may be of interest to your readers:
https://www.aaojournal.org/article/S0161-6420(21)00643-6/fulltext