Paul Laughlin


Why Our Approach to Artificial Intelligence Needs a Reboot

Building AI we can trust

Published: Wednesday, July 7, 2021 - 12:02

Do you see the limitations and over-hyped expectations of today’s approach to artificial intelligence (AI)? Does it need a reboot, a redirection, to finally achieve its potential: an AI that truly understands us and that we can trust?

That is the premise of a great book on the subject, Rebooting AI: Building Artificial Intelligence We Can Trust (Vintage, 2019) by Gary Marcus and Ernest Davis. Marcus is both a professor of psychology at New York University and co-founder of robust.ai. Davis is a professor of computer science at New York University and one of the world’s leading scientists on commonsense reasoning for AI.

A quote on the book’s cover by chess master Garry Kasparov nicely sums up why this book is needed: “Finally, a book that tells us what AI is, what AI is not, and what AI could become if only we are ambitious and creative enough.”

A conceptual understanding of AI, past and present

I recommend this book for both those working in this field and nontechnical leaders. One of the reasons for that endorsement is the care these authors take to avoid jargon and explain both technical terms and the conceptual thinking that underpins them.

Packed with examples to support the authors’ points, the book gives a realistic big-picture description of current progress in AI. Building on that, it highlights the important gap between this progress and the vision many people have of a sci-fi future where machines are at least as intelligent as people. The authors explore the positive progress that has been made in deep learning and the use of available big data. They also point out the limitations of this approach and what else is needed.

Through real-world challenges like reading, navigating changing environments, and learning from general principles, the authors highlight how limited many current models (or parlor tricks) really are. They build on this by considering the human mind and how we think and learn. But this is no theoretical exercise. From a societal point of view, they highlight the need for machines to have common sense, gain deeper understanding, and be trustworthy.

So, despite the justified critique in this book, the authors obviously care about the future of AI and make a case for a fundamental reboot to ensure AI does not (once again) fail to reach its potential.

Topics covered within this book

In just over 200 pages (a very accessible paperback), the authors cover a lot of ground. Here are my takeaways on some of the highlights to explore in each chapter.

Chapter 1: Mind the gap

Chapter one identifies the limitations of current AI. It builds on the Turing test with six useful questions to ask any apparently intelligent system. It paints a challenging but honest picture of progress to date. “The bitter truth is that for now the vast majority of dollars invested in AI are going toward solutions that are brittle, cryptic, and too unreliable to be used in high-stakes problems.”

This chapter also explains three key problems: the fundamental over-attribution error, the illusory progress gap, and the robustness gap. They close by making the case that to move past the current “AI chasm” requires a clear sense of what is at stake, an understanding of why current systems aren’t getting the job done, and a new strategy.

Chapter 2: What’s at stake

This rather chilling chapter brings to life what can go wrong if we don’t make AI smarter. It describes what happens if we let loose systems and robots that are too “brittle, cryptic, and unreliable.” Through considering the real-world jobs we would like to automate, the authors really help illuminate this issue. There’s more at stake than being impressed by AI machines winning games or game shows.

Through examples in practice, they bring to life nine key risks:
• The fundamental over-attribution error
• The lack of robustness
• Relying heavily on the precise details of training data sets
• The perpetuation of obsolete social biases (encoded in data)
• Echo-chamber effect (learning from data generated by AI)
• Gaming the system (relying on publicly generated data)
• Amplification of social bias (due to a combination of the above effects)
• Too easy to end up with the wrong goals
• Risk of being used deliberately to cause public harm

Given the limitations of existing AI systems, it’s a sobering list. The authors make a compelling case for why this must be addressed before greater automation (with insufficient intelligence) puts us all in danger.

Chapter 3: Deep learning and beyond

Drilling down beneath general concepts, this chapter does a good job of introducing the reader to the field of deep learning. As someone who was working with supervised and unsupervised neural networks back in the 1990s, I liked this introduction. It covers the concepts that matter without burying the reader in technical jargon. There are also many useful references to leaders in the field and studies for further reading.

Here the authors also introduce a framework and nomenclature for AI that usefully positions machine learning as a subset of AI, and deep learning as a subset within that. Once again, as someone with a history of AI work prior to the AI winter, I was glad to see the acknowledgement of the role of other approaches. Too many modern writers equate AI with variants of deep learning, ignoring decision trees, probabilistic learning, rule induction, and the other knowledge representations of classic AI. More on that later.
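To show what one of those classic-AI knowledge representations looks like in practice, here is a toy decision tree of my own (the features and rules are invented for illustration, not taken from the book). Unlike the weights of a deep network, every decision it makes can be read and inspected directly:

```python
# A minimal hand-built decision tree: an explicit, inspectable knowledge
# representation from classic AI. The loan-approval features and rules
# here are a made-up illustration, not an example from the book.

def classify(example, tree):
    """Walk a nested tree of ("feature", {value: subtree-or-label})."""
    if isinstance(tree, str):          # reached a leaf label
        return tree
    feature, branches = tree
    return classify(example, branches[example[feature]])

loan_tree = ("income", {
    "high": "approve",
    "low":  ("has_collateral", {
        "yes": "approve",
        "no":  "decline",
    }),
})

print(classify({"income": "high"}, loan_tree))                          # approve
print(classify({"income": "low", "has_collateral": "no"}, loan_tree))   # decline
```

The point is not sophistication but transparency: each branch is a piece of encoded human knowledge that can be audited and corrected, which is exactly what opaque learned models lack.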

This chapter presents a balanced review. It acknowledges the tremendous and exciting progress that has been made in this field. But it is also honest about the risks of limiting AI to this approach. The authors cite three core problems with deep learning and related approaches:
• Deep learning is greedy (huge amounts of data needed to cope with the variability of the real world)
• Deep learning is opaque (we still lack explainable or accountable models)
• Deep learning is brittle (images and language only need tweaking from the familiar to fool it)
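The brittleness point can be caricatured in a few lines of code. This sketch (my own, not the authors’) pushes pattern matching to its extreme: a “model” that memorizes its training examples verbatim is perfect on familiar inputs and helpless the moment a single pixel changes:

```python
# Toy illustration (not from the book): a "model" that memorizes training
# examples verbatim. It scores perfectly on data it has seen, yet changing
# one "pixel" leaves it with no answer -- an exaggerated version of the
# brittleness the authors describe in systems that match rather than understand.

def train(examples):
    """Memorize exact input -> label pairs."""
    return {tuple(x): label for x, label in examples}

def predict(model, x):
    """Return the memorized label, or None for anything unfamiliar."""
    return model.get(tuple(x))

# 3x3 "images" flattened to 9 values: a crude 'X' and 'O'.
X_shape = [1, 0, 1,  0, 1, 0,  1, 0, 1]
O_shape = [1, 1, 1,  1, 0, 1,  1, 1, 1]
model = train([(X_shape, "X"), (O_shape, "O")])

print(predict(model, X_shape))   # "X" -- seen before, so it "works"
perturbed = X_shape.copy()
perturbed[0] = 0                 # flip one "pixel"
print(predict(model, perturbed)) # None -- no true generalization
```

Real deep networks interpolate far better than this lookup table, but the authors’ argument is that the failure mode is the same in kind: performance degrades sharply once inputs drift from the training distribution.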

Chapter 4: If computers are so smart, how come they can’t read?

I hadn’t given this question serious thought before reading this book. But the authors bring the topic to life as a way of highlighting the limitations of the current AI approach. When you consider the subtlety and variability of how language can be used and the questions we can ask, you see the issue. Once a system relies on past examples and pattern matching, rather than understanding the content, it is easily fooled.

The authors walk through numerous examples of questions that are too difficult for current AI systems (text- or voice-based). They also explain why the current approach will never get there. Deep learning systems will never be able to truly learn from the wealth of humanity’s written record, not while they rely on matching past examples rather than comprehending context and composition.
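A toy example makes that trap concrete. The scoring function below is my own illustration, not anything from the book: it “answers” questions by surface word overlap, and because negating a sentence barely changes its words, the matcher actually prefers the wrong, negated answer:

```python
# Hypothetical sketch of the reading problem the authors describe: a
# question-answering "system" that matches surface keywords instead of
# comprehending the sentence. Word overlap cannot distinguish a statement
# from its negation.

def keyword_score(question, candidate):
    """Fraction of the question's distinct words found in the candidate."""
    q = set(question.lower().split())
    c = set(candidate.lower().split())
    return len(q & c) / len(q)

question = "did the dog bite the mailman"
fact     = "the dog bit the mailman"
negated  = "the dog did not bite the mailman"

# The negated sentence shares MORE words with the question ("did", "bite"),
# so a pure pattern matcher rates the wrong answer more highly.
print(keyword_score(question, fact))     # 0.6
print(keyword_score(question, negated))  # 1.0
```

Modern language models are vastly more sophisticated than word overlap, but the authors’ claim is that matching against past examples, however sophisticated, is not the same as understanding what a sentence asserts.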

It’s this chapter that begins to show how an older approach to AI has a role to play. How work on knowledge representations and encoding existing human understanding from classic AI is part of the solution. The authors rightly suggest that both the power of memory in deep learning and the cognition models from classic AI are needed. They also highlight a missing piece: common sense.

Chapter 5: Where’s Rosie?

Now the book turns to the world of robots (including autonomous cars). If the inability of AI systems to read and understand was concerning, some of these examples are terrifying.

The authors review the progress that has been made in robotics. Examples of progress on localization and motor control are both impressive. But they then go on to highlight important gaps between current capability and what is needed. They explain the importance of robots being able to:
• Use situational awareness (in a changing world)
• Figure out what is the best thing to do now (fast enough to react)
• Possess a general-purpose understanding to apply to unfamiliar challenges

A few real-world applications really bring to life the risks such limitations pose to humans. The robot caregiver that drops an elderly patient because the patient reacts in a way it has never seen before. The autonomous car that protects its driver at the cost of many other lives. We don’t have general-purpose domestic robots yet because robots are not yet flexible enough to cope with the real world.

Chapter 6: Insights from the human mind

OK, if AI can’t yet compete with a young child when it comes to reading and navigating the real world, can AI learn from how we think? This chapter explores the potential of our current understanding of the human mind and cognition. The authors offer 11 clues to help AI, drawn from the cognitive sciences:
• There are no silver bullets (multiple improvements are needed)
• Cognition makes extensive use of internal representations (mental models)
• Abstraction and generalization play an essential role in cognition
• Cognitive systems are highly structured
• Even apparently simple aspects of cognition require multiple tools
• Human thought and language are compositional
• A robust understanding of the world requires bottom-up and top-down information
• Concepts are embedded in theories
• Causal relations are a fundamental part of understanding the world
• We keep track of individual people and things
• Complex cognitive creatures aren’t blank slates

Chapter 7: Common sense and the path to deep understanding

Here the book turns to a more constructive mindset. It explores the options for, and the progress made in, doing what was advised earlier: learning from human cognition and building deeper understanding and common sense into AI models. In summary, it’s difficult, and progress has been slow.

The authors review AI’s progress with representations of knowledge and developing an ability to learn. A helpful way to think about this is the authors’ use of Kant’s three knowledge frameworks: the ability to recognize and reason about time, space, and causality. It is insightful to see how much humans rely on all three, millions of times a day, and yet how challenging each remains for AI.

Once again, we see the benefit of blending the best thinking that has happened within classic AI and technological progress within deep learning. The authors recommend:
• Develop systems that can represent those three knowledge frameworks (time, space, causality)
• Embed those within an architecture that can be freely extended to new knowledge
• Develop powerful reasoning techniques (able to work top down and bottom up)
• Connect these to perception, manipulation, and language
• Build the above into rich cognitive models of the experienced world
• Construct a human-inspired learning system (i.e., learn from prior knowledge and available sources)
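As a rough sketch of that hybrid direction (the scenario, function names, and rule below are my own invention, not the authors’ design), the snippet pairs a stand-in statistical learner with one explicit piece of causal-temporal knowledge, which vetoes an inference the pattern matcher would happily make backwards:

```python
# Illustrative sketch (mine, not the authors') of blending a statistical
# learner with explicit knowledge: a pattern-based guess is checked against
# a hand-coded temporal constraint on causality before being accepted.

def statistical_guess(event_pair):
    """Stand-in for a learned model: sees co-occurrence, guesses causation."""
    return "causes"

def temporal_rule(cause_time, effect_time):
    """Explicit knowledge: a cause cannot happen after its effect."""
    return cause_time <= effect_time

def hybrid_judgement(pair, cause_time, effect_time):
    guess = statistical_guess(pair)
    if guess == "causes" and not temporal_rule(cause_time, effect_time):
        return "rejected by temporal rule"
    return guess

# Rain at 9:00, wet streets at 9:05 -> a plausible causal claim.
print(hybrid_judgement(("rain", "wet streets"), 900, 905))
# Wet streets at 9:05 "causing" rain at 9:00 -> correlation alone would
# fool the learner; the explicit rule blocks the backwards inference.
print(hybrid_judgement(("wet streets", "rain"), 905, 900))
```

The design choice mirrors the authors’ recommendation: the learner supplies candidate patterns cheaply, while the explicit reasoning layer, here reduced to a single rule about time, keeps those candidates consistent with how the world works.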

No one said that building deep understanding was easy.

Chapter 8: Trust

In this chapter we are back where we began, but seeing the situation anew. Why does this matter, and what else is needed for AI to truly help us? In a word: trustworthy AI. So many examples used throughout this book show how critical it is that our increasing reliance on AI and automation is matched by AI that we can trust.

The authors make a number of points here that don’t get enough airtime. They go beyond the regular chatter about ethics policies, inclusion, and the risks of social biases in data and deep learning models. For instance, they highlight the need for AI development to learn from good engineering practice: how good engineers design solutions to be stronger than their required limits, how they design for failure, and how they incorporate fail-safes. Such safety-first thinking is needed.

Trustworthiness can also be aided by best practices from wider software engineering: the use of modular design for improved understanding and future flexibility, good metrics for testing, and documentation and the ability to debug code. In addition to all this, there’s also the need to encode ethical values into the reasoning of AI systems. Not just filtered data, but something closer to Isaac Asimov’s Three Laws of Robotics. These all must be supported and challenged by a regulatory system that is up to date and fit for its purpose (i.e., knowing what questions to ask).

Why this book should encourage data leaders

I’m encouraged by this book. It does indeed advocate a reboot but not a withdrawal. Far from being an argument against the use of AI, it is a challenge to “build back better.” To use the best thinking from classic AI and cognitive sciences, coupled with the innovations in deep learning and robotics. As the authors themselves put it: “The only way out of this mess is to get cracking on building machines equipped with common sense, cognitive models, and powerful tools for reasoning.”

So, as well as being a wake-up call on the need to change AI’s developmental direction, this book should be an encouragement to data science leaders. Set higher goals for your AI capability. Think bigger, and engage more with what the real world needs rather than the chosen focus of current technology solutions. A number of resources at the end of the book can help you take that forward, in particular a curated list of suggested readings that the authors navigate with you. There’s also a comprehensive set of footnotes and endnotes, together with a lengthy bibliography and topic index.

So, there you are. The book is a treasure chest to help AI and data leaders get started with their own reboot. I hope you take up the challenge to change our approach to AI into one that truly helps humanity.

First published June 11, 2021, on CustomerInsightLeader.com.


About The Author

Paul Laughlin

Paul Laughlin is a speaker, writer, blogger, Customer Insight enthusiast, and the founder and managing director of Laughlin Consultancy.