
Theodoros Evgeniou

The Pivotal Management Challenge of the AI Era

Managers and leaders have nothing to fear from AI—except missing out

Published: Monday, April 22, 2019 - 12:03

History indicates that major technological changes can take about half a century to go from the first lab sketches to widespread use in society. Alan Turing first proposed the Turing machine, laying the foundations of computation, in 1936; the first general-purpose “Turing-complete” system was built in 1945; and “The Computer” was only named “Machine of the Year” by Time in 1982, about half a century later.

The foundations of the internet were laid out during the 1960s, but consumers didn’t get to broadly use and benefit from it until the mid- to late 1990s. For most people, artificial intelligence was strictly a sci-fi concept until recent years.

Yet, if you go by the above timeline, the AI revolution may actually be running more than a decade late. It has been more than 60 years since the famous 1956 Dartmouth workshop with Allen Newell, Herbert Simon, John McCarthy, and Marvin Minsky—the last of them passing away in 2016—during which the first AI program was officially unveiled. But statistical learning theory, the foundation of modern AI and machine learning, arrived a little ahead of the 50-year deadline. The field (whose luminaries included Vladimir Vapnik, Tomaso Poggio, and Steve Smale) cross-pollinated statistics, mathematics, and computer science to produce a flowering of breakthroughs, leading directly to today’s AI revolution.

AI and its effect on business, governments, and society may today be where physics was at the turn of the previous century. After the breakthroughs in physics of that era, and later in biology and other fields, the world became very different from what it had been for thousands of years before.

At the moment, companies and countries are scrambling to unleash AI’s pent-up transformational potential ahead of competitors. Due to the rapid pace of technological change and the increasingly winner-take-all nature of the innovation economy, the victors of the AI race may capture the lion’s share of the spoils—even if they win by only a hair.

A recipe for winning the AI race

So who has the advantage in the AI race? Managing technology adoption is a very old topic. Philosophers, sociologists, management scholars, economists, and engineers have pondered this matter at least since the invention of the computer, if not long before in the case of earlier technologies such as the mechanization of mining.

Surveying this body of work, what stands out is a set of people-focused factors that determine whether a new technology will thrive in a given context. At the organizational level, it is essential for leaders to foster technological skills; IT infrastructure and governance; data literacy; an innovation culture; norms that adhere to best practices; and, most important, the ability to align the capabilities of new technologies with the needs of the core business. At the country level, the same competencies apply, but with two important additions: regulations that keep pace with technological developments, such as the General Data Protection Regulation (GDPR), and an education system that is strong not only in science, technology, engineering, and math (STEM), but across the board.

At this point, a skeptic may ask whether an AI race can be won by humans in the first place. Much of the media discourse about AI has centered on the idea of robots taking work away from humans. Is AI simply a Pandora’s box that, when opened, dooms us to a future of irrelevancy? In response, I would point to the financial sector. Computers now execute trades in the financial markets at speeds that were impossible to conceive just a couple of decades ago. This has driven trading costs and bid-ask spreads down to levels never seen before, making markets—the arteries of our modern economic system—impressively efficient. Indeed, some jobs have been displaced, but major trading decisions are still made, and communicated to clients, by humans. The main difference is that people are doing so under increasingly different conditions: Trading floors have long been a leading example of the modern workplace, where humans and machines work together in increasingly intertwined ways.

Kasparov’s law

Garry Kasparov, the Russian chess grandmaster who has been closely working with machines since his famous defeat by Deep Blue in 1997, summarized his 20 years of experience with what’s now called “Kasparov’s law”: “Weak human + machine + better process beats strong human + machine + inferior process.” The key component in this law is the “better process.”

Management and leadership scholars spent much of the 20th century developing the principles of “people managing people.” The central management question of the future will be “How can we get humans and machines to best work together?” It is not about AI vs. people; it is about AI and people. At INSEAD, we are now working on this question in contexts ranging from finance and investment decision making, to consumer understanding and purchasing, to medical diagnosis and healthcare.

Can human work be massively enhanced with the right use of AI? Absolutely. Can humans themselves, in a more science-fiction vein, be enhanced by the scientific and technological advances AI drives in fields like computational biology and computational neuroscience? That remains to be seen. You can choose to view this as scary or hopeful, but it is definitely not boring.

First published April 8, 2019, on the INSEAD Knowledge blog.

About The Author

Theodoros Evgeniou

Theodoros Evgeniou is a professor of decision sciences and technology management at INSEAD, and academic director of INSEAD elab, the school’s research and analytics center, which focuses on data analytics for business.