• February 1, 2017
  • By Paul Greenberg, founder and managing principal, The 56 Group

Separating AI Reality from AI Hype


I stopped forecasting trends several years ago—even though, I must admit, I was pretty good at it—because, ultimately, what did it matter what I forecast? Who paid enough serious attention to act on what I saw coming? (My best guess: no one.) I realized at the time that I had to, instead, suss out trends that were in progress and then see what was valuably real about them, what was valueless hype, and what was hype on the verge of becoming real.

I bring this up because I’m about to do the latter, for I am very excited about what I see both now and on the horizon with artificial intelligence. It will genuinely benefit business and at the same time benefit customers perhaps even more. But there is much hype around it, too, and I want to dispel at least some of that.

Let’s start with the basics: what it is.


AI is not a brand-new science, nor is it a brand-new quest. The term itself has been around since 1956, coined by John McCarthy (then at Dartmouth, later famously of Stanford) in connection with what is now called the Dartmouth Conference. His definition was straightforward: “the science and engineering of making intelligent machines.”

A more contemporary and, for our purposes, better version comes from Wikipedia’s article on AI: “‘the study and design of intelligent agents,’ where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.”

Most definitions mirror this one closely, so it’s what I’ll go with here. Keep in mind that I am only focusing on the business applications of AI—not the social/humanitarian applications, which are myriad.

Why is AI so big now? Many companies claim to have been researching it, or using it in their technologies, for decades. What makes the present moment different is scale: we have reached a point in technology’s evolution where we can no longer usefully handle the volume of data and communications without advances like AI. Scaling to billions of users communicating and consuming data has changed not only our approach but also people’s expectations of interactions, which now demand responses that are not just correct but timely and high-caliber. This increase in both the nature and the volume of customer demand pushes AI from being thought of as a point solution for particular problems to, at least in business, a strategic effort to meet much larger business needs.

So if it is becoming more of a business requirement than a nice-to-have—at least at the enterprise level—what can it really do? And what can’t it do?


To situate this properly, we should remove at least some of the hype. The biggest and most persistent science fiction around AI is just that: science fiction. “AI is going to replace human beings, and it is dangerous because it will learn to think independently.” In other words, Skynet is upon us.

But is it? Nah.

A few months ago, at Dreamforce 2016, Salesforce.com set up a mass analyst meeting with its futurists, among them the former MetaMind CEO and current Salesforce chief scientist, the super-articulate Richard Socher. Socher made two points that stuck with me. The first was that we are not at Skynet yet, nor will we be for a long time. The second (and I loved this turn of phrase): AI doesn’t want anything.

Both points together go to the heart of AI reality. It isn’t human intelligence. It learns, but it has to be told what direction to take by humans. It doesn’t just “see” things and learn. It learns per the instructions given by human beings, and then it will focus on those areas and learn about habits, behaviors, and the like without humans keying them in. But it first needs guidance.
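To make that last point concrete, here is a deliberately toy sketch (not any vendor’s actual AI) of how “guidance first, learning second” works in practice. Humans supply labeled examples — the direction — and only then can the program generalize to new text. The labels, example messages, and scoring rule are all illustrative inventions, not a real product’s method.

```python
# Toy illustration: the model learns nothing until humans supply
# labeled direction. Here, humans label support messages as
# "urgent" or "routine"; the program then scores new messages
# against the word counts it saw under each label.

from collections import Counter

def train(labeled_examples):
    """Count word frequencies per human-assigned label."""
    counts = {}
    for text, label in labeled_examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def predict(counts, text):
    """Pick the label whose training vocabulary best matches the text."""
    words = text.lower().split()
    return max(counts, key=lambda label: sum(counts[label][w] for w in words))

# The human guidance: labels tell the model WHAT to learn about.
examples = [
    ("server down customers angry", "urgent"),
    ("outage reported escalate now", "urgent"),
    ("monthly newsletter draft attached", "routine"),
    ("schedule lunch next week", "routine"),
]

model = train(examples)
print(predict(model, "another outage customers angry"))  # prints "urgent"
```

Strip away the labeled examples and the program can do nothing at all — which is exactly Socher’s point: the system has no goals or curiosity of its own.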
