C&B Notes

Decoding Artificial Intelligence

Artificial intelligence is an ill-defined term.  This nebulousness is perhaps necessary given how broad and technical the discipline is, but the enormous amount of jargon used to explain AI does not help the lay person understand something that — over time — will become an increasingly large part of daily life.  This primer explains some of the most commonly used terms in consumer applications of AI.

These are the three terms you’re most likely to have heard lately, and, to be as simple as possible, we can think of them in layers.  Neural networks are at the bottom — they’re a type of computer architecture onto which artificial intelligence is built.  Machine learning is next — it’s a program you might run on a neural network, training computers to look for certain answers in pots of data; and deep learning is on top — it’s a particular type of machine learning that’s only become popular over the past decade, largely thanks to two new resources: cheap processing power and abundant data (otherwise known as the internet)…
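
To make the layering concrete, the sketch below (our own illustration, not part of the quoted article) trains a tiny neural network to reproduce the XOR function: the grid of weights is the architecture, the training loop that nudges those weights toward the right answers is the machine learning run on it, and "deep" learning is the same recipe with many more layers, far more data, and far more computing power.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR function, which no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The neural network (the architecture): two inputs, one small hidden layer, one output.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The machine learning (the program run on it): repeatedly adjust the weights
# so the network's answers move toward the known answers in the data.
learning_rate = 0.5
for _ in range(5000):
    hidden = np.tanh(X @ W1 + b1)          # forward pass
    pred = sigmoid(hidden @ W2 + b2)

    delta_out = pred - y                    # gradient of the cross-entropy loss
    grad_W2 = hidden.T @ delta_out
    grad_b2 = delta_out.sum(axis=0)
    delta_hidden = (delta_out @ W2.T) * (1 - hidden ** 2)
    grad_W1 = X.T @ delta_hidden
    grad_b1 = delta_hidden.sum(axis=0)

    W1 -= learning_rate * grad_W1
    b1 -= learning_rate * grad_b1
    W2 -= learning_rate * grad_W2
    b2 -= learning_rate * grad_b2

print(np.round(pred, 2))  # predictions should be close to [0, 1, 1, 0]
# "Deep" learning stacks many more hidden layers of exactly this kind.
```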

However, while deep learning has proved adept at tasks involving speech and image recognition — stuff that has lots of commercial applications — it also has plenty of limitations.  Not only do deep-learning techniques require a lot of data and fine-tuning to work, but their intelligence is narrow and brittle.  As cognitive psychologist Gary Marcus writes in The New Yorker, the methods that are currently popular “lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like ‘sibling’ or ‘identical to.’  They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.”  In other words, they don’t have any common sense.

For example, in a research project from Google, a neural network was used to generate a picture of a dumbbell after being trained on sample images.  The pictures of dumbbells it produced were pretty good: two gray circles connected by a horizontal tube.  But in the middle of each weight was the muscular outline of a bodybuilder’s arm.  The scientists involved suggest this might be because the pictures the network had been trained on showed a bodybuilder holding the dumbbell.  Deep learning might be able to work out what the common visual properties of tens of thousands of pictures of dumbbells are, but it would never make the cognitive leap to say that dumbbells don’t have arms.  These sorts of problems aren’t limited to common sense either.  Because of the way they examine data, deep-learning networks can also be fooled by random patterns of pixels.  You might see static, but a computer is 95 percent certain it’s a cheetah.
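
The sketch below (again our own toy illustration, not the Google experiment) shows in miniature how a statistical classifier can be misled by a pixel-level pattern: assuming a simple linear "cheetah detector," a tiny nudge to every pixel, chosen to push along the model's own weights, turns faint static into a near-certain cheetah.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-in "classifier": a single linear rule over a 100-pixel image that
# outputs the probability the image contains a cheetah.
w = rng.normal(size=100)

noise = 0.01 * rng.normal(size=100)               # an image of faint static
print(f"static:    P(cheetah) = {sigmoid(noise @ w):.2f}")

# Nudge every pixel a small amount in the direction that most raises the score
# (for a linear rule, that direction is simply the sign of each weight).
epsilon = 0.1
fooled = noise + epsilon * np.sign(w)
print(f"perturbed: P(cheetah) = {sigmoid(fooled @ w):.2f}")

# The per-pixel change is small and still looks like static to a person,
# yet the classifier's confidence should jump to near certainty.
```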

* * * * *

This is one of the difficulties of using the term artificial intelligence: it’s just so tricky to define.  In fact, it’s axiomatic within the industry that as soon as machines have conquered a task that previously only humans could do — whether that’s playing chess or recognizing faces — then it’s no longer considered to be a mark of intelligence.  As computer scientist Larry Tesler put it: “Intelligence is whatever machines haven’t done yet.”  And even with tasks at which computers can beat humans, they aren’t doing so by replicating human intelligence.  “When we say the neural network is like the brain it’s not true,” says Yann LeCun.  “It’s not true in the same way that airplanes aren’t like birds.  They don’t flap their wings, they don’t have feathers or muscles.”  If we do create intelligence, he says, it “won’t be like human intelligence or animal intelligence.  It’s very difficult for us to imagine, for example, an intelligent entity that does not have [the impulse towards] self-preservation.”

Many people working within the field of AI are dismissive of the idea that we’ll ever be able to create artificial intelligence that is truly sentient.  “There is no approach at the moment that has any hope of being flexible and performing multiple tasks or going beyond the basic tasks that it’s programmed to do,” Professor Andrei Barbu, from MIT’s Center for Brains, Minds and Machines, told The Verge, adding that effective AI research is just about creating systems that have been fine-tuned to solve a specific problem.  He says that although there have been forays into unsupervised learning, where systems work through data that hasn’t been labeled in any way, this work is still in its infancy.  One of the better-known examples is a neural network created by Google that was fed random YouTube thumbnails from 10 million videos.  Eventually, it taught itself what a cat looked like, but its creators did not make any wider claims for its ability.  As LeCun said at an event at the Orange Institute last year: “We don’t know how to do unsupervised learning.  That’s the biggest obstacle.”
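
To give a flavor of what "unsupervised" means in practice, the sketch below (our own illustration, not Google's system) hands a clustering routine a pile of unlabeled points and lets it discover the two hidden groups on its own.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unlabeled data: two hidden groups, but the program is never told which point belongs where.
data = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5.0, 5.0], scale=0.5, size=(50, 2)),
])

# k-means: guess two centers, assign each point to its nearest center,
# move each center to the average of its points, and repeat.
centers = data[rng.choice(len(data), size=2, replace=False)]
for _ in range(10):
    distances = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    nearest = distances.argmin(axis=1)
    centers = np.array([data[nearest == k].mean(axis=0) for k in range(2)])

print(np.round(centers, 1))  # the centers should settle near [0, 0] and [5, 5]
```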
