The Future Isn’t Near
The singularity is the term coined for the moment when machine intelligence finally outruns human capabilities, changing the world so profoundly that the future becomes impossible to predict even in broad strokes. Some futurists argue that this moment is fast approaching, but in a recent Technology Review article Microsoft co-founder Paul Allen counters that the singularity is a long way off. Instead, in an effort reminiscent of IBM’s Watson, Allen is investing a chunk of his wealth to develop a computer that can accomplish a seemingly simple task: pass a 10th-grade biology test.
Microsoft co-founder Paul Allen has been pondering artificial intelligence since he was a kid. In the late ’60s, eerily intelligent computers were everywhere, whether it was 2001’s HAL or Star Trek’s omnipresent Enterprise computer. As Allen recalls in his memoir, “machines that behaved like people, even people gone mad, were all the rage back then.” He would tag along to his father’s job at the library, overwhelmed by the information, and daydream about “the sci-fi theme of a dying or threatened civilization that saves itself by finding a trove of knowledge.” What if you could collect all the world’s information in a single computer mind, one capable of intelligent thought, and be able to communicate in simple human language?
It’s a hard problem, but it’s one Allen is eager to solve. After years of pondering these ideas abstractly, he’s throwing his fortune into a new venture targeted entirely at solving the problems of machine intelligence, dubbed the Allen Institute for Artificial Intelligence, or AI2 for short. It’s ambitious, like Allen’s earlier projects on space flight and brain mapping, but the initial goal is deceptively simple. Led by University of Washington professor Oren Etzioni, AI2 wants to build a computer that can pass a high school biology course. The team feeds in a textbook and gives the computer a test. So far, it’s failing those tests… but it’s getting a little better each time.
The key problem is knowledge representation: how to represent all the knowledge in the textbook in a way that allows the program to reason and apply that knowledge in other areas. Programs are good at running procedures (say, converting pounds to kilograms), and modern programs have gotten better at knowing when to run them (say, a Google search on “32 pounds to kilograms”), but they’re still managing the information as fodder for algorithms rather than facts and rules that can be generalized across different situations.
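The distinction can be made concrete with a toy sketch (invented here for illustration; this is not AI2’s system): a procedure bakes the pounds-to-kilograms conversion into code, while a declarative fact base stores the same knowledge as data that a reasoner could look up and combine with other facts.

```python
def pounds_to_kg(pounds):
    """Procedural knowledge: the conversion is hard-coded logic."""
    return pounds * 0.45359237

# Declarative knowledge: (subject, relation, object) facts that a program
# could query and chain in ways the programmer never anticipated.
facts = {
    ("pound", "converts_to", "kilogram"): 0.45359237,
    ("kilogram", "is_a", "unit_of_mass"): True,
}

def convert(value, src, dst):
    """Answer a conversion question by looking up a fact, not running a
    fixed procedure written for that one question."""
    factor = facts.get((src, "converts_to", dst))
    if factor is None:
        raise KeyError(f"no known conversion from {src} to {dst}")
    return value * factor
```

Both functions give the same answer for “32 pounds to kilograms,” but only the second treats the conversion as a fact the system *knows*, which is the shift Etzioni is describing.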
Having the computer study biology is a way of laying the groundwork for new kinds of learning and reasoning. “How do you build a representation of knowledge that does this?” Etzioni asks. “How do you understand more and more sophisticated language that describes more and more sophisticated things? Can we generalize from biology to chemistry to mathematics?” That also means getting a grip on the complexity of language itself. Most language doesn’t offer discrete pieces of information for computers to piece through; it’s full of ambiguity and implied logic. Instead of simple text commands, Etzioni envisions a world where you can ask Siri something like, “Can I carry that TV home, or should I call a cab?” That means a weight calculation, sure — but it also means calculating distance and using spatial reasoning to approximate bulkiness. Siri would have to proactively ask whether the television can fit in the trunk of a cab. Siri would have to know “that TV” refers to the television you were just looking at online, and that carrying it home means a walking trip from the affiliated store to your home. Even worse, Siri would have to know that “can I” refers to a question of advisability, and not whether the trip is legal or physically possible.
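The chain of sub-judgments above can be sketched as a toy decision function. Everything here is hypothetical — the thresholds, the inputs, and the decomposition itself are invented to show how one natural-language question fans out into several distinct reasoning steps, not how any assistant actually works.

```python
def advisable_to_carry(weight_kg, distance_km, fits_in_arms):
    """Combine three sub-judgments into one advisability answer.

    Each check stands in for a different kind of reasoning the text
    describes: spatial reasoning (bulk), a weight calculation, and a
    distance calculation. The cutoffs are arbitrary placeholders.
    """
    if not fits_in_arms:    # spatial reasoning: too bulky to hold at all
        return False
    if weight_kg > 15:      # weight: heavier than a comfortable carry
        return False
    if distance_km > 1.0:   # distance: too far to walk while loaded
        return False
    return True

# A 20 kg television over a 2 km walk is physically possible but not
# advisable -- exactly the "can I" distinction the question hides.
```

Even this caricature shows why the question is hard: before any of these checks can run, the system must first resolve “that TV” to an object and infer that the user means a walking trip, steps no amount of arithmetic replaces.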