By: John Converse Townsend
While AI is getting powerful very quickly, and more and more jobs are threatened with automation, we have not yet reached the science-fiction point of communicating with robots as equals. Sure, some robots already grasp concepts like trust and regret, but machines can't yet think, reason, or communicate at an advanced level.
Microsoft’s Tay Bot—which tried to learn humanity from reading Twitter and quickly became an angry racist—serves as a clear example of the limitations of today’s technology. Siri is another example. She can give you the weather forecast for your zip code but can’t describe her feelings on mass incarceration or even comb through the fine print of that contract you’ve been asked to sign.
Teaching machines to think, and behave, more like us is what Mo Musbah and the team at Maluuba are working on.
“Deep learning has been used to solve problems in speech recognition, machine translation, image processing. You see it in applications like self-driving cars, but it hasn’t been as utilized in the space of natural language understanding,” says Musbah, VP of product. “Fundamentally speaking, we’re trying to solve machine literacy, getting machines to the point where they can truly understand how to read, write, and speak like human beings.”
Maluuba’s current artificial intelligence is able to process words from a Wikipedia page, a George R.R. Martin novel, or a medical document and answer factual questions about the text (it can currently read in 10 languages). The AI could, for example, tell you what instrument Jerry Garcia played, or how Eddard Stark died, or about the side effects of a new treatment.
Musbah says Maluuba (a nonsense word made up by the founders’ computer science professor) can pull this off without giving its AI any previous training in any given domain, scientific or otherwise, as is usually required with machine learning.
“Questions that have definite answers are what we’ve tackled to date,” says Maluuba research scientist Adam Trischler, who leads the machine comprehension team. “We are in the process, with building new algorithms and data sets, of having AI answer ambiguous questions. So, not just about who did what or what happened next, but synthesizing information and making deductions about people’s motivations, or political machinations.”
Once commercialized, the applications of this technology are obvious and wide-ranging, from helping sleep-deprived students rip through 200 pages of reading material in an instant to helping law clerks research an upcoming civil rights case. And since no one reads the fine print, this type of AI could even save you money on a car rental or (in an extreme case) protect you from signing away your kidneys for harvesting.
“If you get to the point where you can teach a system to solve a problem in a language with a generalized approach, in this case reading,” says Musbah, “you’ve gotten to the point where it can scale in terms of how it applies in an AI fashion across different industries.”
At its best, artificial intelligence promises to bring about positive social and economic impacts. That’s precisely Maluuba’s vision. But while McKinsey says 45% of jobs people are paid to do could be automated with currently demonstrated technologies, there’s a long way to go still before AI will free us from menial and monotonous work, even at a cognitive level. This means it’ll be far longer before you’ll find an android doing your laundry or cooking dinner. Or serving as an impartial judge in a court of law (which will probably never happen, but it’d be awesome, given racial disparities in sentencing).
“We have a chance to make machines better than we are. But it’s not like AI provides us with ready-made solutions, it only allows us to manifest solutions that we decide on,” says Trischler. AI, in other words, is only as good as the data and instruction we provide it with: Biased data inputs will breed biased AI.
Language comprehension, then, isn’t just an artificial intelligence problem but a human problem. Look at how divisive the Second Amendment is, or at the multitude of human interpretations of the Bible or the Koran. The same goes for social justice: police departments that have already used AI to find criminals have succeeded only in making racial discrimination worse.
The big point, here: If we’re ever to trust AI to run software systems or manage traffic or do riskier things like grow with our brains, we need to admit we’re prone to error. We need to be better researchers and diverse designers. We need to be better to each other, too.
View source version on Fastcoexist.com