Race on to make thinking machines smarter

Researchers say computers are still a long way from being able to read and comprehend general text in the same way that humans can.

Seven years ago, a computer beat two human champions in a Jeopardy! challenge. Ever since, the tech industry has been training its machines even harder to make them better at amassing knowledge and answering questions.

And it's worked, at least up to a point. Just don't expect artificial intelligence to spit out a literary analysis of Leo Tolstoy's War and Peace any time soon.

Research teams at Microsoft and Chinese tech company Alibaba reached what they described as a milestone earlier this month when their AI systems outperformed the estimated human score on a reading comprehension test. It was the latest demonstration of rapid advances that have improved search engines and voice assistants and that are finding broader applications in health care and other fields.

The answers they got wrong - and the test itself - also highlight the limitations of computer intelligence and the difficulty of comparing it directly to human intelligence.

"We are still a long way from computers being able to read and comprehend general text in the same way that humans can," said Kevin Scott, Microsoft's chief technology officer, in a LinkedIn post that also commended the achievement by the company's Beijing-based researchers.

The test developed at Stanford University demonstrated that, in at least some circumstances, computers can beat humans at quickly "reading" hundreds of Wikipedia entries and coming up with accurate answers to questions about Genghis Khan's reign or the Apollo space program.

The computers, however, also made mistakes that many people wouldn't have.

Microsoft, for instance, fumbled an easy football question about which member of the NFL's Carolina Panthers got the most interceptions in the 2015 season (the correct answer was Kurt Coleman, not Josh Norman). A person's careful reading of the Wikipedia passage would have discovered the right answer, but the computer tripped up on the word "most" and didn't understand that seven is bigger than four.

It's not uncommon for machine-learning competitions to pit the cognitive abilities of computers against humans.

Machines first bested people in an image-recognition competition in 2015 and a speech-recognition competition last year, although they're still easily tricked. Computers have also vanquished humans at chess, Pac-Man and the strategy game Go.

And since IBM's Jeopardy! victory in 2011, the tech industry has shifted its efforts to data-intensive methods that seek not just to find factoids, but to better comprehend the meaning of multi-sentence passages.

Like the other tests, the Stanford Question Answering Dataset, nicknamed Squad, attracted a rivalry among research institutions and tech firms, with Google, Facebook, Tencent, Samsung and Salesforce also giving it a try.


Published 24 January 2018 6:34am
Source: AAP

