The research findings regarding Speech Recognition, Machine Translation, Intonation and Parts of Speech recognition indicate that deep-learning neural algorithms have greater potential than conventional natural language processing methods. At the same time, there are still plenty of unresolved issues concerning question-answering and dialog systems. This article looks at the application of modern algorithms for natural language processing and understanding. The review covers several approaches and does not claim to be complete.
Human: How many legs does a cat have?
Machine: Four, I think.
Human: What do you think about Messi?
Machine: He’s a great player.
Human: Where are you now?
Machine: I’m in the middle of nowhere.
When it comes to self-learning question-answering systems, IT companies and research organizations use different benchmarks as points of reference.
Facebook made a list of 20 specific logical operations and generated an artificial set of tasks for performing those operations. According to the company, the operations are necessary but not sufficient for creating artificial intelligence. For example, the system must do the following: give positive or negative answers to questions; answer a question based on one or multiple known facts; make calculations; work with uncertainties; etc.
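To make the task format concrete, here is a toy example in the style of those tasks: a short story of facts, a question, and an answer derivable from one supporting fact. The story, question, and the trivial rule-based baseline below are illustrative sketches, not taken from Facebook's actual dataset or code.

```python
# A toy task in the style of Facebook's artificial task set: answer a
# question from one or more supporting facts in a short story.
story = [
    "Mary moved to the bathroom.",
    "John went to the hallway.",
    "Mary travelled to the office.",
]
question = "Where is Mary?"

def answer(story, question):
    """Trivial rule-based baseline: scan the story backwards for the most
    recent fact mentioning the questioned entity and return its last word
    (the location)."""
    entity = question.rstrip("?").split()[-1]     # e.g. "Mary"
    for fact in reversed(story):                  # most recent fact wins
        if entity in fact:
            return fact.rstrip(".").split()[-1]   # e.g. "office"
    return "unknown"

print(answer(story, question))  # -> office
```

A learned system, of course, must discover such reasoning itself rather than rely on hand-written rules; that is what the neural architectures below attempt.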
To solve the above tasks, Facebook developed a neural self-learning architecture with Memory Networks and its End-to-End implementation.
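The core mechanism of an end-to-end Memory Network can be sketched in a few lines: the question is embedded as a query vector, attention weights are computed over embedded facts in memory, and a weighted read updates the controller state. The sketch below uses random embeddings and a single hop purely to show the data flow; real systems learn the embedding matrices and typically stack several hops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 memory slots (embedded facts), embedding size 8.
n_mem, d = 4, 8
memory_in  = rng.normal(size=(n_mem, d))  # input-side embeddings of the facts
memory_out = rng.normal(size=(n_mem, d))  # output-side embeddings of the facts
query      = rng.normal(size=(d,))        # embedded question

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# One memory "hop": attend over the facts with the query, read a weighted sum.
p = softmax(memory_in @ query)   # soft attention weights over the 4 facts
o = p @ memory_out               # read vector combining the relevant facts
u_next = query + o               # updated state, fed to the next hop or the answer layer

print(p.round(3), u_next.shape)
```

Because every step is differentiable, the whole pipeline can be trained end to end from (story, question, answer) triples with ordinary backpropagation.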
In its Neural Turing Machine architecture, Google is using a more fundamental approach. The company built a self-learning system that knows what information it needs to record and retrieve from the memory for solving a task, and when.
However, when it comes to solving real-life tasks, this approach reveals its limitations. To sort and retrieve information, the Neural Turing Machine works with a small memory (128 locations). Neural Programmer offers broader functionality.
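The Neural Turing Machine's memory access is differentiable: instead of picking a single location, the controller compares a key vector against every memory row and reads a softly weighted sum. A minimal sketch of this content-based addressing, with the 128-location memory mentioned above and a sharpness parameter (key strength), might look like this; the dimensions and data here are illustrative, not from the original system.

```python
import numpy as np

rng = np.random.default_rng(1)

N, M = 128, 20                  # 128 memory locations, each a vector of width 20
memory = rng.normal(size=(N, M))
key = memory[42] + 0.1 * rng.normal(size=M)   # a noisy probe resembling row 42
beta = 5.0                      # key strength: larger values sharpen the focus

def cosine(a, B):
    """Cosine similarity of vector a against every row of matrix B."""
    return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-8)

w = np.exp(beta * cosine(key, memory))
w /= w.sum()                    # soft addressing weights over all 128 locations
read = w @ memory               # differentiable read: weighted sum of the rows

print(w.argmax(), read.shape)
```

Since reads and writes are weighted sums rather than discrete lookups, gradients flow through the memory, which is what lets the system learn *what* to store and *when* to retrieve it.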
The system can learn to perform basic logical and arithmetic operations on a data table. Suppose there are a set of data columns and a set of basic operations; the system learns to perform the right sequence of operations to solve the problem.
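The idea can be illustrated with a hand-rolled sketch: a small library of base operations over table columns, and a program that is a sequence of (operation, column) choices. Here the sequence is hard-coded for clarity; in the actual Neural Programmer a neural controller selects the operations softly and learns the selection from question–answer pairs. The table and operation names below are invented for illustration.

```python
# A toy table: parallel columns of equal length.
table = {
    "city":       ["Paris", "Berlin", "Madrid"],
    "population": [2_140_000, 3_645_000, 3_223_000],
}

# A small library of base operations, each acting on one column.
OPS = {
    "max":   lambda col: max(col),
    "min":   lambda col: min(col),
    "sum":   lambda col: sum(col),
    "count": lambda col: len(col),
}

def run_program(table, program):
    """Execute a sequence of (operation, column) steps; the result of the
    last step is the answer."""
    result = None
    for op_name, column in program:
        result = OPS[op_name](table[column])
    return result

# Question: "What is the largest population?" -> program [("max", "population")]
print(run_program(table, [("max", "population")]))  # -> 3645000
```

The learning problem is then to map a natural-language question to the right program, which is far harder than executing the program itself.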
In its system for answering grade-school science questions (the ARISTO system), the Allen Institute for Artificial Intelligence uses an ontological approach. Notably, the system can learn through interaction with users. The project consists of three stages: the system has to solve science tests for 4th, 8th, and 12th graders. While the system more or less managed to solve problems at the 4th-grade level, it struggled with questions for 8th graders. The institute decided to seek help from the scientific community on Kaggle. The competition became known as The Allen AI Science Challenge.
The challenge participants were given a training set (2,500 questions) and a test set (8,132 questions), with 4 possible answers for each question. Due to its small size, the training set should have been used not for training the system, but for assessing the quality of a solution and how well it covers the principal 8th-grade disciplines (physics, biology, geography, etc.).
The competition had a few curious rules. For example, the AI model had to work even without Internet access, which ruled out the Google Knowledge Graph API.
In the table below, you can see a comparative review of modern approaches to creating question-answering systems. The review was made for the workshop “Memory and Q&A systems” by Deep Learning Moscow, where a full presentation with source references is available.
Source: Deep Learning Moscow Facebook Group
*IR — information retrieval;
KB — knowledge base;
IE — information extraction;
BiLSTM — bidirectional long short-term memory;
NN — neural net;
NTM — Neural Turing Machine;
IGOR — Memory Networks architecture: Input feature map, Generalization, Output feature map, Response.