Interactive machine learning model

Pannaga Shivaswamy from LinkedIn and Thorsten Joachims from Cornell University published “Coactive Learning”. The paper proposes a model of machine learning through interaction with human users.

User behavior is used as feedback to improve the system. Their empirical studies indicate that the method benefits movie recommendation and web search.
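At the core of the paper is a simple preference perceptron: the system presents the object its current linear model scores highest, observes the slightly better object implied by the user’s behavior (e.g. the result actually clicked), and updates its weights toward that preference. Here is a minimal sketch of that loop; the function and parameter names (phi, candidates, user_feedback) are illustrative stand-ins, not the authors’ code.

```python
import numpy as np

def preference_perceptron(contexts, candidates, phi, user_feedback, dim):
    """Minimal coactive-learning loop (preference perceptron sketch).

    contexts: iterable of contexts x_t (e.g. queries)
    candidates: function mapping x_t to the set of presentable objects
    phi: joint feature map phi(x, y) -> np.ndarray of shape (dim,)
    user_feedback: oracle returning an object the user prefers to y_pred
    """
    w = np.zeros(dim)  # linear utility model: U(x, y) ~ w . phi(x, y)
    for x in contexts:
        # Present the object the current model scores highest.
        y_pred = max(candidates(x), key=lambda y: w @ phi(x, y))
        # The user implicitly returns an improved object; it only needs
        # to be slightly better than what was presented.
        y_bar = user_feedback(x, y_pred)
        # Perceptron-style update toward the user's preference.
        w += phi(x, y_bar) - phi(x, y_pred)
    return w
```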

“Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks”

by Kai Sheng Tai, Richard Socher, Christopher D. Manning.

The paper introduces a generalization of Long Short-Term Memory (LSTM) networks to tree-structured network topologies. The publication states that “Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank)”.

A companion set of code is also available.
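The key change versus a standard (chain) LSTM is that a node’s gates are computed from the sum of its children’s hidden states, with a separate forget gate per child so the node can keep or drop each subtree’s memory. Below is a minimal numpy sketch of one Child-Sum Tree-LSTM node update following the paper’s equations; the parameter layout (dicts W, U, b keyed by gate) is my own convention, not the companion code’s.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def child_sum_tree_lstm_node(x, children, W, U, b):
    """One Child-Sum Tree-LSTM node update.

    x: input vector at this node, shape (d_in,)
    children: list of (h_k, c_k) pairs from the child nodes
    W, U, b: parameter dicts keyed by gate name 'i', 'f', 'o', 'u'
    Returns (h, c) for this node.
    """
    hidden = U['i'].shape[1]
    h_tilde = sum((h for h, _ in children), np.zeros(hidden))  # sum of children
    i = sigmoid(W['i'] @ x + U['i'] @ h_tilde + b['i'])   # input gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_tilde + b['o'])   # output gate
    u = np.tanh(W['u'] @ x + U['u'] @ h_tilde + b['u'])   # candidate cell
    c = i * u
    for h_k, c_k in children:
        # One forget gate per child: the node decides per subtree
        # how much of that child's memory cell to carry forward.
        f_k = sigmoid(W['f'] @ x + U['f'] @ h_k + b['f'])
        c += f_k * c_k
    h = o * np.tanh(c)
    return h, c
```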


Machine: i ’m not ashamed of being a philosopher!

Google Brain researchers Oriol Vinyals and Quoc V. Le published “A Neural Conversational Model”. They describe their experiment in training an artificial intelligence to simulate human conversation.

A good short read, if only to find examples such as:

Human: what year were you born ?
Machine: 1977 .

Human: why are we here ?
Machine: i ’m not sure .

or


Human: what is the purpose of living ?
Machine: to live forever .

or

Human: give me some examples of moral actions…
Machine: i ’m not a moralist .

Human: be moral !
Machine: be a man !
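The replies above come out of a sequence-to-sequence model: one recurrent network encodes the human utterance into a fixed vector, and a second one decodes the machine’s reply token by token until an end-of-sequence marker. Here is a minimal, untrained sketch of that loop; the vanilla-RNN cells, tiny vocabulary, and greedy decoding are illustrative stand-ins for the paper’s trained LSTM stacks.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<eos>", "i", "'m", "not", "sure", "."]
V, H = len(vocab), 16

# Randomly initialized parameters; a real model trains these on dialogue data.
E = rng.normal(0, 0.1, (V, H))        # token embeddings
W_enc = rng.normal(0, 0.1, (H, H))    # encoder recurrence
W_dec = rng.normal(0, 0.1, (H, H))    # decoder recurrence
W_out = rng.normal(0, 0.1, (H, V))    # hidden state -> vocabulary logits

def encode(token_ids):
    h = np.zeros(H)
    for t in token_ids:               # read the utterance left to right
        h = np.tanh(E[t] + W_enc @ h)
    return h                          # "thought vector" of the input

def decode(h, max_len=10):
    reply, t = [], 0                  # decoding starts from <eos>
    for _ in range(max_len):
        h = np.tanh(E[t] + W_dec @ h)
        t = int(np.argmax(h @ W_out)) # greedy choice of the next token
        if t == 0:                    # stop when <eos> is emitted
            break
        reply.append(vocab[t])
    return " ".join(reply)

# Untrained weights, so the reply is arbitrary; the structure is the point.
print(decode(encode([1, 2, 3, 4, 5])))   # input: "i 'm not sure ."
```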


If we see with our minds, not our eyes – so do computers…

On the Google Research blog, researchers describe how neural networks trained to classify images interpret and re-interpret image abstractions, as in the examples below.

Not a bad observation from a friend, who immediately associated it (especially the bottom examples) with this image search:


Left: Original photo by Zachi Evenor. Right: processed by Günther Noack, Software Engineer
Left: Original painting by Georges Seurat. Right: processed images by Matthew McNaughton, Software Engineer


The original image influences what kind of objects form in the processed image.


From the post: “Neural net ‘dreams’— generated purely from random noise, using a network trained on places by MIT Computer Science and AI Laboratory. See our Inceptionism gallery for hi-res versions of the images above and more (Images marked ‘Places205-GoogLeNet’ were made using this network).”
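The technique behind these images is gradient ascent on the image itself: hold a trained network fixed, pick a layer, and repeatedly nudge the pixels so the chosen activations get stronger. The sketch below illustrates that loop; the random one-layer “network” is a stand-in for a trained classifier such as GoogLeNet, where the gradient would come from backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 32, 32
net = rng.normal(0, 0.1, (64, H * W))   # fixed, pretend-trained linear layer

def activations(img):
    return net @ img.ravel()            # layer response to the image

def dream(img, steps=100, lr=0.1):
    """Gradient ascent on the image to amplify the layer's activations."""
    img = img.copy()
    for _ in range(steps):
        a = activations(img)
        # Objective L = 0.5 * ||a||^2, so dL/d(img) = net^T a (chain rule).
        grad = (net.T @ a).reshape(H, W)
        # Normalize the step so the update size stays stable.
        img += lr * grad / (np.abs(grad).mean() + 1e-8)
    return img

# Start from pure random noise, as in the neural-net "dreams" above.
dreamed = dream(rng.normal(0, 1, (H, W)))
```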


“Teaching Machines to Read and Comprehend”

As reported in MIT Technology Review (“Google DeepMind Teaches Artificial Intelligence Machines to Read”) and pointed out by an attentive and informed friend, researchers from the University of Oxford and Google DeepMind (Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom) published a paper showing progress in machine learning techniques for computer reading comprehension.

Deep learning models, attention-based neural networks, are compared against symbolic matching baselines as reading agents. The agents then answer Cloze-style queries derived from each article’s abstractive summary points.

Supervised learning used marked-up, previously annotated data from CNN online, the Daily Mail website, and the MailOnline news feed.
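The best-performing models in the paper attend over the document conditioned on the query before answering. Below is a minimal numpy sketch of that attention step, using the paper’s notation (y_d(t), u, m(t), s(t), r); the random vectors stand in for the outputs of the bidirectional LSTM encoders, and in a full model all of these parameters would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 50, 128                              # document length, hidden size
doc = rng.normal(0, 1, (T, D))              # encoded document tokens y_d(t)
query = rng.normal(0, 1, D)                 # encoded query u
W_ym = rng.normal(0, 0.1, (D, D))
W_um = rng.normal(0, 0.1, (D, D))
w_ms = rng.normal(0, 0.1, D)

m = np.tanh(doc @ W_ym.T + query @ W_um.T)  # m(t): token-query interaction
scores = m @ w_ms                           # unnormalized attention
s = np.exp(scores - scores.max())
s /= s.sum()                                # s(t): softmax over doc tokens
r = s @ doc                                 # attention-weighted document read

# r is then combined with the query u to score candidate answer entities.
```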

Here’s the abstract as published:

“Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.”

Read the full paper.