“Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks”

by Kai Sheng Tai, Richard Socher, Christopher D. Manning.

Paper introducing a generalization of Long Short-Term Memory (LSTM) networks to tree-structured network topologies. The paper states that “Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank)”.

A companion set of code is also available.
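
To get a rough sense of how the tree-structured update differs from a standard LSTM cell, here is a minimal NumPy sketch of the Child-Sum Tree-LSTM node update described in the paper. This is not the authors’ companion code; the parameter names and shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def child_sum_tree_lstm_node(x, child_h, child_c, W, U, b):
    """One Child-Sum Tree-LSTM node update (illustrative sketch).

    x        : input vector at this node, shape (d_in,)
    child_h  : list of the children's hidden states, each shape (d,)
    child_c  : list of the children's cell states, each shape (d,)
    W, U, b  : dicts of parameters for gates 'i', 'f', 'o', 'u'
               (W[g]: (d, d_in), U[g]: (d, d), b[g]: (d,))
    """
    d = b['i'].shape[0]
    # Sum of the children's hidden states (zero vector for a leaf node).
    h_tilde = np.sum(child_h, axis=0) if child_h else np.zeros(d)

    i = sigmoid(W['i'] @ x + U['i'] @ h_tilde + b['i'])   # input gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_tilde + b['o'])   # output gate
    u = np.tanh(W['u'] @ x + U['u'] @ h_tilde + b['u'])   # candidate update

    # One forget gate per child, conditioned on that child's hidden state.
    f = [sigmoid(W['f'] @ x + U['f'] @ h_k + b['f']) for h_k in child_h]

    # New cell state mixes the candidate with each child's gated cell state.
    c = i * u + sum(f_k * c_k for f_k, c_k in zip(f, child_c))
    h = o * np.tanh(c)
    return h, c
```

With a single child this reduces to something close to the usual LSTM recurrence; the key difference is the per-child forget gates, which let a node selectively keep or discard information from each subtree.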

 

Machine: i ’m not ashamed of being a philosopher!

Google Brain researchers Oriol Vinyals and Quoc V. Le published “A Neural Conversational Model”. They describe their experiment in teaching an artificial intelligence to simulate human conversation.

A good short read, if only for exchanges such as:

Human: what year were you born ?
Machine: 1977 .

Human: why are we here ?
Machine: i ’m not sure .

or

 

Human: what is the purpose of living ?
Machine: to live forever .

or

Human: give me some examples of moral actions…
Machine: i ’m not a moralist .

Human: be moral !
Machine: be a man !

 

“Can We Design Trust Between Humans and Artificial Intelligence?” by Patrick Mankins

Designer Patrick Mankins’s article on building trust between people and Artificial Intelligence.

Excerpts below, but read the full article – it’s not long anyway:

“Machine learning and cognitive systems are now a major part of many products people interact with every day… The role of designers is to figure out how to build collaborative relationships between people and machines that help smart systems enhance human creativity and agency rather than simply replacing them.

… before self-driving cars can really take off, people will probably have to trust their cars to make complex, sometimes moral, decisions on their behalf, much like when another person is driving.

Creating a feedback loop
This also takes advantage of one of the key distinguishing capabilities of many AI systems: they know when they don’t understand something. Once a system gains this sort of self-awareness, a fundamentally different kind of interaction is possible.

Building trust and collaboration
What is it that makes getting on a plane or a bus driven by a complete stranger something people don’t even think twice about, while the idea of getting into a driverless vehicle causes anxiety? … We understand why people behave the way they do on an intuitive level, and feel like we can predict how they will behave. We don’t have this empathy for current smart systems.”