“Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks”
by Kai Sheng Tai, Richard Socher, Christopher D. Manning.
Paper introducing a generalization of Long Short-Term Memory networks to tree-structured network topologies. The publication states that “Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank)”.
A companion set of code is also available.
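To make the idea concrete: in the paper's Child-Sum Tree-LSTM, a node's gates are computed from the sum of its children's hidden states, with a separate forget gate per child. Below is a minimal numpy sketch of that single-node update (parameter names and shapes are my own illustrative choices, not taken from the companion code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def child_sum_tree_lstm(x, child_h, child_c, W, U, b):
    """One Child-Sum Tree-LSTM node update (after Tai et al., 2015).

    x        : (d_in,) input vector at this node
    child_h  : (k, d)  hidden states of the k children (k may be 0 for a leaf)
    child_c  : (k, d)  cell states of the k children
    W, U, b  : parameter dicts keyed by gate name 'i', 'f', 'o', 'u'
    Returns (c, h), the cell and hidden state of this node.
    """
    h_tilde = child_h.sum(axis=0)  # sum of children's hidden states
    i = sigmoid(W['i'] @ x + U['i'] @ h_tilde + b['i'])   # input gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_tilde + b['o'])   # output gate
    u = np.tanh(W['u'] @ x + U['u'] @ h_tilde + b['u'])   # candidate update
    # one forget gate per child, conditioned on that child's own hidden state
    f = sigmoid(W['f'] @ x + child_h @ U['f'].T + b['f'])  # shape (k, d)
    c = i * u + (f * child_c).sum(axis=0)
    h = o * np.tanh(c)
    return c, h
```

With zero-size child arrays the same function covers leaves, so a whole parse tree can be processed bottom-up by folding this update over the nodes.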
Google Brain researchers Oriol Vinyals and Quoc V. Le published “A Neural Conversational Model”. They describe their experiment in having an artificial intelligence learn to simulate human conversation.
A good short read, if only for exchanges such as:
Human: what year were you born ?
Machine: 1977 .
…
Human: why are we here ?
Machine: i ’m not sure .
or
Human: what is the purpose of living ?
Machine: to live forever .
or
Human: give me some examples of moral actions…
Machine: i ’m not a moralist .
…
Human: be moral !
Machine: be a man !
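The replies above come from a sequence-to-sequence recurrent network: an encoder reads the human's utterance, then a decoder emits the answer one token at a time, greedily taking the most likely next token until an end-of-sequence marker. A minimal sketch of that greedy decoding loop (the decoder itself is stubbed out; all names here are illustrative, not from the paper's code):

```python
import numpy as np

def greedy_decode(step, start_id, end_id, max_len=20):
    """Greedy decoding as used in seq2seq chat models: feed the argmax
    token back into the decoder until end-of-sequence or a length cap.

    step(token_id, state) -> (logits, new_state) can be any recurrent decoder.
    """
    tokens, state, tok = [], None, start_id
    for _ in range(max_len):
        logits, state = step(tok, state)
        tok = int(np.argmax(logits))
        if tok == end_id:
            break
        tokens.append(tok)
    return tokens

def toy_step(tok, state):
    """Stand-in decoder that deterministically walks through a canned reply."""
    i = 0 if state is None else state
    reply = [5, 7, 9, 1]            # token 1 plays the role of end-of-sequence
    logits = np.zeros(10)
    logits[reply[min(i, len(reply) - 1)]] = 1.0
    return logits, i + 1
```

For example, `greedy_decode(toy_step, start_id=0, end_id=1)` returns `[5, 7, 9]`. The terse, occasionally evasive answers quoted above are a known side effect of this setup: greedy decoding favors high-probability generic responses.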
Scientific Papers: Corrections Stable, Retractions on the Rise
“Why Growing Retractions Are (Mostly) a Good Sign”, by Daniele Fanelli, studies statistics on retractions and corrections in scientific papers in recent history.
The paper argues that the growing incidence of retractions is probably due to better fraud detection and more thorough content checking. That is good news worth celebrating, and a trend that ought to be promoted.
“Welcome to the Dawn of the Age of Robots” by Vivek Wadhwa
Article from singularityhub.com on the DARPA Robotics Challenge.
“Can We Design Trust Between Humans and Artificial Intelligence?” by Patrick Mankins
Designer Patrick Mankins’s article on building trust between people and artificial intelligence.
Excerpts below, but read the full article – it’s not long anyway:
“Machine learning and cognitive systems are now a major part of many products people interact with every day…. The role of designers is to figure out how to build collaborative relationships between people and machines that help smart systems enhance human creativity and agency rather than simply replacing them.
… before self-driving cars can really take off, people will probably have to trust their cars to make complex, sometimes moral, decisions on their behalf, much like when another person is driving.
Creating a feedback loop
This also takes advantage of one of the key distinguishing capabilities of many AI systems: they know when they don’t understand something. Once a system gains this sort of self-awareness, a fundamentally different kind of interaction is possible.
Building trust and collaboration
What is it that makes getting on a plane or a bus driven by a complete stranger something people don’t even think twice about, while the idea of getting into a driverless vehicle causes anxiety? … We understand why people behave the way they do on an intuitive level, and feel like we can predict how they will behave. We don’t have this empathy for current smart systems.”