Readings

Translation assisted by machine learning

“Using Machine Translation to Provide Target-Language Edit Hints in Computer Aided Translation Based on Translation Memories” by Miquel Esplà-Gomis, Felipe Sánchez-Martínez, and Mikel L. Forcada from Universitat d’Alacant, Spain.

Experiments show that translation-memory-based computer-aided translation may benefit from machine translation.

See also their previous “Using machine translation in computer-aided translation to suggest the target-side words to change”.

Interactive machine learning model

Pannaga Shivaswamy from LinkedIn and Thorsten Joachims from Cornell University published “Coactive Learning”. This paper proposes a model of machine learning through interaction with human users.

User behavior is used as implicit feedback to improve the system. Their empirical studies indicate that this method benefits movie recommendation and web search.
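The core idea, the preference perceptron, is simple to sketch: the system presents the object that scores highest under its current weights, the user implicitly reveals a slightly better object (e.g. by clicking a lower-ranked result), and the weights move toward the preferred object's features. A minimal sketch in NumPy; the feature values below are made up for illustration:

```python
import numpy as np

def preference_perceptron_step(w, phi_presented, phi_preferred):
    """One coactive update: shift the weight vector toward the features of
    the object the user preferred, away from the object that was shown."""
    return w + phi_preferred - phi_presented

# Toy example with 3 joint features; the user's implicit correction
# emphasizes the second feature.
w = np.zeros(3)
phi_presented = np.array([1.0, 0.0, 0.5])  # features of the result shown
phi_preferred = np.array([0.5, 1.0, 0.5])  # features of the user's better choice
w = preference_perceptron_step(w, phi_presented, phi_preferred)
# w is now [-0.5, 1.0, 0.0]
```

The appeal of this update is that the user never has to provide the optimal object, only *some* improvement over what was shown; the paper proves regret bounds under exactly that weak-feedback assumption.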

“When the Toaster Shares Your Data With the Refrigerator, the Bathroom Scale, and Tech Firms” by Vivek Wadhwa

“Your toaster will soon talk to your toothbrush and your bathroom scale. They will all have a direct line to your car and to the health sensors in your smartphone. I have no idea what they will think of us or what they will gossip about, but our devices will soon be sharing information about us — with each other and with the companies that make or support them.”… read more

“Growing Pains for Deep Learning” By Chris Edwards

“Advances in theory and computer hardware have allowed neural networks to become a core part of online services such as Microsoft’s Bing, driving their image-search and speech-recognition systems. The companies offering such capabilities are looking to the technology to drive more advanced services in the future, as they scale up the neural networks to deal with more sophisticated problems.

It has taken time for neural networks, initially conceived 50 years ago, to become accepted parts of information technology applications. After a flurry of interest in the 1990s, supported in part by the development of highly specialized integrated circuits designed to overcome their poor performance on conventional computers, neural networks were outperformed by other algorithms, such as support vector machines in image processing and Gaussian models in speech recognition.” read more

Can we reason about physics without causality?

In this article published in Aeon, Mathias Frisch discusses the role of causality in shaping our knowledge.

“…In short, a working knowledge of the way in which causes and effects relate to one another seems indispensable to our ability to make our way in the world. Yet there is a long and venerable tradition in philosophy, dating back at least to David Hume in the 18th century, that finds the notions of causality to be dubious. And that might be putting it kindly.

Hume argued that when we seek causal relations, we can never discover the real power; the, as it were, metaphysical glue that binds events together. All we are able to see are regularities – the ‘constant conjunction’ of certain sorts of observation. …Which is not to say that he was ignorant of the central importance of causal reasoning…  Causal reasoning was somehow both indispensable and illegitimate. We appear to have a dilemma.

… causes seemed too vague for a mathematically precise science. If you can’t observe them, how can you measure them? If you can’t measure them, how can you put them in your equations? Second, causality has a definite direction in time: causes have to happen before their effects. Yet the basic laws of physics (as distinct from such higher-level statistical generalisations as the laws of thermodynamics) appear to be time-symmetric…

Neo-Russellians in the 21st century express their rejection of causes …

This is all very puzzling. Is it OK to think in terms of causes or not? If so, why, given the apparent hostility to causes in the underlying laws? And if not, why does it seem to work so well?” read more

“Future A.I. Will Be Able to Generate Photos We Need Out of Nothing” by Paul Melcher

“…Photo licensing companies like Shutterstock or Getty Images could let go of their tens of thousands of contributors and tens of millions of stored images and replace them with a smart bot that can instantly create the exact image you are looking for.

Think this idea is crazy? Think again. Already text algorithms create entire human readable articles from raw data they gather around a sporting event or a company’s stock. Automatically. Chances are, you’ve already read a computer generated article without even knowing it.

CGI is now so advanced that entire movies are filmed in front of a green backdrop before being completed with the necessary reality elements we would swear are true. Skilled Photoshop artists can already merge different photos to create a new one, fabricating a true-to-life scene that never existed. IKEA revealed last year that 75% of their catalog photos are not real, but are instead entirely computer generated.”  read article at petapixel

Experiments shed light on near-sleep brain dynamics.

“Co-activated yet disconnected—Neural correlates of eye closures when trying to stay awake” by Ju Lynn Ong, Danyang Kong, Tiffany T.Y. Chia, Jesisca Tandi, B.T. Thomas Yeo, and Michael W.L. Chee, published in NeuroImage, studies brain activity in sleep-deprived participants.

Of course it’s no news that when we are sleep-deprived and drifting toward sleep with spontaneous eye closures, the brain becomes somewhat less connected and aware. But this borderline state makes data collection difficult, and this paper brings new scientific data to the table.

“Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks”

by Kai Sheng Tai, Richard Socher, Christopher D. Manning.

A paper introducing a generalization of Long Short-Term Memory networks to tree-structured network topologies. The paper states that “Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank)”.

A companion set of code is also available.
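For a feel of the mechanics, the Child-Sum Tree-LSTM variant from the paper can be sketched directly from its equations: the input, output, and candidate gates are computed from the sum of the children's hidden states, while each child gets its own forget gate conditioned on that child's hidden state. A minimal NumPy sketch; the dimensions and random parameters below are illustrative, not from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def child_sum_tree_lstm_cell(x, children, params):
    """One Child-Sum Tree-LSTM node.

    x        : input vector for this node
    children : list of (h_k, c_k) pairs from child nodes (empty for leaves)
    params   : dict of weight matrices W_*, U_* and biases b_*
    """
    d = params["b_i"].shape[0]
    h_sum = sum((h for h, _ in children), np.zeros(d))  # sum of child hidden states
    i = sigmoid(params["W_i"] @ x + params["U_i"] @ h_sum + params["b_i"])
    o = sigmoid(params["W_o"] @ x + params["U_o"] @ h_sum + params["b_o"])
    u = np.tanh(params["W_u"] @ x + params["U_u"] @ h_sum + params["b_u"])
    c = i * u
    # one forget gate per child, conditioned on that child's own hidden state
    for h_k, c_k in children:
        f_k = sigmoid(params["W_f"] @ x + params["U_f"] @ h_k + params["b_f"])
        c += f_k * c_k
    h = o * np.tanh(c)
    return h, c

# Tiny three-node tree: two leaves feeding one root.
rng = np.random.default_rng(0)
d_in, d = 4, 3
params = {f"W_{g}": rng.normal(size=(d, d_in)) * 0.1 for g in "ifou"}
params.update({f"U_{g}": rng.normal(size=(d, d)) * 0.1 for g in "ifou"})
params.update({f"b_{g}": np.zeros(d) for g in "ifou"})
leaf1 = child_sum_tree_lstm_cell(rng.normal(size=d_in), [], params)
leaf2 = child_sum_tree_lstm_cell(rng.normal(size=d_in), [], params)
root_h, root_c = child_sum_tree_lstm_cell(rng.normal(size=d_in), [leaf1, leaf2], params)
```

Because the children's states are summed, a node can have any number of children — which is what makes the architecture a natural fit for dependency parse trees.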


Machine: i ’m not ashamed of being a philosopher!

Google Brain researchers Oriol Vinyals and Quoc V. Le published “A Neural Conversational Model”. They describe their experiment in having an artificial intelligence learn to simulate human conversation.

A good short read, if only for exchanges such as:

Human: what year were you born ?
Machine: 1977 .

Human: why are we here ?
Machine: i ’m not sure .

or


Human: what is the purpose of living ?
Machine: to live forever .

or

Human: give me some examples of moral actions…
Machine: i ’m not a moralist .

Human: be moral !
Machine: be a man !