Golem (GNT) – sharing excess CPU

Golem is a decentralized global supercomputer.  Processing power is shared by users (providers) with applications (requestors), and an Ethereum-based smart contract handles the remuneration transactions.

Network nodes run computations in isolated sandboxes on providers’ machines.  Golem then combines the partial results and returns them to the requestor’s application.

After a hyped launch in 2016, when the project raised 820k ETH, software development dragged on until the Brass Golem mainnet was ready for its first use case: CGI rendering.  Next in line could be machine learning.

According to the whitepaper, GNT is the utility token used to remunerate providers on behalf of requestors and to settle other interactions with the network under its transaction framework.
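As a purely illustrative sketch of that flow – hypothetical names and numbers, not Golem’s actual API or payment contract – a requestor might split a job into subtasks, farm the pieces out to providers, and settle the total in GNT:

```python
# Purely illustrative sketch of the requestor/provider flow described above --
# hypothetical names, not the actual Golem API or payment contract.
from dataclasses import dataclass

@dataclass
class Subtask:
    frame: int          # e.g. one frame of a CGI rendering job
    price_gnt: float    # remuneration offered for this subtask

def split_job(n_frames, price_per_frame):
    """Requestor splits a rendering job into independently computable pieces."""
    return [Subtask(frame=i, price_gnt=price_per_frame) for i in range(n_frames)]

def render(subtask):
    """Stand-in for work done inside a provider's isolated sandbox."""
    return f"rendered-frame-{subtask.frame}"

def run_job(n_frames=4, price_per_frame=0.5):
    subtasks = split_job(n_frames, price_per_frame)
    results = [render(s) for s in subtasks]             # done by many providers
    total_payment = sum(s.price_gnt for s in subtasks)  # settled on-chain in GNT
    return results, total_payment

print(run_job())
```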

What Happens Next Will Amaze you – speech by Maciej Cegłowski

Transcript of a talk by Maciej Cegłowski of IdleWords.com at the FREMTIDENS INTERNET conference in Copenhagen, Denmark.

This is a very good speech, though not a short one – “brevity is for the weak”, as the reminder at the top of the Idle Words page goes.  Topics covered:

  • The corporate side of our culture of total surveillance – the odd story of how advertisers destroyed our online privacy and then found themselves swindled by robots.
  • Six fixes Maciej Cegłowski thinks could restore Internet privacy.
  • Capitalists who act like central planners, and an industry that insists on changing the world without even being able to change San Francisco.

 

New AI player on the chessboard

At least since Deep Blue beat Kasparov, chess masters have been used to the idea that computers can outrun human chess abilities.  Those were the days when coders handcrafted specialized evaluation functions – algorithms designed specifically for chess.  Some comfort came from the fact that, even if computers used brute force to play and test millions of games in order to tune parameters, the basic features were still designed and fed in by humans.

Now the game is changing (again).  After A.I. mastered Atari games by itself, a new deep learning application makes it possible for computers to learn how to play chess by themselves.

Matthew Lai’s paper “Giraffe: Using Deep Reinforcement Learning to Play Chess” describes an engine that learns move decisions in a way that is closer to how humans do it than previous attempts.  The leap forward is in reducing the number of options under evaluation – pruning the decision tree as early as possible.
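As a rough sketch of that pruning idea – not Giraffe’s actual architecture – one can rank candidate moves with an evaluation function and expand only the most promising few at each node.  Here a toy material count stands in for the learned network, and the python-chess package is assumed:

```python
# Sketch of neural-net-guided pruning: rank moves cheaply, expand only the
# top_k at each node. A toy material count replaces the learned evaluation.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board):
    """Stand-in for a learned evaluation: material balance for the side to move."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, chess.WHITE))
        score -= value * len(board.pieces(piece_type, chess.BLACK))
    return score if board.turn == chess.WHITE else -score

def search(board, depth, top_k=5):
    """Depth-limited negamax that keeps only the top_k ranked moves per node."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    scored = []
    for move in board.legal_moves:
        board.push(move)
        scored.append((-evaluate(board), move))  # cheap 1-ply ranking for the mover
        board.pop()
    scored.sort(key=lambda x: x[0], reverse=True)
    best = -float("inf")
    for _, move in scored[:top_k]:               # prune everything else early
        board.push(move)
        best = max(best, -search(board, depth - 1, top_k))
        board.pop()
    return best

print(search(chess.Board(), depth=3))
```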

Its neural network seems to benefit a lot from playing against itself.  This may be something that humans will have a hard time beating.

 

“A Gentle Guide to Machine Learning” by Raúl Garreta

Posted at MonkeyLearn

“Machine Learning is a subfield within Artificial Intelligence that builds algorithms that allow computers to learn to perform tasks from data instead of being explicitly programmed.

(…) some of the most common categories of practical Machine Learning applications:

Image Processing (…): Image tagging (…), Optical Character Recognition (…), Self-driving cars (…)

Text Analysis (…): Spam filtering (…), Sentiment Analysis (…), Information Extraction (…)
Data Mining (…): Anomaly detection (…), Grouping (…), Predictions (…)
Video Games & Robotics (…)”
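As a minimal, self-contained illustration of the “learn from data instead of being explicitly programmed” idea in the excerpt above, here is a spam filter in the scikit-learn style – one of the text-analysis applications listed; the tiny dataset is made up for the example:

```python
# Minimal spam filter: the classifier is never told which words indicate spam;
# it infers that from labeled examples. Dataset is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "limited offer, claim your reward",
    "meeting moved to 3pm", "lunch tomorrow?",
    "free money, click here", "project report attached",
]
labels = ["spam", "spam", "ham", "ham", "spam", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["claim your free prize", "see you at the meeting"]))
# expected: ['spam' 'ham']
```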

Mathematics of an updated, plural Economics

Having similar applications, users, and background, Machine Learning may, from a distance, sometimes be confused with an application of Statistics.

A closer look reveals fundamental differences, as in “Why a Mathematician, Statistician, & Machine Learner Solve the Same Problem Differently” by Nir Kaldero.

One scientific field where this difference surfaces in a distinct way is economics, as Noah Smith’s “Economics Has a Math Problem” sensibly argues by putting the emphasis on the way economics uses math.

Pushing science into new fields, scientists can now employ far more data and computational power than was available when a significant part of mainstream economics was developed.  If econometric tools set the tone for neoclassical economics papers in the final decades of the last century, could machine learning, Bayesian inference, and neural networks open new possibilities for economic theory?

One arguable example is “Mechanisms for Multi-unit Combinatorial Auctions with a Few Distinct Goods” by Piotr Krysta, Orestis Telelis, and Carmine Ventre.  Not coincidentally, the researchers are not from economics departments.  Even if economists are stubborn enough to dismiss game theory as a non-fundamental field, the message is clear: if economists don’t embrace new math, other scientists (human or not) could engulf economics less ceremoniously.

If this happens, will we find that Keynesian uncertainty and the weight of arguments fit big data better than the deterministic parameters of the neoclassical mainstream?

Learning about our minds while teaching machines how to learn

Writing an algorithm requires reflecting on which steps, and which relations among them, are necessary to determine a desired output correctly.  Such reflection involves logical and abstract considerations not only about the operations to be performed but also, in depth, about how our own minds process those same operations.  A couple of recent articles explore that cyclical effort.

“Algorithms of the Mind – What Machine Learning Teaches Us About Ourselves” by Christopher Nguyen and “Are You a Thinking Thing? Why Debating Machine Consciousness Matters” by Alison E. Berman approach interesting facets of this question.

“Artificial Intelligence Is Already Weirdly Inhuman” by David Berreby

From Nautilus, Dark Matter issue via Azeem Azhar:

“…Artificial intelligence has been conquering hard problems at a relentless pace lately (…) neural network has equaled or even surpassed human beings at tasks like discovering new drugs, finding the best candidates for a job, and even driving a car.

(…) some hard problems make neural nets respond in ways that aren’t understandable.

 (…) Not knowing how or why a machine did something strange leaves us unable to make sure it doesn’t happen again.

But the occasional unexpected weirdness of machine “thought” might also be a teaching moment for humanity. (…)  they might show us how intelligence works outside the constraints of our species’ limitations. (…)” read full article

Artificial Intelligence tackles the Internet of Things

In “Connecting artificial intelligence with the internet of things”, Andy Meek discusses some pros and cons in the future of merging Artificial Intelligence and the Internet of Things: reasons to be optimistic, pitfalls, and a debate on the fears it raises.

And Stephen Brennan’s “The Next Big Thing Is The Continuum” looks at the trends and challenges the tech industry faces in trying to merge A.I. and I.o.T. into one new environment.

Adaptive data analysis

In “The reusable holdout: Preserving validity in adaptive data analysis”, published in Science, researchers Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Roth address the issue of adaptivity in data analysis.

Applying rules of thumb, such as the 5% significance threshold many of us learn when first introduced to the scientific method, sometimes ends up corroborating misleading ‘discoveries’.  Data analysis is often done by re-interpreting statistics computed on the same data, so conclusions carry much of our models and of how we interpreted the raw data in the first place.

Author Moritz Hardt posted an interesting introduction to the paper on the Google Research Blog.
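In simplified form, the paper’s reusable holdout (“Thresholdout”) answers adaptive queries from the training set and only consults the holdout, with added noise, when the two estimates disagree.  The sketch below is a loose paraphrase with illustrative parameters, not the authors’ exact procedure:

```python
# Simplified sketch of the reusable-holdout (Thresholdout) idea: queries are
# answered from the training set unless they disagree with the holdout by more
# than a noisy threshold. Parameters and data here are illustrative only.
import numpy as np

def thresholdout(train, holdout, queries, threshold=0.04, sigma=0.01,
                 rng=np.random.default_rng(0)):
    """Each query maps a data point to [0, 1]; returns one answer per query."""
    answers = []
    for q in queries:
        mean_train = np.mean([q(x) for x in train])
        mean_holdout = np.mean([q(x) for x in holdout])
        # Consult the holdout only when the estimates differ by more than a
        # noisy threshold; otherwise the holdout stays "fresh" for later queries.
        if abs(mean_train - mean_holdout) > threshold + rng.laplace(0, sigma):
            answers.append(mean_holdout + rng.laplace(0, sigma))
        else:
            answers.append(mean_train)
    return answers

# Toy usage: adaptively estimate sign frequencies of a few coordinates.
data = np.random.default_rng(1).normal(size=(200, 3))
train, holdout = data[:100], data[100:]
queries = [lambda x, i=i: float(x[i] > 0) for i in range(3)]
print(thresholdout(train, holdout, queries))
```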

How many words for a picture?

Researchers Andrej Karpathy and Li Fei-Fei present, in “Deep Visual-Semantic Alignments for Generating Image Descriptions”, a model for generating natural language descriptions of images.

They share part of the code on GitHub so people can train their own neural network to describe images.

The model pairs visual information with corresponding natural language descriptions, such as:

“girl in pink dress is jumping in air.”

“woman is holding bunch of bananas.”

“a young boy is holding a baseball bat.”
These captions are generated through layers of semantic correspondence between image regions and sentence fragments.
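As a rough, hypothetical sketch of the encoder–decoder structure behind such captioning models (a CNN embeds the image, an RNN emits words conditioned on that embedding) – layer sizes and the linear stand-in for the CNN are illustrative, not the paper’s actual architecture:

```python
# Hypothetical encoder-decoder captioning sketch: an image embedding is fed as
# the first "word" of a sequence, and an LSTM predicts the following words.
import torch
import torch.nn as nn

class CaptionSketch(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Stand-in for a pretrained CNN: project flattened pixels to an embedding
        self.encoder = nn.Linear(3 * 224 * 224, embed_dim)
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image, caption_tokens):
        img_feat = self.encoder(image.flatten(1)).unsqueeze(1)  # (B, 1, E)
        words = self.word_embed(caption_tokens)                 # (B, T, E)
        seq = torch.cat([img_feat, words], dim=1)
        out, _ = self.rnn(seq)
        return self.to_vocab(out)  # logits over the vocabulary at each step

model = CaptionSketch()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 6, 1000])
```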

“Build-a-brain” by Michael Graziano

Article featured at aeon.co

“The brain is a machine: a device that processes information. (…) [and] somehow the brain experiences its own data. It has consciousness. How can that be?

That question has been called the ‘hard problem’ of consciousness (…)
Here’s a more pointed way to pose the question: can we build it? (…)

I’ve made my own entry into that race, a framework for understanding consciousness called the Attention Schema theory. The theory suggests that consciousness is no bizarre byproduct – it’s a tool for regulating information in the brain. And it’s not as mysterious as most people think. (…)

In this article I’ll conduct a thought experiment. Let’s see if we can construct an artificial brain, piece by hypothetical piece, and make it conscious. The task could be slow and each step might seem incremental, but with a systematic approach we could find a path that engineers can follow (…)”.  Much more to read – go to the full article.

Autonomous Weapons Open Letter from the Future of Life Institute

Future of Life Institute published this “Autonomous Weapons: an Open Letter from AI & Robotics Researchers”:

“Autonomous Weapons: an Open Letter from AI & Robotics Researchers
Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

“Facebook Instant Articles Just Don’t Add Up for Publishers” by Michael Wolff

From MIT Tech Review:

“(…) digital content is being divided between a lucrative high-end entertainment world, where licensors receive a negotiated fee for allowing the distribution of their property, and a low-end publishing world where content is expected to be “free,” supporting itself on often elusive advertising sales and ad splits. In this particular deal, publishers can sell ads on their articles and keep all of the revenue, or have Facebook sell ads in exchange for 30 percent.

(…) The more immediate question is about whether Facebook’s “instant articles” and other republishing initiatives are digging a deeper hole for publishers or helping them get out of the one they are already in.

(…)

When the Facebook “instant articles” deal was first proposed last fall, there was no provision at all for a financial exchange. From Facebook’s point of view, it was just a further service to users and publishers. If it hosted the Times’ content it would load faster—hence a better experience for Facebook users clicking to a shared Times story. The Times and other publishers should do this, Facebook reckoned, because it would get them greater exposure to Facebook’s vast audience. It was promotional.

After some limited pushback from the publishers, the deal now resembles a conventional digital ad split—of the kind made ubiquitous by Google AdSense. That is, if Facebook sells against this content through its networks, it splits the revenues with the publisher. If the publisher sells the ad, as though in a free-standing insert model, it keeps what it kills. (Exactly the model that has consistently lowered digital ad prices—the inevitable discounting when you have many sellers of the same space.)

(…)

But what of the New York Times? (…) This is further puzzling because the Times has built a digital subscription business of almost a million users. Why subscribe to the Times if you can read it for free on Facebook?

Of course, the subscription business will not support the Times alone (indeed, its growth appears to be seriously slowing)—it needs advertising too. Most of the advertising that pays for most of the Times’ costs still comes from the actual newspaper. That revenue stream is declining quickly, however, and is far from being replaced by digital ads, which in the first quarter of 2015 yielded only $14 million a month in revenue (15 years ago, before digital balkanized the business, the Times was averaging more than $100 million a month in ad revenue).

These measly ad dollars are in part a function of the fact that Google and Facebook together take 52 percent of all digital advertising. (…)

And now, in the prevalent view, there is simply no turning back. The math has changed. The New York Times may once have made more than $100 million a month in advertising revenue on a 1.5 million circulation base; now it makes $14 million on 50 million monthly visitors on the digital side of the business. So it will need something like 350 million users to make equivalent money—which, bizarrely, Facebook might possibly provide. Except, of course, that the more numbers go up, in digital math, the more their value goes down. But pay no attention.” read full article
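A back-of-the-envelope check of the arithmetic quoted above (the dollar and visitor figures come from the article; the calculation itself is only an illustration):

```python
# Back-of-the-envelope check of the figures quoted in the article above.
old_ad_revenue = 100e6       # dollars per month in the print era
digital_ad_revenue = 14e6    # dollars per month, digital side, Q1 2015
monthly_visitors = 50e6      # digital visitors per month

revenue_per_visitor = digital_ad_revenue / monthly_visitors   # ~$0.28
visitors_needed = old_ad_revenue / revenue_per_visitor        # ~357 million

print(f"~${revenue_per_visitor:.2f} per visitor per month")
print(f"~{visitors_needed / 1e6:.0f} million visitors to match print-era revenue")
```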

 

Probabilistic Inference Techniques for Scalable Multiagent Decision Making

In a collaboration across Singapore, the USA, and Germany, researchers Akshat Kumar, Shlomo Zilberstein, and Marc Toussaint published “Probabilistic Inference Techniques for Scalable Multiagent Decision Making“.

The paper introduces a new class of machine learning algorithms for multiagent planning, specifically in scenarios of partial observability.  Applying Bayesian inference to planning is not unheard of; this paper advances by establishing conditions under which it scales.
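As a toy, single-step illustration of the planning-as-inference idea behind this line of work (not the paper’s Dec-POMDP algorithm): treat “acting optimally” as an observed event whose likelihood grows with reward, and read off the posterior over actions.

```python
# Toy planning-as-inference: condition on an "optimality" event whose
# likelihood is exp(reward), then normalize to get a posterior over actions.
# The actions and rewards below are made up for illustration.
import numpy as np

actions = ["explore", "exploit", "wait"]
prior = np.array([1 / 3, 1 / 3, 1 / 3])       # uniform prior over actions
expected_reward = np.array([0.2, 1.0, 0.1])   # hypothetical expected rewards

likelihood = np.exp(expected_reward)          # P(optimal | action)
posterior = prior * likelihood
posterior /= posterior.sum()                  # P(action | optimal)

for a, p in zip(actions, posterior):
    print(f"P({a} | optimal) = {p:.2f}")
```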

Translation assisted by machine learning

“Using Machine Translation to Provide Target-Language Edit Hints in Computer Aided Translation Based on Translation Memories” by Miquel Esplà-Gomis, Felipe Sánchez-Martínez, and Mikel L. Forcada from Universitat d’Alacant, Spain.

Experiments show that translation-memory-based computer-aided translation may benefit from machine translation, which supplies hints about which target-side words to change.

See also their previous “Using machine translation in computer-aided translation to suggest the target-side words to change“.