Civil rights to autonomous artificial systems

In an open letter to the European Commission, a group of ‘Political Leaders, AI/robotics researchers and industry leaders, Physical and Mental Health specialists, Law and Ethics experts’ gathered to voice their concern about the negative consequences of granting legal status to robots.

This echoes the concept of corporate personhood, the long debate over its desirability, and how that notion spread and became commonplace in modern law.

Paragraph 59 f of the report cited in the letter is in turn based on the recommendations to the Commission on Civil Law Rules on Robotics, which give a thorough view of the grounds that moved the Committee on Legal Affairs to propose it.

AI mastering Go

As soon as AlphaGo, Google DeepMind’s Go player, defeated the European champion 5-0, many people were celebrating, including the friend of mine who first shared the story. With the contest against Lee Sedol in South Korea still pending, I would not argue against its merits.

The paper the DeepMind team recently published, “Mastering the game of Go with deep neural networks and tree search”, is indeed at the cutting edge of deep learning algorithm design.
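To make the division of labor concrete, here is a minimal toy sketch in Python of the idea behind the paper: a policy proposes a handful of promising moves, a value function scores positions, and a shallow tree search ties the two together. The legal_moves, policy_prior and value_estimate functions below are hypothetical placeholders, not the paper’s networks or the rules of Go; only the search skeleton is meant to be illustrative.

```python
import math
import random

# Toy stand-ins for the paper's two deep networks. They are hypothetical
# placeholders so that the search skeleton below actually runs.
def legal_moves(position):
    """Toy game: from any position, three successor positions are available."""
    return [position * 3 + i for i in (1, 2, 3)]

def policy_prior(position, moves):
    """Probability the 'policy' assigns to each move (uniform placeholder)."""
    return {m: 1.0 / len(moves) for m in moves}

def value_estimate(position):
    """Score a position for the side to move (deterministic pseudo-random toy)."""
    return random.Random(position).uniform(-1.0, 1.0)

def search(position, depth=3, top_k=2):
    """Shallow search: expand only the moves the policy ranks highest,
    score the frontier with the value function, back up with negamax."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return value_estimate(position), None
    prior = policy_prior(position, moves)
    candidates = sorted(moves, key=lambda m: prior[m], reverse=True)[:top_k]
    best_score, best_move = -math.inf, None
    for move in candidates:
        child_score, _ = search(move, depth - 1, top_k)
        if -child_score > best_score:          # opponent's value, negated
            best_score, best_move = -child_score, move
    return best_score, best_move

print(search(position=0))
```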

Something is off in the big picture, though, as the (unsettled) argument that followed showed. The trend is clear looking back: think of Watson’s feats and you knew this was coming. It is tempting to say the same will happen in the future, with computing appearing to beat humans at tasks of ever-increasing complexity.

Key to this discussion are the boundaries of the current approach, precisely because it is based on computing. Of course AI beats us in any challenge of speed, memory, or strenuous brute-force calculation. But that is not all there is. To begin with, this is only possible when humans tell the AI how to deal with abstract symbols (such as bits and code) and how to relate external reality to those abstractions.

That is when people start to wonder whether we should have more open-minded AIs, as in “Don’t Know Mind: Zen and the Art of AGI Indecision” by Gareth John.

Even accepting that more layers (deeper learning) may imply broader ‘minds’, that is a different way of looking at what comes next.

“Whole Brain Emulation: Reverse Engineering a Mind” By Randal A. Koene

As fiancés know, setting a date is a double-edged sword. Goals seem more tangible and easier to plan around, but unkept promises usually end with someone looking foolish.

Pushing the envelope and making plans about the future is what the Global Future 2045: Towards a New Strategy for Human Evolution congress was intended for. Among the high-profile thinkers speaking there, Randal A. Koene delivered a speech on brain emulation: Continue reading

A.I. weakness: relevance

As a friend told me, and as we are getting used to hearing, an AI algorithm can match the average American on real SAT questions, and more of the same is bound to come. Should we worry? If I had to guess, I would say that sometime in the future we will see the SAT as a short-lived, poor way to assess anything really relevant about humans.

What about human dominance in creativity? Taking Brazilian composer Chico Science’s insight that “Computers make art, artists make money”: SATs are an easy field to cede to computers. I am not sure, though, that if left to choose between money and creativity, artists would know which one to give up…

If we have an option at all: algorithmic trading is making money already, and Margaret A. Boden makes the point in MIT Technology Review that computers aren’t close to being ready to supplant human artists: Continue reading

Jobless future

If you’re worried about your kids and the fact that more than half of students are chasing dying careers, you are probably right. If you think that choosing a career that is not dying will help them, you are probably wrong. Of course this and other Odradeks will outlive the parents, but the dismissal is not a Kafkan one. The problem is likely not which careers to choose; rather, ‘career’ itself is becoming an obsolete term.

Colleagues seem set to be around for a while, though we might have to be Ready for a Robot Colleague.

Taking a broader view of workmates, A.I. may give us a ride in preparation for a new kind of occupational future. Perhaps it’s better for us to Don’t Worry, Smart Machines Will Take Us With Them. Remember those kids chasing dying careers? That’s only part of their time; the rest of it they spend drooling obsessively over smartphones as much as we let them. It may well be the case that this is their robot education in the making.

“The Struggle to Define What Artificial Intelligence Actually Means” by Gary Lea

Posted at The Conversation:

“When we talk about artificial intelligence (AI) – (…) – what do we actually mean?

(…) having a usable definition of AI – and soon – is vital for regulation and governance because laws and policies simply will not operate without one.

(…) Defining the terms: artificial and intelligence
For regulatory purposes, “artificial” is, hopefully, the easy bit. (…) leaves the knottier problem of “intelligence”.

From a philosophical perspective, “intelligence” is a vast minefield, especially if treated as including one or more of “consciousness”, “thought”, “free will” and “mind”. (…)

Let’s take a step back and ask what a regulator’s immediate interest is here?

I would say that it is the work products of AI scientists and engineers, and any public welfare or safety risks that might arise from those products.

Logically, then, it is the way that the majority of AI scientists and engineers treat “intelligence” that is of most immediate concern. (…) read full post

 

Plenty of robots

It appears we can’t keep up with their pace. In a recent batch of news, a friend found robots painting in Van Gogh’s style, robots beating us at rock-paper-scissors (below; and by the way, that’s cheating on my playground), robots writing adventures, and so on.

On the more troubling strain, we try to make sure we control military robots, and wonder how great an idea it is to build systems that deceive us, or how good an AI boss can be.

And while we can’t make ethical robots, and they are not yet out there firing (at) us, humans may enjoy treating robots like Yo-Yo Ma’s cello — as an instrument for human intelligence.

New AI player on the chess board

At least since Deep Blue beat Kasparov, chess masters have been used to the idea that computers may outrun human chess abilities. Those were times when coders handcrafted specialized evaluation functions: chess-specific algorithms. Some comfort derived from the fact that even if computers used brute force to practice and test millions of games in order to tune parameters, the basic features were designed and input by humans.

Now the game is changing (again). After A.I. mastered Atari games by itself, a new deep learning application has made it possible for computers to learn how to play chess by themselves.

Matthew Lai’s paper “Giraffe: Using Deep Reinforcement Learning to Play Chess” describes an engine that learns move decisions in a way that is closer to human play than previous attempts. The leap forward is in reducing the options under evaluation: pruning the decision tree as early as possible.
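Giraffe’s pruning is itself learned, but the underlying idea of discarding branches early is easiest to see in the classic alpha-beta cutoff. Below is a minimal Python sketch over an invented toy tree; in an engine like Giraffe the frontier scores would come from the trained evaluation network rather than the hard-coded numbers used here.

```python
import math

# A toy game tree: inner nodes list their children, leaves carry a score.
# The scores are invented; a learned evaluation would supply them in practice.
TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
LEAF_SCORES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

def alpha_beta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Classic alpha-beta: cut off branches that cannot change the result."""
    if node in LEAF_SCORES:
        return LEAF_SCORES[node]
    best = -math.inf if maximizing else math.inf
    for child in TREE[node]:
        score = alpha_beta(child, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:        # remaining siblings are pruned early
            break
    return best

print(alpha_beta("root"))        # -> 3 on this toy tree; leaf b2 is never visited
```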

Its neural network seems to benefit a lot from playing against itself. This may be something humans will have a hard time beating.

 

Mark my words

In “What Searchable Speech Will Do To You”, published in Nautilus, James Somers discusses some interesting aspects of the coming possibility of having everything we say recorded. And then labeled, tagged, searched…

“We are going to start recording and automatically transcribing most of what we say. (…) It will happen by our standard combination of willing and allowing. It will happen because it can. It will happen sooner than we think.

(…) But would all of this help or hurt us? (…) The more we come to rely on a tool, the less we rely on our own brains.

(…) By offloading more of memory’s demands onto the Record (…) it might not be that we’re making space for other, more important thinking. We might just be depriving our brains of useful material. (…)

The worry, then, is twofold: If you stopped working out the part of your brain that recalls speech (…) your mind would become a less interesting place. Continue reading

“A Gentle Guide to Machine Learning” by Raúl Garreta

Posted at MonkeyLearn

“Machine Learning is a subfield within Artificial Intelligence that builds algorithms that allow computers to learn to perform tasks from data instead of being explicitly programmed.

(…) some of the most common categories of practical Machine Learning applications:

Image Processing (…): Image tagging (…), Optical Character Recognition (…), Self-driving cars (…)

Text Analysis (…): Spam filtering, (…) Sentiment Analysis, (…) Information Extraction, (…)

Data Mining (…): Anomaly detection, (…) Grouping, (…) Predictions (…)

Video Games & Robotics (…) Continue reading
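To make the opening definition concrete, here is a minimal spam-filter sketch in the spirit of the Text Analysis category above. The library (scikit-learn) and the four toy messages are my own choices for illustration, not something taken from Garreta’s guide; the point is only that the model is given labelled examples rather than hand-written rules.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set; a real filter would see thousands of messages.
texts = [
    "win a free prize now", "cheap meds click here",   # spam
    "meeting moved to 3pm", "lunch tomorrow?",          # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# No explicit rules anywhere: the pipeline learns word statistics from the data.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["free prize, click now"]))   # likely ['spam']
```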

Computers outrunning our brain. What about choice?

Just when it seemed enough to think of computing capacity in terms of FLOPS, supercomputer development makes the point that a better measure is TEPS. TEPS stands for traversed edges per second, which is a sort of FLOPS weighted by communication cost.

Anyway, the fact is that AI Impacts produced estimates of our brain’s performance in TEPS. The next step was, of course, the ubiquitous cost comparison. It would seem we will be able to hire this computational power for about $100/hour within the next decade, but for the time being the cost is estimated at around $4,700 – $170,000/hour. So go to your boss and tell him he’s renting your brain at a bargain price.
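As a back-of-the-envelope check on that “next decade” figure, the short sketch below projects how long the quoted range would take to fall to $100/hour. The 30% annual cost decline is purely an assumption of mine; the only numbers taken from above are the price estimates themselves.

```python
import math

current_low, current_high = 4_700, 170_000   # $/hour today, the range quoted above
target = 100                                  # $/hour target
annual_decline = 0.30                         # assumed: hardware 30% cheaper per year

def years_to_reach(price, target, decline):
    """Years until `price` falls to `target` at a constant yearly decline rate."""
    return math.log(target / price) / math.log(1 - decline)

print(years_to_reach(current_low, target, annual_decline))    # ~10.8 years
print(years_to_reach(current_high, target, annual_decline))   # ~20.9 years
```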

If you do so, your odds are better if you skip the info below and keep it simple. New studies show that our brains do weigh cognitive effort when making choices. This ‘TL;DR’ feature of brain wiring may be the culprit preventing you from reading the paper that says so, “Separate and overlapping brain areas encode subjective value during delay and effort discounting”.

Mathematics of an updated, plural Economics

Having similar applications, users, and background, Machine Learning may, from a distance, sometimes be confused with an application of Statistics.

A closer look reveals fundamental differences, as in “Why a Mathematician, Statistician, & Machine Learner Solve the Same Problem Differently” by Nir Kaldero.

One scientific field where this difference surfaces in a distinctive way is economics, as Noah Smith’s “Economics Has a Math Problem” sensibly shows by putting the emphasis on the way economics uses math.

Pushing science into new fields, scientists can now employ much more data and computational power than was available when a significant part of mainstream economics was developed. If econometric tools set the tone for neoclassical economics papers in the final decades of the last century, could machine learning, Bayesian inference, and neural networks open new possibilities for economic theory?

One arguable example is “Mechanisms for Multi-unit Combinatorial Auctions with a Few Distinct Goods” by Piotr Krysta, Orestis Telelis, and Carmine Ventre. Not coincidentally, the researchers are not from economics departments. Even if economists are stubborn enough to dismiss game theory as a non-fundamental field, the message is clear: if economists don’t embrace the new math, other scientists (human or not) could engulf economics less ceremoniously.

If this happens, will we find that Keynesian uncertainty and the weight of arguments fit big data better than the deterministic parameters of the neoclassical mainstream?
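As a small, hedged illustration of that closing question, the sketch below fits the same invented regression problem with an ordinary least-squares estimator and with a Bayesian model that also reports its uncertainty. The synthetic data and the choice of scikit-learn are mine; nothing here comes from the papers cited above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, BayesianRidge

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))           # an invented observed covariate
y = 2.5 * X[:, 0] + rng.normal(0, 3, size=200)  # noisy linear relationship

ols = LinearRegression().fit(X, y)      # classical point estimate
bayes = BayesianRidge().fit(X, y)       # Bayesian fit with uncertainty

point = np.array([[5.0]])
mean, std = bayes.predict(point, return_std=True)
print("OLS prediction:     ", ols.predict(point)[0])
print("Bayesian prediction:", mean[0], "+/-", std[0])   # uncertainty comes built in
```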

Bias dynamics in A.I.

The more algorithms we live by, the more “Computer Scientists Find Bias in Algorithms”, as the story by Lauren J. Young tells us.

We may, of course, think that bias is unavoidable, so the best we can do is to stay aware and go on. How aware we can be may run into psychological or commercial barriers, as in Jerry Kaplan’s “Would You Buy a Car That’s Programmed to Kill You? You Just Might”.

Maybe we can only hope that something good may come from algorithms interacting and trying to learn their new preferred actions (from their adjusted biases), as Daniel Hennes and Michael Kaisers’ paper on “Evolutionary Dynamics of Multi-Agent Learning” indicates is possible.
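For readers curious what the evolutionary dynamics of learning look like in practice, here is a minimal replicator-dynamics sketch for a symmetric two-action game. The payoff matrix and starting point are invented for illustration and are not taken from Hennes and Kaisers’ paper; the update rule is the standard replicator equation used in this line of work.

```python
import numpy as np

# Payoffs for the row player in a Prisoner's-Dilemma-like 2x2 game (invented).
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])

x = np.array([0.9, 0.1])          # initial share of each action in the population
dt = 0.01

for step in range(2000):
    fitness = A @ x               # expected payoff of each action
    avg = x @ fitness             # population-average payoff
    x = x + dt * x * (fitness - avg)   # replicator update: good actions grow
    x = x / x.sum()               # guard against numerical drift

print(x)                          # drifts toward the second (defect-like) action
```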

 

Learning about our minds while teaching machines how to learn

Writing an algorithm requires reflecting on which steps, and which relations among them, are necessary to determine a desired output correctly. Such a reflection exercise involves logical and abstraction considerations not only about the operations to be performed but also, in depth, about how our minds process those same operations. A couple of recent articles explore that cyclical effort.

“Algorithms of the Mind – What Machine Learning Teaches Us About Ourselves” by Christopher Nguyen and “Are You a Thinking Thing? Why Debating Machine Consciousness Matters” by Alison E. Berman approach interesting points of this case. Continue reading

“Artificial Intelligence Is Already Weirdly Inhuman” by David Berreby

From Nautilus, Dark Matter issue via Azeem Azhar:

“…Artificial intelligence has been conquering hard problems at a relentless pace lately (…) neural network has equaled or even surpassed human beings at tasks like discovering new drugs, finding the best candidates for a job, and even driving a car.

(…) some hard problems make neural nets respond in ways that aren’t understandable.

 (…) Not knowing how or why a machine did something strange leaves us unable to make sure it doesn’t happen again.

But the occasional unexpected weirdness of machine “thought” might also be a teaching moment for humanity. (…)  they might show us how intelligence works outside the constraints of our species’ limitations. (…)” read full article