As a friend told me – and as we are getting used to hearing – an AI algorithm can now match the average American on real SAT questions, and more of this is bound to come. Should we worry? If I had to guess, I would say that at some point in the future we will look back on the SAT as a short-lived, poor way to assess anything really relevant about humans.
What about human dominance in creativity? Taking Brazilian composer Chico Science’s insight that “Computers make art, artists make money”: SATs are an easy field to yield to computers – but if left to choose between money and creativity, I am not sure which one artists would yield…
If we have an option at all. Algo trading is making money already – and Margaret A. Boden makes the point, on MIT Review, that computers aren’t close to being ready to supplant human artists.
Maria Popova on “Telling Is Listening”, part of Ursula K. Le Guin’s collection of nonfiction writings published in “The Wave in the Mind: Talks and Essays on the Writer, the Reader, and the Imagination”:
“Every act of communication is an act of tremendous courage in which we give ourselves over to two parallel possibilities: the possibility of planting into another mind a seed sprouted in ours and watching it blossom into a breathtaking flower of mutual understanding; and the possibility of being wholly misunderstood, reduced to a withering weed. Candor and clarity go a long way in fertilizing the soil, but in the end there is always a degree of unpredictability in the climate of communication — even the warmest intention can be met with frost. Yet something impels us to hold these possibilities in both hands and go on surrendering to the beauty and terror of conversation, that ancient and abiding human gift. And the most magical thing, the most sacred thing, is that whichever the outcome, we end up having transformed one another in this vulnerable-making process of speaking and listening.”
Originally published as a postscript to Amanda Palmer’s “The Art of Asking: How I Learned to Stop Worrying and Let People Help”.
Below is an excerpt – read the full post at Brain Pickings.
“We are embodied spirits who need raw material, both physical and spiritual, to create. But we forget that we are also social beasts who need not slash through the bramble of those needs alone.
In Buddhism and other ancient Eastern traditions, there is a beautiful concept connoted by the Pali word dana (pronounced DAH-nah), often translated as the virtue of generosity. But at its heart is something far more expansive — a certain quality of open-handedness in dynamic dialogue with need and organically responsive to it. The practice of dana has sustained the Buddhist tradition for two and a half millennia — monks give their teachings freely, and the lay people who benefit from them give back to the monks by making sure their sustenance needs are met.”
Craig Mod’s tale of the rise and fall of his enchantment with digital books: a critical view of how the current closed ebook platforms controlled by Amazon and Apple contribute to the stagnation of digital book development. Article from Aeon magazine.
“From 2009 to 2013, every book I read, I read on a screen. And then I stopped. (…)
By 2009, it was impossible to ignore the Kindle. (…)
The Kindle was all of that and more. Neatly bundled up. I was in love.
(…) Granite, wood, wax, silk, paper, metal type, the Gutenberg press, Manutius’s octavo editions, Penguin paperbacks, desktop publishing software, digital type, on‑demand printing, .epub: the evolutionary path of ‘books’ has been punctuated by technological changes large and small. And so, too, with the Kindle.
(…) Containers matter. They shape stories and the experience of stories. Choose the right binding, cloth, trim size, texture of paper, margins and ink, and you will strengthen the bond between reader and text. Choose badly and the object becomes a wedge between reader and text.
(…) I was critical of Kindle typography and layouts from day one, but I assumed that these errors would be remedied quickly. My book notes felt locked away in Amazon’s ecosystem, but I assumed they would eventually produce better interfaces or export options for more rigorous readers.
(…) But in the past two years, something unexpected happened: I lost the faith.”
Posted on brain pickings:
“David Bohm: Reality is what we take to be true. What we take to be true is what we believe. What we believe is based upon our perceptions. What we perceive depends on what we look for. What we look for depends on what we think. What we think depends on what we perceive. What we perceive determines what we believe. What we believe determines what we take to be true. What we take to be true is our reality.
Matthieu Ricard: No matter how complex our instruments may be, no matter how sophisticated and subtle our theories and calculations, it’s still our consciousness that finally interprets our observations. And it does so according to its knowledge and conception of the event under consideration. It’s impossible to separate the way consciousness works from the conclusions it makes about an observation. The various aspects that we make out in a phenomenon are determined not only by how we observe, but also by the concepts that we project onto the phenomenon in question.”
If you’re worried about your kids and the report that More than half of students are chasing dying careers, you are probably right to be. If you think that choosing a career that is not dying will help them, you are probably wrong. Of course this and other Odradeks will outlive the parents, but the dismissal is not a Kafkan one. The problem is likely not which careers are dying; more likely, ‘career’ itself is becoming an obsolete term.
Colleagues seem likely to be around for a while, though we might have to be Ready for a Robot Colleague.
Taking a broader view of workmates, A.I. may give us a ride toward a new future of occupations. Perhaps it’s better for us to Don’t Worry, Smart Machines Will Take Us With Them. Remember those kids chasing dying careers? That’s only part of their time – the rest of it they spend drooling obsessively over smartphones, as much as we let them. It may well be that this is their robot education in the making.
Start from the no-fun fact that more people have died from selfies than shark attacks this year as an anecdotal case of the interaction between natural selection and technology. It’s too long a shot, since anyone may well point out that shark killings were never a key driver of human selection to begin with.
Having sex, though, has always been a key driver, and looking good to potential mates plays a part in it. In this light the selfie–selection link starts to look less naive. Even then, the selfie is too short a fling to make an impact in the big picture. As a further analogy, though, it is arguable that the selfie is merely the current mode of a mediated relation that has long been at work in humankind’s reproduction.
If we consider the big impact some fundamental technological innovations such as tool making, language, and culture have had on human survival and reproductive abilities, then the evidence turns around: it is very hard to deny that technology has been one of the key drivers of evolution since before Homo sapiens.
A few short, recent articles discuss this. Is Technology Unnatural—Or Is It ‘What Makes Us Human’? makes the point that technology is part of us. Looking forward, A Genomics Revolution: Evolution by Natural Selection to Evolution by Intelligent Direction points out that if the role of technology in human evolution has so far been rather passive, genomics can shift it into very active design. But then, if Science Says the Internet Is Turning Us into Shallow Thinkers, what sort of evolution would a technology-driven world lead us to?
This is a very good speech, though not a short one – “brevity is for the weak”, as the top of the Idle Words page reminds us. Topics covered:
- The corporate side of our culture of total surveillance – the odd story of how advertisers destroyed our online privacy and then found themselves swindled by robots.
- Six fixes Maciej Cegłowski thinks could restore Internet privacy.
- Capitalists who act like central planners, and an industry that insists on changing the world without even being able to change San Francisco.
“When we talk about artificial intelligence (AI) – (…) – what do we actually mean?
(…) having a usable definition of AI – and soon – is vital for regulation and governance because laws and policies simply will not operate without one.
(…) Defining the terms: artificial and intelligence
For regulatory purposes, “artificial” is, hopefully, the easy bit. (…) This leaves the knottier problem of “intelligence”.
From a philosophical perspective, “intelligence” is a vast minefield, especially if treated as including one or more of “consciousness”, “thought”, “free will” and “mind”. (…)
Let’s take a step back and ask: what is a regulator’s immediate interest here?
I would say that it is the work products of AI scientists and engineers, and any public welfare or safety risks that might arise from those products.
Logically, then, it is the way that the majority of AI scientists and engineers treat “intelligence” that is of most immediate concern. (…) Read the full post.
From NY Times Op-Ed
“In the mid-1980s, a University of Arizona surgery professor, Marlys H. Witte, proposed teaching a class entitled “Introduction to Medical and Other Ignorance.” (…)
(…) She wanted her students to recognize the limits of knowledge and to appreciate that questions often deserve as much attention as answers. Eventually, the American Medical Association funded the class, which students would fondly remember as “Ignorance 101.”
Classes like hers remain rare, but in recent years scholars have made a convincing case that focusing on uncertainty can foster latent curiosity, while emphasizing clarity can convey a warped understanding of knowledge.
(…) By inviting scientists of various specialties to teach his students about what truly excited them — not cold hard facts but intriguing ambiguities — Dr. Firestein sought to rebalance the scales.
Presenting ignorance as less extensive than it is, knowledge as more solid and more stable, and discovery as neater also leads students to misunderstand the interplay between answers and questions.
(…) Questions don’t give way to answers so much as the two proliferate together. Answers breed questions. Curiosity isn’t merely a static disposition but rather a passion of the mind that is ceaselessly earned and nurtured.
(…) The resulting state of uncertainty, psychologists have shown, intensifies our emotions: not only exhilaration and surprise, but also confusion and frustration.
The borderland between known and unknown is also where we strive against our preconceptions to acknowledge and investigate anomalous data, a struggle Thomas S. Kuhn described in his 1962 classic, “The Structure of Scientific Revolutions.” (…)
The study of ignorance — or agnotology, a term popularized by Robert N. Proctor, a historian of science at Stanford — is in its infancy. (…)
Our students will be more curious — and more intelligently so — if, in addition to facts, they were equipped with theories of ignorance as well as theories of knowledge.” Read full story
Seth R. Bordenstein and Kevin R. Theis’s “Host Biology in Light of the Microbiome: Ten Principles of Holobionts and Hologenomes” combines impressive qualities. It suggests no less than a holistic redefinition of zoology, botany, and biology. And the authors are careful to restate the historical achievements of Darwin, Mendel, and modern scientists within this new framework: animals and plants are more appropriately understood as multi-species associations than as autonomous individuals, at both the biological and the genetic level.
While a sense of justice welcomes the new status of our former ‘junior’ associates, I wonder how well the implosion of the self into a multitude of beings fits a society that may overvalue individuality. Instead of a dissolution of the self into a common spirituality, we see the multiplication of ‘I’ into multiple individuals. The untold story of mitochondria et alii paints the egocentric narrative in a more altruistic light.
This does not take away from the great service such a reconstruction may do to science. Excerpts below:
It appears we can’t keep up with their pace. In a recent batch of news, a friend found robots painting in Van Gogh’s style, robots beating us at rock-paper-scissors (below – and by the way, that’s cheating on my playground), robots writing adventures, and so on.
And while we can’t make ethical robots, and they are not yet out there firing (at) us, humans may enjoy treating robots like Yo-Yo Ma’s cello – as instruments for human intelligence.
Watching chimpanzees’ reactions to infanticide imagery brings evidence of a chimpanzee sense of right and wrong, not far from what humans would deem relevant. From Claudia Rudolf et al.’s paper “Chimpanzees’ Bystander Reactions to Infanticide”:
“we presented chimpanzees with videos depicting a putative norm violation: unfamiliar conspecifics engaging in infanticidal attacks on an infant chimpanzee. The chimpanzees looked far longer at infanticide scenes than at control videos showing nut cracking, hunting a colobus monkey, or displays and aggression among adult males. Furthermore, several alternative explanations for this looking pattern could be ruled out. However, infanticide scenes did not generally elicit higher arousal. We propose that chimpanzees as uninvolved bystanders may detect norm violations but may restrict emotional reactions to such situations to in-group contexts. We discuss the implications for the evolution of human morality.”
This latter behavior is not unheard of in humans, as in the Kitty Genovese murder case, when many bystanders failed to help her as she was raped and murdered in two separate attacks. Not to mention mobs, lynchings, and other human habits.
Evidence at least as old as the Bible reminds us that having a sense of what is right is not enough – in case anyone should say that chimpanzees’ morals shouldering humans’ would be a clear upgrade.
“There was a time – some years ago – when to profess disbelief in a Supreme Being could be hazardous to one’s health. (…). Today, atheism has taken its comfortable seat by the fire and has its feet up. (…). Atheism has never been so respectable.
That is why perhaps we now ought to pause and ask if it has actually earned the easy place it enjoys. (…) Before we begin the trial, perhaps we ought to clarify the case. What is ‘atheism’?
(…) It claims there exists no kind of god.
That’s basic. But we might ask, ‘Is it really necessary to understand atheism as so categorical? Can’t we make room for softer versions of skepticism, so as to be more inclusive?’
(…) But secondly, and more importantly, including agnostics in their position is going to give away the game at the start (…): it is a personal declaration of doubt, not a categorical one. In its strongest form, agnosticism says something like, “I really, really, really strongly don’t think there is any God, because I’ve seen no evidence anywhere near sufficient to make me think there is one.” But the savvy atheist is going to detect the problem: as a personal declaration, it fails to bind anyone else.”
“Digital star chamber”, featured in Aeon magazine – by Frank Pasquale:
“The infancy of the internet is over. As online spaces mature, Facebook, Google, Apple, Amazon, and other powerful corporations are setting the rules that govern competition among journalists, writers, coders, and e-commerce firms. (…)
Algorithms are increasingly important because businesses rarely thought of as high tech (…) are collecting data from both workers and customers, using algorithmic tools to make decisions, to sort the desirable from the disposable. (…)
For wines or films, the stakes are not terribly high. But when algorithms start affecting critical opportunities for employment, career advancement, health, credit and education, they deserve more scrutiny. (…)”
In “What Searchable Speech Will Do To You”, published in Nautilus, James Somers discusses some interesting aspects of the coming possibility of having everything we say recorded – and then labeled, tagged, searched…
“We are going to start recording and automatically transcribing most of what we say. (…) It will happen by our standard combination of willing and allowing. It will happen because it can. It will happen sooner than we think.
(…) But would all of this help or hurt us? (…) The more we come to rely on a tool, the less we rely on our own brains.
(…) By offloading more of memory’s demands onto the Record (…) it might not be that we’re making space for other, more important thinking. We might just be depriving our brains of useful material. (…)
The worry, then, is twofold: If you stopped working out the part of your brain that recalls speech (…) your mind would become a less interesting place.”
Posted at MonkeyLearn
“Machine Learning is a subfield within Artificial Intelligence that builds algorithms that allow computers to learn to perform tasks from data instead of being explicitly programmed.
(…) some of the most common categories of practical Machine Learning applications:
Image Processing (…): image tagging (…), optical character recognition (…), self-driving cars (…)
Text Analysis (…): spam filtering (…), sentiment analysis (…), information extraction (…)
Data Mining (…): anomaly detection (…), grouping (…), predictions (…)
Video Games & Robotics (…)”
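The defining idea quoted above – computers learning tasks from data instead of being explicitly programmed – can be made concrete with a minimal sketch. This is my own illustration, not from the MonkeyLearn post: a one-nearest-neighbor classifier on made-up toy data, in plain Python. The mapping from inputs to labels is never written by hand; it is induced from labeled examples.

```python
def sq_distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train, point):
    """Label a new point with the label of its nearest training example."""
    features, label = min(train, key=lambda ex: sq_distance(ex[0], point))
    return label

# Hypothetical labeled data: (feature vector, label) pairs.
train = [((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"),
         ((8.0, 9.0), "ham"), ((9.0, 8.5), "ham")]

print(predict(train, (1.1, 0.9)))  # -> spam
print(predict(train, (8.5, 9.2)))  # -> ham
```

Swap in different training pairs and the same twenty lines "learn" a different task – which is the whole point of the definition.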
When self-driving cars hit the streets, among other features they are expected to take pedestrian safety seriously. This hope is illustrated by a couple of anecdotal stories showing a Google self-driving car’s reactions to a lady chasing ducks in her wheelchair, or to a cyclist doing a track stand.
But what next? Once self-driving is an established and approved technology, people will probably return to the trend of putting the blame on pedestrians. Ravi Mangla’s “The secret history of jaywalking: The disturbing reason it was outlawed — and why we should lift the ban” shows it has happened in the past. The adoption of self-driving cars may be the window of opportunity to set people free to walk again.
A similar claim comes from “The End of Walking” by Antonia Malchik. When it comes to getting around, the sitting apes have the high ground over the standing ones. It should also be noted that such anti-pedestrian behaviour is growing among cyclists as well – even if cyclists are in general much more civilized than car drivers. But so were drivers in the automobile’s early days…
Just when it would seem enough to think of computing capacity in terms of FLOPS, supercomputer development makes the point that a better measure is TEPS. TEPS stands for traversed edges per second, which is something like FLOPS weighted by communication cost.
Anyway, the fact is that AI Impacts has produced estimates of our brain’s performance in TEPS. The next thing, of course, was the ubiquitous cost estimate: it would seem we will be able to hire this computational power in the next decade for about $100/hour, but for the time being the cost is estimated at around $4,700–$170,000/hour. So go to your boss and tell him he’s renting your brain at a bargain.
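To make the unit concrete, here is a minimal sketch of how TEPS is measured in practice – count the edges a breadth-first search scans and divide by elapsed time. This is my own toy illustration in plain Python; real TEPS benchmarks such as Graph500 run a tuned BFS over graphs with billions of edges, and the graph size and seed below are arbitrary.

```python
import random
import time
from collections import deque

def random_graph(n, m, seed=0):
    """Build an undirected adjacency list: n nodes, m random edges."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for _ in range(m):
        u, v = rng.randrange(n), rng.randrange(n)
        adj[u].append(v)
        adj[v].append(u)
    return adj

def bfs_teps(adj, source=0):
    """BFS from source; return (edges scanned, seconds, edges/second)."""
    seen = [False] * len(adj)
    seen[source] = True
    queue = deque([source])
    edges = 0
    start = time.perf_counter()
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            edges += 1          # every scanned edge counts as "traversed"
            if not seen[v]:
                seen[v] = True
                queue.append(v)
    elapsed = time.perf_counter() - start
    return edges, elapsed, edges / elapsed

adj = random_graph(10_000, 50_000)
edges, secs, teps = bfs_teps(adj)
print(f"{edges} edges in {secs:.4f}s -> {teps:,.0f} TEPS")
```

The point of the metric shows up immediately: the loop spends its time chasing pointers through memory, not doing arithmetic, which is why TEPS tracks communication cost where FLOPS does not.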
If you do so, your odds are better if you skip the info below and keep it simple. New studies show that our brains do weigh cognitive effort when making choices. This ‘TL;DR’ feature of our brain wiring may be the culprit preventing you from going through the paper that says so, “Separate and overlapping brain areas encode subjective value during delay and effort discounting”.
Having similar applications, users, and background, from a distance Machine Learning may sometimes be confused with an application of Statistics.
A closer look reveals fundamental differences, as in “Why a Mathematician, Statistician, & Machine Learner Solve the Same Problem Differently” by Nir Kaldero.
One scientific field in which this difference surfaces in a distinctive manner is economics; Noah Smith’s “Economics Has a Math Problem” sensibly puts the emphasis on the way economics uses math.
Pushing science into new territory, scientists can now employ much more data and computational power than were available when a significant part of mainstream economics was developed. If econometric tools set the tone for neoclassical economics papers in the final decades of the last century, could machine learning, Bayesian inference, and neural networks open new possibilities for economic theory?
One arguable example is “Mechanisms for Multi-unit Combinatorial Auctions with a Few Distinct Goods” by Piotr Krysta, Orestis Telelis, and Carmine Ventre. It is no coincidence that the researchers are not from economics departments. Even if economists are stubborn enough to dismiss game theory as a non-fundamental field, the message is clear: if economists don’t embrace the new math, other scientists (human or not) could engulf economics less ceremoniously.
If this happens, will we find that Keynesian uncertainty and the weight of arguments fit big data better than the deterministic parameters of the neoclassical mainstream?