“Rethinking the Manufacturing Robot” By Mike Orcutt

“Baxter, introduced two years ago, was designed to be far simpler, safer, and more intuitive to operate than a conventional industrial robot (see “This Robot Could Transform Manufacturing”). Traditional industrial robots are expensive to install and use, and must be separated from human workers for safety. To function properly they typically require that their environments be highly structured and unchanging.

Rethink says sales of Baxter, which costs $25,000 and is only available to manufacturers in the U.S., have only reached “several hundred.” This limited success, and the effort to develop Sawyer, suggest that Rethink may have misjudged the opportunity that comes with balancing simplicity and safety with accuracy and speed…” full story in MIT Technology Review

“How Artificial Intelligence Is Primed to Beat You at Where’s Waldo” BY JASON DORRIER

Microsoft revealed its image recognition software was wrong just 4.94% of the time

A month later, Google reported it had achieved a rate of 4.8%.

Now Chinese search-engine giant Baidu says its specialized supercomputer, Minwa, has bested Google with an error rate of 4.58%.

An expert human? 5.1%.

In this article, Jason Dorrier tells us how deep-learning AI software is applying big data to push those error rates even lower.

“Putting the Data Science into Journalism” By Keith Kirkpatrick

“The key attributes journalists must have—the ability to separate fact from opinion, to find and develop sources, and the curiosity to ask probing, intelligent questions—are still relevant in today’s 140-character-or-less, ADHD-esque society. Yet increasingly, journalists dealing with technical topics often found in science or technology are turning to tools that were once solely the province of data analysts and computer scientists.

Data mining, Web scraping, classifying unstructured data types, and creating complex data visualizations uncover data that would be impossible to compile manually.

“It is about giving the audience information that is unique, in-depth, that allows them to explore the data, and also engage with the audience,” says David Herzog, a professor at the University of Missouri…” read story
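For readers curious what such tooling looks like in practice, here is a minimal, hypothetical scraping sketch (not from the article): it pulls a table from a web page so the figures can be analyzed or visualized instead of copied by hand. The URL and table layout are invented for illustration.

```python
# A minimal illustration of the kind of tool the article means (not from
# the article itself): scrape a table of figures from a hypothetical page.

import requests
from bs4 import BeautifulSoup

url = "https://example.org/earthquake-relief-funding"   # hypothetical page
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

rows = []
for tr in soup.select("table tr")[1:]:          # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if len(cells) >= 2:
        rows.append({"donor": cells[0], "amount": cells[1]})

print(f"collected {len(rows)} rows")            # ready for analysis or visualization
```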

 

I knew you were wondering when this would come up…

In “Why Brain-to-Brain Communication Is No Longer Unthinkable”, published at Smithsonian.com, Jerry Adler guides you through the amazing ways science is hurtling toward new ways for brains to communicate with each other.
That is, on top of the already ingenious solutions humans have developed over time, such as body language and oral and then written language, to mention the most popular.
As in other developments, we rely greatly on our brains to find out how. Only it seems we may be skipping the intensive use of the senses in the near future.

Daily Activity as Password

In this recent paper, researchers explore the possibility of creating security checks directly from user activity. Verification based on the activity logs of apps on a mobile device would amount to an automated key generator.

Despite some limitations, such as applications that require specific cryptography, and legal restrictions on what can be used for verification, the system could bring benefits: no need for prior password creation, confirmation, and registration, and a built-in limit on password sharing.

Users would also free up part of the valuable mental RAM spent remembering the multitude of passwords we deal with daily.
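As a rough sketch of how such a system might work (the details below are assumptions, not the paper's actual scheme), a device could turn its own recent activity log into a one-off verification challenge instead of relying on a stored password:

```python
# A minimal sketch under made-up assumptions: a phone's recent app-activity
# log becomes a multiple-choice challenge only the owner should answer easily.

import random
from datetime import datetime

# Hypothetical activity log the device would collect automatically.
activity_log = [
    {"app": "Maps",    "event": "navigation ended", "time": datetime(2015, 5, 20, 18, 42)},
    {"app": "Camera",  "event": "photo taken",      "time": datetime(2015, 5, 21, 9, 15)},
    {"app": "Spotify", "event": "playlist started", "time": datetime(2015, 5, 21, 13, 2)},
]

def make_challenge(log, decoy_apps=("Mail", "Twitter", "Calendar")):
    """Build a one-off question from a random log entry plus two decoys."""
    entry = random.choice(log)
    options = [entry["app"]] + random.sample(decoy_apps, 2)
    random.shuffle(options)
    question = (f"Around {entry['time']:%a %H:%M}, which app did you use "
                f"({entry['event']})?")
    return question, options, entry["app"]

question, options, answer = make_challenge(activity_log)
print(question, options)      # the 'secret' expires as the log rolls over
```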

“Digital tattoo lets you control devices with mind power alone” by Hal Hodson

We have known for a while that electroencephalograms give information on what is happening inside our brains. But it’s not something you’d want to be walking around with on your head – or is it?

John Rogers at the University of Illinois at Urbana-Champaign led the team that built a flexible electronic skin that conforms to the body, which is so light that it sticks to the skin through van der Waals force – the same mechanism that lets geckos’ feet stick to surfaces. It only falls off when the build-up of dead skin beneath it makes it lose its grip.

read full story

“The Coming Problem of Our iPhones Being More Intelligent Than Us” by VIVEK WADHWA

“Ray Kurzweil made a startling prediction in 1999 that appears to be coming true: that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain.  He also predicted that Moore’s Law, which postulates that the processing capability of a computer doubles every 18 months, would apply for 60 years — until 2025 — giving way then to new paradigms of technological change.

Kurzweil, a renowned futurist and the director of engineering at Google, now says that the hardware needed to emulate the human brain may be ready even sooner than he predicted — in around 2020 — using technologies such as graphics processing units (GPUs), which are ideal for brain-software algorithms. He predicts that the complete brain software will take a little longer: until about 2029.

The implications of all this are mind-boggling…”  read full story
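For a sense of scale, here is the back-of-the-envelope arithmetic implied by the quoted doubling rate (my calculation, not the article's):

```python
# Doubling every 18 months over the quoted 60-year run of Moore's Law.
years = 60
doublings = years / 1.5
print(f"{doublings:.0f} doublings, a growth factor of roughly {2 ** doublings:.1e}")
```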

“Online Fact-Checking Tool Gets a Big Test with Nepal Earthquake” – By Mike Orcutt

An organization crowdsources the verification of rumors on social media in the Nepal disaster zone.

Shortly after a 7.8 magnitude earthquake hit Nepal on Saturday, social media services lit up with unverified reports of people trapped and buildings damaged. But how could humanitarian organizations know where to respond first? How could they know which accounts were actually true? – Read story from MIT Technology Review…

“Determining Possible and Necessary Winners Given Partial Orders” by L. Xia and V. Conitzer

“Usually a voting rule requires agents to give their preferences as linear orders. However, in some cases it is impractical for an agent to give a linear order over all the alternatives. It has been suggested to let agents submit partial orders instead. Then, given a voting rule, a profile of partial orders, and an alternative (candidate) c, two important questions arise: first, is it still possible for c to win, and second, is c guaranteed to win? These are the possible winner and necessary winner problems, respectively….”
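To make the two questions concrete, here is a naive brute-force sketch (it is not the paper's algorithm; the paper analyzes the computational complexity of these problems for many voting rules). Plurality is used purely as an illustrative rule, and the tiny profile of partial orders is made up.

```python
# Brute-force possible/necessary winner check for a toy profile of partial orders.
from itertools import permutations, product

def extends(linear, partial):
    """Does a linear order (tuple, most preferred first) respect every
    pairwise preference (a preferred to b) of a partial order?"""
    pos = {cand: i for i, cand in enumerate(linear)}
    return all(pos[a] < pos[b] for a, b in partial)

def plurality_winners(profile):
    """Co-winners under plurality: candidates with the most first places."""
    scores = {}
    for order in profile:
        scores[order[0]] = scores.get(order[0], 0) + 1
    best = max(scores.values())
    return {c for c, s in scores.items() if s == best}

def possible_and_necessary(partial_profile, candidates, c):
    """Enumerate every completion of every partial vote (exponential!)."""
    completions = [
        [p for p in permutations(candidates) if extends(p, partial)]
        for partial in partial_profile
    ]
    possible, necessary = False, True
    for completed_profile in product(*completions):
        if c in plurality_winners(completed_profile):
            possible = True
        else:
            necessary = False
    return possible, necessary

# Two voters, candidates a, b, c; voter 1 only reports a > b,
# voter 2 only reports c > b.
profile = [[("a", "b")], [("c", "b")]]
print(possible_and_necessary(profile, ["a", "b", "c"], "a"))   # (True, False)
```

The enumeration over all completions is exponential, which is exactly why the paper's complexity results matter.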

“Scheduling Conservation Designs for Maximum Flexibility via Network Cascade Optimization” by Shan Xue, Alan Fern and Daniel Sheldon

“One approach to conserving endangered species is to purchase and protect a set of land parcels in a way that maximizes the expected future population spread. Unfortunately, an ideal set of parcels may have a cost that is beyond the immediate budget constraints and must thus be purchased incrementally. This raises the challenge of deciding how to schedule the parcel purchases in a way that maximizes the flexibility of budget usage while keeping population spread loss in control. In this paper, we introduce a formulation of this scheduling problem that does not rely on knowing the future budgets of an organization. In particular, we consider scheduling purchases in a way that achieves a population spread no less than desired but delays purchases as long as possible…”
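As a very rough illustration of the scheduling idea (not the paper's method, which optimizes an expected network-cascade model of population spread), the toy sketch below defers parcel purchases as long as a simplified additive "spread" score stays at or above a made-up yearly requirement; the parcels, costs, and spread values are invented.

```python
# Toy "buy as late as possible" schedule under an invented additive spread model.
parcels = {"A": {"cost": 4, "spread": 5},
           "B": {"cost": 2, "spread": 3},
           "C": {"cost": 6, "spread": 7}}

required_spread = [0, 3, 8, 15]   # minimum spread needed by end of each year

def schedule(parcels, required_spread):
    """Each year, buy only the cheapest parcels needed to meet that year's target."""
    owned, plan, spread = set(), [], 0
    for year, need in enumerate(required_spread):
        bought = []
        for name, p in sorted(parcels.items(), key=lambda kv: kv[1]["cost"]):
            if spread >= need:
                break
            if name not in owned:
                owned.add(name)
                spread += p["spread"]
                bought.append(name)
        if spread < need:
            raise ValueError(f"year {year}: requirement {need} unreachable")
        plan.append((year, bought, spread))
    return plan

for year, bought, spread in schedule(parcels, required_spread):
    print(f"year {year}: buy {bought or 'nothing'}, spread = {spread}")
```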

“Lazy Model Expansion: Interleaving Grounding with Search” by Broes De Cat, Marc Denecker, Maurice Bruynooghe and Peter Stuckey

Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach for bounded model generation for rich knowledge representation languages is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory.
An important bottleneck is the blow-up of the size of the theory caused by the grounding phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(.) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it…
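The following toy sketch illustrates the interleaving idea under heavy simplification (it is not the paper's FO(.) framework): a two-colouring rule over a 50-element domain is grounded lazily, materialising an instance only when the current candidate assignment violates it, with a trivial repair loop standing in for real search.

```python
# Lazy grounding on a toy problem: 2-colour a sparse, made-up graph.
n = 50                                                 # domain size
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (10, 11)]     # sparse edge relation

colour = [0] * n                                       # unconstrained initial guess
grounded = set()                                       # rule instances materialised so far

while True:
    violated = [(i, j) for (i, j) in edges if colour[i] == colour[j]]
    if not violated:
        break
    i, j = violated[0]                  # leftmost violated instance
    grounded.add((i, j))                # lazily ground just this instance
    colour[j] = 1 - colour[j]           # repair it and continue the "search"

print(f"grounded {len(grounded)} instances; naive grounding over the full "
      f"domain could create up to {n * (n - 1) // 2}")
```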

“Scaling up Heuristic Planning with Relational Decision Trees” by T. De la Rosa, S. Jimenez, R. Fuentetaja and D. Borrajo

“Current evaluation functions for heuristic planning are expensive to compute. In numerous planning problems these functions provide good guidance to the solution, so they are worth the expense. However, when evaluation functions are misguiding or when planning problems are large enough, lots of node evaluations must be computed, which severely limits the scalability of heuristic planners. In this paper, we present a novel solution for reducing node evaluations in heuristic planning based on machine learning…”
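As a minimal sketch of that idea (not the paper's system, which learns relational decision trees over planning states), the toy code below uses an ordinary scikit-learn decision tree as a cheap gate that decides which generated successors are worth an expensive heuristic call; the domain, features, and training data are all invented.

```python
# A learned gate in front of an expensive heuristic in greedy best-first search.
import heapq
from sklearn.tree import DecisionTreeClassifier

# Made-up toy domain: states are grid cells (x, y), the goal is (0, 0),
# and moves change one coordinate by +/- 1 (coordinates stay >= 0).
def successors(state):
    x, y = state
    cands = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(a, b) for a, b in cands if a >= 0 and b >= 0]

def expensive_h(state):                  # pretend this call is costly
    return state[0] + state[1]

# Made-up "training data" from previously solved problems: the single cheap
# feature is the change in x + y, and the label records whether evaluating
# such a successor paid off. The tree learns to prefer moves that shrink x + y.
X_train = [[-1], [1], [-1], [1], [-1], [1]]
y_train = [1, 0, 1, 0, 1, 0]
gate = DecisionTreeClassifier(max_depth=1).fit(X_train, y_train)

def greedy_search(start, goal=(0, 0)):
    evaluated = skipped = 0
    frontier = [(expensive_h(start), start)]
    seen = {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            return evaluated, skipped
        for child in successors(state):
            if child in seen:
                continue
            seen.add(child)
            delta = sum(child) - sum(state)            # cheap feature
            if gate.predict([[delta]])[0] == 1:
                evaluated += 1                         # pay for the heuristic
                heapq.heappush(frontier, (expensive_h(child), child))
            else:
                skipped += 1    # a real planner would keep these as a fallback
    return evaluated, skipped

e, s = greedy_search((3, 3))
print(f"expensive heuristic calls: {e}, successors skipped by the tree: {s}")
```

Even on this tiny grid the gate spares a share of the generated successors from evaluation; the paper's relational trees aim for the same effect on full planning problems.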