At least since Deep Blue beat Kasparov, chess masters have been used to the idea that computers can outrun human chess abilities. Those were the days when coders hand-crafted specialized evaluation functions and chess-specific algorithms. Some comfort came from the fact that even if computers used brute force to play and test millions of games in order to tune parameters, the basic features were still designed and fed in by humans.
Now the game is changing (again). After AI mastered Atari games on its own, new deep learning applications have made it possible for computers to learn how to play chess by themselves.
Matthew Lai's paper “Giraffe: Using Deep Reinforcement Learning to Play Chess” describes an engine that learns move selection in a way that is closer to how humans play than previous attempts. The leap forward is in reducing the number of options under evaluation, pruning the decision tree as early as possible.
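To make the idea of early pruning concrete, here is a minimal sketch (not Giraffe's actual search algorithm): instead of expanding every legal move, rank moves with an evaluation function and only search the most promising few at each node. The `evaluate`, `legal_moves`, and `apply_move` helpers are hypothetical placeholders; in Giraffe the evaluation comes from a trained neural network.

```python
def search(position, depth, evaluate, legal_moves, apply_move, branch_limit=3):
    """Negamax-style search that keeps only the top `branch_limit` moves per node."""
    if depth == 0:
        return evaluate(position)

    moves = legal_moves(position)
    if not moves:
        # No legal moves (mate or stalemate); a real engine would score this properly.
        return evaluate(position)

    # Rank successors by a shallow evaluation of the resulting position.
    # In negamax convention the child's score is from the opponent's point of
    # view, so lower is better for us; keep only the most promising moves.
    # Discarding the rest this early is where the tree reduction comes from.
    scored = sorted(moves, key=lambda m: evaluate(apply_move(position, m)))[:branch_limit]

    best = float("-inf")
    for move in scored:
        child = apply_move(position, move)
        # Negate the child's score when passing it back up to the parent.
        best = max(best, -search(child, depth - 1, evaluate,
                                 legal_moves, apply_move, branch_limit))
    return best
```

With a branch limit of 3 and depth 6, this sketch visits at most 3^6 positions instead of the tens of millions a full-width search would touch, which is the kind of reduction that makes a "human-like" selective search attractive.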
Its neural network seems to benefit a lot from playing against itself. That may be something humans will have a hard time beating.
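For a rough sense of how learning from self-play works, here is a sketch of a temporal-difference update loop. This is an assumption-laden simplification, not the paper's TD-Leaf(λ) implementation: `pick_move` and the `play` helper object are hypothetical stand-ins, and a simple linear model replaces the neural network.

```python
def self_play_update(weights, features, pick_move, play, alpha=0.01):
    """Play one game against itself and nudge the evaluator toward
    the values of the positions that actually followed."""
    position = play.initial_position()
    history = []
    while not play.is_over(position):
        move = pick_move(position, weights)
        position = play.apply(position, move)
        history.append(features(position))

    outcome = play.result(position)  # e.g. +1 win, 0 draw, -1 loss

    # Walk back through the game: each position's estimate is pulled toward
    # the estimate of the position that followed it (and the last one toward
    # the final outcome), so no human-labelled training data is needed.
    target = outcome
    for feats in reversed(history):
        prediction = sum(w * f for w, f in zip(weights, feats))
        error = target - prediction
        weights = [w + alpha * error * f for w, f in zip(weights, feats)]
        target = prediction
    return weights
```

Run over many thousands of self-play games, updates of this kind let the evaluator improve without ever being told by a human which features matter.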