Blondie24

David B. Fogel


"Blondie24 tells the story of a computer that taught itself to play checkers far better than its creators ever could by emulating the principles of Darwinian evolution and discovering innovative ways to approach the game. In this year of 2001, as we remember Arthur C. Clarke's predictions, David Fogel dramatically demonstrates how evolutionary computation may enable humans to create a thinking machine far more readily than the techniques traditionally used in the study of artificial intelligence."--BOOK JACKET.


Mentioned in questions and answers.

I was recently in a discussion with a non-coder about the possibilities of chess computers. I'm not well versed in theory, but I think I know enough.

I argued that there could not exist a deterministic Turing machine that always won or at least drew at chess. I think that, even if you search the entire space of all combinations of player 1 and player 2 moves, the single move the computer decides upon at each step is based on a heuristic. Being based on a heuristic, it does not necessarily beat ALL of the moves the opponent could make.

My friend thought, to the contrary, that a computer would always win or tie if it never made a "mistake" move (however you define that). However, being a programmer who has taken CS, I know that even your good choices, against a wise opponent, can force you into "mistake" moves in the end. Even if you know everything, your next move is a greedy match against a heuristic.

Most chess computers try to match a possible endgame to the game in progress, which is essentially a dynamic-programming traceback. Then again, the endgame in question may still be avoidable.

Edit: Hmm... looks like I ruffled some feathers here. That's good.

Thinking about it again, it seems like there is no theoretical problem with solving a finite game like chess. I would argue that chess is a bit more complicated than checkers in that a win is not necessarily by numerical exhaustion of pieces, but by a mate. My original assertion is probably wrong, but then again I think I've pointed out something that is not yet satisfactorily proven (formally).

I guess my thought experiment was that whenever a branch in the tree is taken, the algorithm (or memorized paths) must find a path to a mate (without getting mated) for any possible branch of opponent moves. After the discussion, I will buy that, given more memory than we can possibly dream of, all of these paths could be found.
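
In other words, what I have in mind is plain minimax over the entire game tree. A rough sketch of that idea (Python, treating chess as an abstract game with a hypothetical interface, and ignoring that the real tree is astronomically large and that repetition/draw rules would be needed for termination):

    # Conceptual sketch only: exhaustive minimax "solve" of a finite two-player game.
    # game.result(state, to_move) is assumed to return +1 (side to move can win),
    # 0 (draw) or -1 (loss) for finished games, and None otherwise.

    def solve(game, state, to_move):
        """Return the game-theoretic value for the side to move: +1, 0 or -1."""
        outcome = game.result(state, to_move)
        if outcome is not None:
            return outcome
        best = -1
        for move in game.legal_moves(state, to_move):
            child = game.apply(state, move, to_move)
            # Our value for this move is the negation of the opponent's value.
            best = max(best, -solve(game, child, -to_move))
            if best == 1:            # a forced win; no need to look further
                break
        return best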

I think you are dead on. Machines like Deep Blue and Deep Thought are programmed with libraries of known openings and endgames, plus clever search algorithms that steer the game tree toward those known positions. This is, of course, a dramatic oversimplification. There is always a chance to "beat" the computer over the course of a game. By this I mean making a move that forces the computer to make a move that is less than optimal (whatever that is). If the computer cannot find the best path before the time limit for the move expires, it might very well make a mistake by choosing one of the less desirable paths.
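
To make the time-limit point concrete: real engines run a depth-limited search (alpha-beta pruning over the game tree), and when the depth budget (a stand-in for the clock) runs out, they have to trust a heuristic evaluation of the position. That heuristic guess is exactly where the less-than-optimal choices come from. A rough sketch, not how Deep Blue actually did it, with a hypothetical game/evaluate interface:

    # Depth-limited negamax search with alpha-beta pruning (sketch only).
    # game.result(state, to_move) returns +1/0/-1 for finished games, else None.
    # evaluate(state, to_move) is a heuristic score from the mover's point of view.

    def alphabeta(game, state, depth, alpha, beta, to_move, evaluate):
        outcome = game.result(state, to_move)
        if outcome is not None:
            return outcome * 1000            # a real win/draw/loss outranks any heuristic
        if depth == 0:
            return evaluate(state, to_move)  # out of time: heuristic guess, possible "mistake"
        best = float('-inf')
        for move in game.legal_moves(state, to_move):
            child = game.apply(state, move, to_move)
            best = max(best, -alphabeta(game, child, depth - 1,
                                        -beta, -alpha, -to_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:                # the opponent would avoid this line anyway
                break
        return best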

There is another class of chess programs that uses real machine learning: genetic programming and evolutionary algorithms. Some programs have been evolved and use neural networks, among other techniques, to make decisions. In that case, I would imagine that the computer might make "mistakes" along the way but still end up with a victory.

There is a fascinating book on this type of GP called Blondie24 that you might read. It is about checkers, but it could apply to chess.

I want to program a chess engine which learns to make good moves and win against other players. I've already coded a representation of the chess board and a function which outputs all possible moves. So I only need an evaluation function which says how good a given board position is. Therefore, I would like to use an artificial neural network which should then evaluate a given position. The output should be a numerical value. The higher the value is, the better the position is for the white player.

My approach is to build a network of 385 neurons: there are six unique chess pieces and 64 squares on the board. So for every square we take 6 neurons (1 for every piece type). If there is a white piece, the input value is 1. If there is a black piece, the value is -1. And if there is no piece of that sort on that square, the value is 0. In addition to that, there should be 1 neuron for the player to move. If it is White's turn, the input value is 1, and if it's Black's turn, the value is -1.
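
In code, the encoding I have in mind would look roughly like this (Python as pseudo-code; my real engine is in Delphi, and the board representation below is just for illustration):

    # Encode a position into the 385 inputs described above:
    # 6 piece types x 64 squares, plus 1 input for the side to move.

    PIECE_TYPES = ['P', 'N', 'B', 'R', 'Q', 'K']     # pawn .. king

    def encode(board, white_to_move):
        """board: dict mapping square index 0..63 to 'P' (white pawn), 'p' (black pawn), etc."""
        inputs = [0.0] * (6 * 64 + 1)
        for square, piece in board.items():
            kind = PIECE_TYPES.index(piece.upper())
            inputs[square * 6 + kind] = 1.0 if piece.isupper() else -1.0
        inputs[-1] = 1.0 if white_to_move else -1.0  # White to move = 1, Black = -1
        return inputs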

I think that configuration of the neural network is quite good. But the main part is missing: how can I implement this neural network in a programming language (e.g. Delphi)? I think the weights for each neuron should be the same in the beginning. Depending on the result of a match, the weights should then be adjusted. But how? I think I should let 2 computer players (both using my engine) play against each other. If White wins, Black gets the feedback that its weights aren't good.
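
For the evaluation itself, I picture something like the following rough sketch (again Python as pseudo-code; the hidden-layer size and the tanh activation are just placeholders):

    import math

    # One hidden layer over the 385 inputs; the single output is the score
    # of the position for White (higher = better for White).

    def evaluate(inputs, w_hidden, w_out):
        """inputs: 385 values; w_hidden: one 385-long weight row per hidden unit;
        w_out: one weight per hidden unit."""
        hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)))
                  for row in w_hidden]
        return sum(w * h for w, h in zip(w_out, hidden))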

So it would be great if you could help me implement the neural network in a programming language (Delphi would be best, otherwise pseudo-code). Thanks in advance!

Read Blondie24: http://www.amazon.co.uk/Blondie24-Playing-Kaufmann-Artificial-Intelligence/dp/1558607838.

It deals with checkers instead of chess but the principles are the same.
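
The rough idea described in the book, stripped to a sketch: keep a population of evaluation networks (here just flat lists of weights), let them play games against each other, and copy-and-mutate the ones that score the most points. No back-propagation is involved; the game results are the only feedback. The helpers and parameters below are placeholders, and the real tournament setup in the book differs in the details:

    import random

    # Evolutionary weight adjustment (sketch). Each individual is a flat list of
    # network weights. play_game(w_a, w_b) is an assumed helper that plays one
    # game and returns +1, 0 or -1 from w_a's point of view.

    def mutate(weights, sigma=0.05):
        return [w + random.gauss(0.0, sigma) for w in weights]

    def evolve(population, play_game, generations=100, games_per_net=5):
        for _ in range(generations):
            scores = [0] * len(population)
            for i, w in enumerate(population):
                for _ in range(games_per_net):
                    j = random.randrange(len(population))
                    scores[i] += play_game(w, population[j])
            ranked = [w for _, w in sorted(zip(scores, population),
                                           key=lambda pair: pair[0], reverse=True)]
            parents = ranked[:len(ranked) // 2]            # keep the better half
            children = [mutate(random.choice(parents))     # refill with mutated copies
                        for _ in ranked[len(ranked) // 2:]]
            population = parents + children
        return population

One note on your plan: starting all weights identical tends to make the hidden units redundant, so small random initial weights are the usual starting point; the mutation step above then does the adjusting, driven purely by which networks win their games.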

I am not a mathematician. I enjoy a good math puzzle, but I admit my weaknesses wholeheartedly. That said, I've always had an interest in neural networks, and while I understand them enough to implement them from scratch, I hit a wall when I need to understand any concept that I can only find mathematical proofs for. Where is the programmer's guide to neural networks, one that uses code instead of formulas to explain the practical reasoning?

Unfortunately, I don't know of a single good "programmer's source" that will give you all of the concepts. I liked Neural and Adaptive Systems: Fundamentals through Simulations.

The best way to get a "programmer's understanding" of neural networks is not so much by examining the code as by examining the problem and the correct results. So, if you don't want to look at the math, I recommend you look at a given problem. For example, consider the XOR problem as an example of why you need non-linear activation functions; look at the number of variables and their possible values to understand why a neural network needs to be of a certain size and topology to be effective; and split your data into train/test sets and run studies to see why overfitting is dangerous. Examine the code together with the data.
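
The XOR case, for instance, fits in a dozen lines. No single linear threshold on the two inputs reproduces XOR, but one hidden layer with a non-linear activation does; the weights below are hand-picked rather than learned, just to make the point:

    # XOR with one hidden layer and a hard-threshold activation.
    # No single linear threshold on (x1, x2) can produce XOR, but two
    # hidden units plus a non-linearity can.

    def step(x):
        return 1 if x > 0 else 0

    def xor_net(x1, x2):
        h1 = step(x1 + x2 - 0.5)    # fires if at least one input is 1
        h2 = step(x1 + x2 - 1.5)    # fires only if both inputs are 1
        return step(h1 - h2 - 0.5)  # "at least one, but not both"

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor_net(a, b))   # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0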

I also recommend not getting too hung up on any one model, but reading further. Certain practices in feed-forward networks become clearer once you see their generalization in recurrent and constructive neural networks. I also recommend going wider: Bayesian networks, fuzzy cognitive maps, SOMs, Boltzmann machines, simulated annealing, and reinforcement learning all offer useful intuitions.

Does this go towards answering your question?

Another alternative is a non-math, non-programming explanation. The book Blondie24: Playing at the Edge of AI contains a really great explanation of neural networks. It's about a checkers-playing AI developed by the author. It's not completely without programming references, but it does a great job of explaining how the algorithms work without getting into the code of the solution.

I have personally used:

Practical Neural Network Recipes in C++

http://www.amazon.com/Practical-Neural-Network-Recipes-C/dp/0124790402/ref=pd_bxgy_b_img_b/179-4083507-8029219

In my opinion, the author does not fully utilize the more powerful features of C++; in many places it reads more like traditional C with classes. The book is also a little dated by now.

HOWEVER - if you need explanations of the algorithms and techniques used in neural networks, explained in a way that an intelligent layperson could understand, so that you can go away and try these things for yourself, then I would certainly give this book a try. Not much navel-gazing goes on here, which is what I liked.

It takes you through all the main things needed to program a neural network: how to compare the actual output with the desired output to obtain an error signal, and then how to use that error signal with the back-propagation algorithm to modify the network's connection weights, iterating so that the network gradually 'learns' the task.
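
In rough pseudo-code (Python here rather than the book's C++, and with biases and batching left out to keep it short), one iteration of that loop for a single hidden layer of sigmoid units looks something like this:

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def train_step(inputs, target, w_hidden, w_out, lr=0.1):
        """One forward pass plus one back-propagation update.
        w_hidden[j][i]: weight from input i to hidden unit j; w_out[j]: hidden j to output."""
        # forward pass
        hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
        output = sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

        # error signal: (desired - actual), scaled by the sigmoid's derivative,
        # then propagated back through the output weights to the hidden layer
        delta_out = (target - output) * output * (1 - output)
        delta_hidden = [delta_out * w_out[j] * hidden[j] * (1 - hidden[j])
                        for j in range(len(hidden))]

        # adjust connection strengths in proportion to their share of the error
        for j in range(len(w_out)):
            w_out[j] += lr * delta_out * hidden[j]
        for j, row in enumerate(w_hidden):
            for i in range(len(row)):
                row[i] += lr * delta_hidden[j] * inputs[i]
        return output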

I am looking for good resources for AI programming (any language), both books and online stuff. I am particularly interested in neural networks implementations.

Programming Collective Intelligence by Toby Segaran is an excellent book. It covers lots of different AI/data-mining techniques (including neural networks), with interesting example applications for each. All the code is in Python, but it's easy to follow even if you don't know Python.

I highly recommend the excellent book Blondie24, about the most advanced checkers-playing AI (at the time the book was written), and Introduction to Neural Networks for Java, which talks a lot about the open-source JOONE neural engine.