I will provide full code samples, written in C#, which can be compiled and run with Visual C# 2008 Express Edition, available from Microsoft at http://www.microsoft.com/express/vcsharp/
The Perceptron is the granddaddy of all neural networks. It was invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt. The Perceptron is the simplest type of feedforward neural network and is known as a linear classifier, which means that the problems it can solve must be linearly separable. In the graphs shown below it is easy to visualise what this means in two dimensions.
This is linearly separable.
This is linearly non-separable.
The green line represents the separation between the two classes of data the network is attempting to classify. The boundary is a straight line because the Perceptron classifies a point by thresholding a weighted sum of its inputs, and the set of points where that sum equals the threshold forms a line in two dimensions. In three dimensions the boundary would be a plane, and in four dimensions or more a hyperplane. There are other types of network that can solve linearly non-separable problems, which I will cover in later posts.
To begin to solve this problem we need a network as represented below.
As you can see, each input node is connected directly to the output node. By adjusting the value of the connecting weights, the network is able to learn.
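To make the structure concrete, here is a minimal sketch of such a network in C#. The class and member names are my own illustration rather than code lifted from the downloadable sample: two input weights, a bias weight, a hard-limit activation, and the classic perceptron weight-update rule.

```csharp
// A minimal two-input perceptron: two weighted inputs, a bias weight,
// and a hard-limit (step) activation applied to the weighted sum.
public class Perceptron
{
    private double[] weights = { 0.0, 0.0 };
    private double bias = 0.0;
    private double learningRate = 0.1;

    // Classify a point: returns 1 or -1 depending on which side of the
    // decision boundary (weights[0]*x + weights[1]*y + bias = 0) it falls.
    public int Output(double x, double y)
    {
        double sum = weights[0] * x + weights[1] * y + bias;
        return sum >= 0.0 ? 1 : -1;
    }

    // The perceptron learning rule: nudge each weight in proportion
    // to the error and the corresponding input value.
    public void Train(double x, double y, int desired)
    {
        int error = desired - Output(x, y);
        weights[0] += learningRate * error * x;
        weights[1] += learningRate * error * y;
        bias += learningRate * error;
    }
}
```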
In the demonstration below, each of the points plotted in the first graph above will be used as training data. The training data will be repeatedly applied to the input nodes of the network and the weights adjusted until the network performs as desired.
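A training loop along these lines would sit inside the application's Main method. The trainingData array here is a stand-in for the real point list, and the loop simply keeps cycling through the points until a full pass produces no errors, which is guaranteed to happen only because the data is linearly separable:

```csharp
// Hypothetical training set: each row is { x, y, desiredOutput },
// where desiredOutput is 1 for one class (blue) and -1 for the other (red).
double[][] trainingData =
{
    new[] { 0.2, 0.8,  1.0 },
    new[] { 0.7, 0.3, -1.0 },
    // ... the rest of the plotted points ...
};

var perceptron = new Perceptron();
bool converged = false;

// Repeatedly apply the training data until an entire
// pass produces no classification errors.
while (!converged)
{
    converged = true;
    foreach (var point in trainingData)
    {
        int desired = (int)point[2];
        if (perceptron.Output(point[0], point[1]) != desired)
        {
            perceptron.Train(point[0], point[1], desired);
            converged = false;
        }
    }
}
```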
Once the network has been trained, the application will step through the range of input data to prove that the network has learnt correctly and is able to generalise well. The following code shows how this is done.
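As the full listing lives in the sample project, what follows is a sketch of that loop; the unit-square range and 0.1 step size are assumptions on my part:

```csharp
// Sweep the input space on a grid and print the classification of each
// point, so the learned decision boundary can be inspected.
// (Assumes a console application with "using System;" at the top.)
for (double x = 0.0; x <= 1.0; x += 0.1)
{
    for (double y = 0.0; y <= 1.0; y += 0.1)
    {
        string label = perceptron.Output(x, y) == 1 ? "blue" : "red";
        Console.WriteLine("({0:F1}, {1:F1}) -> {2}", x, y, label);
    }
}
```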
The output from the network after training demonstrates that generalisation has been achieved: each pair of x and y coordinates, when presented to the network, returns the expected classification, i.e. blue or red.
Whilst this may seem like a trivial problem to solve in two dimensions, the Perceptron can perform very powerful analysis in higher dimensions. The same general principles shown here still apply.
In my next post, I will start to write about the multi-layer perceptron, which can be used to solve linearly non-separable problems, like the one presented in the second graph above.
I welcome feedback on this post, and would be interested in receiving suggestions for future posts.