Neural Network Analysis

By Andrew Wolfe

Describe your overall approach to implementing the algorithm in code. How are your classes/data structures organized? How do you keep track of the necessary pieces for back-propagation?



I created a class for both the layers and the nodes. Each layer has a weight initializer based on the first data row, as well as a function that returns its activations as the inputs for the next layer. Each node has a weight, an activation, an h value, and a clean function.

For back-propagation, I kept arrays of nodes and errors that I could index into, and I visited each node with backwards for-loops, as sketched below.
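
To make that structure concrete, here is a minimal sketch of how such Layer/Node classes and the backwards loops might fit together. The names, the sigmoid activation, and the squared-error deltas are my assumptions rather than the assignment's exact code.

    import math
    import random

    def sigmoid(h):
        return 1.0 / (1.0 + math.exp(-h))

    class Node:
        """One unit: a weight per input, the weighted sum h, its activation,
        the back-propagated error, and a clean() that resets per-row state."""
        def __init__(self, n_inputs):
            self.weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
            self.h = 0.0
            self.activation = 0.0
            self.error = 0.0

        def clean(self):
            self.h = self.activation = self.error = 0.0

    class Layer:
        """Sizes its nodes' weights from the length of the input row and
        exposes forward(), whose output becomes the next layer's input."""
        def __init__(self, n_nodes, n_inputs):
            self.nodes = [Node(n_inputs) for _ in range(n_nodes)]

        def forward(self, inputs):
            for node in self.nodes:
                node.h = sum(w * x for w, x in zip(node.weights, inputs))
                node.activation = sigmoid(node.h)
            return [node.activation for node in self.nodes]

    def train_row(layers, row, target, lr=0.1):
        # Forward pass: each layer's activations feed the next layer.
        activations = [row]
        for layer in layers:
            activations.append(layer.forward(activations[-1]))
        # Output-layer deltas (squared-error loss with sigmoid units).
        for j, node in enumerate(layers[-1].nodes):
            a = node.activation
            node.error = a * (1.0 - a) * (a - target[j])
        # Hidden layers, visited with a backwards for-loop.
        for i in range(len(layers) - 2, -1, -1):
            for j, node in enumerate(layers[i].nodes):
                a = node.activation
                downstream = sum(nxt.weights[j] * nxt.error
                                 for nxt in layers[i + 1].nodes)
                node.error = a * (1.0 - a) * downstream
        # Weight update: step each weight against its gradient.
        for i, layer in enumerate(layers):
            for node in layer.nodes:
                for k in range(len(node.weights)):
                    node.weights[k] -= lr * node.error * activations[i][k]

A four-input, three-class Iris network would then be layers = [Layer(4, 4), Layer(3, 4)], trained one row at a time with train_row.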

Describe the part of the assignment that gave you the most trouble, and how you overcame it.



The most trouble came from troubleshooting the weights of the last node layer and updating them. I don't remember exactly what the problem was, but I had to debug with print statements just about everywhere.

It has also been troublesome to find a combination of learning rate and epoch count that works well.
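
One way to narrow that search is a simple grid sweep. The sketch below uses scikit-learn's MLPClassifier as a stand-in for my network, and the learning-rate and epoch grids are arbitrary guesses.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)

    # Cross-validate every learning-rate/epoch-count combination.
    search = GridSearchCV(
        MLPClassifier(hidden_layer_sizes=(4,), solver="sgd"),
        param_grid={"learning_rate_init": [0.5, 0.1, 0.05, 0.01],
                    "max_iter": [100, 200, 500]},
        cv=3,
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)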

Produce at least one graph to show the training progress for the Iris dataset.


[Graph: training progress (error rate per epoch) on the Iris dataset]

Compare your results on the Iris dataset to those of an existing implementation.



Sometimes the existing implementation works with the parameters I give it, but it seldom works at the same time as my implementation. In the run shown above it may have worked, but the error rate staying at 100% for so long is a little puzzling: even random guessing among the three Iris classes should give an error rate of only about 67%. On that run, the existing implementation was 33% accurate after 200 epochs, though with the right parameters it will sometimes reach around 80-90%.
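
The comparison above does not name the existing implementation. Assuming something like scikit-learn's MLPClassifier as the reference, its epoch-by-epoch error curve could be traced along these lines.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(4,), solver="sgd",
                        learning_rate_init=0.1)
    errors = []
    for epoch in range(200):
        # partial_fit runs a single pass over the data, so the test-set
        # error rate can be recorded after every epoch.
        clf.partial_fit(X_tr, y_tr, classes=np.unique(y))
        errors.append(1.0 - clf.score(X_te, y_te))
    print("error rate after 200 epochs: %.0f%%" % (100 * errors[-1]))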



Produce at least one graph to show the training progress for the Diabetes dataset.


[Graph: training progress (error rate per epoch) on the Diabetes dataset]

Compare your results on the Diabetes dataset to those of an existing implementation.



At best, my implementation reached about 69% accuracy at epoch 210, whereas the existing implementation had an accuracy of 71%, so they were very close! My implementation seemed to work better with the Pima dataset than with Iris.
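
For reference, a comparable scikit-learn run on the Pima data might look like the following. The "pima.csv" path and its column layout are assumptions, and the hyperparameters are guesses, not the ones used above.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # "pima.csv" is a placeholder path; the file is assumed to hold the
    # eight Pima feature columns followed by a 0/1 outcome column.
    data = pd.read_csv("pima.csv", header=None)
    X, y = data.iloc[:, :8].values, data.iloc[:, 8].values
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(8,), solver="sgd",
                        learning_rate_init=0.1, max_iter=250)
    clf.fit(X_tr, y_tr)
    print("reference accuracy: %.1f%%" % (100 * clf.score(X_te, y_te)))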