Class NeuralNetworkGenerator (AML part 7)

In this part I will look at how the class creates and trains a neural network with any number of layers. At this stage I am not adding a bias neuron. I will show the difference between the source code and the current result. To begin, let's look at the source code; everything here is simple.

There are three functions here: __init__, train and query. The most difficult one is train, so I will start with it.
I will start from the fact that all the information about the neural network is contained in an array. The array has a length and values, and each value has an index. For example, the array [784, 200, 10] has three indexes: [0, 1, 2]. If values are added to the array, its length changes, so everything has to be derived from the length of the array. If the length of the array is three, then the number of weight matrices is one less, that is, two. As a result, the __init__ function must produce two arrays of weight coefficients. The number of weight matrices can be anything, so they are stored in the self.weights array. Look at the example below to see how it looks when the variables self.wih and self.who are replaced by indexes of the self.weights array.
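Here is a minimal sketch of how such an __init__ could look. The class name comes from the title of this part; the attribute names learning_rate and activation_function, and the plain sigmoid, are assumptions of mine rather than the exact repository code.

```python
import numpy as np

class NeuralNetworkGenerator:
    def __init__(self, layer_sizes, learning_rate):
        # layer_sizes is the array described above, e.g. [784, 200, 10]
        self.layer_sizes = layer_sizes
        self.learning_rate = learning_rate          # assumed attribute name

        # one weight matrix per pair of adjacent layers: len(layer_sizes) - 1 of them,
        # replacing the fixed self.wih and self.who of the two-layer source code
        self.weights = [
            np.random.normal(0.0, pow(layer_sizes[i], -0.5),
                             (layer_sizes[i + 1], layer_sizes[i]))
            for i in range(len(layer_sizes) - 1)
        ]

        # sigmoid activation, as in the two-layer example (an assumption here)
        self.activation_function = lambda x: 1.0 / (1.0 + np.exp(-x))
```

For [784, 200, 10] this produces self.weights[0] with shape (200, 784) and self.weights[1] with shape (10, 200).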

The __init__ function also contains the learning rate and the activation function; these are used later on. So now I have an initialisation function for the neural network: it takes an array of parameters, generates the weight coefficients and writes them into the self.weights array. Good!

Before training a new neural network, you need to think about the target data. For this, the source code uses the variables inputs and targets. To prepare for back-propagating the error, the first layer's weight matrix is multiplied by the transposed input matrix, and the next operation multiplies the second layer's weight matrix by the previous result. The example below shows how it looks when the variables self.wih and self.who are replaced by indexes of the self.weights array.
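A sketch of that two-layer forward pass inside train with the substitution applied (the variable names hidden_inputs, hidden_outputs, final_inputs and final_outputs come from the source code; the rest is my reconstruction):

```python
import numpy as np

# an intermediate, two-layer version of train, before generalising to any number
# of layers; meant to be a method of the class sketched above
def train(self, inputs_list, targets_list):
    # convert the input and target lists into transposed (column) matrices
    inputs = np.array(inputs_list, ndmin=2).T
    targets = np.array(targets_list, ndmin=2).T

    hidden_inputs = np.dot(self.weights[0], inputs)          # was np.dot(self.wih, inputs)
    hidden_outputs = self.activation_function(hidden_inputs)
    final_inputs = np.dot(self.weights[1], hidden_outputs)   # was np.dot(self.who, hidden_outputs)
    final_outputs = self.activation_function(final_inputs)
    # ... error calculation and weight updates follow ...
```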

Well, the next change was to create the functions layer_inputs and layer_outputs. The layer_outputs function returns the analogue of variables such as hidden_outputs and final_outputs, converting them into indexes for a loop. As you can see, layer_outputs takes its arguments from the layer_inputs function. Regardless of the number of layers, they return the new values gathered in an array, and the variable outputs holds this array.
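As a rough reconstruction, layer_inputs and layer_outputs could be written as methods of the class sketched above like this (the bodies are my guess based on the description, not the repository code):

```python
import numpy as np

# further methods of the NeuralNetworkGenerator sketch above

def layer_inputs(self, inputs):
    """Signal arriving at every layer; x[0] is the raw input itself."""
    x = [inputs]
    for i in range(len(self.weights)):
        # each weight matrix is multiplied by the output of the previous layer
        previous = x[i] if i == 0 else self.activation_function(x[i])
        x.append(np.dot(self.weights[i], previous))
    # x[0] is the input value (index zero), so it is not part of the result
    return x[1:]

def layer_outputs(self, inputs):
    """Analogue of hidden_outputs and final_outputs, for any number of layers."""
    return [self.activation_function(value) for value in self.layer_inputs(inputs)]
```

For a two-layer configuration this reproduces hidden_outputs and final_outputs as outputs[0] and outputs[1].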

Notice the array x: why is it returned starting from index 1? Because the first element of x is the input value itself; its index is zero and it is not considered. So my goal is to convert every variable into an array index, for looping.

At first this code returned an error. Since the error of the output layer is calculated only once, you need to get the output layer by the index it has in the array. To quickly calculate the last index of an array, I wrote the self.lr (last index) function. The errors array stores the errors for each layer; the first entry, at index 0, is the output error. To supplement the array with the errors of the other layers, I wrote a loop. Before sending self.weights into the loop, I flipped the array of weight coefficients; this is done so that the index i is the same for all values. After doing the calculations, I turned it back over. The most difficult thing remains: updating the weight coefficients.
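Below is a sketch of that error calculation and of the weight update it leads to, written as further methods of the class sketched above. The helper self.lr is the last-index function mentioned in the text (not a learning rate, which in these sketches lives in self.learning_rate); the names layer_errors, update_weights and the assembled train body are placeholders of mine, reconstructed from the description rather than taken from the repository.

```python
import numpy as np

# further methods of the NeuralNetworkGenerator sketch above

def lr(self, array):
    """Return the last index of an array ('lr' = last index, as named in the post)."""
    return len(array) - 1

def layer_errors(self, targets, outputs):
    """Error for every layer; errors[0] is the output-layer error."""
    # the output-layer error is calculated only once, using the last index of outputs
    errors = [targets - outputs[self.lr(outputs)]]
    # flip the weights so that the same index i works for both weights and errors
    self.weights.reverse()
    for i in range(len(self.weights) - 1):
        errors.append(np.dot(self.weights[i].T, errors[i]))
    # turn the weight array back over
    self.weights.reverse()
    return errors

def update_weights(self, inputs, outputs, errors):
    """One gradient step for every weight matrix."""
    for i in range(len(self.weights)):
        # errors[0] belongs to the output layer, so walk the error list backwards
        error = errors[self.lr(self.weights) - i]
        layer_output = outputs[i]
        # the first weight matrix is fed by the raw inputs, the rest by the previous layer
        previous_output = inputs if i == 0 else outputs[i - 1]
        gradient = np.dot(error * layer_output * (1.0 - layer_output),
                          previous_output.T)
        self.weights[i] += self.learning_rate * gradient

def train(self, inputs_list, targets_list):
    """Forward pass, error calculation, then the weight update."""
    inputs = np.array(inputs_list, ndmin=2).T
    targets = np.array(targets_list, ndmin=2).T
    outputs = self.layer_outputs(inputs)
    errors = self.layer_errors(targets, outputs)
    self.update_weights(inputs, outputs, errors)
```

For the two-layer case this reduces to the familiar updates of self.who and self.wih, only expressed through the loop index i.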

This part of the code was not easy for me. The fact is that there are many variables that needed to be brought up to the index i. In addition, updating the weights of the output layer is different from updating the weights for the rest of the layers in the neural network. Perhaps I made a big mistake in not saving the process of finding a solution. This would allow a much better understanding of exactly how I came to this decision. However, this option works well. It remains to rewrite the query function.
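A sketch of how such a compact query could look, reusing layer_outputs and the last-index helper from the sketches above (again a reconstruction, not the repository code):

```python
import numpy as np

# the query method becomes a short forward pass

def query(self, inputs_list):
    """Run a forward pass and return the output of the last layer."""
    inputs = np.array(inputs_list, ndmin=2).T
    outputs = self.layer_outputs(inputs)
    return outputs[self.lr(outputs)]
```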

As you can see, it turned out to be very compact thanks to the above functions. The full version of the neural network class is in the repository linked at the end of this post.

Splendid! I tested how it works with different configurations. The most interesting were neural networks with expanding layers, such as [784, 100, 200, 10]. The program works fine.
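As a usage example, assuming the method sketches above are assembled into one class, a configuration with two hidden layers can be exercised like this (the data is random, only to check that the shapes line up):

```python
import numpy as np

# a network with expanding hidden layers, as in the test mentioned above
nn = NeuralNetworkGenerator([784, 100, 200, 10], learning_rate=0.1)

# one training step on a dummy sample, just to see that the shapes work out
sample = np.random.rand(784)
target = np.zeros(10)
target[3] = 0.99

nn.train(sample, target)
print(nn.query(sample).shape)   # (10, 1)
```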

It remains to put the code in order: add a bias neuron and its inputs, add saving to a database and to a CSV file, and create loading of a trained neural network from the saved parameters.

The full version of the code is here: https://github.com/scisoftdev/Python/blob/master/Neural_Networks_Generator/NeuralNetworkGeneratror.py

Stages of creating a solution.
The first step is the idea. Once you have a clear idea of how the program should work, you can start collecting information on its implementation. This includes libraries, code samples, instructions, and manuals.
The second step is the distribution of the project components. It is necessary to determine what functionality will be automated, what dependencies the project will have, and what additional solutions need to be developed to speed up testing and development.
The third step is building the project. It often happens that the project code is scattered across different folders, or even resides on different computers, so you need to put everything together. It is best to always use a virtual environment; otherwise it can be very difficult later to determine which libraries, and which versions, are needed for everything to work as it should.
The fourth step. When the foundation of the project is ready, you need to understand where to go next. To do this, you need to determine the main directions of development; based on this, it is possible to outline the further stages of development and growth of the project.
The fifth step is to master the project. Over time, not only the functionality of the program grows, but also its complexity and size. It begins to consume more and more resources, so to avoid collapse, you need to look for ways to optimize.

You can find a very simple code on the Internet. This can be an example or a small open source project. And then, using object-oriented programming, you can get a program with very broad functionality. The cooking recipe is quite simple. A little imagination, understanding the principles of algorithms, understanding the principles of writing programs, a little mathematics and physics (maybe another science). You can also spice it up with new ideas, a little maximalism and that’s it, the project is ready!
