To better understand how the neural network from the book works, some reverse engineering is needed. For example: how does an image library convert an image into a set of numbers and send them to the input layer of a neural network?
I’m using the PyCharm code editor, so code written for IPython won’t work as-is; mine will be slightly different. To create a neural network that works in a program that learns automatically, the program needs to be broken into three stages. The first stage is creating the neural network and generating the weights. The second stage is training the neural network and storing the weights for later use. The third stage is writing a program that applies the ready-made weights to object recognition.
To successfully complete these three stages, it is necessary to understand how the code from Tariq Rashid’s book works. This understanding also helps with another, more complex book: Simon Haykin’s “Neural Networks”, second edition. With these two powerful sources of knowledge at your disposal, you can not only create and train neural networks, but also optimize all the processes inside a neural network at the mathematical level. For me, Tariq Rashid’s book was the beginning of the path to more complex and exciting mathematics.
Link to Tariq Rashid’s blog: https://makeyourownneuralnetwork.blogspot.com/
I’m not promoting this author, it’s just that his book really helped me)
Moving on: by studying the program code and the mathematical principles, I broaden my horizons. However, neural networks alone are clearly not enough for full-fledged artificial intelligence; rather, they are only a small fraction of what it should be.
So, in order to save the weight coefficients of a neural network, it is necessary to understand where the weights, already adjusted after training, actually live. They are the two weight matrices that the query function applies when it computes the hidden and final outputs through the sigmoid. Since there are only two weight matrices in this example, there are only two arrays to store. These datasets can be stored in various ways: for example, in a text document, in special databases such as graph stores queried with SPARQL, or in an HDF5 file via the h5py library.
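Here is a minimal sketch of the text-document and h5py variants. It assumes a trained network object n whose weight matrices are exposed as n.wih and n.who, as in the class below; the file names are my own choice:

import numpy
import h5py

# variant 1: plain text documents, one per matrix
numpy.savetxt('wih.txt', n.wih)
numpy.savetxt('who.txt', n.who)

# variant 2: a single HDF5 file via h5py
with h5py.File('weights.h5', 'w') as f:
    f.create_dataset('wih', data=n.wih)
    f.create_dataset('who', data=n.who)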
Referring to the source code (https://github.com/makeyourownneuralnetwork/makeyourownneuralnetwork/blob/master/part2_neural_network.ipynb), I will rewrite it as follows for convenient use in PyCharm:
# (c) Tariq Rashid 2016
# License GPLv2

import numpy
import math
# glob and imageio are imported here for the image-loading experiments later on
import glob
import imageio


class neuralNetwork:

    def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):
        self.inodes = inputnodes
        self.hnodes = hiddennodes
        self.onodes = outputnodes
        # weight matrices: input -> hidden and hidden -> output
        self.wih = numpy.random.normal(0.0, pow(self.inodes, -0.5), (self.hnodes, self.inodes))
        self.who = numpy.random.normal(0.0, pow(self.hnodes, -0.5), (self.onodes, self.hnodes))
        self.lr = learningrate
        # sigmoid activation function
        self.activation_function = lambda x: 1 / (1 + (math.e ** (-x)))

    def train(self, inputs_list, targets_list):
        inputs = numpy.array(inputs_list, ndmin=2).T
        targets = numpy.array(targets_list, ndmin=2).T
        # forward pass
        hidden_inputs = numpy.dot(self.wih, inputs)
        hidden_outputs = self.activation_function(hidden_inputs)
        final_inputs = numpy.dot(self.who, hidden_outputs)
        final_outputs = self.activation_function(final_inputs)
        # backpropagate the errors and update both weight matrices
        output_errors = targets - final_outputs
        hidden_errors = numpy.dot(self.who.T, output_errors)
        self.who += self.lr * numpy.dot((output_errors * final_outputs * (1.0 - final_outputs)),
                                        numpy.transpose(hidden_outputs))
        self.wih += self.lr * numpy.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)),
                                        numpy.transpose(inputs))

    def query(self, inputs_list):
        inputs = numpy.array(inputs_list, ndmin=2).T
        hidden_inputs = numpy.dot(self.wih, inputs)
        hidden_outputs = self.activation_function(hidden_inputs)
        final_inputs = numpy.dot(self.who, hidden_outputs)
        final_outputs = self.activation_function(final_inputs)
        return final_outputs
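For reference, a quick sketch of how this class can be instantiated and queried. The node counts and learning rate here are my own example values (784 inputs corresponds to the 28 x 28 = 784 pixels of the images used later):

# example values: 784 input pixels, 100 hidden nodes,
# 10 output nodes (one per digit), learning rate 0.1
n = neuralNetwork(784, 100, 10, 0.1)

# query with random inputs, just to show the call shape;
# real inputs would be 784 scaled pixel values
print(n.query(numpy.random.rand(784)))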
The query function applies the weights that are already adjusted after training, so they only need to be saved. The next step is to create a neural network class that will use the stored weights.
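A minimal sketch of such a class, assuming the weights were stored in 'weights.h5' as in the saving sketch above; only the forward pass is kept, since no training happens at this stage:

import math
import numpy
import h5py


class trainedNetwork:

    def __init__(self, weights_file):
        # load the two stored weight matrices from the HDF5 file
        with h5py.File(weights_file, 'r') as f:
            self.wih = f['wih'][:]
            self.who = f['who'][:]
        # same sigmoid as in the original class
        self.activation_function = lambda x: 1 / (1 + (math.e ** (-x)))

    # identical to the original query: a forward pass with the stored weights
    def query(self, inputs_list):
        inputs = numpy.array(inputs_list, ndmin=2).T
        hidden_outputs = self.activation_function(numpy.dot(self.wih, inputs))
        final_outputs = self.activation_function(numpy.dot(self.who, hidden_outputs))
        return final_outputs


# usage: n = trainedNetwork('weights.h5')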
To better understand how the code works, I needed to write and test several versions of it myself. First I tried to write the matrix multiplication without using libraries like numpy or pandas.
weights = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12]
]
inputs = [10, 20, 30, 40]

x = str(len(weights[0]))
y = str(len(weights))
print("products of matrix " + y + " X " + x + " by matrix 1 X 4")

new_matrix = []
for j in range(len(weights)):
    mass = []
    summa = 0
    for i in range(len(weights[0])):
        mass.append(weights[j][i] * inputs[i])
        summa = summa + mass[i]
    new_matrix.append(summa)
    print(mass, summa)
print(new_matrix)
The program’s output shows the new matrix. Here I was interested in the multiplication process itself, so I did not write a generator or add the sigmoid. (For a cross-check, the same product computed with numpy follows the output below.)
products of matrix 3 X 4 by matrix 1 X 4
[10, 40, 90, 160] 300
[50, 120, 210, 320] 700
[90, 200, 330, 480] 1100
[300, 700, 1100]
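For the cross-check (my own addition, not from the book), numpy computes the same row-by-row sums of products in a single call:

import numpy

weights = numpy.array([[1, 2, 3, 4],
                       [5, 6, 7, 8],
                       [9, 10, 11, 12]])
inputs = numpy.array([10, 20, 30, 40])

# dot product of a 3 x 4 matrix and a length-4 vector
print(numpy.dot(weights, inputs))  # prints [ 300  700 1100]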
To better understand how a simple neural network works, one consisting of three layers with two neurons in each, I wrote a small program for myself. It is written in a procedural style to make the processes occurring in the network as easy to follow as possible.
import math
import numpy


# random number generator for the weights
def rand():
    return numpy.random.normal(0.0, 0.9)


# inputs
inputs = [0.8, 0.2]

# the weight coefficients go into this array; here it is a 2 by 2 matrix
weights = []

# number of neurons in the hidden and output layers
hiddens = 2
outputs = 2


# a generator for a weight matrix of any size, useful for large
# neural networks: this function creates the width of the matrix,
# i.e. the number of neurons of the next layer (here the hidden one)
def weights_inputs_hiddens():
    inputs_hiddens = []
    for i in range(hiddens):
        inputs_hiddens.append(rand())
    return inputs_hiddens


# this loop sets the height of the matrix, i.e. the number of neurons
# of the previous layer (here the input one)
for j in range(len(inputs)):
    weights.append(weights_inputs_hiddens())


def neuron_hiddens():
    neuron_hiddens = []
    for j in range(len(weights)):
        mass_weights = []
        amount = 0
        for i in range(len(weights[0])):
            mass_weights.append(weights[j][i] * inputs[i])
            # accumulate the sum (the matrix product)
            amount = amount + mass_weights[i]
        # pass the resulting sum through the sigmoid
        sigmoid = 1 / (1 + (math.e ** (-amount)))
        # write the finished result into the new layer of neurons
        neuron_hiddens.append(sigmoid)
    return neuron_hiddens


# compute the hidden layer once and reuse it below
hidden_outputs = neuron_hiddens()
print(hidden_outputs)

# now deal with the output layer of neurons:
# weights need to be generated for it as well
weights_outputs = []


def weights_hiddens_outputs():
    hiddens_outputs = []
    for ii in range(outputs):
        hiddens_outputs.append(rand())
    return hiddens_outputs


for jj in range(hiddens):
    weights_outputs.append(weights_hiddens_outputs())


# then, by the same scheme, compute the final result with the sigmoid
def neuron_outputs():
    neuron_outputs = []
    for j in range(len(weights_outputs)):
        mass_weights = []
        amount = 0
        for i in range(len(weights_outputs[0])):
            mass_weights.append(weights_outputs[j][i] * hidden_outputs[i])
            amount = amount + mass_weights[i]
        sigmoid = 1 / (1 + (math.e ** (-amount)))
        neuron_outputs.append(sigmoid)
    return neuron_outputs


print(neuron_outputs())

# done! parameters for MNIST could even be plugged in here.
# however, there is no training yet: errors still have to be dealt with ...
It was interesting to study how the imageio library works. I printed out the program’s results step by step and watched line by line what it does. The original code is above (commented out), mine is below. The same will need to be done with pyaudio if I write speech recognition or text-to-speech (STT / TTS).
import imageio
import glob
import numpy
import matplotlib.pyplot

# the original loop from the book:
# our_own_dataset = []
#
# for image_file_name in glob.glob('2828_my_own_?.png'):
#     label = int(image_file_name[-5:-4])
#     print("loading ... ", image_file_name)
#     img_array = imageio.imread(image_file_name, as_gray=True)
#
#     img_data = 255.0 - img_array.reshape(784)
#
#     img_data = (img_data / 255.0 * 0.99) + 0.01
#     print(numpy.min(img_data))
#     print(numpy.max(img_data))
#
#     record = numpy.append(label, img_data)
#     our_own_dataset.append(record)

# my step-by-step version for a single file:
our_own_dataset = []
image_file_name = '2828_my_own_3.png'
img = imageio.imread('MyNeuralNetwork/2828_my_own_3.png', as_gray=True)
# the label is the digit encoded in the file name
label = int(image_file_name[-5:-4])
# invert the image: in the file 255 is white, but the network expects small values for white
img_data = 255.0 - img.reshape(784)
# scale the pixel values into the range 0.01 .. 1.0
img_data = (img_data / 255.0 * 0.99) + 0.01
# print(img_data)
print(numpy.min(img_data))
print(numpy.max(img_data))
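With the pixel data prepared this way, passing it to the network is a one-liner. A sketch, assuming a trained network object n (for example, the trainedNetwork built from stored weights earlier); img_data and label come from the code above:

# feed the scaled pixel array into the forward pass
outputs = n.query(img_data)

# the index of the largest output activation is the network's answer
print("network says:", numpy.argmax(outputs), "label was:", label)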
Thus, exploring the program step by step, I found a way to save the weights, figured out how to create my own versions of a neural network, and learned how to connect libraries to work with the input data. In the course of this study I had many questions; one of them sounds like this: how can the process of building a neural network be automated? How do you determine the required number of layers, the number of neurons per layer, and the number of training epochs? I have already heard about such developments, but I wanted to do something of my own. Maybe I’ll get lucky and my solution will be better?
The next step is to write a program that applies the ready-made weight coefficients to image recognition.