A neural network is to be built that behaves according to the table in Figure 15.13, which represents the Boolean AND operation. Input to the network consists of two binary signals; the single output line fires exactly when both input signals are 1.
a. Find values for the weights and the threshold of the output neuron in Figure 15.14 that cause the network to behave properly.
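A candidate answer to part (a) can be checked mechanically. The sketch below assumes the output neuron fires (outputs 1) when the weighted sum of its inputs meets or exceeds the threshold; the particular values w1 = w2 = 1 and θ = 1.5 are one illustrative choice, not the only solution.

```python
# One candidate solution for the AND network (illustrative values only).
# Assumption: the neuron fires when the weighted input sum
# meets or exceeds the threshold.
w1, w2, theta = 1.0, 1.0, 1.5

def and_neuron(x1, x2):
    # Output 1 exactly when w1*x1 + w2*x2 reaches the threshold.
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

# Check the network against the truth table of Figure 15.13.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", and_neuron(x1, x2))
```

Only the (1, 1) input reaches the threshold, so the network computes AND; any weights and threshold with the same property would serve equally well.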
b. Because this is a relatively simple problem, it is easy to guess and come up with a combination of weights and threshold values that works. The solution is not unique; there are many combinations that produce the desired result. In a large network with many connections, it is impossible to find a solution by guessing. Instead, the network learns to find its own solution as it is repeatedly exercised on a set of training data. For networks with hidden layers, the back propagation algorithm can be used for training. For a general class of networks of the form shown in Figure 15.15, an easier training algorithm exists, which will be described here. Note that in
Figure 15.15, the input signals are binary, and all neurons are assumed to have the same threshold value θ. The table in Figure 15.16 sets up the notation needed to describe the algorithm.
Initially, the network is given arbitrary values between 0 and 1 for the weights w1, w2, . . . , and the threshold value θ. A set of input values x1, x2, . . . from the training data is then applied to the network. Because we are working with training data, the correct result t for this set of input values is known. The actual result from the network, y, is computed and compared to t. The difference between the two values is used to compute the next round of values for the weights and the threshold value, which are then tested on another set of values from the training data. This process is repeated until the weights and threshold value have settled into a combination for which the network behaves correctly on all of the training sets. The network is fully trained at this point.
Each new weight wi′ is computed from the previous weight by the formula

wi′ = wi + a(t − y)xi    (A)

and the new threshold value θ′ is computed from the previous threshold value by

θ′ = θ − a(t − y)    (B)

where a is a small positive constant. There are three cases to consider:
i. If the network behaved correctly for the current set of data—that is, if the computed output y equals the desired output t—then the quantity a(t − y) has the value 0, so when we use formulas (A) and (B), the new weights and threshold value will equal the old ones. The algorithm makes no adjustments for behavior that is already correct.
ii. If the output y is 0 when the target output t is 1, then the quantity a(t − y) has the value a, a small positive value. Each weight corresponding to an input xi that was active in this computation (i.e., had the value 1) gets increased slightly by formula (A). This is because the output neuron didn't fire when we wanted it to, so we stimulate it with more weight coming into it. At the same time, we lower the threshold value by formula (B), again so as to stimulate the output neuron to fire.
iii. If the output y is 1 when the target output t is 0, then the quantity a(t − y) has the value −a, a small negative value. Each weight corresponding to an input xi that was active (i.e., had the value 1) gets decreased slightly by formula (A). This is because the output neuron fired when we didn't want it to, so we dampen it with less weight coming into it. At the same time, we raise the threshold value by formula (B), again so as to discourage the output neuron from firing.
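The three cases above collapse into a single update step. This sketch assumes formulas (A) and (B) take the usual perceptron form, wi′ = wi + a(t − y)xi and θ′ = θ − a(t − y), and that the neuron fires when the weighted sum meets or exceeds the threshold.

```python
def train_step(weights, theta, xs, t, a=0.2):
    """Apply formulas (A) and (B) once for one training example.

    Returns the updated weights, the updated threshold, and the
    output y the network produced before the update.
    """
    # Actual output y under the assumed firing rule (sum >= threshold).
    y = 1 if sum(w * x for w, x in zip(weights, xs)) >= theta else 0
    delta = a * (t - y)                 # 0, +a, or -a: cases i, ii, iii
    new_weights = [w + delta * x for w, x in zip(weights, xs)]  # formula (A)
    new_theta = theta - delta                                   # formula (B)
    return new_weights, new_theta, y
```

When y equals t, delta is 0 and nothing changes (case i); when t = 1 and y = 0, the weights on active inputs grow and the threshold drops (case ii); the opposite adjustments occur in case iii.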
We will use the training algorithm to train an AND network. The training set will be the four pairs of binary values shown in the table of Figure 15.13. (Here the training set is the entire set of possible input values; in most cases, a neural network is trained on some input values for which the answers are known and then is used to solve other input cases for which the answers are unknown.) For starting values, we choose (arbitrarily) w1 = 0.6, w2 = 0.1, θ = 0.5, and a = 0.2. The value of a stays fixed and should be chosen to be relatively small; otherwise, the corrections are too big and the values don't have a chance to settle into a solution. The initial picture of the network is therefore that of Figure 15.17. Note that with these choices we did not stumble on a solution, because the input values x1 = 1 and x2 = 0 do not produce the correct result.
The following table shows the first three training sessions. The current network behaves correctly for the first two cases (x1 = 0 and x2 = 0; x1 = 0 and x2 = 1), so no changes are made. For the third case (x1 = 1 and x2 = 0), an adjustment takes place in the weights and in the threshold value.
After these changes, the new network configuration is that now shown in Figure 15.18. Continue the table from this point, cycling through the four sets of input pairs until the network produces correct answers for all four cases.
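The continuation of the table can also be checked mechanically. The sketch below starts from the values in the text (w1 = 0.6, w2 = 0.1, θ = 0.5, a = 0.2) and cycles through the four training pairs until the network is correct on all of them. It uses exact fractions to avoid floating-point drift, and again assumes the firing rule "weighted sum ≥ threshold" together with formulas (A) and (B); the resulting final values are one possible solution, and a different firing convention would end at a different (equally valid) combination.

```python
from fractions import Fraction as F

# Starting values from the text (exact fractions avoid rounding drift).
a = F(1, 5)                  # learning rate a = 0.2
w = [F(3, 5), F(1, 10)]      # w1 = 0.6, w2 = 0.1
theta = F(1, 2)              # threshold = 0.5

# The four training pairs of Figure 15.13: ((x1, x2), target).
TRAINING = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def output(w, theta, xs):
    # Assumed firing rule: fire when the weighted sum >= threshold.
    return 1 if sum(wi * xi for wi, xi in zip(w, xs)) >= theta else 0

# Cycle through the training set until every case is answered correctly.
for epoch in range(100):     # safety cap on the number of passes
    if all(output(w, theta, xs) == t for xs, t in TRAINING):
        break                # fully trained
    for xs, t in TRAINING:
        y = output(w, theta, xs)
        delta = a * (t - y)
        w = [wi + delta * xi for wi, xi in zip(w, xs)]   # formula (A)
        theta = theta - delta                            # formula (B)

print("weights:", [float(wi) for wi in w], "threshold:", float(theta))
```

Under these assumptions the network settles after only a few passes, confirming that cycling through the four input pairs does terminate with correct answers for all four cases.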