Backpropagation
Overview
Backpropagation (or backward error propagation) is a widely used supervised learning algorithm in machine learning. It is central to deep learning and is the standard method for training artificial neural networks. Developers also rely on backpropagation in supervised learning pipelines, which form a significant part of modern artificial intelligence systems. Compared with other training algorithms, backpropagation is relatively easy to implement.
History
Rumelhart, Hinton, and Williams popularized backpropagation and its application to neural networks in 1986; the paper they published covered the topic and gave it worldwide recognition. The foundations of backpropagation come from control theory, developed independently by Henry Kelley in 1960 and Arthur Bryson in 1961 using principles of dynamic programming. A form of the algorithm efficient enough to run on computers was derived in 1970 by the Finnish mathematician Seppo Linnainmaa. In the 2010s, backpropagation became even more useful as it found applications in speech recognition software, machine vision, and other areas.
How does backpropagation work?
Backpropagation networks typically use one or more hidden layers. The network is feedforward: the output of a processing layer never feeds back into that layer or any preceding one. During supervised training, the correct patterns are supplied externally and compared with the corresponding outputs of the neural network. The resulting feedback is used to adjust the weights so that the network classifies the training patterns as accurately as possible. The user sets an error level in advance, which determines when the network's arrangement of the patterns is considered correct. The weights are corrected by comparing the obtained output with the desired output; the process begins at the output layer, and corrections are then made for the preceding layers in turn.
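As an illustration of this weight correction, the sketch below applies a gradient-descent update to a hypothetical single output layer with a sigmoid activation and squared-error loss. The input values, weights, and learning rate here are illustrative assumptions, not taken from the cited sources.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical single-layer example: x is the input vector, W the weight
# matrix, and target the desired output supplied externally.
x = np.array([0.5, -0.2, 0.1])
W = np.random.randn(2, 3) * 0.1          # arbitrary initial weights
target = np.array([1.0, 0.0])
learning_rate = 0.5

output = sigmoid(W @ x)                  # forward pass through the layer
error = target - output                  # compare desired vs. obtained output
delta = error * output * (1 - output)    # gradient of squared error w.r.t. pre-activation
W += learning_rate * np.outer(delta, x)  # adjust the weights of the output layer
```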
The learning algorithm follows a specific procedure. The first step is to initialize the weights with arbitrary values (Gaxiola et al., 2016) and to set values for the other parameters. The second step is to read the input vector and the required output. The third step is to calculate the actual output, progressing forward through the layers. In the fourth step the error is computed, and in the final step the weights are changed accordingly, working backwards from the output layer to the inner hidden layers. These steps are repeated until the obtained output and the desired output are close to one another, apart from a minor, negligible difference. A large network can take a long time to train, since each step takes time and the calculations are complex and computationally demanding (Chen, 2017). Therefore, to speed up the process, a combined calculated error is fed back through the layers that are run forward. The network may sometimes fail to reach the required accuracy; this depends on many factors, such as the initial weight values and the network parameters. If that happens, a new set of random weights must be generated, and the initial network parameters may also need to be altered to suit the situation.
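A minimal sketch of these steps, assuming a two-layer network with sigmoid activations, squared-error loss, and the XOR problem as a toy training set, is shown below. All names, sizes, and values here are illustrative assumptions rather than details from the cited sources.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Step 1: initialize the weights with arbitrary (small random) values and set parameters.
W1 = rng.normal(scale=0.5, size=(4, 2))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(1, 4))   # hidden -> output
lr = 0.5
tolerance = 1e-3                          # error level set in advance by the user

# Step 2: read the input vectors and the required outputs (XOR as a toy example).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

for epoch in range(20000):
    total_error = 0.0
    for x, y in zip(X, Y):
        # Step 3: calculate the actual output, progressing through the layers.
        h = sigmoid(W1 @ x)
        o = sigmoid(W2 @ h)
        # Step 4: compute the error.
        e = y - o
        total_error += float(e @ e)
        # Step 5: change the weights, working backwards from the output layer.
        delta_o = e * o * (1 - o)
        delta_h = (W2.T @ delta_o) * h * (1 - h)
        W2 += lr * np.outer(delta_o, h)
        W1 += lr * np.outer(delta_h, x)
    # Repeat until the obtained and desired outputs are close to one another.
    if total_error < tolerance:
        break
```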
Advantages of backpropagation
Experts well versed in backpropagation found that the nodes in trained neural networks develop features similar to those designed by a human user and to those of the biological neural networks found in mammalian brains. Unlike other algorithms that could work with only a limited number of outputs, backpropagation can function with any number of outputs. Because of its high efficiency, backpropagation also made machine learning systems applicable to many fields that were previously out of reach due to high cost and time requirements (Lee, Delbruck & Pfeiffer, 2016). In 1993, Wan became the first person to win an international pattern recognition competition using backpropagation.
Disadvantages of Backpropagation
Like its advantages, backpropagation has its own set of limitations (Fougstedt et al., 2017). It does not use the batch technique of some other algorithms, relying instead on a matrix-based approach. The performance of backpropagation on a given problem depends directly on the input values, and using backpropagation with noisy data can be problematic. Additionally, normalization of the input vectors, which improves performance, is not part of the backpropagation process itself.
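For instance, input vectors are often standardized before training begins; the short sketch below shows such preprocessing, which is separate from backpropagation itself. The array values are purely illustrative.

```python
import numpy as np

# Illustrative input matrix: rows are training examples, columns are features.
X = np.array([[10.0, 200.0], [12.0, 180.0], [9.0, 220.0]])

# Standardize each feature to zero mean and unit variance before training.
mean = X.mean(axis=0)
std = X.std(axis=0)
X_normalized = (X - mean) / std
```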
Conclusion
With the advancement of neural network applications, backpropagation methods have become very useful in today's computing world. Machine learning processes and artificial intelligence are widely used today, backed by backpropagation techniques, which has opened new prospects for machine learning in new areas of application. Backpropagation has also formed the basis for many other fields of research and development and has been refined further over the years since its invention.
References
Al Huda, F., Mahmudy, W. F., & Tolle, H. (2016). Android malware detection using a backpropagation neural network. Indonesian Journal of Electrical Engineering and Computer Science, 4(1), 240-244.
Chen, D. (2017). Research on traffic flow prediction in the big data environment based on the improved RBF neural network. IEEE Transactions on Industrial Informatics, 13(4), 2000-2008.
Fougstedt, C., Mazur, M., Svensson, L., Eliasson, H., Karlsson, M., & Larsson-Edefors, P. (2017, March). Time-domain digital back propagation: Algorithm and finite-precision implementation aspects. In Optical Fiber Communication Conference (pp. W1G-4). Optical Society of America.
Gaxiola, F., Melin, P., Valdez, F., Castro, J. R., & Castillo, O. (2016). Optimization of type-2 fuzzy weights in backpropagation learning for neural networks using GAs and PSO. Applied Soft Computing, 38, 860-871.
Lee, J. H., Delbruck, T., & Pfeiffer, M. (2016). Training deep spiking neural networks using backpropagation. Frontiers in Neuroscience, 10, 508.