MANUFACTURING CELL FORMATION USING BACK PROPAGATION NETWORKS
5.1 OBJECTIVE
Cellular Manufacturing System (CMS) is an application of Group Technology (GT) in which functionally dissimilar machines are grouped into cells, each dedicated to processing a family of parts. This work gives an overview of Back Propagation Network (BPN) based approaches for forming machine cells and component groups so as to minimize exceptional elements and bottleneck machines. The method is applied to known benchmark problems from the literature and is found to equal or better existing methods in terms of minimizing the number of exceptional elements.
5.2 INTRODUCTION
Today's manufacturing environment places relentless pressure on manufacturing systems to improve in both efficiency and effectiveness. This is manifested in consumer markets by the rising tendency towards a greater variety of products and shorter product life cycles. Traditional manufacturing systems, such as product and process layouts, have not been able to meet this dynamic manufacturing environment.
Many newer manufacturing systems have been proposed, such as agile, flexible and intelligent manufacturing; among these, Group Technology (GT) and Cellular Manufacturing Systems (CMS) have drawn considerable attention in manufacturing organizations. CMS is the application of GT to identify part families and their associated machine groups so that each part family is processed within a machine group. The advantages of using CMS include reductions in set-up time, material handling time, work-in-process inventory, throughput time, delivery time and floor space. Cell formation is considered to be the most challenging step in CMS design.
The objective of this work is to minimize the number of exceptional elements and bottleneck machines using a Back Propagation Network (BPN).
Cell formation based on the machine-part incidence matrix has attracted most researchers, and several methods have consequently been developed to solve the cell formation problem. Without going into details, some of the conventional approaches are:
- Coding and classification
- Production flow analysis
- Similarity coefficient approach
- Mathematical programming
- Graph-theoretic approach
- Search methods
- Artificial Intelligence
These methods are found to produce good solutions for well-structured matrices, where part families and machine cells exist naturally. However, they fail to do so for ill-structured matrices with many exceptional elements, which motivates the use of artificial neural networks. Neural networks fall into two primary classes. Feedforward networks are relatively simple to implement and have numerous applications where a nonlinear mapping between inputs and outputs is required, or where future states are to be predicted, e.g. nonlinear behavioural modelling, adaptive control and image recognition. The other primary class is recurrent networks, in which information is fed back from the output to the input; these are needed when the outputs also depend on previous input states, often referred to as memory effects. Anyone intending to use ANNs should also be familiar with ANN training methods.
5.2.1 ARTIFICIAL NEURAL NETWORK
Artificial neural networks, also known as connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. They learn to perform tasks by considering examples, generally without being programmed with task-specific rules. In image recognition, for instance, they can learn to identify images containing a particular object by examining example images that have been manually labelled.
An artificial neural network is based on a set of connected units or nodes called artificial neurons, which loosely model the neurons of a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron receives a signal, processes it, and passes the result on to the neurons connected to it.
In an artificial neural network, the signal at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that is adjusted as learning proceeds.
The weight increases or decreases the strength of the signal at a connection. A neuron may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers, and different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the intermediate layers multiple times.
The original aim of the artificial neural network approach was to solve problems in the same way a human brain would, but over time attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used for a variety of tasks, such as computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.
Figure 5.1: Artificial Neural Network (input, hidden and output layers; feed-forward flow from the network inputs to the network output, with backward error flow)
An artificial neural network functions in much the same way as neurons work in the brain; its structure is shown in Figure 5.1. Work on such networks dates back to the early 1970s. To understand how an artificial neural network works, note that it consists of three essential layers:
- Input layer
- Output layer and
- Hidden layer
INPUT LAYER
The input layer is the first layer of an artificial neural network. It receives the input data, which may take the form of text, numbers, audio files, pixel values of images, and so on.
HIDDEN LAYER
The hidden layer is the middle layer of an artificial neural network. There may be a single hidden layer, as in the case of a perceptron, or multiple hidden layers. The hidden layers perform various kinds of mathematical computation on the input data and recognize the patterns that are part of it.
OUTPUT LAYER
The output layer produces the result obtained from the rigorous computations performed by the middle layers. An artificial neural network has many parameters and also hyperparameters, which affect the performance of the method. The output depends heavily on these parameters, which include the weights, the learning rate, the batch size, and so on. Each node in an artificial neural network has an associated weight, and a transfer function is used to calculate the weighted sum of the inputs plus a bias.
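The weighted-sum-plus-bias computation of a single neuron can be sketched in Python as follows; the input values, weights and bias here are made-up illustrative numbers, and the sigmoid is used as one common choice of transfer function:

```python
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through a sigmoid transfer function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative, made-up values: three inputs, three weights, one bias.
out = neuron_output([1.0, 0.0, 1.0], [0.5, -0.2, 0.3], bias=0.1)
```

Because the sigmoid squashes any weighted sum into the interval (0, 1), the neuron's output can be interpreted as a soft activation level.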
TYPES OF ANNs
Artificial neural networks come in two main kinds:
- Feed-Forward neural network
- Feed-Back neural network
FEED-FORWARD NEURAL NETWORK
A feed-forward neural network is an artificial neural network in which the connections between nodes never form a cycle, so information flows in only one direction: from the input layer through the hidden layers to the output layer. There are no loops or feedback paths. It is the first and most straightforward type of artificial neural network. Such networks are widely used for supervised learning in instances where the data is not sequential, such as image recognition and classification.
FEED-BACKWARD NEURAL NETWORK
A feed-backward neural network contains loops. This type of network is widely used where memory retention is needed, as in the case of recurrent neural networks, and is suited to situations where the data is sequential or time-dependent.
APPLICATIONS
- Handwritten character recognition
- Speech recognition
- Signature classification
- Facial recognition
5.2.2 BACK PROPAGATION IN ARTIFICIAL NEURAL NETWORKS
The neural network is trained with examples of an input-to-output mapping. Once training is finished, the network is tested to verify that it provides the correct mapping. The output determined by the artificial neural network is compared against the correct output using an error function, and based on the resulting error the weights are adjusted so as to optimize the network, using gradient descent together with the chain rule; this process is shown in Figure 5.2.
Figure 5.2: Backpropagation in ANNs (input layer, hidden layer and output layer)
Backpropagation serves many kinds of applications, and with suitable data it helps determine where a neural network is inaccurate. A neural network is a group of connected input/output units in which each connection has an associated weight. Backpropagation is short for "backward propagation of errors." It is a standard method of training artificial neural networks, and it is fast, simple and easy to program. The two types of backpropagation networks are
1) Static Back-propagation
2) Recurrent Backpropagation
The basic concepts of continuous backpropagation were derived in 1961 in the context of control theory by J. Kelly, Henry Arthur, and E. Bryson. Backpropagation simplifies the network structure by removing weighted links that have a minimal effect on the trained network. It is especially useful for deep neural networks working on error-prone projects, such as image or speech recognition. Its biggest drawback is that it can be sensitive to noisy data.
The backpropagation algorithm is used in the classical feed-forward artificial neural network.
- It is the technique still used to train large deep learning networks.
- This section outlines how the backpropagation algorithm for a neural network is implemented.
In particular, it covers:
- How to forward-propagate an input to calculate an output.
- How to back-propagate error and train a network.
- How to apply the backpropagation algorithm to a real-world predictive modelling problem.
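The forward-propagation and error-backpropagation steps listed above can be sketched in pure Python. This is a minimal illustration on a hypothetical 2-2-1 network trained on the logical OR function; the network size, learning rate, task and random initialization are assumptions made for the example, not the model used in this work:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical 2-2-1 network: two inputs, two hidden neurons, one output.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden-layer weights
b_h = [0.0, 0.0]                                                     # hidden-layer biases
w_o = [random.uniform(-1, 1) for _ in range(2)]                      # output-layer weights
b_o = 0.0
lr = 0.5

# Training pairs for the logical OR function (illustrative task).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def forward(x):
    # Forward-propagate the input through hidden and output layers.
    h = [sigmoid(sum(xi * wij for xi, wij in zip(x, w_h[j])) + b_h[j]) for j in range(2)]
    y = sigmoid(sum(hj * wj for hj, wj in zip(h, w_o)) + b_o)
    return h, y

def sse():
    # Sum of squared errors over all training patterns.
    return sum((d - forward(x)[1]) ** 2 for x, d in data)

loss_before = sse()
for _ in range(2000):
    for x, d in data:
        h, y = forward(x)
        delta_o = (y - d) * y * (1 - y)                    # output-layer error term
        for j in range(2):
            delta_h = delta_o * w_o[j] * h[j] * (1 - h[j]) # back-propagated hidden error
            w_o[j] -= lr * delta_o * h[j]                  # update output weight
            for i in range(2):
                w_h[j][i] -= lr * delta_h * x[i]           # update hidden weights
            b_h[j] -= lr * delta_h
        b_o -= lr * delta_o
loss_after = sse()
```

After training, the sum of squared errors is substantially lower than before, showing that the error propagated backward through the chain rule has driven the weights toward the target mapping.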
5.2.2.1 ADVANTAGES
- Backpropagation is fast, easy and simple to implement.
- There are no parameters to tune apart from the number of inputs; the method is flexible and requires no prior knowledge about the network.
- It is a standard method that generally works well.
- It does not require any special mention of the characteristics of the function to be learned.
5.2.2.2 TYPES OF BACKPROPAGATION NETWORKS
There are two types of backpropagation networks:
- Static Back-propagation
- Recurrent Back-propagation
STATIC BACK-PROPAGATION:
Static backpropagation produces a mapping from a static input to a static output. It is useful for solving static classification problems such as optical character recognition.
In this model, a static set of input data is mapped to a static set of outputs; the input does not change or exhibit dynamic behaviour. Example: character recognition.
RECURRENT BACKPROPAGATION:
In recurrent backpropagation, the activations are fed forward until a fixed value is achieved; after that, the error is computed and propagated backward.
The main difference between the two models is that the mapping is rapid and static in static backpropagation, while it is non-static in recurrent backpropagation, since the inputs there have dynamic behaviour.
5.2.2.3 FEATURES
- Simplifies the network structure by eliminating weighted links that have the least effect on the trained network.
- You need to study a group of input and activation values to develop the relationship between the input and hidden unit layers.
- It helps to assess the impact that a given input variable has on a network output. The knowledge gained from this analysis should be represented in rules.
- Backpropagation is especially useful for deep neural networks working on error-prone projects, such as image or speech recognition.
- Backpropagation takes advantage of the chain and power rules, which allows it to function with any number of outputs.
Cells are created in the workplace to facilitate flow. This is accomplished by bringing together the operations, machines or people involved in a product's natural processing sequence and grouping them close to one another, distinct from other groups. This grouping is called a cell. Cells are used to improve many factors in a manufacturing setting by allowing one-piece flow to occur. An example of one-piece flow would be the production of a metallic case part that arrives at the factory from the vendor in separate pieces requiring assembly. First, the pieces are moved from storage to the cell, where they are welded together, then polished, then coated, and finally packaged. All of these steps are completed in a single cell, minimizing various non-value-added steps such as the time required to transport materials between operations. Some standard cell layouts are the U-shape (suitable for communication and quick movement of workers), the straight line, and the L-shape. The number of workers inside these formations depends on current demand and can be modulated to increase or decrease production. For example, if a cell is usually occupied by two workers and demand doubles, four workers should be placed in the cell; similarly, if demand halves, a single worker can staff the cell. Since cells contain a variety of differing equipment, it is therefore a requirement that each employee be skilled at multiple processes.
A cell is a small organizational unit designed to exploit similarities in how information is processed, products are made, and customers are served. Manufacturing cells co-locate the people and equipment required for processing families of like products. Before cellularization, parts may have travelled miles to visit all the equipment and labour needed for their fabrication. After the reorganization, families of similar parts are produced together within the physical confines of cells that house most or all of the required resources, facilitating the rapid flow and efficient processing of material and information. Furthermore, cell operators can be cross-trained on several machines, engage in job rotation, and assume responsibility for tasks that previously belonged to supervisors and support staff, including activities such as planning and scheduling, quality control, troubleshooting, parts ordering, interfacing with customers and suppliers, and record-keeping.
The first layer of such a network is the input layer; it is followed by successive layers ending with the output layer. The connections between layers carry weights, and the weights transmit data from one layer to the next. The mechanism of adjusting these weights as data passes from one layer to the next is called learning, or training, the neural network. Learning can be either supervised or unsupervised: supervised learning works with desired, defined outputs, while unsupervised learning groups the data and represents it based on shared similarities. With these essential foundations explained, backpropagation is a supervised learning method. It is called an algorithm because it provides a systematic procedure for relating the input variables to the outputs.
5.3 PROBLEM STATEMENT
In this work, the cell formation problem in cellular manufacturing is addressed using backpropagation in an artificial neural network. A cellular manufacturing system faces issues arising from exceptional elements. BPN-based approaches are used to form machine cells and component groups so as to minimize the exceptional elements and bottleneck machines. The method is applied to known benchmark problems from the literature and is found to equal or better existing methods in terms of minimizing the number of exceptional elements.
5.4 PROPOSED WORK
5.4.1 ARTIFICIAL NEURAL NETWORK FOR MANUFACTURING CELL FORMATION
Artificial Neural Networks can be viewed as parallel and distributed processing systems consisting of a considerable number of simple, massively connected processors. These networks can be trained offline for complicated mappings, such as forming manufacturing cells and determining various faults, and can then be used efficiently. The Multi-Layer Perceptron architecture is the most popular paradigm of artificial neural networks in use today. Figure 5.3 shows a standard multilayer feed-forward network. The neural network architectures in this class share the feature that all neurons in a layer are connected to all neurons in adjacent layers through unidirectional branches. That is, the branches can only broadcast information in one direction, the "forward direction". The branches have associated weights that can be adjusted according to a defined learning rule.
Feedforward neural network training is usually carried out using the backpropagation algorithm. Training the network with backpropagation algorithm results in a non-linear mapping between the input and output variables. Thus, given the input/output pairs, the system can have its weights adjusted by the backpropagation algorithm to capture the non-linear relationship. After training, the networks with fixed weights can provide the output for the given input.
Figure 5.3: Two-layer feed-forward network
The standard backpropagation algorithm for training the network is based on the minimization of an energy function representing the instantaneous error. In other words, we desire to minimize the function

E = (1/2) Σq (dq − yq)²
where dq represents the desired network output for the qth input pattern, and yq is the actual output of the neural network. Each weight is changed according to the rule

Δwij = −k ∂E/∂wij
where k is a constant of proportionality, E is the error function, and wij represents the weight of the connection between neuron j and neuron i. The weight adjustment process is repeated until the difference between the desired output and the actual output is within some acceptable tolerance.
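As a numerical illustration of this energy function and weight update rule, the following sketch applies Δw = −k ∂E/∂w to a hypothetical one-weight linear neuron; the training patterns and the constant k are made up for the example:

```python
# Hypothetical one-weight linear neuron y = w * x, with made-up patterns and k.
w = 0.0
k = 0.1  # the constant of proportionality (learning rate)
patterns = [(1.0, 2.0), (2.0, 4.0)]  # (input x_q, desired output d_q); the ideal weight is 2

def energy(w):
    # E = (1/2) * sum over q of (d_q - y_q)^2
    return 0.5 * sum((d - w * x) ** 2 for x, d in patterns)

e_initial = energy(w)
for _ in range(100):
    grad = sum(-(d - w * x) * x for x, d in patterns)  # dE/dw
    w -= k * grad                                      # delta w = -k * dE/dw
e_final = energy(w)
```

Repeating the update drives the energy toward zero as the weight converges to the value that reproduces the desired outputs.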
5.4.2 DEVELOPMENT OF NEURAL NETWORK MODEL FOR MANUFACTURING CELL FORMATION
The models are developed for manufacturing cell formation to group parts and machines into clusters by sequencing the rows and columns of a machine-part incidence matrix, so as to minimize the exceptional elements of the block diagonal matrix. The proposed methodology for cell formation is based on using an Artificial Neural Network (ANN) to reduce the exceptional elements and bottleneck machines. The primary reasons for selecting the ANN as a tool are its good generalization ability, fast real-time operation, and capacity to perform complicated mappings without a known functional relationship. Feedforward neural networks trained by the backpropagation algorithm are used for this purpose. The information required for the development of the neural network model is collected from the cell formation literature and also through offline simulation.
In this work, the input is a machine-component incidence matrix [A = aij] made up of zeros and ones, whose rows indicate machines and whose columns represent components or parts; aij = 1 means that component j is processed on machine i, and otherwise aij = 0. Hence the machine-component incidence matrix [A = aij] is taken as the input of the developed ANN model, as shown in Figure 5.4.
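A small hypothetical incidence matrix can be represented directly as a nested list. The values below are made up, and the grouping step shown (collecting machines with identical part-usage rows) is only a toy illustration of how cells emerge from such a matrix, not the BPN method itself:

```python
# A small hypothetical 4-machine x 5-part incidence matrix (values made up):
# A[i][j] = 1 means part j requires processing on machine i.
A = [
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
]

# Machines whose rows are identical process exactly the same parts,
# so they naturally fall into the same cell.
cells = {}
for machine, row in enumerate(A):
    cells.setdefault(tuple(row), []).append(machine)
machine_groups = list(cells.values())
```

Real benchmark matrices are ill-structured, so rows are rarely identical; that is precisely where the trained network, rather than simple row matching, is needed.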
Incidence matrices having the same size are considered while designing the neural network model. Based on this consideration, neural network models were developed for the cell formation problem for the following four cases:
Case 1: Data set 1
Case 2: Data set 2
Case 3: Data set 3
Case 4: Data set 4
The neural network approach for this purpose has two phases: training and testing. During the training phase, the neural network is trained to capture the underlying relationship between the chosen inputs and outputs. After training, the networks are evaluated with a test data set that was not used for training. Once the networks are trained and tested, they are ready to solve the cell formation problem. The following issues are to be addressed while developing a neural network model for the cell formation problem:
- Selection of input and output variables
- Training data generation
- Data normalization
- Selection of network structure
- Network training
SELECTION OF INPUT AND OUTPUT VARIABLES
For the application of machine learning approaches, it is essential to select the input variables accurately, as ANNs learn the relationships between input and output variables from the input-output pairs provided during training. Here the input variables represent the machine-component incidence matrix, and the output is the block diagonal matrix.
5.4.2.1 TRAINING DATA GENERATION
The generation of training data is an essential step in the development of neural network models. To achieve excellent performance of the neural network, the training data should represent complete information about the machine-part incidence matrix. The training data required for this purpose is generated through offline simulation. The machine-part incidence matrices of four cases are collected.
Figure 5.4: 24 × 40 machine-part incidence matrix (Data set 1)
5.4.2.2 DATA NORMALIZATION
During the training of the neural network, higher-valued input variables may tend to suppress the influence of smaller ones. Also, if the raw data is applied directly to the network, there is a risk of the simulated neurons reaching saturation. If the neurons saturate, then changes in the input value will produce minimal or no change in the output value, which affects the network training to a great extent. To avoid this, the raw data is normalized before actual application to the neural network. One way to normalize the data x is by using the expression

xn = (x − xmin) / (xmax − xmin)
where xn is the normalized value, and xmin and xmax are the minimum and maximum values of the variable x.
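The min-max normalization above can be sketched as follows, using made-up sample values:

```python
def normalize(values):
    """Min-max normalization: xn = (x - xmin) / (xmax - xmin)."""
    xmin, xmax = min(values), max(values)
    return [(x - xmin) / (xmax - xmin) for x in values]

# Made-up sample values for illustration.
scaled = normalize([2.0, 4.0, 6.0, 10.0])  # -> [0.0, 0.25, 0.5, 1.0]
```

Every value is mapped into [0, 1], so no single high-valued variable can dominate the training.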
5.4.2.3 SELECTION OF NETWORK STRUCTURE
To make a neural network perform a specific task, one must choose how the units are connected. This includes selecting the number of hidden nodes and the type of transfer function used. The number of hidden nodes is directly related to the capabilities of the network. For the best network performance, an optimal number of hidden nodes must be determined, typically using a trial-and-error procedure. The input layer has a number of neurons equal to the number of parts and machines, and the output layer has a number of neurons equal to the number of cells.
5.5 RESULTS AND DISCUSSION
This section presents the details of the simulation study carried out on the cell formation problem using the proposed method, together with the details of the ANN models developed for cell formation. The generated training data are normalized and applied to the neural network with the corresponding outputs so that it learns the input-output relationship. The ANN model used here has one hidden layer of sigmoidal neurons, which receives the inputs and broadcasts its outputs to an output layer of linear neurons, which compute the corresponding values. The backpropagation training algorithm propagates the error from the output layer to the hidden layer to update the weight matrix. The algorithm used for training the artificial neural network model is given below:
Step 1: Load the data in a file.
Step 2: Separate the input and output data.
Step 3: Separate the training and test data.
Step 4: Normalize all the input and output values.
Step 5: Define the network structure.
Step 6: Initialize the weight matrix and biases.
Step 7: Specify the number of epochs.
Step 8: Train the network with the training data.
Step 9: Test the network with the test data.
Step 10: Re-normalize the results.
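The ten steps above can be sketched end-to-end as follows. This uses synthetic data and a hypothetical single sigmoid neuron in place of the MATLAB toolbox model used in this work; all sizes, the target rule, the learning rate and the epoch count are illustrative assumptions:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Steps 1-2: "load" synthetic data and separate inputs from outputs.
# Assumed target rule: 1 if the two features sum to more than 1, else 0.
inputs = [[random.random(), random.random()] for _ in range(40)]
outputs = [1.0 if x[0] + x[1] > 1.0 else 0.0 for x in inputs]

# Step 3: separate the training and test data.
x_train, y_train = inputs[:30], outputs[:30]
x_test, y_test = inputs[30:], outputs[30:]

# Step 4: the inputs are already in [0, 1], so no further normalization is needed here.

# Steps 5-6: define the network structure (a single sigmoid neuron)
# and initialize the weights and bias.
w = [0.0, 0.0]
b = 0.0

# Steps 7-8: specify the number of epochs and train with gradient descent.
lr, epochs = 1.0, 500
for _ in range(epochs):
    for x, d in zip(x_train, y_train):
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        delta = (y - d) * y * (1 - y)   # error term from the squared-error gradient
        w[0] -= lr * delta * x[0]
        w[1] -= lr * delta * x[1]
        b -= lr * delta

# Step 9: test the network and measure classification accuracy on unseen data.
correct = sum(
    (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (d == 1.0)
    for x, d in zip(x_test, y_test)
)
accuracy = correct / len(x_test)

# Step 10: re-normalization is a no-op here, since the targets are already 0/1.
```

The same scaffold applies when the inputs are incidence-matrix rows and the model is a full multilayer network; only the data loading and the network definition in steps 1-6 change.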
The neural network model was trained using the backpropagation algorithm with the help of the MATLAB neural network toolbox. At the end of the training process, the model obtained consists of the optimal weights and the bias vector. After training, the generalization performance of the network is evaluated with the help of the test data for the four models. Table 5.1 shows the various parameters of the neural network model. From this table, it is found that the network correctly classified all the data during the testing stage. This shows that the trained ANN can produce the correct output even for new inputs. The BPN results with exceptional elements are shown in Table 5.2. The diagrammatic results of the four data set problems are shown in the appendix.
Table 5.1: Parameters of the neural network model

| Parameter | Value |
| Number of hidden layers | 1 (for all four data sets) |
| Number of hidden nodes | 4 (for all four data sets) |
| Transfer function used | Tan-sigmoidal (for all four data sets) |
| Maximum number of epochs | 1500 (for all four data sets) |
| Percentage of classification | 100% (for all four data sets) |

Training time (seconds):

| | Data set 1 | Data set 2 | Data set 3 | Data set 4 |
| Problem No 1 | 0.079 | 0.078 | 0.11 | 0.078 |
| Problem No 2 | 0.063 | 0.062 | 0.079 | 0.062 |
| Problem No 3 | 0.062 | 0.062 | 0.078 | 0.063 |
| Problem No 4 | 0.079 | 0.063 | 0.062 | 0.062 |
Table 5.2: Exceptional elements (G = number of groups, EE = number of exceptional elements)

| | Data set 1 (G / EE) | Data set 2 (G / EE) | Data set 3 (G / EE) | Data set 4 (G / EE) | Existing method (G / EE) |
| Problem No 1 | 7 / 0 | 7 / 0 | 7 / 15 | 7 / 28 | 7 / 0 |
| Problem No 2 | 7 / 23 | 7 / 10 | 7 / 10 | 7 / 38 | 7 / 10 |
| Problem No 3 | 7 / 20 | 7 / 41 | 7 / 20 | 7 / 20 | 7 / 20 |
| Problem No 4 | 7 / 20 | 7 / 20 | 7 / 20 | 7 / 20 | 7 / 20 |
5.6 SUMMARY
This chapter has presented a neural network-based approach for the cell formation problem. Four separate models were developed for the four cases. The data required for the development of the neural network models were obtained through offline simulation, with the machine-part incidence matrices as input. For each ANN model, the testing data were fed to the designed model to check its accuracy. The testing samples are different from the training samples and are new to the trained network. Simulation results show that this neural network approach is very efficient at forming the block diagonal matrix in cell formation. To further improve the performance of the model, the input features of the network can be selected through dimensionality reduction techniques.