PREDICTION OF ZINC CONSUMPTION AS SACRIFICIAL ANODE IN CATHODIC PROTECTION OF STEEL IN SEA WATER USING ARTIFICIAL NEURAL NETWORK

Corrosion prediction has gained special attention because of its practical significance, but the complexity and variability of the process make it difficult to model. This study evaluates the usefulness of Artificial Neural Networks (ANNs) for predicting the corrosion rate as a function of several factors that previous studies have related to the protectiveness of low carbon steel in sea water.


INTRODUCTION
Corrosion is an electrochemical process in which a current leaves a structure at the anode site, passes through an electrolyte and re-enters the structure at the cathode site. Current flows because of a potential difference between the anode and cathode: the anode potential is more negative than the cathode potential, and this difference is the driving force for the corrosion current. The total system, comprising anode, cathode, electrolyte and the metallic connection between anode and cathode, is termed a corrosion cell [1]. There are many methods for corrosion control, some of which are [2]:
1. Cathodic protection.
2. Anodic protection.
3. Protective coatings such as paint.
4. Corrosion-resistant metals and alloys.
5. Addition of inhibitors.
6. Very pure metals.
The choice of method depends on many factors such as cost, availability, contamination of the environment with the corroding metal, etc. Cathodic protection is unique amongst all the methods of corrosion control in that, if required, it is able to stop corrosion completely, although it remains within the choice of the operator to accept a lesser, but quantifiable, level of protection. Manifestly, it is an important and versatile technique. In principle, cathodic protection can be applied to all the so-called engineering metals.

CATHODIC PROTECTION PRINCIPLES
It is possible to envisage what might happen if an electrical intervention were made in the corrosion reaction by considering the impact on the anodic and cathodic reactions. For example, if electrons were withdrawn from the metal surface, it might be anticipated that the anodic reaction would speed up (to replace the lost electrons) and the cathodic reaction would slow down because of the resulting shortfall of electrons. It follows that the rate of metal consumption would increase. By contrast, if additional electrons were introduced at the metal surface, the cathodic reaction would speed up (to consume the electrons) and the anodic reaction would be inhibited; metal dissolution would be slowed down. This is the basis of cathodic protection [3].
Fig. 1 Schematic illustration of partial cathodic protection of steel in an aerated environment.
Artificial neural networks (ANNs) are simplified models of the central nervous system: networks of highly interconnected neural computing elements that have the ability to respond to input stimuli and learn to adapt to the environment [4]. As the term implies, early work in the field of neural networks centered on modeling the behavior of neurons found in the human brain. Engineering systems are considerably less complex than the brain; hence, from an engineering viewpoint, ANNs can be viewed as non-linear empirical models that are especially useful in representing input-output data, making predictions, classifying data, recognizing patterns, and controlling processes. A single computing element, referred to as a node in this work, is analogous to a single neuron in the human brain. The advantages of using artificial neural networks in contrast with first-principles models or other empirical models are [5]:
1. ANNs can be highly non-linear.
2. The structure can be more complex, and hence more representative, than most other empirical models.
3. The structure does not have to be prespecified.
4. They are quite flexible models.

THEORY AND MODELING OF ANN
Artificial Neural Networks (ANNs) have been increasingly applied to many problems in transport planning and engineering, and the feed forward network with the error back propagation learning rule, usually called simply back propagation (Bp), has been the most popular neural network [6]. Back propagation networks are among the most popular and widely used neural networks because they are relatively simple and powerful. Back propagation was one of the first general techniques developed to train multi-layer networks, and it does not have many of the inherent limitations of the earlier, single-layer neural nets criticized by Minsky and Papert. These networks use a gradient descent method to minimize the total squared error of the output. A back propagation net is a multilayer, feed forward network that is trained by back propagating the errors using the generalized delta rule [7]. The input to the hidden layer, and likewise the input to the output layer, is the output from the immediately previous layer, so the network is called a feed forward neural network. The numbers of input units and output units are fixed by the problem, but the choice of the number of hidden units is somewhat flexible, as shown in figure 2. Too many hidden units may cause overfitting, but if the number of hidden units is too small, the problem may not converge at all. Usually a large number of training cases may allow more hidden units if the problem requires it [8]. The conventional algorithm used for training a multi-layer feed forward (MLFF) network is the Bp algorithm, an iterative gradient algorithm designed to minimize the mean-squared error between the desired output and the actual output for a particular input to the network [9]. Basically, Bp learning consists of two passes through the different layers of the network: a forward pass and a backward pass, as shown in figure 2.
During the forward pass the synaptic weights of the network are all fixed. During the backward pass, on the other hand, the synaptic weights are all adjusted in accordance with an error-correction rule [10]. A MLFF network consists of layers of interconnected neurons, denoted as the input layer, the hidden layer and the output layer. The number of neurons in the hidden layer can be varied based on the complexity of the problem and the size of the input information [10].
Fig. 2 Multi-layer feed forward network (one hidden layer).
Two learning factors that significantly affect convergence speed, as well as help avoid local minima, are the learning rate and the momentum. The learning rate (η) determines the portion of the weight adjustment applied at each step; the optimum value of η depends on the problem. Although a small learning rate guarantees a true gradient descent, it slows down the network convergence process. If the chosen value of η is too large for the error surface, the search path will oscillate about the ideal path and converge more slowly than a direct descent. The momentum (α) determines the fraction of the previous weight adjustment that is added to the current weight adjustment; it accelerates the network convergence process. During the training process, the learning rate and the momentum are adjusted to bring the network out of its local minima and to accelerate convergence. The algorithm of the error back-propagation training is given below [9]:
Step 1: initialize the network weight values.
Step 2: sum the weighted inputs and apply the activation function to compute the output of the hidden layer:

h_j = f( Σ_i W_ij X_i )                    (1)

where h_j is the actual output of hidden neuron j for the input signals X, X_i is the input signal of input neuron i, and W_ij is the synaptic weight between input neuron i and hidden neuron j.

Step 3: sum the weighted outputs of the hidden layer and apply the activation function to compute the output of the output layer:

O_k = f( Σ_j W_jk h_j )                    (2)

where O_k is the actual output of output neuron k and W_jk is the synaptic weight between hidden neuron j and output neuron k.

Step 4: calculate the error term of each output neuron:

δ_k = (d_k − O_k) f′(O_k)                  (3)

where f′ is the derivative of the activation function and d_k is the desired output of output neuron k.

Step 5: adjust the hidden-to-output weights W_jk using the error terms δ_k.

Step 6: sum the delta inputs for each hidden unit and calculate its error term. The success of Bp methods very much depends on problem-specific parameter settings and on the topology of the network [Leonard 1990].
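The training steps above can be sketched in code. The sketch below is illustrative only (it is not the authors' MATLAB implementation); tanh is used as the activation f, and the network sizes, learning rate and data values are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def backprop_step(X, d, W_ih, W_ho, eta=0.1):
    """One forward pass and one backward pass of the generalized delta rule."""
    # Forward pass: eq. (1) hidden output, eq. (2) network output
    h = np.tanh(W_ih @ X)                      # h_j = f(sum_i W_ij X_i)
    o = np.tanh(W_ho @ h)                      # O_k = f(sum_j W_jk h_j)
    # Backward pass: output error term, eq. (3); f'(x) = 1 - tanh(x)^2
    delta_o = (d - o) * (1.0 - o ** 2)
    # Step 6: back-propagate the delta inputs to get the hidden error terms
    delta_h = (W_ho.T @ delta_o) * (1.0 - h ** 2)
    # Steps 4-5: adjust the weights in proportion to the error terms
    W_ho += eta * np.outer(delta_o, h)
    W_ih += eta * np.outer(delta_h, X)
    return o

# Step 1: initialize weights (4 inputs, 3 hidden neurons, 1 output, all assumed)
W_ih = rng.normal(scale=0.5, size=(3, 4))
W_ho = rng.normal(scale=0.5, size=(1, 3))
X = np.array([0.2, 0.5, 0.1, 0.9])             # one input pattern
d = np.array([0.3])                            # its desired output
for _ in range(500):                           # repeat until the error is small
    o = backprop_step(X, d, W_ih, W_ho)
```

Repeating the step drives the output o toward the desired value d, which is exactly the error-correction behavior described for the backward pass.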

MODELING CORRELATION OF ANN
In the current study, neural networks are used to fit a set of experimental points in order to provide a purely empirical model. Some of the experimental points are called the training cases (or learning cases) and the others are called testing cases. They consist of input vectors (values of the input variables) associated with the experimental output value. To solve a problem with a back-propagation network, the network is shown sample inputs with the desired outputs, and it learns by adjusting its weights. If it solves the problem, it will have found a set of weights that produce the correct output for every input.
This work includes computer simulation, implemented on a Pentium 4 computer using MATLAB, version 7. The modeling of the ANN correlation began with the collection of a large data bank; the learning file was then made by randomly selecting about 70% of the database to train the network. The remaining 30% of the data is used to check the generalization capability of the model. The last step is to perform a neural correlation and to validate it statistically. The steps of the modeling are as follows.
Collection of data: the first step in neural network modeling is the collection of data, which is necessary to train the network and to estimate its ability to generalize. In this model, about 256 experimental points have been collected for the corrosion rate under cathodic protection of low carbon steel in sea water [Khalid, 2006]. The data were divided into training and test sets: the neural network was trained on 70% (180) of the data and tested on 30% (76). The data cover a large range of temperature, flow rate, pH, and time. All of these parameters are inputs to the neural network, and there is one output: the corrosion rate.
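The 70%/30% partition described above can be sketched as follows. The data bank itself ([Khalid, 2006]) is not reproduced here, so random numbers stand in for the 256 experimental points, and all variable names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the 256 experimental points: temperature, flow rate,
# pH and time as inputs, corrosion rate as the single output column.
data = rng.random((256, 5))

# Randomly select about 70% (180 points) for training; the remaining
# 30% (76 points) check the generalization capability of the model.
idx = rng.permutation(len(data))
n_train = 180
train, test = data[idx[:n_train]], data[idx[n_train:]]
X_train, y_train = train[:, :4], train[:, 4]
X_test, y_test = test[:, :4], test[:, 4]
```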

THE STRUCTURE OF ARTIFICIAL NEURAL NETWORK
In this work, a multilayer neural network has been used, as it is effective in finding complex non-linear relationships. It has been reported that multilayer ANN models with only one hidden layer are universal approximators. Hence, a multilayer feed forward neural network was chosen as the correlation model. The weighting coefficients of the neural network are calculated using MATLAB programming. The structure of the artificial neural network is built as follows.

Input layer:
A layer of neurons that receive information from external sources and pass this information to the network for processing. These may be either sensory inputs or signals from other systems outside the one being modeled. In this work there are four input neurons, and a set of 180 data points is available in the training set.

Hidden layer:
A layer of neurons that receives information from the input layer and processes it internally; it has no direct connections to the outside world (inputs or outputs). All connections from the hidden layer are to other layers within the system. The first hidden layer consists of nine neurons and the second hidden layer consists of sixteen neurons. This configuration gave the best results and was found by trial and error. If the number of neurons in the hidden layers is larger, the network becomes complicated. The results probably indicate that the present problem is not complex enough to need a more complicated network; hence, satisfactory results can be achieved by keeping the number of neurons in the hidden layers at the best value found, with two hidden layers.

Output layer:
A layer of one neuron that receives the processed information and sends the output signal out of the system. Here the output is the corrosion rate, i.e. the Zn consumption as sacrificial anode in cathodic protection of steel in sea water.

Bias:
The function of the bias is to provide a threshold for the activation of neurons. The bias input is connected to each of the hidden neurons in the network. The structure of the multilayer ANN model is illustrated in figure 3.
Fig. 3 Structure of the multilayer neural network.
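Putting the four parts together, the forward pass through the 4-9-16-1 structure described above can be sketched as follows (an illustrative sketch, not the original MATLAB code; the weight initialization and the input values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Layer sizes: 4 inputs -> 9 hidden -> 16 hidden -> 1 output
sizes = [4, 9, 16, 1]
weights = [rng.normal(scale=0.1, size=(m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]      # bias: activation threshold

def tansig(x):
    """MATLAB-style tan-sigmoid activation (mathematically equal to tanh)."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def forward(x):
    # Propagate the inputs layer by layer through the network
    for W, b in zip(weights, biases):
        x = tansig(W @ x + b)
    return x

# One prediction from the four state variables (values assumed)
out = forward(np.array([0.5, 0.2, 0.8, 0.1]))
```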

TRAINING OF ARTIFICIAL NEURAL NETWORK
Training is the procedure of estimating the values of the weights and establishing the network structure; the algorithm used to do this is called a "learning" algorithm. Learning typically occurs through training, or exposure to a set of input/output data, where the training algorithm iteratively adjusts the connection weights. These connection weights represent the knowledge necessary to solve specific problems (i.e. they are the coefficients of the correlation).
The training phase starts with randomly chosen initial weight values. A back-propagation algorithm is then applied; after each iteration, the weights are modified so that the cumulative error decreases. In back-propagation, the weight changes are proportional to the negative gradient of the error. The following procedure, called "supervised learning", is used to determine the values of the weights of the network:
1. For a given ANN architecture, the values of the weights in the network are initialized as small random numbers.
2. The inputs of the training set are sent to the network and the resulting outputs are calculated.
3. The measure of the error between the outputs of the network and the known correct (target) values is calculated.
4. The gradients of the objective function with respect to each of the individual weights are calculated.
5. The weights are changed according to the optimization search direction.
6. The procedure returns to step 2.
7. The iteration terminates when the value of the objective function calculated using the test data approaches the experimental value.
The trial and error used to find the best ANN correlation model is shown in table 1. The learning process includes the procedure in which the data from the input neurons are propagated through the network via the interconnections. Each neuron in a layer is connected to every neuron in the adjacent layers, and a scalar weight is associated with each interconnection.
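The seven-step supervised-learning procedure can be condensed into a short loop. For brevity the sketch below trains a single linear layer by gradient descent on assumed toy data; it illustrates steps 1-7 rather than reproducing the MATLAB training routine:

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed toy data: 180 training patterns with 4 inputs and 1 target each
X = rng.random((180, 4))
y = (X @ np.array([0.3, -0.2, 0.5, 0.1]))[:, None]

# Step 1: initialize the weights as small random numbers
W = rng.normal(scale=0.01, size=(4, 1))

eta = 0.05                          # learning rate (assumed)
for epoch in range(5000):
    out = X @ W                     # Step 2: compute the network outputs
    err = out - y                   # Step 3: error against the target values
    grad = X.T @ err / len(X)       # Step 4: gradient of the objective
    W -= eta * grad                 # Step 5: move along the search direction
    mse = float(np.mean(err ** 2))
    if mse < 1e-6:                  # Step 7: stop when the error is small
        break                       # (Step 6: otherwise loop back to step 2)
```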
Neurons in the hidden layers receive weighted inputs from each of the neurons in the previous layer; they sum these weighted inputs and then pass the resulting summation through a non-linear activation function (the tan-sigmoid function).
For this study, learning the patterns can be equated to determining the proper values of the connection strengths (i.e. the weight matrices w_h1, w_h2 and w_o illustrated in figure 3) that allow all the nodes to achieve the correct state of activation for a given pattern of inputs. The matrices, biases and vectors given in eqs. (11), (12), (13) and (14) present the resulting weight coefficients of the ANN correlation for this case, where w_h is the matrix containing the weight vectors for the nodes in the hidden layers and w_o is the vector containing the weights for the node in the output layer.

CONCLUSIONS
The ANN correlation shows a noticeable improvement in the prediction of the corrosion rate. The neural network correlation yields an AARE of 0.09% and a standard deviation of 0.46%. The numbers of input units and output units are fixed by the problem, but the choice of the number of hidden units is flexible. This work started with a small number of neurons in the hidden layer, but it was found that two hidden layers, the first consisting of nine neurons and the second of sixteen neurons, give better results for the prediction of the corrosion rate, as shown in table 2. With fewer than two hidden layers the network could not be trained.
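For reference, the two statistical measures quoted above can be computed as below; AARE is taken here as the average absolute relative error in percent (the usual definition), and the experimental/predicted values are made up for the illustration:

```python
import numpy as np

def aare(y_exp, y_pred):
    """Average absolute relative error, in percent."""
    return 100.0 * np.mean(np.abs((y_pred - y_exp) / y_exp))

# Hypothetical experimental vs. predicted corrosion rates
y_exp = np.array([1.00, 2.00, 4.00, 8.00])
y_pred = np.array([1.01, 1.96, 4.04, 7.92])

err = aare(y_exp, y_pred)                            # mean relative error, %
rel = 100.0 * np.abs((y_pred - y_exp) / y_exp)       # per-point errors, %
sd = float(rel.std(ddof=1))                          # their standard deviation
```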


Fig. 4 illustrates the number of training epochs versus the MSE for the corrosion rate.

Fig. 5 Comparison between experimental and predicted zinc consumption for the training set.

Fig. 6 Comparison between experimental and predicted zinc consumption for the testing set.

NOMENCLATURE
f′    Derivative of the activation function
h_j   Actual output of hidden neuron j
O_k   Actual output of output neuron k
P     Number of patterns in the training set
V_ij  Weight on the link from X_i to Z_j
W_ij  Synaptic weight between input and hidden neurons
W_jk  Synaptic weight between hidden and output neurons
X_i   Input signal of input neuron i

A. S. YARO, Prediction of Zinc Consumption as Sacrificial Anode in Cathodic Protection of Steel in Sea Water Using Artificial Neural Network
The Bp algorithm is capable of approximating arbitrary non-linear mappings. However, it is noted that two serious disadvantages of the Bp algorithm are its slow rate of convergence, requiring very long training times, and its tendency to get stuck in local minima.


RESULTS
The network architecture used for predicting the corrosion rate under cathodic protection of low carbon steel in sea water, shown in figure 3, consists of four input neurons corresponding to the state variables of the system, the hidden neurons, and one output neuron. All neurons in each layer were fully connected to the neurons in the adjacent layers. The ANN correlation predictions are plotted in figure 5, which compares the predicted corrosion rate with the experimental values for the training set.

Table (2) Statistical information of the neural network models for the prediction of the corrosion rate.
ACKNOWLEDGMENT
Partial support for this work was provided by Suhaiela A. Akkar. The author appreciates her valuable discussions.