Abstract
This paper continues the discussion of the weight evolution algorithm, which addresses the local minimum problem of back-propagation by changing the weights of a multi-layer neural network in a deterministic way. During the back-propagation learning phase, the network weights are deliberately adjusted to improve system performance. The idea is to work backward from the error components and the system outputs to deduce a deterministic perturbation on particular network weights for optimization purposes. Simulation results show that the weight evolution algorithm consistently outperforms traditional methods in achieving global convergence. Mathematical analysis shows that weight evolution between the hidden and output layers accelerates convergence, whereas weight evolution between the input and hidden layers helps solve the local minimum problem.
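The abstract does not give the exact perturbation rule, so the following is only a minimal sketch of the general idea: standard back-propagation on a small two-layer network, with a deterministic perturbation applied to selected weights when learning stalls. The `evolve_weights` step here (nudging the weights with the largest gradient contribution, scaled by the residual error) is a hypothetical illustration, not the paper's actual deduction from error components and system outputs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(64, 2))          # toy inputs
y = (X[:, :1] * X[:, 1:] > 0).astype(float)   # XOR-like target

W1 = rng.normal(0, 0.5, size=(2, 4))          # input -> hidden weights
W2 = rng.normal(0, 0.5, size=(4, 1))          # hidden -> output weights
lr, prev_loss = 0.5, np.inf

for epoch in range(5000):
    h = sigmoid(X @ W1)                       # forward pass
    out = sigmoid(h @ W2)
    err = out - y
    loss = float(np.mean(err ** 2))

    d_out = err * out * (1 - out)             # standard back-propagation
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / len(X)
    W1 -= lr * (X.T @ d_h) / len(X)

    # Hypothetical "weight evolution" step: when the loss stalls (a
    # suspected local minimum), deterministically perturb the weights
    # whose gradient contribution is largest, scaled by the remaining
    # error, instead of perturbing at random.
    if abs(prev_loss - loss) < 1e-9 and loss > 1e-3:
        g2 = h.T @ d_out
        W2 -= 0.1 * np.sign(g2) * (np.abs(g2) == np.abs(g2).max()) * loss
        g1 = X.T @ d_h
        W1 -= 0.1 * np.sign(g1) * (np.abs(g1) == np.abs(g1).max()) * loss
    prev_loss = loss

print(f"final MSE: {loss:.4f}")
```

The deterministic perturbation is the key contrast with stochastic escape methods such as adding random noise to the weights: the same stalled state always produces the same corrective move.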
Original language | English |
---|---|
Pages | 251-255 |
Number of pages | 5 |
Publication status | Published - 2001 |
Event | Congress on Evolutionary Computation 2001 - Seoul, Korea, Republic of. Duration: 27 May 2001 → 30 May 2001 |
Conference
Conference | Congress on Evolutionary Computation 2001 |
---|---|
Country/Territory | Korea, Republic of |
City | Seoul |
Period | 27/05/01 → 30/05/01 |
Keywords
- Back-propagation learning
- Neural networks
- Weight evolution