NeuralNetworkTrain variability
jdorsey
NeuralNetworkTrain Iterations=1e6, MinError=1e-8, Momentum=0.075, LearningRate=0.05, nhidden=18, input=trainingDataWave, output=trainingResultWave
Using the same input and output data, I get different RMS errors from run to run after 1e6 iterations (deleting the interconnection weight waves between runs), and therefore, I assume, slightly different interconnection weights. Is this not designed to be totally repeatable?
I'm not too worried, as the RMS errors on my training data are respectably small, but just wanted to check I wasn't missing something fundamental here...
Thanks in advance for reassurance or for pointing out my stupidity.
Neural network training starts by initializing the array of weights using a pseudo-random number generator. If you are concerned about repeatability, you have the option of providing your own set of weights (see the weightsWave1 and weightsWave2 keywords).
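For instance, here is a minimal, untested sketch of that approach. The weight-wave dimensions (inputs x hidden and hidden x outputs, with one training sample per row of the data waves) are my assumption; check the NeuralNetworkTrain help for the exact layout it expects:

Function TrainRepeatably()
	WAVE trainingDataWave, trainingResultWave
	Variable nIn = DimSize(trainingDataWave, 1)	// assumed: one column per input
	Variable nOut = DimSize(trainingResultWave, 1)	// assumed: one column per output
	Variable nHid = 18

	SetRandomSeed 0.37	// fix Igor's random generator so enoise repeats
	Make/O/D/N=(nIn, nHid) startW1 = enoise(0.5)	// input -> hidden weights
	Make/O/D/N=(nHid, nOut) startW2 = enoise(0.5)	// hidden -> output weights

	NeuralNetworkTrain Iterations=1e6, MinError=1e-8, Momentum=0.075, LearningRate=0.05, nhidden=nHid, input=trainingDataWave, output=trainingResultWave, weightsWave1=startW1, weightsWave2=startW2
End

Because the starting weights are now made from a seeded enoise call rather than the operation's internal initializer, every run begins from the same point.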
As for getting "different" results: if you view this as an optimization problem, you should note that reaching the end of N iterations does not assure you of either a unique or a converged solution. It is prudent to run subsequent tests to make sure that the resulting network actually works as you expect.
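A quick check of that kind might look like the following sketch. The names validationDataWave and validationResultWave are placeholders for data you held out of training, and W_NNResults is what I believe NeuralNetworkRun calls its output wave; verify both against the help file:

Function ValidationRMS()
	WAVE validationDataWave, validationResultWave	// held-out data (placeholder names)
	WAVE M_Weights1, M_Weights2	// weights produced by NeuralNetworkTrain

	NeuralNetworkRun input=validationDataWave, weightsWave1=M_Weights1, weightsWave2=M_Weights2
	WAVE W_NNResults	// assumed name of the run's output wave

	Duplicate/O W_NNResults, residual
	residual -= validationResultWave	// point-by-point error
	residual = residual*residual
	Print "validation RMS =", sqrt(mean(residual))
End

If the RMS error on the held-out data is much larger than on the training data, the network has memorized the training set rather than generalized.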
If you are going to provide your own starting weights, it is tempting to feed the results of a previous training run back into the operation. At that point, if your momentum parameter is small, your solution is likely to remain in the same local minimum, so it is not at all certain that you gain much from repeated training.
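If you do want to try it, duplicate the output weight waves first, since the next training run will overwrite them (M_Weights1 and M_Weights2 are, as far as I recall, the names NeuralNetworkTrain gives its output weights):

Duplicate/O M_Weights1, prevW1	// copy before the next run overwrites them
Duplicate/O M_Weights2, prevW2
NeuralNetworkTrain Iterations=1e6, MinError=1e-8, Momentum=0.075, LearningRate=0.05, nhidden=18, input=trainingDataWave, output=trainingResultWave, weightsWave1=prevW1, weightsWave2=prevW2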
A.G.
WaveMetrics, Inc.