Network: Computation in Neural Systems, vol. 6, issue 4, pp. 619-633
The effects of nonlinear modulation of the Hebbian learning rule on the performance of a perceptron are investigated. Both random classification and classification provided by a teacher perceptron are considered. Both the generalization ability and the learning rate are seen to depend on the overlap between the teacher and the student and on the signal-to-noise ratio in the local field; moreover, they are independent of the specific teacher distribution when the ratio between the number of training examples and the perceptron size is small. An analytic expression is obtained for the optimal modulation function under different classification schemes. For random and Gaussian teacher classifications the best choice of modulation appears to be linear, while for binary teachers it is shown to be the hyperbolic tangent. The modifications to the latter that result from diluting the binary teacher are also obtained in analytic form.
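The setting described above can be illustrated with a minimal numerical sketch. The snippet below is not the paper's construction but an assumed one: a binary teacher perceptron labels random Gaussian examples, a Hebbian weight vector is formed from them, and a componentwise modulation is applied to that vector. Each Hebbian component carries a signal proportional to the corresponding teacher weight plus noise, which is why a tanh modulation (with a gain set here from a rough signal-to-noise estimate, not from the paper's formulas) can be compared against the plain linear rule via the teacher-student overlap. All sizes and gain choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): N inputs, P = alpha * N examples.
N, alpha = 200, 5.0
P = int(alpha * N)

# Binary teacher with components +-1 (the binary-teacher scenario).
T = rng.choice([-1.0, 1.0], size=N)

# Random Gaussian examples, labelled by the teacher perceptron.
xi = rng.standard_normal((P, N))
sigma = np.sign(xi @ T)

# Hebbian weight vector: each component H_i is a noisy estimate of T_i,
# so modulating it componentwise can sharpen the student's weights.
H = (sigma @ xi) / P

def overlap(J):
    """Normalized overlap between a student vector J and the teacher T."""
    return float(J @ T) / (np.linalg.norm(J) * np.linalg.norm(T))

# Linear modulation is plain Hebbian learning.  For a binary teacher, a
# tanh modulation acts like a posterior-mean estimate of each T_i; the
# gain below uses rough signal/noise estimates for H_i (assumptions).
a = np.sqrt(2.0 / np.pi) / np.sqrt(N)   # approximate signal amplitude in H_i
noise_var = 1.0 / P                     # approximate noise variance of H_i
J_lin = H
J_tanh = np.tanh(a * H / noise_var)

print(f"overlap, linear modulation: {overlap(J_lin):.3f}")
print(f"overlap, tanh modulation:   {overlap(J_tanh):.3f}")
```

With these sizes both rules recover most of the teacher, and the overlaps can be compared directly; the qualitative point is that the modulation function is a knob applied to the Hebbian estimate, which is the quantity the paper optimizes analytically.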