Abstract:
Learning from examples plays a central role in artificial neural networks. The success of many learning schemes is not guaranteed, however, since algorithms like backpropagation may get stuck in local minima, thus providing suboptimal solutions. For feedforward networks, optimal learning can be achieved provided that certain conditions on the network and the learning environment are met. This principle is investigated for the case of networks using radial basis functions (RBF). It is assumed that the patterns of the learning environment are separable by hyperspheres. In that case, we prove that the attached cost function is free of local minima with respect to all the weights. This provides some theoretical foundation for the widespread application of RBF networks in pattern recognition.
Published in: IEEE Transactions on Neural Networks (Volume: 6, Issue: 3, May 1995)
DOI: 10.1109/72.377979
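The paper's formal setting and proofs are not reproduced on this page, but the following minimal sketch illustrates the kind of configuration the abstract refers to: a Gaussian RBF network with a quadratic cost, trained by plain gradient descent over all of its weights (output weights, centers, and widths) on a toy two-dimensional problem whose classes are separable by a hypersphere (here, a circle). This is an illustrative assumption-laden example, not the authors' construction; the helper name rbf_features, the number of units, the learning rate, and the data generation are all arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

# Toy learning environment separable by a hypersphere:
# label 1 iff the point lies inside the unit circle.
X = rng.uniform(-2.0, 2.0, size=(200, 2))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(float)

# Gaussian RBF network: hidden units phi_j(x) = exp(-||x - c_j||^2 / (2 sigma_j^2)),
# linear output layer out(x) = sum_j w_j phi_j(x) + b.
n_units = 4
centers = rng.uniform(-2.0, 2.0, size=(n_units, 2))   # centers c_j
log_sigma = np.zeros(n_units)                          # sigma_j = exp(log_sigma_j) > 0
w = rng.normal(scale=0.1, size=n_units)                # output weights
b = 0.0                                                # output bias

def rbf_features(X, centers, sigma):
    """Gaussian activations and squared distances to each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # ||x_p - c_j||^2
    return np.exp(-d2 / (2.0 * sigma ** 2)), d2

# Quadratic cost E = 1/2 sum_p (out_p - y_p)^2, minimized by gradient descent
# with respect to ALL the weights, as in the abstract's claim.
lr = 0.1
for step in range(3000):
    sigma = np.exp(log_sigma)
    phi, d2 = rbf_features(X, centers, sigma)
    out = phi @ w + b
    err = out - y                                      # dE/d(out), per pattern
    # Output-layer gradients.
    grad_w = phi.T @ err / len(X)
    grad_b = err.mean()
    # Hidden-layer gradients (chain rule through the Gaussian units).
    common = (err[:, None] * w[None, :]) * phi         # shape (P, J)
    diff = X[:, None, :] - centers[None, :, :]         # x_p - c_j, shape (P, J, 2)
    grad_c = (common[:, :, None] * diff).mean(axis=0) / sigma[:, None] ** 2
    grad_s = (common * d2).mean(axis=0) / sigma ** 2   # gradient w.r.t. log_sigma
    # Simultaneous update of every weight in the network.
    w -= lr * grad_w
    b -= lr * grad_b
    centers -= lr * grad_c
    log_sigma -= lr * grad_s

phi, _ = rbf_features(X, centers, np.exp(log_sigma))
accuracy = (((phi @ w + b) > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")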