I. Introduction
In this paper, we study the perceptron model. Proposed initially in the 1960s [6]–[9], it is a toy model of a one-layer neural network storing random patterns, as well as a very natural model in high-dimensional probability. Let $X_1, \dots, X_M \in \mathbb{R}^n$ be i.i.d. random patterns to be stored. Storage of these patterns is achieved if one finds a vector of synaptic weights $J$ consistent with all the $X_i$: that is, $\langle J, X_i \rangle \ge 0$ for all $1 \le i \le M$. There are two main variants of the perceptron: when the weight vector $J$ lies on the sphere in $\mathbb{R}^n$ (the spherical perceptron) and when $J \in \{-1,+1\}^n$ (the binary or Ising perceptron). For more on the spherical perceptron see [10]–[14]; in this paper we focus only on the binary perceptron.
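As a concrete illustration of the storage condition above, the following minimal sketch (not from the paper) checks whether a candidate binary weight vector $J \in \{-1,+1\}^n$ is consistent with a set of patterns, i.e., satisfies $\langle J, X_i \rangle \ge 0$ for all $i$. Taking the patterns to be i.i.d. standard Gaussian, and the values of $n$ and $M$, are illustrative assumptions, not specifications from the text.

```python
# Sketch: empirically testing the storage condition <J, X_i> >= 0 for a
# candidate binary weight vector J. The Gaussian pattern distribution and
# the parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, M = 100, 50                    # dimension n and number of patterns M (assumed values)
X = rng.standard_normal((M, n))   # rows are the i.i.d. patterns X_1, ..., X_M

def stores(J: np.ndarray, X: np.ndarray) -> bool:
    """True iff J is consistent with every pattern: <J, X_i> >= 0 for all i."""
    return bool(np.all(X @ J >= 0))

# One candidate weight vector drawn uniformly from {-1, +1}^n.
J = rng.choice([-1, 1], size=n)
print(stores(J, X))
```

For most draws this prints `False`: a uniformly random $J$ stores all $M$ patterns only with small probability, which is why the question of when *some* consistent $J \in \{-1,+1\}^n$ exists (the storage capacity of the binary perceptron) is nontrivial.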