The BCM model describes synaptic plasticity via a dynamic adaptation of the post-synaptic activity. The model explains the behavior of cortical neurons as a combination of long-term potentiation and long-term depression induced by a series of stimuli applied to pre-synaptic neurons [scholarpedia]. Starting from the Hebbian learning rule, which states that repeated and persistent activity strengthens the transmission of information between neurons, the BCM model aims to overcome mathematical issues related to the stability and applicability of perceptron models.
In this work we refer to the BCM implementation proposed by Law and Cooper in 1994 [pnas], which is described by the following set of equations:
\begin{align*}
  y &= \sigma \left(\sum_i w_i x_i\right) \\
  \frac{dw_i}{dt} &= \frac{y \, (y - \theta) \, x_i}{\theta} \\
  \theta &= \mathbb{E}[y^2]
\end{align*}
where $y$ is the post-synaptic activity of the neuron, $x_i$ and $w_i$ are the $i$-th pre-synaptic input and its synaptic weight, $\sigma$ is a non-linear activation function, and $\theta$ is a moving threshold given by the expectation of the squared post-synaptic activity.
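As a minimal sketch of how these equations translate into code, the following Python/NumPy fragment simulates a single BCM neuron, tracking the sliding threshold $\theta$ with an exponential moving average of $y^2$. The learning rate `eta`, the averaging constant `tau`, and the choice of a logistic sigmoid for $\sigma$ are illustrative assumptions, not values prescribed in [pnas].

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_steps = 10, 5000
eta = 1e-2    # learning rate (illustrative choice)
tau = 100.0   # averaging constant for the sliding threshold (illustrative)

w = rng.normal(scale=0.1, size=n_inputs)  # synaptic weights
theta = 1.0                               # moving threshold, estimate of E[y^2]

def sigma(u):
    # Non-linear activation; a logistic sigmoid is one common choice.
    return 1.0 / (1.0 + np.exp(-u))

for _ in range(n_steps):
    x = rng.random(n_inputs)                # pre-synaptic input pattern
    y = sigma(w @ x)                        # post-synaptic activity
    w += eta * y * (y - theta) * x / theta  # BCM weight update
    theta += (y**2 - theta) / tau           # running estimate of E[y^2]
```

Note how the sliding threshold makes the rule self-stabilizing: sustained high activity raises $\theta$, which turns further potentiation into depression.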
Shouval et al. [shouval] demonstrated the high selectivity of artificial neurons trained with the BCM equations: during training, synaptic connections tend to produce highly oriented receptive fields, making each neuron responsive to only a subset of the provided patterns. Several authors extended these results to network architectures of BCM neurons [kirkwood, blais], highlighting the emergence of receptive fields in the neurons' synaptic weights.
Castellani et al. [castellani] proposed to extend the classical BCM model by including lateral connections between neurons. Lateral connections allow the post-synaptic activities to be inhibited or enhanced according to the state of each neuron's neighborhood. In practice, this amounts to introducing an extra matrix term $\mathcal{L}$, which influences the post-synaptic vector as
\begin{equation}
  \mathbf{y} = \sigma \left( (I - \mathcal{L})^{-1} W X \right)
\end{equation}
where $W$ and $X$ are the synaptic weight matrix and the input matrix, respectively. The introduction of lateral connections determines the selectivity level of the BCM neurons: inhibitory (negative) lateral connections discourage neurons from memorizing the same patterns, while excitatory (positive) lateral connections increase the probability that several neurons converge to the same stationary state. Therefore, the strength of the lateral interactions directly determines the learning capacity of the model.
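A short sketch of how the lateral term enters the computation, assuming a fixed inhibitory lateral matrix with zero diagonal (the variable names and the constant $-0.2$ below are illustrative, not part of [castellani]):

```python
import numpy as np

rng = np.random.default_rng(1)

n_neurons, n_inputs, n_samples = 4, 10, 32

W = rng.normal(scale=0.1, size=(n_neurons, n_inputs))  # synaptic weights
X = rng.random((n_inputs, n_samples))                  # input patterns as columns

# Lateral connection matrix: negative entries inhibit, positive entries excite.
# Zero diagonal, since a neuron has no lateral connection to itself.
L = -0.2 * (np.ones((n_neurons, n_neurons)) - np.eye(n_neurons))

def sigma(u):
    return 1.0 / (1.0 + np.exp(-u))

# Post-synaptic activities with lateral interactions: y = sigma((I - L)^{-1} W X).
# Solving the linear system avoids forming the explicit matrix inverse.
Y = sigma(np.linalg.solve(np.eye(n_neurons) - L, W @ X))
```

With purely inhibitory off-diagonal entries, each neuron's activity is suppressed by its neighbors, which in a full training loop pushes different neurons toward different stationary states.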
Complete documentation of the mathematical background of the BCM model can be found here.