31. Plasticity and Coding

How is plasticity related to neuronal coding?

Outline

Learning to be fast

The fundamental idea behind this simple model is that inputs which predict the postsynaptic action potential, i.e., arrive before it happens, get their weights strengthened.

The simple model in the book is a neuron that takes input from 20 neurons. The inputs from these neurons are arranged in temporal order: the first neuron fires, then the second, then the third, and so on. Driven by this input, the postsynaptic neuron fires at some time. Interestingly, after many repetitions the firing time of the postsynaptic neuron becomes earlier than in the initial trial.

Suppose the postsynaptic neuron initially fires around the input of the 9th presynaptic neuron. Learning then strengthens the weights of, for example, neurons 7, 8, and 9, so the postsynaptic neuron fires earlier in the next run of the experiment. This requires an asymmetric learning window.
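
A minimal sketch of this idea, assuming a toy setup rather than the book's exact model: 20 presynaptic neurons fire one per millisecond, the postsynaptic neuron fires once the summed weights of the inputs received so far cross a threshold, and an exponential asymmetric STDP window (the amplitudes, time constants, and threshold below are assumed values) then potentiates the inputs that arrived before the spike. Over repeated trials the postsynaptic spike time moves earlier.

```python
import numpy as np

# Sketch: 20 presynaptic neurons fire in order, one per millisecond.
# The postsynaptic neuron fires as soon as the running sum of active
# weights crosses a threshold. An asymmetric STDP window potentiates
# inputs that fired before the postsynaptic spike and depresses those
# that fired after it, so the spike time advances over trials.

n_pre = 20
pre_times = np.arange(n_pre, dtype=float)      # pre neuron i fires at t = i ms
w = np.full(n_pre, 0.3)                        # initial weights (arbitrary units)
theta = 2.0                                    # firing threshold (assumed)
A_plus, A_minus = 0.05, 0.03                   # STDP amplitudes (assumed)
tau_plus, tau_minus = 10.0, 10.0               # STDP time constants in ms (assumed)
w_max = 1.0

def post_spike_time(w):
    """Time at which the cumulative synaptic drive crosses threshold."""
    drive = np.cumsum(w)                       # crude EPSP summation, no decay
    idx = np.argmax(drive >= theta)
    return pre_times[idx] if drive[idx] >= theta else None

for trial in range(50):
    t_post = post_spike_time(w)
    if t_post is None:
        break
    dt = t_post - pre_times                    # post minus pre spike time
    # Asymmetric window: pre before post (dt >= 0) -> LTP, else LTD.
    ltp = (dt >= 0)
    w[ltp]  += A_plus  * np.exp(-dt[ltp] / tau_plus) * (w_max - w[ltp])
    w[~ltp] -= A_minus * np.exp( dt[~ltp] / tau_minus) * w[~ltp]
    if trial % 10 == 0:
        print(f"trial {trial:2d}: post spike at t = {t_post:.0f} ms")
```

The soft bounds in the update (factors $w_{\mathrm{max}} - w$ and $w$) keep the weights finite while the spike time keeps creeping toward the earliest inputs.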

This procedure is similar to human learning with experience.

The learning window is usually as short as about 100 ms. However, monkeys can be conditioned over delays that span seconds. This could be related to reverberating oscillations.

Learning to be precise

  1. Nonlinear Poisson model, $\nu(u) = \nu_{\mathrm{max}} \Theta(u -\theta)$, where $\nu_{\mathrm{max}}$ is the reliability rather than the maximum rate. $\nu_{\mathrm{max}}\to\infty$ means the neuron fires immediately when the threshold is reached and is thus noiseless. Here, noise refers to how well synchronized the inputs are; noiseless indicates that the inputs are all synchronized in time. See the sketch after this list.
  2. Combined with a non-Hebbian learning rule (Eqn. 12.5).
  3. Calculate the joint probability of pre- and postsynaptic firings, neglecting the correlation between the postsynaptic firing and an individual presynaptic firing when the postsynaptic neuron takes in a large number of inputs.
  4. Fig. 12.6: small noise respects the asymmetric learning window we used, while large noise smears out the learning window.
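
A minimal sketch of the escape-rate model from item 1, assuming a simple ramp input $u(t)$ and arbitrary parameter values: once $u$ exceeds $\theta$, the neuron fires stochastically with rate $\nu_{\mathrm{max}}$, and larger $\nu_{\mathrm{max}}$ makes the first spike follow the threshold crossing with less jitter, i.e., approaches the noiseless limit.

```python
import numpy as np

# Sketch of the nonlinear Poisson (escape-rate) model
# nu(u) = nu_max * Theta(u - theta): once the membrane potential u(t)
# exceeds theta, the neuron fires stochastically with rate nu_max.
# Larger nu_max -> the spike follows the threshold crossing almost
# immediately ("noiseless" limit nu_max -> infinity).
# The linear ramp u(t) is an assumed toy input, not from the book.

rng = np.random.default_rng(0)
dt = 0.1e-3                                    # time step: 0.1 ms
t = np.arange(0, 0.1, dt)                      # 100 ms window
theta = 1.0
u = 20.0 * t                                   # potential ramp, crosses theta at 50 ms

def first_spike_times(nu_max, n_trials=2000):
    """Sample the first spike time for each trial of the escape process."""
    rate = np.where(u > theta, nu_max, 0.0)    # nu(u) = nu_max * Theta(u - theta)
    p_spike = 1.0 - np.exp(-rate * dt)         # spike probability per time step
    spikes = rng.random((n_trials, t.size)) < p_spike
    first_idx = np.argmax(spikes, axis=1)
    fired = spikes[np.arange(n_trials), first_idx]
    return t[first_idx[fired]]

for nu_max in (20.0, 200.0, 2000.0):           # Hz
    ts = first_spike_times(nu_max)
    print(f"nu_max = {nu_max:6.0f} Hz: "
          f"mean first spike {1e3*ts.mean():.1f} ms, jitter {1e3*ts.std():.2f} ms")
```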

I have an idea of a possible phenomenon. In these models, soft bounds make plasticity less efficient as the weight approaches the bounds.

However, imagine the weight were pushed above the upper bound by some brute force. The network would become unstable. The treatment for such a network is to suppress the weights until every one of them is back within reasonable bounds.
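
A minimal sketch of this soft-bound idea, with assumed amplitudes: the LTP step scales with $(w_{\mathrm{max}} - w)$ and the LTD step with $(w - w_{\mathrm{min}})$, so plasticity slows near either bound; the final clip plays the role of the "suppress the weights" treatment speculated on above.

```python
import numpy as np

# Soft-bound weight update: LTP scales with (w_max - w), LTD with
# (w - w_min), so changes shrink as the weight approaches a bound.
# The clip is a hard safeguard for weights forced outside the range.
# Amplitudes are assumed, not taken from the book.

w_min, w_max = 0.0, 1.0
A_plus, A_minus = 0.1, 0.1

def soft_bound_update(w, potentiate):
    """One plasticity event with soft (multiplicative) bounds."""
    if potentiate:
        w = w + A_plus * (w_max - w)           # LTP step shrinks near w_max
    else:
        w = w - A_minus * (w - w_min)          # LTD step shrinks near w_min
    return np.clip(w, w_min, w_max)            # suppress out-of-range weights

print(soft_bound_update(0.5, True))            # large step, far from bound
print(soft_bound_update(0.95, True))           # small step, near bound
print(soft_bound_update(1.3, True))            # forced above the bound -> clipped back
```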

Sequence learning

  1. Fig. 12.9: a simulation shows that sequence learning indeed works in this artificial network.
  2. However, sequence memory requires a lot of neurons and connections. A network of limited size would break down if too many sequences are stored; see the sketch after this list.
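
A minimal sketch of this capacity limit, using a rate-style asymmetric Hebbian rule rather than the book's spiking model: each pattern is wired to recall the next one in the sequence, and recall accuracy degrades once too many patterns are stored in a small network.

```python
import numpy as np

# Sequence storage with an asymmetric Hebbian rule (toy, non-spiking):
# each pattern is wired to recall the *next* pattern, W += x_{mu+1} x_mu^T.
# With few stored patterns the network replays the sequence; overloading
# a small network makes recall break down (capacity limit).

rng = np.random.default_rng(1)
n_neurons = 100

def recall_accuracy(n_patterns):
    """Fraction of correctly recalled transitions for a random cyclic sequence."""
    x = rng.choice([-1.0, 1.0], size=(n_patterns, n_neurons))
    W = sum(np.outer(x[(mu + 1) % n_patterns], x[mu]) for mu in range(n_patterns))
    W = W / n_neurons
    correct = 0.0
    for mu in range(n_patterns):
        next_state = np.sign(W @ x[mu])        # one synchronous update step
        correct += np.mean(next_state == x[(mu + 1) % n_patterns])
    return correct / n_patterns

for n_patterns in (5, 10, 20, 40):
    print(f"{n_patterns:3d} patterns: recall accuracy {recall_accuracy(n_patterns):.2f}")
```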
