21. Asynchronous Firing

Asynchronous firing of a homogeneous network

Review of Concepts

  1. Population activity $A(t)$: the fraction of neurons firing per unit time at time $t$.

  2. Interval distribution $P_I(t-\hat t\vert \hat t)$: given input $I$, the probability density that a neuron whose last spike was at $\hat t$ fires its next spike at time $t$.

  3. Mean firing rate $\nu$: the reciprocal of the mean firing interval $\langle T\rangle$,

    $$ \begin{equation} \nu = \frac{1}{\langle T \rangle}, \qquad \langle T \rangle = \int s P(s) \, ds, \end{equation} $$

    where $s$ is the interspike interval.
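As a quick sanity check of this relation, here is a minimal Python sketch (my own, not from the notes), assuming an exponential interval distribution $P(s) = \lambda e^{-\lambda s}$, for which $\langle T\rangle = 1/\lambda$:

```python
import numpy as np

# Sanity check of nu = 1 / <T> with <T> = int s P(s) ds, using an assumed
# exponential interval distribution P(s) = lam * exp(-lam * s).
lam = 5.0                                # hypothetical rate parameter (1/s)
s = np.linspace(0.0, 10.0, 200_001)      # interval grid; the tail is negligible here
ds = s[1] - s[0]
P = lam * np.exp(-lam * s)               # interval distribution P(s)

mean_T = np.sum(s * P) * ds              # <T> = int s P(s) ds  (Riemann sum)
nu = 1.0 / mean_T                        # mean firing rate

print(mean_T, nu)                        # ~0.2 s and ~5.0 Hz == lam
```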

Asynchronous Firing

Asynchronous firing is a state in which the population activity $A(t)=A_0$ is constant in time. In this section we deal with a homogeneous network and constant external input.

Whether equilibrium is possible under a time-varying external input is still an open question.

At equilibrium we need only the single parameter $s=t-\hat t$ to describe each neuron. Hence the probabilities simplify (here $S_I$ is the survivor function, the probability that the neuron emits no spike between $\hat t$ and $t$):

$$ \begin{align} P_I(t-\hat t\vert \hat t) &= P_0(s), \\
S_I(t-\hat t\vert \hat t) &= S_0(s). \end{align} $$

This corresponds to an equilibrium state of the whole system. In statistical physics we have the principle of equal a priori probability.[^1] The question is: does it hold for neural networks?

The first question, though, is how to describe a microstate of a neural network. To keep things simple, we restrict the discussion to equilibrium systems. We can quickly rule out using the instantaneous spiking states, since they carry far less information than we need.

My idea is to use the time elapsed since the last spike, $s = t-\hat t$. If we know the $s$ of each neuron in an equilibrium (asynchronously firing) state, we can determine the firing of the system probabilistically, or on average.

Here is an example. Suppose we have 4 neurons in a network and the system has reached equilibrium.

| Neuron Number | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| $s^{(1)}$ | 0.1 | 0.7 | 0.4 | 0.2 |
| $s^{(2)}$ | 0.5 | 0.8 | 0.3 | 0.4 |

The superscripts ${}^{(i)}$ label the members of the ensemble. Do these states occur with equal probability?

NO! This is why we have the interval distribution!

This means either that the principle of equal a priori probability does not apply, or that I have not found the correct way to describe the accessible states of the system.

However, another notion that bothers me: in statistical physics we have an Avogadro number of particles, so the relative fluctuations $\sim 1/\sqrt{N}$ are small. What about neural networks? How do we justify the analogy to an equilibrium state/ensemble?

Is $A_0$ the same as the mean firing rate?

Naively, we would expect $A_0$ to equal the mean firing rate, since the system has only one time scale, the mean firing interval, and $A_0$ has dimension $1/[T]$. Here is a hand-waving argument.

$$ \begin{align} &\text{Total number of neurons} \cdot A_0 \cdot \Delta t \\
\equiv{}& \text{Number of neurons firing during time interval } \Delta t \\
\equiv{}& \text{Total number of neurons} \cdot \frac{\Delta t}{\text{Mean firing interval of each neuron}} \end{align} $$

The last step is an educated guess. We are fairly confident in this formula because we can take the limit where $\Delta t$ equals the mean firing interval and all neurons fire with a constant interval.
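To make the counting argument concrete, here is a small sketch (my own construction, with arbitrary numbers) where every neuron fires with a fixed interval $T$ and a random phase:

```python
import numpy as np

# Toy check of the counting argument: N neurons fire perfectly regularly with
# interval T but random phases, so in a window of length dt roughly N * dt / T
# of them fire, i.e. A0 = 1 / T.
rng = np.random.default_rng(0)
N, T, dt = 100_000, 0.1, 0.01          # neurons, ISI (s), observation window (s)
phases = rng.uniform(0.0, T, size=N)   # time of each neuron's next spike

# A neuron spikes inside [0, dt) iff its next spike time falls there (dt < T).
n_fired = np.sum(phases < dt)
A0_estimate = n_fired / (N * dt)       # population activity estimate

print(A0_estimate)                     # ~10.0 == 1 / T
```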

To prove this conjecture, we need to investigate the equations for $A_0$. We consider the normalization equation

$$ \int_{-\infty}^t A_0 S_0(t-\hat t) d\hat t = 1. $$

Change the integration variable from $\hat t$ to $s = t - \hat t$ (so $d\hat t = -ds$ and the limits flip):

$$ \int_{-\infty}^t A_0 S_0(t-\hat t) d\hat t = \int_0^{\infty}A_0 S_0(s)ds. $$

We integrate by parts

$$ \begin{align} &\int_0^{\infty}A_0 S_0(s)\,ds \\
={}& A_0 \int_0^\infty d(S_0 \cdot s) - A_0 \int_0^\infty s \, dS_0 \\
={}& A_0 \cdot s \cdot S_0(s)\big\vert_0^\infty - A_0 \int_0^\infty s \frac{dS_0}{ds} \, ds \\
={}& A_0 \int_0^\infty s P_0(s) \, ds. \end{align} $$

In the last step, the boundary term vanishes (assuming $S_0(s)$ decays faster than $1/s$ as $s \to \infty$), and we used

$$ \begin{equation} \frac{dS_0(s)}{ds} = - P_0(s). \end{equation} $$

The term $\int_0^\infty s P_0(s) ds$ is exactly the definition of mean interspike interval $\langle T\rangle$. So we have proved

$$ \begin{equation} A_0 \langle T\rangle = 1. \end{equation} $$
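A minimal numerical sketch of this result, assuming an exponential survivor function $S_0(s) = e^{-\lambda s}$ (so $P_0(s) = -dS_0/ds = \lambda e^{-\lambda s}$), confirms $A_0 \langle T\rangle = 1$:

```python
import numpy as np

# Verify A0 * <T> = 1 for an assumed exponential survivor function.
lam = 5.0
s = np.linspace(0.0, 10.0, 200_001)
ds = s[1] - s[0]
S0 = np.exp(-lam * s)            # survivor function S0(s)
P0 = lam * np.exp(-lam * s)      # interval distribution, P0 = -dS0/ds

mean_T = np.sum(s * P0) * ds     # <T> = int s P0(s) ds
A0 = 1.0 / (np.sum(S0) * ds)     # from the normalization A0 * int S0(s) ds = 1

print(A0 * mean_T)               # ~1.0, as the derivation claims
```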

Numerically, the textbook shows that the averaged fluctuations $\langle \lvert \bar A(t) - A_0 \rvert^2 \rangle$ decrease as the number of neurons increases (Fig. 6.9).
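The following is a toy sketch (not the book's simulation) with $N$ independent Poisson neurons firing at rate $A_0$; the empirical activity per bin fluctuates around $A_0$ with mean squared deviation shrinking like $1/N$:

```python
import numpy as np

# N independent Poisson neurons at rate A0; spike counts per bin of width dt
# are Poisson with mean N * A0 * dt, so Var(A_bar) = A0 / (N * dt) ~ 1/N.
rng = np.random.default_rng(1)
A0, dt, n_bins = 10.0, 0.01, 10_000
for N in (100, 1_000, 10_000):
    counts = rng.poisson(N * A0 * dt, size=n_bins)
    A_bar = counts / (N * dt)                    # empirical population activity
    print(N, np.mean(np.abs(A_bar - A0) ** 2))   # shrinks roughly like 1/N
```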

Gain function and fixed points

Gain function: $1/\langle T \rangle$ as a function of input current.

$$ \begin{equation} A_0 = g(I_0), \end{equation} $$

where

$$ I_0 = I^{\mathrm{ext}}_0 + I^{\mathrm{int}}_0. $$

The subscript $0$ indicates that we are talking about the stationary case. If we apply the SRM0 model, the potential

$$ \begin{equation} h(t) = J_0 A_0 \int_0^\infty \epsilon_0(s) ds + I^{\mathrm{ext}}_0 \int_0^\infty \kappa_0 (s)ds \end{equation} $$

becomes time independent.

Since $\int_0^\infty \epsilon_0(s) ds$ is time independent, we can define a new quantity

$$ J_0^{(1)} \equiv J_0 \int_0^\infty \epsilon_0(s) ds. $$

Meanwhile, we observe that $\int_0^\infty \kappa_0 (s)ds$ is an effective resistance $R$.

Thus we obtain

$$ h = J_0^{(1)} A_0 + I^{\mathrm{ext}}_0 R. $$

To obtain the current, we divide the equation by $R$,

$$ I_0 = J_0^{(2)} A_0 + I^{\mathrm{ext}}_0, $$

where

$$ J_0^{(2)} = J_0^{(1)} /R. $$

For convenience we drop all these superscripts and simply write $J_0$ for $J_0^{(2)}$, i.e.,

$$ I_0 = J_0 A_0 + I^{\mathrm{ext}}_0. $$

The gain function becomes

$$ A_0 = g(J_0 A_0 + I^{\mathrm{ext}}_0). $$

This self-consistent equation looks complicated. A graphical method is provided in the textbook. An easy way to understand the method is to split the equation into two:

$$ \begin{align} A_0 &= g(I_0), \\
A_0 &= \frac{I_0 - I^{\mathrm{ext}}_0 }{J_0}, \end{align} $$

which is equivalent to

$$ \begin{equation} \frac{I_0 - I^{\mathrm{ext}}_0 }{J_0} =g(I_0) \label{eq-graphical-solution-illustration} \end{equation} $$

If we plot both sides as functions of $I_0$, the solutions are where LHS = RHS in Eq. (\ref{eq-graphical-solution-illustration}). More explicitly, we plot the LHS and RHS against $I_0$, and the intersections are the solutions.
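Here is a sketch of this graphical solution in code, with an assumed sigmoidal gain function $g$ and placeholder values for $J_0$ and $I^{\mathrm{ext}}_0$; the real $g$ comes from the neuron model, this one is only for illustration:

```python
import numpy as np
from scipy.optimize import brentq

def g(I0):
    # Hypothetical sigmoidal gain function (Hz), saturating at 50 Hz.
    return 50.0 / (1.0 + np.exp(-(I0 - 1.0) / 0.2))

J0, I_ext = 0.01, 0.8   # coupling and external current, placeholder values

# Solve (I0 - I_ext) / J0 = g(I0): find where the line crosses the gain curve.
# The bracket works because the line starts below g at I_ext and, since g
# saturates at 50, surely exceeds it beyond I_ext + J0 * 50.
root = brentq(lambda I0: (I0 - I_ext) / J0 - g(I0), I_ext, I_ext + J0 * 50.0 + 1.0)
A0 = g(root)
print(root, A0)   # stationary input and population activity at the fixed point
```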

Low Connectivity Networks

The neurons are the same as before, but the network is no longer fully connected.

  1. Each neuron takes input from $C$ other neurons.
  2. Sparse connectivity: $C/N \ll 1$.
  3. Sparseness also means any two neurons $i$ and $j$ rarely take input from the same neurons.
  4. If the presynaptic signals are stochastic, the neurons receive stochastic inputs, which can be described by diffusive noise.

How can we be sure that the presynaptic signals are stochastic? This loophole is closed by showing, self-consistently, that the system indeed admits such solutions.

Assume two populations, $N_E$ excitatory neurons and $N_I$ inhibitory neurons; each neuron receives $C_E$ connections from the excitatory population and $C_I$ from the inhibitory population.

$$ \begin{equation} h_0 = R I^{\mathrm{ext}} + \tau_m \sum_j \nu_j w_j \end{equation} $$

We have, assuming all presynaptic neurons fire at a common rate $\nu_j = \nu$,

$$ \begin{align} \sum_j \nu_j w_j &= \nu (w_E C_E + w_I C_I) \\
&= \nu w_E C_E (1 - g\gamma), \end{align} $$

where

$$ \begin{align} g &= - \frac{w_I}{w_E}, \\
\gamma &= \frac{C_I}{C_E}. \end{align} $$

Similarly,

$$ \begin{align} \sigma^2 &= \tau_m \sum_j \nu_j w_j^2 \\
&= \tau_m \nu (w_E^2 C_E + w_I^2 C_I) \\
&= \tau_m \nu w_E^2 C_E (1+g^2\gamma). \end{align} $$

Then we can calculate the population activity via Eq. 6.110 of the textbook, which is a function of $h_0$ and $\sigma$. Hence we can find the population activity given all the parameters (a numerical sketch follows the list):

  1. Threshold $\theta$
  2. Membrane time constant $\tau_m$
  3. $w_E$
  4. $C_E$
  5. $g$
  6. $\gamma$
  7. External $I^{\mathrm{ext}}$
  8. Resistance $R$
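A minimal sketch assembling $h_0$ and $\sigma$ from these parameters, with placeholder values roughly in the range used for such sparse excitatory-inhibitory networks (the final step, evaluating Eq. 6.110, is omitted):

```python
import math

# Placeholder parameter values (assumptions for illustration only).
tau_m = 0.02            # membrane time constant (s)
R = 1.0                 # effective resistance
I_ext = 0.5             # external current
w_E, C_E = 0.1, 400     # excitatory weight and excitatory in-degree
g_ratio = 4.0           # g = -w_I / w_E
gamma = 0.25            # gamma = C_I / C_E
nu = 10.0               # common presynaptic firing rate (Hz)

# Mean input and noise amplitude of the diffusive-noise description, as derived above.
h0 = R * I_ext + tau_m * nu * w_E * C_E * (1.0 - g_ratio * gamma)
sigma = math.sqrt(tau_m * nu * w_E**2 * C_E * (1.0 + g_ratio**2 * gamma))

print(h0, sigma)        # feed these into the gain function (Eq. 6.110)
```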

The book gives two examples verifying that, under certain conditions, such a randomly connected sparse network is equivalent to a network driven by noise.

References and Notes

  1. Weiss Theory of Ferromagnetism

[^1]: For some explanation of this concept, please read *The principle of equal a priori probabilities* and *Probability calculations*.
