ginzburg_neuron#
- class brainpy.state.ginzburg_neuron(in_size, tau_m=Quantity(10., 'ms'), theta=Quantity(0., 'mV'), c_1=Quantity(0., '1 / mV'), c_2=1.0, c_3=Quantity(1., '1 / mV'), y_initializer=Constant(value=0.0), stochastic_update=True, rng_seed=0, name=None)#
Binary stochastic neuron with sigmoidal/affine gain function.
This model re-implements the NEST
ginzburg_neuron, a binary neuron that updates its output state \(y \in \{0, 1\}\) stochastically at Poisson-distributed intervals. The transition probability depends on a persistent input state \(h\) via a combined linear-sigmoidal gain function.
1. Model Dynamics
The neuron maintains a persistent input \(h\) (in mV) and a binary output \(y \in \{0, 1\}\). State transitions occur at Poisson-distributed times with mean interval \(\tau_m\). At each update, the transition probability is:
\[g(h) = c_1 h + c_2 \frac{1 + \tanh(c_3 (h - \theta))}{2}\]
where:
\(c_1\) (1/mV): linear gain coefficient
\(c_2\) (dimensionless): sigmoidal amplitude prefactor
\(c_3\) (1/mV): sigmoidal slope parameter
\(\theta\) (mV): threshold for sigmoidal activation
The new binary state is sampled as:
\[y \leftarrow \mathbb{1}[U < g(h + c)],\]
where \(U \sim \mathrm{Uniform}(0, 1)\) and \(c\) is the current input for the present time step.
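As a concrete illustration, the gain function and the Bernoulli sampling step above can be sketched in plain NumPy (units stripped; this is a hedged sketch, not the brainpy.state implementation, which uses JAX and unit-aware quantities):

```python
import numpy as np

def gain(h, c_1=0.0, c_2=1.0, c_3=1.0, theta=0.0):
    # Combined linear-sigmoidal gain: g(h) = c_1*h + c_2*(1 + tanh(c_3*(h - theta)))/2
    # h and theta are in mV; c_1 and c_3 are in 1/mV; c_2 is dimensionless.
    return c_1 * h + c_2 * (1.0 + np.tanh(c_3 * (h - theta))) / 2.0

def sample_state(h, c, rng, **params):
    # Bernoulli trial: y <- 1[U < g(h + c)], U ~ Uniform[0, 1)
    return float(rng.uniform() < gain(h + c, **params))

rng = np.random.default_rng(0)
print(gain(0.0))                      # 0.5: pure sigmoid reaches half-maximum at h = theta
print(sample_state(0.0, 100.0, rng))  # 1.0: gain saturates at 1 for large total input
```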
2. Update Scheduling
When stochastic_update=True (default), updates occur stochastically:
At initialization, draw \(\Delta t_0 \sim \mathrm{Exp}(\tau_m)\) and set \(t_{\text{next}} = \Delta t_0\).
At each time step, check if \(t + dt > t_{\text{next}}\) (strict inequality).
If true, perform state transition and draw new \(\Delta t \sim \mathrm{Exp}(\tau_m)\), then update \(t_{\text{next}} \leftarrow t_{\text{next}} + \Delta t\).
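The scheduling rule above can be sketched as follows (a minimal NumPy sketch with units stripped; variable names are illustrative, not the library's internals). Over a long run, the update rate should approach \(1/\tau_m\):

```python
import numpy as np

rng = np.random.default_rng(0)
tau_m = 10.0  # mean update interval (ms)
dt = 0.1      # simulation step (ms)

t_next = rng.exponential(tau_m)  # initial Delta t_0 ~ Exp(tau_m)

def scheduled(t, t_next):
    # An update fires iff t + dt > t_next (strict inequality); after firing,
    # the next update time advances by a fresh Exp(tau_m) interval.
    if t + dt > t_next:
        return True, t_next + rng.exponential(tau_m)
    return False, t_next

n_updates = 0
for k in range(100_000):
    fired, t_next = scheduled(k * dt, t_next)
    n_updates += fired
print(n_updates)  # roughly 100_000 * dt / tau_m, i.e. about 1000
```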
When stochastic_update=False, the neuron updates at every time step, but transitions remain stochastic according to \(g(h+c)\).
3. Input Accumulation
Following NEST semantics, the update order is:
Accumulate delta inputs (from binary events) into \(h\).
Read current input \(c\) for the present step.
Evaluate gain function \(g(h + c)\) with total input.
Sample new binary state if scheduled for update.
Delta inputs represent state-change events from upstream binary neurons: positive for up-transitions (0→1), negative for down-transitions (1→0).
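The four-step ordering above can be condensed into a single-neuron sketch (units stripped; `update_once` and its arguments are illustrative names, not the brainpy.state API):

```python
import numpy as np

rng = np.random.default_rng(0)

def update_once(y, h, delta_in, c, g, is_scheduled=True):
    # 1. Accumulate delta inputs (state-change events) into the persistent h.
    h = h + delta_in
    # 2.-3. Read the current input c and evaluate the gain on the total input.
    p = g(h + c)
    # 4. Sample a new binary state only if this neuron is scheduled to update.
    if is_scheduled:
        y = float(rng.uniform() < p)
    return y, h

# With g >= 1 everywhere, the neuron always switches on when updated,
# and h retains the accumulated delta input afterwards.
y, h = update_once(0.0, 0.0, delta_in=1.0, c=0.5, g=lambda x: 1.5)
print(y, h)  # 1.0 1.0
```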
4. Gain Function Properties
The combined linear-sigmoidal gain supports several regimes:
Linear neurons (\(c_2 = 0\), \(c_1 \neq 0\)): \(g(h) = c_1 h\)
Sigmoidal neurons (\(c_1 = 0\), \(c_2 = 1\)): \(g(h) = \frac{1 + \tanh(c_3(h - \theta))}{2}\)
Hybrid models (\(c_1, c_2 \neq 0\)): affine-shifted sigmoid with linear component
The sigmoidal component saturates between 0 and \(c_2\), with steepness controlled by \(c_3\) and center at \(\theta\).
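The saturation bounds of the sigmoidal component can be checked numerically (a small sketch with units stripped; the parameter values are arbitrary examples):

```python
import numpy as np

def sigmoid_part(h, c_2=0.8, c_3=0.5, theta=3.0):
    # Sigmoidal component alone: bounded between 0 and c_2,
    # with half-maximum c_2/2 at h = theta (h in mV, c_3 in 1/mV).
    return c_2 * (1.0 + np.tanh(c_3 * (h - theta))) / 2.0

print(sigmoid_part(-1e3))  # 0.0 (lower saturation)
print(sigmoid_part(+1e3))  # 0.8 (upper saturation = c_2)
print(sigmoid_part(3.0))   # 0.4 (half-maximum c_2/2 at theta)
```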
5. Probability Clipping
As in NEST, probabilities \(g(h+c)\) are not explicitly clipped. The comparison \(U < g(h+c)\) provides implicit clipping:
\(g < 0\) → probability 0 (never transition to 1)
\(g > 1\) → probability 1 (always transition to 1)
This avoids numerical issues with negative or super-unitary probabilities while remaining mathematically equivalent to clipping \(g\) to \([0, 1]\).
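The implicit clipping can be demonstrated empirically (a hedged NumPy sketch; `transition_freq` is an illustrative helper, not part of the library):

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_freq(p, n=10_000):
    # Empirical frequency of the event U < p for U ~ Uniform[0, 1)
    return float(np.mean(rng.uniform(size=n) < p))

print(transition_freq(-0.3))  # 0.0 -- U is never below a negative "probability"
print(transition_freq(1.7))   # 1.0 -- U in [0, 1) is always below 1.7
print(transition_freq(0.5))   # ~0.5 -- in-range probabilities behave normally
```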
6. Numerical Implementation
All state variables use float64 precision for accurate random sampling.
Random number generation uses jax.random with stateful PRNGKey updates.
State transitions use jax.lax.stop_gradient to prevent backpropagation through stochastic sampling operations.
- Parameters:
in_size (Size) – Number or shape of neurons in the population. Can be an integer (1D array) or tuple of integers (multi-dimensional array).
tau_m (ArrayLike, optional) – Mean inter-update interval \(\tau_m\) (time units). Must be strictly positive. Controls the expected time between state transitions in Poisson update mode. Default: 10.0 * u.ms.
theta (ArrayLike, optional) – Threshold parameter \(\theta\) for the sigmoidal component (voltage units). Determines the input level at which the sigmoid reaches half-maximum. Default: 0.0 * u.mV.
c_1 (ArrayLike, optional) – Linear gain coefficient \(c_1\) (1/voltage units). Sets the slope of the linear component. Use 0.0 / u.mV for purely sigmoidal neurons. Default: 0.0 / u.mV.
c_2 (ArrayLike, optional) – Sigmoidal gain prefactor \(c_2\) (dimensionless). Amplitude of the sigmoidal component. Use 1.0 for a standard sigmoid or 0.0 for purely linear neurons. Default: 1.0.
c_3 (ArrayLike, optional) – Sigmoidal slope parameter \(c_3\) (1/voltage units). Controls the steepness of the sigmoid. Larger values produce sharper transitions around \(\theta\). Default: 1.0 / u.mV.
y_initializer (Callable[[Size, Optional[int]], ArrayLike], optional) – Initializer for the binary state \(y\). Should return an array of 0.0 or 1.0 values. Default: braintools.init.Constant(0.0) (all neurons start in state 0).
stochastic_update (bool, optional) – If True (default), use Poisson-distributed update times as in NEST. If False, update at every time step (synchronous updates); transitions remain stochastic according to the gain function. Default: True.
rng_seed (int, optional) – Seed for the internal random number generator. Affects both uniform sampling for state transitions and exponential sampling for update intervals. Default: 0.
name (str, optional) – Unique identifier for this module instance. If None, auto-generated.
Parameter Mapping
Correspondence with NEST ginzburg_neuron:

| brainpy.state | NEST | Notes |
| --- | --- | --- |
| tau_m | tau_m | Mean update interval |
| theta | theta | Sigmoid threshold |
| c_1 | c_1 | Linear gain |
| c_2 | c_2 | Sigmoid amplitude |
| c_3 | c_3 | Sigmoid slope |
| y | S_ (state variable) | Binary output (0 or 1) |
| h | h_ (state variable) | Persistent input |
| stochastic_update=True | Default NEST behavior | Poisson update times |
| stochastic_update=False | Not directly available | Synchronous updates |
State Variables
- y : ShortTermState, shape=(in_size,), dtype=float64
Binary output state. Values are 0.0 (inactive) or 1.0 (active). Updated stochastically according to the gain function.
- h : ShortTermState, shape=(in_size,), dtype=float64, units=mV
Persistent input state. Accumulates delta inputs from upstream neurons and determines the transition probability via the gain function.
- t_next : ShortTermState, shape=(in_size,), dtype=float64, units=ms
Next scheduled update time (only present when stochastic_update=True). Incremented by exponentially distributed intervals after each update.
- rng_key : ShortTermState, shape=(2,), dtype=uint32
JAX PRNG key for random number generation. Automatically split and updated on each random sample.
Notes
Binary Communication: In NEST, binary neurons communicate state changes (not absolute states) via spike multiplicity encoding:
0→1 transition sends +1 event
1→0 transition sends -1 event (represented as 2× outgoing spike)
No change sends no event
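The event encoding above amounts to a signed difference of successive states (a hypothetical helper for illustration only, not part of the brainpy.state or NEST API):

```python
def delta_event(y_old, y_new):
    # +1 for a 0->1 transition, -1 for a 1->0 transition, 0 (no event) otherwise.
    # In NEST the -1 case is signalled by sending the spike with multiplicity 2.
    return y_new - y_old

print(delta_event(0, 1))  # 1: up-transition
print(delta_event(1, 0))  # -1: down-transition
print(delta_event(1, 1))  # 0: no event
```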
In brainpy.state, this is represented via delta inputs: positive delta for up-transitions, negative for down-transitions. Projections connecting binary neurons should use align_pre_projection to properly encode state changes.
Gain Function Design: The mixed linear-sigmoidal form allows flexible response properties:
Pure sigmoid (\(c_1=0, c_2=1\)): bounded response, saturates at high inputs
Linear (\(c_2=0\)): unbounded response, no saturation
Mixed: linear baseline with sigmoidal nonlinearity
For biological realism, typical settings might be \(c_1=0, c_2=1, c_3>0\), producing a graded sigmoidal response. For theoretical work (e.g., mean-field analysis), \(c_1 \neq 0\) can simplify calculations.
Stochasticity: This model introduces two sources of randomness:
Update timing (when stochastic_update=True): Poisson process with rate \(1/\tau_m\)
State transitions: Bernoulli trial with probability \(g(h+c)\)
These combine to produce rich stochastic dynamics even with constant input.
Performance Considerations: Binary neurons are computationally lightweight (no differential equations to integrate), making them suitable for large-scale network simulations. The stochastic_update=False mode eliminates the exponential-sampling overhead while retaining stochastic transitions.
See also
erfc_neuron – Binary neuron with error-function gain
mcculloch_pitts_neuron – Deterministic binary threshold neuron
Examples
Basic usage with default sigmoidal gain:
>>> import brainpy.state as bst
>>> import saiunit as u
>>> import brainstate
>>>
>>> # Create population of 100 binary neurons with sigmoidal gain
>>> neurons = bst.ginzburg_neuron(100, tau_m=10*u.ms, theta=5*u.mV, c_3=0.5/u.mV)
>>>
>>> # Initialize and simulate
>>> with brainstate.environ.context(dt=0.1*u.ms):
...     neurons.init_all_states()
...     # Apply constant input and observe stochastic transitions
...     states = []
...     for _ in range(1000):
...         y = neurons.update(x=8*u.mV)
...         states.append(y.mean())  # Average activity across population
Linear neuron (c_2=0):
>>> # Pure linear gain: g(h) = c_1 * h
>>> linear_neurons = bst.ginzburg_neuron(
...     50, c_1=0.1/u.mV, c_2=0.0, tau_m=5*u.ms
... )
Hybrid linear-sigmoidal neuron:
>>> # Combined gain with linear baseline
>>> hybrid_neurons = bst.ginzburg_neuron(
...     50,
...     tau_m=8*u.ms,
...     theta=3*u.mV,
...     c_1=0.05/u.mV,  # Linear component
...     c_2=0.8,        # Sigmoid amplitude
...     c_3=0.3/u.mV    # Sigmoid slope
... )
Synchronous updates (stochastic_update=False):
>>> # Update at every time step instead of Poisson times
>>> sync_neurons = bst.ginzburg_neuron(
...     100, tau_m=10*u.ms, stochastic_update=False
... )
>>>
>>> with brainstate.environ.context(dt=0.1*u.ms):
...     sync_neurons.init_all_states()
...     # Transitions occur every step, but stochastically
...     for _ in range(100):
...         y = sync_neurons.update(x=5*u.mV)
Network with binary-binary connections:
>>> import brainevent as be
>>>
>>> pre = bst.ginzburg_neuron(100, theta=0*u.mV, c_2=1.0, c_3=1.0/u.mV)
>>> post = bst.ginzburg_neuron(100, theta=2*u.mV, c_2=1.0, c_3=0.5/u.mV)
>>>
>>> # Connect with fixed probability, encoding state changes as delta inputs
>>> proj = be.nn.align_pre_projection(
...     pre=pre, post=post,
...     comm=be.nn.FixedProb(100, 100, prob=0.1, weight=0.5*u.mV)
... )
>>>
>>> net = brainstate.nn.Module([pre, post, proj])
- init_state(**kwargs)[source]#
Initialize neuron state variables.
Creates the binary output state \(y\), the persistent input \(h\), the PRNG key, and (if stochastic_update=True) the next update time \(t_{\text{next}}\).
- Parameters:
**kwargs – Unused compatibility parameters accepted by the base-state API.
Notes
Binary state \(y\) is initialized using y_initializer (default: all zeros).
Input state \(h\) is initialized to zero.
Next update time \(t_{\text{next}}\) is drawn from the \(\mathrm{Exp}(\tau_m)\) distribution when stochastic_update=True.
All state arrays use float64 dtype for precise random sampling.
- update(x=Quantity(0., 'mV'))[source]#
Perform one simulation step with stochastic state transition.
Accumulates inputs, evaluates gain function, and (if scheduled or in synchronous mode) performs Bernoulli trial for state transition.
- Parameters:
x (ArrayLike, optional) – External current input for this time step (voltage units). Can be a scalar (broadcast to all neurons) or an array with shape matching in_size. Default: 0.0 * u.mV.
- Returns:
y – Updated binary state (0.0 or 1.0) after stochastic transition.
- Return type:
jax.Array, shape=(in_size,), dtype=float64
Notes
Update sequence (matching NEST):
Accumulate delta inputs into \(h\): \(h \leftarrow h + \Delta h\)
Compute total input: \(h_{\text{total}} = h + c\) (current inputs)
Evaluate gain: \(p = g(h_{\text{total}})\)
If scheduled (stochastic_update=True) or always (stochastic_update=False):
Draw \(U \sim \mathrm{Uniform}(0,1)\)
Set \(y \leftarrow \mathbb{1}[U < p]\)
If stochastic_update=True, update \(t_{\text{next}}\)
Stochastic update timing:
When stochastic_update=True, updates occur when \(t + dt > t_{\text{next}}\) (strict inequality). After an update, draw \(\Delta t \sim \mathrm{Exp}(\tau_m)\) and set \(t_{\text{next}} \leftarrow t_{\text{next}} + \Delta t\).
Synchronous mode:
When stochastic_update=False, neurons update at every time step. Transitions are still stochastic (Bernoulli with probability \(p\)), but no longer Poisson-distributed in time.
Non-differentiability:
State transitions use jax.lax.stop_gradient to prevent backpropagation through stochastic sampling. For gradient-based learning, consider differentiable rate-based neurons or surrogate gradient methods.