threshold_lin_rate_opn#

class brainpy.state.threshold_lin_rate_opn(in_size, tau=Quantity(10., 'ms'), sigma=1.0, mu=0.0, g=1.0, theta=0.0, alpha=inf, mult_coupling=False, linear_summation=True, rate_initializer=Constant(value=0.0), noise_initializer=Constant(value=0.0), noisy_rate_initializer=Constant(value=0.0), name=None)#

NEST-compatible output-noise threshold-linear rate neuron.

Implements the NEST threshold_lin_rate_opn model, an output-noise rate neuron with threshold-linear gain function. Unlike the input-noise variant (threshold_lin_rate_ipn), noise is applied to the output after deterministic dynamics, leading to different stationary distributions and noise scaling.

Mathematical Description

1. Continuous-Time Deterministic Dynamics with Output Noise

The rate state \(X(t)\) evolves according to the deterministic ODE:

\[\tau\frac{dX(t)}{dt} = -X(t) + \mu + I_\mathrm{net}(t),\]

where:

  • \(\tau > 0\) is the time constant (ms).

  • \(\mu\) is the mean drive (dimensionless, external constant input).

  • \(I_\mathrm{net}(t)\) is the network input (see below).

The output rate \(X_\mathrm{noisy}(t)\) is obtained by adding noise to the deterministic state:

\[X_\mathrm{noisy}(t) = X(t) + \sqrt{\frac{\tau}{h}}\,\sigma\,\xi(t),\]

where:

  • \(\sigma \ge 0\) is the output-noise scale (dimensionless).

  • \(\xi(t)\sim\mathcal{N}(0,1)\) is standard Gaussian white noise.

  • \(h=dt\) is the simulation time step (ms).

The \(\sqrt{\tau/h}\) scaling compensates for the number of independent noise samples per unit time, so that the statistics of the noisy output process do not depend on the chosen step size.
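As a standalone NumPy sketch (not the library implementation; names are illustrative), the output-noise computation and its step-dependent standard deviation \(\sqrt{\tau/h}\,\sigma\) can be checked empirically:

```python
import numpy as np

tau, sigma = 10.0, 1.0            # time constant (ms) and noise scale
h = 0.1                           # simulation step (ms)
rng = np.random.default_rng(0)

X = np.zeros(100_000)             # deterministic state (here at rest)
xi = rng.standard_normal(X.shape) # standard Gaussian samples
X_noisy = X + np.sqrt(tau / h) * sigma * xi

# empirical std is close to sqrt(tau/h)*sigma = 10.0 for this h
print(X_noisy.std())
```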

2. Threshold-Linear Gain Function

The input nonlinearity \(\phi\) is identical to that of the input-noise variant (here the argument \(h\) denotes the summed input, not the time step):

\[\phi(h) = \min(\max(g(h-\theta), 0), \alpha),\]

where:

  • \(g > 0\) is the gain slope (dimensionless).

  • \(\theta\) is the activation threshold (dimensionless).

  • \(\alpha > 0\) is the saturation level (dimensionless).
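A minimal NumPy sketch of this gain function (the function name and defaults are illustrative, not the library's internals):

```python
import numpy as np

def phi(h, g=1.0, theta=0.0, alpha=np.inf):
    """Threshold-linear gain: clip g*(h - theta) to [0, alpha]."""
    return np.minimum(np.maximum(g * (h - theta), 0.0), alpha)

print(phi(-1.0))                   # below threshold -> 0.0
print(phi(2.0, g=1.5, theta=0.5))  # linear region  -> 1.5*(2.0-0.5) = 2.25
print(phi(100.0, alpha=5.0))       # saturated      -> 5.0
```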

3. Network Input Structure

The network input \(I_\mathrm{net}(t)\) is computed according to:

\[I_\mathrm{net}(t) = \phi(I_\mathrm{ex}(t) + I_\mathrm{in}(t)) \quad\text{(if linear\_summation=True)},\]

or:

\[I_\mathrm{net}(t) = \phi(I_\mathrm{ex}(t)) + \phi(I_\mathrm{in}(t)) \quad\text{(if linear\_summation=False)},\]

where \(I_\mathrm{ex}(t)\) and \(I_\mathrm{in}(t)\) are excitatory and inhibitory branches (sign-separated by event weight).

Note: Multiplicative coupling is not supported (the mult_coupling parameter is accepted for API compatibility but has no effect).
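The two summation modes can be sketched as follows (phi, I_ex, and I_in are illustrative stand-ins for the internal quantities; this is not the library code):

```python
import numpy as np

def phi(h, g=1.0, theta=0.0, alpha=np.inf):
    return np.minimum(np.maximum(g * (h - theta), 0.0), alpha)

def network_input(I_ex, I_in, linear_summation=True):
    if linear_summation:
        return phi(I_ex + I_in)       # gain applied to the summed input
    return phi(I_ex) + phi(I_in)      # gain applied per branch

# With I_ex = 2, I_in = -3 the two modes give different results:
print(network_input(2.0, -3.0, True))    # phi(-1) = 0.0
print(network_input(2.0, -3.0, False))   # phi(2) + phi(-3) = 2.0
```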

4. Discrete-Time Integration (Exponential Euler)

For time step \(h=dt\) (in ms), the deterministic dynamics are integrated using exponential Euler:

\[X_{n+1} = P_1 X_n + P_2 (\mu + I_\mathrm{net,n}),\]

where:

\[P_1 = \exp\left(-\frac{h}{\tau}\right), \quad P_2 = 1 - P_1 = -\mathrm{expm1}\left(-\frac{h}{\tau}\right).\]

The noisy output is:

\[X_\mathrm{noisy,n} = X_n + \sqrt{\frac{\tau}{h}}\,\sigma\,\xi_n,\]

where \(\xi_n\sim\mathcal{N}(0,1)\).
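One discrete step of the scheme above, written out in NumPy (a sketch under the equations in this section; variable names are illustrative):

```python
import numpy as np

def step(X, mu, I_net, tau, h, sigma, xi):
    """One exponential-Euler step with additive output noise."""
    P1 = np.exp(-h / tau)
    P2 = -np.expm1(-h / tau)                       # 1 - P1, evaluated stably
    X_noisy = X + np.sqrt(tau / h) * sigma * xi    # outgoing noisy rate
    X_next = P1 * X + P2 * (mu + I_net)            # deterministic update
    return X_next, X_noisy

X_next, X_noisy = step(X=0.0, mu=1.0, I_net=0.0, tau=10.0, h=0.1,
                       sigma=0.0, xi=0.0)
# with sigma = 0 the output equals the state; X relaxes toward mu
```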

5. Update Ordering (Matching NEST ``rate_neuron_opn_impl.h``)

Per simulation step:

  1. Draw noise: sample \(\xi_n\sim\mathcal{N}(0,1)\), compute \(\mathrm{noise}_n=\sigma\,\xi_n\).

  2. Build noisy output: compute \(X_\mathrm{noisy,n}=X_n+\sqrt{\tau/h}\,\mathrm{noise}_n\) and store as both delayed_rate and instant_rate (outgoing values for projections).

  3. Propagate deterministic dynamics: apply exponential Euler to update \(X_n\).

  4. Read event buffers: drain delayed events arriving at current step; accumulate instantaneous events.

  5. Apply network input with threshold-linear gain:

    • linear_summation=True: \(X_{n+1} \gets X_{n+1} + P_2\,\phi(I_\mathrm{ex}+I_\mathrm{in})\).

    • linear_summation=False: \(X_{n+1} \gets X_{n+1} + P_2\,[\phi(I_\mathrm{ex})+\phi(I_\mathrm{in})]\).

  6. Update state variables: rate, noise, noisy_rate, delayed_rate, instant_rate, _step_count.

Note: Unlike the input-noise variant, there is no rectification option for output-noise neurons. Noise is applied only to the output and does not affect the internal deterministic state.
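The ordering above can be condensed into a single-step sketch (pure NumPy; event parsing, delay queues, and step counting are omitted, and all names are illustrative):

```python
import numpy as np

def phi(h, g=1.0, theta=0.0, alpha=np.inf):
    return np.minimum(np.maximum(g * (h - theta), 0.0), alpha)

def update(X, I_ex, I_in, mu=0.0, tau=10.0, h=0.1, sigma=1.0,
           linear_summation=True, rng=np.random.default_rng(0)):
    xi = rng.standard_normal(np.shape(X))        # 1. draw noise
    noise = sigma * xi
    X_noisy = X + np.sqrt(tau / h) * noise       # 2. noisy outgoing rate
    P1, P2 = np.exp(-h / tau), -np.expm1(-h / tau)
    X_new = P1 * X + P2 * mu                     # 3. propagate dynamics
    if linear_summation:                         # 5. apply network input
        X_new = X_new + P2 * phi(I_ex + I_in)
    else:
        X_new = X_new + P2 * (phi(I_ex) + phi(I_in))
    return X_new, noise, X_noisy                 # 6. new state variables

X, noise, X_noisy = update(X=np.zeros(4), I_ex=1.0, I_in=-0.5)
```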

6. Numerical Stability and Computational Complexity

  • Construction enforces \(\tau>0\), \(\sigma\ge 0\).

  • The threshold-linear gain is evaluated using np.minimum and np.maximum for numerically stable clipping.

  • Per-call cost is \(O(\prod\mathrm{varshape})\) with vectorized NumPy operations in float64.

  • The exponential Euler scheme is numerically stable for all \(h>0\).
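The preference for np.expm1 in the propagator can be seen directly: for sufficiently small \(h/\tau\), the naive expression \(1-e^{-h/\tau}\) cancels to zero entirely, while \(-\mathrm{expm1}(-h/\tau)\) retains the leading term. A quick check:

```python
import numpy as np

x = 1e-20                      # e.g. h/tau for an extremely small step
naive = 1.0 - np.exp(-x)       # exp(-x) rounds to 1.0, so this is 0.0
stable = -np.expm1(-x)         # keeps the leading term, ~x

print(naive, stable)
```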

Parameters:
  • in_size (Size) – Population shape (tuple or int). All per-neuron parameters are broadcast to self.varshape.

  • tau (ArrayLike, optional) – Time constant \(\tau\) (ms). Scalar or array broadcastable to self.varshape. Must be \(>0\). Default: 10.0 * u.ms.

  • sigma (ArrayLike, optional) – Output-noise scale \(\sigma\) (dimensionless). Scalar or array broadcastable to self.varshape. Must be \(\ge 0\). Default: 1.0.

  • mu (ArrayLike, optional) – Mean drive \(\mu\) (dimensionless). Scalar or array broadcastable to self.varshape. External constant input to the rate dynamics. Default: 0.0.

  • g (ArrayLike, optional) – Gain slope \(g\) (dimensionless) for the threshold-linear function \(\phi(h)=\min(\max(g(h-\theta),0),\alpha)\). Scalar or array broadcastable to self.varshape. Default: 1.0.

  • theta (ArrayLike, optional) – Activation threshold \(\theta\) (dimensionless). The gain function is zero for \(h<\theta\). Scalar or array broadcastable to self.varshape. Default: 0.0.

  • alpha (ArrayLike, optional) – Saturation level \(\alpha\) (dimensionless). The gain function saturates at \(\alpha\) for large inputs. Scalar or array broadcastable to self.varshape. Default: np.inf (no saturation).

  • mult_coupling (bool, optional) – API compatibility flag. Has no effect on dynamics for threshold-linear neurons (multiplicative coupling factors are constant 1.0). Default: False.

  • linear_summation (bool, optional) – Controls where the threshold-linear gain is applied. If True, the gain is applied to the sum of excitatory and inhibitory inputs. If False, the gain is applied separately to each input branch (matching NEST event semantics). Default: True.

  • rate_initializer (Callable, optional) – Initializer for the rate state variable \(X_0\). Callable compatible with braintools.init API. Default: braintools.init.Constant(0.0).

  • noise_initializer (Callable, optional) – Initializer for the noise state variable (records last noise sample \(\sigma\,\xi_{n-1}\)). Callable compatible with braintools.init API. Default: braintools.init.Constant(0.0).

  • noisy_rate_initializer (Callable, optional) – Initializer for the noisy_rate state variable \(X_\mathrm{noisy,0}\) (initial noisy output). Callable compatible with braintools.init API. Default: braintools.init.Constant(0.0).

  • name (str or None, optional) – Module name for identification in hierarchies. If None, an auto-generated name is used. Default: None.

Parameter Mapping

The following table maps NEST threshold_lin_rate_opn parameters to brainpy.state equivalents:

NEST Parameter        brainpy.state               Default
tau                   tau                         10 ms
sigma                 sigma                       1.0
mu                    mu                          0.0
g (gain slope)        g                           1.0
theta (threshold)     theta                       0.0
alpha (saturation)    alpha                       inf
mult_coupling         mult_coupling (no effect)   False
linear_summation      linear_summation            True

Note: Unlike threshold_lin_rate_ipn, this model does not have lambda (passive decay is fixed at 1.0), rectify_rate, or rectify_output parameters.

rate#

Current deterministic rate state \(X_n\) (float64 array of shape self.varshape or (batch_size,) + self.varshape).

Type:

brainstate.ShortTermState

noise#

Last noise sample \(\sigma\,\xi_{n-1}\) (float64 array, same shape as rate).

Type:

brainstate.ShortTermState

noisy_rate#

Noisy output rate \(X_\mathrm{noisy,n}=X_n+\sqrt{\tau/h}\,\sigma\,\xi_n\) (float64 array, same shape as rate).

Type:

brainstate.ShortTermState

instant_rate#

Noisy rate value used for instantaneous projections (float64 array, same shape as rate).

Type:

brainstate.ShortTermState

delayed_rate#

Noisy rate value used for delayed projections (float64 array, same shape as rate).

Type:

brainstate.ShortTermState

_step_count#

Internal step counter for delayed event scheduling (int64 scalar).

Type:

brainstate.ShortTermState

_delayed_ex_queue#

Internal queue mapping step_idx to accumulated excitatory delayed events.

Type:

dict

_delayed_in_queue#

Internal queue mapping step_idx to accumulated inhibitory delayed events.

Type:

dict

Raises:
  • ValueError – If tau <= 0 or sigma < 0.

  • ValueError – If instant_rate_events contain non-zero delay_steps.

  • ValueError – If delayed_rate_events contain negative delay_steps.

  • ValueError – If event tuples have length other than 2, 3, or 4.

Notes

Runtime Event Semantics

Event formats are identical to threshold_lin_rate_ipn:

  • instant_rate_events: Applied in the current step without delay.

  • delayed_rate_events: Scheduled with integer delay_steps.

  • Sign convention: weight >= 0 → excitatory, weight < 0 → inhibitory.

Comparison to Input-Noise Variant

The key differences between threshold_lin_rate_opn (output noise) and threshold_lin_rate_ipn (input noise) are:

  • Noise location: Output noise is added after nonlinearity; input noise is integrated before nonlinearity.

  • Stationary distribution: Output noise does not affect the mean of the deterministic attractor; input noise shifts the effective drive.

  • Dynamics: Output-noise model has simpler deterministic dynamics (\(\lambda=1.0\) fixed) with additive output corruption.

  • Rectification: The input-noise variant supports rectify_output; the output-noise variant does not (noise is applied to the output only).

Failure Modes

  • No automatic failure handling. Negative time constants or noise parameters are caught at construction by _validate_parameters.

  • Invalid event formats raise ValueError during update.

  • The noise scaling \(\sqrt{\tau/h}\) can become large for small time steps, but this is by design to ensure correct variance scaling.

Examples

Example 1: Minimal output-noise threshold-linear neuron.

>>> import brainpy.state as bst
>>> import saiunit as u
>>> model = bst.threshold_lin_rate_opn(
...     in_size=10, tau=20*u.ms, sigma=0.5, g=2.0, theta=1.0
... )
>>> model.init_all_states(batch_size=1)
>>> rate = model(x=0.5)  # deterministic state
>>> noisy_rate = model.noisy_rate.value  # noisy output

Example 2: Saturating threshold-linear neuron with output noise.

>>> model = bst.threshold_lin_rate_opn(
...     in_size=5,
...     tau=10*u.ms,
...     sigma=0.2,
...     g=1.5, theta=0.5, alpha=5.0
... )
>>> model.init_all_states()

Example 3: Update with events (identical to input-noise variant).

>>> model = bst.threshold_lin_rate_opn(in_size=3, tau=10*u.ms, sigma=0.1)
>>> model.init_all_states()
>>> instant_event = {'rate': 2.0, 'weight': 0.1}
>>> delayed_event = {'rate': 1.5, 'weight': -0.05, 'delay_steps': 3}
>>> rate = model.update(
...     x=0.2,
...     instant_rate_events=instant_event,
...     delayed_rate_events=delayed_event
... )

See also

threshold_lin_rate_ipn

Input-noise variant of the threshold-linear rate neuron.

rate_neuron_opn

General output-noise rate neuron with custom gain functions.

lin_rate

Deterministic linear rate neuron (sigma=0, no threshold).

init_state(**kwargs)[source]#

Initialize all state variables for simulation.

Parameters:

**kwargs – Unused compatibility parameters accepted by the base-state API.

Notes

This method initializes:

  • rate: Deterministic rate state \(X_n\).

  • noise: Last noise sample \(\sigma\,\xi_{n-1}\).

  • noisy_rate: Noisy output \(X_\mathrm{noisy,n}\).

  • instant_rate: Noisy rate for instantaneous projections.

  • delayed_rate: Noisy rate for delayed projections.

  • _step_count: Internal step counter for delay scheduling.

  • _delayed_ex_queue, _delayed_in_queue: Delay queues.

All state arrays are initialized as float64 NumPy arrays using the provided initializers. Both instant_rate and delayed_rate are initialized to noisy_rate (outgoing values are noisy).

property receptor_types#

Receptor type dictionary for projection compatibility.

Returns:

{'RATE': 0}. Rate neurons have a single unified receptor port for all rate-based inputs. Excitatory vs. inhibitory separation is handled internally via event weight signs.

Return type:

dict[str, int]

Notes

This property is used by projection objects to validate connection targets. Unlike spiking neurons with separate AMPA/GABA receptor ports, rate neurons use sign-based branch routing (weight >= 0 → excitatory branch, weight < 0 → inhibitory branch).

property recordables#

List of state variable names that can be recorded during simulation.

Returns:

['rate', 'noise', 'noisy_rate']. The rate variable records the deterministic rate state \(X_n\), noise records the last noise sample \(\sigma\,\xi_{n-1}\), and noisy_rate records the noisy output \(X_\mathrm{noisy,n}\).

Return type:

list of str

Notes

These variables can be accessed via recording tools in BrainPy for post-simulation analysis. The noisy_rate is the value transmitted to downstream neurons via projections.

update(x=0.0, instant_rate_events=None, delayed_rate_events=None, noise=None)[source]#

Perform one simulation step of deterministic threshold-linear rate dynamics with output noise.

Parameters:
  • x (ArrayLike, optional) – External drive (scalar or array broadcastable to self.varshape). Added to mu as constant forcing. Default is 0.0.

  • instant_rate_events (None, dict, tuple, list, or iterable, optional) – Instantaneous rate events applied in the current step without delay. See class docstring for event format. Default is None.

  • delayed_rate_events (None, dict, tuple, list, or iterable, optional) – Delayed rate events scheduled with integer delay_steps (units of simulation time step). See class docstring for event format. Default is None.

  • noise (ArrayLike, optional) – Externally supplied noise sample \(\xi_n\) (scalar or array broadcastable to state shape). If None (default), draws \(\xi_n\sim\mathcal{N}(0,1)\) internally.

Returns:

rate_new – Updated deterministic rate state \(X_{n+1}\) (float64 array of shape self.rate.value.shape).

Return type:

np.ndarray

Notes

Update algorithm:

  1. Draw noise and compute noisy output:

    \[\mathrm{noise}_n = \sigma\,\xi_n, \quad X_\mathrm{noisy,n} = X_n + \sqrt{\frac{\tau}{h}}\,\mathrm{noise}_n.\]

    Store \(X_\mathrm{noisy,n}\) as delayed_rate and instant_rate (outgoing values for projections).

  2. Collect input contributions:

    • Delayed events arriving at current step (from internal queues).

    • Newly scheduled delayed events with delay_steps=0.

    • Instantaneous events.

    • Delta inputs (sign-separated into excitatory/inhibitory).

    • Current inputs via sum_current_inputs(x, rate).

  3. Compute propagator coefficients (deterministic exponential Euler):

    \[P_1 = \exp(-h/\tau), \quad P_2 = 1 - P_1 = -\mathrm{expm1}(-h/\tau).\]
  4. Propagate deterministic dynamics:

    \[X_{n+1} = P_1 X_n + P_2(\mu + \mu_\mathrm{ext}),\]

    where \(\mu_\mathrm{ext}\) denotes the external drive collected in step 2.
  5. Apply network input with threshold-linear gain:

    • linear_summation=True: \(X_{n+1} \gets X_{n+1} + P_2\,\phi(I_\mathrm{ex}+I_\mathrm{in})\).

    • linear_summation=False: \(X_{n+1} \gets X_{n+1} + P_2\,[\phi(I_\mathrm{ex})+\phi(I_\mathrm{in})]\).

    where \(\phi(h)=\min(\max(g(h-\theta),0),\alpha)\).

  6. Update state variables: rate, noise, noisy_rate, delayed_rate, instant_rate, _step_count.

Key difference from the input-noise variant: Noise is added to the transmitted output (computed before the deterministic update) rather than integrated into the state dynamics. This means the internal state \(X_n\) evolves deterministically, and only the transmitted rate is noisy.

Numerical stability: The threshold-linear gain uses np.minimum and np.maximum for stable clipping. The exponential Euler scheme uses np.expm1 for numerically stable evaluation of \(1-e^{-x}\). The noise scaling \(\sqrt{\tau/h}\) ensures correct variance scaling as \(h\to 0\).

Failure modes: No automatic failure handling. Negative time constants or noise parameters are caught at construction by _validate_parameters. Invalid event formats raise ValueError.