threshold_lin_rate_opn#
- class brainpy.state.threshold_lin_rate_opn(in_size, tau=Quantity(10., 'ms'), sigma=1.0, mu=0.0, g=1.0, theta=0.0, alpha=inf, mult_coupling=False, linear_summation=True, rate_initializer=Constant(value=0.0), noise_initializer=Constant(value=0.0), noisy_rate_initializer=Constant(value=0.0), name=None)#
NEST-compatible output-noise threshold-linear rate neuron.
Implements the NEST threshold_lin_rate_opn model, an output-noise rate neuron with a threshold-linear gain function. Unlike the input-noise variant (threshold_lin_rate_ipn), noise is applied to the output after the deterministic dynamics, leading to different stationary distributions and noise scaling.
Mathematical Description
1. Continuous-Time Deterministic Dynamics with Output Noise
The rate state \(X(t)\) evolves according to the deterministic ODE:
\[\tau\frac{dX(t)}{dt} = -X(t) + \mu + I_\mathrm{net}(t),\]
where:
\(\tau > 0\) is the time constant (ms).
\(\mu\) is the mean drive (dimensionless, external constant input).
\(I_\mathrm{net}(t)\) is the network input (see below).
The output rate \(X_\mathrm{noisy}(t)\) is obtained by adding noise to the deterministic state:
\[X_\mathrm{noisy}(t) = X(t) + \sqrt{\frac{\tau}{h}}\,\sigma\,\xi(t),\]
where:
\(\sigma \ge 0\) is the output-noise scale (dimensionless).
\(\xi(t)\sim\mathcal{N}(0,1)\) is standard Gaussian white noise.
\(h=dt\) is the simulation time step (ms).
The \(\sqrt{\tau/h}\) scaling ensures the noise variance is independent of the time step for small \(h\).
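The output-noise construction above can be sketched with NumPy. This is a minimal illustration under stated assumptions; noisy_output is a hypothetical free function, not part of the brainpy.state API:

```python
import numpy as np

def noisy_output(x, tau=10.0, sigma=1.0, h=0.1, rng=None):
    """Sketch of X_noisy = X + sqrt(tau / h) * sigma * xi, with xi ~ N(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.standard_normal(np.shape(x))  # standard Gaussian white noise
    return x + np.sqrt(tau / h) * sigma * xi

# With sigma = 0 the output reduces to the deterministic state.
out = noisy_output(np.full(4, 2.0), sigma=0.0)
```

Note that the internal state x is never modified here; only the transmitted value carries noise.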
2. Threshold-Linear Gain Function
The input nonlinearity \(\phi(h)\) is identical to the input-noise variant:
\[\phi(h) = \min(\max(g(h-\theta), 0), \alpha),\]
where:
\(g > 0\) is the gain slope (dimensionless).
\(\theta\) is the activation threshold (dimensionless).
\(\alpha > 0\) is the saturation level (dimensionless).
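Under these definitions the gain is a simple elementwise clip; a sketch using NumPy (phi here is a free function for illustration, not the library's internal name):

```python
import numpy as np

def phi(h, g=1.0, theta=0.0, alpha=np.inf):
    """Threshold-linear gain: clip g * (h - theta) to the interval [0, alpha]."""
    return np.minimum(np.maximum(g * (h - theta), 0.0), alpha)

# below threshold -> 0, linear region -> g*(h - theta), large input -> alpha
vals = phi(np.array([-1.0, 0.5, 10.0]), g=2.0, theta=0.0, alpha=5.0)
```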
3. Network Input Structure
The network input \(I_\mathrm{net}(t)\) is computed according to:
\[I_\mathrm{net}(t) = \phi(I_\mathrm{ex}(t) + I_\mathrm{in}(t)) \quad\text{(if linear\_summation=True)},\]
or:
\[I_\mathrm{net}(t) = \phi(I_\mathrm{ex}(t)) + \phi(I_\mathrm{in}(t)) \quad\text{(if linear\_summation=False)},\]
where \(I_\mathrm{ex}(t)\) and \(I_\mathrm{in}(t)\) are the excitatory and inhibitory branches (sign-separated by event weight).
Note: Multiplicative coupling is not supported (the mult_coupling parameter is accepted for API compatibility but has no effect).
4. Discrete-Time Integration (Exponential Euler)
For time step \(h=dt\) (in ms), the deterministic dynamics are integrated using exponential Euler:
\[X_{n+1} = P_1 X_n + P_2 (\mu + I_\mathrm{net,n}),\]
where:
\[P_1 = \exp\left(-\frac{h}{\tau}\right), \quad P_2 = 1 - P_1 = -\mathrm{expm1}\left(-\frac{h}{\tau}\right).\]
The noisy output is:
\[X_\mathrm{noisy,n} = X_n + \sqrt{\frac{\tau}{h}}\,\sigma\,\xi_n,\]
where \(\xi_n\sim\mathcal{N}(0,1)\).
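The propagator update can be sketched as follows (an assumed standalone helper, using np.expm1 for a numerically stable 1 - exp(-h/tau)):

```python
import numpy as np

def exp_euler_step(x, i_net, mu=0.0, tau=10.0, h=0.1):
    """One exponential-Euler step of tau * dX/dt = -X + mu + I_net."""
    p1 = np.exp(-h / tau)
    p2 = -np.expm1(-h / tau)  # == 1 - p1, stable for small h / tau
    return p1 * x + p2 * (mu + i_net)

# The fixed point X* = mu + I_net is preserved exactly by the scheme.
x_fixed = exp_euler_step(3.0, i_net=1.0, mu=2.0)
```

Because the scheme uses the exact solution of the linear ODE over one step, it remains stable for any positive step size h.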
5. Update Ordering (Matching NEST ``rate_neuron_opn_impl.h``)
Per simulation step:
Draw noise: sample \(\xi_n\sim\mathcal{N}(0,1)\), compute \(\mathrm{noise}_n=\sigma\,\xi_n\).
Build noisy output: compute \(X_\mathrm{noisy,n}=X_n+\sqrt{\tau/h}\,\mathrm{noise}_n\) and store as both delayed_rate and instant_rate (outgoing values for projections).
Propagate deterministic dynamics: apply exponential Euler to update \(X_n\).
Read event buffers: drain delayed events arriving at current step; accumulate instantaneous events.
Apply network input with threshold-linear gain:
linear_summation=True: \(X_{n+1} \gets X_{n+1} + P_2\,\phi(I_\mathrm{ex}+I_\mathrm{in})\).
linear_summation=False: \(X_{n+1} \gets X_{n+1} + P_2\,[\phi(I_\mathrm{ex})+\phi(I_\mathrm{in})]\).
Update state variables:
rate, noise, noisy_rate, delayed_rate, instant_rate, _step_count.
Note: Unlike the input-noise variant, there is no rectification option for output-noise neurons. The noise is applied to the output only and does not affect the internal deterministic state.
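Putting the ordering together, a single step might look like the following sketch. This is illustrative only; state handling, delay queues, and broadcasting in the real class are more involved, and all names here are local to the sketch:

```python
import numpy as np

def step(x, i_ex, i_in, tau=10.0, sigma=1.0, mu=0.0, g=1.0, theta=0.0,
         alpha=np.inf, h=0.1, linear_summation=True, rng=None):
    """One output-noise step: noise draw, noisy output, exp-Euler, gated input."""
    rng = np.random.default_rng() if rng is None else rng
    # 1-2. draw noise and build the outgoing noisy rate
    noise = sigma * rng.standard_normal(np.shape(x))
    x_noisy = x + np.sqrt(tau / h) * noise
    # 3. propagate the deterministic dynamics
    p1 = np.exp(-h / tau)
    p2 = -np.expm1(-h / tau)
    x_new = p1 * x + p2 * mu
    # 4-5. apply the network input through the threshold-linear gain
    phi = lambda v: np.minimum(np.maximum(g * (v - theta), 0.0), alpha)
    if linear_summation:
        x_new = x_new + p2 * phi(i_ex + i_in)
    else:
        x_new = x_new + p2 * (phi(i_ex) + phi(i_in))
    return x_new, x_noisy
```

Note the ordering: the noisy output is built from the state before the deterministic update, matching the NEST step sequence described above.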
6. Numerical Stability and Computational Complexity
Construction enforces \(\tau>0\), \(\sigma\ge 0\).
The threshold-linear gain is evaluated using np.minimum and np.maximum for numerically stable clipping.
Per-call cost is \(O(\prod\mathrm{varshape})\) with vectorized NumPy operations in float64.
The exponential Euler scheme is numerically stable for all \(h>0\).
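The expm1 form matters when \(h/\tau\) is tiny: the naive 1 - exp(-x) cancels catastrophically, while expm1 retains the leading-order value (illustration only):

```python
import numpy as np

x = 1e-17                    # an extremely small h / tau ratio
naive = 1.0 - np.exp(-x)     # exp(-x) rounds to 1.0, so this is exactly 0.0
stable = -np.expm1(-x)       # keeps the correct leading-order value ~1e-17
```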
- Parameters:
in_size (Size) – Population shape (tuple or int). All per-neuron parameters are broadcast to self.varshape.
tau (ArrayLike, optional) – Time constant \(\tau\) (ms). Scalar or array broadcastable to self.varshape. Must be \(>0\). Default: 10.0 * u.ms.
sigma (ArrayLike, optional) – Output-noise scale \(\sigma\) (dimensionless). Scalar or array broadcastable to self.varshape. Must be \(\ge 0\). Default: 1.0.
mu (ArrayLike, optional) – Mean drive \(\mu\) (dimensionless). Scalar or array broadcastable to self.varshape. External constant input to the rate dynamics. Default: 0.0.
g (ArrayLike, optional) – Gain slope \(g\) (dimensionless) for the threshold-linear function \(\phi(h)=\min(\max(g(h-\theta),0),\alpha)\). Scalar or array broadcastable to self.varshape. Default: 1.0.
theta (ArrayLike, optional) – Activation threshold \(\theta\) (dimensionless). The gain function is zero for \(h<\theta\). Scalar or array broadcastable to self.varshape. Default: 0.0.
alpha (ArrayLike, optional) – Saturation level \(\alpha\) (dimensionless). The gain function saturates at \(\alpha\) for large inputs. Scalar or array broadcastable to self.varshape. Default: np.inf (no saturation).
mult_coupling (bool, optional) – API compatibility flag. Has no effect on dynamics for threshold-linear neurons (multiplicative coupling factors are constant 1.0). Default: False.
linear_summation (bool, optional) – Controls where the threshold-linear gain is applied. If True, the gain is applied to the sum of excitatory and inhibitory inputs. If False, the gain is applied separately to each input branch (matching NEST event semantics). Default: True.
rate_initializer (Callable, optional) – Initializer for the rate state variable \(X_0\). Callable compatible with the braintools.init API. Default: braintools.init.Constant(0.0).
noise_initializer (Callable, optional) – Initializer for the noise state variable (records the last noise sample \(\sigma\,\xi_{n-1}\)). Callable compatible with the braintools.init API. Default: braintools.init.Constant(0.0).
noisy_rate_initializer (Callable, optional) – Initializer for the noisy_rate state variable \(X_\mathrm{noisy,0}\) (initial noisy output). Callable compatible with the braintools.init API. Default: braintools.init.Constant(0.0).
name (str or None, optional) – Module name for identification in hierarchies. If None, an auto-generated name is used. Default: None.
Parameter Mapping
The following table maps NEST threshold_lin_rate_opn parameters to brainpy.state equivalents:

NEST Parameter        brainpy.state                Default
tau                   tau                          10 ms
sigma                 sigma                        1.0
mu                    mu                           0.0
g (gain slope)        g                            1.0
theta (threshold)     theta                        0.0
alpha (saturation)    alpha                        inf
mult_coupling         mult_coupling (no effect)    False
linear_summation      linear_summation             True

Note: Unlike threshold_lin_rate_ipn, this model does not have lambda (passive decay is fixed at 1.0), rectify_rate, or rectify_output parameters.
- rate#
Current deterministic rate state \(X_n\) (float64 array of shape self.varshape or (batch_size,) + self.varshape).
- Type:
brainstate.ShortTermState
- noise#
Last noise sample \(\sigma\,\xi_{n-1}\) (float64 array, same shape as rate).
- Type:
brainstate.ShortTermState
- noisy_rate#
Noisy output rate \(X_\mathrm{noisy,n}=X_n+\sqrt{\tau/h}\,\sigma\,\xi_n\) (float64 array, same shape as rate).
- Type:
brainstate.ShortTermState
- instant_rate#
Noisy rate value used for instantaneous projections (float64 array, same shape as rate).
- Type:
brainstate.ShortTermState
- delayed_rate#
Noisy rate value used for delayed projections (float64 array, same shape as rate).
- Type:
brainstate.ShortTermState
- _step_count#
Internal step counter for delayed event scheduling (int64 scalar).
- Type:
brainstate.ShortTermState
- _delayed_ex_queue#
Internal queue mapping step_idx to accumulated excitatory delayed events.
- Type:
- _delayed_in_queue#
Internal queue mapping step_idx to accumulated inhibitory delayed events.
- Type:
- Raises:
ValueError – If tau <= 0 or sigma < 0.
ValueError – If instant_rate_events contain non-zero delay_steps.
ValueError – If delayed_rate_events contain negative delay_steps.
ValueError – If event tuples have length other than 2, 3, or 4.
Notes
Runtime Event Semantics
Event formats are identical to threshold_lin_rate_ipn:
instant_rate_events: Applied in the current step without delay.
delayed_rate_events: Scheduled with integer delay_steps.
Sign convention: weight >= 0 → excitatory, weight < 0 → inhibitory.
Comparison to Input-Noise Variant
The key differences between threshold_lin_rate_opn (output noise) and threshold_lin_rate_ipn (input noise) are:
Noise location: Output noise is added after the nonlinearity; input noise is integrated before the nonlinearity.
Stationary distribution: Output noise does not affect the mean of the deterministic attractor; input noise shifts the effective drive.
Dynamics: Output-noise model has simpler deterministic dynamics (\(\lambda=1.0\) fixed) with additive output corruption.
Rectification: The input-noise variant supports rectify_output; the output-noise variant does not (noise is on the output only).
Failure Modes
No automatic failure handling. Negative time constants or noise parameters are caught at construction by _validate_parameters.
Invalid event formats raise ValueError during update.
The noise scaling \(\sqrt{\tau/h}\) can become large for small time steps, but this is by design, to ensure correct variance scaling.
Examples
Example 1: Minimal output-noise threshold-linear neuron.
>>> import brainpy.state as bst
>>> import saiunit as u
>>> model = bst.threshold_lin_rate_opn(
...     in_size=10, tau=20*u.ms, sigma=0.5, g=2.0, theta=1.0
... )
>>> model.init_all_states(batch_size=1)
>>> rate = model(x=0.5)  # deterministic state
>>> noisy_rate = model.noisy_rate.value  # noisy output
Example 2: Saturating threshold-linear neuron with output noise.
>>> model = bst.threshold_lin_rate_opn(
...     in_size=5,
...     tau=10*u.ms,
...     sigma=0.2,
...     g=1.5, theta=0.5, alpha=5.0
... )
>>> model.init_all_states()
Example 3: Update with events (identical to input-noise variant).
>>> model = bst.threshold_lin_rate_opn(in_size=3, tau=10*u.ms, sigma=0.1)
>>> model.init_all_states()
>>> instant_event = {'rate': 2.0, 'weight': 0.1}
>>> delayed_event = {'rate': 1.5, 'weight': -0.05, 'delay_steps': 3}
>>> rate = model.update(
...     x=0.2,
...     instant_rate_events=instant_event,
...     delayed_rate_events=delayed_event
... )
See also
threshold_lin_rate_ipn – Input-noise variant of the threshold-linear rate neuron.
rate_neuron_opn – General output-noise rate neuron with custom gain functions.
lin_rate – Deterministic linear rate neuron (sigma=0, no threshold).
- init_state(**kwargs)[source]#
Initialize all state variables for simulation.
- Parameters:
**kwargs – Unused compatibility parameters accepted by the base-state API.
Notes
This method initializes:
rate: Deterministic rate state \(X_n\).
noise: Last noise sample \(\sigma\,\xi_{n-1}\).
noisy_rate: Noisy output \(X_\mathrm{noisy,n}\).
instant_rate: Noisy rate for instantaneous projections.
delayed_rate: Noisy rate for delayed projections.
_step_count: Internal step counter for delay scheduling.
_delayed_ex_queue, _delayed_in_queue: Delay queues.
All state arrays are initialized as float64 NumPy arrays using the provided initializers. Both instant_rate and delayed_rate are initialized to noisy_rate (outgoing values are noisy).
- property receptor_types#
Receptor type dictionary for projection compatibility.
- Returns:
{'RATE': 0}. Rate neurons have a single unified receptor port for all rate-based inputs. Excitatory vs. inhibitory separation is handled internally via event weight signs.
- Return type:
dict[str, int]
Notes
This property is used by projection objects to validate connection targets. Unlike spiking neurons with separate AMPA/GABA receptor ports, rate neurons use sign-based branch routing (weight >= 0 → excitatory branch, weight < 0 → inhibitory branch).
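Sign-based routing can be sketched as follows (split_branches is a hypothetical helper; the actual event handling lives inside the update path):

```python
import numpy as np

def split_branches(weights, rates):
    """Route weighted rate contributions by sign:
    weight >= 0 -> excitatory branch, weight < 0 -> inhibitory branch."""
    w = np.asarray(weights, dtype=float)
    contrib = w * np.asarray(rates, dtype=float)
    return contrib[w >= 0].sum(), contrib[w < 0].sum()

i_ex, i_in = split_branches([0.1, -0.05, 0.2], [2.0, 1.5, 1.0])
```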
- property recordables#
List of state variable names that can be recorded during simulation.
- Returns:
['rate', 'noise', 'noisy_rate']. The rate variable records the deterministic rate state \(X_n\), noise records the last noise sample \(\sigma\,\xi_{n-1}\), and noisy_rate records the noisy output \(X_\mathrm{noisy,n}\).
- Return type:
list[str]
Notes
These variables can be accessed via recording tools in BrainPy for post-simulation analysis. The noisy_rate value is the one transmitted to downstream neurons via projections.
- update(x=0.0, instant_rate_events=None, delayed_rate_events=None, noise=None)[source]#
Perform one simulation step of deterministic threshold-linear rate dynamics with output noise.
- Parameters:
x (ArrayLike, optional) – External drive (scalar or array broadcastable to self.varshape). Added to mu as constant forcing. Default is 0.0.
instant_rate_events (None, dict, tuple, list, or iterable, optional) – Instantaneous rate events applied in the current step without delay. See the class docstring for the event format. Default is None.
delayed_rate_events (None, dict, tuple, list, or iterable, optional) – Delayed rate events scheduled with integer delay_steps (units of the simulation time step). See the class docstring for the event format. Default is None.
noise (ArrayLike, optional) – Externally supplied noise sample \(\xi_n\) (scalar or array broadcastable to the state shape). If None (default), draws \(\xi_n\sim\mathcal{N}(0,1)\) internally.
- Returns:
rate_new – Updated deterministic rate state \(X_{n+1}\) (float64 array of shape self.rate.value.shape).
- Return type:
np.ndarray
Notes
Update algorithm:
Draw noise and compute noisy output:
\[\mathrm{noise}_n = \sigma\,\xi_n, \quad X_\mathrm{noisy,n} = X_n + \sqrt{\frac{\tau}{h}}\,\mathrm{noise}_n.\]
Store \(X_\mathrm{noisy,n}\) as delayed_rate and instant_rate (outgoing values for projections).
Collect input contributions:
Delayed events arriving at current step (from internal queues).
Newly scheduled delayed events with delay_steps=0.
Instantaneous events.
Delta inputs (sign-separated into excitatory/inhibitory).
Current inputs via sum_current_inputs(x, rate).
Compute propagator coefficients (deterministic exponential Euler):
\[P_1 = \exp(-h/\tau), \quad P_2 = 1 - P_1 = -\mathrm{expm1}(-h/\tau).\]
Propagate deterministic dynamics:
\[X_{n+1} = P_1 X_n + P_2(\mu + \mu_\mathrm{ext}).\]
Apply network input with threshold-linear gain:
linear_summation=True: \(X_{n+1} \gets X_{n+1} + P_2\,\phi(I_\mathrm{ex}+I_\mathrm{in})\).
linear_summation=False: \(X_{n+1} \gets X_{n+1} + P_2\,[\phi(I_\mathrm{ex})+\phi(I_\mathrm{in})]\).
where \(\phi(h)=\min(\max(g(h-\theta),0),\alpha)\).
Update state variables:
rate, noise, noisy_rate, delayed_rate, instant_rate, _step_count.
Key difference from the input-noise variant: Noise is added to the output before the deterministic update rather than integrated into the stochastic dynamics. This means the internal state \(X_n\) evolves deterministically, and only the transmitted rate is noisy.
Numerical stability: The threshold-linear gain uses np.minimum and np.maximum for stable clipping. The exponential Euler scheme uses np.expm1 for numerically stable evaluation of \(1-e^{-x}\). The noise scaling \(\sqrt{\tau/h}\) ensures correct variance scaling as \(h\to 0\).
Failure modes: No automatic failure handling. Negative time constants or noise parameters are caught at construction by _validate_parameters. Invalid event formats raise ValueError.