erfc_neuron#
- class brainpy.state.erfc_neuron(in_size, tau_m=Quantity(10., 'ms'), theta=Quantity(0., 'mV'), sigma=Quantity(1., 'mV'), y_initializer=Constant(value=0.0), stochastic_update=True, rng_seed=0, name=None)#
Binary stochastic neuron with complementary error-function gain.
Description
`erfc_neuron` re-implements NEST’s binary neuron model of the same name. The neuron keeps a persistent synaptic input state \(h\) and updates its binary output \(y \in \{0, 1\}\) at Poisson-distributed update times with mean interval \(\tau_m\).
1. Gain function and state transition
At each scheduled update, the new binary state is sampled as
\[y \leftarrow \mathbf{1}[U < g(h + c)], \quad U \sim \mathrm{Uniform}(0, 1),\]
with gain function
\[g(h) = \frac{1}{2}\,\mathrm{erfc}\!\left(-\frac{h - \theta}{\sqrt{2}\,\sigma}\right).\]
This matches the NEST implementation in `gainfunction_erfc::operator()`. The model corresponds to a McCulloch-Pitts threshold unit with additive Gaussian noise of standard deviation \(\sigma\).
2. Interpretation: threshold unit with Gaussian noise
The complementary error function gain arises from a threshold model with Gaussian noise. Suppose the neuron fires when \(h + \xi > \theta\), where \(\xi \sim \mathcal{N}(0, \sigma^2)\). The activation probability is then
\[P(\text{fire}) = P(h + \xi > \theta) = P\left(\frac{\xi}{\sigma} > \frac{\theta - h}{\sigma}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\theta - h}{\sqrt{2}\,\sigma}\right).\]
This establishes the connection to the McCulloch-Pitts neuron with additive Gaussian noise.
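This equivalence can be checked numerically with a standalone sketch (plain NumPy and the standard library, independent of the class implementation; the helper name `gain` is illustrative):

```python
import math
import numpy as np

def gain(h, theta=0.0, sigma=1.0):
    """Complementary-error-function gain: g(h) = erfc(-(h - theta) / (sqrt(2) * sigma)) / 2."""
    return 0.5 * math.erfc(-(h - theta) / (math.sqrt(2.0) * sigma))

# Monte-Carlo check of the threshold-with-Gaussian-noise interpretation:
# the unit fires when h + xi > theta with xi ~ N(0, sigma^2).
rng = np.random.default_rng(0)
h, theta, sigma = 0.5, 0.0, 1.0
xi = rng.normal(0.0, sigma, size=200_000)
p_mc = np.mean(h + xi > theta)           # empirical firing probability
assert abs(p_mc - gain(h, theta, sigma)) < 5e-3
```

Because \(\mathrm{erfc}/2\) is the Gaussian tail integral, the analytic gain and the Monte-Carlo estimate agree up to sampling noise.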
3. Update order (NEST semantics)
Each simulation step follows the same ordering as NEST’s `binary_neuron::update()`:

1. Accumulate delta inputs into the persistent state \(h\).
2. Read the current input \(c\) for the present step.
3. If `t + dt > t_next` (strict inequality), sample a new binary state from \(g(h + c)\).
4. If an update happened, advance `t_next` by `Exp(1) * tau_m`.
As in NEST, gain values are not explicitly clipped before the comparison against a uniform random number; the comparison itself implies effective clipping: gain values below 0 yield probability 0, and values above 1 yield probability 1.
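The ordering above can be sketched as a minimal scalar loop (plain NumPy rather than the actual JAX-based implementation; all names here are illustrative):

```python
import math
import numpy as np

def step(h, y, t_next, c, t, dt, tau_m, theta, sigma, rng):
    """One NEST-style update step for a single binary neuron."""
    # 1) delta inputs would be accumulated into h here (omitted in this sketch)
    # 2) the total drive for this step is h + c
    if t + dt > t_next:  # 3) strict inequality, as in NEST
        p = 0.5 * math.erfc(-((h + c) - theta) / (math.sqrt(2.0) * sigma))
        y = 1.0 if rng.uniform() < p else 0.0   # y <- 1[U < g(h + c)]
        t_next = t + rng.exponential() * tau_m  # 4) advance by Exp(1) * tau_m
    return h, y, t_next

rng = np.random.default_rng(0)
h, y, t_next = 0.0, 0.0, 0.0
dt = 0.1
for k in range(1000):
    h, y, t_next = step(h, y, t_next, c=2.0, t=k * dt, dt=dt,
                        tau_m=5.0, theta=0.0, sigma=1.0, rng=rng)
```

Between scheduled updates the output `y` simply holds its last sampled value, which is the behaviour described in the Notes below for `stochastic_update=True`.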
4. Assumptions, constraints, and computational implications
- The model assumes unit-compatible parameters and broadcast-compatible shapes against `self.varshape`.
- `tau_m` must be strictly positive (enforced in `__init__()`).
- Per-step compute is \(O(\prod \mathrm{varshape})\) with vectorized elementwise operations plus random-sampling overhead.
- Stochastic update times are sampled from an exponential distribution, so inter-update intervals are memoryless (Poisson process property).
- When `stochastic_update=False`, the model updates at every time step but retains stochastic state transitions according to the same gain function.
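The memoryless property of the exponentially distributed inter-update intervals can be verified empirically (a NumPy sketch, not library code):

```python
import numpy as np

rng = np.random.default_rng(42)
tau_m = 10.0  # ms; mean inter-update interval
intervals = rng.exponential(1.0, size=100_000) * tau_m  # Exp(1) * tau_m

# The mean interval equals tau_m
assert abs(intervals.mean() - tau_m) < 0.2

# Memorylessness: P(X > s + t | X > s) ~= P(X > t)
s, t = 5.0, 7.0
cond = np.mean(intervals[intervals > s] > s + t)
uncond = np.mean(intervals > t)
assert abs(cond - uncond) < 0.02
```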
- Parameters:
in_size (`Size`) – Population shape specification. All neuron parameters are broadcast to `self.varshape`, which is derived from `in_size`.
tau_m (`ArrayLike`, optional) – Mean inter-update interval \(\tau_m\) in ms; scalar or array broadcastable to `self.varshape`. Must be strictly positive. Default is `10.0 * u.ms`.
theta (`ArrayLike`, optional) – Threshold \(\theta\) in mV; scalar or array broadcastable to `self.varshape`. Default is `0.0 * u.mV`.
sigma (`ArrayLike`, optional) – Gain/noise parameter \(\sigma\) in mV; scalar or array broadcastable to `self.varshape`. Larger values produce smoother gain transitions. Default is `1.0 * u.mV`.
y_initializer (`Callable`, optional) – Initializer for the initial binary state `y` in `init_state()`. Output should be float64 values (typically 0.0 or 1.0) shape-compatible with `self.varshape` (and an optional batch prefix). Default is `braintools.init.Constant(0.0)`.
stochastic_update (`bool`, optional) – If `True` (default), use Poisson update scheduling as in NEST, with update intervals sampled from an exponential distribution with mean `tau_m`. If `False`, update at each time step while retaining stochastic state sampling from the same gain function.
rng_seed (`int`, optional) – Seed for internal random sampling (both uniform and exponential random variables). Different seeds produce different random sequences. Default is `0`.
name (`str`, optional) – Optional node identifier. Default is `None`.
Parameter Mapping
Table 18 Parameter mapping to model symbols#

| Parameter | Type / shape / unit | Default | Math symbol | Semantics |
| --- | --- | --- | --- | --- |
| `in_size` | `Size`; scalar/tuple | required | – | Defines population/state shape `self.varshape`. |
| `tau_m` | `ArrayLike`, broadcastable to `self.varshape` (ms) | `10.0 * u.ms` | \(\tau_m\) | Mean Poisson inter-update interval. |
| `theta` | `ArrayLike`, broadcastable to `self.varshape` (mV) | `0.0 * u.mV` | \(\theta\) | Activation threshold in the gain function. |
| `sigma` | `ArrayLike`, broadcastable to `self.varshape` (mV) | `1.0 * u.mV` | \(\sigma\) | Noise standard deviation / gain slope parameter. |
| `y_initializer` | `Callable` | `Constant(0.0)` | – | Initializes binary output state `y`. |
| `stochastic_update` | `bool` | `True` | – | Enables Poisson-timed updates vs. every-step updates. |
| `rng_seed` | `int` | `0` | – | Random number generator seed. |
| `name` | `str` or `None` | `None` | – | Optional node identifier. |
- Raises:
ValueError – If `tau_m` contains any non-positive values (checked in `__init__()`), or if parameter initialization or broadcasting fails.
TypeError – If provided values are not compatible with the expected units/types (ms, mV, or callable initializer).
KeyError – At runtime, if required simulation context entries (`t` or `dt`) are missing when `update()` is called (only when `stochastic_update=True`).
AttributeError – If `update()` is called before `init_state()` creates the required state variables.
- y#
Binary output state (float64 values 0.0 or 1.0).
- Type:
ShortTermState
- h#
Persistent summed synaptic input.
- Type:
ShortTermState
- t_next#
Next stochastic update time (only present when `stochastic_update=True`).
- Type:
ShortTermState
- rng_key#
JAX PRNGKey for random sampling (internal state).
- Type:
ShortTermState
Notes
State variables are `y`, `h`, `rng_key`, and optionally `t_next` (when `stochastic_update=True`).
In NEST, binary-neuron communication encodes state transitions using spike multiplicity (a double spike for an up-transition, a single spike for a down-transition). Here, equivalent effects are represented through delta inputs added to \(h\).
The gain function is evaluated at \(h + c\), where \(c\) is the sum of current inputs for the present step.
Random sampling uses JAX’s functional random number generation with state splitting for reproducibility and compatibility with JAX transformations.
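The functional splitting pattern can be illustrated directly with JAX (a generic sketch of the pattern, not the class’s internal code):

```python
import jax

key = jax.random.PRNGKey(0)        # corresponds to rng_seed
key, sub = jax.random.split(key)   # split before every sampling call
u = jax.random.uniform(sub)        # uniform draw for the state comparison
key, sub = jax.random.split(key)
iv = jax.random.exponential(sub)   # Exp(1) draw, scaled by tau_m for t_next

# The same seed reproduces the same draws, which is what makes
# seed-based reproducibility and JAX transformations possible.
key2 = jax.random.PRNGKey(0)
_, sub2 = jax.random.split(key2)
assert float(jax.random.uniform(sub2)) == float(u)
```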
Examples
>>> import brainpy
>>> import brainstate
>>> import saiunit as u
>>> with brainstate.environ.context(dt=0.1 * u.ms):
...     neu = brainpy.state.erfc_neuron(in_size=10, tau_m=5.0 * u.ms)
...     neu.init_state(batch_size=1)
...     with brainstate.environ.context(t=0.0 * u.ms):
...         out = neu.update(x=2.0 * u.mV)
...         _ = out.shape

>>> import brainpy
>>> import brainstate
>>> import saiunit as u
>>> with brainstate.environ.context(dt=0.1 * u.ms):
...     neu = brainpy.state.erfc_neuron(
...         in_size=(2, 3),
...         theta=1.0 * u.mV,
...         sigma=0.5 * u.mV,
...         stochastic_update=False
...     )
...     neu.init_state()
...     with brainstate.environ.context(t=0.0 * u.ms):
...         _ = neu.update(x=1.5 * u.mV)
See also
`ginzburg_neuron` – Binary neuron with sigmoidal/affine gain function.
`mcculloch_pitts_neuron` – Binary neuron with a hard threshold.
- init_state(**kwargs)[source]#
Initialize binary state, input accumulator, and update timing.
- Parameters:
**kwargs – Unused compatibility parameters accepted by the base-state API.
- Raises:
ValueError – If initializer outputs cannot be broadcast to target state shape.
TypeError – If initializer values are incompatible with required numeric/unit conversions.
- update(x=Quantity(0., 'mV'))[source]#
Advance the binary neuron by one simulation step.
Follows NEST update ordering:
1. Integrate delta inputs into the persistent state `h`.
2. Compute the total input `h + c`, where `c` is the current input.
3. Evaluate the gain function \(g(h + c)\).
4. If a Poisson-scheduled update is due (`t + dt > t_next`), sample a new binary state from \(g(h + c)\) and schedule the next update.
5. Return the updated binary output `y`.
- Parameters:
x (`ArrayLike`, optional) – External current input in mV for this step. Combined with additional current sources from `sum_current_inputs()`. Default is `0.0 * u.mV`.
- Returns:
out – Binary output state `self.y.value` with shape `self.varshape` (or `(batch_size,) + self.varshape` if batched). Values are float64 (0.0 or 1.0), wrapped in `jax.lax.stop_gradient` to prevent gradient flow through the stochastic sampling.
- Return type:
`jax.Array`
- Raises:
KeyError – If the simulation context does not provide the required entries `t` or `dt` when `stochastic_update=True`.
AttributeError – If required states are missing because `init_state()` has not been called.
TypeError – If input/state values are not unit-compatible with the expected mV arithmetic.
Notes
When
stochastic_update=True, updates only occur at Poisson- distributed times (mean intervaltau_m). Between updates,yremains constant.When
stochastic_update=False, the binary state is resampled at every time step according to the same gain function.The gain function is never explicitly clipped; effective clipping occurs through comparison with uniform random numbers: if \(g(h + c) < 0\), firing probability is 0; if \(g(h + c) > 1\), firing probability is 1.
All random sampling uses functional JAX RNG state splitting for reproducibility and JAX transformation compatibility.
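The implicit-clipping behaviour described above can be demonstrated with plain NumPy (a sketch; for the erfc gain specifically, \(g\) is already bounded in \([0, 1]\), so the clipping note matters only for the generic NEST binary-neuron machinery):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
for p in (-0.3, 0.0, 0.5, 1.0, 1.7):
    # Comparing an unclipped "probability" p against U ~ Uniform(0, 1)
    # yields the same firing fraction as clipping p into [0, 1] first.
    frac = np.mean(u < p)
    assert abs(frac - np.clip(p, 0.0, 1.0)) < 0.01
```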