erfc_neuron#

class brainpy.state.erfc_neuron(in_size, tau_m=Quantity(10., 'ms'), theta=Quantity(0., 'mV'), sigma=Quantity(1., 'mV'), y_initializer=Constant(value=0.0), stochastic_update=True, rng_seed=0, name=None)#

Binary stochastic neuron with complementary error-function gain.

Description

erfc_neuron re-implements NEST’s binary neuron model of the same name. The neuron keeps a persistent synaptic input state \(h\) and updates its binary output \(y \in \{0, 1\}\) at Poisson-distributed update times with mean interval \(\tau_m\).

1. Gain function and state transition

At each scheduled update, the new binary state is sampled as

\[y \leftarrow \mathbf{1}[U < g(h + c)], \quad U \sim \mathrm{Uniform}(0, 1),\]

where \(c\) denotes the summed current input for the present step,

with gain function

\[g(h) = \frac{1}{2}\,\mathrm{erfc}\!\left(-\frac{h - \theta}{\sqrt{2}\,\sigma}\right).\]

This matches the NEST implementation in gainfunction_erfc::operator(). The model corresponds to a McCulloch-Pitts threshold unit with additive Gaussian noise of standard deviation \(\sigma\).
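As a concrete check of the formula above, the gain can be written directly with JAX's `erfc` (a standalone sketch; `erfc_gain` and its dimensionless parameters are illustrative, not the brainpy internals):

```python
import jax.numpy as jnp
from jax.scipy.special import erfc

def erfc_gain(h, theta=0.0, sigma=1.0):
    # g(h) = 1/2 * erfc(-(h - theta) / (sqrt(2) * sigma))
    return 0.5 * erfc(-(h - theta) / (jnp.sqrt(2.0) * sigma))

# At h = theta the activation probability is exactly 1/2;
# g saturates toward 0 far below threshold and 1 far above it.
print(float(erfc_gain(0.0)))  # -> 0.5
```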

2. Interpretation: threshold unit with Gaussian noise

The complementary error function gain arises from a threshold model with Gaussian noise. Suppose the neuron fires when \(h + \xi > \theta\), where \(\xi \sim \mathcal{N}(0, \sigma^2)\). The activation probability is then

\[P(\text{fire}) = P(h + \xi > \theta) = P\left(\frac{\xi}{\sigma} > \frac{\theta - h}{\sigma}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\theta - h}{\sqrt{2}\,\sigma}\right).\]

This establishes the connection to the McCulloch-Pitts neuron with additive Gaussian noise.
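This equivalence is easy to verify numerically: hard-thresholding \(h + \xi\) with Gaussian \(\xi\) reproduces the erfc gain (a Monte Carlo sketch with illustrative values, not part of the library):

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)
h, theta, sigma = 0.5, 0.0, 1.0

# Empirical firing probability of the noisy threshold unit.
xi = rng.normal(0.0, sigma, size=200_000)
p_empirical = np.mean(h + xi > theta)

# Analytic probability from the erfc gain.
p_analytic = 0.5 * erfc((theta - h) / (np.sqrt(2.0) * sigma))

# The two agree up to Monte Carlo error.
assert abs(p_empirical - p_analytic) < 5e-3
```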

3. Update order (NEST semantics)

Each simulation step follows the same ordering as NEST’s binary_neuron::update():

  1. Accumulate delta inputs into persistent \(h\).

  2. Read current input \(c\) for the present step.

  3. If t + dt > t_next (strict inequality), sample a new binary state from \(g(h+c)\).

  4. If an update happened, advance t_next by Exp(1) * tau_m.

As in NEST, gain values are not explicitly clipped before being compared against uniform random numbers. The comparison itself implies effective clipping: gain values at or below 0 yield probability 0, and values at or above 1 yield probability 1.
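The four-step ordering above can be sketched as a standalone loop (a hypothetical helper with dimensionless parameters; the actual class operates on unit-carrying, batched arrays):

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)

def step(h, y, t_next, delta_input, c, t, dt,
         tau_m=10.0, theta=0.0, sigma=1.0):
    h = h + delta_input                    # 1. accumulate delta inputs into h
    total = h + c                          # 2. read current input c
    if t + dt > t_next:                    # 3. strict inequality, as in NEST
        p = 0.5 * erfc(-(total - theta) / (np.sqrt(2.0) * sigma))
        y = float(rng.uniform() < p)       #    sample new binary state
        t_next = t_next + rng.exponential() * tau_m  # 4. schedule next update
    return h, y, t_next

h, y, t_next = step(0.0, 0.0, 0.05, 0.0, 2.0, t=0.0, dt=0.1)
```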

4. Assumptions, constraints, and computational implications

  • The model assumes unit-compatible parameters and broadcast-compatible shapes against self.varshape.

  • tau_m must be strictly positive (enforced in __init__()).

  • Per-step compute is \(O(\prod \mathrm{varshape})\) with vectorized elementwise operations plus random sampling overhead.

  • Stochastic update times are sampled from an exponential distribution, so the inter-update intervals are memoryless (Poisson process property).

  • When stochastic_update=False, the model updates at every time step but retains stochastic state transitions according to the same gain function.
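The memoryless scheduling in the bullets above can be illustrated directly: inter-update intervals drawn as `Exp(1) * tau_m` have mean `tau_m` (an assumed standalone sketch with dimensionless time):

```python
import numpy as np

rng = np.random.default_rng(0)
tau_m = 10.0  # ms

# Inter-update intervals: standard exponential scaled by tau_m.
intervals = rng.exponential(size=100_000) * tau_m
print(intervals.mean())  # close to tau_m
```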

Parameters:
  • in_size (Size) – Population shape specification. All neuron parameters are broadcast to self.varshape derived from in_size.

  • tau_m (ArrayLike, optional) – Mean inter-update interval \(\tau_m\) in ms; scalar or array broadcastable to self.varshape. Must be strictly positive. Default is 10.0 * u.ms.

  • theta (ArrayLike, optional) – Threshold \(\theta\) in mV; scalar or array broadcastable to self.varshape. Default is 0.0 * u.mV.

  • sigma (ArrayLike, optional) – Gain/noise parameter \(\sigma\) in mV; scalar or array broadcastable to self.varshape. Larger values produce smoother gain transitions. Default is 1.0 * u.mV.

  • y_initializer (Callable, optional) – Initializer for initial binary state y in init_state(). Output should be float64 values (typically 0.0 or 1.0) shape-compatible with self.varshape (and optional batch prefix). Default is braintools.init.Constant(0.0).

  • stochastic_update (bool, optional) – If True (default), use Poisson update scheduling as in NEST: inter-update intervals are drawn from an exponential distribution with mean tau_m. If False, update at every time step while retaining stochastic state sampling from the same gain function. Default is True.

  • rng_seed (int, optional) – Seed for internal random sampling (both for uniform and exponential random variables). Different seeds produce different random sequences. Default is 0.

  • name (str or None, optional) – Optional node name.

Parameter Mapping

Table 18 Parameter mapping to model symbols#

Parameter          | Type / shape / unit                            | Default       | Math symbol | Semantics
-------------------|------------------------------------------------|---------------|-------------|------------------------------------------------------
in_size            | Size; scalar/tuple                             | required      | –           | Defines population/state shape self.varshape.
tau_m              | ArrayLike, broadcastable to self.varshape (ms) | 10.0 * u.ms   | \(\tau_m\)  | Mean Poisson inter-update interval.
theta              | ArrayLike, broadcastable to self.varshape (mV) | 0.0 * u.mV    | \(\theta\)  | Activation threshold in gain function.
sigma              | ArrayLike, broadcastable to self.varshape (mV) | 1.0 * u.mV    | \(\sigma\)  | Noise standard deviation / gain slope parameter.
y_initializer      | Callable                                       | Constant(0.0) | –           | Initializes binary output state y.
stochastic_update  | bool                                           | True          | –           | Enables Poisson-timed updates vs. every-step updates.
rng_seed           | int                                            | 0             | –           | Random number generator seed.
name               | str or None                                    | None          | –           | Optional node identifier.

Raises:
  • ValueError – If tau_m contains any non-positive values (checked in __init__()), or if parameter initialization or broadcasting fails.

  • TypeError – If provided values are not compatible with expected units/types (ms, mV, or callable initializer).

  • KeyError – At runtime, if required simulation context entries (t or dt) are missing when update() is called (only when stochastic_update=True).

  • AttributeError – If update() is called before init_state() creates required state variables.

y#

Binary output state (float64 values 0.0 or 1.0).

Type:

ShortTermState

h#

Persistent summed synaptic input.

Type:

ShortTermState

t_next#

Next stochastic update time (only if stochastic_update=True).

Type:

ShortTermState

rng_key#

JAX PRNGKey for random sampling (internal state).

Type:

ShortTermState

Notes

  • State variables are y, h, rng_key, and optionally t_next (when stochastic_update=True).

  • In NEST, binary-neuron communication encodes state transitions using spike multiplicity (double spike for up-transition, single spike for down-transition). Here, equivalent effects are represented through delta inputs added to \(h\).

  • The gain function is evaluated at \(h + c\), where \(c\) is the sum of current inputs for the present step.

  • Random sampling uses JAX’s functional random number generation with state splitting for reproducibility and compatibility with JAX transformations.
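In principle, the functional RNG pattern mentioned in the last bullet looks like the following (a generic JAX sketch, not the module's actual internals):

```python
import jax

key = jax.random.PRNGKey(0)
# Split the key so uniform and exponential draws use independent streams.
key, sub_uniform, sub_exp = jax.random.split(key, 3)
u = jax.random.uniform(sub_uniform, shape=(10,))   # for state sampling
e = jax.random.exponential(sub_exp, shape=(10,))   # for update scheduling
# Carrying `key` forward keeps sampling reproducible and compatible
# with JAX transformations such as jit and vmap.
```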

Examples

>>> import brainpy
>>> import brainstate
>>> import saiunit as u
>>> with brainstate.environ.context(dt=0.1 * u.ms):
...     neu = brainpy.state.erfc_neuron(in_size=10, tau_m=5.0 * u.ms)
...     neu.init_state(batch_size=1)
...     with brainstate.environ.context(t=0.0 * u.ms):
...         out = neu.update(x=2.0 * u.mV)
...     _ = out.shape
>>> with brainstate.environ.context(dt=0.1 * u.ms):
...     neu = brainpy.state.erfc_neuron(
...         in_size=(2, 3),
...         theta=1.0 * u.mV,
...         sigma=0.5 * u.mV,
...         stochastic_update=False
...     )
...     neu.init_state()
...     with brainstate.environ.context(t=0.0 * u.ms):
...         _ = neu.update(x=1.5 * u.mV)

See also

ginzburg_neuron

Binary neuron with sigmoidal/affine gain function

mcculloch_pitts_neuron

Binary neuron with hard threshold

init_state(**kwargs)[source]#

Initialize binary state, input accumulator, and update timing.

Parameters:

**kwargs – Unused compatibility parameters accepted by the base-state API.

Raises:
  • ValueError – If initializer outputs cannot be broadcast to target state shape.

  • TypeError – If initializer values are incompatible with required numeric/unit conversions.

update(x=Quantity(0., 'mV'))[source]#

Advance the binary neuron by one simulation step.

Follows NEST update ordering:

  1. Integrate delta inputs into persistent h.

  2. Compute total input h + c where c is current input.

  3. Evaluate gain function \(g(h + c)\).

  4. If Poisson-scheduled update is due (t + dt > t_next), sample new binary state from \(g(h + c)\) and schedule next update.

  5. Return updated binary output y.

Parameters:

x (ArrayLike, optional) – External current input in mV for this step. Combined with additional current sources from sum_current_inputs(). Default is 0.0 * u.mV.

Returns:

out – Binary output state self.y.value with shape self.varshape (or (batch_size,) + self.varshape if batched). Values are float64 (0.0 or 1.0) wrapped in jax.lax.stop_gradient to prevent gradient flow through stochastic sampling.

Return type:

jax.Array

Raises:
  • KeyError – If simulation context does not provide required entries t or dt when stochastic_update=True.

  • AttributeError – If required states are missing because init_state() has not been called.

  • TypeError – If input/state values are not unit-compatible with expected mV arithmetic.

Notes

  • When stochastic_update=True, updates only occur at Poisson-distributed times (mean interval tau_m). Between updates, y remains constant.

  • When stochastic_update=False, the binary state is resampled at every time step according to the same gain function.

  • The gain function is never explicitly clipped; effective clipping occurs through comparison with uniform random numbers: if \(g(h + c) < 0\), firing probability is 0; if \(g(h + c) > 1\), firing probability is 1.

  • All random sampling uses functional JAX RNG state splitting for reproducibility and JAX transformation compatibility.