spike_train_injector#
- class brainpy.state.spike_train_injector(in_size=1, spike_times=(), spike_multiplicities=(), precise_times=False, allow_offgrid_times=False, shift_now_spikes=False, start=Quantity(0., 'ms'), stop=None, origin=Quantity(0., 'ms'), name=None)#
Spike train injector – NEST-compatible event source device.
Emit deterministic spike events at configured times with optional per-time multiplicity, then gate output by a half-open activity window. Unlike spike_generator, which selects the last matching weight, this device accumulates all multiplicities that match the current step, making it suitable for injecting pre-recorded spike trains where multiple events may be scheduled at the same simulation time.

1. Model equations
Let \(\{t_i\}_{i=1}^{K}\) be configured spike times in ms after conversion from unitful or unitless inputs. Let \(m_i\) denote the multiplicity (spike_multiplicities) when provided, otherwise \(m_i = 1\). At simulation time \(t\) with step \(\Delta t\) (both in ms), define the matching indicator

\[q_i(t) = \mathbf{1}\!\left[|t - t_i| < \frac{\Delta t}{2}\right].\]

The scalar emitted spike count before window gating is

\[a(t) = \sum_{i=1}^{K} m_i\, q_i(t).\]

The activity gate is

\[g(t) = \mathbf{1}\!\left[t \ge t_0 + t_{\mathrm{start,rel}}\right] \cdot \mathbf{1}\!\left[t < t_0 + t_{\mathrm{stop,rel}}\right],\]

where the second indicator is omitted when stop is None. The returned output is broadcast to the node shape self.varshape:

\[y(t) = g(t)\,a(t)\,\mathbf{1}_{\mathrm{varshape}}.\]

2. Timing derivation, assumptions, and constraints
The \(|t - t_i| < \Delta t / 2\) rule corresponds to nearest-grid assignment under uniform-step simulation. For exact half-step offsets, the strict inequality means no match at that boundary. If multiple spike_times entries map to the same step, their multiplicities are summed, giving \(a(t) > 1\) for bursts.

Enforced constraints:

- spike_times must be non-descending after conversion.
- spike_multiplicities must be empty or satisfy len(spike_multiplicities) == len(spike_times).
- precise_times=True cannot be combined with allow_offgrid_times=True or shift_now_spikes=True.
Implementation-specific constraints:

- The NEST option flags precise_times, allow_offgrid_times, and shift_now_spikes are accepted for API compatibility, but the current update rule always uses the fixed tolerance test above regardless of their values.
- NEST documentation states that spikes should be strictly in the future. This implementation does not perform explicit future-time validation in __init__() and instead relies on runtime matching combined with active-window gating.
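The model equations and constraints above can be condensed into a short plain-NumPy sketch of the update rule. This is an illustrative model only, not the library implementation; the function name and argument layout are hypothetical.

```python
# Illustrative NumPy model of the injector's per-step output rule
# (hypothetical helper, not brainpy library code).
import numpy as np

def injector_output(t, dt, spike_times, multiplicities=None,
                    origin=0.0, start=0.0, stop=None):
    """Scalar output at time t (all times in ms, plain floats)."""
    times = np.asarray(spike_times, dtype=float)
    m = (np.ones_like(times) if multiplicities is None
         else np.asarray(multiplicities, dtype=float))
    # Matching indicator q_i(t) = 1[|t - t_i| < dt/2]
    q = np.abs(t - times) < dt / 2
    a = np.sum(m * q)                  # accumulate ALL matching multiplicities
    # Half-open activity window [origin + start, origin + stop)
    g = t >= origin + start
    if stop is not None:
        g = g and t < origin + stop
    return a if g else 0.0

times = [1.0, 2.0, 2.0]                # two entries share the same step
mult = [1, 2, 3]
print(injector_output(2.0, 0.1, times, mult, stop=5.0))  # 2 + 3 = 5.0
print(injector_output(2.0, 0.1, times, mult, stop=2.0))  # exclusive stop -> 0.0
```

Note how the exclusive upper bound zeroes the output even though two spikes match at t = 2 ms, mirroring the \(g(t)\) gate in the equations above.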
3. Computational implications

Each update() call uses u.math.searchsorted() to locate the open interval \((t - \Delta t/2,\, t + \Delta t/2)\) in the sorted spike_times array, yielding a range \([\textit{idx\_lo}, \textit{idx\_hi})\) of matching indices. A Boolean mask over \(\{0,\ldots,K-1\}\) is then used to sum the multiplicities of all matching entries. Per-call complexity is \(O(\log K + K + \prod \mathrm{varshape})\). The update() method is fully compatible with jax.jit: no Python control flow branches on traced values.

- Parameters:
  - in_size (Size, optional) – Output size/shape consumed by brainstate.nn.Dynamics. The emitted array has shape self.varshape derived from in_size. Default is 1.
  - spike_times (Sequence, optional) – Sequence of spike times with length K. Entries may be unitful times (typically saiunit ms quantities) or bare numerics interpreted as ms. Passed directly to u.math.asarray(), which validates unit consistency across all entries. Must be non-descending. Duplicate times are allowed and their multiplicities are accumulated. Default is ().
  - spike_multiplicities (Sequence, optional) – Sequence of integer multiplicities with length K matching spike_times, or empty to use implicit unit multiplicities (\(m_i = 1\)). Entries are converted with int(m) and stored as a dimensionless JAX array; accumulated across all indices matching the same step. Default is ().
  - precise_times (bool, optional) – NEST compatibility flag for sub-step precise timing. Stored and validated against allow_offgrid_times / shift_now_spikes but not used to alter runtime matching in this implementation. Default is False.
  - allow_offgrid_times (bool, optional) – NEST compatibility flag permitting off-grid spike times. Stored and validated but not used to alter runtime matching in this implementation. Default is False.
  - shift_now_spikes (bool, optional) – NEST compatibility flag for shifting spikes that would fire at the current step to the next step. Stored and validated but not used to alter runtime matching in this implementation. Default is False.
  - start (ArrayLike, optional) – Relative activation time \(t_{\mathrm{start,rel}}\) (typically ms), initialized via braintools.init.param(). The effective inclusive lower bound of the active window is origin + start. Default is 0. * u.ms.
  - stop (ArrayLike or None, optional) – Relative deactivation time \(t_{\mathrm{stop,rel}}\) (typically ms), initialized via braintools.init.param() when not None. The effective exclusive upper bound is origin + stop. None disables the upper bound. Default is None.
  - origin (ArrayLike, optional) – Global time origin \(t_0\) (typically ms) added to both start and stop to obtain absolute window bounds. Default is 0. * u.ms.
  - name (str or None, optional) – Optional node name forwarded to brainstate.nn.Dynamics.
Parameter Mapping

Table 29 Parameter mapping to model symbols#

| Parameter            | Default   | Math symbol                | Semantics                                               |
|----------------------|-----------|----------------------------|---------------------------------------------------------|
| spike_times          | ()        | \(t_i\)                    | Spike schedule; matched by \|t - t_i\| < dt/2.          |
| spike_multiplicities | ()        | \(m_i\)                    | Per-time spike count; empty means implicit \(m_i = 1\). |
| start                | 0. * u.ms | \(t_{\mathrm{start,rel}}\) | Relative inclusive lower bound of active window.        |
| stop                 | None      | \(t_{\mathrm{stop,rel}}\)  | Relative exclusive upper bound; None means unbounded.   |
| origin               | 0. * u.ms | \(t_0\)                    | Global offset applied to start and stop.                |

- Raises:
  - ValueError – If precise_times=True is combined with allow_offgrid_times=True or shift_now_spikes=True, if spike_times is not non-descending after conversion, or if spike_multiplicities is non-empty and has a different length than spike_times.
  - TypeError – If u.math.asarray() detects unit inconsistency across entries, or if unitful/unitless arithmetic is invalid during time-window comparisons.
  - KeyError – At update time, if required simulation context entries (e.g. 't' or dt) are absent from brainstate.environ.
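The constructor-time ValueError conditions listed above can be sketched as a standalone validation helper. This is an illustrative reconstruction under the stated constraints, not the library's actual validation code; the function name is hypothetical.

```python
# Hypothetical sketch of the constructor-time validation rules
# (not brainpy library code).
def validate(spike_times, spike_multiplicities,
             precise_times, allow_offgrid_times, shift_now_spikes):
    # precise_times is mutually exclusive with the other two NEST flags.
    if precise_times and (allow_offgrid_times or shift_now_spikes):
        raise ValueError("precise_times cannot be combined with "
                         "allow_offgrid_times or shift_now_spikes")
    # Times must be non-descending (duplicates are allowed).
    if any(b < a for a, b in zip(spike_times, spike_times[1:])):
        raise ValueError("spike_times must be non-descending")
    # Multiplicities: empty, or exactly one per spike time.
    if spike_multiplicities and len(spike_multiplicities) != len(spike_times):
        raise ValueError("spike_multiplicities length must match spike_times")

# A valid configuration passes silently:
validate([1.0, 1.0, 2.0], [1, 2, 3], False, False, False)
```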
Notes
This device does not accept incoming synaptic or current connections; it only emits scheduled events. The output is dimensionless (spike count per step) and is typically consumed by a downstream synapse model that scales by connection weight.
The key behavioral difference from spike_generator is accumulation: when two entries in spike_times round to the same step, spike_train_injector sums their multiplicities while spike_generator retains only the last matching weight. Use spike_train_injector when replaying recorded spike trains that may contain bursts, and spike_generator when a single weighted event per step is intended.

Spike times should ideally be aligned to the simulation grid (multiples of dt) to avoid off-by-one steps. The tolerance dt/2 covers one-ULP rounding for grid-aligned times in typical float64 arithmetic.

See also
- spike_generator – Deterministic spike device with per-spike weights (last-match semantics).
- dc_generator – Constant-current stimulation device.
- ac_generator – Sinusoidal current stimulation device.
- step_current_generator – Piecewise-constant current stimulation device.
Examples
Inject a burst of five spikes at t = 2 ms (two entries map to the same step; multiplicities are accumulated to give a = 2 + 3 = 5):

>>> import brainpy
>>> import brainstate
>>> import saiunit as u
>>> with brainstate.environ.context(dt=0.1 * u.ms):
...     inj = brainpy.state.spike_train_injector(
...         spike_times=[1.0 * u.ms, 2.0 * u.ms, 2.0 * u.ms],
...         spike_multiplicities=[1, 2, 3],
...         start=0.0 * u.ms,
...         stop=5.0 * u.ms,
...     )
...     with brainstate.environ.context(t=2.0 * u.ms):
...         out = inj.update()
...         _ = out.shape
Inject a single spike at t = 10 ms using NEST's precise_times flag for API compatibility (sub-step resolution not enforced here):

>>> import brainpy
>>> import brainstate
>>> import saiunit as u
>>> with brainstate.environ.context(dt=0.1 * u.ms):
...     inj = brainpy.state.spike_train_injector(
...         spike_times=[10.0 * u.ms],
...         precise_times=True,
...     )
...     with brainstate.environ.context(t=10.0 * u.ms):
...         out = inj.update()
...         _ = out.shape
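The note above claims the dt/2 tolerance absorbs floating-point rounding for grid-aligned times. That claim can be checked numerically with a pure NumPy sketch, independent of the library, by accumulating the time grid step by step (as a simulator would) so that float64 drift builds up:

```python
# Numeric check of the dt/2 tolerance for grid-aligned spike times
# (illustrative NumPy sketch, not library code).
import numpy as np

dt = 0.1
n_steps = 1000
# Accumulated grid: t_grid[k] ~ (k + 1) * dt, with float64 rounding drift.
t_grid = np.cumsum(np.full(n_steps, dt))
spike_time = 43.7                  # grid-aligned: 437 * dt
err = np.abs(t_grid - spike_time)
matches = err < dt / 2
print(matches.sum())               # exactly one step matches despite drift
print(err.min())                   # drift is many orders below dt/2
```

Neighboring steps differ from the spike time by roughly dt, far above the drift, so exactly one step matches even though the accumulated grid value is not bit-identical to 43.7.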
- update()[source]#
Compute the accumulated spike output for the current simulation step.
The implementation is fully compatible with jax.jit: spike-time matching uses u.math.searchsorted() on the static spike_times array while t and dt remain traced values throughout. The multiplicity sum uses a Boolean mask with no Python branching over traced values.

- Returns:
  out – Float-valued JAX array with shape self.varshape. Output semantics:

  - 0 when outside [origin + start, origin + stop) (or [origin + start, +inf) if stop is None),
  - 0 when active but no configured spike satisfies |t - t_i| < dt/2,
  - the accumulated integer multiplicity \(a(t) = \sum_i m_i\, q_i(t)\) when active and one or more spikes match.
- Return type:
  jax.Array
- Raises:
  KeyError – If required simulation context entries are missing from brainstate.environ (e.g. 't' or dt).
Notes
Matching uses the open interval \((t - \Delta t/2,\, t + \Delta t/2)\) located via two u.math.searchsorted() calls:

- idx_lo = searchsorted(times, t - dt/2, side='right') – first index strictly greater than the lower bound.
- idx_hi = searchsorted(times, t + dt/2, side='left') – first index at or above the upper bound.
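The two-call interval location can be demonstrated with np.searchsorted in place of u.math.searchsorted (illustrative values; the masked sum stands in for the library's jit-friendly accumulation):

```python
# Sketch of the two-searchsorted interval location described above,
# using NumPy instead of u.math (illustrative, not library code).
import numpy as np

times = np.array([1.0, 2.0, 2.0, 3.5])   # sorted spike times (ms)
mult = np.array([1, 2, 3, 1])
t, dt = 2.0, 0.1

# Open interval (t - dt/2, t + dt/2):
idx_lo = np.searchsorted(times, t - dt / 2, side='right')  # first index > lower bound
idx_hi = np.searchsorted(times, t + dt / 2, side='left')   # first index >= upper bound

# A Boolean mask over all indices avoids dynamic slicing (jit-friendly):
idx = np.arange(len(times))
mask = (idx >= idx_lo) & (idx < idx_hi)
a = np.sum(mult * mask)
print(idx_lo, idx_hi, a)   # 1 3 5
```

The fixed-size mask is what keeps the computation traceable: the array shapes never depend on the traced values t and dt, only the mask contents do.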
A Boolean mask over indices in [idx_lo, idx_hi) selects all matching entries; their multiplicities (or 1s if none configured) are summed to obtain the scalar count \(a(t)\). Start is inclusive and stop is exclusive, matching NEST semantics.

Unlike spike_generator.update(), which keeps only the last matching weight, this method accumulates all matching multiplicities. A burst of three spikes scheduled at the same time thus returns 3 (or the sum of their individual multiplicities).

See also
- spike_train_injector – Class-level parameter definitions and equations.
- spike_generator.update – Weight-selection (last-match) update rule.
- dc_generator.update – Windowed constant-current update rule.
- step_current_generator.update – Windowed piecewise-constant update rule.