pp_cond_exp_mc_urbanczik#
- class brainpy.state.pp_cond_exp_mc_urbanczik(in_size, t_ref=Quantity(3., 'ms'), phi_max=0.15, rate_slope=0.5, beta=0.3333333333333333, theta=-55.0, g_sp=Quantity(600., 'nS'), g_ps=Quantity(0., 'nS'), soma_g_L=Quantity(30., 'nS'), soma_C_m=Quantity(300., 'pF'), soma_E_L=Quantity(-70., 'mV'), soma_E_ex=Quantity(0., 'mV'), soma_E_in=Quantity(-75., 'mV'), soma_tau_syn_ex=Quantity(3., 'ms'), soma_tau_syn_in=Quantity(3., 'ms'), soma_I_e=Quantity(0., 'pA'), dend_g_L=Quantity(30., 'nS'), dend_C_m=Quantity(300., 'pF'), dend_E_L=Quantity(-70., 'mV'), dend_E_ex=Quantity(0., 'mV'), dend_E_in=Quantity(0., 'mV'), dend_tau_syn_ex=Quantity(3., 'ms'), dend_tau_syn_in=Quantity(3., 'ms'), dend_I_e=Quantity(0., 'pA'), gsl_error_tol=0.001, rng_key=None, spk_fun=ReluGrad(alpha=0.3, width=1.0), spk_reset='hard', name=None)#
Two-compartment point process neuron with conductance-based synapses for Urbanczik-Senn learning.
pp_cond_exp_mc_urbanczik implements a two-compartment spiking neuron model that combines stochastic point-process spike generation with dendritic prediction-error computation for supervised learning. The soma uses conductance-based synapses, while the dendrite uses current-based synapses. At each time step, the model computes a learning signal (δΠ) from the mismatch between actual somatic spiking and the dendritic prediction, enabling gradient-based synaptic plasticity. This is a brainpy.state re-implementation of the NEST simulator model described in Urbanczik & Senn (2014) [1], using NEST-standard parameterization and numerical integration methods.
- Parameters:
in_size (Size) – Population shape (tuple of ints or single int). Determines neuron array dimensions. Required parameter.
t_ref (ArrayLike, optional) – Refractory period duration (Quantity, default: 3.0 ms). Neurons cannot spike again within this interval after a spike. If 0, there is no refractory period and Poisson spike generation is used.
phi_max (float, optional) – Maximum firing rate in kHz (dimensionless, default: 0.15). Upper bound of the rate function φ(u). Typical range: 0.1-0.2 kHz (100-200 Hz).
rate_slope (float, optional) – Rate function slope parameter k (dimensionless, default: 0.5). Controls the steepness of the sigmoid rate function. Must be non-negative.
beta (float, optional) – Rate function steepness in 1/mV (dimensionless, default: 1/3 ≈ 0.333). Higher values create sharper transitions around threshold theta.
theta (float, optional) – Rate function threshold potential in mV (numeric, default: -55.0). Membrane potential at which the firing rate is approximately half-maximal.
g_sp (ArrayLike, optional) – Soma-to-dendrite coupling conductance (Quantity, default: 600.0 nS). Forward coupling from dendrite voltage to soma dynamics. Typically the dominant coupling.
g_ps (ArrayLike, optional) – Dendrite-to-soma coupling conductance (Quantity, default: 0.0 nS). Backward coupling from soma voltage to dendritic dynamics. Usually zero in this model.
soma_g_L (ArrayLike, optional) – Somatic leak conductance (Quantity, default: 30.0 nS). Controls the somatic resting potential and membrane time constant.
soma_C_m (ArrayLike, optional) – Somatic membrane capacitance (Quantity, default: 300.0 pF). Together with the leak conductance determines the somatic time constant τ = C_m / g_L.
soma_E_L (ArrayLike, optional) – Somatic leak reversal potential (Quantity, default: -70.0 mV). Resting potential of the soma in the absence of inputs.
soma_E_ex (ArrayLike, optional) – Somatic excitatory reversal potential (Quantity, default: 0.0 mV). Driving force for excitatory conductance-based synapses.
soma_E_in (ArrayLike, optional) – Somatic inhibitory reversal potential (Quantity, default: -75.0 mV). Driving force for inhibitory conductance-based synapses.
soma_tau_syn_ex (ArrayLike, optional) – Somatic excitatory synaptic time constant (Quantity, default: 3.0 ms). Decay time constant for the excitatory conductance.
soma_tau_syn_in (ArrayLike, optional) – Somatic inhibitory synaptic time constant (Quantity, default: 3.0 ms). Decay time constant for the inhibitory conductance.
soma_I_e (ArrayLike, optional) – Somatic constant external current (Quantity, default: 0.0 pA). DC bias current applied to the soma at all times.
dend_g_L (ArrayLike, optional) – Dendritic leak conductance (Quantity, default: 30.0 nS). Controls the dendritic resting potential and membrane time constant.
dend_C_m (ArrayLike, optional) – Dendritic membrane capacitance (Quantity, default: 300.0 pF). Together with the leak conductance determines the dendritic time constant τ = C_m / g_L.
dend_E_L (ArrayLike, optional) – Dendritic leak reversal potential (Quantity, default: -70.0 mV). Resting potential of the dendrite in the absence of inputs.
dend_E_ex (ArrayLike, optional) – Dendritic excitatory reversal potential (Quantity, default: 0.0 mV). Kept for documentation; current-based synapses do not use reversal potentials.
dend_E_in (ArrayLike, optional) – Dendritic inhibitory reversal potential (Quantity, default: 0.0 mV, note: NOT -75.0 mV). Matches the NEST default. Kept for documentation only.
dend_tau_syn_ex (ArrayLike, optional) – Dendritic excitatory synaptic time constant (Quantity, default: 3.0 ms). Decay time constant for the excitatory current.
dend_tau_syn_in (ArrayLike, optional) – Dendritic inhibitory synaptic time constant (Quantity, default: 3.0 ms). Decay time constant for the inhibitory current.
dend_I_e (ArrayLike, optional) – Dendritic constant external current (Quantity, default: 0.0 pA). DC bias current applied to the dendrite at all times.
gsl_error_tol (ArrayLike, optional) – Unitless local RKF45 error tolerance (default: 1e-3). Must be strictly positive.
rng_key (jax.Array, optional) – JAX PRNG key for stochastic spike generation (default: None). If None, a default key (PRNGKey(0)) is used. For reproducibility, provide an explicit key.
spk_fun (Callable, optional) – Surrogate gradient function for spike differentiation (default: braintools.surrogate.ReluGrad()). Used in backpropagation through spikes.
spk_reset (str, optional) – Spike reset mode (default: 'hard'). Options: 'hard' (stop_gradient) or 'soft' (subtract threshold). Note: this model has NO voltage reset after spikes, so this parameter has limited effect.
name (str, optional) – Module name (default: None). Used for logging and identification.
Parameter Mapping
This table maps NEST C++ parameter names to brainpy.state constructor arguments:
NEST Parameter | brainpy.state Parameter | Default
t_ref | t_ref | 3.0 ms
phi_max | phi_max | 0.15 kHz
rate_slope | rate_slope | 0.5
beta | beta | 0.333 (1/mV)
theta | theta | -55.0 mV
g_sp | g_sp | 600.0 nS
g_ps | g_ps | 0.0 nS
g_L (soma) | soma_g_L | 30.0 nS
C_m (soma) | soma_C_m | 300.0 pF
E_L (soma) | soma_E_L | -70.0 mV
E_ex (soma) | soma_E_ex | 0.0 mV
E_in (soma) | soma_E_in | -75.0 mV
tau_syn_ex (soma) | soma_tau_syn_ex | 3.0 ms
tau_syn_in (soma) | soma_tau_syn_in | 3.0 ms
I_e (soma) | soma_I_e | 0.0 pA
g_L (dendrite) | dend_g_L | 30.0 nS
C_m (dendrite) | dend_C_m | 300.0 pF
E_L (dendrite) | dend_E_L | -70.0 mV
E_ex (dendrite) | dend_E_ex | 0.0 mV
E_in (dendrite) | dend_E_in | 0.0 mV
tau_syn_ex (dendrite) | dend_tau_syn_ex | 3.0 ms
tau_syn_in (dendrite) | dend_tau_syn_in | 3.0 ms
I_e (dendrite) | dend_I_e | 0.0 pA
Mathematical Formulation
1. Compartment Structure
The neuron consists of two compartments:
Soma (s): Conductance-based synapses, stochastic spike generation
Dendrite (d, also labeled p for “proximal”): Current-based synapses, predictive signal for learning
2. Somatic Dynamics
The somatic membrane potential evolves according to:
\[C_\mathrm{m}^s \frac{dV^s}{dt} = -g_\mathrm{L}^s (V^s - E_\mathrm{L}^s) - g_\mathrm{ex}^s (V^s - E_\mathrm{ex}^s) - g_\mathrm{in}^s (V^s - E_\mathrm{in}^s) + g_\mathrm{sp} (V^p - V^s) + I_\mathrm{stim}^s + I_\mathrm{e}^s\]
Somatic synaptic conductances decay exponentially:
\[\frac{dg_\mathrm{ex}^s}{dt} = -\frac{g_\mathrm{ex}^s}{\tau_\mathrm{syn,ex}^s}, \qquad \frac{dg_\mathrm{in}^s}{dt} = -\frac{g_\mathrm{in}^s}{\tau_\mathrm{syn,in}^s}\]
3. Dendritic Dynamics
The dendritic membrane potential evolves according to:
\[C_\mathrm{m}^p \frac{dV^p}{dt} = -g_\mathrm{L}^p (V^p - E_\mathrm{L}^p) + I_\mathrm{ex}^p + I_\mathrm{in}^p + g_\mathrm{ps} (V^s - V^p)\]
Dendritic synaptic currents (note: current-based, not conductance) decay exponentially:
\[\frac{dI_\mathrm{ex}^p}{dt} = -\frac{I_\mathrm{ex}^p}{\tau_\mathrm{syn,ex}^p}, \qquad \frac{dI_\mathrm{in}^p}{dt} = -\frac{I_\mathrm{in}^p}{\tau_\mathrm{syn,in}^p}\]
4. Stochastic Spike Generation
Spikes are generated stochastically based on the instantaneous rate function:
\[\text{rate}(t) = 1000 \cdot \phi(V^s(t)) \quad [\text{Hz}]\]
where:
\[\phi(u) = \frac{\phi_\mathrm{max}}{1 + k \cdot \exp(\beta (\theta - u))}\]
With refractory period (t_ref > 0): At most one spike per time step. The spike probability is \(P_{\mathrm{spike}} = 1 - \exp(-\text{rate} \cdot dt \cdot 10^{-3})\); a uniform random number \(r \sim U(0,1)\) is compared to this probability.
Without refractory period (t_ref == 0): The number of spikes is drawn from a Poisson distribution with mean \(\lambda = \text{rate} \cdot dt \cdot 10^{-3}\).
Important: There is NO membrane potential reset after a spike. The voltage continues to evolve according to the differential equations.
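The rate function and per-step spike probability above can be sketched with plain floats; this is a standalone illustration of the formulas (function names and the scalar interface are assumptions, not the library API), using the documented defaults:

```python
import math

def phi(u, phi_max=0.15, k=0.5, beta=1.0 / 3.0, theta=-55.0):
    """Instantaneous rate phi(u) in kHz, for membrane potential u in mV."""
    return phi_max / (1.0 + k * math.exp(beta * (theta - u)))

def spike_probability(u, dt=0.1):
    """Per-step spike probability for t_ref > 0: 1 - exp(-rate * dt * 1e-3).

    rate is in Hz (1000 * phi), dt is the time step in ms.
    """
    rate_hz = 1000.0 * phi(u)
    return 1.0 - math.exp(-rate_hz * dt * 1e-3)
```

At u = theta the rate is phi_max / (1 + k), and the probability stays strictly below 1 for any finite rate, so at most one spike per step is well defined.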
5. Urbanczik-Senn Learning Signal
At each time step, the model computes a learning signal for synaptic plasticity. The dendritic compartment predicts the somatic potential via:
\[V^*_W = \frac{E_\mathrm{L}^s \cdot g_\mathrm{L}^s + V^p \cdot g_\mathrm{sp}}{g_\mathrm{sp} + g_\mathrm{L}^s}\]
This represents the steady-state somatic voltage given the current dendritic voltage, assuming all synaptic inputs are zero.
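As a minimal plain-float sketch (the name v_star is hypothetical; defaults follow the parameter table, in mV and nS), the prediction is a conductance-weighted average of the somatic leak reversal and the dendritic voltage:

```python
def v_star(V_d, soma_E_L=-70.0, soma_g_L=30.0, g_sp=600.0):
    """Dendritic prediction V*_W of the somatic voltage (mV).

    Conductance-weighted average of the somatic leak reversal and the
    dendritic potential, per the formula above.
    """
    return (soma_E_L * soma_g_L + V_d * g_sp) / (g_sp + soma_g_L)
```

With the default conductances the dendrite dominates (g_sp >> g_L), so V*_W tracks V_d closely.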
The error signal (prediction error) at time step \(t\) is:
\[\delta\Pi(t) = \left(n_\mathrm{spikes}(t) - \phi(V^*_W(t)) \cdot dt\right) \cdot h(V^*_W(t))\]
where:
\(n_{\mathrm{spikes}}(t)\) is the number of actual spikes emitted (0 or 1 with refractory period, ≥0 without)
\(\phi(V^*_W(t)) \cdot dt\) is the expected spike count based on prediction
\(h(u)\) is the learning modulation function:
\[h(u) = \frac{15 \cdot \beta}{1 + \frac{1}{k} \cdot \exp(-\beta (\theta - u))}\]
The history of \((t, \delta\Pi)\) pairs is stored and accessible via get_urbanczik_history() for use by plasticity rules.
6. Receptor Types and Synaptic Input Addressing
Synaptic inputs are routed to specific compartments and receptor types via labeled input channels:
Receptor Label | Port | Description
soma_exc | 1 | Excitatory conductance input to soma (nS)
soma_inh | 2 | Inhibitory conductance input to soma (nS)
dend_exc | 3 | Excitatory current input to dendrite (pA)
dend_inh | 4 | Inhibitory current input to dendrite (pA)
soma (current) | 5 | Direct current injection to soma (pA)
dend (current) | 6 | Direct current injection to dendrite (pA)
Implementation Note: In brainpy.state, use add_delta_input() with labels 'soma_exc', 'soma_inh', 'dend_exc', 'dend_inh' for synaptic spikes. Use add_current_input() with labels 'soma' and 'dend' for current injections. All synaptic weights must be positive; excitation vs. inhibition is determined by the receptor label.
7. Numerical Integration
The 6-dimensional ODE system (V_s, g_ex_s, g_in_s, V_d, I_ex_d, I_in_d) is integrated using an adaptive RKF45 Runge-Kutta-Fehlberg integrator that is fully JAX-compatible and differentiable.
Update Order per Time Step:
Integrate ODEs over interval \((t, t + dt]\) using current stimulus currents from previous step
Add arriving synaptic spike inputs (conductance/current jumps):
Soma: \(g_{\mathrm{ex}}^s \mathrel{+}= \Delta g_{\mathrm{ex}}\), \(g_{\mathrm{in}}^s \mathrel{+}= \Delta g_{\mathrm{in}}\)
Dendrite: \(I_{\mathrm{ex}}^p \mathrel{+}= \Delta I_{\mathrm{ex}}\), \(I_{\mathrm{in}}^p \mathrel{-}= \Delta I_{\mathrm{in}}\) (note sign)
Check refractoriness and generate spikes stochastically if not refractory
Compute and store Urbanczik learning signal \(\delta\Pi\)
Buffer external current inputs for next time step
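The 6-dimensional right-hand side integrated in step 1 can be written out directly from the somatic and dendritic equations above. The following is an illustrative plain-float sketch (function and parameter-dict names are assumptions; units are mV, nS, pA, pF, ms, so nS·mV = pA and pA/pF = mV/ms are consistent), not the library's internal solver code:

```python
def rhs(V_s, g_ex_s, g_in_s, V_d, I_ex_d, I_in_d, p):
    """Time derivatives of the 6 state variables (units: mV/ms, nS/ms, pA/ms)."""
    # Soma: conductance-based synapses plus dendro-somatic coupling g_sp
    dV_s = (-p['soma_g_L'] * (V_s - p['soma_E_L'])
            - g_ex_s * (V_s - p['soma_E_ex'])
            - g_in_s * (V_s - p['soma_E_in'])
            + p['g_sp'] * (V_d - V_s)
            + p['I_stim_s'] + p['soma_I_e']) / p['soma_C_m']
    # Dendrite: current-based synapses plus somato-dendritic coupling g_ps
    dV_d = (-p['dend_g_L'] * (V_d - p['dend_E_L'])
            + I_ex_d + I_in_d
            + p['g_ps'] * (V_s - V_d)) / p['dend_C_m']
    # Synaptic variables decay exponentially toward zero
    return (dV_s,
            -g_ex_s / p['soma_tau_syn_ex'],
            -g_in_s / p['soma_tau_syn_in'],
            dV_d,
            -I_ex_d / p['dend_tau_syn_ex'],
            -I_in_d / p['dend_tau_syn_in'])
```

With default parameters and both compartments at their common leak reversal (-70 mV), all derivatives vanish, which is a quick sanity check for the sign conventions.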
Computational Complexity and Performance
Time Complexity: \(O(N \cdot S)\) where \(N\) is the number of neurons and \(S\) is the number of adaptive ODE solver steps per neuron per time step. Typically \(S \approx 3-10\) depending on dynamics.
Space Complexity: \(O(N)\) for state variables, plus \(O(N \cdot T)\) for Urbanczik history over \(T\) time steps.
Performance Notes:
This model is significantly slower than simple LIF neurons due to: (1) element-wise adaptive ODE solving per neuron, (2) stochastic spike generation requiring RNG calls, and (3) learning signal computation.
Not vectorized across neurons; uses a Python loop over np.ndindex.
For large networks (>1000 neurons), consider alternative implementations or simplified models.
History storage grows linearly with simulation time; clear periodically if memory is constrained.
Attributes (State Variables)
- V_s : brainstate.HiddenState
Somatic membrane potential (Quantity, shape: varshape). Initialized to soma_E_L. Unit: mV.
- g_ex_s : brainstate.HiddenState
Somatic excitatory synaptic conductance (Quantity). Initialized to 0. Unit: nS.
- g_in_s : brainstate.HiddenState
Somatic inhibitory synaptic conductance (Quantity). Initialized to 0. Unit: nS.
- V_d : brainstate.HiddenState
Dendritic membrane potential (Quantity). Initialized to dend_E_L. Unit: mV.
- I_ex_d : brainstate.HiddenState
Dendritic excitatory synaptic current (Quantity). Initialized to 0. Unit: pA.
- I_in_d : brainstate.HiddenState
Dendritic inhibitory synaptic current (Quantity). Initialized to 0. Unit: pA.
- refractory_step_count : brainstate.ShortTermState
Remaining refractory time steps (int32 array). Counts down to zero. Initialized to 0.
- I_stim_soma : brainstate.ShortTermState
Buffered soma current for the next integration step (Quantity). Unit: pA.
- I_stim_dend : brainstate.ShortTermState
Buffered dendrite current for the next integration step (Quantity). Unit: pA.
- last_spike_time : brainstate.ShortTermState
Time of last spike emission (Quantity). Initialized to -1e7 ms. Unit: ms.
- integration_step : brainstate.ShortTermState
Persistent RKF45 substep size estimate (ms).
- Raises:
ValueError – If rate_slope < 0 (must be non-negative).
ValueError – If phi_max < 0 (must be non-negative).
ValueError – If t_ref < 0 (must be non-negative).
ValueError – If any capacitance C_m ≤ 0 (must be strictly positive).
ValueError – If any synaptic time constant ≤ 0 (must be strictly positive).
ValueError – If gsl_error_tol ≤ 0 (must be strictly positive).
Notes
NEST Compatibility: All default parameter values match the NEST 3.9+ C++ source for pp_cond_exp_mc_urbanczik. Notably, the dendritic inhibitory reversal potential is 0.0 mV (not -75.0 mV).
Stochasticity: Spike times are non-deterministic unless rng_key is explicitly managed. For reproducibility, provide a fixed PRNG key and re-seed appropriately.
No Voltage Reset: Unlike integrate-and-fire models, there is no discrete voltage reset after spiking. The membrane potential evolves continuously.
Urbanczik History: The learning-signal history is stored in a Python dict (_urbanczik_history) and grows unbounded. For long simulations, periodically clear the history or implement custom storage.
Surrogate Gradients: The spk_fun parameter enables gradient-based learning through spike discontinuities, but this model is primarily designed for the Urbanczik-Senn rule, which uses the stored δΠ signals directly.
Examples
Basic single neuron simulation:
>>> import brainpy.state as bp
>>> import saiunit as u
>>> import brainstate
>>> import numpy as np
>>> # Create a single neuron
>>> neuron = bp.pp_cond_exp_mc_urbanczik(in_size=1)
>>> neuron.init_all_states()
>>> # Simulate for 100 ms with constant soma current
>>> dt = 0.1 * u.ms
>>> with brainstate.environ.context(dt=dt):
...     spikes = []
...     for i in range(1000):  # 100 ms
...         spk = neuron.update(x=300.0 * u.pA)  # Strong depolarizing current
...         spikes.append(float(spk[0]))
>>> print(f"Total spikes: {int(sum(spikes))}")
Total spikes: 12
Two-compartment voltage monitoring:
>>> neuron = bp.pp_cond_exp_mc_urbanczik(in_size=1, t_ref=0.0 * u.ms)
>>> neuron.init_all_states()
>>> soma_voltages, dend_voltages = [], []
>>> with brainstate.environ.context(dt=0.1 * u.ms):
...     for i in range(500):
...         neuron.update(x=200.0 * u.pA)
...         soma_voltages.append(float(neuron.V_s.value[0] / u.mV))
...         dend_voltages.append(float(neuron.V_d.value[0] / u.mV))
>>> # Plot soma_voltages and dend_voltages to visualize dynamics
Accessing Urbanczik learning signals:
>>> neuron = bp.pp_cond_exp_mc_urbanczik(in_size=2)
>>> neuron.init_all_states()
>>> with brainstate.environ.context(dt=0.1 * u.ms):
...     for i in range(100):
...         neuron.update(x=250.0 * u.pA)
>>> # Retrieve learning signal history for neuron 0
>>> history_0 = neuron.get_urbanczik_history(neuron_idx=0)
>>> print(f"History length: {len(history_0)}")
History length: 100
>>> # Each entry is (time_ms, delta_PI)
>>> t, dPI = history_0[-1]
>>> print(f"Last time: {t:.2f} ms, Last dPI: {dPI:.4f}")
Last time: 10.00 ms, Last dPI: -0.0234
References
[1] Urbanczik, R., & Senn, W. (2014). Learning by the dendritic prediction of somatic spiking. Neuron, 81(3), 521-528.
See also
gif_cond_exp – Generalized integrate-and-fire with conductance synapses
pp_psc_delta – Point process neuron with current synapses
urbanczik_synapse – Synapse model implementing the Urbanczik-Senn plasticity rule
- get_spike(V=None)[source]#
Evaluate surrogate spike output from membrane voltage.
- Parameters:
V (ArrayLike, optional) – Voltage values with shape broadcastable to self.varshape and units compatible with mV. If None, uses the current state self.V_s.value.
- Returns:
Surrogate spike activation produced by spk_fun(V / (1.0 * u.mV)).
- Return type:
ArrayLike
- get_urbanczik_history(neuron_idx=0)[source]#
Retrieve the Urbanczik-Senn learning signal history for a specific neuron.
This method returns the complete time series of error signals (δΠ) computed during simulation, which can be used to implement the Urbanczik-Senn synaptic plasticity rule. Each entry contains the simulation time and corresponding error signal value.
- Parameters:
neuron_idx (int, optional) – Flat (raveled) index of the neuron within the population array (default: 0). For a 2D population of shape (N, M), valid indices are 0 to N*M-1. Use np.ravel_multi_index((i, j), shape) to convert multi-dimensional indices to a flat index.
- Returns:
history – List of (time_ms, delta_PI) tuples, where:
time_ms (float): Simulation time in milliseconds when the signal was computed. Times are strictly increasing.
delta_PI (float): Learning signal value (dimensionless). Positive values indicate the neuron spiked more than predicted (potentiation signal); negative values indicate under-spiking (depression signal).
If the neuron index has not been encountered (no history), returns an empty list [].
- Return type:
list
Mathematical Interpretation
Each delta_PI value represents:
\[\delta\Pi(t) = \left(n_{\mathrm{spikes}}(t) - \phi(V^*_W(t)) \cdot dt\right) \cdot h(V^*_W(t))\]
where:
\(n_{\mathrm{spikes}}\) is the actual spike count in the time step
\(\phi(V^*_W) \cdot dt\) is the expected spike count from prediction
\(h(V^*_W)\) is the voltage-dependent learning modulation
Usage in Plasticity:
The Urbanczik-Senn weight update rule for a synapse connecting to this neuron involves integrating these error signals with presynaptic activity traces. Typically, weights are updated as:
\[\Delta w_i = \eta \sum_t \delta\Pi(t) \cdot x_i(t)\]where \(x_i(t)\) is the presynaptic trace (e.g., filtered spike train).
Examples
Retrieve and plot learning signals:
>>> import brainpy.state as bp
>>> import saiunit as u
>>> import brainstate
>>> import matplotlib.pyplot as plt
>>> neuron = bp.pp_cond_exp_mc_urbanczik(in_size=1)
>>> neuron.init_all_states()
>>> with brainstate.environ.context(dt=0.1 * u.ms):
...     for i in range(1000):
...         neuron.update(x=250.0 * u.pA)
>>> history = neuron.get_urbanczik_history(neuron_idx=0)
>>> times, dPIs = zip(*history)
>>> plt.plot(times, dPIs)
>>> plt.xlabel('Time (ms)')
>>> plt.ylabel('δΠ')
>>> plt.title('Urbanczik Learning Signal')
>>> plt.show()
Access for multi-dimensional population:
>>> import numpy as np
>>> neuron_pop = bp.pp_cond_exp_mc_urbanczik(in_size=(10, 10))
>>> neuron_pop.init_all_states()
>>> # Simulate...
>>> # Get history for neuron at position (3, 7)
>>> flat_idx = np.ravel_multi_index((3, 7), (10, 10))
>>> history_3_7 = neuron_pop.get_urbanczik_history(neuron_idx=flat_idx)
Check if history exists:
>>> history = neuron.get_urbanczik_history(neuron_idx=999)
>>> if not history:
...     print("No history recorded for neuron 999")
Notes
History is stored in memory and grows linearly with simulation length. For long simulations or large populations, consider periodic clearing.
History is reset by reset_state() but persists across update() calls.
The internal storage is a Python dict mapping flat indices to lists, which is not JAX-compatible but is sufficient for post-simulation analysis.
Times are recorded at the end of each time step (t + dt), not at the beginning (t).
See also
urbanczik_synapseSynapse model that uses these signals for plasticity
- init_state(**kwargs)[source]#
Initialize persistent and short-term state variables.
- Parameters:
**kwargs – Unused compatibility parameters accepted by the base-state API.
- Raises:
ValueError – If an initializer cannot be broadcast to requested shape.
TypeError – If initializer outputs have incompatible units/dtypes for the corresponding state variables.
- update(x=Quantity(0., 'pA'))[source]#
Advance neuron state by one simulation time step with ODE integration and stochastic spiking.
This method performs a complete update cycle for the two-compartment model, including numerical integration of differential equations, synaptic input processing, stochastic spike generation, and Urbanczik learning signal computation. It follows NEST’s update order exactly.
- Parameters:
x (Quantity, optional) – External current input applied to the soma compartment (unit: pA, default: 0.0 pA). Typically used for direct current injection from external sources. Shape must be broadcastable to the neuron population shape.
- Returns:
spike – Binary spike output array of shape matching neuron population (float32). Values are 1.0 where a spike was emitted, 0.0 otherwise. For neurons without refractory period (t_ref=0), values can be >1 if multiple spikes occurred in one step.
- Return type:
jax.numpy.ndarray
Notes
Update Procedure
The method executes the following steps in order (per neuron):
1. ODE Integration
Integrate the 6-dimensional state vector over the interval (t, t+dt] using the adaptive RKF45 solver. The integration uses stimulus currents buffered from the previous time step.
2. Synaptic Input Application
Apply instantaneous jumps to synaptic variables from arriving spikes:
Soma: g_ex_s += Δg_ex, g_in_s += Δg_in (conductance jumps in nS)
Dendrite: I_ex_d += ΔI_ex, I_in_d -= ΔI_in (current jumps in pA; note sign)
3. Stochastic Spike Generation
If neuron is not refractory:
Compute instantaneous rate: rate = 1000 · φ(V_s) [Hz]
With t_ref > 0: Draw uniform random r, emit spike if r ≤ 1 - exp(-rate·dt·1e-3)
With t_ref = 0: Draw Poisson(rate·dt·1e-3) for spike count
If spike(s) emitted: set refractory counter to round(t_ref / dt)
If neuron is refractory: decrement refractory counter, no spikes.
4. Urbanczik Learning Signal
Compute dendritic prediction and error signal:
\(V^*_W = (E_{L,s} \cdot g_{L,s} + V_d \cdot g_{sp}) / (g_{sp} + g_{L,s})\)
\(\delta\Pi = (n_{\text{spikes}} - \phi(V^*_W) \cdot dt) \cdot h(V^*_W)\)
Store \((t, \delta\Pi)\) in history dict
5. Current Input Buffering
Collect all current inputs (via sum_current_inputs()) and store them for use in the next time step's ODE integration.
Computational Complexity
Time: O(N · S) where N is population size, S is adaptive ODE steps per neuron
Space: O(N) for state updates, O(N·T) for history accumulation over T steps
Not vectorized: Uses Python loop over all neuron indices
Side Effects
Updates all state variables (V_s, V_d, g_ex_s, g_in_s, I_ex_d, I_in_d)
Updates refractory counters and last_spike_time
Appends (t, δΠ) to the internal _urbanczik_history dict
Advances the internal PRNG state (_rng_state)
Consumes and clears delta_inputs from projections
Numerical Considerations
The ODE solver is adaptive and may take variable numbers of internal steps
For stiff dynamics or large coupling conductances, integration may require more steps, increasing computation time
Dendritic inhibitory current is subtracted, matching NEST convention for inhibitory synapses
Warning
Numerical issues (NaN, Inf) can arise from invalid parameter combinations (e.g., zero capacitance), extremely large input currents, or ODE solver failure.
Notes
This is a slow model due to per-neuron ODE solving and lack of vectorization. For networks >1000 neurons, expect significant runtime.
The lack of voltage reset after spikes is intentional and matches the original Urbanczik & Senn (2014) formulation.
Random number generation state is advanced even if no spikes occur, ensuring reproducibility across different input patterns given the same seed.