stdp_pl_synapse_hom#
- class brainpy.state.stdp_pl_synapse_hom(weight=1.0, delay=Quantity(1., 'ms'), receptor_type=0, tau_plus=Quantity(20., 'ms'), tau_minus=Quantity(20., 'ms'), lambda_=0.1, alpha=1.0, mu=0.4, Kplus=0.0, post=None, name=None)#
NEST-compatible stdp_pl_synapse_hom connection model.
stdp_pl_synapse_hom implements the power-law spike-timing-dependent plasticity (STDP) rule from Morrison et al. (2007) with homogeneous plasticity parameters. This synapse exhibits asymmetric potentiation and depression with non-linear, power-law weight dependence, making it suitable for modeling balanced networks with realistic weight distributions.
The model replicates NEST models/stdp_pl_synapse_hom.h exactly, including propagator computation, update ordering, and event timing semantics. Delay scheduling and receiver delivery inherit from static_synapse.
1. Mathematical Model
State Variables
weight (\(w\)): Synaptic efficacy (current/conductance units or dimensionless)
Kplus (\(K^+\)): Presynaptic eligibility trace (dimensionless)
t_lastspike (\(t_{\mathrm{last}}\)): Timestamp of previous presynaptic spike (ms)
Internal postsynaptic history buffer: (t_post, K^-(t_post)) pairs
Continuous-time dynamics (between spikes):
Presynaptic trace decay:
\[\frac{dK^+}{dt} = -\frac{K^+}{\tau_+}\]
Postsynaptic trace decay (maintained in internal buffer):
\[\frac{dK^-}{dt} = -\frac{K^-}{\tau_-}\]
where:
\(\tau_+ > 0\) – Potentiation time constant (ms)
\(\tau_- > 0\) – Depression time constant (ms)
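Between spikes both traces obey the same first-order decay, so stepping a trace forward reduces to multiplying by an exponential propagator. A minimal plain-Python sketch of that solution (illustrative only, not the library's internal code; times in ms):

```python
import math

def decay_trace(K, dt, tau):
    """Exact solution of dK/dt = -K/tau advanced over an interval dt."""
    return K * math.exp(-dt / tau)

# A unit trace decays to exp(-1) ~ 0.368 after one time constant
K = decay_trace(1.0, 20.0, 20.0)  # dt = tau_+ = 20 ms
```

The same propagator serves both \(K^+\) (with \(\tau_+\)) and \(K^-\) (with \(\tau_-\)); only the time constant differs.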
Upon presynaptic spike at time \(t_{\mathrm{pre}}\) with dendritic delay \(d\):
Step 1: Facilitation (Potentiation) — Process all postsynaptic spikes in the causal window:
For each postsynaptic spike \(t_{\mathrm{post}}\) in the interval \((t_{\mathrm{last}} - d,\, t_{\mathrm{pre}} - d]\):
\[ \begin{align}\begin{aligned}K^+_{\mathrm{eff}} = K^+ \cdot \exp\left(\frac{t_{\mathrm{last}} - (t_{\mathrm{post}} + d)}{\tau_+}\right)\\w \leftarrow w + \lambda \, w^\mu \, K^+_{\mathrm{eff}}\end{aligned}\end{align} \]
where:
\(\lambda\) – Learning rate (dimensionless)
\(\mu\) – Power-law exponent for potentiation (\(\mu \in [0, 1]\) typical)
Interpretation: The presynaptic trace \(K^+\) is back-propagated to the time of the postsynaptic spike (\(t_{\mathrm{post}} + d\), accounting for dendritic delay), producing a smaller effective trace for older postsynaptic spikes. Potentiation is multiplicative and sub-linear in weight (\(w^\mu\) with \(\mu < 1\)), promoting stable weight distributions.
Step 2: Depression — Apply depression based on the postsynaptic trace at the pre-spike time:
\[ \begin{align}\begin{aligned}K^-_{\mathrm{eff}} = K^-\left(t_{\mathrm{pre}} - d\right)\\w \leftarrow w - \alpha \lambda \, w \, K^-_{\mathrm{eff}}\\w \leftarrow \max(w, 0)\end{aligned}\end{align} \]
where \(\alpha\) is the depression scaling factor.
Interpretation: Depression is linear in weight and occurs when a presynaptic spike is preceded by postsynaptic activity. The weight is clipped to zero to prevent negative values.
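The two weight updates in Steps 1 and 2 reduce to a few lines of arithmetic. A hedged plain-Python sketch that mirrors the equations above (not the library internals):

```python
def potentiate(w, lam, mu, Kplus_eff):
    """Step 1: w <- w + lambda * w^mu * K+_eff (sub-linear in w for mu < 1)."""
    return w + lam * (w ** mu) * Kplus_eff

def depress(w, lam, alpha, Kminus_eff):
    """Step 2: w <- max(w - alpha * lambda * w * K-_eff, 0), linear in w."""
    return max(w - alpha * lam * w * Kminus_eff, 0.0)

w = potentiate(1.0, 0.1, 0.4, 1.0)  # 1.0 + 0.1 * 1.0**0.4 * 1.0 = 1.1
w = depress(w, 0.1, 1.0, 1.0)       # 1.1 - 1.0 * 0.1 * 1.1 * 1.0 = 0.99
```

Note how the clipping in `depress` guarantees \(w \geq 0\) even for large depression traces.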
Step 3: Event Transmission — Schedule the weighted event with updated weight.
Step 4: Presynaptic Trace Update:
\[ \begin{align}\begin{aligned}K^+ \leftarrow K^+ \cdot \exp\left(\frac{t_{\mathrm{last}} - t_{\mathrm{pre}}}{\tau_+}\right) + 1\\t_{\mathrm{last}} \leftarrow t_{\mathrm{pre}}\end{aligned}\end{align} \]
Postsynaptic spike handling (via internal buffer):
Upon postsynaptic spike at \(t_{\mathrm{post}}\):
\[K^- \leftarrow K^- \cdot \exp\left(\frac{t_{\mathrm{last\_post}} - t_{\mathrm{post}}}{\tau_-}\right) + 1\]
Stored as (t_post, K^-) pairs in the history buffer for future lookups.
2. Update Ordering and NEST Compatibility
This implementation preserves the exact update sequence from NEST models/stdp_pl_synapse_hom.h::send():
Read postsynaptic spike history in \((t_{\mathrm{last}} - d,\, t_{\mathrm{pre}} - d]\)
For each retrieved postsynaptic spike, compute back-propagated \(K^+_{\mathrm{eff}}\)
Apply facilitation: \(w \leftarrow w + \lambda w^\mu K^+_{\mathrm{eff}}\)
Retrieve depression trace \(K^-_{\mathrm{eff}}\) at \(t_{\mathrm{pre}} - d\)
Apply depression: \(w \leftarrow \max(w - \alpha \lambda w K^-_{\mathrm{eff}}, 0)\)
Schedule weighted spike event
Update presynaptic trace: \(K^+ \leftarrow K^+ e^{(t_{\mathrm{last}} - t_{\mathrm{pre}})/\tau_+} + 1\)
Update timestamp: \(t_{\mathrm{last}} \leftarrow t_{\mathrm{pre}}\)
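The sequence above can be sketched end-to-end in plain Python. This is an illustrative re-implementation under stated assumptions (on-grid times in ms, post_hist a time-sorted list of (t_post, K^-) pairs, default parameter values from this page; step 6, event scheduling, is omitted), not the library's actual code:

```python
import math

def send_update(w, Kplus, t_last, t_pre, d, post_hist,
                tau_plus=20.0, tau_minus=20.0,
                lam=0.1, alpha=1.0, mu=0.4):
    """Sketch of processing one presynaptic spike at t_pre (ms)."""
    # Steps 1-3: facilitation for each post spike in (t_last - d, t_pre - d]
    for t_post, _ in post_hist:
        if t_last - d < t_post <= t_pre - d:
            Kplus_eff = Kplus * math.exp((t_last - (t_post + d)) / tau_plus)
            w = w + lam * (w ** mu) * Kplus_eff
    # Steps 4-5: depression with K- read out at t_pre - d (most recent
    # recorded value, decayed forward to the read-out time)
    Kminus_eff = 0.0
    for t_post, Kminus in post_hist:
        if t_post <= t_pre - d:
            Kminus_eff = Kminus * math.exp(-((t_pre - d) - t_post) / tau_minus)
    w = max(w - alpha * lam * w * Kminus_eff, 0.0)
    # Steps 7-8: presynaptic trace decay plus unit increment; new timestamp
    Kplus = Kplus * math.exp((t_last - t_pre) / tau_plus) + 1.0
    return w, Kplus, t_pre
```

With an empty history the weight passes through unchanged, matching the rule that all plasticity is triggered by recorded postsynaptic spikes.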
3. Homogeneous-Property Semantics
In NEST, tau_plus, lambda, alpha, and mu are common model properties shared by all synapses of this type, while weight and Kplus are per-connection state. This implementation enforces NEST connect-time semantics:
Common properties (tau_plus, lambda, alpha, mu) are set at model instantiation or via SetDefaults()/CopyModel()
Per-connection properties (weight, Kplus) can be set via Connect(..., syn_spec={...})
check_synapse_params() rejects attempts to override common properties in connection specifications
4. Event Timing Semantics
NEST evaluates this model using on-grid spike time stamps and ignores precise sub-step offsets. This implementation follows the same convention:
Presynaptic spike detected at simulation step n
Spike time stamp: \(t_{\mathrm{spike}} = t_n + dt\)
Dendritic arrival time: \(t_{\mathrm{arrival}} = t_{\mathrm{spike}} - d\)
Delivery time: \(t_{\mathrm{delivery}} = t_{\mathrm{spike}} + \mathrm{delay}\)
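These three timestamps are straight arithmetic on the step time. A tiny sketch, assuming dt, the dendritic delay d, and the transmission delay are all in ms:

```python
def event_times(t_n, dt, d, delay):
    """On-grid timing per the conventions above."""
    t_spike = t_n + dt           # spike stamped at the end of the step
    t_arrival = t_spike - d      # dendritic arrival (STDP window edge)
    t_delivery = t_spike + delay # event delivery to the receiver
    return t_spike, t_arrival, t_delivery

# A spike in the step starting at t_n = 10 ms with dt = d = delay = 1 ms
# is stamped at 11 ms, arrives dendritically at 10 ms, delivers at 12 ms.
times = event_times(10.0, 1.0, 1.0, 1.0)
```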
5. Stability Constraints and Computational Implications
Parameter Constraints:
\(\tau_+ > 0\) (enforced in __init__ and set)
\(\tau_- > 0\) (recommended, not enforced)
\(\lambda \geq 0\) (learning rate)
\(\alpha \geq 0\) (depression scaling)
\(\mu \in [0, 1]\) (typical range; not enforced)
\(K^+ \geq 0\) (initial presynaptic trace; typically zero)
\(w \geq 0\) (maintained via clipping in depression)
Numerical Considerations
Trace propagation uses math.exp() for exponential decay
Power-law computation uses numpy.power() with float64 precision
Postsynaptic history is stored as Python lists _post_hist_t and _post_hist_kminus; lookups are \(O(n)\) where \(n\) is the number of stored postsynaptic spikes
Per-call cost: \(O(n_{\mathrm{post}})\) where \(n_{\mathrm{post}}\) is the number of postsynaptic spikes in the causal window
All state variables are Python floats (float64 precision)
Behavioral Regimes:
Power-law stabilization (\(\mu < 1\)): Potentiation is sub-linear in weight, preventing runaway growth and promoting log-normal weight distributions (Morrison et al., 2007)
Balanced networks: The combination of power-law potentiation and linear depression naturally regulates weight distributions in recurrent networks
Weight clamping: Depression clipping at \(w = 0\) prevents negative weights; no upper bound is enforced (unlike stdp_synapse with Wmax)
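Setting the potentiation increment equal to the depression decrement, \(\lambda w^\mu K^+ = \alpha \lambda w K^-\), gives the fixed point \(w^* = (K^+ / (\alpha K^-))^{1/(1-\mu)}\) for \(\mu < 1\). A sketch checking this balance numerically (assumed steady-state trace values; illustrative only):

```python
def equilibrium_weight(Kplus, Kminus, alpha, mu):
    """Weight where power-law potentiation balances linear depression."""
    return (Kplus / (alpha * Kminus)) ** (1.0 / (1.0 - mu))

# At w*, the increment lambda * w^mu * K+ equals the decrement
# alpha * lambda * w * K-, so the expected weight drift is zero.
w_star = equilibrium_weight(2.0, 1.0, 1.0, 0.4)
```

Because the potentiation side grows sub-linearly in \(w\) while depression grows linearly, weights above \(w^*\) drift down and weights below drift up, which is the stabilization mechanism described above.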
Failure Modes
Non-finite weights: Power-law computation \(w^\mu\) can produce inf or nan for extreme weights; users should monitor weight distributions
Trace overflow: Large spike trains can accumulate unbounded \(K^+\) or \(K^-\) values (not a practical issue for typical firing rates)
History buffer growth: Postsynaptic spike history is not pruned; long simulations with high postsynaptic firing rates may consume memory
- Parameters:
weight (ArrayLike, optional) – Initial synaptic weight \(w\) (dimensionless or with receiver-specific units). Scalar float or array-like. Must be non-negative. Default: 1.0.
delay (ArrayLike, optional) – Synaptic transmission delay \(d\) in milliseconds. Must be > 0. Quantized to integer time steps per static_synapse conventions. Default: 1.0 * u.ms.
receptor_type (int, optional) – Receiver port/receptor identifier (non-negative integer). Default: 0.
tau_plus (ArrayLike, optional) – Potentiation time constant \(\tau_+\) in milliseconds. Must be > 0. Scalar float or saiunit Quantity. Common property (not per-connection). Default: 20.0 * u.ms.
tau_minus (ArrayLike, optional) – Depression trace time constant \(\tau_-\) in milliseconds. Must be > 0. Scalar float or saiunit Quantity. In NEST, this parameter belongs to the postsynaptic ArchivingNode; here it is stored on the synapse for standalone compatibility. Default: 20.0 * u.ms.
lambda_ (ArrayLike, optional) – Learning rate \(\lambda\) (dimensionless). Must be non-negative. Common property (not per-connection). Default: 0.1.
alpha (ArrayLike, optional) – Depression scaling factor \(\alpha\) (dimensionless). Must be non-negative. Controls the relative strength of depression vs. potentiation. Common property (not per-connection). Default: 1.0.
mu (ArrayLike, optional) – Power-law exponent \(\mu\) for potentiation (dimensionless). Typical range: \([0, 1]\). Values \(< 1\) produce sub-linear potentiation; \(\mu = 0\) disables weight dependence. Common property (not per-connection). Default: 0.4.
Kplus (ArrayLike, optional) – Initial presynaptic eligibility trace \(K^+\) (dimensionless). Must be non-negative. Scalar float or array-like. Per-connection state. Default: 0.0.
post (object, optional) – Default receiver object (typically a neuron or neuron group). Can be overridden in send() and update() calls. Default: None.
name (str, optional) – Object name for identification and debugging. Default: None.
Parameter Mapping
The following table maps NEST parameter names to this implementation:
NEST Parameter | brainpy.state Parameter | Type
weight | weight | per-connection
delay | delay | per-connection
receptor_type | receptor_type | per-connection
tau_plus | tau_plus | common property
tau_minus | tau_minus | common property
lambda | lambda_ | common property
alpha | alpha | common property
mu | mu | common property
Kplus | Kplus | per-connection
Notes
The model transmits spike-like events only (no graded signals).
update(pre_spike=..., post_spike=...) accepts both presynaptic and postsynaptic spike multiplicities (integer counts) for standalone STDP simulation without explicit neuron models.
record_post_spike(multiplicity, t_spike_ms=None) can be used to manually feed postsynaptic spikes when the postsynaptic model does not expose NEST ArchivingNode APIs.
Postsynaptic spike history is not automatically pruned; users may call clear_post_history() to reset internal buffers if needed.
Unlike stdp_synapse, this model has no upper weight bound (Wmax); weight stability relies on power-law potentiation dynamics.
See also
stdp_synapse: Classical pair-based STDP with separate potentiation/depression exponents
stdp_triplet_synapse: Triplet STDP rule (Pfister-Gerstner)
static_synapse: Base class for event scheduling and delay handling
References
Morrison, A., Aertsen, A., & Diesmann, M. (2007). Spike-timing-dependent plasticity in balanced random networks. Neural Computation, 19(6), 1437-1467.
Examples
Basic standalone STDP simulation:
>>> import brainpy.state as bps
>>> import saiunit as u
>>>
>>> # Create synapse with power-law STDP
>>> syn = bps.stdp_pl_synapse_hom(
...     weight=1.0,
...     tau_plus=20*u.ms,
...     tau_minus=20*u.ms,
...     lambda_=0.1,
...     alpha=1.0,
...     mu=0.4,
... )
>>> syn.init_state()
>>>
>>> # Simulate pre-before-post pairing (potentiation)
>>> syn.record_post_spike(t_spike_ms=10.0)  # post spike at 10 ms
>>> syn.send(1.0)  # pre spike at 11 ms (assuming dt=1ms, t=10ms)
>>> print(f"Weight after potentiation: {syn.weight:.4f}")
Weight after potentiation: 1.0xxx
>>>
>>> # Simulate post-before-pre pairing (depression)
>>> syn.record_post_spike(t_spike_ms=20.0)  # post spike at 20 ms
>>> syn.send(1.0)  # pre spike (causally follows the post spike)
>>> print(f"Weight after depression: {syn.weight:.4f}")
Weight after depression: 0.9xxx
Enforcing homogeneous-property semantics:
>>> import brainpy.state as bps
>>>
>>> syn = bps.stdp_pl_synapse_hom(lambda_=0.05)
>>>
>>> # Allowed: per-connection properties
>>> syn.check_synapse_params({'weight': 2.0, 'Kplus': 0.5})  # OK
>>>
>>> # Disallowed: common properties in connection specs
>>> try:
...     syn.check_synapse_params({'lambda': 0.1})
... except ValueError as e:
...     print(e)
lambda cannot be specified in connect-time synapse parameters...
- check_synapse_params(syn_spec)[source]#
Validate connect-time synapse parameter specification.
Enforces NEST’s homogeneous-property semantics by rejecting attempts to override common model properties (tau_plus, lambda, alpha, mu) in per-connection synapse specifications.
In NEST, homogeneous models share plasticity parameters across all connections, while per-connection properties (weight, Kplus) can vary. This method prevents accidental overrides that would violate this contract.
- Parameters:
syn_spec (Mapping[str, object] or None) – Synapse parameter specification dictionary, typically provided in Connect(..., syn_spec={...}) calls. If None, no validation is performed.
- Raises:
ValueError – If syn_spec contains any of the disallowed common properties: 'tau_plus', 'lambda', 'alpha', 'mu'.
Notes
Allowed per-connection keys: 'weight', 'delay', 'receptor_type', 'Kplus'
Disallowed common-property keys: 'tau_plus', 'lambda', 'alpha', 'mu'
To change common properties, use set(tau_plus=..., lambda_=..., ...) on the model instance, or NEST-style SetDefaults()/CopyModel() APIs
This check is performed automatically during connection establishment
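The rejection logic described here amounts to a set-membership test over the spec keys. A minimal sketch of the semantics (plain Python; the key sets come from the documentation above, and the error text is illustrative, not the library's exact message):

```python
COMMON_PROPS = {'tau_plus', 'lambda', 'alpha', 'mu'}       # homogeneous
PER_CONNECTION = {'weight', 'delay', 'receptor_type', 'Kplus'}

def check_syn_spec(syn_spec):
    """Reject connect-time overrides of common (homogeneous) properties."""
    if syn_spec is None:
        return  # nothing to validate
    for key in syn_spec:
        if key in COMMON_PROPS:
            raise ValueError(
                f"{key} cannot be specified in connect-time synapse parameters")

check_syn_spec({'weight': 2.0, 'Kplus': 0.5})  # per-connection keys pass
```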
See also
set: Update model parameters (common and per-connection)
Examples
Valid per-connection specification:
>>> import brainpy.state as bps
>>>
>>> syn = bps.stdp_pl_synapse_hom(lambda_=0.1)
>>>
>>> # Allowed: per-connection properties
>>> syn.check_synapse_params({'weight': 2.0, 'Kplus': 0.5})  # OK
Invalid common-property override:
>>> import brainpy.state as bps
>>>
>>> syn = bps.stdp_pl_synapse_hom(lambda_=0.1)
>>>
>>> # Disallowed: common property in connection spec
>>> try:
...     syn.check_synapse_params({'lambda': 0.05})
... except ValueError as e:
...     print(e)
lambda cannot be specified in connect-time synapse parameters...
- clear_post_history()[source]#
Clear internal postsynaptic STDP history state.
Resets the internal postsynaptic spike history buffer and depression trace to initial conditions. This method is useful for:
Resetting the synapse state between simulation trials
Reclaiming memory after long simulations with high postsynaptic firing rates
Debugging and testing STDP dynamics
The method resets:
_post_kminus: Depression trace to 0.0
_last_post_spike: Last postsynaptic spike time to -1.0
_post_hist_t: Spike time history to empty list
_post_hist_kminus: Depression trace history to empty list
Presynaptic state (Kplus, t_lastspike) is not affected.
Notes
This method does not reset weight or the presynaptic trace Kplus
Called automatically by init_state()
Postsynaptic history is not automatically pruned during simulation; manual calls to this method may be needed for very long runs
See also
init_state: Full state initialization including history clearing
record_post_spike: Add postsynaptic spikes to history buffer
- get()[source]#
Return current public parameters and mutable state.
Retrieves all NEST-compatible public parameters and per-connection state variables as a dictionary. This method is used for introspection, logging, and state serialization.
- Returns:
Dictionary mapping parameter/state names to their current values:
'weight': Current synaptic weight (float)
'delay': Synaptic delay in ms (float)
'receptor_type': Receiver port ID (int)
'tau_plus': Potentiation time constant in ms (float)
'tau_minus': Depression time constant in ms (float)
'lambda': Learning rate (float)
'alpha': Depression scaling factor (float)
'mu': Power-law exponent (float)
'Kplus': Current presynaptic trace value (float)
'synapse_model': Model identifier ('stdp_pl_synapse_hom')
- Return type:
dict
Notes
All saiunit Quantity values are converted to Python floats (SI units)
Internal state (t_lastspike, postsynaptic history) is not included
The returned dictionary can be used with set(**params) for state restoration
Key names match NEST conventions ('lambda' instead of 'lambda_')
See also
set: Update parameters and state from dictionary
init_state: Reset state to initial values
Examples
>>> import brainpy.state as bps
>>> import saiunit as u
>>>
>>> syn = bps.stdp_pl_synapse_hom(
...     weight=1.5,
...     tau_plus=20*u.ms,
...     lambda_=0.1,
... )
>>> syn.init_state()
>>>
>>> params = syn.get()
>>> print(params['weight'])
1.5
>>> print(params['lambda'])
0.1
>>> print(params['synapse_model'])
stdp_pl_synapse_hom
- init_state(batch_size=None, **kwargs)[source]#
Initialize synapse state variables to default values.
Resets all mutable state to initial conditions, including:
weight: Baseline synaptic weight (inherited from static_synapse)
Kplus: Presynaptic eligibility trace to _Kplus0
t_lastspike: Last presynaptic spike time to _t_lastspike0 (default 0.0)
Postsynaptic spike history buffer (cleared via clear_post_history())
Event delivery queue (inherited from static_synapse)
- Parameters:
Notes
This method must be called before simulation begins
Clears all postsynaptic spike history (calls clear_post_history())
Does not reset common properties (tau_plus, lambda_, alpha, mu)
Presynaptic trace is reset to the initial value set via the constructor or set()
See also
clear_post_history: Clear postsynaptic spike history only
set: Update parameters and initial state values
- record_post_spike(multiplicity=1.0, *, t_spike_ms=None)[source]#
Record postsynaptic spikes into the internal STDP history buffer.
This method manually adds postsynaptic spike events to the internal history buffer used for STDP computation. It is intended for standalone STDP simulation when the postsynaptic neuron does not expose NEST
ArchivingNode APIs.
For each spike, the method:
Updates the depression trace: \(K^- \leftarrow K^- \exp((t_{\mathrm{last}} - t_{\mathrm{spike}})/\tau_-) + 1\)
Stores the spike time and trace value in the history buffer
- Parameters:
multiplicity (ArrayLike, optional) – Number of spikes to record (non-negative integer count). If < 1.0, no spikes are recorded. Default: 1.0.
t_spike_ms (ArrayLike or None, optional) – Spike time stamp in milliseconds (scalar float or saiunit Quantity). If None, uses the current simulation time plus one time step: \(t_{\mathrm{spike}} = t_{\mathrm{current}} + dt\). Default: None.
- Returns:
Number of spikes actually recorded (integer count).
- Return type:
int
- Raises:
ValueError – If multiplicity is not a scalar, not finite, negative, or not close to an integer value.
ValueError – If t_spike_ms is provided but not a scalar or not finite.
Notes
Multiple spikes at the same time are recorded sequentially, updating the trace after each spike (matches NEST behavior for simultaneous spikes)
Spike times are stored in milliseconds (Python float)
The internal history buffer grows unbounded; call clear_post_history() to reclaim memory if needed
This method does not trigger STDP weight updates; updates occur during presynaptic spike processing in send()
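Per the note above, simultaneous spikes are recorded sequentially, so after the first spike the remaining increments see zero elapsed decay. A plain-Python sketch of that behavior (illustrative only; times in ms):

```python
import math

def record_post(Kminus, t_last_post, t_spike, multiplicity, tau_minus=20.0):
    """Record `multiplicity` post spikes at t_spike, updating K- per spike."""
    n = int(round(multiplicity))
    for _ in range(n):
        Kminus = Kminus * math.exp((t_last_post - t_spike) / tau_minus) + 1.0
        t_last_post = t_spike  # later spikes at the same time see zero decay
    return Kminus, t_last_post

# Three simultaneous spikes from a fresh trace leave K- at exactly 3.0
K, t = record_post(0.0, 0.0, 10.0, 3.0)
```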
See also
clear_post_history: Reset postsynaptic spike history buffer
update: Main update method that accepts a post_spike parameter
send: Presynaptic spike processing (applies STDP weight updates)
Examples
Record postsynaptic spikes at explicit times:
>>> import brainpy.state as bps
>>> import saiunit as u
>>>
>>> syn = bps.stdp_pl_synapse_hom(tau_minus=20*u.ms)
>>> syn.init_state()
>>>
>>> # Record single spike at 10 ms
>>> n = syn.record_post_spike(1.0, t_spike_ms=10.0)
>>> print(f"Recorded {n} spike(s)")
Recorded 1 spike(s)
>>>
>>> # Record multiple simultaneous spikes
>>> n = syn.record_post_spike(3.0, t_spike_ms=20.0)
>>> print(f"Recorded {n} spike(s)")
Recorded 3 spike(s)
Use current simulation time (automatic stamping):
>>> import brainpy.state as bps
>>> import saiunit as u
>>> import brainstate as bst
>>>
>>> syn = bps.stdp_pl_synapse_hom()
>>> syn.init_state()
>>>
>>> with bst.environ.context(dt=0.1*u.ms):
...     syn.record_post_spike()  # Uses t_current + dt
- send(multiplicity=1.0, *, post=None, receptor_type=None)[source]#
Schedule one outgoing event with NEST stdp_pl_synapse_hom dynamics.
Processes a presynaptic spike event by applying power-law STDP weight updates and scheduling the weighted event for delayed delivery to the postsynaptic neuron. This method implements the exact update sequence from NEST models/stdp_pl_synapse_hom.h::send().
Update Sequence:
Compute spike timestamp: \(t_{\mathrm{spike}} = t_{\mathrm{current}} + dt\)
Facilitation (Potentiation): For each postsynaptic spike \(t_{\mathrm{post}}\) in the causal window \((t_{\mathrm{last}} - d,\, t_{\mathrm{spike}} - d]\):
Back-propagate presynaptic trace: \(K^+_{\mathrm{eff}} = K^+ \exp((t_{\mathrm{last}} - (t_{\mathrm{post}} + d))/\tau_+)\)
Apply potentiation: \(w \leftarrow w + \lambda w^\mu K^+_{\mathrm{eff}}\)
Depression: Retrieve postsynaptic trace \(K^-_{\mathrm{eff}}\) at \(t_{\mathrm{spike}} - d\) and apply depression:
\(w \leftarrow w - \alpha \lambda w K^-_{\mathrm{eff}}\)
Clip to non-negative: \(w \leftarrow \max(w, 0)\)
Event Scheduling: Schedule weighted event \(w_{\mathrm{eff}} = w \times \mathrm{multiplicity}\) for delivery at \(t_{\mathrm{delivery}} = t_{\mathrm{spike}} + \mathrm{delay}\)
Presynaptic Trace Update: \(K^+ \leftarrow K^+ \exp((t_{\mathrm{last}} - t_{\mathrm{spike}})/\tau_+) + 1\)
Timestamp Update: \(t_{\mathrm{last}} \leftarrow t_{\mathrm{spike}}\)
- Parameters:
multiplicity (ArrayLike, optional) – Presynaptic spike multiplicity (scalar float, typically 1.0). If zero or negative, no event is scheduled and the method returns False. Default: 1.0.
post (object or None, optional) – Receiver object (typically a neuron or neuron group). If None, uses the default receiver set in the constructor or via set(post=...). Default: None.
receptor_type (ArrayLike or None, optional) – Receiver port identifier (non-negative integer). If None, uses self.receptor_type. Default: None.
- Returns:
True if an event was scheduled, False if multiplicity was zero.
- Return type:
bool
- Raises:
ValueError – If receptor_type is provided but not a valid non-negative integer.
RuntimeError – If no receiver is available (post is None and no default receiver is set).
Notes
The method uses on-grid spike timing: spike time is \(t + dt\), ignoring precise sub-step offsets
Dendritic delay \(d\) shifts the STDP causal window but does not affect event delivery time (delivery delay is separate)
Weight updates are applied before event scheduling, so the delivered event reflects the updated weight
Presynaptic trace is updated after STDP computation
Postsynaptic spike history must be maintained externally via record_post_spike() or update(post_spike=...)
See also
update: Combined pre/post spike processing with automatic history management
record_post_spike: Manually add postsynaptic spikes to history buffer
Examples
Standalone presynaptic spike processing:
>>> import brainpy.state as bps
>>> import saiunit as u
>>>
>>> syn = bps.stdp_pl_synapse_hom(weight=1.0, lambda_=0.1, mu=0.4)
>>> syn.init_state()
>>>
>>> # Record postsynaptic spike at 10 ms
>>> syn.record_post_spike(t_spike_ms=10.0)
>>>
>>> # Process presynaptic spike at 11 ms (causally follows post)
>>> success = syn.send(1.0)
>>> print(f"Event scheduled: {success}")
Event scheduled: True
>>> print(f"Updated weight: {syn.weight:.4f}")
Updated weight: 1.0xxx
With explicit receiver:
>>> import brainpy.state as bps
>>>
>>> class DummyReceiver:
...     def receive(self, weight, port, event_type):
...         print(f"Received {weight} on port {port}")
>>>
>>> syn = bps.stdp_pl_synapse_hom()
>>> syn.init_state()
>>> receiver = DummyReceiver()
>>>
>>> syn.send(1.0, post=receiver, receptor_type=0)
True
- set(*, weight=<object object>, delay=<object object>, receptor_type=<object object>, tau_plus=<object object>, tau_minus=<object object>, lambda_=<object object>, alpha=<object object>, mu=<object object>, Kplus=<object object>, post=<object object>)[source]#
Set NEST-style public parameters and mutable state.
Updates model parameters (common properties and per-connection state) with validation. This method supports partial updates—only specified parameters are modified.
- Parameters:
weight (ArrayLike or sentinel, optional) – New synaptic weight. Scalar float or array-like. Must be non-negative. If _UNSET, current value is preserved.
delay (ArrayLike or sentinel, optional) – New synaptic delay in ms. Must be > 0. If _UNSET, current value is preserved.
receptor_type (int or sentinel, optional) – New receiver port ID (non-negative integer). If _UNSET, current value is preserved.
tau_plus (ArrayLike or sentinel, optional) – New potentiation time constant in ms. Must be > 0. If _UNSET, current value is preserved.
tau_minus (ArrayLike or sentinel, optional) – New depression time constant in ms. Must be > 0 (not enforced). If _UNSET, current value is preserved.
lambda_ (ArrayLike or sentinel, optional) – New learning rate. Must be non-negative. If _UNSET, current value is preserved.
alpha (ArrayLike or sentinel, optional) – New depression scaling factor. Must be non-negative (not enforced). If _UNSET, current value is preserved.
mu (ArrayLike or sentinel, optional) – New power-law exponent. If _UNSET, current value is preserved.
Kplus (ArrayLike or sentinel, optional) – New presynaptic trace value. Must be non-negative (not enforced). Updates both self.Kplus (current state) and self._Kplus0 (initial value for init_state()). If _UNSET, current value is preserved.
post (object or sentinel, optional) – New default receiver object. If _UNSET, current value is preserved.
- Raises:
ValueError – If tau_plus is provided and <= 0.
ValueError – If any parameter is not a scalar, not finite, or violates type constraints.
Notes
All parameters are optional; only provided values are updated
Parameter validation is performed before any state is modified
Setting Kplus updates both current state and initial-value storage
Common properties (tau_plus, lambda_, alpha, mu) should typically be set at model creation, not per-connection
This method does not clear postsynaptic spike history or reset t_lastspike
See also
get: Retrieve current parameter values
init_state: Reset state to initial values
Examples
Update learning rate and weight:
>>> import brainpy.state as bps
>>> import saiunit as u
>>>
>>> syn = bps.stdp_pl_synapse_hom(weight=1.0, lambda_=0.1)
>>> syn.init_state()
>>>
>>> syn.set(weight=2.0, lambda_=0.05)
>>> print(syn.get()['weight'])
2.0
>>> print(syn.get()['lambda'])
0.05
Update time constants:
>>> import brainpy.state as bps
>>> import saiunit as u
>>>
>>> syn = bps.stdp_pl_synapse_hom()
>>> syn.set(tau_plus=15*u.ms, tau_minus=25*u.ms)
>>> print(syn.tau_plus)
15.0
>>> print(syn.tau_minus)
25.0
- update(pre_spike=0.0, *, post_spike=0.0, post=None, receptor_type=None)[source]#
Deliver due events, update postsynaptic history, then process presynaptic spikes.
Main update method for standalone STDP simulation. This method orchestrates the complete synaptic update cycle in three phases:
Event Delivery: Deliver all events scheduled for the current time step to the postsynaptic receiver
Postsynaptic History Update: Record incoming postsynaptic spikes into the internal STDP history buffer
Presynaptic Spike Processing: Apply STDP weight updates and schedule new events via send()
This ordering matches NEST’s event-driven simulation semantics, where postsynaptic spike history is updated before processing presynaptic spikes arriving in the same time step.
- Parameters:
pre_spike (ArrayLike, optional) – Presynaptic spike count (non-negative integer or float). Summed with any registered current/delta inputs before processing. If zero, no presynaptic spike is processed. Default: 0.0.
post_spike (ArrayLike, optional) – Postsynaptic spike count (non-negative integer or float). Recorded into the internal STDP history buffer at time \(t_{\mathrm{current}} + dt\). Default: 0.0.
post (object or None, optional) – Receiver object for event delivery. If None, uses the default receiver set in the constructor or via set(post=...). Default: None.
receptor_type (ArrayLike or None, optional) – Receiver port identifier (non-negative integer). If None, uses self.receptor_type. Default: None.
- Returns:
Number of events delivered to the postsynaptic receiver during this step.
- Return type:
int
- Raises:
ValueError – If post_spike is not a scalar, not finite, negative, or not close to an integer value.
ValueError – If receptor_type is provided but not a valid non-negative integer.
RuntimeError – If a presynaptic spike is triggered but no receiver is available.
Notes
The method uses on-grid spike timing: spikes are stamped at \(t_{\mathrm{current}} + dt\)
Presynaptic input is accumulated from three sources:
pre_spike parameter
current_inputs (registered via add_current_input())
delta_inputs (registered via add_delta_input())
Multiple postsynaptic spikes at the same time (post_spike > 1) are recorded sequentially with trace updates between each spike
This method is typically called once per time step in a simulation loop
See also
send: Presynaptic spike processing and STDP weight updates
record_post_spike: Manually record postsynaptic spikes
add_current_input: Register input sources for presynaptic spike accumulation
Examples
Basic STDP simulation loop:
>>> import brainpy.state as bps
>>> import saiunit as u
>>> import brainstate as bst
>>>
>>> syn = bps.stdp_pl_synapse_hom(
...     weight=1.0,
...     tau_plus=20*u.ms,
...     tau_minus=20*u.ms,
...     lambda_=0.1,
...     mu=0.4,
... )
>>> syn.init_state()
>>>
>>> with bst.environ.context(dt=1.0*u.ms):
...     # Pre-before-post pairing (potentiation)
...     for step in range(5):
...         bst.environ.set_t(step * 1.0)
...         pre = 1.0 if step == 0 else 0.0
...         post = 1.0 if step == 2 else 0.0
...         syn.update(pre_spike=pre, post_spike=post)
...     print(f"Weight after potentiation: {syn.weight:.4f}")
Weight after potentiation: 1.0xxx
With input accumulation:
>>> import brainpy.state as bps
>>> import saiunit as u
>>>
>>> syn = bps.stdp_pl_synapse_hom()
>>> syn.init_state()
>>>
>>> # Register input source
>>> syn.add_current_input('pre_neurons', lambda: 0.5)
>>>
>>> # Update with explicit + accumulated input
>>> n_delivered = syn.update(pre_spike=0.5)  # Total: 0.5 + 0.5 = 1.0
>>> print(f"Delivered {n_delivered} event(s)")
Delivered 1 event(s)