Open Access Paper
28 December 2022 Temporal coding in recurrent spiking neural networks with synaptic delay-weight plasticity
Guoyong Shi, Jungang Liang, Yong Cui
Author Affiliations +
Proceedings Volume 12506, Third International Conference on Computer Science and Communication Technology (ICCSCT 2022); 125060D (2022) https://doi.org/10.1117/12.2661848
Event: International Conference on Computer Science and Communication Technology (ICCSCT 2022), 2022, Beijing, China
Abstract
As a brain-inspired artificial neural network computational model, a recurrent spiking neural network is composed of biologically plausible spiking neurons, which has taken on increasing importance in this study mainly include complex network structure and implicit nonlinear mechanism. This paper presents a learning algorithm with synaptic delay-weight plasticity for recurrent spiking neural network, which may enable real-time communication of complex spatiotemporal spike trains simulating organisms. First, by copying the hidden layer between the input and output layers, a context layer can be created. A total error for spatiotemporal pattern is then introduced, along with the learning rule for synaptic weights with synaptic delay plasticity. Moreover, by using synaptic delays of spike train from postsynaptic neuron, the proposed algorithm defines the learning rules for synaptic weights. In addition, the performance of the learning algorithm has been successfully tested to achieve high-precision learning with limited iterative resources. Finally, the main factors have been evaluated qualitatively and quantitatively, such as different lengths and frequencies for desired output spike trains.

1.

INTRODUCTION

Spiking neural networks (SNNs) establish interoperability properties at the artificial neural network level by simulating the communication mechanism of firing spike trains in real time1,2. Biological neuron signals can be converted into continuous functions by SNNs2,3, which are widely used for spatial-temporal data analysis in research fields. Several categories of SNNs architecture have been developed in recent years based on different applicability, including single-layer SNNs, multi-layer feed-forward SNNs, and recurrent SNNs. Compared with the other two network structures, recurrent SNNs (RSNNs) are more biologically realistic, and it has the ability to simulate the actual biological activities of biological pulse issuance, transmission, processing and feedback in real time. At the same time, RSNNs are the artificially constructed structures that are closest to the real biological complex network node links. An artificial neural network with recurrent connections is a type of machine learning model with a feedback loop structure that can be used to assist in solving parallel processes, such as machine automation4, intelligent recognition5, and textual analysis6.

A key component of learning and memory is synaptic plasticity. As part of his initial synaptic plasticity hypothesis, Hebb presented that synergistic activity between presynaptic and postsynaptic neurons played a vital role in increasing synaptic potentiation7. With the in-depth exploration of researchers, excellent results have emerged, including strengthening synaptic weights using spike-timing dependent plasticity (STDP)8, weakening them using anti-STDP9, threshold driven plasticity rules (TDP)10, precise-spike-driven synaptic plasticity11 and somato-dendritic synaptic plasticity12. Consequently, the supervised synaptic plasticity learning mechanism, such as STDP and TDP, missing the characteristic of the delay in the propagation of impulse signals in biological synapses, cannot be directly used to express the biological communication mechanism of RSNNs. From this point of view, it focuses more on the exploration of neural dynamics rather than just analyzing the macro variables of neural networks13. On the basis of the comparative study, scientists do not emphasize the importance of developing synaptic weights between neuronal connections, which calls for developing sophisticated algorithms for RSNNs.

The aim of this paper is to propose a supervised learning rule based on synaptic delay plasticity with a spike train weighting adjustment operation in all layers, which can improve the learning ability for complex spatiotemporal patterns. The update method of synaptic weights is defined in combination with the synaptic delay, and then a supervised learning rule is constructed to realize the spatiotemporal communication of spike train. In Section 2 we mainly introduce the network structure of RSNNs and the simplified neural information processing flow by introducing the context layer. In Section 3 we give the construction process of learning rules for each layer of the network. In Section 4 we show the learning performance of the spike train and the effect of different element variations. A summary of the work in paper is presented in Section 5.

2.

RELATED WORK

This section mainly describes the linear network structure and conceptual network introduction of RSNNs in detail. Besides, spike train function is introduced through mathematical expressions.

2.1

The network structure

In this study, a multi-layer spiking neural network is recursively constructed using random coupling and intrinsic connectivity. As shown in Figure 1, the network structure consists of three main layers, namely input, hidden and output, and each layer consists of neuronal connections. It is obvious that the processing process between the layers for spike train is a sequential recursive feed-forward structure, while the randomly coupled real-time communication of the neurons within the hidden layers forms recurrent connections. To achieve a recurrent connection approach as shown in Figure 1 by the dotted line, a connectivity degree (Cd) is applied to indicate the tightness of the coupling within the hidden neurons. In fact, the hidden neurons are connected in an intricate way, which is more reflected by random coupling, so we just artificially abstract to facilitate the construction of network model.

Figure 1.

The constructed network structure.

00043_PSISDG12506_125060D_page_2_1.jpg

As mentioned above, RSNNs are complex in structure, so it is difficult to develop an algorithm for spike train learning in terms of synaptic delay-weight plasticity. In order to follow the structural principle and facilitate program design, a copy of the recurrent layer (called the context layer) is placed at the bottom with the aim of converting RSNNs into an equivalent feed-forward structure, as shown in Figure 2.

Figure 2.

The network structure conversion with context layer.

00043_PSISDG12506_125060D_page_2_2.jpg

2.2

The spike train function

After the biological neuron is stimulated by the presynaptic a spike train si, it will generate the stress response output spike train so. A continuous-time linear Poisson neuron model is used to establish the relationship between input and output spike trains, which can be shown as:

00043_PSISDG12506_125060D_page_3_1.jpg

where the number of presynaptic neurons is denoted by A, and an important part of the spike train processing between neuron i and another neuron o is the synaptic weight woi. Based on the results in References14,15, learning rules can be constructed using a simplified linear summation mechanism of smooth spike trains.

In addition, a smoothing function h is chosen for mathematical convenience, which allows discrete spike trains to be converted into continuous functions. On R2(ϖ) space, si and sj are defined as follows16:

00043_PSISDG12506_125060D_page_3_2.jpg

where an autocorrelation between smoothing function u and the kernel function Θ, and the kernel can be defined as Θ(tm,tn) = ∫Γu(ttm)u(ttn) dt.

3.

SUPERVISED LEARNING ALGORITHM WITH SYNAPTIC DELAY-WEIGHT PLASTICITY

In this case, we can define 00043_PSISDG12506_125060D_page_3_3.jpg, as the desired spike train, and 00043_PSISDG12506_125060D_page_3_4.jpg, as the actual spike train, respectively. By combining equations (1) and (2), we can interpret the total error of RSNNs at time t as the error train in output spikes:

00043_PSISDG12506_125060D_page_3_5.jpg

For all output neurons, the error function can be transformed into the inner product of spike trains, facilitating the adjustment of real-time spike train for an efficient learning process.

By backpropaguing network errors to synaptic weights through error backpropagation, a generalized delta update rule is used to modify synaptic weights. As a result of synaptic connections, the synaptic weight w is calculated as follows:

00043_PSISDG12506_125060D_page_3_6.jpg

where η is the learning rate, ∇E is the error function for adjusting the learning efficiency of spike train, and di is synaptic delay, which can be expressed as:

00043_PSISDG12506_125060D_page_3_7.jpg

where dϕ is the synapse ϕ that need to be adjusted, and 00043_PSISDG12506_125060D_page_3_8.jpg denotes the distance between the time interval when the maximum postsynaptic potential occurs between the input and the desired after the generation of spike trains, which can be represented as:

00043_PSISDG12506_125060D_page_3_9.jpg

If 00043_PSISDG12506_125060D_page_3_10.jpg, the synaptic delay di is assigned as 00043_PSISDG12506_125060D_page_3_11.jpg. Otherwise, di is assigned as 00043_PSISDG12506_125060D_page_3_12.jpg The mathematical expression for the gradient ∇E is:

00043_PSISDG12506_125060D_page_4_1.jpg

As demonstrated in Figure 2, synaptic weight learning rules for the network layer can be separated into three layers: output, context, and input.

3.1

Learning rule in output layer (NO)

Using equation (7) and the corresponding partial derivative transformation operation, the gradient ∇Eoh is associated with the recurrent neurons at time t, which can be shown:

00043_PSISDG12506_125060D_page_4_2.jpg

Accordingly, the rule for updating the synaptic weight ∆woh based on the inner products of adjacent layer relationships and spike train conversion transitions is:

00043_PSISDG12506_125060D_page_4_3.jpg

where the spike train counts of the actual and the expected are denoted by 00043_PSISDG12506_125060D_page_4_4.jpg and 00043_PSISDG12506_125060D_page_4_5.jpg, both as continuous values.

3.2

Learning rule in context layer (NC)

From equation (7) and the partial derivative operation, the gradient ∇Ehr is associated with the recurrent neurons at time t, which can be represented as:

00043_PSISDG12506_125060D_page_4_6.jpg

The rule for updating the synaptic weight ∆whr of adjacent layer relationships and spike train conversion transitions is:

00043_PSISDG12506_125060D_page_4_7.jpg

3.3

Learning rule in input layer (NI)

In the same way, the gradient ∇Eoh is associated with the recurrent neurons at time t, which can be show as:

00043_PSISDG12506_125060D_page_4_8.jpg

Similarly, the rule of the synaptic weight ∆whi updated is:

00043_PSISDG12506_125060D_page_4_9.jpg

Finally, a summary of the learning rules for synaptic weights is given in equations (9), (11), and (13) corresponding to each layer of the RSNNs.

4.

EXPERIMENTS

This section shows the learning capabilities of RSNNs when trained with a supervised learning algorithm with spike weights. The learning algorithm is set up by defining the initial parameters to facilitate testing and verification, and then the emulating performance of spike train based on learning rules is analyzed.

4.1

Experimental setup

The short-term memory spike response model (SRM)17 is chosen in the experiment with the aim of constructing the network as a neuronal model, which has the ability to solve the spatiotemporal spike pattern problems of RSNNs after training.

The settings of related parameters such as SRM model and other variables are shown in Table 1. Finally, at least one test path is satisfied based on the averaged experimental results over 100 trials, specifically applying the learning epoch for up to 500 steps or until no network error exists.

Table 1.

Related parameter settings.

SectionsParametersSettings
SRMThe absolute refractory period1 ms
The neuron threshold1.0
The number of neurons in each layerNi = Nr = 150
No= 1
Initialized synaptic weight distribution interval(0, 0.5)
The connectivity degreeCh=0.35
Other variablesThe rate at which Poisson generates spikesri =20 Hz
rd=50 Hz
The Gaussian kernelΘ(x, y) = exp(–|x – y|2 / 2σ2)
The learning rate0.001
The time step0.1

4.2

Elemental analysis

Figure 3 shows the simulation threads of spike train based learning rules with synaptic delay. Figure 3a illustrates an overview of the learning process of the pulse sequence, which is divided into three phases: initial, desired and actual. Through supervised learning over about 250 epochs, the learning algorithm produces the desired output spike train in 150 ms intervals based on the initial output spike train. As shown in Figure 3b, the total network error E evolves over time. Following 250 learning epochs, the error function drops to 0 as the network is iteratively learned. This means that the spike train achieves the actual-to-expected identicality.

Figure 3.

The simulation threads of spike train based learning rules with synaptic delay for RSNNs. (a): The efficient iterative process. Δ, ∇ and ∙ all denote spike trains, each specific to the initial, desired and actual; (b): The errors on the network evolve over time.

00043_PSISDG12506_125060D_page_6_1.jpg

Figure 4 shows the variation of network errors based on desired output spike train of different lengths. Figure 4a illustrates that the optimal expected output spike train length of the algorithm is 150 ms, then high-precision autonomous learning can be performed, and the error increases significantly with the length of spike train. It is intuitively evident from Figure 4b that the learning epoch number for the algorithm with synaptic delay incorporated is about 250 epochs. The main reason is that the occurrence of negative feedback regarding learning accuracy and spike train length, where the error rises significantly when the length increases.

Figure 4.

The spike trains with varying lengths when iterating through 500 learning epochs. (a): The differences in network errors due to spike train lengths; (b): The effect of spike train lengths on errors and learning epochs.

00043_PSISDG12506_125060D_page_6_2.jpg

Figure 5 shows the variation of network errors based on desired output spike train of different frequencies. Figure 5a illustrates that the network error grows significantly as the frequency of desired output spike train increases, and the algorithm can perform high-precision autonomous learning when the output frequency is set to 50 Hz. From Figure 5b, the learning epoch of constructed rules is approximately in equilibrium as output spike frequency varies.

Figure 5.

The spike trains with varying frequencies when iterating through 500 learning epochs. (a): The differences in network errors due to spike train frequencies; (b): The effect of spike train frequencies on errors and learning epochs.

00043_PSISDG12506_125060D_page_7_1.jpg

5.

CONCLUSIONS

In this work, we propose a multi-spike learning rule with synaptic delay-weight plasticity based on weighting adjustment operation of spike train for RSNNs. The introduction of synaptic delay makes the adjustment of synaptic weights more similar to the processing process of biological neurons for spike train. The synaptic weight update rules combined with synaptic delay are given mathematical representations on NO, NC, and NI, respectively. The high-precision experimental results are obtained in the learning process by integrating the learning algorithm with synaptic delay, including the analysis of the optimal learning performance for spike train, the influence of different spike train lengths, and the change of the different output frequency for desired spike train. Therefore, the updating of synaptic weights based on the delay characteristic may be utilized in various neuron learning rules, neuron models and complex network structures.

ACKNOWLEDGMENTS

The research is supported by the City-School Science and Technology Strategic Cooperation Project of Nanchong City under Grants no. SXQHJH014, and Philosophy and Social Sciences Research Planning Project of Nanchong City under Grants no. NC22C361.

REFERENCES

[1] 

Demin, V. A., Nekhaev, D. V. and Surazhevsky, I. A., “Necessary conditions for STDP-based pattern recognition learning in a memristive spiking neural network,” Neural Networks, 134 64 –75 (2021). https://doi.org/10.1016/j.neunet.2020.11.005 Google Scholar

[2] 

Wang, X., Lin, X. and Dang, X., “Supervised learning in spiking neural networks: A review of algorithms and evaluations,” Neural Networks, 125 258 –280 (2020). https://doi.org/10.1016/j.neunet.2020.02.011 Google Scholar

[3] 

Tan, C., Šarlija, M. and Kasabov, N., “NeuroSense: Short-term emotion recognition and understanding based on spiking neural network modelling of spatio-temporal EEG patterns,” Neurocomputing, 434 137 –148 (2021). https://doi.org/10.1016/j.neucom.2020.12.098 Google Scholar

[4] 

Shaban, A., Bezugam, S. S. and Suri, M., “An adaptive threshold neuron for recurrent spiking neural networks with nanodevice hardware implementation,” Nature Communications, 12 1 –11 (2021). https://doi.org/10.1038/s41467-021-24427-8 Google Scholar

[5] 

Shen, J., Liu, J. K. and Wang, Y., “Dynamic spatiotemporal pattern recognition with recurrent spiking neural network,” Neural Computation, 33 2971 –299 (2021). Google Scholar

[6] 

Shen, J., Lin, K. and Wang, Y., “Character recognition from trajectory by recurrent spiking neural networks,” in 39th Annual Inter. Conf. of the IEEE Engineering in Medicine and Biology Society, 2900 –2903 (2017). Google Scholar

[7] 

Hebb, D. O., “Perception of a complex: The phase sequence,” [The Organization of Behavior: A Neuropsychological Theory], 79 –106 Psychology Press,2005). Google Scholar

[8] 

Clopath, C., Büsing, L. and Vasilaki, E., “Connectivity reflects coding: a model of voltage-based STDP with homeostasis,” Nature Neuroscience, 13 344 –352 (2010). https://doi.org/10.1038/nn.2479 Google Scholar

[9] 

Rumsey, C. and Abbott, L., “Synaptic democracy in active dendrites,” Journal of Neurophysiology, 96 2307 –2318 (2006). https://doi.org/10.1152/jn.00149.2006 Google Scholar

[10] 

Yu, Q., Li, H. and Tan, K. C., “Spike timing or rate neurons learn to make decisions for both through threshold-driven plasticity,” IEEE Transactions on Cybernetics, 49 2178 –2189 (2018). https://doi.org/10.1109/TCYB.2018.2821692 Google Scholar

[11] 

Yu, Q., Tang, H. and Tan, K. C., “Precise-spike-driven synaptic plasticity: Learning hetero-association of spatiotemporal spike patterns,” Plos One, 8 e78318 (2013). https://doi.org/10.1371/journal.pone.0078318 Google Scholar

[12] 

Schiess, M. and Urbanczik, R., “Somato-dendritic synaptic plasticity and error-backpropagation in active dendrites,” PLoS computational biology, 12 e1004638 (2016). https://doi.org/10.1371/journal.pcbi.1004638 Google Scholar

[13] 

Pani, D., Meloni, P. and Tuveri, G., “An FPGA platform for real-time simulation of spiking neuronal networks,” Frontiers in Neuroscience, 11 90 (2017). https://doi.org/10.3389/fnins.2017.00090 Google Scholar

[14] 

Carnell, A. and Richardson, D., “Linear algebra for time series of spikes,” ESANN, 363 –368 (2005). Google Scholar

[15] 

Sporea, I. and Grüning, A., “A supervised learning in multilayer spiking neural networks,” Neural computation, 25 473 –509 (2013). https://doi.org/10.1162/NECO_a_00396 Google Scholar

[16] 

Paiva, A. R. C., Park, I. and Principe, J. C., “A reproducing kernel Hilbert space framework for spike train signal processing,” Neural Computation, 21 424 –449 (2009). https://doi.org/10.1162/neco.2008.09-07-614 Google Scholar

[17] 

Wulfram, G. and Werner, M., “[Spiking Neuron Models: Single Neurons, Populations, Plasticity],” Cambridge University Press, (2002). Google Scholar
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Guoyong Shi, Jungang Liang, and Yong Cui "Temporal coding in recurrent spiking neural networks with synaptic delay-weight plasticity", Proc. SPIE 12506, Third International Conference on Computer Science and Communication Technology (ICCSCT 2022), 125060D (28 December 2022); https://doi.org/10.1117/12.2661848
Advertisement
Advertisement
RIGHTS & PERMISSIONS
Get copyright permission  Get copyright permission on Copyright Marketplace
KEYWORDS
Neurons

Neural networks

Machine learning

Algorithm development

Synaptic plasticity

Artificial neural networks

Biological research

Back to Top