Gradient Descent for Spiking Neural Networks

https://doi.org/10.12751/NNCN.BC2017.0088

Abstract

Much of the study of neural computation is based on network models of static neurons that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of an efficient supervised learning algorithm for spiking networks. Here, we present a gradient descent method for optimizing spiking network models by introducing a differentiable formulation of spiking networks and deriving the exact gradient calculation. For demonstration, we trained recurrent spiking networks on two dynamic tasks: one that requires optimizing fast (≈ millisecond) spike-based interactions for efficient encoding of information, and a delayed-memory XOR task over an extended duration (≈ second). The results show that our method optimizes the spiking network dynamics on the time scale of individual spikes as well as on behavioral time scales. In conclusion, our method offers a general-purpose supervised learning algorithm for spiking neural networks, advancing further investigation of spike-based computation.
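Although the abstract does not spell out the formulation, the core idea lends itself to a short illustration. The sketch below assumes one common way to make spiking dynamics differentiable: replace the hard spike threshold with a steep sigmoid gate so that exact gradients flow through the unrolled recurrent simulation, then fit the weights by gradient descent on a task loss. The network size, time constants, toy target, and the use of PyTorch with the Adam optimizer are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: a recurrent spiking network made differentiable by
# smoothing the spike threshold, then trained end to end with autograd.
# All constants and the toy task are illustrative assumptions.
import torch

torch.manual_seed(0)

n_in, n_rec, n_out = 2, 50, 1     # layer sizes (assumed)
T, dt = 100, 1.0                  # simulation steps and step size (ms)
tau_v, tau_s = 10.0, 5.0          # membrane / synaptic time constants (ms)

W_in = (torch.randn(n_rec, n_in) * 0.3).requires_grad_(True)
W_rec = (torch.randn(n_rec, n_rec) * 0.3 / n_rec**0.5).requires_grad_(True)
W_out = (torch.randn(n_out, n_rec) * 0.1).requires_grad_(True)

def run(x):
    """Simulate T steps for input x of shape (T, n_in); return readout (T, n_out)."""
    v = torch.zeros(n_rec)        # membrane potentials
    s = torch.zeros(n_rec)        # filtered spike trains (synaptic traces)
    outs = []
    for t in range(T):
        spikes = torch.sigmoid(10.0 * (v - 1.0))  # smooth gate in place of a hard threshold
        v = v * (1.0 - spikes)                    # soft reset where a spike occurred
        s = s + dt / tau_s * (-s + spikes)        # leaky synaptic filtering of spikes
        v = v + dt / tau_v * (-v + W_in @ x[t] + W_rec @ s)
        outs.append(W_out @ s)                    # linear readout of synaptic traces
    return torch.stack(outs)

# Toy regression target: track a sine wave from constant drive.
x = torch.ones(T, n_in)
target = 0.5 * torch.sin(torch.linspace(0.0, 6.28, T)).unsqueeze(1)

opt = torch.optim.Adam([W_in, W_rec, W_out], lr=1e-2)
for step in range(200):
    loss = ((run(x) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()               # exact gradients through the smoothed dynamics
    opt.step()
```

The steepness of the gate trades off fidelity to the hard threshold against gradient quality: a steeper sigmoid behaves more like a true spike but yields sharper, harder-to-optimize gradients, which is why the smoothing is treated as part of the model rather than as an afterthought.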
