Spike: A GPU Optimised Spiking Neural Network Simulator Ahmad, Nasir, Isbister, James B, St Clere Smithe, Toby, and Stringer, Simon M 2018
Spiking Neural Network (SNN) simulations require internal variables, such as the membrane voltages of individual neurons and their synaptic inputs, to be updated at sub-millisecond resolution. A single second of simulation time therefore requires many thousands of update calculations per neuron, and increases in the scale of SNN models have led to manyfold increases in the runtime of SNN simulations. Existing solutions to this problem of scale include high-performance CPU-based simulators capable of multithreaded execution ("CPU parallelism"). More recently, GPU-based simulators have emerged which aim to exploit GPU parallelism for SNN execution. We have identified several key speedups which give GPU-based simulators up to an order-of-magnitude performance increase over CPU-based simulators on several benchmarks. We present the Spike simulator with three key optimisations: timestep grouping, active synapse grouping, and delay insensitivity. Combined, these optimisations greatly increase the speed of executing an SNN simulation and produce a simulator which is, on a single machine, faster than currently available simulators.
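To illustrate why such simulations are costly, the following is a minimal sketch of the sub-millisecond update loop an SNN simulator must execute; it is not Spike's actual implementation, and all constants (timestep, membrane time constant, thresholds, input drive) are illustrative assumptions.

```python
# Minimal leaky integrate-and-fire (LIF) update loop: at dt = 0.1 ms, one
# simulated second requires 10,000 state updates per neuron.
import numpy as np

def simulate_lif(n_neurons=100, t_sim=1.0, dt=1e-4, seed=0):
    """LIF neurons driven by a constant input with Gaussian noise."""
    rng = np.random.default_rng(seed)
    tau_m = 0.02                       # membrane time constant (s)
    v_rest, v_thresh, v_reset = -70e-3, -50e-3, -70e-3  # volts
    v = np.full(n_neurons, v_rest)
    n_steps = round(t_sim / dt)        # 10,000 steps for one second
    spike_counts = np.zeros(n_neurons, dtype=int)
    for _ in range(n_steps):
        drive = 25e-3 + 5e-3 * rng.standard_normal(n_neurons)
        v += dt / tau_m * (v_rest - v + drive)   # leaky integration
        fired = v >= v_thresh
        spike_counts[fired] += 1
        v[fired] = v_reset                        # reset after spiking
    return n_steps, spike_counts

steps, counts = simulate_lif()
```

Spike's optimisations target exactly this structure: timestep grouping amortises per-step overhead across several such updates, and active synapse grouping restricts work to synapses whose spikes are currently in transit.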
The emergence of polychronization and feature binding in a spiking neural network model of the primate ventral visual system Eguchi, Akihiro, Isbister, James B, Ahmad, Nasir, and Stringer, Simon Psychol. Rev. 2018
We present a hierarchical neural network model, in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization), which respond specifically to randomized Poisson spike trains representing the input training images. The performance is improved by including top-down and lateral synaptic connections, as well as introducing multiple synaptic contacts between each pair of pre- and postsynaptic neurons, with different synaptic contacts having different axonal delays. Spike-timing-dependent plasticity thus allows the model to select the most effective axonal transmission delay between neurons. Furthermore, neurons representing the binding relationship between low-level and high-level visual features emerge through visually guided learning. This begins to provide a way forward to solving the classic feature binding problem in visual neuroscience and leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers. We name this hypothetical upward projection of information the "holographic principle."
A new approach to solving the feature-binding problem in primate vision Isbister, James B, Eguchi, Akihiro, Ahmad, Nasir, Galeazzi, Juan M, Buckley, Mark J, and Stringer, Simon Interface Focus 2018
We discuss a recently proposed approach to solve the classic feature-binding problem in primate vision that uses neural dynamics known to be present within the visual cortex. Broadly, the feature-binding problem in the visual context concerns not only how a hierarchy of features such as edges and objects within a scene are represented, but also the hierarchical relationships between these features at every spatial scale across the visual field. This is necessary for the visual brain to be able to make sense of its visuospatial world. Solving this problem is an important step towards the development of artificial general intelligence. In neural network simulation studies, it has been found that neurons encoding the binding relations between visual features, known as binding neurons, emerge during visual training when key properties of the visual cortex are incorporated into the models. These biological network properties include (i) bottom-up, lateral and top-down synaptic connections, (ii) spiking neuronal dynamics, (iii) spike timing-dependent plasticity, and (iv) a random distribution of axonal transmission delays (of the order of several milliseconds) in the propagation of spikes between neurons. After training the network on a set of visual stimuli, modelling studies have reported observing the gradual emergence of polychronization through successive layers of the network, in which subpopulations of neurons have learned to emit their spikes in regularly repeating spatio-temporal patterns in response to specific visual stimuli. Such a subpopulation of neurons is known as a polychronous neuronal group (PNG). Some neurons embedded within these PNGs receive convergent inputs from neurons representing lower- and higher-level visual features, and thus appear to encode the hierarchical binding relationship between features. 
Neural activity with this kind of spatio-temporal structure robustly emerges in the higher network layers even when neurons in the input layer represent visual stimuli with spike timings that are randomized according to a Poisson distribution. The resulting hierarchical representation of visual scenes in such models, including the representation of hierarchical binding relations between lower- and higher-level visual features, is consistent with the hierarchical phenomenology or subjective experience of primate vision and is distinct from approaches interested in segmenting a visual scene into a finite set of objects.
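The interaction between STDP and distributed axonal delays described above can be sketched with a standard pair-based STDP rule. This is an illustration of the general mechanism, not the exact rule used in these models, and the time constants and learning rates are assumptions.

```python
# Pair-based STDP: a pre-synaptic spike arriving just before the
# post-synaptic spike is potentiated; one arriving after is depressed.
import math

def stdp_dw(t_pre_arrival, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=0.02, tau_minus=0.02):
    """Weight change for one pre-arrival / post-spike pairing.

    t_pre_arrival already includes the axonal transmission delay, so a
    synaptic contact whose delay makes its spike arrive shortly before
    the postsynaptic spike receives the strongest potentiation.
    """
    dt = t_post - t_pre_arrival
    if dt >= 0:                                   # pre before post
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)    # pre after post

# Two contacts between the same neuron pair, differing only in delay:
t_pre, t_post = 0.100, 0.105                      # seconds
short_delay = stdp_dw(t_pre + 0.004, t_post)      # arrives 1 ms before post
long_delay = stdp_dw(t_pre + 0.009, t_post)       # arrives 4 ms after post
```

Because `short_delay` is positive and `long_delay` negative, repeated pairings strengthen the well-matched contact and weaken the mismatched one; this is the sense in which STDP "selects" the most effective transmission delay, which in turn supports the regularly repeating spike timings of a polychronous neuronal group.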
Harmonic Training and the Formation of Pitch Representation in a Neural Network Model of the Auditory Brain Ahmad, Nasir, Higgins, Irina, Walker, Kerry M M, and Stringer, Simon M Front. Comput. Neurosci. 2016
Attempting to explain the perceptual qualities of pitch has proven to be, and remains, a difficult problem. The wide range of sounds which elicit pitch, and a lack of agreement across neurophysiological studies on how pitch is encoded by the brain, have made this attempt more difficult. In describing the potential neural mechanisms by which pitch may be processed, a number of neural networks have been proposed and implemented. However, no unsupervised neural networks with biologically accurate cochlear inputs have yet been demonstrated. This paper proposes a simple system in which pitch-representing neurons emerge in a biologically plausible setting. Purely unsupervised regimes of neural network learning are implemented, and these prove sufficient to identify the pitch of sounds with a variety of spectral profiles, including sounds with missing fundamental frequencies and iterated rippled noises.
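A minimal sketch of the kind of unsupervised, Hebbian-style learning such models rely on is given below; the normalization scheme, learning rate, and the toy "harmonic" input pattern are all assumptions for illustration, not the paper's actual architecture.

```python
# Hebbian potentiation with weight normalization: a neuron repeatedly
# exposed to the same input pattern becomes selective for it.
import numpy as np

def hebbian_step(w, x, lr=0.1):
    """Potentiate weights in proportion to input and output activity,
    then renormalize so total synaptic strength is conserved."""
    y = float(w @ x)                   # postsynaptic activation
    w = w + lr * y * x                 # Hebbian potentiation
    return w / np.linalg.norm(w)       # normalization (competition)

rng = np.random.default_rng(1)
w = rng.random(8)
w /= np.linalg.norm(w)
x = np.zeros(8)
x[[1, 3, 5]] = 1.0                     # a recurring input pattern, e.g.
                                       # co-active harmonic channels
for _ in range(50):
    w = hebbian_step(w, x)
# w now concentrates on the active inputs: the neuron responds
# selectively to this spectral pattern.
```

The normalization step is what makes the learning competitive: weights onto inactive inputs shrink as weights onto co-active inputs grow, so selectivity emerges without any supervisory signal.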
Keystroke dynamics in the pre-touchscreen era Ahmad, Nasir, Szymkowiak, Andrea, and Campbell, Paul A Front. Hum. Neurosci. 2013
Biometric authentication seeks to measure an individual’s unique physiological attributes for the purpose of identity verification. Conventionally, this task has been realized via analyses of fingerprints, signatures, or iris patterns. However, whilst such methods effectively offer a superior security protocol compared with password-based approaches, for example, their substantial infrastructure costs and intrusive nature make them undesirable, and indeed impractical, for many scenarios. An alternative approach seeks to develop similarly robust screening protocols through analysis of typing patterns, formally known as keystroke dynamics. Here, keystroke analysis methodologies can utilize multiple variables, and a range of mathematical techniques, in order to extract individuals’ typing signatures. Such variables may include measurement of the period between key presses and/or releases, or even key-strike pressures. Statistical methods, neural networks, and fuzzy logic have often formed the basis for quantitative analysis of the data gathered, typically from conventional computer keyboards. Extension to more recent technologies such as numerical keypads and touch-screen devices is in its infancy, but obviously important as such devices grow in popularity. Here, we review the state of knowledge pertaining to authentication via conventional keyboards with a view toward indicating how this platform of knowledge can be exploited and extended into the newly emergent type-based technological contexts.
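The timing variables mentioned above can be made concrete with a small sketch. The event format, field names, and example timings here are illustrative assumptions, not a specific system from the review.

```python
# Basic timing features used in keystroke dynamics: dwell times
# (press -> release of the same key) and flight times (release of
# one key -> press of the next).
def keystroke_features(events):
    """events: list of (key, press_time, release_time) in seconds.

    Returns (dwell, flight): per-key hold durations and inter-key gaps,
    which together form a user's typing signature.
    """
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2]
              for i in range(len(events) - 1)]
    return dwell, flight

# A user typing "cat":
events = [("c", 0.000, 0.090), ("a", 0.150, 0.230), ("t", 0.310, 0.395)]
dwell, flight = keystroke_features(events)
# dwell  ≈ [0.090, 0.080, 0.085]  (hold durations)
# flight ≈ [0.060, 0.080]         (inter-key gaps)
```

Verification then amounts to comparing such feature vectors against a stored profile, typically with the statistical, neural-network, or fuzzy-logic classifiers mentioned above.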