Possible projects for the Learning Topic Area

Tempotron-like spatio-temporal spike-based pattern recognition

We will be bringing chips with STDP and Tempotron-like spike-based plasticity mechanisms. In this project we will set up an experimental protocol to induce learning in the chip synapses and verify if, and to what extent, the neurons can be trained to recognize spatio-temporal patterns.
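As a software reference for the experimental protocol, here is a minimal sketch of the Tempotron learning rule (Gütig & Sompolinsky): the neuron classifies a spatio-temporal spike pattern by firing or not, and on an error each weight is nudged by its PSP value at the time of maximal potential. The kernel time constants, learning rate, and threshold below are illustrative assumptions, not chip parameters:

```python
import numpy as np

def psp_kernel(t, tau=15.0, tau_s=3.75):
    """Normalized double-exponential PSP kernel; zero for t <= 0 (times in ms)."""
    k = np.where(t > 0, np.exp(-t / tau) - np.exp(-t / tau_s), 0.0)
    peak = k.max()
    return k / peak if peak > 0 else k

def tempotron_step(spikes, w, label, threshold=1.0, lr=0.05,
                   t_max=100.0, dt=0.5):
    """One Tempotron update. spikes: list (one entry per afferent) of arrays
    of spike times; label: True if the neuron should fire for this pattern.
    If the fire/no-fire decision is wrong, each weight is moved by its PSP
    value at the time of maximal membrane potential."""
    t = np.arange(0.0, t_max, dt)
    # summed PSP trace contributed by each afferent
    psps = np.array([np.sum([psp_kernel(t - ti) for ti in s], axis=0)
                     if len(s) else np.zeros_like(t) for s in spikes])
    v = w @ psps                        # membrane potential (no reset during training)
    t_star = int(np.argmax(v))          # time of maximal potential
    fired = bool(v[t_star] >= threshold)
    if fired != bool(label):            # error: push V(t*) toward/away from threshold
        w = w + lr * (1.0 if label else -1.0) * psps[:, t_star]
    return w, fired
```

On chip, the same logic would be driven by recorded spike times rather than a simulated membrane trace.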

Project leader: Elisabetta Chicca

interested : Mostafa Rahimi Azghadi

Mixed HW/SW STDP learning paradigm

One of the chips available has synapses with programmable (5-bit) weights. As the chip uses the Address-Event Representation, the timing of input and output spikes is accessible to SW algorithms. In this project we will configure the chip PCB to send this information to a laptop and program STDP learning algorithms to update the 5-bit weight values stored on the chip.
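A sketch of what the laptop-side update loop could look like, assuming pair-based STDP over a stream of timestamped AER events and 5-bit weights clipped to 0..31; the event addressing scheme, time window, and integer step sizes are placeholders:

```python
import numpy as np

N_PRE, W_MAX = 4, 31          # 5-bit weights: 0..31 (assumed range)
A_PLUS, A_MINUS = 1, 1        # integer weight increments (assumed)
TAU = 20.0                    # STDP coincidence window in ms (assumed)

def stdp_on_events(events, weights):
    """Pair-based STDP over a stream of AER events.
    events: list of (timestamp_ms, address), with address 'pre:<i>' for input
    spikes and 'post' for the output neuron. weights: int array of per-synapse
    5-bit values, updated in place and returned."""
    last_pre = np.full(N_PRE, -np.inf)
    last_post = -np.inf
    for t, addr in sorted(events):
        if addr == 'post':
            last_post = t
            # potentiate synapses whose pre spike arrived shortly before this post spike
            recent = (t - last_pre) < TAU
            weights[recent] = np.minimum(weights[recent] + A_PLUS, W_MAX)
        else:
            i = int(addr.split(':')[1])
            last_pre[i] = t
            # depress if this pre spike follows a recent post spike
            if t - last_post < TAU:
                weights[i] = max(weights[i] - A_MINUS, 0)
    return weights
```

The computed weights would then be written back to the chip's on-board weight memory over the same PCB interface.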

Project leader: Giacomo Indiveri

interested : Mostafa Rahimi Azghadi

Learning in DFT

Learning in dynamic field theory (DFT) is based on the memory-trace dynamics (MTD). In this project, we will implement MTD in (1) a 1D neural field, (2) a 2D associative neural field, which couples two 1D fields, and (3) a 2D transformation field, which couples three 1D fields. The ultimate goal is to use the MTD learning rule to update the weights on the HW chip and/or to link the system to AER sensors and motors and learn a simple sensory-motor map.
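For case (1), the memory-trace dynamics can be sketched as a simple Euler integration on top of the field: the trace builds up toward the field output where the field is suprathreshold and slowly decays elsewhere. The two time constants and the hard threshold below are assumptions for illustration:

```python
import numpy as np

def mtd_step(u_field, u_mem, dt=1.0, tau_build=100.0, tau_decay=2000.0):
    """One Euler step of a memory-trace dynamics for a 1D neural field.
    u_field: field activation; u_mem: memory trace (same shape).
    The trace grows toward the suprathreshold field output at active
    sites and decays slowly elsewhere (rates are assumed values)."""
    f = (u_field > 0).astype(float)          # suprathreshold field output
    active = f > 0
    du = np.where(active,
                  (-u_mem + f) / tau_build,  # build-up under active sites
                  -u_mem / tau_decay)        # slow decay elsewhere
    return u_mem + dt * du
```

For cases (2) and (3), the same rule applies pointwise on the 2D field, with the field input formed by the (outer-product-style) coupling of the 1D fields.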

Project leader: Yulia Sandamirskaya

interested : Federico Corradi, Jorg Conradt

Sequence learning with neural dynamics in HW

We could implement the neural-fields model for learning and generating serially ordered sequences on the chip.

Alternatively, we could work out a hybrid of the neural-fields "behavior organization" architecture and finite-state machines (FSMs) based on WTAs.

Project leader: Yulia Sandamirskaya

interested : Federico Corradi

Learning Bayesian models in HW

The spike-based Expectation Maximization approach makes it possible to learn generative Bayesian models in a winner-take-all (WTA) circuit of spiking neurons with a variant of STDP learning. In this project the goal would be to create a simple instance of generative model learning on a chip. If you can do it with input coming from an event-based sensor like the DVS or the silicon cochlea, even better!
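For reference, the weight update at the heart of spike-based Expectation Maximization (after Nessler et al.) can be sketched as follows: whenever a WTA neuron fires, its weights from recently active inputs move toward an equilibrium determined by how often those inputs co-occur with the neuron's spikes, while weights from silent inputs decay. The learning rate and the constant c are illustrative:

```python
import numpy as np

def sem_update(w, pre_active, lr=0.05, c=1.0):
    """One spike-triggered weight update of spike-based Expectation
    Maximization, applied to the weights of the WTA neuron that just fired.
    pre_active: boolean vector marking which inputs spiked within the
    recent coincidence window. Constants lr and c are assumed values."""
    dw = np.where(pre_active, c * np.exp(-w) - 1.0, -1.0)
    return w + lr * dw
```

The exponential term makes the stochastic update implement an online EM step: at equilibrium the weights encode (log) conditional probabilities of the inputs given the neuron's spike.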

Project leader: Michael Pfeiffer

Learning, selectivity and classification in a neuromorphic VLSI system with triplet-based STDP synapses

For this project we need a network of neurons and synapses with triplet-based STDP (TSTDP). The neurons and the spike timings are already accessible using hardware that will be brought to the workshop; however, the synapses of these systems have to be modified/programmed to implement TSTDP. Once the network is set up and the synapses show TSTDP characteristics, several experiments can be performed with the resulting TSTDP neuromorphic system. I propose two experiments that should be of particular interest to the neuromorphic community.

In the first experiment, we consider a feedforward network with N input neurons connected to a single output neuron via N TSTDP synapses. This setup should be capable of selecting one pattern out of M patterns with different mean firing rates, demonstrated by a higher firing rate of the postsynaptic neuron for the targeted pattern. Learning can be observed by looking at how the synaptic weights form in response to a specific pattern selected from the pool of M patterns. This part will show whether the proposed circuit is capable of orientation selectivity, an essential feature of neurons in the primary visual cortex.

In the second experiment, the patterns have similar mean firing rates. Here we will test whether the TSTDP rule can overcome a known limitation of pair-based STDP, namely its inability to learn patterns with similar mean firing rates. Again, N TSTDP synapses are connected to an output neuron. During training, the synapses receive random binary vectors of spike trains with similar mean firing rates, in the presence of a teacher signal. After training, a test phase is conducted by applying similar patterns to the network and checking the firing rate of the output neuron.
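As a software reference for the synapse modification, a minimal trace-based formulation of the triplet rule (after Pfister & Gerstner, 2006) could look like the sketch below; the time constants and amplitudes are placeholders rather than fitted values:

```python
import numpy as np

class TripletSTDP:
    """Trace-based triplet STDP. Each synapse keeps a fast presynaptic
    trace r1; the neuron keeps a fast (o1) and a slow (o2) postsynaptic
    trace. The slow trace o2 boosts LTP, which makes potentiation depend
    on the postsynaptic firing rate -- the triplet contribution."""

    def __init__(self, n, tau_plus=16.8, tau_minus=33.7, tau_y=114.0,
                 a2p=5e-3, a2m=7e-3, a3p=6.2e-3):
        self.tau_plus, self.tau_minus, self.tau_y = tau_plus, tau_minus, tau_y
        self.a2p, self.a2m, self.a3p = a2p, a2m, a3p
        self.w = np.zeros(n)       # synaptic weights
        self.r1 = np.zeros(n)      # presynaptic traces
        self.o1 = 0.0              # fast postsynaptic trace (pair depression)
        self.o2 = 0.0              # slow postsynaptic trace (triplet term)

    def _decay(self, dt):
        self.r1 *= np.exp(-dt / self.tau_plus)
        self.o1 *= np.exp(-dt / self.tau_minus)
        self.o2 *= np.exp(-dt / self.tau_y)

    def pre_spike(self, i, dt):
        """Presynaptic spike at synapse i, dt ms after the last event."""
        self._decay(dt)
        self.w[i] -= self.a2m * self.o1   # depression (post-before-pre pairing)
        self.r1[i] += 1.0

    def post_spike(self, dt):
        """Postsynaptic spike, dt ms after the last event."""
        self._decay(dt)
        # potentiation (pre-before-post), boosted by the slow trace o2
        self.w += self.r1 * (self.a2p + self.a3p * self.o2)
        self.o1 += 1.0
        self.o2 += 1.0
```

Running this model alongside the hardware would give a reference trajectory of the weights against which the programmed chip synapses can be checked.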

interested : Mostafa Rahimi Azghadi