Spiking Neural Nets
Leaky integrate-and-fire (integrate, then leak, then (maybe) fire)
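The three steps above can be sketched as a minimal simulation loop. All parameter names and values here (`tau`, `v_thresh`, etc.) are illustrative assumptions, not from any particular model in the literature:

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron.

    Each timestep: integrate input, leak toward rest, (maybe) fire.
    Constants are illustrative, chosen only to show the mechanism.
    """
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v += i_in * dt            # integrate: accumulate input current
        v -= (v / tau) * dt       # leak: exponential decay toward rest (0)
        if v >= v_thresh:         # (maybe) fire: threshold crossing
            spikes.append(t)
            v = v_reset           # reset after a spike
    return spikes
```

Driving this with a constant current produces periodic spiking: the leak balances the input, so the membrane potential climbs to threshold on a fixed schedule, which is also the intuition behind the 'clock' point below.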
Why leak?
- Without leak, neurons could only communicate via firing rate: you couldn't separate a spike train into 'words', because one spike yesterday would look just the same as one spike one second ago.
- Even if you could do that (for instance, by dividing a simulation into timesteps or epochs, and resetting each neuron's state to zero at the beginning of each epoch), within an epoch, leaking additionally allows you to treat incoming spikes close together in time differently from spikes far apart in time. E.g. you could react differently to bursts than to a steady rate of firing with the same average firing rate.
- Leaking allows you to introduce a 'clock' into the circuit, one that runs on a slower timescale than (and without relying on) the time it takes to generate and propagate a spike. The presence of a 'clock' allows you to have subnetworks that are 'pattern generators', i.e. which generate various spatiotemporal patterns. Conventional computers can do this by exploiting the fixed instruction timings of certain assembly instructions, but recall that neural networks are clockless/asynchronous. Lacking a central system clock with fixed 'instruction timings', you could use an external clock peripheral, but the leak effectively gives a clock to each neuron, without having to go to an outside peripheral.
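The burst-versus-steady-rate point can be demonstrated directly: feed a leaky neuron the same number of input spikes either bunched together or spread out, and only the burst crosses threshold. The weights and time constants below are illustrative assumptions, tuned so the contrast is visible:

```python
def lif_response(spike_times, n_steps, w=0.55, tau=5.0, v_thresh=1.0):
    """Output spike times of a LIF neuron driven by discrete input spikes.

    Constants are illustrative: chosen so that a tight burst of three
    input spikes crosses threshold, while the same three spikes spread
    out decay away between arrivals and never do.
    """
    v = 0.0
    out = []
    for t in range(n_steps):
        if t in spike_times:
            v += w                   # integrate an incoming weighted spike
        v *= (1.0 - 1.0 / tau)       # leak: exponential decay
        if v >= v_thresh:            # (maybe) fire
            out.append(t)
            v = 0.0
    return out

# Same number of input spikes, same average rate over the window:
burst = lif_response({0, 1, 2}, 30)     # three spikes close together -> fires
steady = lif_response({0, 10, 20}, 30)  # three spikes spread out -> silent
```

Without the leak (`tau` effectively infinite), both inputs would sum to the same potential and the neuron could not tell them apart; the leak is what makes spike timing, not just spike count, carry information.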
Links: