\documentclass{seminar}
\usepackage{colordvi}
\usepackage{pstcol} % to use the standard `color' package with Seminar
\usepackage{semcolor}
\input{seminar.bug}
\newcommand{\SlideColours}[3][black]{%
  % #1 = frame line color (optional, default = black)
  % #2 = foreground color, #3 = background color
  \slideframe[\psset{linecolor=#1,fillcolor=#3,fillstyle=solid}]{scplain}%
  \color{#2}}
\begin{document}
---
\emph{...suppose all we had were cells with true Hodgkin-Huxley squid axon dynamics. Is that enough for sensory coding? Computation? Muscular control?}
---
The short answer: yes
Slightly longer answer: yes, I think so
Contents
- Turing machine paradigm
- Are H-H neural nets Turing-complete?
- The Spike Response Model: a Turing-complete approximation of H-H
- Advantages to a zoo of subtypes
\SlideColours[red]{black}{white}
\emph{...suppose all we had were cells with true Hodgkin-Huxley squid axon dynamics. Is that enough for sensory coding? Computation? Muscular control?}
For most of the talk I'll focus on the "computation" part.
Answering this question requires defining some terms:
- computation
- sufficient for computation
I'll adopt a paradigm popular in computer science, that of the Turing machine
Turing machine paradigm
- computation $\to$ anything which some Turing machine can do
- sufficient for computation $\to$ able to do whatever Turing machines can do
\SlideColours[red]{black}{white}
Turing machine paradigm cont'd
Computation
- \emph{Turing machine:} A specific theoretical model of a type of computing mechanism. There are many different "Turing machines", but they all have a certain structure.
Sufficient for computation
If, given any function which some Turing machine can compute, you can compute it too, then you can do "computation".
\SlideColours[red]{black}{white}
Turing machine paradigm cont'd
- Proofs about "what some Turing machine can do" must consider every Turing machine. Here's a shortcut.
- There are some Turing machines which are able to \emph{run programs}. Some of these can be programmed to \emph{simulate any other Turing machine}\footnote{given $\infty$ time and memory}. These are called "\textbf{universal Turing machines}".
- Universal Turing machines are models of personal computers\footnote{Except that any Turing machine must have $\infty$ memory capacity; so in that respect, it's better than any PC}; you can program them to run any algorithm
Turing machine paradigm cont'd
- So, a Universal Turing Machine can compute anything that any other Turing machine can compute.
$\therefore$
Given a mechanism, we formally show that it is "sufficient for computation" by proving that it can simulate a Universal Turing Machine. Mechanisms which fulfill this criterion are called "Turing-complete".
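To make "simulating a Turing machine" concrete, here is a minimal sketch of a Turing-machine interpreter in Python. The `run_tm` helper and the bit-flipping example machine are illustrative inventions, not anything from the talk or the literature.

```python
# Minimal Turing machine interpreter (illustrative sketch).
# transitions: (state, symbol) -> (new_state, symbol_to_write, head_move)
def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: invert every bit, halt on the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "1011"))  # -> 0100
```

A Universal Turing Machine is, in this picture, a fixed `transitions` table clever enough to read another machine's table off its own tape and execute it.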
Turing machine paradigm cont'd
Our question becomes:
\emph{are networks with Hodgkin-Huxley neurons capable of simulating a Universal Turing Machine?}
Turing machine paradigm cont'd
$\exists$ complications, but we don't have time to discuss them
\SlideColours[red]{black}{white}
Turing machine paradigm cont'd
Complications
The requirement that Turing machines have $\infty$ memory complicates things.
- A memory-restricted version of a Turing machine is a "finite automaton"
- We'll actually be looking to make finite H-H neural nets which simulate finite automata, and then we'll think, "in the limit, with $\infty$ memory, it would be Turing-complete"
- But actually there are complications in extending finite automata simulations to Turing machine simulations. We don't have time to deal with them now.
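A finite automaton is nothing more than a finite state-transition table, which a short sketch makes concrete; the even-parity machine below is a standard illustrative example, not one from the talk.

```python
# A finite automaton as a transition table: (state, symbol) -> state.
# This illustrative machine accepts bit strings with an even number of 1s.
def run_dfa(transitions, start, accepting, word):
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

parity = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
print(run_dfa(parity, "even", {"even"}, "1101"))  # -> False (three 1s)
```

Unlike the Turing machine, there is no tape: the current state is the machine's entire memory, which is exactly the restriction the bullet above refers to.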
To my knowledge, no one has formally proved that H-H neural nets are Turing-complete
Overview of my argument
- People have proved that simplifications of H-H neural nets are Turing-complete
- Usually, something fails to be Turing-complete because of an inescapable regularity or simplicity which constrains its range of possible behavior
- Therefore, if a simple mechanism is Turing-complete, a more complicated variant is likely Turing-complete too
- but THIS IS NOT A PROOF!
Many simpler neural net types are Turing-complete
- McCulloch-Pitts neural nets (thresholding neurons) are Turing-complete\footnote{with the addition of infinite memory; Garzon and Franklin '90}
- Recurrent neural nets with sigmoidal transfer functions are Turing-complete\footnote{Kilian and Siegelmann '93}
- Neural nets with Spike Response Model neurons (called SNNs) are Turing-complete\footnote{in the presence of noise, they are finite automata; Maass, Natschlager, and Markram '04}
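To illustrate the kind of unit these results are about, here is a minimal sketch of a McCulloch-Pitts threshold neuron in Python. The weights and threshold realizing NAND are illustrative choices; NAND is functionally complete for Boolean logic, which is one intuition behind the completeness results above.

```python
# A McCulloch-Pitts unit: fires (outputs 1) iff the weighted sum of its
# binary inputs reaches the threshold. Weights/threshold are illustrative.
def mp_neuron(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# NAND from a single threshold unit.
def nand(a, b):
    return mp_neuron([a, b], weights=[-1, -1], threshold=-1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))
```

Since any Boolean circuit can be built from NAND gates, feedforward layers of such units can compute any finite Boolean function; the cited results add recurrence and (unbounded) memory to get full Turing-completeness.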
A Spike Response Model neuron
- Has a membrane potential which is a linear summation of current EPSPs and IPSPs
- Has a spiking threshold
- excitatory or inhibitory synapses with stereotyped IPSP and EPSP timecourses
\SlideColours[red]{black}{white}
A Spike Response Model neuron
spiking
- soma remembers \textbf{time of last spike} ($t_{prev}$)
- \textbf{membrane potential} $P(t)$
- \textbf{threshold} $\Theta(t - t_{prev})$ which can depend on time since last spike
- Neuron spikes when $P(t) > \Theta(t - t_{prev})$
\SlideColours[red]{black}{white}
A Spike Response Model neuron cont'd
Synapses
- excitatory or inhibitory
- \textbf{stereotyped IPSP and EPSP timecourses}, which depend on time since the spike which caused them.
---
spatial and temporal summation
\begin{eqnarray*}
P(t) & = & \sum_{i \textrm{ is an incoming synapse}} \; \sum_{s \textrm{ is a previous spike}} w_i \, e_i(t - t_s)
\end{eqnarray*}
(where $e_i$ is the response function of synapse $i$, $w_i$ is its weight, and $t_s$ is the time of spike $s$)
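The summation above can be sketched directly in code; the exponential response kernel and all parameter values below are illustrative assumptions, not part of the SRM definition.

```python
import math

# Spatial and temporal summation for an SRM neuron: P(t) is the weighted
# sum of stereotyped response kernels triggered by past presynaptic spikes.
def epsp(dt, tau=5.0):
    """Illustrative response function e_i: value dt ms after the causing spike."""
    return math.exp(-dt / tau) if dt >= 0 else 0.0

def membrane_potential(t, synapses):
    """synapses: list of (weight, [spike_times]); weight < 0 => inhibitory."""
    return sum(w * epsp(t - ts) for w, spikes in synapses for ts in spikes)

def fires(t, synapses, theta=1.0):
    # The neuron spikes when P(t) exceeds the threshold.
    return membrane_potential(t, synapses) > theta

# Two synapses (one excitatory, one inhibitory), three past spikes.
syn = [(0.8, [0.0, 2.0]), (-0.5, [1.0])]
print(round(membrane_potential(3.0, syn), 3))  # -> 0.759
```

Note the linearity: each spike's contribution is simply added, with no interaction terms, which is exactly the simplification the next slide points out.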
A Spike Response Model neuron cont'd
Cares about \textbf{one thing}: when spikes occur
\SlideColours[red]{black}{white}
A Spike Response Model neuron cont'd
- Cares about \textbf{one thing}: when spikes occur
- Doesn't explain shape or mechanism of spikes or EPSP/IPSPs
- Can include refractory periods
- Prohibits nonlinear interactions between synaptic inputs or between spikes
Networks built out of these neurons are called SNNs.
Spike Response Model neurons can approximate Hodgkin-Huxley dynamics
Kistler et al.\ ('97) fit the SRM to mimic the Hodgkin-Huxley equations.
The SRM predicts the timing of almost \textbf{90\%} of the spikes correctly.
Do SNN computability results apply to Hodgkin-Huxley?
- SNNs are similar to, and simpler than, Hodgkin-Huxley neurons
- SNN computability theorems allow bounded systemic noise; \emph{maybe} the differences can be counted as "noise"
- GENESIS simulations using Hodgkin-Huxley neurons to build architectures designed for SNNs tend to work
Do SNN computability results apply to Hodgkin-Huxley?
\bigskip
Formally, no; but they strongly suggest that H-H nets are Turing-complete
Sensory and motor functions
- Within Turing paradigm, sensory and motor are just special functions for computation
- Can a network read arbitrary input patterns and generate arbitrary output patterns?
- SNNs can\footnote{almost; relation between target neural code and a certain neural code used in the theorem must be continuous}
Why the zoo?
- Turing paradigm ignores time and memory costs
- Usually, the availability of more complex or more diverse computing elements reduces time and memory costs
\SlideColours[red]{black}{white}
Examples
- Some algorithms are $O(n \log n)$ in feedforward threshold architectures, but $O(1)$ in SNNs
- SNNs become more efficient if a model of "shunting inhibition" is added
Why the zoo? cont'd
$\therefore$ Hodgkin-Huxley is probably "sufficient" in the Turing paradigm, but creatures built out of only these neurons would probably be either simple or slow (and would need huge brains)
\end{document}