
\documentclass{article}


	\oddsidemargin  -.5in
	\evensidemargin -.5in
	\textwidth  7.2in

\topmargin -.5in \textheight 10.4in

\begin{document}

The question of what machinery would be sufficient for the brain to compute is a daunting one, because it depends on how we define computation and sufficiency. Luckily, this is one of the central issues of theoretical computer science, and so various formal approaches have been developed.

The Turing machine paradigm of computation

The most canonical way to formalize computation is to define the class of computable functions as those functions which can be computed by some Turing machine in finite time. A Turing machine is a formal abstraction of a computing mechanism. Briefly, a Turing machine has an infinite, linear memory tape divided into discrete cells. Above the tape is a mobile read/write head. At each time step, the head may write, or erase and rewrite, any one of an alphabet of symbols in the cell immediately beneath it, and may move either one cell to the left or one to the right. The action taken at each step (what is written in the cell beneath the read/write head, and whether and in which direction the head moves) is deterministically dependent upon the symbol beneath the head and upon the internal state of the head. The head has a finite number of distinct internal states.
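
To make this abstraction concrete, here is a minimal sketch of a Turing machine simulator (in Python); the particular transition table shown, a unary incrementer, is just an illustrative toy, not one of the machines discussed elsewhere in this paper:

\begin{verbatim}
# Minimal Turing machine simulator.
# A machine is a transition table mapping (state, symbol) ->
# (new state, symbol to write, head move), with head move = -1 or +1.
from collections import defaultdict

def run_turing_machine(table, tape, start_state, halt_state, max_steps=10000):
    cells = defaultdict(lambda: '_', enumerate(tape))  # '_' marks a blank cell
    state, head = start_state, 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        state, write, move = table[(state, cells[head])]
        cells[head] = write    # write (or erase and rewrite) the current cell
        head += move           # move one cell left or right
    return ''.join(cells[i] for i in sorted(cells))

# Example machine: append a '1' to a block of 1s (a unary incrementer).
table = {('scan', '1'): ('scan', '1', +1),   # skip over the existing 1s
         ('scan', '_'): ('halt', '1', +1)}   # write a 1 at the first blank
print(run_turing_machine(table, '111', 'scan', 'halt'))   # prints '1111'
\end{verbatim}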

There are many possible Turing machines, but some of them are called "universal Turing machines". These have the property that they can simulate ANY other Turing machine (given sufficient time). The universal Turing machines are an abstraction of a modern digital computer, which can compute any computable function, depending on its program\footnote{Actually, this is a lie; digital computers have finite memory, and so are technically only "finite automata", NOT Turing machines. A Turing machine is more like the limit of a digital computer as memory goes to infinity.}.

Then, given another proposed computing mechanism, one formally proves that the mechanism either is or isn't Turing-complete\footnote{also called Turing universal}, that is, whether the mechanism can simulate some universal Turing machine (given infinite time and memory). Something which has finite memory but which would be Turing-complete if it had infinite memory is a finite automaton, and is able to simulate any other finite automaton within the limits of its memory\footnote{but telling which finite automata would be universal Turing machines in the limit of infinite memory can get technically tricky; for example, any finite function can be simulated by some large lookup table, but no lookup table is a universal Turing machine; to deal with such cases, sometimes a computing mechanism is required to be "uniform", but this is beyond the scope of this paper.}.
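
As a toy illustration of the distinction (not an example from any of the cited papers), a finite automaton such as a parity checker is just a fixed, finite transition table; no matter how long its input, it never uses more than its fixed set of internal states, which is exactly why it falls short of a universal Turing machine without an unbounded tape:

\begin{verbatim}
# A finite automaton: finitely many states, no tape, no unbounded memory.
# It computes the parity of the number of 1s in its input string.
def parity(bits):
    state = 'even'
    for b in bits:
        # the entire "program" is this fixed, finite transition table
        state = {('even', '1'): 'odd',  ('even', '0'): 'even',
                 ('odd',  '1'): 'even', ('odd',  '0'): 'odd'}[(state, b)]
    return state

print(parity('10110'))   # three 1s, so this prints 'odd'
\end{verbatim}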

To summarize, the canonical method of formalizing the question of whether a mechanism is "sufficient for computation" is to define "computation" as "the class of functions which can be computed by some Turing machine", and to define "sufficient for computation" as "the mechanism can simulate a universal Turing machine, given infinite time and memory".

Are networks of Hodgkin-Huxley neurons Turing-complete?

Because of the analytical intractability of the Hodgkin-Huxley equations, a complete formal proof of the Turing universality of networks of Hodgkin-Huxley neurons has not yet been published. However, there is every reason to believe that they are Turing-complete. Mechanisms fail to be Turing-complete when there is something they cannot do; this usually stems from inescapable regularity or simplicity which constrains their range of possible behavior. So, if a simplified version of a mechanism is Turing-complete, then the whole thing is usually Turing-complete as well.

In the case of Hodgkin-Huxley neural networks, many close simplifications have been proved to be Turing-complete. Recurrent neural nets with sigmoidal transfer functions are Turing-complete (Kilian and Siegelmann '93). McCulloch-Pitts neural nets (thresholding neurons) are Turing-complete\footnote{with the addition of infinite memory; Garzon and Franklin '90}. A model which is far closer to Hodgkin-Huxley is Wolfgang Maass's version of a Spiking Neural Net (SNN) model (Maass et al. '04).

The SNN model

The SNN model is based on a first-order "spike response model" of individual neurons. Briefly, in this model, each neuron keeps track of a "potential" $P$ (analogous to membrane potential). If the potential exceeds a threshold $\Theta$, the neuron spikes. The threshold can vary as a function of the time elapsed since the previous spike (which occurred at time $t_{prev}$):

\begin{eqnarray} \textrm{there is a spike at time $t$ \emph{iff} } P(t) > \Theta(t - t_{prev}) \end{eqnarray}

The synapses can be of two types, excitatory and inhibitory. The excitatory synapses share a stereotyped "response function" which gives the shape of an EPSP over time; similarly for the inhibitory synapses and IPSPs. The neuron's potential is the weighted sum of the responses of its synapses, that is, a linear combination of incoming EPSPs and IPSPs:

\begin{eqnarray} P(t) = \sum_{i \textrm{ is an incoming synapse}} \; \sum_{s \textrm{ is a previous spike}} w_i \, e_i(t - t_s) \end{eqnarray}

(where $e_i()$ is the response function of synapse $i$, $w_i$ is its weight, and $t_s$ is the time of spike $s$)

Temporal integration is possible because past spikes affect future potential via their temporally-extended response functions.
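
Here is a small numerical sketch of these two equations in Python (the particular response and threshold functions below are arbitrary stand-ins chosen for illustration; Maass's theorems impose their own conditions on these functions):

\begin{verbatim}
import math

def epsp(dt, tau=5.0):
    # stand-in stereotyped response function e_i(): an alpha-shaped PSP
    return (dt / tau) * math.exp(1.0 - dt / tau) if dt > 0 else 0.0

def threshold(dt_since_spike, theta0=1.0, tau_ref=2.0):
    # stand-in threshold Theta(t - t_prev): very high right after a spike
    # (refractoriness), relaxing back to the baseline value theta0
    return theta0 + 10.0 * math.exp(-dt_since_spike / tau_ref)

def simulate(synapses, t_end=50.0, dt=0.1):
    # synapses: list of (weight, [presynaptic spike times]);
    # inhibitory synapses simply get negative weights in this sketch
    out_spikes, t_prev, t = [], -1e9, 0.0
    while t < t_end:
        # P(t): weighted sum of response functions over all past input spikes
        P = sum(w * epsp(t - ts) for w, spikes in synapses for ts in spikes)
        if P > threshold(t - t_prev):    # spike iff P(t) > Theta(t - t_prev)
            out_spikes.append(round(t, 1))
            t_prev = t
        t += dt
    return out_spikes

# two excitatory synapses and one inhibitory synapse
print(simulate([(0.8, [5.0, 6.0]), (0.7, [5.5]), (-0.5, [20.0])]))
\end{verbatim}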

This model is Turing-complete\footnote{with the addition of a few restrictions on the form of the PSP and threshold functions; Maass '96}. Even in the presence of (possibly systematic) noise and stochastic firing, it can simulate any finite automaton (and hence would be Turing-complete with the addition of infinite memory\footnote{Don't be fooled, this is pretty much as good as the noiseless, non-stochastic case; the deterministic case provides itself with infinite memory because the spike times have arbitrary precision}) (Maass et al. '04).

The spike response model and Hodgkin-Huxley

How does the first-order spike response model compare to the Hodgkin-Huxley equations? It doesn't model how the shape of a spike comes about, nor does it model how biophysical entities like ions and ion channels relate to neural behavior. However, it does attempt to capture how spike times are determined from inputs, arguably the most important aspect for computational purposes. The SNN model puts some constraints on the behavior of synapses and limits dendritic computation; but this is outside the scope of Hodgkin-Huxley, so the two models don't conflict there. The SNN model allows the threshold to depend almost arbitrarily on the time since the last spike, so with an appropriate threshold function it can model afterhyperpolarization and absolute and relative refractory periods.

What it can't do, though, is model nonlinear interactions, either between synaptic inputs or between nearby spikes. However, there is empirical evidence that it can capture the behavior of the Hodgkin-Huxley equations quite well. Kistler et al. ('97) fit such a model to mimic the Hodgkin-Huxley equations. They test the fitted spike response model on a spike train generated by the Hodgkin-Huxley model with a stochastic input current, and find that it predicts almost 90% of the spikes correctly.

Do SNN computability results apply to Hodgkin-Huxley?

Because the behavior of the two models is so similar, it is very likely that the computability theorems on SNNs will also apply to Hodgkin-Huxley neurons with small modifications, although a formal proof is still necessary. Because the theorems on SNNs can tolerate a bounded amount of arbitrary, possibly systematic noise, it is even possible that most or all of the differences between the spike response model and Hodgkin-Huxley can be modeled as systematic noise and then immediately disregarded.

In addition, Maass and Natschlager have built GENESIS models in which they take network architectures designed to use spike response model neurons, and substitute Hodgkin-Huxley neurons instead. They find that the networks still compute what they were designed to compute (Maass and Natschlager, "Networks of spiking neurons can emulate arbitrary Hopfield nets in temporal coding").

So, the spike response model is very close in behavior to the Hodgkin-Huxley model. SNNs are Turing-complete, and even in the presence of noise they can simulate finite automata. Where the Hodgkin-Huxley model does differ from SNNs, it is more complex rather than simpler; so it is likely that neural nets of Hodgkin-Huxley neurons are also Turing-complete\footnote{with the addition of infinite memory}.

Sensory and motor functions

Within the Turing machine paradigm, sensory and motor functions are really just roles assigned to certain computations. Sensing means computation which starts with an input signal from external sensory transduction machinery, and motor operation means translating an internal signal into a desired output signal.

In the context of the SNN computability theorems, sensing might correspond to the recoding from some arbitrary neural code into the temporal coding scheme used in the theorems, and motor output might correspond to readout from that scheme into another arbitrary neural code. Maass has shown that SNNs are able to translate between almost arbitrary\footnote{He requires only that the target coding scheme be related to his coding scheme by a continuous function (Maass, "Fast Sigmoidal Networks Via Spiking Neurons").} neural codes and the codes used in the SNN computability theorems.
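
As a purely illustrative sketch of what such a recoding could look like (this is not the construction Maass actually uses), an analog value can be written into, and read back out of, the latency of a single spike relative to a reference time:

\begin{verbatim}
# Toy "temporal coding": encode a value in [0, 1] as the latency of a
# single spike after a reference time, and decode it back.
T_WINDOW = 10.0   # coding window in ms (an arbitrary choice)

def encode(x, t_ref=0.0):
    # sensory side: map an analog input to a spike time
    return t_ref + x * T_WINDOW

def decode(t_spike, t_ref=0.0):
    # motor/readout side: map a spike time back to an analog value
    return (t_spike - t_ref) / T_WINDOW

print(decode(encode(0.37)))   # ~0.37, up to spike-timing precision
\end{verbatim}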

So, why the zoo of neuron types?

The criterion for Turing-completeness depends only on what can be computed, not upon how much time and memory it takes to do so. In general, the availability of more complex computing elements can greatly reduce the cost of computation; for instance, there are computations whose cost grows superlinearly with the size of the problem instance in feedforward threshold architectures, but which have constant cost in SNNs. One would expect that by "upgrading" to a more diverse set of neurons, a brain may be able to dramatically decrease the time and memory cost of computation.

\newpage \emph{Please note that we are only permitted to have 5 citations}

M. Garzon and S. Franklin. Neural Computability. Chapter 6 in Progress in Neural Networks (O. Omidvar, ed.), Ablex Publishing Co., Vol. 1 (1990), 127-145. ISBN 0-89391-610-2.

J. Kilian and H. Siegelmann. On the power of sigmoid neural networks. In Proceedings of the Sixth ACM Workshop on Computational Learning Theory (1993), pages 137-143. ACM Press. Available online at
http://external.nj.nec.com/homepages/joe/web-papers.html.

W. M. Kistler, W. Gerstner, and J. L. van Hemmen. Reduction of the Hodgkin-Huxley Equations to a Single-Variable Threshold Model. Neural Computation, 9(5): 1015-1045 (1997).

W. Maass. Lower bounds for the computational power of networks of spiking neurons. Neural Computation, 8(1):1-40, 1996. Available online from
http://www.igi.tugraz.at/maass/publications.html.

W. Maass, T. Natschlager, and H. Markram. On the computational power of circuits of spiking neurons. J. of Physiology (Paris), 2004, in press. Available online from
http://www.igi.tugraz.at/maass/publications.html.

\end{document}