notes-science-neuro-introductionToReverseEngineeringTheBrain

(this is a book that will probably never get written, at least not by me...)

Preface

What is the algorithm of thought?

This book does not answer that question, but it provides introductory notes on neuroscience that may be useful to novice researchers attempting to work towards it by the method of 'reverse engineering'. This text differs from many other introductions to neuroscience in its lack of coverage of various "implementation details".

We do not know exactly what thought is. It is not yet clear whether thought is an extraordinary process, possibly intimately connected to language and present only in humans (and perhaps a few other 'higher' species), that is very different from other neural computation, or whether it is instead a slight elaboration of patterns of neural information processing widespread among mammals, avians, and perhaps other animals. In fact, it seems likely that we will not be able to define what thought is, or determine whether it is distinct from other cognition, until we have a much better understanding of other cognition. Therefore we must study cognition in general, including such processes as perception, attention, planning, action selection, belief inference, etc.

In fact, since thought is a complicated, hard-to-define thing that may be found only in humans, and since we must study other cognition first anyway before we can even determine whether thought is anything different, in this book we will not directly study human-specific forms of thought at all; rather, we will study cognition in general.

In general, I am trying to write the book that I wish I had had when I began to study this area.

Which 'implementation details' to neglect during reverse engineering?

The study of neuroscience modulo low-level implementation details is usually known as 'cognitive neuroscience'. This is a difficult path to take, however, because we know so little about how thought works. An alternate method is reverse engineering, which requires knowledge of some, but not all, low-level implementation details.

Imagine that you have a computer system such as the Kinect, which has a camera and identifies the positions of parts of your body from the camera data. It would be difficult (although perhaps possible) to discover the algorithm that it uses merely by playing Kinect games. This task would resemble coming up with most of the algorithm entirely on one's own and then testing whether its properties match the observable properties of the Kinect's behavior. To accomplish this, you would have to be almost as clever as the people who created the Kinect. This is analogous to cognitive neuroscience.

If you are not that clever, there is another way, the reverse engineering way. Imagine now that, instead of coming up with the algorithm de novo, you attempt to reverse engineer the Kinect. You open up your Kinect and look at and probe its circuits. If you can learn enough about how electronics work, and if you focus on the organizing principles of the Kinect, you discover concepts such as the CPU, memory banks, and assembly language instruction sets. By experimentally feeding the Kinect's CPU various artificial assembly language instructions, you may be able to discover the semantics of the assembly language that it uses. After that, you may eventually be able to devise a probe to read the Kinect's memory and find the assembly language instructions that accomplish the calculation of body position. At this point you can read those instructions and come to understand the abstract algorithm that they implement.
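To make the probing step concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the opcodes and their behavior have nothing to do with real Kinect hardware); the point is only the method of feeding a black box controlled inputs and checking the observed behavior against candidate hypotheses about what each unknown instruction does.

# Toy illustration of hypothesis-driven probing of a black box. The 'CPU'
# and its opcodes below are entirely fictional; only the method matters.

def black_box_execute(opcode, a, b):
    """Stand-in for hardware whose internals we pretend not to know."""
    if opcode == 0x01:
        return a + b
    elif opcode == 0x02:
        return a - b
    elif opcode == 0x03:
        return b            # behaves like a 'move'
    else:
        raise ValueError("unknown opcode")

def infer_semantics(opcode, probes):
    """Compare observed behavior against candidate hypotheses."""
    hypotheses = {
        "ADD": lambda a, b: a + b,
        "SUB": lambda a, b: a - b,
        "MOV": lambda a, b: b,
    }
    for name, f in hypotheses.items():
        if all(black_box_execute(opcode, a, b) == f(a, b) for a, b in probes):
            return name
    return "unidentified"

probes = [(0, 0), (1, 0), (0, 1), (5, 3), (-2, 7)]
for opcode in (0x01, 0x02, 0x03):
    print(hex(opcode), "behaves like", infer_semantics(opcode, probes))

In the brain the analogous step is much harder, since we cannot enumerate the candidate 'opcodes' in advance, but the logic of probing with controlled inputs and testing hypotheses against the observed responses is the same.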

In terms of understanding the algorithm of thought, the task of coming up with the algorithm de novo is the field of "artificial intelligence", which, while a worthwhile endeavor in this author's opinion, is not the topic of this book. The other task, reverse engineering the brain, is the path we take here. It could be that humans are simply too stupid to solve the task of artificial intelligence anytime soon, and that we will have more luck reverse engineering the brain.

Reverse engineering an algorithm running on a completely unknown architecture seems to require one to begin by immersing oneself in the lowest-level implementation details of the system, but then to use these to attempt to abstract away from 'implementation details'. There are three types of such details.

The first type of implementation detail is crucial to the issue at hand, but is subsumed under an abstraction as more understanding is gained. So, for example, with a Kinect, at the beginning we must worry about what a resistor is, what a transistor is, what they are made of, and how they work, but once we have that figured out we can abstract away from the materials that the circuit components are made of and consider only the circuit diagram. Similarly, once we understand CPUs, registers, memory, and assembly language, and devise experimental methods to probe them, we can abstract away from the circuit diagram and consider only the question of deciphering the assembly language, perhaps dropping back to the circuit level of analysis from time to time when a mistake has been made. If the entire process took many generations, one might imagine that textbooks written along the way would cover concepts at a progressively higher level, paying little attention to concepts that previous generations had focused on but since mastered.

The second form of implementation detail is things which are discovered to be tangential to the issue at hand. The Kinect has other functions besides body position calculation; it also creates a graphical display, and executes games. At the beginning, it will be unclear which circuitry is specific to graphical display, and which is involved in body position calculation, so all of it must be studied; but after more is learned, the circuitry specific to graphical display can be identified and neglected.

The third form of implementation detail is things which are not strictly necessary for the Kinect's function, but which make it work more efficiently. For example, the study of the memory cache will doubtless consume much effort initially, but in the final presentation of the algorithm it will be abstracted away.

What is and is not an implementation detail is in many cases a subjective decision, because, until we completely decipher the algorithm of thought, there is always the danger that some abstraction which seems to remove the need for further consideration of its component parts may in fact be incomplete or incorrect. In addition, it may be useful at times to study tangential topics; for example, perhaps studying the graphics controller of the Kinect will lead to some insight about how transistors work, or how CPUs work, even though in theory it could be neglected entirely if the only goal is to learn the body position calculation algorithm.

So, going back to the brain, while later generations may be able to write a book that describes the algorithm of thought in pure form, on the way there we will have to learn about some implementation details. However, we strive to avoid spending time on implementation details whenever we can, either because we feel that, by building on prior work involving those details, we are now able to work at a higher level of abstraction, or because we suspect they are providing features other than the algorithm of thought, or because we suspect that they are helpful but not strictly necessary for thought. The decision of which implementation details to include and which to leave out is somewhat subjective.

What this book is not

This book is not an attempt to introduce all of neuroscience, but rather only those parts of neuroscience which are most relevant to the discovery of the algorithm of thought. The general principle is that we will attempt not to cover some of the "implementation details" of the brain; which details count as such is the author's subjective and fallible opinion. In writing this book, rather than being eager to be comprehensive, we are eager to leave out as much as possible (to minimize the time required from the reader).

To be more specific, here are topics which are frequently covered in neuroscience textbooks but which will not be given much attention in this book:

Those topics are left out either because they are felt to be part of the 'biological substrate' of neural computation, or because they are concerned with aspects of biology other than cognition.

Another set of things that we will exclude are topics relating to structures that are helpful but not necessary for thought. There are a number of structures and processes which, if destroyed or disrupted, leave individuals acting strangely, but still capable of cognition. Our standard for 'cognition' may seem surprisingly low to some readers. If a lesion leaves an individual with severe intellectual impairment, difficulty making controlled movements, a loss of memory, and inappropriate behavior, yet still able to walk across a room and pick up an object, we would say that cognition has been retained and that whatever was lesioned must have been inessential.

By this criterion, we will not cover:

We will also leave out topics relating solely to particular input or output (sensory or motor) modalities or particular abilities. It is true that a person without any sensory or motor modalities may be incapable of (experimentally verifiable) cognition, but a person who is merely, say, deaf and mute can still demonstrate cognition; hence neither the ability to hear nor the ability to speak is essential for cognition.

By this criterion, we will not cover:

Despite the fact that learning is an interesting topic to many researchers in cognition, we will also omit structures and processes responsible for long-term learning. This is because it seems to be possible to have an organism that can no longer learn certain types of things, but that can still demonstrate cognition (for example, the human patient H.M. lost his hippocampus and most adjacent structures including most of entorhinal cortex, and thereafter was unable to form new explicit episodic memories, yet was still able to behave normally in other ways).

Note that this does not necessarily entitle us to disregard all types of plasticity on a synaptic level, as some of these may be essential for even short-term information processing.

By this criterion, we will not cover:

Since general cognition, as we use the term (stuff like perception, attention, planning, action selection, belief inference), is present in many non-human species, anything found in only one or some of these species must not be essential to cognition.

By this criterion, we will not cover:

And we will emphasize features found across biological classes, for example, features common to avians and mammals, at the expense of mammal-specific features.

Finally, we will not cover experimental methods used in studying the brain (e.g. the properties of Golgi staining; how MRIs work), as these change rapidly, in comparison to the object of our study, which evolves more slowly (over hundreds of thousands of years).

For the omitted topics, in many cases we will still mention the existence of the omitted organs and processes, and occasionally we will give a few details about them if we can do so briefly; e.g. even if you are not interested in vision, you should know what someone means if they say that such-and-such a neuron projects to the superior colliculus, and you should know that it has a retinotopic map and that it is relevant to visual (and perhaps nonvisual) attention. We'll also try to give brief summaries of the putative functions of omitted parts of the brain, so as to give a flavor for what sort of things parts of the brain are thought to do.

What is left?

So, what WILL we cover? We'll cover:

Note that we are covering systems implicated in sleep/wakefulness. If a part can be shown to be completely unnecessary for awake behavior then we will omit it, but until then we cannot rule out that complete destruction of such a part would leave the animal unable to remain awake.


cells

Gaussian fields?

how spikes work

reduced models (see the sketch after this 'cells' list)

cell synapse scaling laws

linear summation but there are exceptions

spines: avg input over time

types of plasticity

types of transmitters and receptors
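As a placeholder for the 'reduced models' and 'linear summation' items above, here is a minimal sketch of a leaky integrate-and-fire neuron in Python, with presynaptic spikes summed linearly into the membrane potential. The parameter values are arbitrary round numbers chosen for illustration, not physiological measurements.

# Minimal leaky integrate-and-fire sketch (assumed parameter values, for
# illustration only). Presynaptic spikes are summed linearly into the
# membrane potential, which leaks back toward rest between inputs.
import random

def simulate_lif(input_weights, spike_trains, dt=1.0, steps=200,
                 tau=20.0, v_rest=-70.0, v_thresh=-54.0, v_reset=-70.0):
    """Return the time steps at which the model neuron spikes."""
    v = v_rest
    out_spikes = []
    for t in range(steps):
        # linear summation of this time step's synaptic input
        syn = sum(w for w, train in zip(input_weights, spike_trains) if train[t])
        # leaky integration (forward Euler step)
        v += dt * (-(v - v_rest) / tau) + syn
        if v >= v_thresh:          # threshold crossed: emit a spike and reset
            out_spikes.append(t)
            v = v_reset
    return out_spikes

# two excitatory inputs, each firing randomly on about 20% of time steps
random.seed(0)
trains = [[random.random() < 0.2 for _ in range(200)] for _ in range(2)]
print(simulate_lif(input_weights=[2.5, 2.0], spike_trains=trains))

Such reduced models deliberately discard most of the biophysics; whether that counts as discarding 'implementation detail' or discarding something essential is exactly the kind of judgment call discussed in the preface.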

anatomy

list of anatomical guys, picture of them, location, and a theory of what they do

all the guys listed above

Cortex

cortical layers -- avian disclaimer

hierarchy: fwd: 3 out, 4 in, back: 6 out, 1,5,6 in (?)

thalamocortical scaling law

cell subtypes:

excitatory: all afferents are excitatory?

  pyramidal (spiny): 
   spiny
   soma receive only symmetrical synapses:
   normal: apical dendrite goes up
    vs spiny stellate (layer iv) (spiny)
  da glia

excitatory nonspiny bipolar cell

inhib:

chandelier cell (to pyramidal axon initial segment)
basket cell (to pyramidal cell body)
local plexus cell
inhibitory nonspiny bipolar cell

cortical regions: different # of layers; list of regions, their maps (what sensory/motor maps they have), and what goes wrong if they are lesioned

the character of V1, S1 maps: simple cells, complex cells, Gabor functions, wavelets (see the sketch below); quickly adapting/dynamic (pyramidal) vs. slowly adapting/static (nonpyramidal) cells

Fast Spiking and Regular Spiking neurons

comparative anatomy (review Striedter)
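As a companion to the 'Gabor functions' item above, here is a small Python sketch of a two-dimensional Gabor patch of the kind commonly used as a model of V1 simple-cell receptive fields, together with the usual simple-cell-style response model (a rectified dot product with an image patch). The parameter values are arbitrary and purely illustrative.

# A 2D Gabor function: a sinusoidal grating windowed by a Gaussian envelope,
# commonly used as a model of V1 simple-cell receptive fields.
# Parameter values below are arbitrary choices for illustration.
import numpy as np

def gabor(size=32, wavelength=8.0, theta=0.0, sigma=5.0, phase=0.0):
    """Return a size x size Gabor patch.
    theta: grating orientation (radians); wavelength: grating period (pixels);
    sigma: width of the Gaussian envelope; phase: phase offset of the grating."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    # rotate coordinates so the grating varies along the x' axis
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_rot / wavelength + phase)
    return envelope * carrier

# a simple-cell-like response is often modelled as a rectified dot product
# of the filter with an image patch
patch = gabor(theta=np.pi / 4)
image_patch = np.random.default_rng(0).standard_normal(patch.shape)
response = max(0.0, float(np.sum(patch * image_patch)))
print(patch.shape, response)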

thalamus

cell subtypes: todo da glia

'relay station' hypothesis

nuclei map to cortex and to TRN

90% feedback from cortex from layer 6

cortex layer 5 -> higher order nuclei

comparative anatomy (review Striedter)

basal ganglia

parkinson's

striatal degradation

random cog stuff (organize better)

double dissociation

severed corpus callosum: find-the-word problems work twice as fast

blindsight forced choice

attention models

todo


CategoryBookDraft