an autoassociative memory structure in which the pattern being associated is an 'animation', similar to an animated gif. The animation is structured as a network of nodes connected to each other by arcs. Each node has a dynamic nature, and together they change and interact in one or more repetitive patterns (possibly the network is just a computational structure that can do lots of things, rather than being capable of only a single execution path/animation).

So, for example, one pattern might be a network of four nodes connected together in some topology, where the first node is blinking on and off, the second node is blinking on, on, on, off, etc.
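a minimal sketch of that four-node example, assuming each node's dynamics can be written as a short repeating on/off pattern (the names Node, Animation, and the particular patterns are illustrative, not from the notes):

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    pattern: list  # repeating on/off pattern, e.g. [1, 0] = blink on, off

    def state_at(self, t: int) -> int:
        return self.pattern[t % len(self.pattern)]

@dataclass
class Animation:
    nodes: list   # the nodes of the network
    arcs: list    # (i, j) index pairs: which nodes are connected

    def frame(self, t: int):
        """The whole network's state at time t, one frame of the animation."""
        return [n.state_at(t) for n in self.nodes]

# the example above: first node blinks on/off, second goes on, on, on, off, etc.
anim = Animation(
    nodes=[
        Node("a", [1, 0]),
        Node("b", [1, 1, 1, 0]),
        Node("c", [0, 1]),
        Node("d", [1, 1, 0, 0]),
    ],
    arcs=[(0, 1), (1, 2), (2, 3), (3, 0)],
)
print(anim.frame(0))  # [1, 1, 0, 1]
```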

so, the animation pattern as a whole is sort of like a Rubino machine, in that it is a memory that stores signals, but at a higher level

when you do an autoassociative query, your brain tries to find other networks whose nodes do similar things at similar times in the animation.
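one toy way such a query could work, assuming a pattern is just a list of per-node repeating on/off sequences and 'similar things at similar times' means agreement of node states frame by frame (all names and the scoring rule are assumptions):

```python
def trace(pattern, steps):
    """Unroll a pattern (list of per-node repeating on/off lists) over time."""
    return [[p[t % len(p)] for p in pattern] for t in range(steps)]

def similarity(a, b, steps=8):
    """Fraction of (node, time) cells on which two animations agree."""
    ta, tb = trace(a, steps), trace(b, steps)
    agree = sum(x == y for fa, fb in zip(ta, tb) for x, y in zip(fa, fb))
    return agree / (steps * len(a))

def query(memory, probe, steps=8):
    """Autoassociative query: the stored animation whose dynamics best match."""
    return max(memory, key=lambda m: similarity(m, probe, steps))

memory = [
    [[1, 0], [1, 1, 1, 0]],   # blinker plus a mostly-on node
    [[0, 0], [0, 0]],         # everything off
]
probe = [[1, 0], [1, 1, 0, 0]]  # a noisy version of the first pattern
best = query(memory, probe)     # retrieves the blinker pattern
```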

The links in the network represent associations between concepts.

These may be formed in at least two ways. (1) when you think about one of these structures/patterns/'animations', it is placed in the global workspace and broadcast widely throughout the brain. Other nodes and animations which have similar dynamics then recognize the similarity and link themselves to the animation being broadcast. (2) a similar process, but unconscious, running for a week or so after learning or thinking hard about the animation. So, to some extent the autoassociative queries can be precomputed.
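a sketch of mechanism (1), assuming each listener is itself a stored animation that compares its own dynamics to the broadcast and links itself when similarity crosses some threshold (the threshold value and all names are made up for illustration):

```python
def unroll(pattern, steps=8):
    """Frames of a pattern given as per-node repeating on/off lists."""
    return [[p[t % len(p)] for p in pattern] for t in range(steps)]

def overlap(a, b, steps=8):
    """Fraction of (node, time) cells on which two animations agree."""
    ua, ub = unroll(a, steps), unroll(b, steps)
    hits = sum(x == y for fa, fb in zip(ua, ub) for x, y in zip(fa, fb))
    return hits / (steps * len(a))

links = []  # association links formed so far

def broadcast(pattern, listeners, threshold=0.75):
    """Put a pattern in the 'global workspace'; each listener compares its
    own dynamics to it and links itself when they are similar enough."""
    for name, own_pattern in listeners.items():
        if overlap(own_pattern, pattern) >= threshold:
            links.append((name, "broadcast"))

listeners = {
    "blinker-memory": [[1, 0], [1, 1, 1, 0]],
    "silent-memory":  [[0, 0], [0, 0]],
}
broadcast([[1, 0], [1, 1, 0, 0]], listeners)
# only the listener with similar dynamics links itself
```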

how are these associative links between nodes traversed? 'code' running in a thought might use something like 'indirect addressing' to effectively follow pointers (links) from the node it's looking at to other nodes. These links may come in multiple 'flavors', e.g. sometimes you may want to say, 'i want the REFERENT OF this link' rather than always just getting the link target.
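a sketch of what multiple flavors of indirect addressing might look like, assuming nodes are records with named links and an optional referent; the two flavors, node contents, and names are all hypothetical:

```python
# each node: named outgoing links, plus an optional 'referent' slot
nodes = {
    "sentence":    {"links": {"subject": "dog-word"}, "referent": None},
    "dog-word":    {"links": {}, "referent": "dog-concept"},
    "dog-concept": {"links": {}, "referent": None},
}

def follow(node_id, link_name, flavor="target"):
    """Indirect addressing through a link; 'flavor' picks the dereference mode."""
    target = nodes[node_id]["links"][link_name]
    if flavor == "target":
        return target                     # just the link target
    if flavor == "referent-of":
        return nodes[target]["referent"]  # one more level of indirection
    raise ValueError(f"unknown flavor: {flavor}")

# plain traversal gets the word; the REFERENT OF flavor gets the concept
follow("sentence", "subject")                  # -> "dog-word"
follow("sentence", "subject", "referent-of")   # -> "dog-concept"
```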

note to self: see also notes on 'custom multiple flavors of indirect addressing' in jasperBrainNotes1.

the Rubino machine stores continuous periodic functions. But we may also store discrete firing sequences in discrete time, by storing the nodes, which are each little Turing-ish machines (mb discretize time for the 'continuous' fns too? first byte (well, arbitrarily large int) is # of time segments in the following representation? hmm)
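the length-header idea in the parenthetical, sketched directly: a firing sequence is stored as a count of time segments (an arbitrarily large int, not literally one byte) followed by one value per segment:

```python
def encode(sequence):
    """[1, 0, 1, 1] -> [4, 1, 0, 1, 1]: segment count, then the segments."""
    return [len(sequence)] + list(sequence)

def decode(stored):
    """Read the header, then exactly that many segments."""
    n = stored[0]
    return stored[1:1 + n]

assert decode(encode([1, 0, 1, 1])) == [1, 0, 1, 1]
```

the same framing would let 'continuous' functions be discretized too, just with more segments per period.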

perhaps instead of broadcasting the structure itself, one just broadcasts the activities of the nodes. Other parts of the brain receiving this broadcast are then left to re-infer (induce) the causal structure (and hence to guess a new network structure) on their own. This leads to diversity of representation. This provides one way to avoid a 'cortical neural form', that is, a common semantics for signals throughout the cortex. Instead, in this model, parts of the brain don't have to understand each other, they just try to predict each other.

In other words, these broadcasts of patterns have the semantics, "Here's what i am thinking right now. If you are a model with an inferred causal structure that produces data similar to this one, then please link yourself/your model to me."
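a toy version of that semantics, assuming the broadcast is just a stream of node-state frames and the receiver's 'inferred causal structure' is a trivial fixed-period model; the period-4 model, stream, and 0.9 cutoff are all made up for illustration:

```python
def predict_with_period(history, period):
    """A toy receiver model: assume the stream repeats with a fixed period."""
    return history[-period]

def prediction_score(stream, period):
    """Fraction of frames the receiver's model predicts correctly."""
    hits = 0
    for t in range(period, len(stream)):
        if predict_with_period(stream[:t], period) == stream[t]:
            hits += 1
    return hits / (len(stream) - period)

# activities of a 2-node animation, broadcast frame by frame (no structure sent)
stream = [(1, 1), (0, 1), (1, 1), (0, 0),
          (1, 1), (0, 1), (1, 1), (0, 0)]

# a receiver whose internal model assumes period 4 predicts this stream
# perfectly, so per the semantics above it would link itself to the sender
links_to_sender = prediction_score(stream, period=4) >= 0.9
```

note the receiver never sees the sender's network, only its data; a period-2 receiver scores much worse on the same stream and would not link, which is the 'predict, don't understand' point.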

such predictive models could also be composed into a hierarchy, with more general concepts further up the hierarchy.
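a minimal sketch of such a hierarchy, assuming each model is a node with more specific child models below it; the concept names and tree are illustrative only:

```python
class Model:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)  # more specific models sit below

    def ancestors_of(self, target, path=()):
        """Chain of increasingly general concepts down to target, if present."""
        if self.name == target:
            return list(path) + [self.name]
        for child in self.children:
            found = child.ancestors_of(target, path + (self.name,))
            if found:
                return found
        return None

hierarchy = Model("pattern", [
    Model("periodic", [Model("blinker"), Model("oscillator")]),
    Model("aperiodic"),
])
# hierarchy.ancestors_of("blinker") -> ["pattern", "periodic", "blinker"]
```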

goal-seeking can be imposed on this purely predictive architecture by getting it to prefer to imagine animations which hypothetically result in pleasurable outcomes. This will cause a form of 'shaping' training. This would explain why we like even to think about and imagine pleasurable outcomes.
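one simple way to get that preference, assuming each candidate imagined animation comes with a predicted-pleasure score and the system samples what to imagine in proportion to it; the candidates and their scores are invented for illustration:

```python
import random

# candidate imagined 'animations' and the (made-up) pleasure of their
# hypothetical outcomes
candidates = {"eat-cake": 0.9, "stub-toe": 0.05, "read-book": 0.6}

def imagine(candidates, rng=random):
    """Prefer to imagine animations with more pleasurable predicted outcomes."""
    names = list(candidates)
    weights = [candidates[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
sampled = [imagine(candidates, rng) for _ in range(1000)]
# pleasant imaginings dominate unpleasant ones, a crude form of 'shaping'
```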

if each module must listen to a broadcast of all attribute values of each item in working memory, then we can opportunistically find creative connections between things.
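a sketch of that opportunism, assuming items in working memory broadcast all their attribute values and a listening module simply indexes values to notice when two unrelated items share one; the items and attributes are invented examples:

```python
from collections import defaultdict

# everything in working memory, with all attribute values broadcast
working_memory = {
    "violin":  {"material": "wood", "sound": "string"},
    "canoe":   {"material": "wood", "use": "travel"},
    "trumpet": {"material": "brass", "sound": "wind"},
}

def find_connections(items):
    """Index broadcast values; any value shared by two items is an
    opportunistically discovered (possibly creative) connection."""
    index = defaultdict(set)  # value -> items broadcasting it
    for item, attrs in items.items():
        for value in attrs.values():
            index[value].add(item)
    return {v: sorted(ids) for v, ids in index.items() if len(ids) > 1}

connections = find_connections(working_memory)
# {'wood': ['canoe', 'violin']} -- violin and canoe unexpectedly linked
```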

see also [1] for more.

todo: make my neuro paper based on this stuff