
Links and books

Survey Books

other toread

Surfaces and Essences

Gödel, Escher, Bach: An Eternal Golden Braid

I Am a Strange Loop

Robert Rosen's Life Itself and Essays on Life Itself

Creative Analogies

Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought

" Instead of Hofstadter's GEB, read some of his papers, e.g., "Analogy as the Core of Cognition"

But there are others who have focused longer on analogy, e.g., George Lakoff:

"Metaphors we Live by"

"Where Mathematics Come From: How The Embodied Mind Brings Mathematics Into Being":

"Women, Fire, and Dangerous Things" "

Mind's I

zvanness 10 hours ago


I'm not so sure their technology is as futuristic as everyone thinks it is. If I had to take an educated guess, I would say it's some powerful AI that makes their knowledge graph smarter. Currently Google's Knowledge Graph uses more structured data sets and depends on a mechanism like this:

But the real challenge is to make the knowledge graph update in real time and take meaning from something as unstructured as a blog post or an email. And to do something like that requires some really unique AI.

--mjn - I totally agree!


mjn 10 hours ago


Oh, I'm decidedly agnostic on whether it really is futuristic AI. It's just that there is some buzz around it, and some of the people involved in it are definitely legit, and have also involved themselves in the "AGI" community, which leads to such speculation (which they've pretty deliberately cultivated). That doesn't prove they've Solved AI for any sci-fi-ish notion of Solved AI. They could just have some good but in the big picture fairly modest knowledge-graph tech, or they could even have not-that-good knowledge-graph tech with great PR! Hard to say without knowing any details.


nl 2 hours ago


Deep learning is pretty impressive as a supplement to knowledge engineering approaches.

Google's Deep Learning team were the people who developed the algorithm that discovered cats on YouTube (without training). Presumably this team had something that impressed them.

The weakness of knowledge engineering approaches is that they tend to be fragile: they break badly with small holes in recorded knowledge. The IBM Watson team has a great video that showed how the different definitions of "fluid" and "liquid" meant a correct answer would have been missed if evidence collected in the answer-verification phase of the DeepQA pipeline (no relation to Deep Learning) hadn't overridden it.

Edit: Your(?) paper on your(?) relevancy engine is interesting. It seems like an application of skip-grams (which, ironically enough, are heavily used by the DeepQA answer-verification phase mentioned above).
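The skip-grams mentioned here are easy to sketch; below is a generic k-skip-bigram generator (the function name and interface are my own illustration, not from the paper under discussion):

```python
def skip_bigrams(tokens, k):
    """All ordered pairs (t_i, t_j) with at most k tokens skipped between them.

    k = 0 gives ordinary bigrams; a larger k tolerates small gaps, which is
    part of what makes skip-gram features robust to holes in the text."""
    return [(tokens[i], tokens[j])
            for i in range(len(tokens))
            for j in range(i + 1, min(i + 2 + k, len(tokens)))]

# k=1 on a four-word phrase: the three bigrams plus the two one-gap pairs
pairs = skip_bigrams(["insurgents", "killed", "in", "ongoing"], 1)
```

With `k=1` this yields five pairs: each adjacent bigram plus pairs that jump over one intervening word.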


gdahl 6 hours ago


There were a lot of pros at DeepMind. For example: Volodymyr Mnih, Andriy Mnih, Alex Graves, Koray Kavukcuoglu, Karol Gregor, Guillaume Desjardins, David Silver, and a bunch more I am forgetting.

mdeg 6 hours ago


So, Google's list so far:

Am I missing any?


andyjohnson0 2 hours ago


Ray Kurzweil



swalsh 2 hours ago


BRETT robot:




deep dream #deepdream:


video games:


petri nets:

" Places. Every place on the Petri net represents a state in which some condition is true. The seven places in Figure 9 represent seven types of conditions that might be true at some time in the past, present, or future.

Tokens. A token in a place means that the corresponding condition is true at the indexical time now, which is the point in time of the current situation. Instead of representing situations by single nodes, as in finite-state machines, a Petri net represents the current situation by a conjunction of the conditions for all the places that currently contain tokens. In Figure 9, the initial situation is described by a conjunction of two propositions: the assassin has a gun, and the victim is alive.

Events. Every transition represents a type of event, whose precondition is the conjunction of the conditions for its input places and whose postcondition is the conjunction of the conditions for its output places. In the example, the Misfire event has no output conditions. When it occurs, its only effect is to erase a token from the place labeled Firing-pin-struck, thereby disabling the event Gun-fires.

Arcs. The arcs that link places and transitions have the effect of the add and delete lists in STRIPS (Fikes & Nilsson 1971). For any transition, the input arcs correspond to the delete list because each of them erases a token from its input place, thereby causing the corresponding proposition to become false. The output arcs correspond to the add list because each of them adds a token to its output place, thereby asserting the corresponding proposition.

Persistent places. A place that is linked to a transition by both an input and an output arc is persistent because its condition remains true when that transition fires. In the example, the place labeled Assassin-has-gun persists after the assassin loads the gun or pulls the trigger. For all the other places in Figure 9, their preconditions become false after their tokens are used as input to some transition. " ---

some interesting research ideas:


idea for local search (surely this has been done): find a local maximum, do some of the usual stuff (check a simplex, do a little simulated annealing, tabu search, etc.) to see if its basin of attraction seems to be reasonably large; now instead of doing further simulated annealing (which will waste a lot of time re-checking nearby regions), and instead of just doing a random restart at some distant region, make a simplex, and then iteratively extend the length of each axis of the simplex.

Intuitively: if you find yourself on a mountain in fog and you want to find the highest point on the mountain, and you are at a local maximum, then what you might do is go one step North and come back, then one step South and come back, then one step East and back, then one step West and back; then go two steps North and come back, etc.
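The expanding probe described above might be sketched like this (a rough illustration of the idea only; the geometric doubling of the probe radius and all names are my choices, not part of the note):

```python
def expanding_axis_search(f, x, step=1.0, max_radius=16.0):
    """From a local maximum x of f, probe each axis at geometrically
    growing distances; return the first probe that beats f(x), i.e. a
    point presumably in a different, better basin of attraction."""
    best = f(x)
    r = step
    while r <= max_radius:
        for axis in range(len(x)):
            for sign in (1.0, -1.0):
                y = list(x)
                y[axis] += sign * r
                if f(y) > best:
                    return y      # hand this point back to hill-climbing
        r *= 2                    # step farther out, as in the fog analogy
    return None                   # nothing better within max_radius

# Demo: a 1-D landscape with a local peak at 0 and a higher peak near 9.
f = lambda p: max(-p[0] ** 2, 4.0 - (p[0] - 9.0) ** 2)
better = expanding_axis_search(f, [0.0])
```

Starting from the local peak at 0, probes at radius 1, 2, and 4 all look worse, but the probe at radius 8 lands on the slope of the higher peak and is returned.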

(p.s. would be nice to read a recent review of local search techniques, including tabu search, simulated annealing, macrosteps (chunking), ridge search, etc)

some slides but probably not with much 'recent' stuff:

ok this is more what i was looking for; it has some details on 3SAT algorithms, still mb not that recent though; i'd also like to cover e.g. BFGS [1]:


some cognitive biases/heuristics:



example of prediction possibly being fundamental:



" Some tentative steps towards integration already exist, including neurosymbolic modeling (Besold et al., 2017) and recent trend towards systems such as differentiable neural computers (Graves et al., 2016), programming with differentiable interpreters (Bo š njak, Rocktäschel, Naradowsky, & Riedel, 2016), and neural programming with discrete operations (Neelakantan, Le, Abadi, McCallum?, & Amodei, 2016). While none of this work has yet fully scaled towards anything like full-service artificial general intelligence, I have long argued (Marcus, 2001) that more on integrating microprocessor- like operations into neural networks could be extremely valuable "

" Another potential valuable place to look is human cognition (Davis & Marcus, 2015; Lake et al., 2016; Marcus, 2001; Pinker & Prince, 1988). There is no need for machines to literally replicate the human mind, which is, after all, deeply error prone, and far from perfect. But there remain many areas, from natural language understanding to commonsense reasoning, in which humans still retain a clear advantage; learning the mechanisms underlying those human strengths could lead to advances in AI, even the goal is not, and should not be, an exact replica of human brain


A good starting point might be to first try to understand the innate machinery in human minds, as a source of hypotheses about mechanisms that might be valuable in developing artificial intelligences; in a companion article to this one (Marcus, in prep) I summarize a number of possibilities, some drawn from my own earlier work (Marcus, 2001) and others from Elizabeth Spelke’s (Spelke & Kinzler, 2007). Those drawn from my own work focus on how information might be represented and manipulated, such as by symbolic mechanisms for representing variables and distinctions between kinds and individuals from a class; those drawn from Spelke focus on how infants might represent notions such as space, time, and object. A second focal point might be common sense knowledge: how it develops (some might be part of our innate endowment; much of it is learned), how it is represented, and how it is integrated online in the process of our interactions with the real world (Davis & Marcus, 2015). Recent work by Lerer et al. (2016), Watters and colleagues (2017), Tenenbaum and colleagues (Wu, Lu, Kohli, Freeman, & Tenenbaum, 2017), and Davis and myself (Davis, Marcus, & Frazier-Logue, 2017) suggests some competing approaches to how to think about this, within the domain of everyday physical reasoning.

A third focus might be on human understanding of narrative, a notion long ago suggested by Roger Schank and Abelson (1977) and due for a refresh (Marcus, 2014; Kočiský et al., 2017).

" [2]