notes-cog-ai-aiMisc

This MIT Technology Review piece analyzes 16,625 AI papers posted on arXiv to detect upcoming trends:

https://www.technologyreview.com/s/612768/we-analyzed-16625-papers-to-figure-out-where-ai-is-headed-next/

---

"Relational inductive biases, deep learning, and graph networks," posted on the arXiv pre-print service, is authored by Peter W. Battaglia of Google's DeepMind? unit, along with colleagues from Google Brain, MIT, and the University of Edinburgh [www.zdnet.com/google-amp/article/google-ponders-the-shortcomings-of-machine-learning/]

---

theory of mind study: [1]

---

Human cognition probably has the property that it is hard to separate 'levels' of emulation. For example, 'method' acting probably can actually affect your mood; it's probably hard for most people, in most situations, to continually hype themselves up without starting to believe their own hype at least a little; and you can get really into a story about a fictional character and then feel sad about them.

This can probably be modeled as a system of cognitive 'variables' connected to one another in a 'spreading activation' fashion; activating some of these variables can be seen as engaging computational 'resources' that can be used to predict or model various things. Crucially, these 'variables' are not lexically scoped like in some man-made programming languages, but are 'globals', which is why the levels of emulation bleed into each other.
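
A minimal sketch of that idea (a toy formulation of my own, not from any particular source; all names are made up): cognitive 'variables' are nodes in a graph, activation leaks along weighted edges at each step, and every node is globally visible, so a 'pretend' variable cannot be sandboxed away from a 'real' one.

```python
from collections import defaultdict

class SpreadingActivationNet:
    def __init__(self, decay=0.5):
        self.weights = defaultdict(dict)      # weights[src][dst] = edge weight
        self.activation = defaultdict(float)  # one global activation level per variable
        self.decay = decay                    # fraction of activation retained each step

    def connect(self, src, dst, weight):
        self.weights[src][dst] = weight

    def stimulate(self, node, amount):
        self.activation[node] += amount

    def step(self):
        # Each variable passes a share of its activation to its neighbors.
        # There is no scoping: 'simulated' and 'real' variables mix freely.
        incoming = defaultdict(float)
        for src, edges in self.weights.items():
            for dst, w in edges.items():
                incoming[dst] += self.activation[src] * w
        for node in set(self.activation) | set(incoming):
            self.activation[node] = self.activation[node] * self.decay + incoming[node]

# 'Method acting' example: activating a pretend-sadness variable bleeds
# into actual mood, because both live in the same global network.
net = SpreadingActivationNet()
net.connect("pretend_sadness", "mood_sad", weight=0.3)
net.stimulate("pretend_sadness", 1.0)
for _ in range(3):
    net.step()
print(net.activation["mood_sad"])  # nonzero: the emulation leaked into the 'real' mood
```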

This can be combined with the theory that the brain mixes 'prediction' with 'will': the mechanism for activating a planning routine to move your arm to pick up a glass of water is similar to the mechanism for predicting what your arm is about to do.
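
A minimal sketch of that prediction/will overlap, under the same toy assumptions (every function and variable name here is hypothetical): a single forward model is used both to predict the arm's trajectory under an assumed command and, with the command clamped to a goal, to generate the movement plan itself.

```python
def forward_model(arm_pos, command, step=0.25):
    """Shared machinery: given a motor command, return the next arm position."""
    return arm_pos + step * (command - arm_pos)

def predict(arm_pos, expected_command, n_steps=4):
    """Prediction: roll the forward model forward under an assumed command."""
    for _ in range(n_steps):
        arm_pos = forward_model(arm_pos, expected_command)
    return arm_pos

def will(arm_pos, goal, n_steps=4):
    """'Will': the same rollout, but with the command clamped to the goal.
    The trajectory produced *is* the plan; prediction and control coincide."""
    trajectory = []
    for _ in range(n_steps):
        arm_pos = forward_model(arm_pos, goal)
        trajectory.append(arm_pos)
    return trajectory

print(predict(arm_pos=0.0, expected_command=1.0))  # where I expect my arm to go
print(will(arm_pos=0.0, goal=1.0))                 # the plan that takes it there
```

Because both routines run through the same forward_model, a mind built this way could not simulate its own arm without partly engaging the machinery that moves it, which is the point the next note builds on.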

This model perhaps has implications for some theories of consciousness. If a non-conscious mind with this sort of architecture were to run a simulation trying to predict aspects of its own mind state, it would be hard to do so without the simulation exerting some measure of control over the mind; so this could relate to the theory that consciousness is bound up with a self-model. The model also predicts some amount of unity of consciousness: if a second simulation were run on a mind already predicting itself, it would have to model the first prediction mechanism along with the other 'ordinary' aspects of the mind, and if prediction and will are intertwined, then the two simulations would become intertwined as well, perhaps to the extent that they can be considered one.

---

https://www.bibsonomy.org/user/bshanks/kr https://www.bibsonomy.org/user/bshanks/ai

---