notes-cog-ai-onGpt

on GPT-3:

I can see how this sort of thing might be a useful component of an AGI. One function could be generating large numbers of training examples for other subcomponents, along the lines of chess-AI self-play; this could even be involved in "bootstrapping" another subcomponent, from something that starts out with a mere preference for organizing things, into a full-blown reasoning agent. Another function could be as an 'intuition', generating a few mostly-right starting-point solutions that a reasoning subcomponent could then choose among and fix up.
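The second idea, an intuition proposing candidates and a reasoner selecting and repairing one, can be sketched as a toy loop. Everything below is a hypothetical placeholder on a trivial numeric "problem", not anything from GPT-3 or a real system:

```python
import random

# Toy sketch: a cheap "intuition" proposes several mostly-right guesses;
# a more deliberate "reasoner" scores them, picks the best, and repairs it.
# All names and functions here are illustrative assumptions.

def intuition_propose(target, n=5):
    """Stand-in for a generative model: emit noisy guesses near the answer."""
    return [target + random.randint(-3, 3) for _ in range(n)]

def reasoner_score(target, candidate):
    """Stand-in for deliberate evaluation: negative distance from the answer."""
    return -abs(target - candidate)

def reasoner_fix(target, candidate):
    """Stand-in for repair: nudge the chosen candidate one step closer."""
    if candidate < target:
        return candidate + 1
    if candidate > target:
        return candidate - 1
    return candidate

def solve(target):
    # Intuition proposes; reasoner chooses among the proposals...
    candidates = intuition_propose(target)
    best = max(candidates, key=lambda c: reasoner_score(target, c))
    # ...then fixes up the chosen starting point until it is correct.
    while best != target:
        best = reasoner_fix(target, best)
    return best

print(solve(42))  # converges to 42 in this toy setting
```

The point of the sketch is the division of labor: proposal is cheap and parallel, while selection and repair are sequential and targeted, so the reasoner never has to search the whole solution space from scratch.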

These things seem to have about the level of coherence of dreams. Which makes me conjecture: perhaps the mechanism that "directs" dreams serves a similar function to the above?

---