tobesorted-work-opinions-ai

First, I think that it is most productive to let each researcher work on what they choose, without pressure to work on what everyone else thinks is best. So my opinions should never be construed to mean that I think other people should drop what they like to do and research what I like instead. Nor do I mean to say that anyone is wasting their time; it's hard to predict which research will be most valuable in the end. Almost all researchers are highly trained, smart people who are experts in their field, so even when only a single researcher believes in some particular line of inquiry, that line probably has a lot going for it.

But here's what excites me.

I think that the field of A.I. forked a decade or two ago into specialized areas in order to break the hard problem of building a human-level intelligence into easier subproblems (veterans will see where I'm going with this :) ). Since then, there has been substantial progress in many of the sub-areas; for example, we now have supervised learning down pat for many of the simplest cases. Of course, many of the original sub-problems have not been solved, and along the way, the subfields have discovered some new sub-questions that must eventually be answered.

However, it seems that we could now go back to the original problem and apply and piece together what we've learned so far. I doubt we'll get anywhere near completing a human-level A.I. this time around, but the effort itself will be valuable (and there will probably be industrially useful bonus applications, too).

In particular, there are two things we should do:

I'm purposefully phrasing this to highlight the practical problem with this line of research: most of the time, immediate "results" will not be forthcoming, so it will be hard to get published. Papers with generalist thinking will be competing against specialized papers that show concrete results, and cognitive-architecture publications that are largely rehashings of existing ideas will be competing against publications of novel ideas.

Yet I believe that in the long term, such research will be valuable. We must not succumb to local minima in our research programmes.

In order to overcome this obstacle, I believe it is necessary to build a community of scholars who support the value of pursuing human-level A.I. We need a community of people who explicitly recognize the place of generalist thinking and of putting together existing components into a novel framework, as well as the value of continuing research into the specialized subdisciplines of A.I.