notes-researchAndApplication

An undesirable pattern that I've seen twice now in competitive social systems is:

1) Reward people for producing something.
2) Radical long-term innovation (i.e. new methods and technology for production) doesn't get done. So, create a separate group of people who are rewarded for thinking big.
3) Now you have two groups of people. One group attracts thinkers, who are rewarded for having the most creative and most fundamental abstract ideas. The other group forces a focus on short-term gains.
4) Problem: neither group sufficiently rewards people for taking the ideas of the "thinkers" and translating them into something that can be widely used. That is, for translating theory into practice.

Why? It's not that no one in either group would like to do that. But:

- The thinkers are rewarded for novel, fundamental abstract ideas, not for refining them into something usable.
- The producers are rewarded for short-term gains, and translation work rarely pays off quickly.
- Adopting an innovation may require structural change.
- The reward captured by whoever translates theory into practice is only a small part of the value created.

The last two subpoints require some explanation.

Structural change means that the division of labor, that is, which entity does what, may have to be reconfigured. For instance, distributed digital distribution over the internet may well be a more efficient way to distribute music than going through large media corporations, but those media corporations have a strong incentive not to go down that path.

Once theory has been reduced to practice, everyone will start using it. This ultimately becomes a lasting benefit for the consumers of whatever is being produced, but only a transient competitive advantage for the group that introduces the innovation. So, the reward seen by the group that reduces theory to practice is only a small part of the total value seen by society as a whole.

Examples

Software: computer science academia and industry

In the field of computer science, all sorts of great ideas for new ways to use computers are developed. These ideas get as far as prototypes, and that's where they stop. It's not that the ideas aren't good or worthwhile. It's that lots of additional effort would be needed to refine the prototypes into practical software. Corporations aren't sufficiently motivated to do so, for the reason I gave above.

Because of the nature of software, in this case there is an additional thing we could do to fix the problem: change the copyright licensing schemes of universities. See OpenSourceAcademia?.

A.I. and neurobiology

Artificial intelligence for scientific discovery. There are many unexplored ways in which artificial intelligence research could be used to make scientific research more efficient. But few seem interested in developing them (at least in my field, neurobiology). Why? The artificial intelligence researchers are mainly rewarded for novel, fundamental advances in A.I., rather than for re-implementing someone else's old idea. The neurobiologists are rewarded for making new discoveries about neurobiology.

Since reputation plays a part in how these communities decide promotions, there are some gains to be had simply for "advancing science for science's sake". The A.I. guys who implement other people's old ideas, and the biologists who take the time to develop new methods, do garner a degree of respect, which does assist their careers. But not as much as they would have gained by using that time to generate papers in top journals.

Potential solutions

I'm not certain. The obvious solution is to create a third group of people, who are rewarded for translating theory into practice.

Academia vs. industry

People tend to think that corporate R&D might fulfill that function, but as I noted above, the reward structure for corporations makes it economically irrational for them to do this.

So perhaps what we need is university departments, funded the same way as the others (grants), which promote based not on novel research but rather on translating theory into practice. The fruits of their labor would not be sold to industry, but rather made freely available to all. These researchers would sort of function as free consultants for industry. In the case of software, they would be open-source programmers.

A.I. vs. neurobiology

Perhaps what we need in this case are departments of "applied A.I.", analogous to applied math. One problem here is that, given the way status works in academia, the people who work in the applied A.I. department would probably have lower status than those in the plain "A.I." department (mainly because it is easier to blow people away, and thereby convince them of your intelligence, by doing fundamental research).