opinions-academicPeople

The seeming* irrationality of individual researchers

Before I entered academia, I imagined that technical researchers would be much more reasonable, rational, unbiased people, who looked at all sides of an issue, who were careful not to jump to conclusions, and who could be relied upon to properly qualify their statements. While there is a kernel of truth to that, compared to my initial ideal, I have found academia to contain a large number of curmudgeonly, stubborn, biased cranks.

It seems that many people outside academia hold the view that I used to hold, and that this leads them to put too much trust in what academics say, even when they are speaking off the cuff or on a topic outside their area of expertise.

The amazing thing is that, when writing about their area of expertise and reviewed by their peers, these same stubborn, biased cranks manage to turn out articles which qualify their statements, which seem downright conservative in the conclusions that they draw from the evidence (but see the next section), and which even point out ways that they could be wrong.

Why is that? My guess is that it's not that researchers, left to themselves, are much more "rational" than non-academics, but rather that the process of peer-reviewed journal publication, when situated in the social context of the academic community, results in more rational journal articles.

The unreliability of individual journal articles

Another thing I was surprised at is how often these journal articles are wrong. This sounds strange, since I just told you how careful and how conservative these articles are, yet it seems (to me, at least) to be the case. Part of this is that these articles aren't actually as conservative as they seem; often, everything they say is __literally__ true, but presented in such a way as to shed the best light on their side of the story*. But another part of it is that the world is just more complicated than we tend to think. There seem to be plenty of cases in which some set of experiments appears to reasonable, careful people to strongly establish some conclusion, and then later new data comes out that seems to establish the opposite conclusion just as well (and then everyone scratches their heads and tries to figure out how to reconcile the data).

The unreliability of scientific consensus

There have been many examples in the past in which the rough consensus viewpoint of the entire scientific community was later shown to be wrong, even after it had held for decades.

Throw out science?

Some people conclude from these sorts of arguments that science is no more trustworthy than other methods of determining truth (examples: (1) personal intuition; (2) anecdotal evidence; (3) taking as effective those practices that have been thought to work by many people for a long time). For example, fans of alternative medicine sometimes feel that practices which have not proven themselves in published, peer-reviewed (controlled, blinded, statistically valid, etc.) clinical trials have an equal claim to trustworthiness as those that have [9].

I do not agree. Despite my belief that science is unreliable, as detailed above, I think that all other empirical methods are still less reliable than science, by a large margin.

How to make a good guess as to what is true

Part 1. What you "should" do.

This means that if you read a peer-reviewed journal article which appears to be conservative and balanced, and it convinces you of something, you still can't take this at face value; what you want is a large number of different journal articles which establish the same conclusion in different ways, while at the same time there are few (ideally, none[6]) articles that appear to establish the opposite conclusion*. Due to various biases in the process of academia, as well as to the nature of reason itself [8], the raw number of articles on one side or the other may be misleading; so you probably want to check with an expert to make sure that you didn't miss any important article. And if there are any articles arguing against a particular conclusion which haven't themselves been conclusively refuted, you should place correspondingly less confidence in that conclusion.

So, my conclusion is: you shouldn't

First, they are self-censoring, because they know that if they say something unwarranted, their peers will make fun of them.

Part 2. What you should actually do.

It depends on how much time you want to devote to answering your question.

If you don't have much time at all, just think to yourself about the question and make a guess, based on the evidence that you have already encountered. This method is very unreliable.

If you have a little more time, think about it yourself as above, then ask a friend or two what they think and why, and then rethink your guess based on what they say. This method is much more reliable than the first one, but still very unreliable.

If you have more time, find a researcher within the general field that covers your topic (e.g., "economics"), and ask them what they think. After they tell you, paraphrase their conclusion in your own words and ask them if you interpreted it correctly. This is more reliable than asking your friends, but is still rather unreliable. Many journalists use this method to get evidence for a story, and I think that the answers they get (and publish) are often treated with way too much reverence, because "a scientist" said it.

If you have more time, find a few researchers who aren't at the same institution, who aren't collaborators, who have never employed each other, who don't seem to be advocates of a particular side of the issue, and who are all specialists in the particular area that covers your topic (e.g., "synaptic plasticity"). Ask them all to briefly discuss your question and come up with a collectively approved summary of what they each think, how certain they are, and what the consensus of the scientific community is on this issue (if you don't want to waste too much of their time, make it clear that you don't need a very detailed answer, e.g., two sentences instead of a page -- they should be able to hammer this out within a few days at most via email). After they tell you, paraphrase their conclusion in your own words and ask them if you interpreted it correctly. This is the first method that I think is reliable enough that you should treat the answer you get out of it with a significant degree of extra authority, because it came from 'scientists'. This is the method that I think journalists should use to gather expert input for a story. When the answer requested is very long, and the time that the researchers spend discussing it is very long, this is (similar to?) the method of a scientific advisory committee. Wikipedia pages are produced by a similar method, although on Wikipedia there is no guarantee that you get a good sample of expert contributors on any given page.

At this point, we have almost reached the limit of what you can do without looking into the question yourself. If you have tons of time, you can take a poll of a large number of experts in the relevant area. But most people don't have the resources to run a proper mass poll. Instead, you can improve reliability by investing time in understanding the issue.

How can it help to understand the issue, when there are already scientists who understand it whose opinion you can ask? Because you don't really know which scientists to ask. Two of the criteria for committee member selection are crucial but also very hard for someone who doesn't understand the issues to get right: you're supposed to find people "who don't seem to be advocates of a particular side of the issue, and who are all specialists in the particular area that covers your topic". This is very difficult.

First, how do you know which academic research subarea is most relevant to your question? If you're lucky, this will be uncontroversial, and you can just ask any scientist to forward you to someone more likely to be the right kind of specialist, and then repeat until you reach the right specialty. But in some cases multiple scientific communities each think they're the best people to ask about certain kinds of questions, and often in these cases the different communities give different answers.

Second, how do you know if the people you ask are "advocates of a particular side of the issue"? Surely most "advocates" would claim to merely be unbiased, right-thinking individuals who will just tell you the facts, and that those who call them biased advocates are either biased advocates on the other side of the issue who are trying to smear them, or non-experts who don't know what they are talking about.

Third, how do you know if someone is an expert in a particular topic? Many scientists are happy to

TODO

If you have more time, you can either take a poll of a large number of scientists, or begin to look into the question yourself.

If you have more time, convene many committees as above, and in addition run a statistically valid poll of the members of a scientific society whose topical focus matches, as closely as possible, the relevant research area for your question.
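To give a rough sense of what a "statistically valid" poll of a society's membership requires, here is a minimal sketch of the standard normal-approximation sample-size calculation for estimating a proportion, with a finite-population correction since scientific societies are often small. All names and numbers here are illustrative assumptions, not anything prescribed by the text; it assumes simple random sampling and ~95% confidence.

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Rough number of respondents needed to estimate a proportion
    to within +/- `margin` at ~95% confidence (z = 1.96), assuming
    simple random sampling. p = 0.5 is the worst case (widest
    interval); the second line applies the finite-population
    correction, which matters for small societies."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

# For a hypothetical society of 2,000 members, a +/-5% poll still
# needs roughly 323 respondents -- which is why "ask a few experts"
# and "run a proper poll" are very different amounts of work.
print(sample_size(2000))
```

Note that the required sample shrinks only slowly as the population shrinks, so even polling a small society to a tight margin takes hundreds of responses.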

At this point, we have reached the limit of what you can do without looking into the question yourself. But actually, even to carry out the previous stages

Generally, on most questions of interest there will be a substantial minority of scientists who seem to take each position. On some questions there is a majority opinion; on others, only a plurality. On some questions the majority is so strong that