A theory of voting subminds, and implications for robust judgement

My theory of human decision-making is that one's mind contains various 'subminds', which might come to different conclusions about which choice to make. These different conclusions are amalgamated somehow, in some sort of semi-opaque internal 'addition' or 'voting' or 'negotiation' process. Furthermore, I believe there is a learning process that causes this system to make better and better decisions over time when it is given feedback.
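As a toy illustration (my own sketch, not anything from the text above or from the psychology literature), the 'amalgamation plus feedback' story can be modeled as a weighted vote over subminds, with feedback shrinking the weight of any submind that voted for the losing option; the class name and parameters here are all invented for the example:

```python
class SubmindVote:
    """Toy model of the submind-voting idea: each 'submind' casts a vote
    on a binary choice, the votes are amalgamated by weight, and feedback
    reduces the influence of subminds that voted the wrong way (a simple
    multiplicative-weights scheme)."""

    def __init__(self, n_subminds):
        # every submind starts with equal influence
        self.weights = [1.0] * n_subminds

    def decide(self, votes):
        # votes[i] is 0 or 1, one per submind; the weighted mass
        # behind option 1 is compared against half the total weight
        for_one = sum(w for w, v in zip(self.weights, votes) if v == 1)
        return 1 if for_one >= sum(self.weights) / 2 else 0

    def feedback(self, votes, correct, eta=0.5):
        # subminds that voted against the correct outcome lose influence;
        # eta controls how fast the system learns
        self.weights = [w * (1 - eta) if v != correct else w
                        for w, v in zip(self.weights, votes)]
```

Under repeated feedback, a submind that reliably votes with the outcome comes to dominate the amalgamated decision, which matches the 'learning process' claim; the semi-opaqueness corresponds to the fact that the deciding agent sees only the final output of `decide`, not the per-submind weights.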

For example, when someone makes a decision that benefits others but also appears to benefit themselves, there is often a lot of discussion as to whether their motivation was really 'pure', i.e. whether the cause of their decision was the self-benefit or the benefit to others. This implies a model in which one consciously follows some rule-based algorithm to reason about decisions, and in which the resulting choice can be explained in terms such as, "I chose option 'A' because it is better than 'B' along axis X, and although B is better than A along axis Y, axis X is more important to me than axis Y". On this model, a 'pure' decision is one in which the internal process treated the benefit to the group as taking precedence over the benefit to oneself within a certain class of scenario.

In my model, what actually happens during such a decision is that some subminds consider the benefits to oneself while other subminds consider the benefits to others (and probably many subminds evaluate the decision from both of these angles, or on some other basis involving neither). This means that all decisions are 'impure', insofar as the 'selfish' subminds always play a part in the decision; if the altruistic subminds are sufficiently indecisive and the selfish subminds are sufficiently strong, then the selfish ones will determine the outcome regardless of scenario. (The semi-opaqueness of the process means that the person does not have the option of removing the input of the selfish subminds and making the decision on a purely altruistic basis.)

This theory seems to have implications for robust judgement. One way to make judgements is to consciously follow a rule-based algorithm of the sort discussed above. If this algorithm is objective, then there are no opaque or semi-opaque internal processes involved, so the above submind-voting system isn't effectively engaged; if the algorithm is mostly but not completely objective, then the submind system is only engaged to make the subjective judgement calls involved (although many of these subminds are not impartial and will take into account the effect of their 'judgement call' on the overall outcome).

However, a downside of trying to cut out the submind system by using explicit rules is that the submind system will not be exercised very often in situations of import. As we know from computer programming, rule-based decision-making is brittle: if there is a 'bug in the program', then following an objective procedure does not produce good results; and if we resolve to use an objective procedure except when, in our judgement, there is a bug, then the submind system is involved anyway in making that determination. Furthermore, if we usually follow an objective procedure but at some point become intoxicated or otherwise unable or unwilling to use it, or if we find ourselves in a situation where the procedure is inapplicable, then we will be forced to fall back on the submind system. In both cases, since we don't typically rely on the submind system, its lack of exercise means it has probably been inadequately trained.

So the way to promote more robust decision-making is to rely on the submind system frequently, rather than only in extremis. For example, if you have some bad habit, say eating or drinking more than you would like, the easiest way to fix the problem is to make simple, comprehensive, objective rules for yourself and follow them; when the rules are objective and comprehensive, there is no scope for the submind system to pull the wool over your eyes and continue the bad behavior while making excuses to yourself. The problem with this sort of fix is that the submind system is merely being bypassed; if it is later engaged, it may well show the same propensity for the bad habit that it used to. When feasible, a better way to solve the problem is to train yourself to exercise better judgement on a case-by-case basis; this retrains the submind system itself, and so is more robust to situations in which you cannot or will not use your procedural rules.

Mental timescales

I also think there are effectively different 'subminds' that operate on different timescales. For example, your personality when interacting verbally, where you don't have much time to think of what to say next, may be slightly different from your personality when interacting in writing, where you can take your time.


I also think that people rely more on 'stigmergy' or 'distributed computation' than we realize. That is, to a greater extent than is immediately obvious, ideas that we think we come up with are really just simple 'riffs' on things we have seen before, or on the current condition of our external environment. This can give the external world a surprisingly large effect on our actions: people tend to follow social scripts in situations; social proof has a large nonconscious effect on political views; small changes in the environment affect personal habits.


I also think that a lot of our cognitive machinery is built on prediction. Even some of our action-causing cognitive machinery may just be prediction machinery running in a 'bidirectional' mode. This would explain why we sometimes see 'backwards' effects like placebos, where internal predictions cause biological effects. The part of our brain controlling hormone release is hooked up to a predictor, which is also hooked up to a part sensing hormones and other internal state. The predictor has noticed that, in the past, there was a correlation between an efference copy of command signals sent to a hormone-releasing part and certain changes in internal state. When it predicts that the placebo will cause similar changes in internal state, it therefore also predicts that those same command signals will be observed on the efference copy; due to 'bidirectionality', this causes those command signals to actually be sent to the hormone-releasing part. Another way to put it is that prediction here means 'trying to make the signals on two different channels match': the hormone-releasing part is itself trying to 'predict' its own state from the signals it receives, by changing its state to whatever has matched those signals in the past, and because of 'bidirectionality' the prediction on the efference copy is not concealed from the hormone-releasing part but actually sent to it.

Perhaps even a large fraction of our own brain is sub-parts mostly engaged in trying to predict what other sub-parts of our brain will do.


I also think that, as much as magical thinking (such as 'as above, so below', etc.) has, at least seemingly, been shown not to hold in the external world, it is actually a good guide to the way our minds work (on a psychological level, not a physical one). I postulate that people came up with these 'magical' principles based on experience with how things seem to work in their own minds according to introspection (and according to ways people have discovered to control their own minds).


I also think that our cognitive machinery is built on holding different, conflicting beliefs simultaneously. The mechanism for choosing between these when they conflict (probably related to confabulation and to neglect syndrome) is something that should be studied. This may also relate to confirmation bias.

Also related: I've noticed that people who hold very unusual political views, views that would seem to imply they should adopt a very unusual lifestyle, often nevertheless choose typical lifestyles and everyday habits. This suggests that (a) people are able to hold these conflicting beliefs (their political beliefs and their beliefs about how they should live their lives); (b) people's political beliefs (and probably beliefs in general) are less consequential than they might seem; and (c) people are making their lifestyle choices in some other way than by reasoning from their purported beliefs about the macro world: perhaps by imitating others, doing what is expected of them socially, or doing whatever seems to have worked in the past.


Sometimes it helps to predict (internally, just to yourself) what you are going to decide about something before you've decided. This especially helps when you are being wishy-washy, because sometimes you find that once you have predicted one choice or the other, you are comfortable just sticking with it (while not giving up your freedom to keep being wishy-washy if you want).


doing something unnecessary and physical (pointing, talking to yourself) helps reduce errors when doing tasks:


" About this time I met with an odd volume of the Spectator. It was the third. I had never before seen any of them. I bought it, read it over and over, and was much delighted with it. I thought the writing excellent, and wished, if possible, to imitate it. With this view I took some of the papers, and, making short hints of the sentiment in each sentence, laid them by a few days, and then, without looking at the book, try'd to compleat the papers again, by expressing each hinted sentiment at length, and as fully as it had been expressed before, in any suitable words that should come to hand. Then I compared my Spectator with the original, discovered some of my faults, and corrected them. —Benjamin Franklin, Autobiography

K. Anders Ericsson likens it to how artists practice by trying to imitate some famous work. Mathematicians are taught to attempt to prove most theorems themselves when reading a book or paper --- even if they can’t, they’ll have an easier time compressing the proof to its basic insight. I used this process to get a better eye for graphical design...

But the basic idea applied to programming books is particularly simple yet effective.

Here’s how it works:

Read your programming book as normal. When you get to a code sample, read it over.

Then close the book.

Then try to type it up. "


to learn new things and remember them:


cognitive biases flowchart


cognitive biases groupings


good, short read:

two comments:

madeuptempacct:

I think good quality materials or teachers are key, and will always be key. I taught myself most things I know, and I am an idiot for it (though sometimes I had no other option).

There is no substitute for knowing CS50 exists for learning to program. There is no substitute to "Speed Secrets" (the book) for learning to drive fast, etc, etc.

The problem is that, as a beginner, you don't know what's good. As a beginner, you don't know that this really will prevent any knee instability: (PDF warning)

or these really are ALL great exercises:


I wish I had someone give me these things as a kid. I would have wasted much less time.


randomsearch:

Also: high quality sleep makes a huge difference to retention (fact) and makes it easier to tackle new material (personal experience).