There is a theorem that a group of rational individuals whose collective beliefs are decided by majority rule need not itself be rational: the group's beliefs can be logically inconsistent even when every member's beliefs are consistent.

Consider a group in which everyone agrees with "C if and only if (A and B)". 1/3 of the group believes A is true and B is false and C is false, 1/3 believes A is false and B is true and C is false, and 1/3 believes A and B and C are all true. By majority vote, the group as a whole would assert that A and B are true and C is false, an inconsistent position given the rule that everyone individually accepts.
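The example above can be sketched in a few lines of Python (the beliefs are taken straight from the example; the code just tallies proposition-wise majorities):

```python
# The three factions' beliefs, one dict per 1/3 of the group.
members = [
    {"A": True,  "B": False, "C": False},
    {"A": False, "B": True,  "C": False},
    {"A": True,  "B": True,  "C": True},
]

# Each individual is consistent with "C if and only if (A and B)".
assert all(m["C"] == (m["A"] and m["B"]) for m in members)

# Majority vote on each proposition separately.
majority = {p: sum(m[p] for m in members) > len(members) / 2 for p in "ABC"}
print(majority)  # A and B pass, C fails

# Yet the group position violates the rule every member accepts.
assert majority["C"] != (majority["A"] and majority["B"])
```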

Consider a group in which 1/3 of the group prefers A > B > C, 1/3 of the group prefers B > C > A, and 1/3 of the group prefers C > A > B.

If the group votes on A vs B, A will win, because 2/3 of the group prefers A > B (the A > B > C faction and the C > A > B faction).

Similarly, if the group votes on B vs C, B will win. And if the group votes on C vs A, C will win.

So, even though each individual's preferences are transitive, the group's majority preferences form a cycle: A > B > C > A.
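The pairwise tallies can be sketched directly (factions and rankings as above; the helper name is my own):

```python
# Each faction's ranking, best to worst; all factions have equal weight.
factions = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a majority of factions rank x above y."""
    votes = sum(f.index(x) < f.index(y) for f in factions)
    return votes > len(factions) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"{x} beats {y}: {majority_prefers(x, y)}")
# Each contest is won 2-1, producing the cycle A > B > C > A.
```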

This means that it is difficult to say if the group 'really' has any preferences between A, B, and C.

If the group operates via a series of majority votes between two alternatives, and one party controls the agenda of these votes, that party can steer the outcome to whatever it likes (within the constraint that the desired outcome, and all alternatives that must be considered, are members of the same preference cycle).

See my notes on the book The Logic of Lawmaking.

The a priori probability of encountering intransitive assembly preferences (also called a "cyclic majority", a "paradox of voting", an "Arrow paradox") "rises rather dramatically with increases in both the number of alternatives considered and the number of voters (Niemi and Weisberg 1968; DeMeyer and Plott 1970; Gehrlein and Fishburn 1976)". Niemi and Weisberg 1968: for 3 alternatives, a priori probability of intransitivity with 3 actors is .056; with 15 actors, .082; with infinite actors, .088. If actors is held constant at 3, then with 3 alternatives: .056; with 4: .111; with 5: .16; with 6: .20. "With many actors and many alternatives, the a priori probability of a paradox can be as high as .84. This means that for a realistic number of actors and alternatives in decision-making settings such as in Congress, ... it is much more likely that a paradox will exist than it will not". -- my notes on the book The Logic of Lawmaking.

Let's say that you have $1000, and someone offers you a bet. You flip a coin; if it comes up tails, you lose $X; if it comes up heads, you keep your $X and in addition win two times $X. You get to choose $X. Afterwards, you will have the option of taking the same bet again, as many times as you want (unless you lose all of your money).

Should you take the bet? Yes, because the expected value is positive. But what should you choose for $X? Your naive reaction might be to bet the full $1000, since that maximizes the expected value of the gain on a single flip. But if you did that over and over again, eventually the coin would surely come up tails, so in the long term that strategy (taking the bet over and over again and betting everything each time) would almost certainly lose all of your money. What you would prefer is a strategy that in the long term makes as much money as possible.

You might say, OK, I'll only bet 99.9% of my money, that way I'll never go broke. But actually this strategy still loses money, even if it doesn't lose 100% of it. (todo copy in the example from my other notes)
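A rough sketch of why this happens, assuming the same coin-flip bet as above: what matters in the long run is the expected log growth of your bankroll per round, and at a 99.9% bet size it is strongly negative (the `log_growth` helper and its parameter defaults are my own illustrative additions, not from any cited source):

```python
import math

def log_growth(f, p=0.5, d=2.0):
    """Expected log growth per round when betting fraction f of bankroll:
    with probability p you win d*f, with probability 1-p you lose f."""
    return p * math.log(1 + d * f) + (1 - p) * math.log(1 - f)

print(log_growth(0.999))  # strongly negative: the bankroll shrinks
print(log_growth(0.25))   # positive: the bankroll compounds upward
```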

The answer to this question is the Kelly criterion. The Kelly criterion gives you a mathematical formula (or set of formulas for slightly different situations) that tells you what proportion of your total capital you should allocate to a bet.

This is equivalent to telling you how much leverage to use in an investment, if you imagine that you have only two choices: invest in this one investment, or leave the money in cash (which is assumed to be completely safe). A fractional answer says to risk some of your money in the investment and keep the rest as cash; an answer of 1 says to invest all of your money; and an answer over 1 says the investment is so good that you should borrow money to invest in it (note that we are neglecting the interest rate here).

The way the math works out, if you bet less than the Kelly criterion, then over the long term you'll make money, but at a slower than optimal rate. If you bet more than the Kelly criterion, growth also slows, and beyond roughly twice the Kelly fraction the long-term growth rate turns negative and you lose money. Therefore, in practice, because there are almost always measurement errors in your parameters and unknown unknowns in your modeling assumptions, most people recommend calculating Kelly but then betting less than it tells you to. The common recommendation is to bet 'fractional Kelly', meaning to multiply the proportion given by Kelly by some constant fraction, such as 1/2.

For the type of scenario given above, where there is one bet to choose from (the money not bet just sits there until the next round), you choose how much to bet ('your wager'), and you will either lose your entire wager or win D times your wager (plus keep the wager), the Kelly criterion formula is:

(pD - (1-p)) / D

This can be remembered as 'edge over odds': (pD - (1-p)) is the 'edge' and D is the 'odds'. The 'edge' here is the expected profit in units of your wager (when you win, which happens p of the time, you make D times your wager; when you lose, which happens (1-p) of the time, you lose 1 times your wager). For example, in the above case where you flip a fair coin and, if you win, keep your wager plus 2x your wager, p = 0.5 and D = 2. The edge here is 0.5 and the odds are 2, so the formula evaluates to 0.25; on each round you should bet one quarter of your bankroll (so on the first round, you should bet $250 of your $1000).
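As a sanity check, the formula can be written out directly (the `kelly_fraction` name and the clamping at zero for negative-edge bets are my own additions):

```python
def kelly_fraction(p, d):
    """'Edge over odds': p is the win probability, d is the net payout
    per unit wagered. Returns the fraction of bankroll to bet; a bet
    with negative edge should simply not be taken, so clamp at 0."""
    edge = p * d - (1 - p)
    return max(edge / d, 0.0)

# The coin-flip example: fair coin (p = 0.5), a win pays 2x the wager (d = 2).
print(kelly_fraction(0.5, 2))  # 0.25 -> bet $250 of a $1000 bankroll
```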

In an investment scenario, if log-normal returns are assumed (a mathematically tractable but insufficiently conservative assumption; actual investment returns follow some heavy-tailed distribution, although it isn't yet known which one), and if it is assumed that bet size can be adjusted instantaneously in continuous time (and possibly some other assumptions are made?), the Kelly formula simplifies to (see section 6.1, pdf page 29 of [1], or section 7.1, pdf page 22 of [2]):

(expected return) / (variance of return)

(recall that 'variance' is the square of standard deviation)
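As a hypothetical worked example (the 6% expected return and 20% volatility figures below are made-up illustrative inputs, not from the sources above):

```python
# Continuous-time Kelly leverage = expected return / variance of return.
mu = 0.06       # hypothetical expected (excess) return
sigma = 0.20    # hypothetical standard deviation of return

kelly_leverage = mu / sigma**2
print(round(kelly_leverage, 2))  # 1.5 -> hold 1.5x with borrowed money;
                                 # half-Kelly would be 0.75x, i.e. keep
                                 # a quarter of the portfolio in cash
```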

Links:

- [3]
- https://en.wikipedia.org/wiki/Kelly_criterion
- Max Dama's guide to automated trading
- http://edwardothorp.com/sitebuildercontent/sitebuilderfiles/Good_Bad_Paper.pdf
- The Kelly Criterion in Blackjack, Sports Betting, and the Stock Market

todo alpha, beta, systematic and idiosyncratic risk

todo what is that derivation? todo taylor expansion

the idea that 'the trouble with investing' is that highly correlated returns tend to result (e.g. a large systematic risk component driving prices across the board)

Links:

- https://en.wikipedia.org/wiki/Modern_portfolio_theory
- https://en.wikipedia.org/wiki/Capital_asset_pricing_model

(expected return) / (standard deviation of return)

(this is the Sharpe ratio; conventionally the return is measured in excess of the risk-free rate)

todo explain

just the idea of subtracting costs from benefits in decision-making

also, the idea in rational decision-making of trying to maximize the ratio of something good to something bad (note: 'minimax' is not the right name for this; minimax means minimizing the worst-case loss)

(compare to 'survival thinking': an emphasis on self-sufficiency and on avoiding 'bankruptcy'; compare that to maximizing log utility rather than just utility; compare to risk aversion; compare to the idea that companies are a more dangerous monoculture because they can adopt one another's strategies)

"

- People Face Tradeoffs. To get one thing, you have to give up something else. Making decisions requires trading off one goal against another.
- The Cost of Something is What You Give Up to Get It. Decision-makers have to consider both the obvious and implicit costs of their actions.
- Rational People Think at the Margin. A rational decision-maker takes action if and only if the marginal benefit of the action exceeds the marginal cost.
- People Respond to Incentives. Behavior changes when costs or benefits change.
- Trade Can Make Everyone Better Off. Trade allows each person to specialize in the activities he or she does best. By trading with others, people can buy a greater variety of goods or services.
- Markets Are Usually a Good Way to Organize Economic Activity. Households and firms that interact in market economies act as if they are guided by an "invisible hand" that leads the market to allocate resources efficiently. The opposite of this is economic activity that is organized by a central planner within the government.
- Governments Can Sometimes Improve Market Outcomes. When a market fails to allocate resources efficiently, the government can change the outcome through public policy. Examples are regulations against monopolies and pollution.
- A Country's Standard of Living Depends on Its Ability to Produce Goods and Services. Countries whose workers produce a large quantity of goods and services per unit of time enjoy a high standard of living. Similarly, as a nation's productivity grows, so does its average income.
- Prices Rise When the Government Prints Too Much Money. When a government creates large quantities of the nation's money, the value of the money falls. As a result, prices increase, requiring more of the same money to buy goods and services.
- Society Faces a Short-Run Tradeoff Between Inflation and Unemployment. Reducing inflation often causes a temporary rise in unemployment. This tradeoff is crucial for understanding the short-run effects of changes in taxes, government spending and monetary policy. " -- Mankiw's 10 principles

empirically, people tend to exhibit 'strong reciprocity', meaning that they will, contrary to their self-interest, tend to be willing to sacrifice resources to "reward a behavior that is perceived as kind", and also to punish noncooperative behavior. [4] [5]

Left to their own devices, people tend to be 'conditional cooperators', meaning that they will cooperate if others seem to be cooperating. However, people tend to cooperate slightly less than they perceive others doing, leading to a group-wide decrease in cooperation over time. However, if punishment is introduced (an option for some group members to punish others), then stable cooperation can be achieved. People will punish noncooperative behavior even when the harm inflicted on the target is no greater than the cost the punisher pays. [6]

Note that (at least in experimental settings such as the prisoner's dilemma), the expectation of a 'repeated game' is often a necessary prerequisite for cooperation.
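A minimal sketch of this effect, using conventional textbook prisoner's dilemma payoffs (T=5, R=3, P=1, S=0; the strategies and payoff values here are standard illustrative choices, not taken from the cited studies):

```python
# Payoffs (row player, column player); 'C' = cooperate, 'D' = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat1, strat2, rounds):
    """Play a repeated PD; each strategy sees the opponent's last move."""
    total1 = total2 = 0
    last1 = last2 = None
    for _ in range(rounds):
        m1, m2 = strat1(last2), strat2(last1)
        p1, p2 = PAYOFF[(m1, m2)]
        total1, total2 = total1 + p1, total2 + p2
        last1, last2 = m1, m2
    return total1, total2

tit_for_tat = lambda opp_last: "C" if opp_last in (None, "C") else "D"
defect = lambda opp_last: "D"

print(play(tit_for_tat, tit_for_tat, 10))  # (30, 30): stable cooperation
print(play(defect, defect, 10))            # (10, 10): mutual defection
print(play(tit_for_tat, defect, 10))       # (9, 14): exploited once, then defects
```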

market failures, externalities, monopolies

Cryptography is the study of communication in an adversarial environment.

Similarly, "cryptoeconomics" is the study of economic interaction in an adversarial environment.

Links:

https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem