interesting quote:

" "It's possible to argue that films such as The Matrix, Terminator II and T3 are structured like video games, each with plots that feature levels of increasing difficulty," says the study. "

-- A study by the Entertainment and Leisure Software Publishers Association

if i have the album "Turn Blue" by The Black Keys on my Android phone in the Play Store in a list of potential albums to buy, the icon/album picture is a spiral. If i then scroll up and down, a picture like the magnetic field lines of a bar magnet appears. Are the typical field lines of a bar magnet a moiré pattern of two superimposed pictures of a spiral displaced slightly vertically? If so, does this mean anything for the physics of electromagnetism? Also, i recall that magnetism was shown to be a relativistic consequence of moving electrons; is that related?


hard work: hard work is indeed the largest controllable determinant of success, and it is also a virtue. However, one thing to be aware of is that you may have more demands on your time than you can satisfy even if you work hard. In this case, you will have to choose how much time to spend on each thing, and for the things you deprive of time, the effects will be the same as if they were the only thing you had to do and you weren't working hard. Obvious implications: (a) if you work hard overall, this doesn't necessarily mean you are giving any specific thing enough time, (b) even if you work hard overall, you may not be seen as a hard worker if your efforts are divided, (c) this is yet another reason not to overcommit.

3 notions of "difficult":


wow this is so accurate:


you know that saying that in chemistry is really physics, physics is really math, etc? i have something to add:


it's easier to understand sentences that don't have negation (or have fewer instances of negation).

The concept of a "best x" can often be replaced by "there is no y better than x".


we don't know anything


there needs to be a 'futurist scenarios' wiki which gives brief standardized names or identifiers to scenarios (and 'possibilities', which are attributes of scenarios, eg 'teleportation invented' is a possibility, 'US GDP rises by 50% from 2015 to 2020' is (still only part of) a scenario) like does for 'tropes' (note; some tropes on are also scenarios)



in Sandman, Dream plays "the oldest game" with a demon. "The oldest game" turns out to be one where each participant imagines themselves to be whatever they want. This seems like a particularly Dream kind of game (no wonder he won). What would be the analogs for the other Endless?

Death: i'm not sure; maybe peek-a-boo?
Destiny: flip a coin
Destruction: dueling
Delight/Delirium: tickling
Desire: seduction
Despair: i'm not sure (maybe prisoner's dilemma?)


conditional rationality: eg consider going to a casino with slot machines that let you choose how much to wager on each spin. You know the expected value is negative, so it's rational not to play. But what if you have already decided to play until either you lose $100, or you win $100? In this case it's rational to bet it all immediately (because the more iterations you play, the more the outcome will tend to converge to the negative expectation, by the law of large numbers).
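A minimal simulation makes this concrete (all numbers hypothetical: an even-money wager with a 47% win probability stands in for the slot machine). Betting everything at once succeeds at nearly the single-spin win rate, while grinding out $1 bets almost never reaches the target before ruin:

```python
import random

def play_until(target, start=100, bet=1, p_win=0.47):
    """Bet `bet` per spin on an even-money wager with win probability
    p_win < 0.5 (negative expectation) until the bankroll hits 0 or
    `target`. Returns True if the target was reached."""
    bankroll = start
    while 0 < bankroll < target:
        bankroll += bet if random.random() < p_win else -bet
    return bankroll >= target

random.seed(0)
trials = 1000
# One all-in $100 bet: succeeds with probability p_win = 0.47.
big = sum(play_until(200, bet=100) for _ in range(trials)) / trials
# Many $1 bets: the law of large numbers drags the outcome toward the
# negative expectation, so winning $100 before losing $100 is very rare.
small = sum(play_until(200, bet=1) for _ in range(trials)) / trials
print(big, small)
```

With these (made-up) odds the big bettor wins roughly 47% of the time, while the $1 bettor essentially never does.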


" Anthropologists tend to use a method called “ethnography” as a way of negotiating design validity. The method gets characterized in many different ways, but I find it useful to think about ethnography as requiring you to go into situations that are more-or-less foreign to you and to put your own intuition and assumptions to sleep. You get input from “natives” about how they view those situations, and then you wake your own intuition back up in order to translate what you’ve learned into something that looks like a coherent and reasonable story to you. You then take that story back and revise it until it seems like a coherent and reasonable story to them too. It’s iterated, negotiated story-telling. " [1]


" "You Are Not a Gadget." One of Lanier's points in that book is that technology can both augment expression and bound it. The Like Button is a classic example of bounding because it reduces your thoughts and feelings about a piece of content to a thumbs up, whereas a simple textbox would let you note whatever you want about the content (text is also harder to monetize and analyze and control).

I personally have been noticing more and more how technology and algorithms seem to bound expression online and off, particularly around content creation, but also with influencers and personal branding and so on. " -- [2]


dunno where i wrote this before (i think i did, some of it at least), but some conventions i use for notes:


a misc. thought on goal-setting/design

in principle, if you optimize for one metric, you can do better on that metric than if you had optimized simultaneously for multiple metrics; that is, adding additional goals may detract from your ability to achieve the first goal.

However, there are different levels of sacrifice here. For example, sometimes there are things you can do that help goal #2 significantly while having only a small (or perhaps highly improbable but significant if it did occur) negative impact on goal #1. Let's define some language for talking about this sort of thing.

We could divide goals into 'major', 'minor', and 'tertiary', or alternately, describe goal importance/priority as high/medium/low.

Within one class (class = major/minor/tertiary, or high/medium/low), this definition neither prohibits nor requires some level of sacrifice (this doesn't mean that anything goes; other constraints may or may not be present and may or may not be given in some other way). Between two classes separated by one step (eg major and minor), a 'moderate' level of sacrifice of the higher class to achieve a 'large' gain in the lower one may be tolerated; between two classes separated by two steps (eg major and tertiary), only a small or improbable sacrifice to achieve a large gain in the lower one may be tolerated.

Also, let's say that in addition to the class-ified goals, there may be other 'implicit constraint' goals. For example, a corporation doesn't need to list "Don't break the law" as a major goal, because this is implicit. The priority of implicit constraints over goals is even stronger than between classes; even a large gain in any goal may not justify even a small breaking of an implicit constraint. For example, many corporations have 'making money' as a goal, but many of them would not want to commit a crime for any amount of money. On the other hand, when uncertainty comes into play, there are still tradeoffs; for example, in the modern world it may take boatloads of lawyers to even figure out what the law is, and even then there may be uncertainty. This makes it impossible to be 100% sure that you are compliant in all respects; in the face of the remaining chance that they are accidentally breaking the law, instead of just closing up shop and quitting, many businesses try their best to understand and comply with the law and continue on.


on evil

my guess is:

on the existence and consequences of evil:

complexities in what is 'evil':

Because of these complexities, even if you knew for a fact that you were dealing with an 'extremely good' person, it would be hard to have any confidence that they would deny themselves a particular action in a particular situation.

Because of those complexities it's usually (but not always) more profitable to think and speak in terms of evil actions rather than evil people; to act as if there is no such thing as a 'good person' or a 'bad person'.

some silver linings:


regarding "It's Easier To Ask Forgiveness Than To Get Permission":

" majos 1 day ago [-]

It's also easier to give forgiveness than permission, especially in an institutional context. Forgiveness after the fact doesn't imply approval of the act the way permission beforehand does.


rdtsc 23 hours ago [-]

You have to dig deeper and ask what makes asking for permission difficult.

In an institutional context when we ask for permission from higher ups, especially publicly, we put them on the spot to clarify some rule or make a pronouncement which rules will be enforced and which won't. Sometimes you have to break rules, or perversely are even expected to, just to get things done. Asking permission means the boss has to tell you that you cannot break the rule officially, even though they know the rule has to be broken for the task to be accomplished. So not only do they deny the permission, they also resent you for forcing them to make the pronouncement and preventing anyone in the near future from accomplishing the task at hand efficiently. In other words even as they respond with "No, Peter, you cannot bypass filling out 10 TPS reports just to fix this bug" in their head they are thinking "Why the fuck didn't you just do it. Why did you have to ask me about it in front of everyone..."

Large power structures usually have rules you cannot break, you can break if you want to, and perversely enough, you should or are expected to break. Winning or losing the politics game often comes down to simply understanding which rules belong to each set.

reply " [3]


orion's arm idea

A polity that uses modosophonts to assess the clarity / simplicity of expressions of ideas. It does experiments like have a jury of mods who are told an idea, permitted very few modifications and little time to think about it, and then tested to see if they understood it.


have you ever noticed that on news-ish discussion forums with a front page of ranked posts, people who agree with an article or are glad about an event (or if not glad, at least i-told-you-so about a bad thing) tend to post more in that article's comments?


"Only seven stories (six percent) were primarily based on original reporting. These were produced by The New York Times, The Washington Post, the Wall Street Journal, The Guardian, Tech News World, Bloomberg, Xinhua (China), and the Global Times (China)." [4]


one thing i've often wondered about is: if you could live for a very long time, say 10,000 years, and meet all sorts of people, would you find that you can categorize people into a small number of personality types? say, 400 types?

my current guess is, not exactly: what you would find is that people vary across a number of variables, and within each of those variables, there are a small number of options, but the number of total personality profiles is very large (on the order of options_per_variable^number_of_variables); and you would find that the interactions between variables are important and so prevent you from predicting behavior very well without thinking about multiple variables.

As a thought experiment, consider the relevance of physical variables to choices that we might be tempted to consider purely psychological. For example, someone may get very sleepy in the afternoon; you can see how this may have an effect on the types of career they choose, effects which could not be predicted considering only their psychological inclinations; you'd need to consider both this physical variable and psychological variables to predict an individual's career preferences. You can see how similar effects would make it hard to predict most things knowing only one or two of the psychological variables.

With an equation like options_per_variable^number_of_variables, there don't have to be too many variables and options for the number of total personality types to rise way above 400. For example, 9 variables each with 3 options is already 3^9 = 19683.


two practical issues that come up when discussing politics are:

1) sometimes people talk about what the ideal policy should be, and other times they talk about what the ideal compromise policy should be out of the set of policies that they think have a reasonable chance of being enacted. If one person is talking about one and a second person is talking about the other, a lot of time can be wasted by not being explicit about this.

2) People are offended by certain viewpoints and if someone tells others that you said something that sounds like an offensive viewpoint, you will be socially sanctioned, even if upon a very close nitpicking examination of what you said, you didn't actually support that viewpoint. Therefore you must refrain from agreeing with certain statements even if you think they are technically true if they happen to sound too close to other statements that you strongly disagree with (especially when the problems of taking something out of context are considered). Furthermore, you know that your reasoning ability is not perfect, so to protect yourself from misunderstanding the meanings of words and implications of a statement and accidentally saying something that you disagree with and that is offensive, you should probably add in a large additional margin of safety, and refuse to support statements that are even near statements that you disagree with.

This presents problems in debate because the counterparty might present a statement that both of you actually think is technically true, but that is worded to sound very close to something that you strongly disagree with, and then ask you to agree with it. Now you cannot just say that you agree because this may be misinterpreted by others. But you cannot disagree either because, since you actually agree, that may be logically inconsistent with your other positions. This makes debate difficult because sometimes A and B are contradictory, and some of your statements seem to imply that you believe A, but your other statements seem to imply that you believe B, so the person you are talking to wants to know which it is, and your refusing to answer this question makes things hard and makes you appear to be trying to be slippery instead of earnest.

However, i don't see any easier way. I think people just have to accept the practical reality that sometimes the person they are talking to is unable to answer questions of the form "Do you accept proposition X?". What they can usually do is to explicitly state that X sounds too close to things they don't support so they are not going to answer the question, and then rephrase X into some other similar proposition X' that they can support, and then say that they support X'. I advocate more awareness of this sort of situation and more acceptance of this solution.

Closely related, unfortunately, the popular understanding of various words and phrases is often at odds with their formal meaning and implications. Such words and phrases should be avoided when possible because of their potential to formulate assertions which can 'trap' people when the actual formal meaning of the assertion is something they agree with, but the popular understanding of the assertion is something they disagree with. Again, i think the remedy is to encourage and permit debaters to attempt to reformulate assertions using different wording.


not sure where this belongs, but consider the sci-fi situation where someone could create copies of themselves, not just clones of their body (twins) but also copies of their mind.

How would copies of you interact with each other? Would they work together? If so, how would they govern this interaction? Would they legally be considered one person or many?

One imagines that various people might have different ideas about how they and their copies should interact. It would be useful for them to think about this and come up with a firm idea for at least an initial collaborative scheme before they 'branched' into copies, so that it would be certain that all of the copies, at least initially, would agree with the plan.

One could imagine a variety of schemes. One idea would be a strict totalitarian authoritarian hierarchy (with one 'monarch' or 'prime' copy having unquestioned absolute authority over the others). Another would be a complete lack of coordination with one another, where the copies live separate lives just as if they were completely unrelated people. In the middle one could imagine various schemes using special decision-making protocols only appropriate where complete trust can be assumed.

However, my guess is that the best way to do it would be to use the same collaboration, decision-making, and communications protocols used elsewhere by humans. Why is this -- couldn't we expect one's copies to be more trustworthy and altruistic to each other, enabling us to dispense with the large overhead of normal human protocols that protect against bad actors? Well, first, even if one could initially completely trust one's own copies, the system is vulnerable to subversion. This could come in various forms; (a) an enemy/criminal could brainwash or 'reprogram' some copies, (b) an enemy/criminal could impersonate some copies (even if the copies initially share 'passwords' or other secret ways of authenticating each other, over time enemy/criminal surveillance and intimidation could potentially 'break' this authentication), (c) an enemy/criminal could intimidate some copies. This means that even if all of the copies have good intentions towards each other, this is not sufficient to guarantee that, for example, suggestions apparently given by a copy are always in the interest of the group (so, for example, the 'monarch' system would be particularly vulnerable because an enemy/criminal need only subvert the monarch copy to gain control of the whole group). Second, since this has never occurred before, it's unknown whether some of the copies would over time evolve to be selfish rather than to act in the interests of the group.

So, i would imagine that a good system would be for the copies to think of themselves as separate individuals, and set up some sort of ordinary legal corporate entity, with ordinary voting procedures, for collaboration. It is likely that the expected trust and altruism between copies, in addition to shared values, goals, and background, would lead to much less 'politics' than in other corporations, leading to much more efficient than average corporate functioning (not to mention more pleasant interactions) -- although on the other hand, a lack of diversity of thought and of background knowledge would probably lead the corporation to occasionally make surprisingly dumb decisions.


related technologies:

An 'upload' scenario may or may not be possible, where the mind can be transferred into a digitized storage medium and then either executed there in VR or 'downloaded' into other bodies.

Some consequences of upload technology:

If the mind can be executed in VR, it may or may not be possible to:

It may or may not also be possible to:


" cperciva 3 days ago [-]

This reminds me of a different "rule of 3": If you want to compare two things (e.g., "is my new code faster than my old code"), a very simple approach is to measure each three times. If all three measurements of X are smaller than all three measurements of Y, you have X < Y with 95% confidence.

This works because the probability of the ordering XXXYYY happening by random chance is 1/(6 choose 3) = 1/20 = 5%. It's quite a weak approach -- you can get more sensitivity if you know something about the measurements (e.g., that errors are normally distributed) -- but for a quick-and-dirty verification of "this should be a big win" I find that it's very convenient.

reply " -- [5]
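The 1/(6 choose 3) figure can be checked by brute force: under the null hypothesis all rank orderings of the six measurements are equally likely, and exactly one of the 20 puts every X below every Y (a quick sanity check, not part of the original comment):

```python
from itertools import permutations
from math import comb

# All distinct rank orderings of three X-measurements and three Y-measurements:
orderings = set(permutations("XXXYYY"))
assert len(orderings) == comb(6, 3)  # 20 equally likely orderings

# Exactly one ordering puts every X below every Y:
p = sum(o == tuple("XXXYYY") for o in orderings) / len(orderings)
print(p)  # 0.05, i.e. 95% confidence
```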


" Suppose you’re proofreading a book. If you’ve read 20 pages and found 7 typos, you might reasonably estimate that the chances of a page having a typo are 7/20. But what if you’ve read 20 pages and found no typos. Are you willing to conclude that the chances of a page having a typo are 0/20, i.e. the book has absolutely no typos?

To take another example, suppose you are testing children for perfect pitch. You’ve tested 100 children so far and haven’t found any with perfect pitch. Do you conclude that children don’t have perfect pitch? You know that some do because you’ve heard of instances before. Your data suggest perfect pitch in children is at least rare. But how rare?

The rule of three gives a quick and dirty way to estimate these kinds of probabilities. It says that if you’ve tested N cases and haven’t found what you’re looking for, a reasonable estimate is that the probability is less than 3/N. So in our proofreading example, if you haven’t found any typos in 20 pages, you could estimate that the probability of a page having a typo is less than 15%. In the perfect pitch example, you could conclude that fewer than 3% of children have perfect pitch.


What makes the rule of three work? Suppose the probability of what you’re looking for is p. If we want a 95% confidence interval, we want to find the largest p so that the probability of no successes out of n trials is 0.05, i.e. we want to solve (1-p)^n = 0.05 for p. Taking logs of both sides, n log(1-p) = log(0.05) ≈ -3. Since log(1-p) is approximately -p for small values of p, we have p ≈ 3/n.

The derivation above gives the frequentist perspective. I’ll now give the Bayesian derivation of the same result. Then you can say “p is probably less than 3/N” in clear conscience since Bayesians are allowed to make such statements.

Suppose you start with a uniform prior on p. The posterior distribution on p after having seen 0 successes and N failures has a beta(1, N+1) distribution. If you calculate the posterior probability of p being less than 3/N you get an expression that approaches 1 – exp(-3) as N gets large, and 1 – exp(-3) ≈ 0.95. " [6]
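Both derivations in the quote are easy to check numerically; the Beta(1, N+1) CDF is just F(x) = 1 - (1-x)^(N+1), so no stats library is needed:

```python
import math

# Frequentist: the exact solution of (1-p)**n = 0.05 vs. the 3/n approximation.
for n in (20, 100, 1000):
    exact = 1 - 0.05 ** (1 / n)
    print(n, round(exact, 5), 3 / n)

# Bayesian: uniform prior, 0 successes in N trials -> posterior Beta(1, N+1),
# whose CDF is F(x) = 1 - (1-x)**(N+1). Posterior probability that p < 3/N:
for N in (20, 100, 1000):
    print(N, round(1 - (1 - 3 / N) ** (N + 1), 5))

print(round(1 - math.exp(-3), 5))  # limiting value, about 0.95
```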


madrox 3 days ago [-]

What the author glosses over somewhat is the method of sampling. If you read the first 20 pages, find no typos, and use this rule to arrive at 15%, that could be way off. He's assuming the risk of typos is evenly distributed when there's a lot of reasons it may not be. For example, the first half of the book could've been more heavily proof-read than the latter half. It's not out of the question that editors get lazier the farther they get into the book.

If you were to randomly read 20 pages in a book and find no typos, 15% probability makes more sense.

It's understandable to not mention this in a short blog post about the rule of three, but never forget that when you're interpreting, how you built your sample matters.

reply " [7]

" ajkjk 3 days ago [-]

So basically the '3' comes entirely from the choice of a 95% confidence interval. If you want a 99% confidence interval it's instead the 'rule of 4.6', which doesn't roll off the tongue as well.


jedberg 3 days ago [-]

You could do the 'rule of 5' though and have a confidence of 99.3%, which is pretty close to 99.

reply " [8]
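The constants in this exchange are just -ln(alpha) for the chosen confidence level 1 - alpha, which is quick to verify:

```python
import math

print(-math.log(0.05))   # about 3.00: the "rule of 3" at 95% confidence
print(-math.log(0.01))   # about 4.61: the "rule of 4.6" at 99%
print(1 - math.exp(-5))  # about 0.9933: the confidence a "rule of 5" buys
```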


"The Lindy effect...

    The future life expectancy of technology is proportional to its current age. Every extra period of survival implies a longer remaining life expectancy."


"Posner and Weyl do give one example of what I would call a decentralized institution: a game for choosing who gets an asset in the event of a divorce or a company splitting in half, where both sides provide their own valuation, the person with the higher valuation gets the item, but they must then give an amount equal to half the average of the two valuations to the loser. "



the convention for handles that turn pipes on and off is:

when the handle is parallel to the pipe, it's on. When the handle is perpendicular to the pipe, it's off.


notes on


Caches that take into account recency of use as well as frequency, like the brain (reference that article on exponential forgetting and exponential decay of newspaper term appearance, as well as spaced repetition). This reminds me of advanced branch prediction algorithms.


There should be a word for the transient bitter sadness that occurs when you experience a beautiful qualia that you feel is likely highly specific to your personal aesthetic history, including the specific aesthetic of the times that you lived in; difficult to reproduce even in your own mind; unable to be communicated or shared, and hence unlikely to be experienced by very many people besides yourself; hence likely lost forever, never again to be experienced after your death, and possibly not even again during your life.


Action Programs



on design:

There are sort of 4 activities/types of phases involved in design. You may go through each of these phases many times before finishing the design (and in fact i suggest that you do).

Note however that for the most part, the experience you gain is proportional to the number of projects you finish, not to the number of projects you start; so although going through each of these phases many times improves the design, from the point of view of improving yourself rather than improving the design, there is also value in stopping re-designing early so that you completely finish the project sooner (even when you could have improved the design further).


Not only war and poverty but also crime and corruption and disease are major problems


[9] [10]


Epacket is good, speedpack is slow. EMS is better than both.


zengargoyle 9 days ago

parent flag favorite on: Ask HN: What is the most beautiful piece of code y...

Pick a random line from a file / stream without knowing how many lines there are to choose from in one pass without storing the lines that have been seen.

  perl -e 'while(<>){$x=$_ if rand()<=(1/$.)}print $x'

For each line, pick that line as your random line if a random number (0<=n<1) is less than the reciprocal of the number of lines read so far ($.).

It hits my elegant bone. Only one line... rand < 1/1, pick it. Two lines, same as one, but the second line has a 1/2 chance of replacing line one. Third line same as before but gets a 1/3 chance of taking the place of whichever line has survived the first two picks. At the end... you have your random line.

microtherion 9 days ago [-]

Same number of keystrokes, but IMHO more idiomatic & readable:

    perl -ne '$x=$_ if rand()<=(1/$.); END { print $x }'


zengargoyle 8 days ago [-]

Ha, that's how I originally wrote it, but I thought I should get rid of -n and END{} in an attempt to ward off the eww Perl comments. Sigh.

But at least now I know that it's called Reservoir Sampling. I had wondered how to generalize it to wanting N lines.


visarga 9 days ago [-]

Does this algorithm guarantee uniform probability for all lines? Seems like the original ordering of the lines is a factor.


chubot 9 days ago [-]

The OP should have mentioned that it's this algorithm:

IMO it's a lot clearer if it's not in Perl ...

The pseudocode in Wikipedia also avoids division.
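Following chubot's point, here is the same idea in Python, generalized to picking k lines (Algorithm R, the textbook form of reservoir sampling). Each item ends up in the sample with equal probability regardless of input order, which answers visarga's question:

```python
import random

def reservoir_sample(iterable, k=1):
    """Pick k items uniformly at random from a stream of unknown
    length, in one pass, storing only k items (Algorithm R)."""
    reservoir = []
    for i, item in enumerate(iterable):
        if i < k:
            reservoir.append(item)    # fill the reservoir first
        else:
            j = random.randint(0, i)  # inclusive on both ends
            if j < k:
                reservoir[j] = item   # replace with probability k/(i+1)
    return reservoir
```

With k=1 this is exactly the Perl one-liner: the i-th item (counting from 1) replaces the current pick with probability 1/i.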



One way in which the brain differs from conventional computers is that neurons appear to serve as both CPU and memory storage. Rather than having a few fast CPUs and a large bank of dedicated memory with a few buses (leading to "the von Neumann bottleneck" of data transfer between memory and CPU), the brain has many slow neurons each of which has many connections with other neurons.

So, in-memory computing may be more brain-like.

The brain appears to be very good at massive concurrency with low energy consumption, at the expense of slow serial computation and a high error rate.


An organization that determines its beliefs by voting can have inconsistent beliefs even if each voter has consistent beliefs and votes honestly.

For example, imagine that "(A AND B) logically implies C", and every voter agrees with that statement. Imagine that 1/3 of the voters believe (A, not-B, not-C), 1/3 believes (not-A, B, not-C), and 1/3 believes (A, B, C). If you hold a vote, 2/3 of voters believe A, 2/3 believe B, and 1/3 believes C; so the organization as a whole will hold the contradictory set of beliefs (A, B, not-C).
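The tally works out as claimed; a quick sketch with the three voter profiles from the paragraph above:

```python
# Each voter's beliefs about propositions (A, B, C); every individual
# voter satisfies "(A and B) implies C":
voters = [
    (True,  False, False),
    (False, True,  False),
    (True,  True,  True),
]
assert all((not (a and b)) or c for a, b, c in voters)

def majority(votes):
    return sum(votes) > len(votes) / 2

# Proposition-by-proposition majority vote:
a, b, c = (majority(col) for col in zip(*voters))
print(a, b, c)  # True True False
assert a and b and not c  # the aggregate belief set violates the implication
```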


The division of the cerebral cortex into cortical areas is somewhat subjective.


Rational thinking is computationally intractable.