proj-future-goalsForSociety

Goals

My overall dream for society is a place where everyone is nice to each other, where people are free, and where we have the ability to walk around and contemplate nature: take walks through forests, sit and watch the sky, and so on.

But before that, my first goal for society is to make everyone at least as well-off as I am now.

Sure, there are plenty of terrible things about my life, but at least I'm not starving, I'm free, I have some free time after work, I get some amount of choice in what I do, and there's only a small chance that I'll be a target of violence.

Post-humanism & singularity

I'm not into post-humanism. I have nothing against it, and in fact I'm almost all for it, but not if it effectively forces everyone to partake; anyone who chooses to remain un-upgraded should be able to do so and still achieve the quality of life I've specified.

Another way to say this: not at the expense of my "overall dream" goal; not if it involves destroying the remaining nature on Earth, making everyone so busy that they have no time to enjoy anything, or somehow effectively forcing everyone to "upgrade" in order to make a living. Unfortunately, I feel that the current shortest path to post-humanism may indeed involve some of these things. So I think we may need to take it slowly and figure out how to avoid those bad outcomes. That may well mean a system other than the post-humanist strategy of advancing technology as fast as possible.

So I disagree with that strategy, although I think the goal itself is fine, as long as it doesn't mess up other things. That's why I say I'm not into post-humanism, but I have nothing against it.

In other words, I think a singularity may be cool, and personally I'd probably hop on board as fast as I could, but ultimately it may not be as cool as a utopia without transcendence. So I want to avoid a singularity until we can have one while also preserving things for those who choose not to transcend.

I should note that I think we're hundreds of years away from a singularity. I don't think it will happen within our lifetimes. But I guess the sooner we start thinking about such things, the better equipped society will be for it later.

My career

When I first got into A.I. and neuroscience, I thought that the way to solve the world's problems was to make each individual human smarter. I figure that we're probably about as smart as cows, and so we're too stupid to solve even the simplest things. If we could just get ourselves up to a semi-decent level of intelligence, I bet many of the world's "intractable" problems would turn out to have an obvious solution.

That is to say, I think humans are dumb animals and so we're too stupid to even think of what should be obvious solutions to our social problems.

However, while I still think my appraisal of human intelligence and of the supposed difficulty of our problems is correct, I'm afraid there might be side effects to making people smarter, of the kind I discussed under "Post-humanism & singularity" above. Basically, inventing these new technologies will change society in unexpected ways. For example, if we invent ways to make people smarter, but capitalism is retained, won't everyone be forced to upgrade or starve? If we link computers more closely to our minds, won't everyone be forced to monitor their email 24/7 in case something "really important" comes through from work?

I wouldn't mind if this were guaranteed to be just a temporary phase between here and utopia, but there's no guarantee.

I think that there is a way to get from here to there without ruining everyone's life, but we're going to have to steer society better in order to achieve it.

So, I'm worried about the effects of good neuroscience research. The best neuroscience will bring us closer to a world of tight links between brains and computers, and of direct modification of our own minds. The effects of this will likely be very systemic and very unpredictable: singularity-type changes.

I'm less worried about A.I. research, if only because I don't think we'll get very close to an intelligent computer in the next few hundred years, so we have time. Before that, I think the effect of "more intelligent" computers will be big (it will even destroy most present-day jobs), but it will ultimately fit within our current framework; it will be a huge change, but it won't fundamentally alter the human condition (not as fundamentally as singularity-type stuff, that is).

So, working on neuroscience is dangerous, because it brings us closer to an unplanned singularity. Working on A.I. seems safer because the A.I. singularity is far away. Either one will improve things in the meantime.

Another professional interest I've discovered in the last few years is using technology to aid human collaboration. I think this might be the best thing to work on (compared to the other two), because it might increase the ability of human society to steer itself. And I'm afraid that, whatever I think, our society is structured in such a way that it has a huge amount of momentum carrying it towards a singularity, so it may be better to accept the inevitable and spend the effort planning for it rather than trying to delay it.

The dark side of this technology is of course that it will itself restructure society, and who knows how that will turn out. Will better coordination lead to more intelligent group decisions (hopefully)? Will it allow small groups of caring citizens to coordinate better and have more of an impact (hopefully)? Or, will it lead to making governments and corporations intelligent enough that they will finally totally dominate humans, with ubiquitous surveillance and intelligent schemes to crush any opposition (hopefully not)?

The neuro and A.I. techs seem to increase the power of lone individuals (which I think is very good). The collaborative techs seem to increase the power of groups (which I fear, because I fear governments and corporations).

But, so far at least, the "tone" or touchy-feely feeling you get from working on this stuff seems benevolent, whereas the feeling you get from working on neuro or A.I. seems more dangerous. So far, the collaborative techs are being used to increase activist power and to decrease concentrations of power.

Another side to these collaborative techs is that they effectively increase individual intelligence, but without directly altering (or creating) minds (as neuro and A.I. would). For example, the internet lets one learn things more easily and keep track of important developments more easily. If the problem is that humans are too stupid to apply obvious solutions to social problems, then this sort of boost is a step in the right direction. But the singularity-type side effects aren't as strong, since you are not directly altering the fabric of minds (although the effect on social organization could still be great).

So, collaborative technologies also effectively increase individual intelligence, but with less singularity-ish downside. This is why I'm into wikis and such.