
Slashdot discussed an article on the effect of widespread robots on the economy. The article is okay, and the (high-rated subset of the) Slashdot comments are even more interesting.

The article basically postulates that robotics will increase the concentration of wealth by throwing lots of people out of work, to the point where the corporations and the rich will have lots of money and those people who today work as unskilled labor or customer service will have nothing at all.

This outcome is undesirable, since the roboticization of most tasks should theoretically permit increased leisure for everyone. He recommends a government-provided income (i.e., giving every citizen $25,000 a year or something).

Again, the article itself is okay, but I also recommend the Slashdot comments.

Article:

http://www.marshallbrain.com/robotic-freedom.htm

Slashdot comments:

http://slashdot.org/comments.pl?sid=76775&cid=0&pid=0&startat=&threshold=4&mode=thread&commentsort=0&op=Change


---

If we can't fix our social system, do we really want our civilization to live forever?


Wired, Bill Joy, "Why the Future Doesn't Need Us"

---

> Are your convictions about the universe changing so fast? Do we really know so much more so as to make our mental model of the world obsolete every day? I don't think so.

No, they aren't changing so fast (I think more because of my personal limitations than because of humanity's understanding of mathematics and physics, though!), and that is a good point. But there is a large area between the noise-change that you described above and this sort of philosophical change. I think important technological change may start happening much faster.

The technological-development car can move faster than we can track it. The only metaphor I can think of is the inflation theory of the universe, which is a poor choice because I don't know physics. But what I mean is: it would be theoretically possible for, say, one group of people to build a space elevator at the same time that another group starts cloning large numbers of humans while a third develops teleportation of small objects, with hardly any of the three groups being aware of the others (ludicrous in today's world, but in this postulated future scenario there are so many amazing developments that no one can keep up).

To get an idea of what this might feel like: a couple of days ago on nytimes.com there was an ad (is it called a "leader"?) on the front page for <a href="http://www.nytimes.com/2002/10/29/science/space/29COSM.html">this story</a>. I can't remember the exact wording, but it was something like "Some scientists now believe that universes are constantly budding off of each other at a geometric rate." Now, that is not too surprising to me today, but imagine how much you would have flipped out if you read that in 1900. If you wanted to make up some ridiculous-sounding sci-fi "newspaper headlines from the year 3000", this is the sort of thing you would expect to see. But this is the front page of the nytimes, in real life!


  1. Bayle Says: October 31st, 2005 at 2:34 am

Personally, I believe the singularity is not due for 100 years or more. Why?

So I think the singularity is not near. But it may (or may not) happen eventually, and it might be a good idea to start thinking about and planning for it now in case it does.

One good argument that the “singularity is near” camp has, though, is that if there are 500 crazy things being developed, and the chance of any particular one of them coming through within our lifetime is low, it’s still possible that the chance of AT LEAST ONE of them coming through soon is high.
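The arithmetic behind that argument is worth spelling out. Assuming (a big assumption) that the developments are independent, the chance that at least one of n long shots pays off is the complement of all of them failing, and it grows quickly with n. A quick sketch with invented numbers:

```python
# If each of n independent long-shot technologies has a small probability p
# of "coming through" in our lifetime, the chance that AT LEAST ONE does is
# 1 - (probability that every single one fails). The numbers below are made
# up purely for illustration; the independence assumption does a lot of work.
def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

# 500 crazy developments, each with only a 0.5% chance:
print(p_at_least_one(0.005, 500))  # ~0.92, a near-certainty
```

Of course, real developments are correlated (shared funding, shared enabling technologies), so the true number is lower, but the shape of the argument holds.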

  2. Bayle Says: October 31st, 2005 at 5:29 am

Stephen: I think there are multiple types of “singularities”. I think that either A.I. or augmented human intelligence could lead to a situation that counts as a singularity. Basically, when you create a positive feedback loop in intelligence augmentation, that’s a singularity. So you have a smart computer that can program smarter computers, or people who can make themselves smarter (and then, since they’re smarter, they can figure out how to make themselves even smarter…). The singularity folks consider the rise of civilization as itself a singularity seen on a longer timescale. Another way of looking at it: AI or neuro-based intelligence augmentation will be the tail end of the “civilization” singularity.

BTW, here’s the introduction of the singularity concept (by Vernor Vinge, who is also, btw, an awesome sci-fi author): http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html

  3. Bayle Says: November 3rd, 2005 at 5:11 pm

So if we see a linear trend, what should cause us to assume that this is the “linear beginning” of an exponential (or at least S-shaped) curve? Wouldn’t it be simpler to extrapolate based on the assumption of a linear trend?
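This is easy to see numerically: the early stretch of an exponential is nearly indistinguishable from a straight line. A small sketch (the curve and the fitting window are chosen arbitrarily for illustration):

```python
# Fit a straight line to the early part of exp(0.1 * t), t = 0..10,
# and measure how badly the line misses the exponential.
import math

ts = list(range(11))
ys = [math.exp(0.1 * t) for t in ts]

# least-squares slope/intercept computed by hand (no numpy needed)
n = len(ts)
mean_t = sum(ts) / n
mean_y = sum(ys) / n
slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys)) / \
        sum((t - mean_t) ** 2 for t in ts)
intercept = mean_y - slope * mean_t

# worst-case gap between the best-fit line and the exponential
max_err = max(abs(slope * t + intercept - y) for t, y in zip(ts, ys))
print(round(max_err, 3))
```

Over this window the best-fit line misses the curve by only about 5% of the curve's final value, so extrapolating linearly is entirely reasonable until much more data arrives.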

But in my mind, my argument against it is mostly heuristic. Basically, people have predicted before that “everything would change” in short order because of technology. But what happened before is that the “tone” of human life and society didn’t change, or rather changed only over a period of multiple generations. My intuition is therefore that the “ordinariness” or “banality” of the world (one manifestation of this is “the complexity and difficulty of making any really neat invention actually work”) is a strong force. Basically, I think that people often postulate, on the basis of deductive-style thinking starting from an “axiom” of the possibility of a new technology, that everything will change. But the world’s banality/muddiness makes any single possibility less important than it seems to be from the perspective of deductive reasoning.

Therefore, my heuristic is: if you’re predicting that the basic “tone” of life will radically change in a short period of time, you’re probably wrong. Also, if you’re predicting that some technology will change everything, you’re probably wrong — often the number of issues that need to be worked out is underestimated by orders of magnitude (consider that a few decades ago, “machine vision” was assigned as a summer project to a single undergraduate by one of the fathers of A.I. — sadly I forget the details of this story), and even after the technology “comes to fruition”, there are tons of kinks (technological, economic, and social) that have to be worked out before it can be used the way it was intended (why don’t we have ubiquitous computing yet? Why don’t I have a PC in my pocket, sunglasses-screens, and finger-position sensors, despite prototypes of all of these having been around for a while? “Kinks”).

  4. Bayle Says: November 4th, 2005 at 4:05 am

Hmm, I guess I just feel that if you were able to plot “the tenor of life”, or “how fast people’s lives seem to be changing”, or “how much technology is altering the way people actually live”, the same way Kurzweil plots raw computing power, you’d find that the curve wouldn’t predict dramatic changes for over a century. I think that “the tenor of life” would be some formula dependent on linear social and economic-paradigm factors (by “economic paradigm” I mean not “rate of GDP increase” but rather things like “the speed at which people become comfortable with new economic ideas, such as the idea of being paid wages to do a job, or the idea of insurance”) as well as exponential technological factors, and that the linear factors are much stronger.

I guess I do think that perhaps eventually the technological factors will be so high that they will dominate even the “stronger” linear factors and cause a singularity, but I just don’t see it happening so soon — this is all intuitive, of course. But my intuition about this is pretty strong, and since it gets “common sense” points I weight it more heavily than Kurzweil’s purely intellectual argument. His argument is, at root, based on looking at the shapes of graphs of the advancement of various technologies and assuming that other graphs will have the same shape.
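A toy version of this picture (entirely my own construction with invented parameters, not anything Kurzweil proposes): a "tenor of life" driven by a strong linear social term plus an initially tiny exponential technological term. The exponential is invisible for roughly a century of model time, then abruptly takes over:

```python
import math

def tenor_terms(t, social_rate=1.0, tech_scale=1e-6, tech_growth=0.15):
    """Return the (linear social, exponential tech) contributions at time t.
    All coefficients are invented purely for illustration."""
    social = social_rate * t                       # strong but linear
    tech = tech_scale * math.exp(tech_growth * t)  # tiny but compounding
    return social, tech

for t in (10, 50, 100, 150):
    social, tech = tenor_terms(t)
    print(t, round(social, 1), round(tech, 1))
# the tech term is negligible at t=10 and t=50, still small at t=100,
# and utterly dominant by t=150: the crossover, when it comes, is sudden
```

The point of the sketch is only qualitative: a curve like this looks boringly linear right up until shortly before it doesn't, which is consistent with both "no singularity soon" and "a singularity eventually".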

(Other graphs? Besides the computing-power graph? Yes: most notably, Kurzweil must assume that our ability to write intelligent algorithms that make use of the computing power grows quickly. In the sections of http://www.kurzweilai.net/articles/art0134.html?printable=1 called “The Software of Intelligence”, “Reverse Engineering the Human Brain”, “How to Use Your Brain Scan”, and “Downloading the Human Brain”, he presents examples of projects along these lines that lead his intuition to conclude that the software is within reach in our lifetimes. My intuition disagrees; I think those same projects are much more preliminary and farther from the goal than he seems to — but again, notice that this is just my intuition against his.)

More on the “tenor of life” argument: what does that have to do with the rate of technological advancement? Well, to be honest, my intuition is simply that “the tenor of life wants to change only slowly” — I am taking that as an axiom, finding that a singularity contradicts it, and then concluding that a singularity is not possible. However, I can manufacture a connection: for technological advance to cause the rate of technological advance to itself increase, I postulate that the technological advance has to cause society to change somewhat, to become more efficient. But the speed of that feedback loop is limited by the rate of social and economic-paradigm change. (For example, what if the internet enables a new economic organization in which “virtual corporations”, social networking, and consulting are the norm, rather than large corporations and conventional long-term employment, and the new form turns out to be drastically more profitable? What would eventually happen is that society would switch to this new form, and the greater profitability would enable more research, which would raise the rate of tech advancement. But this switch is likely to be slow because of the rate-limiting effect of social and “economic paradigm” change.)
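The rate-limiting idea can be sketched as a toy simulation (again, every parameter here is invented): technology "wants" to compound, but the usable portion of the feedback at each step is capped by how much change society can absorb, and that absorption capacity grows only linearly:

```python
# Toy model of a socially rate-limited feedback loop. Technology T would
# compound at 10% per step on its own, but the growth it can actually
# realize each step is capped by society's absorption capacity S, which
# grows only linearly. All numbers are invented for illustration.
def simulate(steps=200, social_rate=0.01, rate_limited=True):
    T, S = 1.0, 1.0
    for _ in range(steps):
        growth = 0.10 * T
        if rate_limited:
            growth = min(growth, S)  # social absorption is the bottleneck
        T += growth
        S += social_rate  # paradigms (wages, insurance, ...) change slowly
    return T

print(simulate(rate_limited=False))  # pure compounding: astronomical
print(simulate(rate_limited=True))   # capped: orders of magnitude smaller
```

With the cap removed the model explodes exponentially; with it, growth in the long run is closer to quadratic, because the bottleneck, not the technology, sets the pace.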

I\u2019m not totally ruling out the chance that I\u2019m wrong. I think there\u2019s probably a 10-20% chance that I\u2019m wrong about everything and that there will be a singularity in our lifetimes. It would be neat, if so.