Posted 2008-03-01 21:11:26 by
Charles Stross's recent blog post about an article in The Economist has me talking people's ears off about the technological Singularity again. This is, in part, because it provides me with another angle from which to approach the topic, for people who didn't understand the augmented and/or artificial intelligence approach. Viz:
Extrapolate the curve Stross describes. The current problem of the SF writer, that it's now hard to write plausible fiction set more than 15 years in the future, will soon be the problem of the venture capitalist, who will be unable to plan effectively even one year into the future. And soon after that, it will affect people trying to plan their day-to-day lives. Progress will just be happening too quickly.
People I've spoken to have argued that thinking about the Singularity as a tipping point doesn't really make sense, because it's been getting continuously harder to make good predictions about the future. In the past I didn't really have an answer for them, but I'd now argue that having difficulty making good plans for the weekend due to the rate of technological progress is a difference in kind.
Of course, what makes that scenario implausible is that like most things exhibiting exponential growth, human progress is limited; in this case, limited by the very cognitive ability we're using as a measuring stick -- how can you make improvements on something you don't comprehend? You probably wouldn't even be able to define “improvement.” I suppose that's where the augmented and/or artificial intelligence necessarily comes in. And that's where the caveat about the Singularity possibly not being particularly utopian comes in.
Those aren't really topics I feel like I can do justice right now, but here's some light reading on the subject if you're interested. Life is only the prologue to intelligence!
|Me:||“I'm going to go shopping. Frikkin' basic human needs. There'll be none of that shit after my consciousness is transferred over to Computronium.”|
|Danny:||“I'm counting the days.”|
|Me:||“Over 10,000 so far. If I hit 15,000 and I'm still flesh and blood, I'll be really disappointed.”|
| Posted by Anonymous on 2008-03-01 21:33:36 |
Interesting. But if you cannot define the "progress" or "improvement" quality of a change, it suggests you cannot perceive it. If mankind is incapable of perceiving this quality of a change, it follows that it is not a valuable change. I would suggest that improvement and progress are self-centered terms that necessitate subjective definability. Perhaps, if you cannot define them, you should make up a new word for the quality of change that the singularity will bring about?
|re: Yada Yada...|
| Posted by Jim Crawford on 2008-03-01 22:48:48 |
After Tim posted the comment above, we took it to IM:
Me: Maybe I should change my captcha text to "sentient."
Tim: Agree :)
Me: Would probably weed out the bad spellers, too.
Tim: Need to make it an image.
Me: Nah, it's been like this for about three years and I haven't gotten a single spam. Sometimes raising the bar a couple inches off the floor is good enough.
[Tim makes the post above]
Me: Right, I would address that point by turning to augmented intelligence.
Tim: The augmented intelligence would need to define it for us then. It could define "smelling more like cheese" as improvement.
Me: That would be the artificial intelligence -- the augmented intelligence would be us.
Tim: I see. So we create the AI, it defines improvement, we augment to perceive it...
Me: Defining improvement as "constantly playing pranks on the puny hu-mans."
Tim: It does seem implausible I think.
Me: So pessimistic :)
Tim: I think humans can't truly benefit from an improvement they aren't able to perceive without help.
Me: Well, I fully intend to take all the help I can get.
Tim: It's like saying "This pizza contains an amazing chemical that makes it taste really good! But you can't taste it with your taste buds." So you invent a system for converting that taste into something you can perceive. But there's nothing to stop you from converting anything to taste really good.
Me: Well, I'm thinking more along the lines of intellectual growth -- basically, all the stuff that we can't currently conceive because of our puny hu-man brains.
Tim: Right. My point is more that you might as well make sitting around -- or working for our robotic overlords -- feel like improvement. Augmented intelligence should make us happy doing anything.
Me: That's what drugs are for.
Tim: Sure. See Soma for the idealized dystopian version :)
Me: But -- this point is from the light reading I recommended, actually, so I'll quote directly:
| ||"The Singularity holds out the possibility of winning the Grand Prize, the true Utopia, the best-of-all-possible-worlds. Not just freedom from pain and stress or a sterile round of endless physical pleasures -- not that I'd object to a few thousand years of hedonism, but it would get boring eventually -- but the prospect of endless growth for every human being: growth in mind, in intelligence, in strength of personality; life without bound, without end; experiencing everything we've dreamed of experiencing, becoming everything we've ever dreamed of being; not for a billion years, or ten-to-the-billionth years, but forever... or perhaps embarking together on some still greater adventure of which we cannot even conceive."|
Me: This is presuming that augmentation wins out over AI, or that the AI is friendly. There are also less utopian visions. But he addresses those too:
| ||"Even if we somehow knew for a fact that any superintelligence would exterminate humanity - including me and all other Singularitarians, of course - this 'Shiva-Singularity' might still be a goal that all of humanity could share. Dying in the creation of something better strikes me as significantly less pointless than dying of old age or nanowar. As attractive possibilities go, this one is significantly less attractive than Apotheosis; but sometimes, choosing the best available action doesn't mean that any of the available actions are good. Think about the possibility that there might be a better world but that we are absolutely barred from it, no matter how unpleasant this possibility is - because once you've confronted this possibility and thought about it openly, it loses a lot of its "scare power", and you become a more confident futurist as a result."|
Tim: We think experience gives us satisfaction because experience at some level stimulates chemical reactions. There's nothing sterile about endless physical pleasures -- pleasure is physical. I don't see a need to go through with this elaborate uploading and preservation.
Me: Fair enough. Preservation in general is a fear of death thing. Uploading specifically, though, makes augmentation much more feasible.
Tim: That is an interesting angle on it -- "Dying in the creation of something better strikes me as significantly less pointless than dying of old age or nanowar." -- The human experience. That's why we make babies, and that's why it seems better. You want them to learn to play the instruments you never could, to be famous and powerful and, essentially, superior to you in every way possible. Humans are wired to die happy as long as they have made a wonderful child. Perhaps for some, the singularity tickles that neuron just right.
Me: Yeah. I identify a lot more as an intelligent entity than as a human being. So the idea of machines taking over, if they're smarter than us... doesn't seem so bad.
Tim: Yeah. I think I can get down with that in some ways.
Me: But I'd really prefer to be in on it :)
Tim: There are aspects of it that I don't like. Limitations and challenges are what make life satisfying. When you talk about the perfect utopia, I think it's an impossible goal, except at a chemical level.
Me: An impossible challenge, you mean? A limitation? :)
Tim: Perhaps that's why it's worthy of fascination ;) But more to my point, human satisfaction is based on overcoming limitations.
Me: Right, and if we aren't in on it, if our limitations and challenges are solved for us, and we aren't smart enough to perceive the new ones that our machine children can see and solve because they're so much smarter than us -- then game design becomes crucial. I see games as attempts at building utopias. Not utopias that you live in -- though there's World of Warcraft, I think it's pretty limited and pandering -- but utopias that you visit.
Tim: Hmm. So games present challenges, because we don't have enough left in the real world. Fascinating... I shall think on this.
Me: This is great, a conversation like this is exactly what I was hoping to get out of that post :)
Tim: Awesome! Pleased to be a part of it ;)
| Posted by Anonymous (your metal friend) on 2008-03-02 03:29:27 |
I am sentient.