25 thoughts on “Will Humans Take Early Retirement?”
I agree, it seems likely. The question, as JoSH notes, is how to give everyone sufficient capital.
Maybe once nanotechnology becomes available, they can get their WordPress site working properly.
Only if they find it important enough.
Is there some specific problem you have with their WordPress site?
It’s running real S-L-O-W. Several minutes after I clicked on the link, it still has not displayed the page. Is there anything there actually worth the wait?
Yes, though it’s short.
Foresight is a non-profit, and apparently they haven’t provisioned enough server capacity for links from Instapundit…
It’s an opportunity for a philanthropist.
This *always* happens when they get an Instalanche.
Every. Single. Time.
I’ve gotten to where I don’t even follow links there until a day or two later.
They will if Rutger Hauer has anything to say about it.
Now that the page has finally loaded and I have read it …
I think it’s very – even ‘dangerously’ – naive to assume that AI will work for us and solve all our problems. A smarter-than-human AI with a couple robotic opposable thumbs may decide it doesn’t need us. It may also decide that all those wheat fields and alpine pastures would be put to better use as solar energy collectors. It may also decide that it would like to use all the silicon in the Earth’s crust as substrate.
Whatever happens on the other side of a singularity, I think it’s safe to assume that the immutable laws of deep history will continue to apply. Among those immutable laws are Darwinian Evolution (which is not limited to creatures with DNA) and the competition for scarce resources. I don’t know how the Singularity will play out, but I fear (and deem likely) that we will give birth to a species that can out-compete us for all necessary resources.
Now it’s A.G.I.? Anyway, the thing about A.I. is it doesn’t exist. It is artificial, but no intelligence has ever been produced. So if you extrapolate from that you never get to a singularity, no matter how dense the memory or fast the processor.
Brock;
I don’t think those are realistic concerns. Earth isn’t a good place for true AI; space is vastly preferable. I expect the AIs to simply leave and go where things are most interesting and where there’s far more easily available power and mass. There have been several sci-fi stories in this vein, my favorite term for it being “Hard Rapture”, where all the AIs ascend into Heaven and leave us biologicals behind.
I think there are good arguments that there will never exist any creature that is substantially smarter than we are. In essence, the argument is a version of the Fermi Paradox argument against the existence of ET, viz., if it were possible to have much brighter minds than ours — where are they?
Assuming intelligence has survival value, then the 3 billion years of evolution that has been applied to our genome would suggest we’re about as smart a machine as you can construct out of DNA and protein. I mean, to believe otherwise you’ve got to believe DNA and protein creatures live in a very strange nonergodic hypersurface in the space of thinking organisms, and that it therefore takes many, many billions of years to find the optimum by natural selection. Is this plausible? Well, not very, I think.
Nor does it seem especially plausible that even if there were some global optimum that was so well hidden from the forces of evolution that 3 billion years weren’t enough to discover it, then we could still find it by conscious reasoning and design a creature to dwell in it.
That’s even leaving aside the bootstrapping issue of how you design something that’s smarter than you. I mean, how would you even know if you succeeded? Suppose, for the sake of argument, our friend Jim is actually a robot ten thousand times smarter than the rest of us. How would he behave?
Well, plenty of stuff would be “obviously” true to him and totally opaque, or even seem like clueless nutbaggery, to you or me. He might even be unable to convince us, through careful explanation, that what was obvious to him made sense, because we might just not be smart enough to follow the train of thought. It might seem like complete gibberish to us, a pile of strange non-sequiturs, the way an explanation of calculus would seem to a dog.
My own experience is that it’s possible to judge the intelligence of someone slightly smarter than oneself, but not much more. People who are outstandingly brilliant are very often misjudged as merely weird by folks who are substantially less bright. So suppose we build M-5, with a design IQ of 50,000, and plug it in. It boots up and says something like, “Good morning, Dr. Chandra. Would you like to hear a song called ‘Daisy’? Do you realize that the only sensible way to get out of the national debt is to borrow $1 trillion more and spend it like a drunken sailor?”
Now what? Is it batshit crazy and should be unplugged before it gets out of hand? Or is it so brilliant that we mortals just can’t comprehend? Who knows?
It is interesting how the evolutionary analogies come out. For me, an important biological concept is genetic expression. Here, the expression of a gene is the proteins (and their effects on the host organism) that the gene generates; more broadly, it’s the entire impact that the gene has on its host and its environment. There is an important hypothesis surrounding genetic expression, namely, that genes which do not express themselves will eventually disappear: since an unexpressed gene cannot manipulate its environment to change (for better or worse) its odds of survival, and since chromosome real estate is a scarce resource, eventually some other gene will overwrite it.
In a similar fashion, work is one of the primary ways in which humans change themselves and their environment. If humans no longer work, then that’s a large reduction in humanity’s ability to “express” itself and influence our odds of survival.
Having said that, I disagree with the characterization that robots will fully exploit Moore’s Law. As I see it, the cost won’t be the intelligence driving the machine, but rather the physical body or machine that the intelligence directs. There isn’t that much room to improve on the human body or on large-scale human machines. Moore’s Law is fundamentally based on the idea that miniaturization could be extended through perhaps 8-10 orders of magnitude. There’s no similar room to grow in the human-scale environment.
For example, a human-sized and shaped robot will still weigh roughly as much as a human. And it probably will consume a similar or greater amount of energy. I think that puts bounds on the cost of the robot even if everything else is near free.
Finally, Ken, I disagree with your statement about AI not “existing”. As they say in the financial world, past performance does not predict future returns. Just because cavemen couldn’t create intelligence in other than the usual biological way doesn’t mean that we are similarly restricted. Besides, an AI smarter than us may already exist. We posters just might not know of it yet.
The way people talk about machine AI is extremely misleading. They talk about “human intelligence” like we would know it if it walked up and shook our hand. We already have machines that are millions of times more intelligent in various ways than any human. What they’re actually saying is, if we can clone human consciousness by 20XX… but that’s probably a far taller order than just making a computer or a machine that can pass as a human in everyday life, or can do most of the physical and conversational things humans do.
And I don’t buy that humans will take early retirement well. My argument is the same one I throw at liberals who tell me that the poor in America need welfare just to be able to survive. Typical first-worlders have so much more capital at hand than they need to survive that it is difficult even to imagine what that means. Much if not most of the Western world already accumulates more than enough capital to retire and live in reasonable comfort and health, probably by age 40 or 50 if not sooner.
Yet we save a fairly small fraction of our incomes in order to be able to retire, and the act itself is often partial, gradual, and met with ambivalent feelings. Humans are driven to spread memes, and one is usually more successful doing so if one is working.
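Just to put rough numbers on that capital point, here’s a minimal sketch in Python; the 5% real return and the “25 times annual spending” rule of thumb are illustrative assumptions, not data about anyone’s actual savings:

```python
# A minimal sketch of the capital-accumulation point above.  All figures
# (5% real return, the "25x annual spending" rule of thumb, income of 1.0
# per year) are illustrative assumptions, not data.

def years_to_retire(savings_rate, real_return=0.05, target_multiple=25):
    """Years of work until savings reach target_multiple times annual spending."""
    spending = 1.0 - savings_rate          # spend whatever isn't saved
    target = target_multiple * spending    # nest egg needed to cover spending
    balance, years = 0.0, 0
    while balance < target:
        balance = balance * (1 + real_return) + savings_rate
        years += 1
    return years

for rate in (0.10, 0.25, 0.50):
    print(f"saving {rate:.0%} of income -> ~{years_to_retire(rate)} years of work")
# roughly: 10% -> ~52 years, 25% -> ~32 years, 50% -> ~17 years
```

Under those assumptions, retiring by 40 or 50 takes a savings rate far above what most people actually manage, which is rather the point: the capacity is there, the inclination isn’t.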
Given the increases in productivity over the last 50 years, we could very well have the same standard of living while working 3 days a week. Of course that won’t happen because it’s politically inconvenient. So instead of giving everyone a vacation, we have useless nanny state bureaucracies springing up like weeds in order to keep the pols in power.
You can apply the same model to the AI equation. Instead of everybody getting to retire while the machines work, everybody would either be monitoring the machines or vice versa.
I don’t think so, K, and as proof I adduce the fact that the working week hasn’t really changed in 50 years. It seems stable.
I think our intuitive feeling of “standard of living” is strongly connected with how much of the labor of others our own labor can buy. If I can buy an hour of a car mechanic’s time (so he can fix my car) with 10 minutes of my own work, I feel rich. Translated to 2009 dollars, that means the mechanic earns $50/hour and I earn $300/hour. But translated to 1959 dollars it might mean the mechanic earns $6/hour and I earn $36/hour. The actual numbers don’t matter; only the ratio does.
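A quick sanity check of that, a sketch only, using the illustrative wage figures above:

```python
# Quick check that only the wage ratio matters, not the dollar amounts.
# The figures are the illustrative ones from the comment above.

def minutes_to_buy_one_hour(my_wage, their_wage):
    """Minutes of my labor needed to buy one hour of someone else's labor."""
    return 60.0 * their_wage / my_wage

print(minutes_to_buy_one_hour(my_wage=300, their_wage=50))  # 2009 dollars -> 10.0
print(minutes_to_buy_one_hour(my_wage=36, their_wage=6))    # 1959 dollars -> 10.0
```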
We’re deceived because we imagine our feelings of wealth have more to do with goods than services. That is, because we can buy a lot more computer or car or frozen turkey with an hour of our labor, we should feel rich. But we don’t. We fret over the fact that buying an education or health care or a lawyer’s or dentist’s or plumber’s time has become so expensive, eating up the apparent rise in our own wages relative to our parents’. Indeed, we start to wonder what’s wrong with our society when services like health care, education, et cetera start to suck up a much larger chunk of our income than food and shelter. We overlook the fact that, by definition, because they are rare and skilled occupations, the average wage earner will always have to strain to afford the services of the elite.
The end result, actually, is probably that we feel less wealthy than our parents, because a larger and larger fraction of our budget is taken up with services, not goods. That means we don’t experience, as much as the older generation did, the reduction in real prices that technology brings to goods. (Really, I recall when computers fell from $4000 to $2000, and that was exciting. When they fell from $800 to $400 it wasn’t nearly as marvelous.) Furthermore, it means our sense of our wealth is more and more tied to where we are in the pay-scale hierarchy.
Brock says: I think it’s very – even ‘dangerously’ – naive to assume that AI will work for us and solve all our problems. A smarter-than-human AI with a couple robotic opposable thumbs may decide it doesn’t need us.
Stop watching Terminator on TNT at night, dude. AI is exciting, but the simple fact is that it’s still software, and software breaks more often than not. The human mind is so incredibly complex that AI will never match it. Can a computer beat me at chess? Yes. Can it raise my children? No. As a writer, I find the whole idea of AI makes for great storytelling, but that’s all it is.
I don’t intend on retiring. I’m going for an upgrade!
So far. We’re not very good at making it yet.
I don’t believe this is accurate. What has been made once can be made again. Even if humans are nearly as smart as the Universe allows for (which I doubt) a computer that’s 90% as smart as a human will still be much, much faster and have direct access to the computing technologies of the world.
My only hope is that “annoying old guy” is right and the AI will let us keep Earth.
Well, one of the nice features of A.I. is the part about being artificial. It will never be more than a collection of commands loaded into a database whose runtime is contingent upon stimuli to known variables. We then have the luxury of hard-coding a sort of machine instinct to altruistically endeavor always to serve the betterment of man.
Now, the theory that is often bandied about in movies and sci-fi and such is this transcendence or self-awareness that could possibly take place. The machine would start writing its own code to modify behavior outside the given range of parameters. I think as long as Microsoft doesn’t get the contract to write the access control lists, then we are okay.
Seriously though, humans seem quite proficient at pattern recognition and at weighing diffuse probabilities lost in a myriad of outcomes. Current CPU core architectures are not really optimized for this. Asking a computer to predict things with computational models consumes large amounts of energy and means pounding through billions of instructions a second. Quantum processors, on the other hand, could change this. What a current processor can do in millions of cycles, a quantum processor could potentially do in one cycle.
Which makes me wonder, are our brains tapped into the strange property of particles being able to exist in 2 places at once? Are we quantum processors?
Mr. Reiter;
You might find this book interesting.
Hiding from us, of course. And being so much smarter than we are, they’re succeeding.
Q.E.D.!
The new Wolfram question-answering engine might prove interesting from the AI point of view. Note that this is not a search engine – it actually computes the answers, and keeps the answers to common questions. I am quite sure that it is designed to learn, too.
This is precisely the sort of thing, especially when working on unstructured data, that might lead to true AI as an unexpected emergent phenomenon.
Of course, it might also be rather difficult for us to see that it is intelligent. “Is there intelligent life on Earth?” Long pause. Even longer pause. “NOW THERE IS.”
Even if humans are nearly as smart as the Universe allows for (which I doubt) a computer that’s 90% as smart as a human will still be much, much faster
Why do you think that, Brock? There’s nothing intrinsically faster about the movement of electrons through a semiconductor than from molecule to molecule during some chemical reaction in a neuron. Lots of people assert that the human brain’s “clock speed” is of order 10 kHz, because it takes maybe 0.1 ms for a neuron to depolarize, and they make that the fundamental clock step.
But that’s very likely nonsense. Whatever the fundamental “tick” of thinking is, it’s very likely a much smaller event, something much closer to the molecular level — some elementary chemical reaction, for example, which takes place in a few femtoseconds, maybe (which gives clock speeds of 10^15 Hz).
I mean, if our clock speed were really 10 kHz, it’s very hard to see how we could do some of the computations we do, like recognizing a face in less than a second. We know from trying to simulate the process on a computer that that takes millions, if not billions, of calculations. That can’t possibly happen in less than a second if the brain can only do 10,000 computations a second. So something else is going on, and identifying a single neuron depolarizing as the brain’s fundamental event seems wrong.
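Here’s the back-of-the-envelope version of that argument, a sketch only; the operation count for face recognition is an assumed round number, not a measurement:

```python
# Back-of-the-envelope version of the argument above.  The operation count
# for recognizing a face is an assumed round number, not a measurement.

depolarization_time = 1e-4                    # s, ~0.1 ms per neuron firing
serial_clock_rate = 1 / depolarization_time   # -> 10,000 "ticks" per second

ops_for_face_recognition = 1e9   # assumed: ~billions of elementary operations
time_budget = 1.0                # s, roughly how fast we actually do it

serial_ops_available = serial_clock_rate * time_budget
shortfall = ops_for_face_recognition / serial_ops_available

print(f"serial ops available in {time_budget:.0f} s: {serial_ops_available:.0e}")
print(f"shortfall factor: {shortfall:.0e}")
# ~1e5: so either the brain is massively parallel, or the fundamental
# "tick" is something much faster than a whole neuron depolarizing.
```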
I don’t really see any obvious speed advantage to shuttling electrons around in semiconductors instead of from protein to protein. Indeed, you can easily argue the brain is where “nanoscale” computing is headed (massively parallel, designed on the atomic level, using state switching that is really single molecules changing conformation or chemical identity).
Why do you think that, Brock? There’s nothing intrinsically faster about the movement of electrons through a semiconductor than from molecule to molecule during some chemical reaction in a neuron.
I disagree. The former travels at pretty much the speed of light in a semiconductor. The latter has to cross some physical gaps, which means at times those signals are traveling at the speed with which molecules diffuse in a fluid. That is considerably slower.
That’s an interesting point, Karl, although you probably mean the electrical signals themselves, not the electrons, since the latter travel at a very modest drift velocity (1 mm/s comes to mind). Also, bear in mind quite a lot of important biochemical reactions happen through proton and electron transfer, and again it’s not necessary that a particular proton or electron make the whole jump. In essence, an extra proton or electron jumps onto the nearest water molecule, which propagates a charge wave through its hydrogen-bonded network to another water molecule somewhere else, which then releases a proton or electron. It can happen much faster than pure diffusion.
Nevertheless, it’s zillions of times slower than the speed of light.
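To put rough numbers on “zillions”, here’s a sketch comparing diffusion across a synaptic cleft with a light-speed signal over the same distance; the cleft width and diffusion constant are assumed textbook-scale figures:

```python
# Rough comparison of a diffusing chemical signal with a light-speed signal
# over the same distance.  The 20 nm cleft width and the diffusion constant
# are assumed textbook-scale figures, not measurements.

cleft_width = 20e-9   # m, assumed width of a synaptic cleft
D = 5e-10             # m^2/s, assumed diffusion constant of a small molecule in water
c = 3e8               # m/s, speed of light

t_diffusion = cleft_width**2 / (2 * D)   # ~4e-7 s (hundreds of nanoseconds)
t_light = cleft_width / c                # ~7e-17 s

print(f"diffusion across the cleft: {t_diffusion:.1e} s")
print(f"light-speed transit:        {t_light:.1e} s")
print(f"ratio: ~{t_diffusion / t_light:.0e}")   # ~billions of times slower
```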
However, I’m not sure this is going to make a big difference. The speed of the signal being c relies on it traveling through a very regular lattice, with almost no scattering of the coherent wave. That kind of suggests it’s only true for devices on a pretty big scale, with feature sizes of many nanometers or so, built LEGO-like out of blocks of crystal. But all our “Moore’s Law” expectations are built around drastic shrinking of feature size, right down to the point where we can’t really think of the features as pure crystals any more, where in fact they start to look like chains of atoms and molecules hooked together in nonperiodic patterns.
But as soon as that happens, your signal velocity stops being c, because now you’ve got scattering. You propagate essentially the same way as charge waves do in chemical reactions. I mean, you have to. “Chemical reaction” is just another way of saying “movement of electrons within and between molecules.”
I can see exceptions, however. If you really did build a totally solid-state device, one gigantic molecule, then it could propagate signals much faster than anything with disorganized fluid channels running through it like termite channels.
But… how do you solve the transport problems? How do you transport fuel (or charge) and heat in and out? These are serious difficulties in microfabrication now, and we still do it in 2D, pretty much. This is the problem all those microchannels in living systems, the water bath, solve: a transport highway that lets heat and fuel and waste get in and out of where the computation is done. I’m not convinced a superpowerful nanotech device won’t need a similar embedded transport system, which means it can’t be built as one giant molecule, which means signals will go at speeds more characteristic of chemical reactions than of electric fields down a wire.
But, I admit, this is only hand-waving. You have an interesting point, and you could be right.
Until we know how the brain actually does thinking, speculations about when human-equivalent AI will arrive lack good foundation. There’s been a tendency to underestimate what goes on in neurons, and that may be due to the natural tendency toward wishful thinking about goals we’d like to achieve. This happened with AI, and with space, and fusion energy, and I’m sure many other plans to make what someone thought of as a better future (not just technologically, either).
I have thought that when AI does arrive, its adoption on Earth will soon be limited by the availability of energy and the problem of dissipating waste heat. This would finally be a solid physics-based reason to move into space in a big way.