I have no idea why Hanson assumes that robots would abide by human laws, or treat us nicely once they don’t need us.
Even the most powerful interests in the world (governments and large corporations) need humans to run them (and tax/sell products to). They are run for someone’s benefit. A nation of robots, able to reproduce and repair without human input, wouldn’t need us.
Further, when robots are sufficiently ubiquitous, independent and powerful, they become resource competitors, and the Matrix/Terminator scenario isn’t crazy. It’s logical that robots would “repurpose our carbon” as easily and with as little care as we mulch garden plants and set traps for mice. Consider that we’re using all those Iowa fields to grow corn when they could hold solar panels to feed the robots directly.
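A minimal back-of-envelope sketch of that acre-for-acre point, assuming round numbers rather than measurements (crops fix on the order of 1% of sunlight as chemical energy, while commercial panels convert roughly 20% to electricity):

```python
# Back-of-envelope: one acre of corn vs. one acre of solar panels.
# All figures are assumed round numbers, not measurements.
AVG_INSOLATION_W_PER_M2 = 200   # rough year-round average (assumed)
CORN_EFFICIENCY = 0.01          # sunlight -> chemical energy (assumed ~1%)
PANEL_EFFICIENCY = 0.20         # sunlight -> electricity (assumed ~20%)
ACRE_M2 = 4047                  # square meters per acre

corn_kw = AVG_INSOLATION_W_PER_M2 * ACRE_M2 * CORN_EFFICIENCY / 1000
panel_kw = AVG_INSOLATION_W_PER_M2 * ACRE_M2 * PANEL_EFFICIENCY / 1000
print(f"corn:   ~{corn_kw:.0f} kW per acre as food energy")
print(f"panels: ~{panel_kw:.0f} kW per acre as electricity")
print(f"panels deliver ~{panel_kw / corn_kw:.0f}x more usable power")
```

Under those assumptions the panels win by roughly a factor of twenty, which is the force of the “feed the robots directly” point.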
Hanson’s students understand what “the academic” has forgotten – economics is nice, but evolution rules. His students would seek to disempower robo-Hanson for the same reason I’d want to take down a grizzly in my kitchen, or termites in my foundation — the robot is a competing life form. They’re afraid of human beings going extinct.
“Preferring Law To Values” is the problem with America today. Laws are derived from the values of those who write them.
Rules (or laws) have definite limits, which is why we need judges (and judgment) to handle the details that laws are unable to cover.
When people share values the need for law diminishes.
Robots should be treated like any other machine, UNTIL they can prove empathy. Sentience isn’t enough. Prisons are full of “thinking, aware” people; I’ve volunteered with them.
Until empathy can be proved, they’re the same as cars, planes, trucks and microwave ovens. All those things have computers in them now, and do some “thinking” by way of firmware and programming. But they’re not ready to join the human race as equals or partners either.
I’m not competing with a microwave for a job or a mortgage!
Not unless it’s going to lose some sleep over beating me in said competition.
You’d think it would occur to people that trying to enslave and repress intelligent folks is not only likely to provoke a nasty reaction, it historically degrades a society. I.e., if you openly consider one class of “people” reasonable to oppress as “non-human”, you tend to see yourself as an elite deserving of more rights than other classes of people.
Sounds like the backstory for the Terminator movies, not a basis for a social contract with (virtually inevitable) AIs.
> Sharpen the Guillotines Says:
> October 19th, 2009 at 2:38 pm
> I’m not competing with a microwave for a job or a mortgage!
You’ve been competing with machines for jobs for generations. If you aren’t cost-competitive, eventually you won’t win the competition.
Asking humans what kind of robots they prefer is a lot like asking what sort of small furry mammals the dinosaurs preferred, no?
There’s a name for people who DO want to live in a world where everyone else is short, weak, sickly, and poor just so they can — without effort or achievement — be above everyone else: Socialist.
The thing is, ruthlessly stamping out the competition doesn’t make you competitive once you can no longer do that. If we fail to evolve to keep up with robots, then that is a failure on our part.
Mentats, sí.
Colossus, no.
Mentats – no.
Cyborgs – likely.
After all, we can hardly do any job, from engineer to truck driver, without computer augmentation. Implants could save a lot of pocket space and carpal tunnel.
😉
Robots with evolutionary motivations would, I expect, very quickly move to space – they could probably adapt to it better than us. I suspect they would largely leave us to Earth – so long as we took good care of it.
We humans go to some trouble to look after life less capable than ourselves. I would expect similar behavior from robots – for similar reasons.
One I can shut off.
At the rate our politicians are going, I’d be willing to robotically automate Congress.
> Robots with evolutionary motivations would, I expect, very quickly move to space – they could probably adapt to it better than us. I suspect they would largely leave us to Earth – so long as we took good care of it.
The first supposition, while likely, doesn’t lead to the second one. Life forms do not leave a habitat they can survive in, even if there’s a better one next door. Do humans refuse to live in Moscow just because the Mediterranean coast is nicer? Obviously not.
Anywhere life can extract resources and reproduce, it does so. And all that silicon in the Earth’s crust is just so many computer chips waiting to be born.
Just think about it – would you like to build a million O’Neill cylinders for humanity to move out into the cosmos in? Of course you would. So why wouldn’t a super-intelligent AI want to do the same thing, and use the entirety of the Earth and the Moon as the materials source?
I’m with Andrea – I’m fine with any robot that has a prominent on/off switch within easy reach. And I’d test it regularly to make sure it hasn’t been sabotaged.
> Life forms do not leave a habitat they can survive in, even if there’s a better one next door. Do humans refuse to live in Moscow just because the Mediterranean coast is nicer? Obviously not.
Actually, many places do have declining populations.
AI robots will want energy and resources, and Earth is poor in the former. The capacity of Earth to support robotic life is also negligible compared to the rest of the solar system – I would expect them to turn Earth into a museum of sorts where they could preserve and study organic life. Just as we feel compelled to preserve archaic ecosystems.
They will want a full and diverse ecosystem; they will want biological life around, even if only to a small extent. For example, biological life may be far more immune to computer viruses, which may be a capability worth keeping around.
> Pete Says:
> October 20th, 2009 at 7:29 pm
>> Life forms do not leave a habitat they can survive in, even
>> if there’s a better one next door. Do humans refuse to live
>> in Moscow just because the Mediterranean coast is nicer? Obviously not.
> Actually, many places do have reducing populations.
Like virtually all the industrial world. Parts of Europe and Asia expect a 90% population reduction in three generations.
> AI robots will want energy and resources, and Earth is poor
> in the former. The capacity of Earth to support robotic life
> is also negligible compared to the rest of the solar system ==
true
>== – I would expect them to turn Earth into a museum of sorts
> where they could preserve and study organic life. Just as we
> feel compelled to preserve archaic ecosystems.
Maybe – maybe not.
They likely will want some contact with us and Earth’s civilization (assuming we don’t move on too). Our minds work very differently from AIs’, so working together, and merging, is beneficial. But what desires and tastes for scenery the resulting beings would have is really hard to speculate about.
They might still be human-like, want to look and feel human, and live in an area we would think of as pretty. They might be totally machine-like, with no interest in such things at all. They might prefer a bio-free lunar or asteroid surface.
Hell, even among humans some love the open plains – others can’t stand them and need mountains or forests. Others get queasy outside of cities.
>== For example, biological life may be far more immune to
> computer viruses, which may be a capability worth keeping around.
That’s like us keeping insects around the house because they don’t get cancer. What benefit is that to us?
I doubt anyone’s going to see this besides Rand, but this cartoon is funny and on topic. Check it out.
Pete: I don’t think it’s wise to ascribe any motivations to AI apart from Darwinian survival. For one, AI will no doubt come in many flavors – some may be as you surmise, but others may be genocidal, or indifferent to our extinction. For two, do you really want to bet the survival of our species on a hunch?
Cute cartoon.
As to a hunch: what are the odds of us surviving as a species without AI? What are the odds we could actually prevent everyone from ever building an AI? Home computers will soon match and exceed the capacities of the brain, and hackers will tinker.
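A minimal sketch of the arithmetic behind that “soon”; every figure below is an assumed, commonly cited, and much-disputed estimate rather than a measurement:

```python
import math

# Back-of-envelope for "home computers catch up to the brain."
BRAIN_OPS_PER_SEC = 1e16     # rough Moravec/Kurzweil-style estimate (assumed)
PC_OPS_PER_SEC = 1e11        # ~100 GFLOPS desktop, circa 2009 (assumed)
DOUBLING_PERIOD_YEARS = 1.5  # Moore's-law-style doubling period (assumed)

doublings_needed = math.log2(BRAIN_OPS_PER_SEC / PC_OPS_PER_SEC)
years = doublings_needed * DOUBLING_PERIOD_YEARS
print(f"~{doublings_needed:.0f} doublings, i.e. roughly {years:.0f} years")
```

Under those assumptions the gap closes in a couple of decades; change any input and the date moves, but not the conclusion that it eventually arrives.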
Really, our only real chance is to preemptively develop a powerful AI that would realize destroying humans and their civilization is counterproductive, and that would be able to hunt down and control rogue AIs.
> Really, our only real chance is to preemptively develop a powerful AI that would realize destroying humans and their civilization is counterproductive
But would it be counterproductive? Why do you assume that? No humans = more sunlight & silicon for robots. Even though energy is more abundant in space than on Earth, we’ve got most of the mass down here in this big ol’ gravity well. And you need both to make baby robots with.
I actually agree about the inevitability of AI, though. Even if no one sought to make one, given enough powerful computers AI would probably happen by accident eventually. Someone might start with an automata program simulating single-celled organisms, have them “escape” onto the net, and then evolve from there.
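A toy sketch of that evolve-from-there idea, for the curious; the genome encoding, fitness target, and parameters here are all hypothetical, not any real artificial-life system — just imperfect replication plus selection:

```python
import random

# Toy digital "organisms": a genome is a list of bits, copying is
# imperfect, and the fitter half of the population replicates.
GENOME_LENGTH = 16
TARGET = [1] * GENOME_LENGTH   # arbitrary "environment" to adapt to (assumed)
MUTATION_RATE = 0.05           # per-bit copying error rate (assumed)

def fitness(genome):
    # How well the organism matches its environment.
    return sum(g == t for g, t in zip(genome, TARGET))

def replicate(genome):
    # Copying is imperfect: each bit may flip. That copying error is
    # the heritable variation evolution needs.
    return [g ^ 1 if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(50)]
for _ in range(100):
    # Selection: the fitter half survives and replicates; the rest die.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [replicate(random.choice(survivors))
                              for _ in range(25)]

print("best fitness after 100 generations:",
      fitness(max(population, key=fitness)))
```

Nothing in the loop “wants” anything, yet the population reliably climbs toward the target — which is the sense in which an accident plus selection pressure could go places nobody planned.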
Personally I think the only way “out” of this box is to “rise to the occasion” by augmenting biological with silicon intelligence (direct brain interfaces), while keeping the biological the ultimate decision-maker. You’d need the processing power to stay ahead of the AI, but stay human at the core. I’m not sure what the QoL is in that scenario, but it’s better than getting left behind and having some godlike AI take over society and make decisions for us.
> But would it be counterproductive? Why do you assume that?
> No humans = more sunlight & silicon for robots. ==
Trade’s generally more productive than wars of genocide, and there’s plenty of raw materials – vastly more than any of us can use. A global war would destroy current civilization – which is a huge and far more limited resource.
Generally, when different species are in the same ecology and don’t get in each other’s way – much less are mutually beneficial – they don’t try to wipe each other out.
>== Even though energy is more abundant in space than on
> Earth, we’ve got most of the mass down here in this big
> ol’ gravity well. And you need both to make baby robots with.
Solar’s not really a competitive power source now, much less after major AIs develop. Again, resources are plentiful – we as a market could be valuable, assuming we have anything to contribute. If we don’t, we’re doomed regardless of what they do.
> I actually agree about the inevitability of AI though. Even
> if no one sought to make one, given enough powerful
> computers AI would probably happen by accident eventually.
> Someone might start with a automata program running
> single-celled organisms and have them “escape” onto
> the net, and then evolve from there.
Interesting idea about the accidental AI. Could be plausible.
> Personally I think the only way “out” of this box is to
> “rise to the occasion” by augmenting biological with
> silicon intelligence (direct brain interfaces), ==
Agree that merger would likely also be inevitable. It would just be too useful for us. Otherwise the non-augmented would be like illiterates in modern society.
I’m holding out for the robot girlfriend. Trust me, there will be a day. My greatest fear is that it will come too late for me. I’ve had plenty of human girlfriends, mind you, but I would definitely be game to subscribe to “The Church of Appliantology”*. I’m sure, like everything else, there would be tradeoffs, but how bad could it be?