I think there’s cause to believe that strong AI could be scary dangerous, but also that there isn’t much point to the current debate. It’s a bit like climatology. Lots of claims are being made without new evidence introduced to distinguish between the testable hypotheses. Let’s develop some strong AI first and then see how things turn out.
There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.
The evolution argument simply denies this assertion. “But it’s billions of years!” As if that alone makes the impossible possible. “The car has no wheels, but give it time.”
If machine intelligence is ever achieved, it will be because programmers lent it some of theirs. However, if machines lack some qualitative requirement, which I believe they do, then no amount of time, speed, memory, data, or algorithms will bridge the difference. We can mimic intelligence (even in the very first machines), which confuses many into thinking we can realize the reality of it in the machine, when all along all we’ve done is use a machine as a window on human intelligence (which in turn may just be a window on god’s?)
Really, what’s it all about? Another feeble attempt to replace god. What else would this super-intelligent machine be, other than something lacking the qualities of god? God is love, power, wisdom and justice. You don’t get any of that from Moore’s law.
Super intelligence is not something new. It has always existed. That being true, it means we’re missing it by looking in the wrong direction. We have the capacity to be more than we are. The good news is we will never reach the end of the road. There is no A.S.I. singularity. The onion is infinite.
AI will become dangerous when it begins to go beyond its instruction set to make decisions. Otherwise it will always be subject to the control of its programmers, and therefore won’t be genuine AI.
Oddly enough, that’s also when government becomes dangerous.
Rand, when I tried to post the above from my phone, via the same wifi I’m using on my laptop, it was rejected with the following message:
Your comment has been blocked because the blog owner has set their spam filter to not allow comments from users behind proxies.
If you are a regular commenter or you feel that your comment should not have been blocked, please contact the blog owner and ask them to rectify this setting.
Obviously if my laptop isn’t behind a proxy on my home wifi network, neither is my phone. In both instances I was using Chrome — for Android on my phone, for Windows on the laptop.
I’ve gotten that complaint from other people, but I have no idea where the “setting” for that is.
AI is unlikely to be produced by programmers writing code, because any complex intelligence has to be able to reprogram itself to handle new things. It’s far more likely to be some kind of neural net or similar technology, where you have very little idea of why it does what it does, and little chance of restricting what it can do (other than by not connecting it to anything else).
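To illustrate the point: even a toy network that learns XOR ends up with weights nobody wrote and nobody can read. Here’s a minimal sketch, assuming only numpy (the architecture and numbers are arbitrary, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])           # XOR truth table

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                           # plain gradient descent
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # output-layer gradient
    d_h = (d_out @ W2.T) * h * (1 - h)           # backpropagated to hidden
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;  b1 -= 0.5 * d_h.sum(0)

print(out.round(2))  # should land close to [[0],[1],[1],[0]]
print(W1)            # the "program" it learned: a block of noise-like numbers
```

Nobody coded the XOR rule; it’s smeared across the weights, which is exactly why restricting what a bigger version of this can do is so hard.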
In fact, I strongly suspect the first AIs will be uploaded humans, because we’ll have a hard time doing from scratch what evolution has spent a billion years developing.
What about bugs in the programming?
We might start by classifying bugs. Is the code ever executed or even executable? Will it halt processing or just insert wrong data? Is it bounded or unbounded?
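A rough sketch of what each class might look like in practice (the names and examples here are made up, just to pin down the categories):

```python
def dead_code(x):
    if False:                # never executed: a bug that can't fire (yet)
        return x / 0
    return x

def halting_bug(xs):
    return xs[len(xs)]       # IndexError: halts processing outright

def silent_bug(total, n):
    return total / (n + 1)   # off-by-one: keeps running, inserts wrong data

def unbounded_bug(queue):
    while queue:             # never drains: unbounded in time, not just wrong
        queue.append(queue.pop(0))
```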
The article talks about an A.I. growing in functions toward a predefined goal, but that isn’t something a program is likely to ever do without some very specialized coding, which is so unlikely to happen as to be called impossible. It is nowhere near the same as a virus program that grows faster than intended because of a badly chosen parameter.
What I’m really trying to say is it’s very easy to describe in words some nightmare scenarios that are extremely difficult if not impossible to accomplish in reality.
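For what it’s worth, here’s how small the virus case is: one badly chosen constant, and intended growth becomes runaway growth (a toy model, all numbers invented):

```python
def spread(generations, copies_per_host):
    infected = 1
    for _ in range(generations):
        infected += infected * copies_per_host   # each host infects others
    return infected

print(spread(30, 1))  # intended: doubles each generation, 2**30 hosts
print(spread(30, 3))  # one mistyped parameter: 4**30, about 2**30 times worse
```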
How could a machine go beyond its instruction set (which defines it)?
It could be damaged, which means an instruction would either produce different bits as a result or branch to the wrong location. Those are the only two possibilities. It could also be flaky, sometimes producing the right result and other times not.
If not flaky, then it’s simply a different processor. If flaky, I’d put all my money against Johnny Five.
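A toy model of those two failure modes, plus flakiness (nothing here maps to real hardware; it’s just to make the distinction concrete):

```python
import random

def execute(op, a, b, damaged=False):
    handlers = {"ADD": lambda x, y: x + y,
                "SUB": lambda x, y: x - y}
    if damaged and random.random() < 0.5:
        a ^= 1 << random.randrange(8)   # damaged data path: wrong bits out
    if damaged and random.random() < 0.5:
        op = "SUB"                      # damaged branch: wrong location
    return handlers[op](a, b)

print(execute("ADD", 2, 3))                  # 5, every time
for _ in range(5):                           # "flaky": right answer sometimes
    print(execute("ADD", 2, 3, damaged=True))
```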
The gap between President Obama and the rest of us is the same (or larger) as the gap between us and the chimps. That explains why some people can still oppose Him. They simply cannot grasp the products of His super-intelligent mind, or the wonders He has wrought…
Figging toasters!
AI will be a problem when some idiot programs an AI to be a problem. People keep thinking about the unintentional Skynet or Cylon, or whatever.
The ones to worry about are those that are specifically programmed to be evil….
Today’s virus is tomorrow’s killer AI.
I figure strong AI has the potential to be really dangerous under the same conditions that made any of history’s madmen dangerous.
Fascinating. Thanks for the link, Rand. Cheers –