This is one of the numerous situations where people who haven't (or couldn't) pass a first-year course on the subject seem to think they're qualified to talk about it. Elon Musk referring to Rodney Brooks as "the Roomba guy" just about says it all. The sad lesson here is that scaremongering works: you can get otherwise intelligent people to say really stupid stuff if you style yourself as a prophet.
The real danger, I gather from the article's concern about underrepresented minorities, is that this genius comp-sci guy will invent a strong AI, but he will take the shortcut of programming his own thought structure into the machine, and it will carry all of his resentments and neurotic tendencies from a lifetime of experience as an underrepresented minority doing AI research.
The AI will then run amok, wiping out the Red Shirt who is ordered to pull the plug by going after the beamed power coupling, and then it will wreck a bunch of Federation Starships with serious casualties. But a Starship captain, who in a later career shills for an Appleton, Wisconsin personal injury law practice, will be able to "talk it down" before a Starship posse has to take it out.
We already have a lot of history that indicates that technological superiority is a real danger. In my view, strong AI has the potential to speed up technology development by a few orders of magnitude. This is particularly pronounced for virtual technologies, like building a better AI.
Artificial Stupidity is much more likely to wipe us out.
Intelligence is not an algorithm.