Rob Bailey says that he’s mistakenly using the precautionary principle.
I’ve discussed this logical fallacy in the context of climate change.
Perhaps. But he isn’t the only one having nightmares about it.
I think any AI which has neural feedback i.e. which can learn and act on what it learned without supervision is potentially dangerous.
Even if it isn’t acting on its own desires it can be dangerous. Like:
https://xkcd.com/416/
“Perhaps. But he isn’t the only one having nightmares about it.
I think any AI which has neural feedback i.e. which can learn and act on what it learned without supervision is potentially dangerous.”
Yes. And upgrade itself at will, becoming smarter at an exponentially increasing rate relative to clunky wetware humans. If we ever foolishly invent something like a self-aware thinking computer that can make copies of itself, it’s game over for the human race. Don’t put much stock in “safeguards” in its program that will keep it under control, either.
Yeah. You can catch a person relatively easily. But if the AI is software, good luck catching something that can transmit itself at the speed of light and replicate itself faster than any virus.