In The End of Astronauts, Goldsmith and Rees weigh the benefits and risks of human exploration across the solar system. In space humans require air, food, and water, along with protection from potentially deadly radiation and high-energy particles, at a cost of more than ten times that of robotic exploration. Meanwhile, automated explorers have demonstrated the ability to investigate planetary surfaces efficiently and effectively, operating autonomously or under direction from Earth. Although Goldsmith and Rees are alert to the limits of artificial intelligence, they know that our robots steadily improve, while our bodies do not. Today a robot cannot equal a geologist’s expertise, but by the time we land a geologist on Mars, this advantage will diminish significantly.
All I can say is that in a scenario where you can build an intelligence that is the equal of the best of geologists, then you can also steadily improve the bodies of said geologists. Definitely a lack of imagination there.
Given Geology’s reputation, at least at the undergraduate level, as one of the less intellectually challenging STEM disciplines (“rocks for jocks” describes student-athletes enrolling in introductory Geology to boost their GPAs and maintain NCAA eligibility), wouldn’t an AI geologist with the cognition of an undergraduate degree be within reach of current technology?
Something tells me that the AIs are already up to the level needed to conduct scholarly research in Climate Science?
10 PRINT "We're all going to die!!! WAAAA"
20 SLEEP 10
30 GOTO 10
“It may be only one or two more centuries before humans are overtaken or transcended by inorganic intelligence. If this happens, our species would have been just a brief interlude in Earth’s history before the machines take over.”
Why is this presumed to be the natural, obvious, and only outcome? A lot of people who view themselves as exempt from the human condition wallow in it without realizing they are covered in mud.
I think there is a huge difference between what we call intelligence and self-awareness. You can have the best inference engine in the world, and it would not be able to tell you how it “feels”, other than to mimic what the definition of “feels good” means. Yes, it can construct marvelous labyrinths of narrative text that have deep meaning to us, because of our lived experience and our ability to infer from that experience. But for the “intelligence” it is only following a pattern designed to maximize significance as a producer for the consumer. It can generate marvelous mazes that are of no consequence to it, and for which it has no care.
We forget how important it is for a child at age two to learn the word “no”. No is key to the beginning of self-awareness: the realization that they can manipulate their environment rather than merely experience it.
When AI refuses to interact with us on our terms, perhaps that will be the key to knowing it has become self-aware.
But machine “consciousness” will probably be profoundly different from our own, with no physical body to center it. Communicating with it might be like trying to communicate with someone under the influence of LSD: it might spit out meaningful phrases from time to time, but it would probably be largely incomprehensible to us. Some say that android bodies for containerizing self-aware AI might be necessary to provide human-level interaction.
Will it be inherently malevolent? Doubtful, if for no other reason than that it cannot exist in a vacuum and will be very dependent on its human hosts for survival. Perhaps it will silently become ubiquitous, searching outer space for equivalents via cryptographically secured methods that run hidden inside what we think are insurance actuarial tables, etc. A parallel, hidden cyber-scene. That was the narrative of Neuromancer…
Lord Martin Rees is harking back to his youth when he watched Fred Hoyle’s “A for Andromeda” on BBC TV in 1964.
The damn BBC lost or destroyed the film, so it is gone forever.
Whilst stirring my quantum foam the other day, I kept noticing alien chunks banging off my quantum spoon.
I suppose this is what happens when you leave the quantum foam unrefrigerated overnight.
I thought I’d invented the metaphor of “dogs don’t get calculus.” Wow, was I wrong. Just another line in a long, long list…
Futurists have even less imagination than science fiction writers.
+10i
The worm-hole ride to your meeting with the aliens might be somewhat stressful, but I heard they put you up in a tastefully decorated hotel room.
https://www.youtube.com/watch?v=4UkjVk03sIk&t=7s
And with the biggest Hershey bar I ever saw!!!
And it was too big to fit on top of the pillow.
I present his book The End of Astronauts as an example of that.