6 thoughts on “What If The Singularity Doesn’t Happen?”
They once used the term ‘electronic brains’ but stopped when it started to make them look foolish. Now there is an assumption that enough complexity by itself leads to intelligence… this too will seem pretty foolish in time. The singularity, the point beyond which we cannot see, seems a bit presumptuous as well, because it assumes we can see that which leads up to it (if it happens at all).
I would certainly like to talk with a true electronic brain one day… I’m not holding my breath.
True AI isn’t necessary for the Singularity – quantum computing and molecular storage will do. Better modeling of real-world systems, faster testing of hypotheses and near-instant, near-infinite access to info for the common person will speed the pace of discovery and the economy, which is really the only prerequisite for the Singularity.
We can’t know what a world that operates ten times faster than ours will look like, any more than a medieval serf could know our world.
…near-infinite access to info for the common person will speed the pace of discovery and the economy, which is really the only prerequisite for the Singularity.
So all we need is a good database, eh?
Ever see what a politician can do with facts?
I don’t think you’re giving serfs the credit they deserve. I’d pick one over any Obama voter.
I can envision enhanced humans, their memory augmented by a direct link to a search engine. While they might gain some advantage on ‘Who Wants to Be a Millionaire’, in real life you don’t get that much leverage. Even if we become Borg, with a hive mind (sort of like blogging), I don’t see that much improvement.
We have greater technology. That’s not to say we have greater thinking ability. A few hundred years ago, your average student already knew Latin and calculus before starting college. Today they can barely read.
There will always be the paradox of the Chinese Room argument.
If at any point someone can say, “oh, this machine reacted this way because of line XX of the code,” then that will not be true AI. Only when a machine reacts a certain way and people are left shrugging as to how it came about will we begin to refute this argument. I believe some degree of cognitive evolution will have to take place and propagate out from a baser logic.
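To make that point concrete, here is a minimal sketch (purely illustrative, not from any commenter) of the kind of system being described: a Chinese-Room-style rule table where every reaction traces back to one specific rule or line of code. The RULES table and chinese_room function are hypothetical names invented for this example.

```python
# Purely illustrative sketch: a Chinese-Room-style responder whose every
# output can be traced to one specific entry in a fixed rule table.

RULES = {
    "hello": "Hi there.",
    "how are you": "I am fine, thank you.",
    "what is the singularity": "A point beyond which we cannot see.",
}

def chinese_room(prompt: str) -> str:
    """Return a canned response by symbol matching, with no understanding.

    Even if the output seems clever, you can always point to exactly which
    RULES entry (or the fallback below) produced it -- the commenter's test
    for "not true AI."
    """
    key = prompt.strip().lower().rstrip("?!.")
    return RULES.get(key, "I do not understand.")

if __name__ == "__main__":
    print(chinese_room("What is the Singularity?"))
    # -> "A point beyond which we cannot see."
```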
Josh: so, if the functioning of the human brain becomes sufficiently well understood, we’ll stop considering humans to be intelligent?
…we’ll stop considering humans to be intelligent?
Hey, you might be on to something! 🙂