I am going to watch the progress of Wolfram Alpha with great interest. I suspect that the Google server farm may have enough processing power for sapience, and the Alpha server cluster may end up with about the same – and it’s designed to develop the appropriate software.
I think I’ve become a confirmed singularity skeptic.
I think Vernor Vinge is right about what would cause the singularity as science fiction imagines it: superhuman intelligence. Things like nanotechnology and unlimited lifespans might make some big differences, but they won’t approach the idea of the singularity unless there is some intelligence that operates both better and more quickly than the human brain, or at least much more quickly.
However, I don’t think we’re going to have that any time soon (at least not in the next 100 years). I think intelligence, self awareness, and consciousness are more complicated than singularity speculators believe. I don’t think intelligence, self awareness, and consciousness will simply emerge accidentally if we get enough processing power and/or software in the right place at the right time. Specific attempts to create it seem more likely to succeed (e.g., figure out everything the human brain is actually doing and mimic all, or most, of it in software), but I predict that this will not be something that can be done in the next 30-50 years, and may actually be intractable. (Note that part of that problem is understanding exactly what’s going on in the brain.) Not every problem can be solved.
I think intelligence, self awareness, and consciousness are more complicated than singularity speculators believe. I don’t think intelligence, self awareness, and consciousness will simply emerge accidentally if we get enough processing power and/or software in the right place at the right time.
Very likely on that second sentence, Jeff … but perhaps some aspects of AI are simpler than we’ve made them out to be, too. I’m still waiting for rigorous, meaningful definitions of “thinking,” “intelligence,” “self awareness,” and “consciousness.” After we get those definitions, the outlook could change dramatically.
My cat is definitely self-aware and conscious, so this tells me that those characteristics can be implemented in a brain much less complex than a human’s. OK, she isn’t at all good at manipulating symbols … but we don’t need to wait for symbol manipulation software to arrive “by accident,” we’re working on that already. And yes, my cat may not be very smart, but if we can make something as smart as a cat, we should be able to add lots more processing power and memory.
(Note that part of that problem is understanding exactly what’s going on in the brain.) Not every problem can be solved.
Perhaps we don’t need to duplicate or even understand what’s going on in the brain, if we can come up with something that gives the same sort of results. Consider, for example, that analog and digital computers function in very different ways, but for some problems they are equally effective at producing solutions. As Edsger Dijkstra said, “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
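To put a toy example behind that analogy (this sketch is mine, not part of the original comment, and the function and step count are arbitrary choices): the same definite integral can be computed by step-by-step accumulation, which imitates what an analog integrator circuit does continuously, or by evaluating a closed-form antiderivative, the kind of symbolic shortcut a digital approach can take. The mechanisms are completely different, but the answers agree.

```python
# Illustrative sketch: two mechanically different ways to evaluate the
# integral of sin(x) from 0 to pi. An analog computer would accumulate
# the answer continuously with an integrator circuit; here we imitate
# that with many small discrete steps, then compare against the
# closed-form result that a symbolic approach gives directly.
import math

def integrate_stepwise(f, a, b, steps=100_000):
    """'Analog-style': accumulate area step by step (midpoint rule)."""
    dx = (b - a) / steps
    total = 0.0
    for i in range(steps):
        total += f(a + (i + 0.5) * dx) * dx
    return total

# 'Symbolic': the antiderivative of sin(x) is -cos(x), so the
# integral from 0 to pi is -cos(pi) - (-cos(0)) = 2.
closed_form = -math.cos(math.pi) - (-math.cos(0.0))

approx = integrate_stepwise(math.sin, 0.0, math.pi)
print(f"stepwise accumulation: {approx:.6f}")      # ~2.000000
print(f"closed-form result:    {closed_form:.6f}")  # 2.000000
```

Neither method knows anything about how the other works, yet judged purely on results they are interchangeable for this problem, which is the sense in which a machine might "think" without duplicating the brain's mechanism.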