Via Geek Press, we have some interesting audio illusions over at The New Scientist. I found this one particularly so:
Some pieces of music consist of high-speed arpeggios or other repeating patterns, which change only subtly. If they’re played fast enough, the brain picks up on the occasional notes that change, and links them together to form a melody. The melody disappears if the piece is played slowly.
This is called an emergent property, and while many emergent properties arise from a critical mass (say, of the number of ants in a colony), they can also do so as a result of speed. Some AI researchers argue that human intelligence (and non-human as well) is in fact a result of simply having enough neurons (and, at a higher level, various cognitive functions) in one place to the degree that consciousness emerges. Others (such as Searle) scoff at the notion, arguing that gathering a large number of entities together isn’t going to change their properties in a qualitative way, and that’s simply common sense. You can’t combine a lot of dumb things and somehow get something smart. The whole may be greater than the sum of its parts, but not (so to speak) the sum of its (not so) smarts.
The argument against this is to point out another non-intuitive result. Prior to Maxwell, who would have imagined that you could wave a magnet back and forth and create color? Well, if you just wave it slowly, you won’t; all you’ll see is someone waving a magnet. But wiggle it half a quadrillion times per second, and suddenly there’s an electromagnetic wave that, when captured by the eye, causes one to (literally) see red. The auditory phenomenon described above is similar: play it too slowly and the music disappears, but speed it up, and a melody emerges.
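That “half a quadrillion times per second” figure is easy to sanity-check with the standard relation f = c/λ. A quick sketch (the 700 nm wavelength for red light is my assumption, not a figure from the post):

```python
# Sanity check: the frequency of red light from its wavelength.
c = 299_792_458          # speed of light in vacuum, m/s
wavelength_red = 700e-9  # red light is roughly 620-750 nm; assume 700 nm

frequency = c / wavelength_red
print(f"{frequency:.2e} Hz")  # about 4.3e14 Hz, i.e. roughly half a quadrillion wiggles per second
```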
Too much stew from one oyster, Rand. I suggest neither example is an emergent property, but rather just the result of the fact that the eye-brain or ear-brain combination has a built-in band-pass filter, and can only make sense of signals within a certain frequency range. You might as well be amazed by the fact that you can’t hear infrasonics or ultrasonics, but if you speed up the former or slow down the latter, you can. Similarly, you can’t see infrared or ultraviolet, but squeeze or lengthen the waves, and you can.
It is interesting that we have amazing pattern-detection circuits in our visual and auditory processing systems, such that — for example in this case — we can pick out a recognizable pattern, a tune, from amid a distracting tune, provided the “camouflaged” tune is within the right frequency range. This is similar to our ability to understand one conversation in the middle of a loud cocktail party, or (in the visual arena) our ability to pick out a familiar face in a crowd. But these are not emergent properties of the cocktail party, or the crowd: they are simply very good pattern-recognition abilities in our detection circuits.
On the subject of emergence, there are plenty of phenomena that arise uniquely when you get enough small, “uninteresting” degrees of freedom interacting. The most obvious is irreversibility: everything that follows from the Second Law of Thermodynamics, including the definition of past and future, the concepts of aging and irreversible change, et cetera, all of which are impossible to see when you measure atoms by the handful, but which emerge unexpectedly and dramatically when you have 10^23 of them together.
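A classic toy model of this, Ehrenfest’s urn, makes the point concretely: the microscopic rule is symmetric and reversible, yet with many particles the system relaxes one way and effectively never goes back. A minimal sketch (the model choice and parameters are mine, purely illustrative):

```python
# Ehrenfest urn model: N balls in two urns; each step, one ball picked at
# random hops to the other urn. The rule is reversible, but with many balls
# the system drifts to a 50/50 split and essentially never returns to the
# all-in-one-urn state it started from.
import random

def ehrenfest(n_balls, n_steps, seed=0):
    rng = random.Random(seed)
    left = n_balls              # start with every ball in the left urn
    history = []
    for _ in range(n_steps):
        # pick a ball uniformly at random; it moves to the opposite urn
        if rng.randrange(n_balls) < left:
            left -= 1
        else:
            left += 1
        history.append(left)
    return history

hist = ehrenfest(n_balls=1000, n_steps=20000)
print(hist[0], hist[-1])   # starts near 1000, ends near 500
```

With a handful of balls the count swings back often; with 10^23 it never would — that is the Second Law emerging from the statistics.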
With respect to the debate about human consciousness, I personally suggest both POVs are amusingly egocentric. There’s a hidden assumption that human consciousness is some amazing wonderful complex property that only some truly remarkable processing system could provide. That’s yet to be proven. Consciousness may be wonderful and ineffable to experience, but we’re wired to think that, just like we’re wired to think sex is equally wonderful and ineffable — and no imaginable alien scientist could possibly see in sex the operation of a mysterious and amazingly wonderful contraption.
I suggest there’s every chance that consciousness is a fairly simple mechanism, far simpler, for example, than the visual pattern-matching mechanism mentioned above, and the only reason it isn’t more widely spread among Earth animals is that its value to the survival of the species is marginal.
I’ve always thought Conway’s game of “Life” is an amazing example of emergent behavior. Really simple rules, amazingly complex activity.
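For the curious, those “really simple rules” fit in a few lines: a live cell survives with two or three live neighbors, a dead cell is born with exactly three, and nothing else happens. A minimal sketch of one generation over a sparse set of live cells (the glider demo is my own illustration):

```python
# One generation of Conway's Game of Life over a sparse set of live cells.
from collections import Counter

def step(cells):
    """cells: set of (x, y) live-cell coordinates; returns the next generation."""
    # count how many live neighbors every nearby cell has
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # birth on exactly 3 neighbors; survival on 2 or 3
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in cells)
    }

# The famous glider: five cells whose pattern crawls diagonally across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)
# after 4 generations the glider reappears, shifted one cell diagonally
print(gen == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in those two rules mentions motion, yet the glider travels — which is exactly the kind of complexity-from-simplicity the game is famous for.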
…the only reason it isn’t more widely spread among Earth animals is that its value to the survival of the species is marginal.
Actually, we don’t know how widely spread among Earth animals it is. We assume that many animals aren’t conscious, but we have no way to know that. Based on my experience, I’m pretty sure that cats and dogs are, as well as dolphins. I suspect that all mammals are, in fact, and many other animals as well.
I believe the whole can indeed be greater than the sum of the parts, both qualitatively and quantitatively. However, if by ‘human intelligence’ they mean ‘human consciousness’, I disagree.
The problem is that if every human action is controlled by electro-chemical reactions, then there is no evolutionary reason for human beings (or any life form for that matter) to be self-aware. A chemical reaction doesn’t need to be aware of what it is doing in order to take place. My brain sees a pretty girl and a chemical reaction tells it that I desire to communicate with her. In fact, that desire itself is just a chemical reaction. Talking to her is just another set of super-complicated electro-chemical reactions taking place in order to send signals to my mouth muscles and vocal cords based on physical and chemical patterns stored in another part of my brain called memory, social learning, whatever. None of these chemical reactions requires some undefined, unscientific, totally subjective property called self-awareness in order to take place.
My question is, and I apologize if I’m going off topic, why are we self-aware in the first place? Saying that ‘human consciousness’ is just an illusion is a cop-out. Calling it an illusion doesn’t change the reality of what it is.
Saying that self-awareness is some hidden latent property of all matter that suddenly emerges in sufficiently complex and interconnected physical systems is also a cop-out. This doesn’t explain why matter behaves that way, or why other kinds of incredibly complex interconnected systems (like planets or galaxies) don’t seem to be self-aware (maybe they are). It’s an outrageous, unproven explanation for an outrageous, unnecessary phenomenon. The only way to test it would be to build a machine as complex as a self-aware organism. Then what? Ask it if it is self-aware? Wait and see if it starts writing poetry or developing religion, philosophizing, waging war on the weak carbon-based life-forms? Just because it acts like a living creature doesn’t mean it is necessarily aware that it is doing so. How can you tell?
Just because it acts like a living creature doesn’t mean it is necessarily aware that it is doing so. How can you tell?
Ultimately, you can’t. We don’t have the specs for a selfawarenessometer.
Alan Turing, who gave a lot of thought to this, couldn’t come up with anything other than his test. Since we can’t even know whether other humans are self-aware (it’s ultimately taken on faith), it’s the best we can do with non-humans as well.
Chris, are you acknowledging the existence of human self-awareness and then asking why it would have evolved? Or are you saying it doesn’t really exist because all behavior is simply explainable as a series of very complex chemical reactions? The latter proposition is self-refuting, as I cannot contemplate the proposition without demonstrating my own self-awareness.
Others (such as Searle) scoff at the notion, arguing that gathering a large number of entities together isn’t going to change their properties in a quantitative way…
I’m pretty sure you meant “qualitative” here.
Other counterexamples along the lines of yours: gas or fluid pressure vs. individual molecular impacts… or the reliable emergence in so many different domains of the Gaussian distribution. None of the random binary choices going into it “knows” what the ensemble result will be… but there it is anyway.
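That Gaussian-from-binary-choices emergence (the de Moivre-Laplace theorem) is easy to see numerically: sum enough fair coin flips and the bell curve appears, though no single flip “knows” anything about it. A minimal sketch (the sample sizes are arbitrary choices of mine):

```python
# Sum many fair coin flips; the totals cluster into a bell curve whose
# mean and variance match the binomial predictions (n/2 and n/4).
import random

def coin_flip_sums(n_flips, n_trials, seed=0):
    rng = random.Random(seed)
    # each trial: the total of n_flips independent 0/1 coin flips
    return [sum(rng.randrange(2) for _ in range(n_flips)) for _ in range(n_trials)]

sums = coin_flip_sums(n_flips=100, n_trials=10_000)
mean = sum(sums) / len(sums)
var = sum((s - mean) ** 2 for s in sums) / len(sums)
print(round(mean), round(var))  # near the theoretical 50 and 25
```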
To be repetitive, Dennett’s Darwin’s Dangerous Idea (like Marvin Minsky’s Society of Mind) is all about just this: emergence of the “highest” attributes from the patient, cumulative stupidity of simple algorithms. Along the way, he leaves Searle in small pieces. What seems “common sense” is really the argument from incredulity: “I can’t imagine how it might happen, so it must not be happening.” Fortunately, the world isn’t constrained by the limitations of some humans’ imagination.
Consciousness may be wonderful and ineffable to experience, but we’re wired to think that
Carl, do you know Tor Norretranders’ The User Illusion? It casts some sharp highlights on just this subject: how the epiphenomenon of consciousness or self-awareness — which is quite dispensable much of the time, and shows most of its curlicue-ness only on the very rare occasions we’re ‘thinking philosophically’ — gets elevated to the marvelous mysterious essence-ness of us-ness.
No, I don’t, Monte Davis, but I’m not surprised.
The gosh-wow that surrounds the quasi-mystic investigation of consciousness reminds me amusingly of the long “deep” conversations we used to have as college freshmen and sophomores about girls…
You know, I really feel like there’s something deep, marvelous, and ineffably wonderful about [insert her name here], especially in that magic moment when she undoes the first button on her blouse…
In truth, in hindsight, we were just incredible horndogs, and the wonder-mystery was because we were trying to rationalize basal urges that had zero underlying rationality. We might as well have been trying to discern meaningful patterns in blank TV static.