What? Somebody noticed that boys and girls are different? Call the police. Write some legislation. Send anyone (especially parents) that notices to the gulag.
“Sexist” is inflammatory language, and it is a short step to “there ought to be a law.” There are certain parts of the internet that I wouldn’t want to mention in her presence. Vive la 1st Amendment!
Her thesis is fundamentally flawed in that shape-from-shading has *nothing* to do with motion of the observer’s eyes; it’s entirely dependent upon the lightfield and the object. Only second-order effects of backscatter from the observer’s body and head can change the illumination of the lightfield, but that is dependent on the contrast of your body vs. the surroundings. If the surroundings are dark and your clothing light, the scene contains light scattered from yourself, but in bright surroundings with darker clothing you actually darken nearby objects. Since the sign and magnitude of this effect is unpredictable, it is unlikely to be useful for 3D perception.
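To put a number on that: for a simple Lambertian (matte) surface, the textbook shading value depends only on the light direction and the surface normal; there is no gaze term anywhere in it. A toy Python sketch (my own illustration, all values invented):

    import numpy as np

    def lambertian_shade(albedo, normal, light_dir):
        """Shaded intensity of a matte surface point.

        Note there is no view/gaze parameter: rotating the eye
        in its socket cannot change this value."""
        n = normal / np.linalg.norm(normal)
        l = light_dir / np.linalg.norm(light_dir)
        return albedo * max(0.0, float(np.dot(n, l)))

    # Same point, same light: the answer is fixed no matter
    # where the observer happens to be looking.
    print(lambertian_shade(0.8, np.array([0.0, 0.0, 1.0]),
                           np.array([0.3, 0.0, 1.0])))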
If only your eyeballs move, there is no impact on shape-from-shading. Boyd claims “Shape-from-shading is a bit trickier. If you stare at a point on an object in front of you and then move your head around, you’ll notice that the shading of that point changes ever so slightly depending on the lighting around you. The funny thing is that your eyes actually flicker constantly, recalculating the tiny differences in shading, and your brain uses that information to judge how far away the object is.”
Rotating the eye in its socket cannot change the illumination of a scene. She’s simply making shit up.
I have a lot of practical knowledge of 3D perception because I was walleyed until corrective surgery at age 13, and I was able to turn out the lights in a cluttered room and walk through it easily. This ability faded after I gained stereo vision.
Rotating the eye isn’t what she is describing.
What was your interpretation of the phrase “that tiny, constant flickering of your eyes” in her context?
How mammals interpret scenes?
http://en.wikipedia.org/wiki/Saccade
It’s pretty well known that men and women have different saccade patterns.
Flickering eye movement doesn’t just point the eye at different parts of a view, which is all “rotating the eye” would suggest. It changes the actual perception of the scene. The movement of the eye doesn’t change the illumination; the illumination changes from moment to moment by itself, which a flickering eye can reveal. It’s like looking for defects in a stack of common items by riffling through them.
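To illustrate the riffling analogy (my sketch, not anything from the article): differencing successive frames of a scene makes small moment-to-moment changes pop out that no single frame would show. Toy Python:

    import numpy as np

    def riffle(frames, threshold=2):
        """Flag pixels that change between successive frames.

        frames: list of 2-D grayscale arrays of identical shape.
        Returns a boolean mask of pixels that ever changed by more
        than `threshold` counts -- the 'defects' that only show up
        when you riffle through the stack."""
        changed = np.zeros(frames[0].shape, dtype=bool)
        for prev, cur in zip(frames, frames[1:]):
            changed |= np.abs(cur.astype(int) - prev.astype(int)) > threshold
        return changed

    # Two nearly identical 'snapshots' with one flickering pixel.
    a = np.full((4, 4), 100)
    b = a.copy()
    b[2, 2] += 5
    print(riffle([a, b]))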
You really enjoy making up shit as you go along, don’t you? Face it, she’s just an idiot. Her paper was word salad with a sprinkling of real research.
My response to you had content. Her article had content. Your comment is devoid of content. Your earlier comment did have this content…
…cannot change the illumination of a scene.
My response: The illumination changes from moment to moment by itself which a flickering eye can reveal.
How this is interpreted by the brain we may only guess, but that it counters your non sequitur should be obvious. The illumination of a scene is constantly changing and has nothing to do with eye movement… but eye movement does change perception, which is input to the brain.
Making shit up would be to not be responsive to your statement (an extremely common trick of liberals and wives.)
But why would the scene illumination change from “moment to moment”? For example, say I’m resting in a cave with my helmet sitting next to me on a rock. The helmet has a Cree XM-L2 10-watt LED driven by a constant-current regulator. As long as the lithium polymer battery holds out, the scene illumination is going to be as steady and unchanging as the rock walls themselves. Nevertheless, as I sit there looking around, there isn’t any odd lack of 3-D appearance.
Or suppose you’re sitting in a desert on a clear day near noon. The only illumination changes that occur happen slowly as the sun moves across the sky, and yet you don’t have to sit motionless for half an hour of the Earth’s rotation before the scene pops into 3-D.
Her theory sounds more like the ancient idea that the eyeball emits rays that have to bounce off the scene.
why would the scene illumination change from “moment to moment”?
That is an excellent question, George. I don’t know the answer but can provide a reference to experience. Back when dinosaurs roamed the earth I had a chance to work with the only Forth system I ever used in a commercial setting rather than playing with it as a hobbyist. It was a quality-control inspection system that used a grid of grayscale sensors, each essentially a light sensor. Each pixel had a range of values (0 to some number I don’t remember; probably not 256). It would take a snapshot of parts moving on a conveyor, but you could stop the conveyor, giving you multiple snapshots of the exact same arrangement of parts. The cool thing was that a puff of air would push rejected parts off the conveyor.
The thing is, the repeat snapshots for a nonmoving conveyor were hardly ever exactly the same. There were almost always pixels that were 1 or 2 counts off in either direction. There could be many reasons for that having nothing to do with lighting, and the fact that only some pixels changed might suggest as much.
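For what it’s worth, that 1-or-2-count wobble is just what you’d measure as the sensor’s noise floor by taking repeated stills of a static scene. A rough Python sketch (simulated numbers, obviously not the original Forth system):

    import numpy as np

    def noise_floor(snapshots):
        """Per-pixel standard deviation across repeated stills of a
        static scene -- a crude estimate of the sensor's noise floor.

        snapshots: list of 2-D grayscale arrays, same shape."""
        stack = np.stack([s.astype(float) for s in snapshots])
        return stack.std(axis=0)

    # Simulate 10 'repeat snaps': constant scene plus ~1 count of
    # sensor noise, no lighting change at all.
    rng = np.random.default_rng(0)
    scene = np.full((4, 4), 100.0)
    snaps = [scene + rng.normal(0, 1, scene.shape) for _ in range(10)]
    print(noise_floor(snaps).round(2))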
The thing with Doug is that we both agree that the movement of the eye doesn’t change the illumination. He’s making shit up when he says the author of the article is saying that. She’s just providing food for thought.
Flickering eye movement is a fact. She isn’t claiming, with absolute authority, that her assertions are what’s happening or that nobody is allowed to question them. She’s just proposing, and she admits that.
We can safely assume that flickering eye movement has some purpose related to perception. We might ask Doug what that purpose is.
Not many Forth programmers left! When I was a teenager I wrote a Forth interpreter in 8080 assembly language, which got me my first programming job. ^_^
Conveyor systems vibrate like a banshee when you get down to optics, often just from someone walking by, so I’ve sometimes had to suspend scanners from the ceiling just to get around particularly troublesome installations. Then there’s the problem that most conveyor systems are in buildings lit with LPS or HPS lamps that are inherently unstable (thus the big ballasts) and at heart are about as steady as fire or lightning, with the light output cycling at twice the 60 Hz line frequency. That’s why my example was an LED with a constant DC current source. 🙂
Another clue is that when caving over breakdown piles in a group, everyone else’s lights create dancing shadows that make foot placement quite confusing. Similar things happen with outdoor shadows under waving leaves or the shifting shadows of passing car headlights. Those don’t enhance our 3-D image processing; they create sometimes sickening and confusing sensations where it seems like the ground is moving.
Our rapid eye flickers do several things. They allow us to create a more detailed mental map by scanning the high-resolution portion of the eye across the image, and they allow us to create a more accurate 3-D image by sampling key points for both focus and parallax shifts (point A is x units nearer than point B, point C is nearer still, point D is more distant). Without the shifts, we could measure point A but the other points would just have some parallax information from an image that was out of focus everywhere but point A.
So we’re both scanning to get a higher-resolution 2-D painting of the scene and probably pinging selected features for range information.
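To put the “pinging for range” idea in textbook terms (standard parallax geometry, nothing specific to the article): depth works out to Z = f·B/d, where B is the baseline between viewpoints, f the focal length, and d the disparity of the feature between views. A minimal Python sketch with made-up numbers:

    def depth_from_parallax(baseline_m, focal_px, disparity_px):
        """Textbook parallax range: Z = f * B / d.

        baseline_m:   separation between the two viewpoints (m)
        focal_px:     focal length expressed in pixels
        disparity_px: how far the feature shifts between views"""
        return focal_px * baseline_m / disparity_px

    # Eye-like numbers (illustrative only): ~6.5 cm interpupillary
    # baseline, a feature shifting 20 px with a 1000 px focal length.
    print(depth_from_parallax(0.065, 1000, 20))  # => 3.25 m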
So this brings up a 3-D movie question, and I haven’t watched 3-D movies. Do they tend to have close-ups with a rich depth of field, or are most of the scenes maybe five or ten feet out and beyond? I’m wondering if the visual-effects people could have encountered a problem getting close-up 3-D images to feel quite right, and if there are close-ups, does the audience find them a bit disorienting or wrong in some way without being able to quite say why?
We’re quite used to watching 2-D images where things that aren’t in focus can’t be brought into focus by eye movement, and so we just sit and watch the screen at whatever fixed focal distance it happens to be. If the 3-D stereoscopic image is immersive enough, could a subject start unconsciously trying to change their focal distance like they naturally do in the real world, and essentially send their 3-D visual system into spasms and overshoots until they start to get a form of motion sickness?
Vibration was always one problem we dealt with. Lots of polished granite in the quality-control department. You never really could completely isolate all the different variables.
We had this one machine that never really worked that was supposed to measure the thickness of a part for stellar sensors used for satellite navigation. The problem is that beryllium oxide is translucent, so the opposing sensors would interfere with each other. I seem to remember having to adjust the distance between the sensors to cancel out the interference for different thicknesses of parts. It was also influenced by power regulation, which I could never get tight enough… we ended up using dry cells, which drained and created a whole different problem. Talk about Rube Goldberg. To this day I’m still trying to figure out how I became responsible for making this thing work rather than the vendor that sold it to us. Sometimes you can be too smart.
Before someone else points it out, yes granite transmits vibration. The granite would have to be isolated from other vibration sources.
The granite has to sit on inner tubes for vibration isolation. 🙂
Anyway, I notice the article’s author says her vomiting occurred back in 1997, so it’s taken her 17 years to come up with the feminist eye-movement theory. I have no idea if there have been any advances in display technology since then, but it makes me wonder if the vomitous 3-D environment that upset her showed Mario and Luigi jumping over mushrooms.
The eye movement lost me, because the pupil shifts only a small amount as our eyes flicker around (about their centers), and it’s certainly not enough of a baseline to form any useful 3-D mapping at distance. Shadows and light shifting wouldn’t explain it either, because we see in 3-D just fine while cutting steel and doing other tasks that create an intense, complicated shower of changing sparks and shadows that shift at speeds much faster than we could possibly process.
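Back-of-envelope on that baseline point (my own illustrative numbers): even a large saccade translates the pupil only a few millimeters, and the depth resolution you could wring out of a baseline that small degrades with the square of the distance:

    import math

    def depth_error(baseline_m, dist_m, angular_res_rad):
        """Smallest resolvable depth step from parallax:
        dZ ~ Z**2 * d_theta / B (small-angle approximation)."""
        return dist_m ** 2 * angular_res_rad / baseline_m

    # Assume a generous 5 mm pupil translation and roughly 1 arcminute
    # of angular resolution (both illustrative guesses, not measured).
    arcmin = math.radians(1 / 60)
    for z in (0.5, 2.0, 10.0):
        print(f"{z:5.1f} m -> +/- {depth_error(0.005, z, arcmin):.2f} m")

On those guesses you get a centimeter or two of precision at half a meter, but meters of error at 10 m, which fits the “no useful 3-D mapping at distance” point.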
My sneaking suspicion is that as our eyes dance around we refocus on objects within the scene and thus use focal-length feedback as another source of input, which is quite usable at typical indoor or picking-and-gathering distances. A VR set can precompute the left/right parallax for a scene, but it can’t bring each object in or out of focus in concert with the eye’s focal changes without much greater optical and computational sophistication, along with tracking the eye’s focusing in real time. Without such a system, the left/right signals are telling the brain one story about the field, but as the eye wanders the scene, everything focuses at the same distance, creating a 3-D paradox.
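That paradox has a standard name in the vision literature, the vergence-accommodation conflict, and it’s usually quantified in diopters (1 divided by distance in meters). A minimal sketch, assuming a headset focused at 2 m (a number I’m inventing for illustration):

    def va_conflict_diopters(virtual_dist_m, screen_focus_m):
        """Vergence-accommodation mismatch: the eyes converge on the
        virtual object's distance while the lens must focus at the
        headset's fixed optical distance."""
        return abs(1 / virtual_dist_m - 1 / screen_focus_m)

    # Headset optics focused at 2 m, virtual object floating 0.5 m away:
    print(va_conflict_diopters(0.5, 2.0))  # => 1.5 diopters of conflict

The bigger the mismatch, the harder the visual system fights itself, which squares with the spasms-and-overshoots guess above.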
If you’ve been tuned for long-range hunting and throwing, the focal feedback isn’t nearly as useful as the parallax, but if you’re tuned to pick berries (a situation where one eye is often obstructed by a leaf), then the focal-length feedback system might play a more important role.
The difference is obvious proof of the oppressive male patriarchy.
Also, a trivially easy test to see if this is the case is to just remove the VR foreground objects so everything in the scene requires a far (and thus virtually constant) focal distance, so the parallax information is stripped of important focal-distance information. If the women still get sick, then it’s because they instinctively sense how ridiculous a VR headset makes them look, while men really don’t care, which is why men are content to walk around in welding helmets or wearing giant blocks of cheese on their heads.
I would also investigate by tracking the users’ eye movements and see if there’s also a difference in where the wearers are looking. Men might be going “Cool. VR. I’m going to zap some aliens!” as they continually scan the far distance, searching for threats and opportunities, while the women might be paying much more attention to the foreground objects where the parallax/focal distance paradox would come into play. If the men aren’t paying attention to the foreground and the women are, it might explain why one group gets sick and the other doesn’t.
The difference is obvious proof of the oppressive male patriarchy.
Yep.
Paraphrase: “I suck at video games and so do my women friends so all females must suck at video games.” What a load of S***
Not surprising that men and women would evolve different perceptual systems given their different roles. Emphasis on motion parallax would be quite handy in dynamic and interesting scenarios, like trying to poke a charging lion with a stick.
An interesting article on an interesting subject rendered stupid by the overuse of the inappropriate term “sexist”.
Great article. Perception is fascinating. Vive la différence!