I’ve always suspected this: it’s lying to you.
Somehow it seems like a high-tech metaphor for people who call themselves “progressives.”
Progress bars are not the most annoying part of modern computing... that honor goes to inexplicable freezes and slowdowns. Hey, my computer just stopped responding, what happened? I know, I’ll try to open Task Manager... 15 minutes later... nope, nothing is using 100% CPU. The hard drive light is going nuts! Why is it doing that? No idea. Anyway, the hard drive has stopped now and the machine is still unresponsive. Screw it, I’ll go get coffee, and damn you, computer, you’d better be running fast when I get back!
We have powerful, in many cases multi-core CPUs, and we have multi-tasking multi-process and multi-threaded operating systems. I can understand a single task getting bogged down such as completing a download or upload over the less-than-reliable Internet, installing a large program, opening or saving a large file, and so on. What I cannot understand is why one bogged down task seems to lock out the whole system.
The worst offender is PDF downloads over the Internet. Yeah, yeah, PDF is not HTML, but the Adobe software must be badly written from a UI/GUI standpoint if a PDF download can lock up your browser and in some cases your entire system.
One of the things about this whole GUI paradigm is that it is pseudo-multitasking. No one has figured out, or perhaps dared, how to make a GUI multi-threaded, although they tell me BeOS did something of the sort, and I hear that writing BeOS apps was hard to do as a consequence. The whole GUI paradigm depends on the dispatch of events, and on the response to each event being short and sweet; in other words, everybody has to “play fair” and not tie up the CPU.
The problem is: what do you do when you have some long operation, like a long download, file save, or file open? How do you handle it? A GUI is supposed to be non-modal, and I remember an Apple developers conference where being non-modal was treated not so much as an application design trait or an Apple way of doing things but as a kind of religion. Anyway, an app has more modes (different “states” it is in) than people let on. One mode is waiting for some long operation to complete.
Guess what. It has been how many years since Xerox PARC, and how many years since the Apple Lisa and Windows 1.0? No one has figured out a good design pattern for this. What you want is for the long operation to be interruptible, so that a person can cancel out of it if they get tired of waiting or if they have to go on vacation or something. To do that, the long operation has to allow for checking the event queue at regular intervals.
The second thing is that the long operation has to put the GUI app into a kind of restricted mode, so that when checking the event queue you don’t allow other commands that would mess up the long operation. You may want to go into a mode where the only allowed events are Paint, along with clicks on the long operation’s Cancel button.
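A minimal sketch of that restricted mode, in Python with made-up event names (nothing here comes from any real toolkit): between chunks of work, the loop drains the event queue but acts only on Paint and Cancel, and everything else is ignored until the operation finishes.

```python
# Hypothetical restricted-mode event loop: while a long operation runs,
# only "paint" and "cancel" events are honored; all others are dropped.

def run_long_operation(steps, poll_events):
    """Do `steps` units of work, checking the event queue between units.

    `poll_events` is a callable returning the list of pending events.
    """
    for done in range(steps):
        for event in poll_events():          # drain pending events
            if event == "cancel":
                return ("cancelled", done)   # user bailed out early
            elif event == "paint":
                pass                         # a repaint would happen here
            # any other event is ignored while in restricted mode
        # ... one chunk of the real long operation would happen here ...
    return ("completed", steps)
```

The key design point the comment makes is visible in the loop: the work is chopped into chunks so the event queue can be polled often enough for Cancel to feel responsive.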
I think you can implement this in a native-code Windows app, but it is a matter of writing a modal event loop with special handling, for which there are no good descriptions. There was some kind of modal event handling in Visual Basic 6 whose command name I cannot remember, but it had some limitations and restrictions (i.e., bugs). There is a way Sun recommended to do this in a Java Swing app, but it involved a worker thread for the long operation (yuck!) and some baroque scaffolding for interfacing the worker thread with the GUI to get it all to work.
What I am saying is that this is not rocket science; we have been at this GUI business for maybe 40 years by now, but every one of these hurry-up-and-wait modes in a GUI is a custom affair in each app, which many app developers manage to mess up, hence the problem you describe. Microsoft's only solution seems to be to ride the wave of Moore's Law, waiting for the hardware to catch up with their too-coarse task-switching granularity and kernel processes that hog time slices; and they don't have a good app programming pattern to follow for the progress of a long operation.
that they don’t have a good app programming pattern to follow for the progress of a long operation
They don’t?
They have background threads and progress callbacks, last I checked, and have for years and years now.
There’s this whole “.NET” thing that they’ve been doing for over a decade now…
Another Windows 7 user, I take it? It took me months to stop that behavior by disabling most of the OS.
If you want to see which tasks are hogging the hard drive, you can add the I/O reads and I/O writes columns to your task list. Open Task Manager, click the ‘View’ menu at the top, and select ‘Columns’. Put a check mark next to I/O Reads and I/O Writes and hit OK (you can also add ‘I/O Read (Write) Bytes’ if you want to see a delta of bytes read or written). I usually sort the task list by the I/O Reads column because read operations happen far more often than writes.

My experience is that it is usually your anti-virus software or the ‘Svchost.exe’ process that queries the hard drive most often. Svchost is tricky because it represents the processes that all of the background services run under. Many times I find it is the ‘Automatic Updates’ service, which may be in the process of downloading a patch or security update in the background. You can either stop the service from the Services console or go into ‘Security Center’ in the Control Panel and change the way Windows handles automatic updates. By default it is set to automatically download updates from Microsoft.
Although I should add that if you disable your anti-virus or automatic updates, you of course leave your computer vulnerable to future viruses or security holes. So it’s a give and take.
I’m running Win 7 Starter on a Netbook with 1 GB of memory. Windows tuning algorithms apparently can’t cope with such a small machine and in a couple of days can detune the OS until it mostly thrashes the hard drive. It was so bad that most of the applications I have loaded on the machine are diagnostics to figure out what the OS is up to.
Just off the top of my head, I had to disable the Background Intelligent Transfer Service, ReadyBoost, ReadyBoot (hard to find), logging, backups, Windows diagnostics, error reporting (as tasks timed out they would generate errors, but the error reporting would also time out, and round and round it went), and of course automatic updates and about a dozen other services.
I figure I’ve disabled everything that Microsoft thought would add performance improvements over XP, but at least my lobotomized OS is stable and runs great. My guess is that management thought making a stripped-down Starter version of Win 7 was a trivial task, so they assigned it to third-string development and testing teams who totally botched it.
I’m not sure lying or dishonesty is really the right word. I think it’s largely an expectations game: people want to have a hard number for an indeterminate quantity. Time to complete any given process is knowable only after the fact, because while computers are theoretically deterministic, in practice this determinism is undermined by the complexity of the system and the number of interactions that it has with other systems and even the (theoretically non-deterministic) user. If you want computers to be precise and exact, with essentially no margin for error, you can get that. You only have to remove all the power and generalizability that makes them useful. For example, a car’s braking computer is (absent mechanical breakdown) deterministic and real-time. But it’s incredibly inflexible.
So basically, people’s expectations need to change from predicting how long something is going to take, to seeing that progress is (or is not) still occurring.
Jeff;
In this case it can be more that you would need to know what everyone else on the Internet is doing, which is a tad difficult 🙂
all;
I like the analogy in the original post. So true! The progress bar is something that seems like it should be easy, but in practice the ugly details of the real world prevent it from ever being “solved”. But people want to see it, so it’s done anyway to put a happy face on it. And because it’s not completely useless it can’t be abandoned.
As usual, Randall Munroe has the last word here…
http://xkcd.com/612/
Programmers should take a tip from Scotty, a great engineer, who never told the captain how long something was really going to take.
So when you click on a button, instead of a progress bar the program should say “Preparing for installation” and have a little thing going round and round. Meanwhile the program is actually performing the installation. Once installation is complete, the program should display the progress bar and then pretend to be doing the installation, with a very precise estimate of the time till it completes. It’s what Scotty would do.
Riffing on comedian Paul Reiser, my proudest moment as an educator was my students learning how to lie.
My TA and I had gone through one of these exercises of revising a lab on digital logic into which we had added a software programmable FPGA. Wanting to make the lab more real-world relevant, I put something into the lab manual to the effect that the students had to keep track of the time they took to assemble a simpler circuit from discrete logic and then extrapolate their time estimate to what it would take to implement the complicated circuit they do on the FPGA, but only this time in discrete logic.
In other words, I thought of incorporating some simple engineering project estimating into the lab. The students would keep track of their time and answer the lab question, but I didn’t sense that they saw the point of it, maybe regarding this as another make-work lab question.
One fine evening I was in the lab because I was required to observe the teaching of the lab TAs. One student, who apparently was in the leadership role in his lab group, blurted out, “We’re engineers, dammit! We should take whatever estimate we get and then double it!”
That young man was going places. I felt so proud to have him in Circuits class.
I often joke about the “Programmer’s Time Estimation Algorithm”. When you receive a new assignment and the boss asks for an estimate of how long it’ll take:
1. Take the initial answer that pops into your mind.
2. Double the digits
3. Go to the next higher unit
Example:
Initial Estimate: 2 weeks
Estimate submitted to boss: 4 months
You know the boss will at least cut the estimate in half, but that’ll still give you 2 months. In reality, it almost always takes longer than you initially thought anyway, and if you come in early, you’re a Miracle Worker!
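For fun, the joke algorithm above fits in a few lines of Python (the unit ladder here is my own choice, not part of the original joke):

```python
# The "Programmer's Time Estimation Algorithm": double the number,
# then bump the estimate to the next-larger unit.

NEXT_UNIT = {"hours": "days", "days": "weeks",
             "weeks": "months", "months": "years"}

def pad_estimate(amount, unit):
    """2 weeks in -> 4 months out, per the example in the comment."""
    return 2 * amount, NEXT_UNIT[unit]
```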
“Oh, you didn’t tell them how long it would really take, did ya? How d’ya expect people to think you’re a miracle worker?”
^_^ That’s the line!
When NASA completed their cost estimates for Apollo, the head of NASA doubled the final number before he presented it to the President. It was sound engineering based on previously missed estimates and gut feel.
When the Space Shuttle estimates were assembled, they should’ve shifted the decimal point to the right.
This type of psychological trickery is nothing new. Sometimes when stage performers are reading a small selection from a thick book, they will cover part of the book, because even if the audience knows the reading will be short, seeing four inches of stacked paper makes them think it will take forever.