They forgot the B-52, KC-135, Huey, C-130, Boeing 737, Boeing 747 – oh wait those are all 40 to 60 year old systems still doing their job.
It’s interesting to think that many of the journalists probably flew in commercial aircraft that were designed a decade or more before the Shuttle and are still in production 🙂
That should be “flew to Florida to view the launch”.
However, the key point is that different technologies are at different points in their life cycles, which is what really determines progress. Not to mention that the Shuttle’s replacement will be a capsule similar to the ones used before the Shuttle era. Even SNC’s lifting body is a throwback to NASA X-craft from the 1960’s.
None of the vehicles flying to and from orbit in the next few years will be the “Shuttle’s replacement.”
Ah, playing word games again. But I am glad you admit that New Space is incapable of replacing the Shuttle.
OK let’s change that to the “new” systems being designed to transport astronauts to orbit are all throwbacks to the 1960’s.
Basically the equivalent of a contractor “retiring” their Ford F-350 p/u and switching to driving a VW Bug because they are no longer rich enough to afford to drive the F-350.
I am glad you admit that New Space is incapable of replacing the Shuttle.
Why would that be an “admission”? I’ve never claimed that it would, or should. No one can, or should, replace the Shuttle. The very concept was flawed. But all of its capabilities will be replaced. By “New Space.”
OK let’s change that to the “new” systems being designed to transport astronauts to orbit are all throwbacks to the 1960′s.
The costs aren’t.
They forgot the B-52, KC-135, Huey, C-130, Boeing 737, Boeing 747 – oh wait those are all 40 to 60 year old systems still doing their job.
44 years of progress is sometimes hard to tell in a photo.
>> But I am glad you admit that New Space is incapable of replacing the Shuttle.
An easy admission, as the STS’s only real function was keeping an army of civil servants and contractors employed.
It’s a funny analogy, folks. No analogy is perfect. But it makes the point. 30 years is a long time to be stagnant.
The real point is behind the pictures. The other pictures are of commercial products. We all know the story of the government’s competition to the Wright brothers, don’t we?
In most cases a long life indicates a successful design. The shuttle was a very wasteful jobs program. That’s opportunity cost lost. As far as duplicating capabilities… give business time, they can’t put the full force of government into a thing. But to suggest they can’t meet or surpass any particular capability is stupid. Really stupid.
The comments in the linked page are a hoot. For example, I was unaware that Newspace is cheaper than NASA because the former uses kerosene as a fuel, rather than the expensive liquid nitrogen used by NASA.
Some support isn’t all that helpful…
bbbeard,
Yes, progress is difficult to tell from a photo.
For example, the uprated SSMEs, the lightweight ET and the glass cockpits the Shuttle Orbiters have today are all major upgrades to the system.
Although the design is 30 years old on the outside, on the inside the Shuttles have had a number of major upgrades, so they are no more the same craft that were flying in 1981 than the B-52’s the U.S.A.F. is flying today are the same as they were when they left Boeing in 1962.
Rand,
[[[But all of its capabilities will be replaced. By “New Space.”]]]
So show me the design for the New Space vehicle that will be repairing satellites, the one with the robotic arm and workbay 🙂
Mr. Matula – The latest versions of all those (except possibly the KC-135, which makes the point in the OP rather than standing against it) bear rather little relation to the earliest versions in terms of efficiency and ease-of-use.
FYI,
Here is a webpage from NASA from 2006 detailing just some of the upgrades that were done to the STS since the first flight in 1981.
So it’s time to kill this new “no innovation” myth that Rand is trying to start about the Space Shuttle. Just like the other legacy systems I listed, the Shuttle has seen its share of modernizations.
You’re right Thomas, there is one capability new space will never match. The ability to stay operational while burning money.
So show me the design for the New Space vehicle that will be repairing satellites, the one with the robotic arm and workbay
See: But to suggest…
Ken,
Perhaps, but Virgin Galactic has been doing a good job of burning money without revenue for the last 7 years….
I made my first cell phone call on one of those Motorola DynaTAC phones.
Rand:
You missed one of the 30-years-ago comparisons.
On one panel you should have had President Carter; on the neighboring panel, President Obama.
Virgin Galactic has been doing a good job of burning money without revenue for the last 7 years
Virgin is a conglomerate. Galactic is in development, not operational. If they do go operational they will only continue if it’s profitable. Unlike government, which does not spend its own money.
Thomas, the NASA link just reinforces the OP’s graphic.
The space shuttle main engines were upgraded to incorporate a large throat main combustion chamber to further improve the reliability of the engine systems and increase engine reliability.
Because the engines were designed too close to the margins, and many at NASA had argued from the beginning that the throat should be widened so they weren’t running at a chamber pressure so close to the limit. This wasn’t an “innovative” change or an evolution; it was a fix to reduce the chance of losing a Shuttle.
Improved nose wheel steering mechanisms and the addition of carbon brakes.
A change required because the original steering mechanism and brakes were inadequate. The main landing gear is also inadequate, with a significant risk of blowing a tire on a hard landing. Unfortunately the designers realized it after the main wing box design had already been approved and sent out, so they couldn’t afford the delay and costs to switch to dual tires, which would’ve required a slightly thicker wing.
Outfitted with a 5th set of cryogenic tanks and an external airlock to support missions to the International Space Station.
The original purpose of the Space Shuttle was to build and support a space station. Somehow they left off the most critical component required for such missions.
An updated avionics system that included advanced general purpose computers, tactical air navigation systems and a solid-state star tracker.
They moved from 1970’s computers using magnetic core memory to newer ones using Intel 80386’s. In part they had to make the switch because they couldn’t keep supporting the original computers, because inventories were running out of the obsolete components. Wisely they quit using HAL/S (a programming language so bizarre that subscripts and superscripts are put on separate lines) and switched to Ada, which means their programmers get to sit at a bigger table at the support conference of Obsolete Computer Languages. Their new navigation system still uses an early 5-channel GPS, and their solid-state star tracker is solid state! By gosh, the thing is transistorized!
Improved auxiliary power units that provide power to operate the space shuttle’s hydraulic systems.
They shouldn’t be “improved”, they should be “removed”. The APUs run at ridiculous RPMs, are failure-prone and troublesome, yet are a flight-critical system. To really improve reliability the system should’ve switched to all-electric control for flight surfaces and engine gimballing, with the electrical power provided by fuel cells, which have proven to be extremely reliable and where it’s much easier and cheaper to incorporate large levels of redundancy.
The list you linked just focused on the orbiter, so it didn’t mention other major improvements to the launch system, such as incorporating another O-ring in the SRBs, or new ways to glue foam to the external tank.
You also mentioned the new lightweight external tank, which uses an aluminum-lithium alloy, and that change hammers home the original point. The only really significant change that greatly improved performance was made to the part that is thrown away. The ET has to maintain a production line, where changes and improvements can be incorporated easily.
The comparison of evolution rates with airliners is a very false analogy. Airliners don’t seem to change much because they’re a very mature technology in a highly developed and competitive field. The 737 is still flying because the design is still good enough to compete in the free market with the latest innovations from Boeing, Airbus, and anyone else who wants to enter the field, and that 737 operators (the world’s airlines) still regard it as a cost competitive solution to their needs. The reason airliners all seem to look the same (most being indistinguishable to the untrained eye) is that the shape and configuration is near a locally optimal solution. Unless some major jump is made to a flying wing or highly blended carbon fiber configuration, all airliners will look and perform like a slightly improved 737.
In contrast, nobody will design something new that looks and operates like the Shuttle, a craft that was merely a first attempt at making a reusable spacecraft, and one that only achieved partial reusability. The Soviets built a clone of it, and rightly decided that it belonged out in a field somewhere, surrounded by weeds. Nobody but the US government could afford to operate it, and even they couldn’t afford to operate it very often.
We should’ve moved on decades ago, matching the design to real requirements and doing so with lessons learned about refurbishment and overhaul costs, perhaps splitting our system into several different craft better tailored to different missions.
But back to the airliner comparison. The Shuttle was promised to have a fast turn-around time and a high flight rate, which is why the country, as a customer, bought it. Suppose you bought one Shuttle on the expectation that you could fly passengers to space every fourteen days, perhaps twenty-one days at the outset. That’s your business model, just as a 737 customer has hard expectations about how many flights per day they need to stay profitable.
Then Columbia is built, and in its first decade of operation the launches are spaced out like this (days between launches):
214, 130, 97, 137, 382, 776, 1304, 154, 327, 185.
Later in its operations, STS-94 was an immediate repeat of the STS-83 mission, which had been cut short. No major configuration changes or mission planning was required. It was still 88 days between flights.
To claim the Shuttle’s lack of evolution is comforting, just like a 737’s apparent stasis, requires us to believe that the Shuttle’s flight-rate, turn-around times, and maintenance and operational costs are as close to optimal for manned space flight as a 737’s are for commercial air travel. If that’s true then we might as well give up on space now, because a couple of dozen humans a year in two-week stints, and payloads at $10,000 a pound and up, is all this planet will ever afford to launch into LEO.
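A quick back-of-the-envelope check on the numbers quoted above (using only the ten intervals listed for Columbia’s first decade, nothing else assumed):

```python
# Days between Columbia's first ten launches, as listed above.
intervals = [214, 130, 97, 137, 382, 776, 1304, 154, 327, 185]

mean_days = sum(intervals) / len(intervals)
print(f"mean turnaround: {mean_days:.0f} days")              # ~371 days
print(f"promised cycle:  14 days ({mean_days / 14:.0f}x slower in practice)")
```

Even the best interval in that list (97 days) is roughly seven times the promised fourteen-day cycle.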
And why in the world would you want a work bay? If there was anything learnt from the brief foray into satellite servicing done by NASA it was that the concept of retrieving a satellite to a bay doesn’t work.
Virgin Galactic has been doing a good job of burning money without revenue for the last 7 years
Virgin is a conglomerate. Galactic is in development, not operational. If they do go operational they will only continue if it’s profitable. Unlike government, which does not spend its own money.
It seems to me the only interesting question is how much money has been spent over the 7 years. And, more interesting, when is the money spent?
E.g., spending 50, 5, 5, 5, 5, 5, 5 million is more (in real terms) than spending 5, 5, 5, 5, 5, 5, 50 million.
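One way to make that concrete is to discount each year’s outlay back to the present. The 5% rate here is purely an assumed, illustrative figure, not anything from Virgin’s books:

```python
# Sketch: same total spend, different timing. Front-loading costs more in
# present-value terms because the big outlay can't sit earning a return.
RATE = 0.05  # assumed annual discount rate (illustrative only)

def present_value(outlays, rate=RATE):
    """Discount a list of annual outlays (year 0 first) back to year 0."""
    return sum(x / (1 + rate) ** year for year, x in enumerate(outlays))

front_loaded = [50, 5, 5, 5, 5, 5, 5]   # $M: big spend in year 0
back_loaded  = [5, 5, 5, 5, 5, 5, 50]   # $M: big spend in year 6

print(f"front-loaded PV: {present_value(front_loaded):.1f}")  # ~75.4
print(f"back-loaded PV:  {present_value(back_loaded):.1f}")   # ~64.0
```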
If only there had been anything like the same kind of customer demand that would’ve driven product innovation (and diversity) in that last category…
…But it’s still possible.
You got too far ahead of yourself showing an intact shuttle landing at the bottom right corner. We don’t yet know that’s the way it’s going to happen. I hope so; the last thing we need is to lose yet another…
Actually, the Atari 5200 did not come out till late ’82. I used to have one. Got a ColecoVision for Christmas in ’82.
Don’t forget, even government can make progress when it absolutely has to. Wars, for instance, tend to make the bullshit walk, hence why we have the M-1A Abrams tank today rather than mules and cavalry horses… Then again, the M-1 does date back 30 years… though it’s been upgraded quite a bit. The proper way to compare airliners isn’t from the outside, but in the cockpit. The airliner for the everyman will always be subsonic, because the average person doesn’t earn enough per hour to justify supersonic flight.
@Mike: I had a Kaypro II, which the image compares to Apple’s latest gadget. The Kaypro was a serious computer. Great keyboard, Z-80 processor, CP/M operating system, dual double-density floppies. I programmed the heck out of it in 8080 assembly language, even writing a Forth interpreter. I still think the 80286 was a kludge.
I met a salesman who wanted to buy an Osborne. I directed him to Kaypro just before Osborne went kaput. He thought I was a genius.
Regarding the Abrams, Obama is about to sell them to Egypt (and the Muslim Brotherhood). This is not the same Egypt we’ve been dealing with in the past. I’m sure Israel is thrilled.
The shuttle had the same design committee as the camel. We may yet see a lifting body that makes sense.
Wisely they quit using HAL/S (a programming language so bizarre that subscripts and superscripts are put on seperate lines) and switched to Ada, which means their programmers get to sit at a bigger table at the support conference of Obsolete Computer Languages.
The shuttle GPCs are still programmed in HAL/S. HAL/S allows multiline format for exponents and subscripts, but does not require it. The shuttle flight software disallows multiline as a matter of style, preferring ‘**’ for exponents and ‘()’ for subscripts, like many other languages still in use.
One example (of many in a long post) of someone bluffing like he knows what he’s talking about but doesn’t.
George,
I disagree with your contention that the shuttle should have switched to electric actuation for flight control and engine gimballing to improve reliability. Just ask the F-35 folks how many problems they’ve had with their EHAs (which are hybrid electric and hydraulic, but the problems have been with the electric half). EMAs are much less efficient than hydraulics in the high power realms in which the Shuttle systems must operate. You would need high voltage systems (probably 270V), which are beyond the expertise of most actuation system vendors at this time. Hydraulic rams are some of the most reliable actuation systems possible, which is why they are used on large commercial aircraft. The only problem is ensuring sufficient redundancy in the hydraulic supply system. I confess that I don’t know how the Shuttle solved that problem, but I see no indication in your post that you know either.
@Nemo, if HAL/S is great, why does nobody else in the world use it except NASA? It can run on an IBM 360 and a Data General Eclipse (I used to own a DG Nova), but what other advantages does it have? Admittedly it’s a big step up from the Apollo GNC, which I tried translating to Intel in a long series of steps, but the entire architecture is bizarre, from the one’s complement notation to the mnemonics. Some things are so obsolete that it’s harder to upgrade them or translate them than to replace them entirely.
I’ve played around a bit with translating the Apollo GNC programs to Allen-Bradley PLC 5000 (a Rockwell product!), since so much of what it does is just reading switches and setting outputs, and a PLC is a vastly better platform for such things, which is why almost every factory and machine tool uses them instead of custom software on a PC platform, which carries much higher development and support costs and higher risks of inadvertent bugs. The ideal would probably be an intermediate system where the high-level GNC software reads data from the PLC, talks directly to the navigation sensors, performs the advanced mathematical calculations, and then writes the results to the PLC. The other advantage of a PLC is that you could monitor, debug, and modify the code as the craft is flying, just as everybody makes factory changes on the fly, sometimes hundreds of changes a day without even shutting anything down.
But all else aside, your rabid defense of HAL/S illustrates what went so, so wrong with the Shuttle program and NASA in general. They’ve ended up living in their own little bubble, where everything they do is “advanced”, even though the rest of the world long ago rolled past them. Even the new Shuttle computers can’t calculate the craft’s moments of inertia as the robot arm moves around, which you’d think would be a pretty basic thing for a spacecraft with a robot arm. I’m sure that like IBM, they have vast books of software management, tallying every lesson learned from every project, and following those books guarantees cost overruns, delays, and market failure.
So yes, I don’t program in HAL/S, and I no longer program in Fortran, Forth, APL, PL/I, Pascal, Modula-2, Ada, or COBOL, even though I used to. It’s an early 1970’s language that died on the vine, preserved in amber in the one place that’s still stuck in the 1970’s.
@cthulhu, fighter jets have to get the electric supply from the same place they get their hydraulics, from gears hooked to the engine turbine. If you’ve got an air-breathing engine running all the time, you’ve also got a way to run your hydraulic pumps. On the Shuttle the hydraulics have to have their own dedicated turbine to drive the hydraulic pumps, and that turbine is fed by hypergolic fuels. The Shuttle already has fuel cells that have to run all the time (or the flight computers and lots of other mission critical systems would fail), and having the flight controls run hydraulically off a turbo pump means there has to be yet another mission critical system, one that’s equivalent to a rocket engine turbo-pump. At the time the Shuttle was designed, electric actuation was a bit beyond the state of the art, but it isn’t any longer. Now it’s cutting edge. But an even simpler approach would have the existing hydraulic system powered by an electric motor (running off the fuel cells) to run the hydraulic pump.
It would eliminate the troublesome APU’s, where a catastrophic failure at 35,000 RPM would create a very bad situation.
George – I’ll just point out that what you originally said was “To really improve reliability the system should’ve swtiched to all-electric control for flight surfaces and engine gimballing”; that’s a far cry from “an even simpler approach would have the existing hydraulic system powered by an electric motor (running off the fuel cells) to run the hydraulic pump”.
@George Turner: So yes, I don’t program in HAL/S, and I no longer program in Fortran, Forth, APL, PL/I, Pascal, Modula-2, Ada, or COBOL, even though I used to.
I’m just curious — what language do you prefer to use for programming these days?
FWIW I don’t do real-time applications. I tend to use R as a data handling environment. For scientific programming I use Fortran. Once upon a time the choice of Fortran was justified by performance considerations, nowadays other languages have pretty good optimizing compilers, so I have to say it’s just habit now.
I suppose it’s an interesting question why aerospace in general lags behind the commercial electronics industry. I recall being baffled in the mid-1980s that the engine companies had not gone over to completely digital engine controls. The hydromechanical engine controls of the day were absolutely Byzantine. Aerospace as a culture seems to be incapable of changing on the timescales typical of consumer electronics. (Imagine a sci-fi world where NASA upgrades the electronics suite of the shuttle every year, in order to incorporate the fastest and most capable processors!) Do you think NewSpace companies will be different? Or do you think we will someday think of the Falcon 9 as antiquated because it uses chips designed in the mid-2000s?
bbbeard,
[[[I suppose it’s an interesting question why aerospace in general lags behind the commercial electronics industry.]]]
The need to document changes and show they will not result in an increased risk of failure. For example, in terms of electronics in space, you need to prove it will survive radiation, which is why the chips used in space always lag those found in consumer electronics. That is why many of the systems on the Shuttle are “outdated”: because of the cost and time needed to prove the new ones will be at least as safe.
George,
No, it doesn’t. Like all aerospace systems, the Shuttle has undergone regular modernization as the technology advances. Today’s Shuttle carries more payload than the version first launched in 1981, which is an improvement. And it would be capable of doing more if Congress had funded additional upgrades like the Liquid Flyback Boosters proposed in the 1990’s.
Like the B-52, the Shuttle just kept on trucking, doing jobs that no other vehicle is capable of, or will be for many, many years.
Trent,
One slide is interesting, the one showing how the Dragon could be used to crash the Hubble into the ocean. And this illustrates the difference between the new generation of capsules and the Shuttle Orbiter. All the Dragon is able to do with a malfunctioning satellite like the Hubble is dump it in the ocean. By contrast, the Shuttle has the capability to return Hubble to Earth, first for research and then to be placed in a museum. It already demonstrated that capability with a pair of comsats in the 1980’s.
For example, in terms of electronics in space, you need to prove it will survive radiation, which is why the chips used in space always lag those found in consumer electronics.
These days it’s not merely survivability – often the geometries are designed with redundancies and physical separations to avoid single-event effects (e.g., if your RAM gets 4x denser, a neutron can take out 4x the data…): a whole new expensive set of microcircuits that take years to develop and qualify. The commercial sector will have moved on to the next big thing in the meantime, and your newly-qualified component is now “legacy.”
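One classic mitigation in that family – shown here only as an illustration of the idea, not as how any particular flight part does it – is triple modular redundancy: keep three copies of each value and take a bitwise majority, so a single upset bit in one copy is outvoted by the other two:

```python
# Illustrative triple modular redundancy (TMR): each bit of the result takes
# the value held by at least two of the three copies.
def majority_vote(a: int, b: int, c: int) -> int:
    return (a & b) | (a & c) | (b & c)

stored = 0b1011_0010
upset = stored ^ 0b0000_1000           # single-event upset flips one bit in one copy
recovered = majority_vote(stored, upset, stored)
assert recovered == stored             # the two clean copies outvote the flipped bit
```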
I feel obligated to point out how unreliable consumer electronics are. It’s the elephant in the room if you are considering incorporating bleeding-edge technology in your rocket ship. It’s not exactly why aerospace lags by decade-scale times, but it is indicative that there are different requirements for stuff that flies and stuff that crashes.
@cthulhu: I did originally say switching to all-electric would be better, but going halfway would be simpler, since it wouldn’t involve re-engineering the connections to the flight control surfaces and such.
Using an APU sounds simple, until you try to design one that works in space. The fuel has to be heated so it doesn’t freeze (so dual redundant fuel heaters) and pressurized with a nitrogen system. And of course the fuel system has to have fill ports, drain ports, vents, and valves. The lubricating oil has to be heated so it doesn’t freeze, too. It also has to be cooled, which requires a water spray boiler. The boiler has to have a big water tank, which has to be heated so it doesn’t freeze. After the water boils it has to be vented. So more dual redundant heaters, nitrogen supplies, valves, pressure controllers, fill ports, drain ports, vents, etc. And of course you also have to keep the hydraulic fluid warm. All of this has to work in high-G, zero-G, hard vacuum, extreme heat, extreme cold. It’s one of those things where the idea was simple (just use an APU to run the hydraulics), but the required complexity to implement the solution necessarily exploded on them.
Google up some of the APU system diagrams. Going all-electric would replace not a simple APU system that you’d find on a jet, but a system with layers and layers of complexity to manage four different fluids (fuel, oil, hydraulic fluid, and water) and three gases (nitrogen pressurization, turbine exhaust, and steam). As I recall, even the water boiler quits working as they get low enough in the atmosphere, so there’s yet another cooling system somewhere in the loop.
This is a very unfair comparison, the Shuttle may look the same on the outside but it’s changed a lot. It may have the same old flight frame but at least they got the flight rate up… Err, at least they got the reliability up… Err, at least they got the delays under control… Err, at least they lowered the overall operating costs…
Mr. Goodfellow, if the final landing goes well, you could say they finally got it to stop exploding…
A few notes from someone who programmed SPL then HAL/S then Ada then …
We went through a stupid phase when the language was thought to be necessary for the coding of reliable software. Just like when we thought we needed to build secure computing systems from the gate level on up.
We learned the hard way that it was methodology of programming and test that mattered, and that the volume of use meant that specialized *anything* in computer technology actually increased cost and decreased reliability.
Many of us pointed this out back then and were overridden. No one at that time could remotely imagine how omnipresent computer systems would become, so they couldn’t see the scaling. This is related to the problem afflicting aerospace overall.
You should listen to George Turner’s remarks very carefully. I don’t agree with some of his rationale, but he’s dead-on in his review of Shuttle “improvements”.
The more substantive ones are the ones that were not done, including what Thomas Matula brings up with the Liquid Flyback Boosters. The reason these didn’t happen was the “dead man walking” view of the Shuttle in Congress (both aisles) – they planned on the obsolescence of the Shuttle while still flying it, without regard for anything else. Why invest when the Shuttle might kill another crew and then be permanently shut down, never allowing a return to be garnered on the investment?
The root of the problem was overspending on the original Shuttle and under-justification – the military use was a political contrivance that forced an insane series of decisions that led directly to the Shuttle’s conclusion – it was dead once it was finished. It was kept alive solely by political accommodation, and barely improved enough to keep from being too obviously deadly to crews. All along, the same political forces were/are scheming to inherit the next HSF boondoggle that replaces it.
The reason we haven’t done better is that greed and power wrestle with fate over next steps, with no one getting the upper hand. Meanwhile, the world has changed, and the knife in the dark that ends it is the senescence of the technology and the primes it is linked to.
-nooneofconsequence.
PS. nemo is mostly right to my knowledge about Shuttle use of HAL/S. HAL/S was used on other avionics and space projects with other considerations. I remember mostly “**” ala Fortran. Too much converted Fortran-4 code.
Once you get this kind of software running and refined, it never, ever can be changed. In some parts of the aerospace/military world, there are virtual machines/emulators that run other VMs/emulators… that run old codes. There was once an IBM 7094 at the Army’s BRL kept for 40 years because of certain codes for weapons. Many, many things like this.
We learned the hard way that it was methodology of programming and test that mattered, and that the volume of use meant that specialized *anything* in computer technology actually increased cost and decreased reliability.
There is a lot of truth in that. I’m wondering if you have any theories about why commercial software these days is so crappy? I don’t just mean Windows and Windows programs. Just in the last six months, I have suffered through forced “upgrades” in my DVR and Android phone software. The DVR upgrade pushed the “closed captioning” setup option to an inscrutable location (“Subtitles”) requiring about 20 button presses instead of 3 — and they also broke the picture-in-picture feature. The Android upgrade completely broke voice recognition, which was one of the neat features of Android that really worked… until June. And yeah, Windows, too — just bought a netbook that came with Windows 7 Starter, which is about as crippled an operating system as I’ve used since, oh, 1983 and the CDC Cyber 205 with NOS/BE. And the list keeps growing….
@bbbeard:
I’m just curious — what language do you prefer to use for programming these days?
Since the 1980’s, until fairly recently, I’ve probably used ‘C’ 90% of the time for PC projects, whether on DOS, Windows, or Unix boxes. Other projects have required assembly language on microcontrollers or the occasional oddball customer requirement. For general factory automation, machine control, or robotics projects I am almost always using a PLC. I used to do a lot of those projects on PCs, when those started becoming commonplace, but like the rest of the automation industry, we found that the lead time, start-up, and long-term support in any algorithmic language were much more difficult than going with a PLC, where writing the logic to control thousands of inputs and outputs is trivial, as is migrating the code to a different PLC, troubleshooting on the fly, and making changes while the code is still running.
It’s not at all uncommon to get called to some factory I’ve never heard of to fix an issue or make a change, hook up to the PLC, make significant program changes, verify them, and be done in the same evening. With a PC it would generally take several weeks just to figure out the code and get the compiler working, even if I was the one who’d written the program years ago. Since the late 80’s it’s been commonplace to rip out old PC controlled systems and replace them with PLC’s, which we can usually get up and running over a weekend.
I have no idea why aerospace hasn’t gone that route, especially since their factories are PLC controlled, so they have to know that a whole plant can be programmed in a couple months and will then run 24/7 for decades without a glitch, and the program can be expanded or rewritten while the factory is still in operation without any serious risk of downtime, although I did once shut a Dell plant down for 20 minutes on a fundamental configuration change that was a beyond state of the art issue, one that even surprised Allen-Bradley. PC’s have mostly been relegated to a middle layer of monitoring, communications, interfacing to mainframes, and of course user interfaces.
NASA is probably afflicted with some of the same problems that affect IBM, vast books of knowledge and rules about how software must be written. I’m sure many here have some hair-raising IBM stories. I know I do.
A management system can stifle creativity, innovation, and in such places making even tiny program changes to fix irritating and obvious bugs becomes harder than amending the Constitution. As a new graduate adapts to that environment, the code becomes a holy relic instead of a blackboard reflecting a work in progress. Every change has institutional ramifications, cadres of managers to consult, manuals that have to be rewritten, exceptions that must be granted. The easiest part of the vehicle to modify (with no weight penalty) becomes the hardest, since an airfoil change can be modeled and quantified, but a trivial software change could just blow up, killing everyone on board, since software can do anything.
One of the advantages of getting away from direct PC control in critical applications is that a software bug can’t do anything imaginable. If the PC gets stuck in an infinite loop then the PLC stops getting updates from the PC, but all of the buttons, switches, sensors, modes, and critical logic are still working. If you’ve partitioned the system’s functions accordingly, the ship will have lost some higher-level navigation functions (GPS, radar) but still has all its lower brain functions and survival instincts intact, and can perform a safe abort or just hang on for a reboot of the PC. That provides a huge advantage to the PC programmers, because their code goes from being mission-critical LVAC that they can’t touch without an act of Congress to the inflight equivalent of Facebook access. They are freed to constantly improve, innovate, and move back out on the cutting edge.
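A minimal sketch of that partitioning, with all names and the timeout value invented for illustration: the low-level controller runs its own scan loop and simply stops trusting the supervisory PC once its heartbeat goes stale:

```python
import time

# Hypothetical split: the PLC-side controller keeps its survival logic local
# and falls back to a safe mode if the supervisory PC stops checking in.
HEARTBEAT_TIMEOUT = 0.5  # seconds; illustrative value

class LowLevelController:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.command = "safe-hold"

    def receive_update(self, command):
        """Called whenever the supervisory PC sends a fresh command."""
        self.command = command
        self.last_heartbeat = time.monotonic()

    def scan(self):
        """One control cycle: ignore stale high-level commands."""
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            return "safe-hold"   # PC hung or rebooting: keep low-level functions
        return self.command
```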
Meanwhile the PLC side, where changes to running, critical systems are routine (lots of factory automation can rip a human in half), lets the other set of programmers ask the pilot, “Hey, the TACOM switch isn’t being used for anything now. Can you think of something you’d like that switch to do?” Five minutes later the switch will do something else. That function will get noted and the switch will get a new label. It could be anything from pizza-preheat to half-gain on the re-entry controls. It’s just bits, the things programmers play with to keep users happy.
Anyway, in answer to your original question, lately I’ve been using Visual Basic Express 2010. I think the lack of big institutional users and the lack of a defined standard has left some folks at Microsoft free to keep updating the language’s fundamentals to reflect what programmers really need to accomplish. It’s odd for me because I always used to laugh and snort at VB, but I suppose I’m in the same boat as Rand defending Obama’s commercial space policy.
… why commercial software these days is so crappy?
Good software development organizations tend to do themselves out of business because they make it look too easy. HR, which as usual is “dumb as a stump”, often has too much say in the staffing-up of new parts of the organization. Management only figures this out when the support cycle surges, and then they’re bailing water attempting to rectify a total CF that they let happen because they didn’t do their jobs in overseeing the hiring process in the first place.
I have no idea why aerospace hasn’t gone that route, especially since their factories are PLC controlled
The software talent they hire is very parochial – they only know x86 and C++. When all you have is a hammer, everything looks like a nail.
We had a hard time introducing PLCs to the original mainframe crowd. Only the CDC folks understood PPUs and could bridge to controllers and DSPs – the IBM guys didn’t. Only through military projects like radar did we get good DSP skills brought in.
On the science side, much of the Fortran programming persists still as IDL. The newbie students I work with develop in Python and translate into “accepted” languages on demand to satisfy requirements. It’s a very different world.
The best embedded systems work involves elaborate test frameworks and regression testing – that’s how you tell the maturity of a critical software development organization.
In my corner of software development (my PhD is in computational nuclear physics) there is a lot of cultural inertia favoring Fortran. Not so much because of legacy code, but just because it gets the job done while giving you explicit control of memory allocation, etc…. I have to admit I never quite got the hang of figuring out what object-oriented code is doing in terms of compute cycles, memory allocation, etc., which are things you have to control when you are developing new algorithms. I never had a Fortran code develop a memory leak or bring down the operating system.
I have on occasion developed programs that used Visual Basic to provide a graphical front end while a Fortran DLL was used for the number-crunching. But the life cycle of VB is such that code becomes obsolete really quickly. I would spend all my time updating and maintaining code if I tried to make that my standard. I have Fortran applications from 30 years ago that I could dust off and compile with g77 and run today. But I have no idea how to update my VB4 code to work with VBExpress 2010 — and no guarantee that the code would work in 2012. Add to that the headache of perpetually-changing IDEs and I just can’t afford the investment in time to re-learn.
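The same front-end/compute-kernel split can be sketched in Python with ctypes. Here the C math library stands in for a Fortran DLL (no actual DLL from the text is available); the pattern that matters is the same: a thin interface layer delegating the numerics to a compiled library through a C-callable boundary.

```python
from ctypes import CDLL, c_double
from ctypes.util import find_library

# Load a compiled number-crunching library. libm is a stand-in here for
# the kind of Fortran DLL described in the comment above.
libm = CDLL(find_library("m"))
libm.sqrt.restype = c_double
libm.sqrt.argtypes = [c_double]

def crunch(x):
    # Front-end code stays thin and delegates the math to the compiled kernel.
    return libm.sqrt(x)
```

The front end can then be rewritten or replaced without touching the kernel, which is exactly why the pattern survives language churn better than a monolithic GUI application.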
Yes, Fortran is very stable. Years ago I tried to convert a bunch of Fortran finite element analysis routines to C and C just can’t handle arrays quite as well, in terms of allocation. I eventually gave up on it. On top of memory leaks and blown stacks, C is also horrible about exception handling and is happy with overflows and such, which is why lots of Unix programs are easily crashed with unexpected inputs. As The Unix Haters Handbook (PDF) puts it, Unix isn’t a real operating system in part because C isn’t a real language.
If you have VB4 code you might as well start over. Heck, a few months ago I was beating my head against a wall trying to get VB2003 code to compile in VB2005. They’d decided that different threads shouldn’t be able to interact as intimately as I’d been doing, and it took a major rewrite to work around the issue. But VB is great for simple little front ends that you won’t have to maintain indefinitely.
I also have to agree about some of the fancy object oriented code. Who knows what’s really going on at run time?
I have to admit I never quite got the hang of figuring out what object-oriented code is doing in terms of compute cycles, memory allocation, etc., which are things you have to control when you are developing new algorithms.
Across the room I’m watching an undergraduate student take apart a science satellite’s sensor test code, most of which is written in horribly bad Fortran. I know because I wrote horribly bad Fortran code at NASA, and was compelled to make it work on various inconsistently implemented floating point processors prior to IEEE floating point standardization, some of which I was involved with.
She takes apart the code, re-implements it in Python, proves it, puts it into an object-oriented structure, numerically qualifies it with exhaustive correlations, then translates it into the final accepted language (which has changed) and reruns the correlations.
Large portions of the Fortran code have been found to be simply wrong. Some parts are partial, inconsistent copies of earlier versions of similar codes. Often there are offsetting errors, where the “original” program author knew there was an error but couldn’t find it in the code, so simply added more corrections until the result came out right.
It is slow and tedious work. In the end, a reliable, well-documented and understood code emerges. The scientists involved subject it to even more proofs, but in the end, rely on it more than what they started with.
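The numerical-qualification step described above amounts to running both implementations over the same inputs and bounding the worst disagreement. A minimal sketch, with hypothetical stand-in functions for the legacy routine and the rewrite:

```python
# Stand-ins for the legacy Fortran routine and its Python rewrite.
# Both are invented for illustration; in practice the legacy side would
# be the old code run as-is.

def legacy_calibration(x):
    return 2.0 * x + 1.0

def reimplemented_calibration(x):
    return 2.0 * x + 1.0

def correlate(f_old, f_new, inputs, tol=1e-9):
    # Exhaustive correlation: worst-case disagreement over the input set.
    worst = max(abs(f_old(x) - f_new(x)) for x in inputs)
    return worst <= tol, worst

ok, worst = correlate(legacy_calibration, reimplemented_calibration,
                      [i * 0.1 for i in range(1000)])
```

When the correlation fails, the disagreement itself is the finding: either the rewrite has a bug, or (as described above) the legacy code was wrong all along.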
They do not, however, use the version developed directly. That is because the mass of related software it must be used in concert with is still in obsolete forms – there is no budget to address those.
I never had a Fortran code develop a memory leak or bring down the operating system.
I remember fixing a bug in the memory allocator of the Fortran runtime for LLNL decades ago, on a “non-critical code”. They do happen. It was on a massively parallel processor. There’s nothing magic here – just a question of how much something is used.
Google routinely runs >1,000x larger processor clusters running Python – it’s an even larger tested environment than LLNL has ever had. And it’s growing at a rate that means it will always be larger. And yes, the data-driven parts use floating point rather extensively.
Fortran “runtime” is relatively unreliable. You are simply measuring inertia.
nooneofconsequence, I knew that if a person got good enough at programming then watching female undergrads would become part of the job! I knew it!
They forgot the B-52, KC-135, Huey, C-130, Boeing 737, Boeing 747 – oh wait those are all 40 to 60 year old systems still doing their job.
It’s interesting to think many of the journalists probably flew in commercial aircraft designed a decade or more before the Shuttle that are still in production 🙂
That should be “flew to Florida to view the launch”.
However, the key point is that different technologies are at different points in their life cycles, which is what really determines progress. Not to mention the Shuttle’s replacement will be a capsule similar to the ones used before the Shuttle era. Even SNC’s lifting body is a throwback to NASA X-craft from the 1960’s.
None of the vehicles flying to and from orbit in the next few years will be the “Shuttle’s replacement.”
Ah, playing word games again. But I am glad you admit that New Space is incapable of replacing the Shuttle.
OK let’s change that to the “new” systems being designed to transport astronauts to orbit are all throwbacks to the 1960’s.
Basically the equivalent of a contractor “retiring” their Ford F-350 pickup and switching to driving a VW Bug because they are no longer rich enough to afford to drive the F-350.
I am glad you admit that New Space is incapable of replacing the Shuttle.
Why would that be an “admission”? I’ve never claimed that it would, or should. No one can, or should replace the Shuttle. The very concept was flawed. But all of its capabilities will be replaced. By “New Space.”
OK let’s change that to the “new” systems being designed to transport astronauts to orbit are all throwbacks to the 1960′s.
The costs aren’t.
They forgot the B-52, KC-135, Huey, C-130, Boeing 737, Boeing 747 – oh wait those are all 40 to 60 year old systems still doing their job.
With major upgrades I believe.
737-100 (First flight April 1967)
737-900 (Still in service)
44 years of progress is sometimes hard to tell in a photo.
>> But I am glad you admit that New Space is incapable of replacing the Shuttle.
An easy admission, as STS’s only real function was keeping an army of civil servants and contractors employed.
It’s a funny analogy, folks. No analogy is perfect. But it makes the point. 30 years is a long time to be stagnant.
The real point is behind the pictures. The other pictures are of commercial products. We all know the story of the government’s competition with the Wright brothers, don’t we?
In most cases a long life indicates a successful design. The shuttle was a very wasteful jobs program. That’s opportunity cost lost. As far as duplicating capabilities… give business time, they can’t put the full force of government into a thing. But to suggest they can’t meet or surpass any particular capability is stupid. Really stupid.
The comments in the linked page are a hoot. For example, I was unaware that Newspace is cheaper than NASA because the former uses kerosene as a fuel, rather than the expensive liquid nitrogen used by NASA.
Some support isn’t all that helpful…
bbbeard,
Yes, progress is difficult to tell from a photo.
For example, the uprated SSMEs, the lightweight ET, and the glass cockpits the Shuttle Orbiters have today are all major upgrades to the system.
Although the design is 30 years old on the outside, on the inside the Shuttles have had a number of major upgrades, so they are no more the same craft that were flying in 1981 than the B-52’s the U.S.A.F. is flying today are the same as they were when they left Boeing in 1962.
Rand,
[[[But all of its capabilities will be replaced. By “New Space.”]]]
So show me the design for the New Space vehicle that will be repairing satellites, the one with the robotic arm and workbay 🙂
Mr. Matula – The latest versions of all those (except possibly the KC-135, which rather makes the point in the OP instead of standing against it) bear rather little relation to the earliest versions in terms of efficiency and ease-of-use.
FYI,
Here is a webpage from NASA from 2006 detailing just some of the upgrades that were done to the STS since the first flight in 1981.
http://www.nasa.gov/mission_pages/shuttle/main/shuttle_evolves.html
It Only Looks Like the Original
So it’s time to kill this new “no innovation” myth that Rand is trying to start about the Space Shuttle. Just like the other legacy systems I listed, the Shuttle has seen its share of modernizations.
You’re right Thomas, there is one capability new space will never match. The ability to stay operational while burning money.
So show me the design for the New Space vehicle that will be repairing satellites, the one with the robotic arm and workbay
See: But to suggest…
Ken,
Perhaps, but Virgin Galactic has been doing a good job of burning money without revenue for the last 7 years….
I made my first cell phone call on one of those Motorola DynaTAC phones.
Rand:
You missed one of the 30-years-ago comparisons.
On one panel you should have had President Carter; on the neighboring panel, President Obama.
Virgin Galactic has been doing a good job of burning money without revenue for the last 7 years
Virgin is a conglomerate. Galactic is in development, not operational. If they do go operational they will only continue if it’s profitable. Unlike government, which does not spend its own money.
Thomas, the NASA link just reinforces the OP’s graphic.
Because the engines were designed too close to the margins, and many at NASA had argued from the beginning that the throat should be widened so they weren’t running at a chamber pressure so close to the limit. This isn’t an “innovative” change or evolution, it was a fix to reduce the chance of losing a Shuttle.
A change required because the original steering mechanism and brakes were inadequate. The main landing gear is also inadequate, with a significant risk of blowing a tire on a hard landing. Unfortunately the designers realized it after the main wing box design had already been approved and sent out, so they couldn’t afford the delay and costs to switch to dual tires, which would’ve required a slightly thicker wing.
The original purpose of the Space Shuttle was to build and support a space station. Somehow they left off the most critical component required for such missions.
They moved from 1970’s computers using magnetic core memory to newer ones using Intel 80386’s. In part they had to make the switch because they couldn’t keep supporting the original computers, because inventories were running out of the obsolete components. Wisely they quit using HAL/S (a programming language so bizarre that subscripts and superscripts are put on separate lines) and switched to Ada, which means their programmers get to sit at a bigger table at the support conference of Obsolete Computer Languages. Their new navigation system still uses an early 5-channel GPS, and their solid-state star tracker is solid state! By gosh, the thing is transistorized!
They shouldn’t be “improved”, they should be “removed”. The APUs run at ridiculous RPMs, are failure-prone and troublesome, yet are a flight-critical system. To really improve reliability the system should’ve switched to all-electric control for flight surfaces and engine gimballing, with the electrical power provided by fuel cells, which have proven to be extremely reliable and where it’s much easier and cheaper to incorporate large levels of redundancy.
The list you linked just focused on the orbiter, so it didn’t mention other major improvements to the launch system, such as incorporating another O-ring in the SRBs, or new ways to glue foam to the external tank.
You also mentioned the new lightweight external tank, which uses aluminum-lithium alloy, and that change hammers home the original point. The only really significant change that greatly improved performance was made to the part that is thrown away. The ET has to maintain a production line, where changes and improvements can be incorporated easily.
The comparison of evolution rates with airliners is a very false analogy. Airliners don’t seem to change much because they’re a very mature technology in a highly developed and competitive field. The 737 is still flying because the design is still good enough to compete in the free market with the latest innovations from Boeing, Airbus, and anyone else who wants to enter the field, and that 737 operators (the world’s airlines) still regard it as a cost competitive solution to their needs. The reason airliners all seem to look the same (most being indistinguishable to the untrained eye) is that the shape and configuration is near a locally optimal solution. Unless some major jump is made to a flying wing or highly blended carbon fiber configuration, all airliners will look and perform like a slightly improved 737.
In contrast, nobody will design something new that looks and operates like the Shuttle, a craft that was merely a first attempt at making a reusable spacecraft, and one that only achieved partial reusability. The Soviets built a clone of it, and rightly decided that it belonged out in a field somewhere, surrounded by weeds. Nobody but the US government could afford to operate it, and even they couldn’t afford to operate it very often.
We should’ve moved on decades ago, matching the design to real requirements and doing so with lessons learned about refurbishment and overhaul costs, perhaps splitting our system into several different craft better tailored to different missions.
But back to the airliner comparison. The Shuttle was promised to have a fast turn-around time and a high flight rate, which is why the country, as a customer, bought it. Suppose you bought one Shuttle on the expectation that you could fly passengers to space every fourteen days, perhaps twenty-one days at the outset. That’s your business model, just as a 737 customer has hard expectations about how many flights per day they need to stay profitable.
Then Columbia is built, and in its first decade of operation the launches were spaced out like this (days between launches):
214, 130, 97, 137, 382, 776, 1304, 154, 327, 185.
Later in its operations, STS-94 was an immediate repeat of the STS-83 mission, which had been cut short. No major configuration changes or mission planning was required. It was still 88 days between flights.
To claim the Shuttle’s lack of evolution is comforting, just like a 737’s apparent stasis, requires us to believe that the Shuttle’s flight-rate, turn-around times, and maintenance and operational costs are as close to optimal for manned space flight as a 737’s are for commercial air travel. If that’s true then we might as well give up on space now, because a couple of dozen humans a year in two-week stints, and payloads at $10,000 a pound and up, is all this planet will ever afford to launch into LEO.
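For what those interval numbers mean against the promised two-week turnaround, the arithmetic is simple:

```python
# Days between Columbia's first ten launches, from the list above.
intervals = [214, 130, 97, 137, 382, 776, 1304, 154, 327, 185]

mean_days = sum(intervals) / len(intervals)   # 370.6 days on average
best_days = min(intervals)                    # 97 days at best
slowdown = mean_days / 14                     # vs. the promised 14-day cadence, ~26x
```

Even the best turnaround in the decade was roughly seven times slower than the cadence the vehicle was sold on.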
Tom, http://quantumg.net/spacex_dragon_as_inorbit_servicing_platform.pdf
~$80M each.
And why in the world would you want a work bay? If there was anything learnt from the brief foray into satellite servicing done by NASA it was that the concept of retrieving a satellite to a bay doesn’t work.
Virgin Galactic has been doing a good job of burning money without revenue for the last 7 years
Virgin is a conglomerate. Galactic is in development, not operational. If they do go operational they will only continue if it’s profitable. Unlike government, which does not spend its own money.
It seems to me the only interesting question is how much money has been spent over the 7 years. And more interesting, when is the money spent.
E.g., 50, 5, 5, 5, 5, 5, 5 million is more than 5, 5, 5, 5, 5, 5, 50 million
If only there had been anything like the same kind of customer demand that would’ve driven product innovation (and diversity) in that last category…
…But it’s still possible.
You got too far ahead of yourself showing an intact shuttle landing at the bottom right corner. We don’t yet know that’s the way it’s going to happen. I hope so; the last thing we need is to lose yet another…
Actually, the Atari 5200 did not come out till late 82. I used to have one. Got a Colecovision for Christmas in 82.
Don’t forget, even government can make progress when it absolutely has to. Wars, for instance, tend to make the bullshit walk, hence why we have the M1 Abrams tank today rather than mules and cavalry horses… Then again, the M1 does date back 30 years… though it’s been upgraded quite a bit. The proper way to compare airliners isn’t from the outside, but in the cockpit; the airliner for the everyman will always be subsonic because the average person doesn’t earn enough per hour to justify supersonic flight.
@Mike: I had a Kaypro II, which the image compares to Apple’s latest gadget. The Kaypro was a serious computer. Great keyboard, Z-80 processor, CP/M operating system, dual double-density floppies. I programmed the heck out of it in 8080 assembly language, even writing a Forth interpreter. I still think the 80286 was a kluge.
I met a salesman who wanted to buy an Osborne. I directed him to Kaypro just before Osborne went kaput. He thought I was a genius.
Regarding Abrams, Obama is about to sell them to Egypt (and the Muslim brotherhood.) This is not the same Egypt we’ve been dealing with in the past. I’m sure Israel is thrilled.
The shuttle had the same design committee as the camel. We may yet see a lifting body that makes sense.
The shuttle GPCs are still programmed in HAL/S. HAL/S allows multiline format for exponents and subscripts, but does not require it. The shuttle flight software disallows multiline as a matter of style, preferring ‘**’ for exponents and ‘()’ for subscripts, like many other languages still in use.
One example (of many in a long post) of someone bluffing like he knows what he’s talking about but doesn’t.
George,
I disagree with your contention that the shuttle should have switched to electric actuation for flight control and engine gimballing to improve reliability. Just ask the F-35 folks how many problems they’ve had with their EHAs (which are hybrid electric and hydraulic, but the problems have been with the electric half). EMAs are much less efficient than hydraulics in the high power realms in which the Shuttle systems must operate. You would need high voltage systems (probably 270V), which are beyond the expertise of most actuation system vendors at this time. Hydraulic rams are some of the most reliable actuation systems possible, which is why they are used on large commercial aircraft. The only problem is ensuring sufficient redundancy in the hydraulic supply system. I confess that I don’t know how the Shuttle solved that problem, but I see no indication in your post that you know either.
@Nemo, if HAL/S is great, why does nobody else in the world use it except NASA? It can run on an IBM 360 and a Data General Eclipse (I used to own a DG Nova), but what other advantages does it have? Admittedly it’s a big step up from the Apollo GNC, which I tried translating to Intel in a long series of steps, but the entire architecture is bizarre, from the one’s complement notation to the mnemonics. Some things are so obsolete that it’s harder to upgrade them or translate them than to replace them entirely.
I’ve played around a bit with translating the Apollo GNC programs to Allen-Bradley PLC 5000 (a Rockwell product!), since so much of what it does is just reading switches and setting outputs, and a PLC is a vastly better platform for such things, which is why almost every factory and machine tool uses them instead of custom software on a PC platform, which carries much higher development and support costs and higher risks of inadvertent bugs. The ideal would probably be an intermediate system where the high-level GNC software reads data from the PLC and talks directly to the navigation sensors, performs the advanced mathematical calculations, and then writes the results to the PLC. The other advantage of a PLC is that you could monitor, debug, and modify the code as the craft is flying, just as everybody makes factory changes on the fly, sometimes hundreds of changes a day without even shutting anything down.
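That intermediate architecture (the PLC owns the I/O register table; the high-level layer only reads, computes, and writes back) can be sketched in Python. The register names and the control law here are invented for illustration, not taken from any real GNC code:

```python
import math

# Hypothetical PLC register table: switches and sensors owned by the PLC
# scan, plus an output register the high-level layer writes.
registers = {
    "SW_RCS_ENABLE": 1,       # a cockpit switch, read by the PLC scan
    "SENSOR_PITCH_RAD": 0.05, # a navigation sensor value
    "OUT_GIMBAL_CMD": 0.0,    # written back by the GNC layer
}

def gnc_pass(regs, gain=0.8):
    # High-level layer: read registers, do the math, write results back.
    # It never touches physical I/O directly; the PLC owns that.
    if regs["SW_RCS_ENABLE"]:
        error = regs["SENSOR_PITCH_RAD"]
        regs["OUT_GIMBAL_CMD"] = -gain * math.sin(error)

gnc_pass(registers)
```

The point of the split is that the PLC keeps scanning switches and driving outputs even if the high-level pass stalls, and the high-level math can be debugged or changed without disturbing the I/O layer.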
But all else aside, your rabid defense of HAL/S illustrates what went so, so wrong with the Shuttle program and NASA in general. They’ve ended up living in their own little bubble, where everything they do is “advanced”, even though the rest of the world long ago rolled past them. Even the new Shuttle computers can’t calculate the craft’s moments of inertia as the robot arm moves around, which you’d think would be a pretty basic thing for a spacecraft with a robot arm. I’m sure that like IBM, they have vast books of software management, tallying every lesson learned from every project, and following those books guarantees cost overruns, delays, and market failure.
So yes, I don’t program in HAL/S, and I no longer program in Fortran, Forth, APL, PL/I, Pascal, Modula-2, Ada, or COBOL, even though I used to. It’s an early 1970’s language that died on the vine, preserved in amber in the one place that’s still stuck in the 1970’s.
@cthulhu, fighter jets have to get the electric supply from the same place they get their hydraulics, from gears hooked to the engine turbine. If you’ve got an air-breathing engine running all the time, you’ve also got a way to run your hydraulic pumps. On the Shuttle the hydraulics have to have their own dedicated turbine to drive the hydraulic pumps, and that turbine is fed by hypergolic fuels. The Shuttle already has fuel cells that have to run all the time (or the flight computers and lots of other mission critical systems would fail), and having the flight controls run hydraulically off a turbo pump means there has to be yet another mission critical system, one that’s equivalent to a rocket engine turbo-pump. At the time the Shuttle was designed, electric actuation was a bit beyond the state of the art, but it isn’t any longer. Now it’s cutting edge. But an even simpler approach would have the existing hydraulic system powered by an electric motor (running off the fuel cells) to run the hydraulic pump.
It would eliminate the troublesome APU’s, where a catastrophic failure at 35,000 RPM would create a very bad situation.
George – I’ll just point out that what you originally said was “To really improve reliability the system should’ve swtiched to all-electric control for flight surfaces and engine gimballing”; that’s a far cry from “an even simpler approach would have the existing hydraulic system powered by an electric motor (running off the fuel cells) to run the hydraulic pump”.
@George Turner: So yes, I don’t program in HAL/S, and I no longer program in Fortran, Forth, APL, PL/I, Pascal, Modula-2, Ada, or COBOL, even though I used to.
I’m just curious — what language do you prefer to use for programming these days?
FWIW I don’t do real-time applications. I tend to use R as a data handling environment. For scientific programming I use Fortran. Once upon a time the choice of Fortran was justified by performance considerations, nowadays other languages have pretty good optimizing compilers, so I have to say it’s just habit now.
I suppose it’s an interesting question why aerospace in general lags behind the commercial electronics industry. I recall being baffled in the mid-1980s that the engine companies had not gone over to completely digital engine controls. The hydromechanical engine controls of the day were absolutely Byzantine. Aerospace as a culture seems to be incapable of changing on the timescales typical of consumer electronics. (Imagine a sci-fi world where NASA upgrades the electronics suite of the shuttle every year, in order to incorporate the fastest and most capable processors!) Do you think NewSpace companies will be different? Or do you think we will someday think of the Falcon 9 as antiquated because it uses chips designed in the mid-2000s?
bbbeard,
[[[I suppose it’s an interesting question why aerospace in general lags behind the commercial electronics industry.]]]
The need to document changes and show they will not result in increased risk of failure. For example, in terms of electronics in space, you need to prove they will survive radiation, which is why the chips used in space always lag those found in consumer electronics. That is why many of the systems on the Shuttle are “outdated”: because of the cost and time needed to prove the new ones will be at least as safe.
George,
No, it doesn’t. Like all aerospace systems, the Shuttle has undergone regular modernization as the technology advances. Today’s Shuttle carries more payload than the version first launched in 1981, which is an improvement. And it would be capable of doing more if Congress had funded additional upgrades like the Liquid Flyback Boosters proposed in the 1990’s.
Like the B-52, the Shuttle just kept on trucking, doing jobs that no other vehicle is capable of, or will be for many, many years.
Trent,
One slide is interesting: the one showing how the Dragon could be used to crash the Hubble into the ocean. And this illustrates the difference between the new generation of capsules and the Shuttle Orbiter. All the Dragon is able to do with a malfunctioning satellite like the Hubble is dump it in the ocean. By contrast the Shuttle has the capability to return Hubble to Earth, first for research and then to be placed in a museum. It already demonstrated that capability with a pair of comsats in the 1980’s.
These days it’s not merely survivability – often the geometries are designed with redundancies and physical separations to avoid single-event effects (e.g., if your RAM gets 4x denser, a neutron can take out 4x the data…): a whole new expensive set of microcircuits that take years to develop and qualify. The commercial sector will have moved on to the next big thing in the meantime and your newly-qualified component is now “legacy.”
I feel obligated to point out how unreliable consumer electronics are. It’s the elephant in the room if you are considering incorporating bleeding-edge technology in your rocket ship. It’s not exactly why aerospace lags by decade-scale times, but it is indicative that there are different requirements for stuff that flies and stuff that crashes.
@cthulhu: I did originally say switching to all-electric would be better, but going halfway would be simpler, since it wouldn’t involve re-engineering the connections to the flight control surfaces and such.
Using an APU sounds simple, until you try to design one that works in space. The fuel has to be heated so it doesn’t freeze (so dual redundant fuel heaters) and pressurized with a nitrogen system. And of course the fuel system has to have fill ports, drain ports, vents, and valves. The lubricating oil has to be heated so it doesn’t freeze, too. It also has to be cooled, which requires a water spray boiler. The boiler has to have a big water tank, which has to be heated so it doesn’t freeze. After the water boils it has to be vented. So more dual redundant heaters, nitrogen supplies, valves, pressure controllers, fill ports, drain ports, vents, etc. And of course you also have to keep the hydraulic fluid warm. All of this has to work in high-G, zero-G, hard vacuum, extreme heat, extreme cold. It’s one of those things where the idea was simple (just use an APU to run the hydraulics), but the required complexity to implement the solution necessarily exploded on them.
Google up some of the APU system diagrams. Going all-electric would replace not a simple APU system that you’d find on a jet, but a system with layers and layers of complexity to manage four different fluids (fuel, oil, hydraulic fluid, and water) and three gases (nitrogen pressurization, turbine exhaust, and steam). As I recall, even the water boiler quits working as they get low enough in the atmosphere, so there’s yet another cooling system somewhere in the loop.
This is a very unfair comparison, the Shuttle may look the same on the outside but it’s changed a lot. It may have the same old flight frame but at least they got the flight rate up… Err, at least they got the reliability up… Err, at least they got the delays under control… Err, at least they lowered the overall operating costs…
On second thought, carry on.
P.S. Somewhat relevant: http://www.thedoghousediaries.com/?p=1251
Mr. Goodfellow, if the final landing goes well, you could say they finally got it to stop exploding…
A few notes from someone who programmed SPL then HAL/S then Ada then …
We went through a stupid phase when the language was thought to be necessary for the coding of reliable software. Just like when we thought we needed to build secure computing systems from the gate level on up.
We learned the hard way that it was methodology of programming and test that mattered, and that the volume of use meant that specialized *anything* in computer technology actually increased cost and decreased reliability.
Many of us pointed this out back then and were overridden. No one at that time could remotely imagine how omnipresent computer systems would become, so they couldn’t see the scaling. This is related to the problem afflicting aerospace overall.
You should listen to George Turner’s remarks very carefully. I don’t agree with some of his rationale, but he’s dead-on in his review of Shuttle “improvements”.
The more substantive improvements are the ones that were not done, including what Thomas Matula brings up with the Liquid Flyback Boosters. The reason these didn’t happen was the “dead man walking” view of Shuttle in Congress (both aisles) – they planned on the obsolescence of Shuttle while still flying it, without regard for anything else. Why invest when the Shuttle might kill another crew and then be permanently shut down, never allowing a return to be garnered on the investment?
The root of the problem was overspending on the original Shuttle and under-justification – the military use was a political contrivance that forced an insane series of decisions that led directly to the Shuttle’s conclusion – it was dead once it was finished. It was kept alive solely by political accommodation, and barely improved enough to keep from being too obviously deadly to crews. All along, the same political forces were/are scheming to inherit the next HSF boondoggle that replaces it.
The reason we haven’t done better is that greed and power wrestle with fate over next steps, with no one getting the upper hand. Meanwhile, the world has changed, and the knife in the dark that ends it is the senescence of the technology and of the primes it is linked to.
-nooneofconsequence.
PS. nemo is mostly right, to my knowledge, about Shuttle use of HAL/S. HAL/S was used on other avionics and space projects with other considerations. I remember mostly “**” à la Fortran. Too much converted Fortran-4 code.
Once you get this kind of software running and refined, it can never, ever be changed. In some parts of the aerospace/military world, there are virtual machines / emulators that run other VMs/emulators … that run old codes. There was once an IBM 7094 at the Army’s BRL kept for 40 years because of certain codes for weapons. Many, many things like this.
We learned the hard way that it was methodology of programming and test that mattered, and that the volume of use meant that specialized *anything* in computer technology actually increased cost and decreased reliability.
There is a lot of truth in that. I’m wondering if you have any theories about why commercial software these days is so crappy? I don’t just mean Windows and Windows programs. Just in the last six months, I have suffered through forced “upgrades” in my DVR and Android phone software. The DVR upgrade pushed the “closed captioning” setup option to an inscrutable location (“Subtitles”) requiring about 20 button presses instead of 3 — and they also broke the picture-in-picture feature. The Android upgrade completely broke voice recognition, which was one of the neat features of Android that really worked… until June. And yeah, Windows, too — just bought a netbook that came with Windows 7 Starter, which is about as crippled an operating system as I’ve used since, oh, 1983 and the CDC Cyber 205 with NOS/BE. And the list keeps growing….
@bbbeard:
Since the 1980’s, until fairly recently, I’ve probably used ‘C’ 90% of the time for PC projects, whether on DOS, Windows, or Unix boxes. Other projects have required assembly language on microcontrollers or the occasional oddball customer requirement. For general factory automation, machine control, or robotics projects I am almost always using a PLC. I used to do a lot of those projects on PC’s, when those started becoming commonplace, but like the rest of the automation industry, we found that the lead-time, start-up, and long term support in any algorithmic language was much more difficult than going with a PLC, where writing the logic to control thousands of inputs and outputs is trivial, as is migrating the code to a different PLC, troubleshooting on the fly, and making changes while the code is still running.
It’s not at all uncommon to get called to some factory I’ve never heard of to fix an issue or make a change, hook up to the PLC, make significant program changes, verify them, and be done in the same evening. With a PC it would generally take several weeks just to figure out the code and get the compiler working, even if I was the one who’d written the program years ago. Since the late 80’s it’s been commonplace to rip out old PC controlled systems and replace them with PLC’s, which we can usually get up and running over a weekend.
I have no idea why aerospace hasn’t gone that route, especially since their factories are PLC controlled, so they have to know that a whole plant can be programmed in a couple of months and will then run 24/7 for decades without a glitch, and that the program can be expanded or rewritten while the factory is still in operation without any serious risk of downtime. (Although I did once shut a Dell plant down for 20 minutes on a fundamental configuration change that was a beyond-state-of-the-art issue, one that even surprised Allen-Bradley.) PC’s have mostly been relegated to a middle layer of monitoring, communications, interfacing to mainframes, and of course user interfaces.
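The PLC programming model described above is worth a sketch for readers who haven’t seen it. A PLC repeats a fixed scan cycle: read all inputs, evaluate the logic “rungs” top to bottom, then write all outputs. A minimal sketch in Python, with purely hypothetical signal names (start button, e-stop, overtemp sensor):

```python
# Sketch of a PLC-style scan cycle: read the input image, evaluate the
# rungs in order, then produce the output image. Every rung is a simple
# boolean expression over inputs and latched state, which is what makes
# the logic easy to inspect, modify, and verify while running.
# All signal names are illustrative, not from any real system.

def scan_cycle(inputs, state):
    """One pass of the scan loop; returns the output image."""
    outputs = {}
    # Rung 1: motor latches on with the start button, drops on e-stop
    state["motor_latched"] = (state["motor_latched"] or inputs["start_pb"]) \
        and not inputs["estop"]
    outputs["motor"] = state["motor_latched"]
    # Rung 2: alarm if overtemp while the motor is running
    outputs["alarm"] = outputs["motor"] and inputs["overtemp"]
    return outputs

state = {"motor_latched": False}
# Operator presses start: motor latches on
print(scan_cycle({"start_pb": True, "estop": False, "overtemp": False}, state))
# Next scan, button released: motor stays latched
print(scan_cycle({"start_pb": False, "estop": False, "overtemp": False}, state))
```

Because each rung is an independent, stateless-looking expression, adding or changing one rung doesn’t require understanding the whole program – which is a large part of why online changes to a running PLC are routine.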
NASA is probably afflicted with some of the same problems that affect IBM, vast books of knowledge and rules about how software must be written. I’m sure many here have some hair-raising IBM stories. I know I do.
A management system can stifle creativity and innovation, and in such places making even tiny program changes to fix irritating and obvious bugs becomes harder than amending the Constitution. As a new graduate adapts to that environment, the code becomes a holy relic instead of a blackboard reflecting a work in progress. Every change has institutional ramifications, cadres of managers to consult, manuals that have to be rewritten, exceptions that must be granted. The easiest part of the vehicle to modify (with no weight penalty) becomes the hardest, since an airfoil change can be modeled and quantified, but a trivial software change could just blow up, killing everyone on board, since software can do anything.
One of the advantages of getting away from direct PC control in critical applications is that a software bug can’t do anything imaginable. If the PC gets stuck in an infinite loop then the PLC stops getting updates from the PC, but all of the buttons, switches, sensors, modes, and critical logic are still working. If you’ve partitioned the system’s functions accordingly, the ship will have lost some higher-level navigation functions (GPS, radar) but still has all its lower brain functions and survival instincts intact, and can perform a safe abort or just hang on for a reboot of the PC. That provides a huge advantage to the PC programmers, because their code goes from being mission critical LVAC that they can’t touch without an act of Congress to the inflight equivalent of Facebook access. They are freed to constantly improve, innovate, and move back out on the cutting edge.
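That partitioning pattern can be sketched as a heartbeat check: the low-level controller only honors high-level commands while they’re fresh, and falls back to its own safe logic the instant the PC stops updating. This is a minimal illustration, not any real flight or factory system; the names and the 0.5-second timeout are arbitrary assumptions.

```python
import time

# Sketch of the partitioning idea: the low-level controller treats the
# high-level computer as optional. If heartbeats stop arriving, it drops
# the high-level commands and falls back to a safe-hold behavior, so a
# hung PC degrades the system instead of killing it.

HEARTBEAT_TIMEOUT = 0.5  # seconds without an update before fallback (assumed)

class LowLevelController:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.hl_command = None  # latest command from the high-level PC

    def receive(self, command):
        """Called whenever the high-level computer sends an update."""
        self.hl_command = command
        self.last_heartbeat = time.monotonic()

    def step(self):
        """One control cycle: use high-level guidance only if it's fresh."""
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            return "safe-hold"  # PC hung: keep core functions alive
        return self.hl_command or "safe-hold"

ctrl = LowLevelController()
ctrl.receive("follow-waypoint")
print(ctrl.step())  # heartbeat is fresh, so high-level guidance is obeyed
```

The key design choice is that the fallback path needs no cooperation from the PC at all – the low-level side decides on its own, every cycle, whether the high-level side is still trustworthy.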
Meanwhile the PLC side, where changes to running, critical systems are routine (lots of factory automation can rip a human in half), lets the other set of programmers ask the pilot, “Hey, the TACOM switch isn’t being used for anything now. Can you think of something you’d like that switch to do?” Five minutes later the switch will do something else. That function will get noted and the switch will get a new label. It could be anything from pizza-preheat to half-gain on the re-entry controls. It’s just bits, the things programmers play with to keep users happy.
Anyway, in answer to your original question, lately I’ve been using Visual Basic Express 2010. I think the lack of big institutional users and the lack of a defined standard has left some folks at Microsoft free to keep updating the language’s fundamentals to reflect what programmers really need to accomplish. It’s odd for me because I always used to laugh and snort at VB, but I suppose I’m in the same boat as Rand defending Obama’s commercial space policy.
… why commercial software these days is so crappy?
Good software development organizations tend to do themselves out of business because they make it look too easy. HR, which as usual is “dumb as a stump”, often has too much say in the staffing up of new parts of the organization. Management only figures this out when the support cycle surges, and then they’re bailing water attempting to rectify a total CF that they let happen because they didn’t do their jobs in overseeing the hiring process in the first place.
I have no idea why aerospace hasn’t gone that route, especially since their factories are PLC controlled
The software talent they hire is very parochial – they only know x86 and C++. When all you have is a hammer, everything looks like a nail.
We had a hard time introducing PLC to the original mainframe crowd. Only the CDC folk understood PPU’s, and could bridge to controllers and DSPs – the IBM guys didn’t. Only through military projects like radar did we get good DSP skills brought in.
On the science side, much of the Fortran programming persists still as IDL. The newbie students I work with develop in Python and translate into “accepted” languages on demand to satisfy requirements. It’s a very different world.
The best embedded systems work involves elaborate test frameworks and regression testing – that’s how you tell the maturity of a critical software development organization.
In my corner of software development (my PhD is in computational nuclear physics) there is a lot of cultural inertia favoring Fortran. Not so much because of legacy code, but just because it gets the job done while giving you explicit control of memory allocation, etc…. I have to admit I never quite got the hang of figuring out what object-oriented code is doing in terms of compute cycles, memory allocation, etc., which are things you have to control when you are developing new algorithms. I never had a Fortran code develop a memory leak or bring down the operating system.
I have on occasion developed programs that used Visual Basic to provide a graphical front end while a Fortran DLL was used for the number-crunching. But the life cycle of VB is such that code becomes obsolete really quickly. I would spend all my time updating and maintaining code if I tried to make that my standard. I have Fortran applications from 30 years ago that I could dust off and compile with g77 and run today. But I have no idea how to update my VB4 code to work with VBExpress 2010 — and no guarantee that the code would work in 2012. Add to that the headache of perpetually-changing IDEs and I just can’t afford the investment in time to re-learn.
Yes, Fortran is very stable. Years ago I tried to convert a bunch of Fortran finite element analysis routines to C and C just can’t handle arrays quite as well, in terms of allocation. I eventually gave up on it. On top of memory leaks and blown stacks, C is also horrible about exception handling and is happy with overflows and such, which is why lots of Unix programs are easily crashed with unexpected inputs. As The Unix Haters Handbook (PDF) puts it, Unix isn’t a real operating system in part because C isn’t a real language.
If you have VB4 code you might as well start over. Heck, a few months ago I was beating my head against a wall trying to get VB2003 code to compile in VB2005. They’d decided that different threads shouldn’t be able to interact as intimately as I’d been doing, and it took a major rewrite to work around the issue. But VB is great for simple little front ends that you won’t have to maintain indefinitely.
I also have to agree about some of the fancy object oriented code. Who knows what’s really going on at run time?
I have to admit I never quite got the hang of figuring out what object-oriented code is doing in terms of compute cycles, memory allocation, etc., which are things you have to control when you are developing new algorithms.
Across the room I’m watching an undergraduate student take apart a science satellite’s sensor test code, most of which is horribly bad Fortran. I know because I wrote horribly bad Fortran at NASA, and was compelled to make it work on various inconsistently implemented floating point processors prior to IEEE floating point standardization, some of which I was involved with.
She takes apart the code, re-implements it in Python, proves it, puts it into an object-oriented structure, numerically qualifies it with exhaustive correlations, then translates it into the final accepted language (which has changed) and reruns the correlations.
Large portions of the Fortran code have been found to be simply wrong. Some parts are partial, inconsistent copies of earlier versions of similar codes. Often there are offsetting errors, where the “original” program author knew there was an error but couldn’t find it in the code, so simply added more code to get the appropriate result.
It is slow and tedious work. In the end, a reliable, well-documented and understood code emerges. The scientists involved subject it to even more proofs, but in the end, rely on it more than what they started with.
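The “exhaustive correlations” step described above amounts to sweeping both implementations over the same inputs and demanding agreement within a tolerance. A toy sketch of such a harness (the two calibration functions are invented stand-ins, not the actual sensor code):

```python
# Sketch of a numerical correlation harness: run the legacy routine and
# the re-implementation over a sweep of inputs and report the worst
# relative disagreement. The functions below are toy stand-ins.

def legacy_calibrate(x):
    # imagine this is the translated legacy Fortran routine
    return 2.0 * x + 0.001 * x * x

def new_calibrate(x):
    # the clean re-implementation under test (algebraically equivalent)
    return x * (2.0 + 0.001 * x)

def correlate(f_old, f_new, samples):
    """Return the worst relative disagreement over the sample sweep."""
    worst = 0.0
    for x in samples:
        a, b = f_old(x), f_new(x)
        denom = max(abs(a), abs(b), 1e-300)  # guard against division by zero
        worst = max(worst, abs(a - b) / denom)
    return worst

sweep = [i * 0.1 for i in range(1, 1000)]
print(correlate(legacy_calibrate, new_calibrate, sweep))  # ~rounding noise
```

A harness like this is also how the offsetting-error cases get caught: when the re-implementation disagrees with the legacy code by far more than rounding noise, someone has to go figure out which one is actually right.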
They do not, however, use the version developed directly. That is because the mass of related software it must be used in concert with is still in obsolete form – there is no budget to address that.
I never had a Fortran code develop a memory leak or bring down the operating system.
I remember fixing a bug in the memory allocator for the Fortran runtime at LLNL decades ago, on a “non-critical code”. They do happen. It was on a massively parallel processor. There’s nothing magic here – just how much something is used.
Google routinely runs >1,000x larger processor clusters running Python – it’s an even larger tested environment than LLNL has ever had. And it’s growing at a rate that means it will always be larger. And yes, the data-driven parts use floating point rather extensively.
Fortran “runtime” is relatively unreliable. You are simply measuring inertia.
nooneofconsequence, I knew that if a person got good enough at programming then watching female undergrads would become part of the job! I knew it!