A huge breakthrough. Or as The Donald would say, a yuge breakthrough. Yuge.
It’s a very clever solution, too.
[Via Geekpress]
I still want to know technical details, myself.
I don’t disbelieve they’ve got something – a way to improve on TCP failure retries is a plausible thing.
But I want more than the vague “we use algebra!” explanation before I dive in.
(Also, their “Code-on” licensing company has the most pathetic web presence I’ve seen in a while…
Really, MIT? You’re actively trying to license this stuff, and that’s the best you can do?)
An extraordinary claim.
As best I can make out, it uses some kind of forward error correction applied to groups of packets, allowing entire lost packets to be recovered. How that produces a net improvement in throughput, or doesn’t kill latency, escapes me.
It doesn’t if there are no lost packets, but it seems lost packets are a big problem.
So, their algorithm is a lot cleverer than this, but I can give you the general flavor; there’s an error-correcting code at the inter-packet level. Think of RAID-5 applied to packets and you have the general idea. As long as the packet loss doesn’t exceed some specified rate it can rebuild the lost packet from the others.
This doesn’t actually help with bandwidth per se – no free lunch – but it might help a lot in situations where raw bandwidth isn’t the limiting factor and pauses for retransmission of lost data happen frequently (mostly wireless networks?).
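Roughly what the RAID-5 analogy looks like in code – a minimal sketch of the XOR-parity idea, not the actual MIT algorithm (which apparently uses algebraic combinations of packets rather than a simple parity packet):

```python
# Minimal sketch of RAID-5-style recovery applied to packets (illustrative only):
# one XOR parity packet per group lets the receiver rebuild any single lost
# packet without asking for a retransmit.

def make_group(packets):
    """Return the data packets plus an XOR parity packet of equal length."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity]

def recover(group_with_one_hole):
    """Rebuild the single missing packet (marked as None) from the rest."""
    missing_index = group_with_one_hole.index(None)
    length = len(next(p for p in group_with_one_hole if p is not None))
    rebuilt = bytes(length)
    for i, p in enumerate(group_with_one_hole):
        if i != missing_index:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, p))
    return rebuilt

# Example: three data packets sent as a group; the second one is lost in transit.
data = [b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC"]
sent = make_group(data)
received = [sent[0], None, sent[2], sent[3]]   # packet 1 never arrived
assert recover(received) == b"BBBBBBBB"
```

As with RAID-5, the price is the extra parity traffic and a little CPU; the payoff is that a lost packet costs nothing extra as long as losses stay under the group’s tolerance.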
The New York Times came up with its own bandwidth solution. In its original form, it consisted of simply reclassifying bad data as good. A breakthrough occurred recently, however, in which it was found that one could simply make up data to fill in for any missing pieces. More recently, it was found that simply making up the entire data stream not only resulted in a huge advance in bandwidth, but also seemed to result in news that Times reporters liked much better…
Newsweek has pioneered a further advance: stop sending any data. Not quite fully implemented yet, but the bandwidth drop of going entirely online has to be astronomical. (The magazines that were published and distributed to stores but never ‘picked up’ should count, IMNSHO.)
Meanwhile it looks like a breakthrough has been made on creating a tractor beam.
http://www.huffingtonpost.com/2012/10/26/tractor-beam-real-life_n_2018045.html?utm_hp_ref=science
Tractor Beam: NYU Physicists Build Real-Life Working Model Of Sci-Fi Staple
Posted: 10/26/2012 2:14 pm EDT
If that works, it would be quite helpful. Lost packets are a major contributor to high latency because they stall the pipeline. The cost isn’t the extra bandwidth for the retransmit (although that matters); it’s having to stop a smoothly flowing sequence, wait for the loss report to reach the source and for the source to send the packet again, and then restart. Packet loss costs go up faster as a network gets busier, because lost packets become more likely and each retransmit is more expensive.
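A back-of-envelope sketch of that stall cost, with made-up numbers (this is not a real TCP model, just the rough arithmetic behind the comment above):

```python
# Back-of-envelope: each lost packet stalls the flow for roughly a round trip
# while the loss is reported and the packet is resent. Even a small loss rate
# can add more average delay per packet than the transmission itself.

rtt_ms = 80.0            # assumed round-trip time
packet_time_ms = 1.0     # assumed time to put one packet on the wire
loss_rate = 0.02         # assumed 2% packet loss

stall_per_loss_ms = rtt_ms                  # wait for the alert, then the resend
extra_delay_ms = loss_rate * stall_per_loss_ms

print(f"average added latency per packet: {extra_delay_ms:.1f} ms "
      f"vs {packet_time_ms:.1f} ms to send it in the first place")
# With these numbers the stall penalty (1.6 ms per packet on average) already
# exceeds the raw send time, and it grows as loss climbs on a busy network.
```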