Someone is learning how to take it down.
Ironic, considering that the original purpose of Arpanet was to prevent exactly this sort of thing. https://t.co/mYDqxbWGaA
— Apostle To Morons (@Rand_Simberg) October 21, 2016
[Update a few minutes later]
Oh, goody, a vulnerability in the Linux kernel that’s been there for nine years.
Will we see a rise of acceptance that hosts delivering DDoS attacks are carriers of a plague and need to be isolated, or even shut down by force?
It may already be happening; a Hack-a-day comment:
Our company is moving to cloud solutions, and it just seems so stupid. I understand the capability of having open systems, but they are way too vulnerable. It is much easier and safer to have your own ecosystem (suck it Obama) with just a few easily defended gateways to external data. Instead, we are placing each employee out on the Internet and putting their data in a housed system that can be cut off.
On a different note, critical energy infrastructure needs to be managed via a separate grid. I blame Gore’s Reinventing Government for building that vulnerability. Of course he did so on the whims of his boss.
Actually, security typically improves dramatically with cloud servers.
Security is something that is extremely hard to do right, but extremely easy to replicate. So you need to spend $1B on security, regardless of the server count. Amazon can afford to spend $1B on security. Your company can’t. Even if it could, that would be a pretty dumb plan – Amazon’s already spent the money, and is offering you the results at essentially no cost.
That’s probably the main driving force behind cloud adoption – server security/management costs are fixed regardless of scale if done properly. In all the other aspects, the costs are approximately linear with scale.
If you’re willing to trust them with your data, from a privacy standpoint.
Sorry Rand, in this day and age “privacy” is becoming more and more like your placebo button: something splashed on a terms-of-agreement form that you have no choice but to accept, short of not using the service at all.
Which is why I tend to not use the service.
A false sense of security. It only takes one exploit to put the chill on it. However, there is a degree of effort involved. I’m certain cloud services are not immune from malicious state actors. Perhaps the individual hacker not so much, unless a state secret escapes to the wild….
I will give credit where it is due and that is the on-line banking industry, where paranoia helps greatly…
“Even if it could, that would be a pretty dumb plan – Amazon’s already spent the money, and is offering you the results at essentially no cost.”
Nothing is no cost.
Then the down side here is that the high population count of the cloud – hundreds of thousands of people’s files – makes Amazon a lucrative target. That’s where the bad guys will go.
And they will succeed because no security system is foolproof – as we have seen. And every time someone touts their system as the greatest and most secure – that’s where the bad guys go (human nature) to see if they can crack it.
And they will.
And when they do (not if), hundreds of thousands of people are damaged.
If my stuff is not on the cloud, my stuff is not hacked when Amazon is hacked.
I think there’s a trade-off here you are ignoring:
My system at home might be less secure than Amazon’s, but few people are working on cracking my system specifically.
Lots of people are working on cracking Amazon.
But you seem like a security savvy guy so let me ask you a question out of curiosity:
Is it possible for the US government to create its own internet with its own satellites and its own infrastructure, such that no government packet gets onto the world’s general internet or comes in from the world’s general internet? That way the number of users is limited to a few tens of thousands.
Would that system be any less hackable?
You are looking at the security of the data. I’m looking at inaccessibility to the data by those who created it and use it. Labor inefficiency increases.
server security/management costs are fixed regardless of scale if done properly
The clients not so much.
So you need to spend $1B on security, regardless of the server count. Amazon can afford to spend $1B on security.
There’s a huge difference between “need” and “can” here.
I share readers’ concerns over the Internet and over reliance on “cloud” services that require a functional Internet.
There are myriad ways to exploit network server vulnerabilities. Enough for grave concern. The exploits don’t stop there. The intelligence services of the US and UK no longer allow certain foreign products to be used in house. And if they don’t allow it, it is probably because they know of useful subterfuges that they have already deployed. The level to which it can extend would be surprising to a lot of people, especially when one considers the ubiquity of so-called “smart phones” and the next generation of networked appliances, the so-called Internet of Things.
Not to mention targeted assassination via the self-crashing car….
I share readers’ concerns over the Internet and over reliance on “cloud” services that require a functional Internet.
Leland touched on this too.
It isn’t just hosting security. There are many ways the internet can go down, not just from hacking and ddos attacks. Do you want to keep your company working when there is no internet?
but unless the US decides to make an international incident over this, we won’t see any attribution.
It’s only an attack on the very existence of the internet in an effort to cripple our country in a time of war, but it has nothing to do with the DNC, so we won’t be seeing any talk about going to war over it.
The design goal of the Internet protocols was to survive the loss of links or nodes as long as the network wasn’t completely partitioned. It was and is very good at that.
It was also a goal to ensure that even overloaded links would get at least some of each party’s data through the link. It’s still pretty good at that.
What it isn’t good at is dealing with extreme cases like this where every fire hydrant in the city gets opened at once and the water pressure goes to zero. (Actually the reverse, but the analogy isn’t completely insane, I think.)
The original environment was a small community of mostly-trustworthy actors, which wasn’t true for long, and we’re still dealing with a lot of fallout from that. DNS, HTTP (rather than HTTPS), Certificate Authorities, the list is quite long…
We need to fix people, but you see how the Clockwork Orange solution worked out.
The problem here, as I understand it, was that many companies shared a single DNS server which was taken down by a DDOS attack from ‘Internet Of Stupid’ devices that were insecure by design.
DNS has always been a weak point in the Internet, precisely because it’s centralized…. and it’s a thousand times worse when you then centralize your own company’s DNS service on the same server as thousands of others, with no redundancy.
But, hey, some manager probably got a big bonus for saving money by eliminating backups.
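For what it’s worth, you can check whether a domain has that single point of failure yourself. Here is a minimal sketch, assuming the third-party dnspython package and using example.com as a stand-in domain, that looks up a domain’s delegated nameservers and flags the case where they all sit under one provider:

import dns.resolver  # third-party "dnspython" package: pip install dnspython

def nameserver_providers(domain):
    # Look up the domain's NS records and reduce each nameserver hostname
    # to its provider domain (e.g. ns1.p01.dynect.net -> dynect.net).
    answers = dns.resolver.resolve(domain, "NS")
    providers = set()
    for record in answers:
        host = record.target.to_text().rstrip(".")
        providers.add(".".join(host.split(".")[-2:]))
    return providers

if __name__ == "__main__":
    providers = nameserver_providers("example.com")  # stand-in domain
    print("Nameserver providers:", providers)
    if len(providers) < 2:
        print("All nameservers are under one provider: a single point of failure.")

If everything comes back under one provider’s domain, a DDoS on that provider takes your site off the air no matter how healthy your own servers are.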
What happens if the Internet goes down on Nov. 8?
…We won’t get to watch streaming video…
I can’t help but snark a little. I’ve been hearing for 20 years now how superior Linux was and it’s been lugging around a major exploit for nine years!? Wow.
Somewhat more serious question: I don’t think OS X uses the Linux kernel (it’s FreeBSD based, yes?), but is this a concern for Mac owners?
Finally I’m surprised only Edward has mentioned the internet of “things” which allowed the DDOS to occur in the first place. There’s so darn many “smart” devices out there it’s much easier to mount an overwhelming attack. The “smart” devices don’t have any security worth a durn, so it was (relatively) easy to get things rolling.
Instead of hand-wringing, I would suggest a “lessons learned” approach citing what went wrong, and how, but I suspect we already know. It’s applying the fix that will be the problem.
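On the “smart” device point: the attack reportedly leaned on devices shipping with open telnet and default credentials. A minimal sketch, assuming a typical home LAN on 192.168.1.0/24 (purely illustrative), that flags devices on your own network still answering on TCP port 23 so you can change their passwords or firewall them off:

import socket

def telnet_open(host, timeout=0.5):
    # Returns True if the host accepts a TCP connection on port 23 (telnet).
    try:
        with socket.create_connection((host, 23), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for last_octet in range(1, 255):
        host = "192.168.1.%d" % last_octet  # assumed home subnet
        if telnet_open(host):
            print(host, "still has telnet open; change its default credentials or block it.")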