The Dyn attack, one of the largest DDoS attacks ever seen, is back in the headlines after its three culprits pleaded guilty to creating Mirai, the botnet used in an attack that crippled internet access across North America.
Before the attack, cybersecurity experts had long warned that internet-of-things (IoT) devices could be marshalled into a botnet army. But few foresaw that such an army would be turned against the DNS servers at Dyn with such devastating effect.
One year later, the majority of online businesses appear to remain vulnerable to the same kind of attack.
One Year Later
One of the ways website operators can protect themselves from this type of attack is to use more than one DNS provider. When you set up DNS for your domain, you specify its authoritative name servers – the servers that are the authoritative source of DNS records for that domain.
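As an illustration, delegating a domain to two independent providers just means publishing NS records from both at the registrar. A hypothetical delegation in zone-file notation (the domain and provider hostnames here are only representative, not real records):

```
; example.com delegated to two independent DNS providers
example.com.  172800  IN  NS  ns1.p01.dynect.net.     ; provider A (Dyn-style hostname)
example.com.  172800  IN  NS  ns2.p01.dynect.net.
example.com.  172800  IN  NS  ns-2048.awsdns-66.com.  ; provider B (Route 53-style hostname)
example.com.  172800  IN  NS  ns-2049.awsdns-67.net.
```

If one provider's servers become unreachable, resolvers retry the remaining name servers in the NS set, so the zone stays resolvable.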
After the Dyn attack, more companies began using secondary and tertiary DNS providers. I was curious how many companies had adopted this model. Using Alexa’s web-traffic rankings, I pulled a list of the top 100 U.S. websites. I fed this list into a small script that I wrote to give me the authoritative name servers for each domain in the list.
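The original script isn’t reproduced here, but the analysis step can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the NS records are assumed to have been fetched already (e.g. with `dig +short NS <domain>`), the sample domains and hostnames are hypothetical, and the provider heuristic simply groups NS hosts by their second-level label (folding numbered pools like `awsdns-66` into one provider), which is crude and does not handle multi-label public suffixes such as `.co.uk`.

```python
import re

def provider_of(ns_host: str) -> str:
    # Normalise an NS hostname to a coarse provider label, e.g.
    # 'ns-2048.awsdns-66.com.' -> 'awsdns', 'ns1.p01.dynect.net.' -> 'dynect'.
    labels = ns_host.rstrip(".").split(".")
    sld = labels[-2]                     # second-level label, e.g. 'awsdns-66'
    return re.sub(r"-\d+$", "", sld)     # fold numbered server pools together

def uses_single_provider(ns_hosts) -> bool:
    # True when every NS record for the domain points at one provider.
    return len({provider_of(h) for h in ns_hosts}) == 1

# Illustrative sample data (hypothetical domains, realistic NS shapes):
sites = {
    "shop.example": ["ns-2048.awsdns-66.com.", "ns-2049.awsdns-67.net."],
    "news.example": ["ns1.p01.dynect.net.", "ns-10.awsdns-1.org."],
}
single = [d for d, ns in sites.items() if uses_single_provider(ns)]
print(single)  # only shop.example relies on a single provider
```

A real survey would also need a lookup step and a smarter registrable-domain heuristic, but the grouping logic is the interesting part.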
What I found was that 64 of the top 100 websites still use only one DNS provider, including major companies directly affected by the Dyn attack. Quite often, that provider was Amazon.
In theory, Amazon has several advantages as a DNS provider. Amazon Web Services does more than $4 billion in business per quarter and has the infrastructure to back it up. Most of its traffic is outbound, so it certainly has the bandwidth to absorb inbound attacks. And yet any network operator who takes the job seriously knows that a single provider is a single point of failure. It’s big, but still vulnerable: API attacks and human error that trigger cascading automated failures are real concerns in large-scale networks.
What is the Best Solution?
This topic would matter to just a handful of IT security professionals if it were merely about DNS providers. But all of us, from website operators to corporate network operators and everyone in between, must keep thinking about availability in every aspect of our online presence, including DNS.
Having a second or third DNS provider could keep an e-commerce site up during an attack.
Many DNS companies spread their services across different Top Level Domains (TLDs) too, which protects against a root-level DNS outage or attacks against a particular TLD, like “.com,” “.net,” or “.org.”
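That TLD spread can be checked with the same kind of script used for the provider survey. A minimal sketch, assuming the NS records are already in hand; the function name and sample hostnames are hypothetical:

```python
def tld_spread(ns_hosts) -> set:
    # Return the distinct top-level domains among a domain's NS hostnames.
    # Note: treats only the last label as the TLD, so '.co.uk' counts as 'uk'.
    return {h.rstrip(".").rsplit(".", 1)[-1] for h in ns_hosts}

# Illustrative NS set spread across three TLDs:
ns = ["ns-1.awsdns-10.com.", "ns-2.awsdns-20.net.", "ns-3.awsdns-30.org."]
print(sorted(tld_spread(ns)))  # ['com', 'net', 'org']
```

A spread of three or four TLDs means an outage or attack confined to one TLD’s name servers leaves the domain resolvable through the others.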
And in the case of Amazon itself? Well, they do use diverse providers, and none of them is their own DNS service.
Could it Happen Again?
The short answer is yes. Will it happen exactly the same way? Probably not.
Although the number of connected IoT devices continues to grow, and the number of Mirai botnets along with it, those armies are now splintered. What was once a single botnet of 380,000 devices is now many botnets with much smaller bot counts.
What’s interesting about Mirai is that it’s incredibly versatile and customizable. Technically, the bots don’t have to be IoT devices: a Windows variant has been reported in the wild, and if you’re like me, you may even run the bot in your lab on top of traditional Linux.
There are also active Mirai bots that are actually powerful servers with big uplinks, rather than tiny, low-powered devices sprinkled around the network. But there’s a broader story here. If website operators fail to take measures to fix known problems, then attackers, who have shown the means to evolve and integrate new tools, already have the upper hand.