
These two days -- Black Friday and Cyber Monday -- are generally the biggest single sales days of the year for retail companies, and interestingly enough, the phenomenon spans the globe; it isn’t isolated to North America. In the past 10 years of living in Sweden, I’ve seen a sharp uptick in customers concerned with capacity planning around these few sales days as they become ever more important to the company’s bottom line. In many ways, they can make or break the yearly figures for a company, no matter where in the world it is based.

At times like this, with systems and staff pushed to their limits and the sales rolling in, many retailers find that their ‘best day ever’ is also the time they are most vulnerable.

The core problem is that the category of incidents falling under the broad ‘Denial of Service’ umbrella isn’t inherently good or bad. You can suffer a DDoS from malicious actors just as easily as from your own customers.

Yes, it’s possible to have a ‘friendly DDoS’. Nobody will argue against the idea that, on Cyber Monday, it’s a GOOD thing to have tons of traffic to your website. If your website is offering the hottest toy of the year for 10% less than everyone else, the marketing blast email you send may be the trigger for just such a ‘friendly DDoS’ if your systems aren’t equipped and scaled in anticipation.

On the other side, a malicious DoS attack is easier to execute on these days. With all the friendly traffic, it takes less effort for an attacker to push your site past its limits. And this brings us to the main challenge in protecting a website on high-traffic days: discerning friend from foe. From a network perspective, a page load, a file download, and so forth look the same no matter the source. So how do you tell a botnet accessing your website from 10,000 compromised hosts apart from 10,000 customers with credit card in hand?

This kind of detection is complex, and it’s something we spend a lot of time and effort on here at Baffin Bay Networks. When all the connections look the same, as is the case in a typical layer 7 attack, you need some insight into intent before you can determine whether a connection can safely be blocked. Far too often the ‘solution’ is some sort of rate limiting -- i.e., capping connections to the server at some sustainable value. This should only ever be a last resort: it makes no determination at all, so it punishes innocent users right along with the attackers. It’s clumsy and inelegant. So what is the alternative?
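To see why rate limiting is so indiscriminate, here is a minimal sketch of a classic token-bucket limiter in Python. Everything in it (class name, rates) is illustrative, not any product’s actual implementation: once the bucket runs dry, a paying customer is dropped just as readily as a bot.

```python
import time

class TokenBucket:
    """Naive per-server rate limiter: admits roughly `rate` requests/second.

    It has no notion of WHO is connecting -- when tokens run out,
    a customer with a full cart is rejected as readily as a bot.
    """
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # dropped, regardless of intent

limiter = TokenBucket(rate=100, capacity=100)
# Simulate a sudden flood of 200 near-simultaneous requests:
results = [limiter.allow() for _ in range(200)]
print(sum(results))  # only about the burst capacity survive
```

Note what is missing: nothing in `allow()` looks at the request itself. That blindness is exactly why the post argues rate limiting should be a last resort.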

A colleague once told me that ‘to fight robots, you need better robots’. I always think of a video of a robot playing rock, paper, scissors: it wins by tracking intent through small movements, and the only way to beat it is with a faster robot. Similarly, when you’re up against a massive botnet, you can’t rely on manual event parsing to resolve the issue, and you certainly don’t want to block your customers.

What we’ve developed at Baffin Bay Networks is a solution that combines proven legacy technologies -- blacklisting and whitelisting, antivirus, and other signature-based techniques -- to filter out the known malicious traffic, backed by advanced statistical analysis and machine learning. Essentially, we’ve built a bigger, better robot that works for you.
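The general shape of such a layered pipeline can be sketched in a few lines of Python. This is a toy illustration of the idea, not Baffin Bay Networks’ actual system: every name here (`Request`, `KNOWN_BAD_IPS`, the feature weights) is hypothetical, and the ‘statistical model’ is just a hand-weighted score standing in for a trained one. The point is the layering: cheap signature checks handle the known-bad traffic, so only the ambiguous remainder reaches the expensive analysis stage.

```python
from dataclasses import dataclass

# Illustrative data sources; a real deployment would feed these
# from threat-intelligence and signature feeds.
KNOWN_BAD_IPS = {"203.0.113.7"}          # blacklist
TRUSTED_IPS = {"198.51.100.10"}          # whitelist
BAD_UA_SIGNATURES = ("sqlmap", "nikto")  # signature-based detection

@dataclass
class Request:
    src_ip: str
    user_agent: str
    req_per_min: float   # observed request rate from this source
    path_entropy: float  # 0..1: how erratically this source walks URLs

def anomaly_score(req: Request) -> float:
    """Toy behavioural score in [0, 1]; a real system would learn
    these weights from historical traffic."""
    return 0.6 * min(req.req_per_min / 600.0, 1.0) + 0.4 * req.path_entropy

def classify(req: Request) -> str:
    # Layer 1: cheap, signature-based filters for the known stuff.
    if req.src_ip in TRUSTED_IPS:
        return "allow"
    if req.src_ip in KNOWN_BAD_IPS:
        return "block"
    if any(sig in req.user_agent.lower() for sig in BAD_UA_SIGNATURES):
        return "block"
    # Layer 2: statistical analysis for everything ambiguous.
    return "block" if anomaly_score(req) > 0.8 else "allow"

shopper = Request("192.0.2.5", "Mozilla/5.0", req_per_min=12, path_entropy=0.3)
bot = Request("192.0.2.9", "Mozilla/5.0", req_per_min=900, path_entropy=0.9)
print(classify(shopper), classify(bot))  # allow block
```

Unlike the rate limiter, each decision here is per-connection and based on behaviour, which is what lets the shopper through while the bot with an identical user agent is blocked.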

This means our solution performs intent analysis on each incoming connection and automatically mitigates the malicious traffic: the bad guys are kept out and the customers are let in. That way, the operations and security people can relax a bit and get their holiday shopping done as well.

James Tucker

Director of System Engineering
