Behind the scenes of the cleanest ISP in the world
by Mirko Zorz - Tuesday, 17 April 2012.
When a customer has his box hacked, the faster we're able to mitigate the situation, the better service we provide. When we're talking about consumers, cutting the connection or pushing the connection into the walled garden is really not about removing our customers' ability to use the Internet, but rather removing the criminals' access to our customers' systems. We feel that we're essentially providing a service to our customers, and if we make the world a better place while providing that service, it's a nice additional benefit. I know some ISPs share this view while others don't. That's life.

There's also the fact that a lot of malware-infected customers have a worse “Internet experience”, and they are likely to blame their ISP for it. Take something like DNSChanger, for example: for us, every customer infected with it has their DNS servers on the opposite side of the planet, which causes latency issues. Also, when I read something like “FBI’s Internet Blackout Postponed from 8 March to 9 July”, it makes me chuckle a bit. I checked our share on 7 March and we had two (2) customers reported to us that day, so “Internet Blackout” doesn't really apply to our customers.
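As an illustration of how an ISP might flag DNSChanger victims, here is a minimal Python sketch that checks whether a customer's configured resolver falls inside a list of rogue DNS ranges. The ranges below are placeholders from documentation address space, not the list the FBI published, and the source of the resolver data is an assumption rather than a detail from the interview.

```python
# Sketch: flag customers whose configured resolvers sit inside rogue DNS
# ranges. The networks below are PLACEHOLDERS (documentation space), not
# the real DNSChanger list published by the FBI/DCWG.
import ipaddress

ROGUE_RANGES = [
    ipaddress.ip_network(net)
    for net in (
        "198.51.100.0/24",   # placeholder, substitute the published ranges
        "203.0.113.0/24",    # placeholder, substitute the published ranges
    )
]

def is_rogue_resolver(resolver_ip: str) -> bool:
    """Return True if the resolver falls inside a known rogue range."""
    addr = ipaddress.ip_address(resolver_ip)
    return any(addr in net for net in ROGUE_RANGES)

# Resolver IPs could come from CPE surveys or DNS query logs (assumption).
for customer_id, resolver in [("ID12345", "203.0.113.7")]:
    if is_rogue_resolver(resolver):
        print(f"{customer_id}: resolver {resolver} matches a rogue range")
```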

What were the most significant challenges in setting up a system that closely interacts with users and warns them about infections? What were the major obstacles you encountered during this process?

Every new thing we manage to get running is aimed at making our workload a bit easier.

Creating a system with a group of people that had never created any systems whatsoever was a challenge in itself. Had there been a commercial solution available at the time, we would have probably taken that route, which would have been a terrible decision that we would most likely still regret. The thing we have come to realize in the last 10 years is that our system will never be finished – we constantly need new features more or less immediately in order to react to new threats or new products of our own. If we had a commercial system, we would have to wait for an update for who knows how long. We started building the system one step at a time. Some features made it to production within hours, while others took years to accomplish.

To give an example, back in 2002, before we had anything, the most cumbersome task was when we had, say, a spammer, and we might have 100 complaints in the abuse box in the morning. I would take a single email, check the IP address, search the inbox for that single IP address and move all emails containing it to a temp folder. Because new complaints would be pouring in steadily, I would shout over my cubicle that “I took 1.2.3.4, don't touch that”. Going through the RADIUS logs, DHCP logs, or whatever was applicable for that case to determine the right customer took around 20 minutes.
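For a sense of what that grouping step looks like once it is no longer done by hand, here is a minimal Python sketch that pulls the first IPv4 address out of each complaint in an abuse mailbox and buckets the mails by it. The mbox filename and the “first IP wins” heuristic are assumptions, not details from the interview.

```python
# Sketch of the grouping that used to be done by hand: extract the first
# IPv4 address from each complaint and bucket the mails by that address.
# The mailbox path and the extraction heuristic are assumptions.
import mailbox
import re
from collections import defaultdict

IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

complaints_by_ip = defaultdict(list)

for key, msg in mailbox.mbox("abuse.mbox").items():
    body = msg.get_payload(decode=True) or b""
    match = IPV4_RE.search(body.decode("utf-8", errors="replace"))
    if match:
        complaints_by_ip[match.group()].append(key)

for ip, keys in complaints_by_ip.items():
    print(f"{ip}: {len(keys)} complaint(s)")
```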

Even when we had already shut off the customer, complaints about previously sent spam would keep coming in for weeks, so we had to remember that 1.2.3.4 had already been handled. Then we had the additional problem of dynamic IP addresses. We had to go through that 20-minute process multiple times, only to find out that the same customer was behind many of the different addresses.

So, the first thing we did was automate the log browsing by putting DHCP, RADIUS and other logs into a database that our system could use to resolve the customers behind the IP addresses. Then we opened up APIs to our customer management systems to get the customer information, chose credible sources of intel to automate and integrate into our system, and created a web GUI for the handling part. When that was done, we had a single case instead of a hundred emails with multiple IP addresses in a messy mailbox: a single row in our handling system saying Customer ID12345, with all of those emails behind that link.
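Here is a minimal sketch of the lookup that kind of log database makes possible: given a source IP and the time of the reported abuse, find the customer who held that address at that moment, so every complaint can be attached to a single case keyed by customer ID. The table layout, column names and timestamp format are assumptions.

```python
# Sketch: resolve (IP address, event time) to a customer from session logs
# imported out of RADIUS/DHCP. Schema and column names are assumptions.
import sqlite3

def resolve_customer(conn: sqlite3.Connection, ip: str, event_time: str):
    """Return the customer_id whose session covered event_time, if any."""
    row = conn.execute(
        """
        SELECT customer_id
          FROM ip_sessions
         WHERE ip_address = ?
           AND session_start <= ?
           AND (session_end IS NULL OR session_end >= ?)
         ORDER BY session_start DESC
         LIMIT 1
        """,
        (ip, event_time, event_time),
    ).fetchone()
    return row[0] if row else None

# Toy demonstration with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ip_sessions "
    "(customer_id TEXT, ip_address TEXT, session_start TEXT, session_end TEXT)"
)
conn.execute(
    "INSERT INTO ip_sessions VALUES "
    "('ID12345', '1.2.3.4', '2002-06-01 08:00', '2002-06-01 20:00')"
)
print(resolve_customer(conn, "1.2.3.4", "2002-06-01 09:30"))  # -> ID12345
```

With the customer resolved this way, every complaint, whatever dynamic address it mentions, can be merged into one case per customer instead of one case per IP.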

The next step was to notify the customer and/or shut the connection. Notifying the customer was easy because we already got the customer information automatically. Shutting customers off with a button was a bit trickier, but we started with the connection types where we had the most incidents.
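How the “shut with a button” step is wired up depends entirely on the access technology, so the sketch below only shows the general shape of it. Every function in it is a hypothetical stand-in, not a real provisioning API or anything described in the interview.

```python
# Sketch of a one-button quarantine: drop the live session, re-provision
# the line into the walled garden, and notify the customer. All helpers
# are HYPOTHETICAL stand-ins for whatever the access platform exposes.

def lookup_active_session(customer_id: str):
    # Stand-in: would query RADIUS accounting for the customer's live session.
    return {"customer_id": customer_id, "session_id": "example-session"}

def send_disconnect(session: dict) -> None:
    # Stand-in: would send a RADIUS Disconnect/CoA for that session.
    print(f"disconnecting session {session['session_id']}")

def assign_walled_garden_profile(customer_id: str) -> None:
    # Stand-in: would re-provision the line so the next login lands in
    # the walled garden instead of the open Internet.
    print(f"{customer_id} moved to walled garden")

def notify_customer(customer_id: str, reason: str) -> None:
    # Stand-in: would send the customer an email/SMS with cleanup steps.
    print(f"notified {customer_id}: {reason}")

def quarantine_customer(customer_id: str, reason: str) -> None:
    """One-button handling: drop the session, fence the line, notify."""
    session = lookup_active_session(customer_id)
    if session is not None:
        send_disconnect(session)
    assign_walled_garden_profile(customer_id)
    notify_customer(customer_id, reason)

quarantine_customer("ID12345", "malware infection reported on 2012-03-07")
```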
