Until now, this was possible only by scanning all potentially affected network nodes or address spaces for clues - a process too costly and time-consuming to be considered globally applicable.
But things are about to change, as Swiss researcher Pedro Pinto and his team from the École Polytechnique Fédérale de Lausanne have revealed a new strategy for localizing the source of diffusion in complex networks.
It consists of applying a specific algorithm to measurements collected from only a small fraction of the network's nodes. The researchers showed that even with 25 randomly chosen observer nodes (sensors), they could determine the source of the "infection" with 90 percent confidence.
If they chose well-connected observers, the same level of confidence could be achieved by monitoring only 5 percent of the nodes within a network.
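The paper's full estimator is statistical, but the underlying idea can be sketched simply: each observer records when the "infection" reached it, and the likely source is the node whose distances to the observers best explain those arrival times. The sketch below is a simplified, brute-force illustration of that idea (it is not the researchers' actual algorithm); the graph, the one-hop-per-time-unit spreading model, and all names are assumptions for the example.

```python
from collections import deque

def bfs_distances(adj, start):
    """Hop distances from `start` to every reachable node (breadth-first search)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def locate_source(adj, observations):
    """Return the candidate node whose hop distances to the observers
    best match the observed arrival times, after removing the unknown
    start-time offset (least sum of squared residuals)."""
    best, best_err = None, float("inf")
    for candidate in adj:
        d = bfs_distances(adj, candidate)
        # Residuals: observed time minus hop distance; the spreading
        # start time is unknown, so compare residuals to their mean.
        diffs = [observations[obs] - d[obs] for obs in observations]
        mean = sum(diffs) / len(diffs)
        err = sum((x - mean) ** 2 for x in diffs)
        if err < best_err:
            best, best_err = candidate, err
    return best

# Toy chain network 0-1-2-3-4; infection starts at node 1,
# reaching observers 0, 3 and 4 at times 6, 7 and 8.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(locate_source(adj, {0: 6, 3: 7, 4: 8}))  # node 1
```

With only three observers out of five nodes, node 1 is the unique candidate whose distances fit the arrival times perfectly, which is the intuition behind needing only a small fraction of well-placed sensors.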
Originally devised to pinpoint the source of real-world epidemics, the technique can easily be applied to computer networks - no matter what their size is. And given that the Internet is a global system of interconnected computer networks, the application of this strategy seems only natural.
The researchers tested the technique against four different types of network structures, and the results were satisfactory every time. Of course, the more connections the chosen nodes had, the smaller the percentage of them that had to be monitored and pumped for information.
They tested the effectiveness of the algorithm on real data from a South African cholera outbreak and, according to H-Online, on information from the 9/11 terrorists' publicly released data communications.
The paper, which the researchers released the Friday before last, has garnered a lot of attention in various circles, but Pinto confirmed to Computerworld that computer security companies are the only ones who have contacted them so far, asking for additional information and gauging the ways the technique could be used to localize infection sources on the Internet.