Your research led you to the discovery of a high-profile vulnerability. Give us some details.
In late 2004, Arnold Yau (a PhD student in the group) and I began an investigation into IPsec security, in particular the security of the "encryption-only" configuration of IPsec. The relevant standards are pretty clear that this configuration should be avoided, but they also mandate that it be supported, mostly for reasons of backwards compatibility.
We also found quite a bit of anecdotal evidence, mostly in the form of online tutorials, that people might be using it in practice as well. So we decided to analyze the Linux kernel implementation of IPsec, to see how it handled the encryption-only configuration and what weaknesses, if any, it might have. Arnold mostly worked on analyzing the source code, while I worked more on the cryptanalysis side, seeing how features of the code might be exploited in attacks.
By April 2005, about six months after starting, we had a fully implemented attack client which showed the encryption-only mode of IPsec to be very weak indeed against certain kinds of active attack. In fact, we were able to break the IPsec encryption in a matter of seconds, even when 128-bit AES keys were in use!
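The core reason encryption-only modes fail against active attackers is that CBC-mode ciphertext is malleable: flipping bits in one ciphertext block flips the same bits in the next plaintext block, and with no integrity check nothing detects the change. The sketch below illustrates only this general CBC property, not the actual attack from our paper; the toy four-round Feistel cipher is a stand-in for AES (the malleability depends only on the CBC chaining, not the cipher), and all names and data here are illustrative.

```python
import hashlib

BLOCK = 16  # 16-byte blocks, as with AES

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def _round_fn(key, half, rnd):
    # Keyed round function for the toy Feistel cipher (NOT a real cipher;
    # it just gives us an invertible, "scrambling" block permutation).
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:8]

def enc_block(key, block):
    L, R = block[:8], block[8:]
    for r in range(4):
        L, R = R, xor(L, _round_fn(key, R, r))
    return L + R

def dec_block(key, block):
    L, R = block[:8], block[8:]
    for r in reversed(range(4)):
        L, R = xor(R, _round_fn(key, L, r)), L
    return L + R

def cbc_encrypt(key, iv, pt):
    prev, out = iv, []
    for i in range(0, len(pt), BLOCK):
        prev = enc_block(key, xor(pt[i:i + BLOCK], prev))
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(key, iv, ct):
    prev, out = iv, []
    for i in range(0, len(ct), BLOCK):
        c = ct[i:i + BLOCK]
        out.append(xor(dec_block(key, c), prev))
        prev = c
    return b"".join(out)

key = b"0123456789abcdef"          # secret; the attacker never sees it
iv = b"A" * BLOCK
pt = b"inner IP header.then a payload.."   # two 16-byte plaintext blocks
ct = cbc_encrypt(key, iv, pt)
assert cbc_decrypt(key, iv, ct) == pt      # sanity check

# Active attacker: flip one bit in ciphertext block 0, without the key.
mask = bytes([0x01] + [0x00] * (BLOCK - 1))
tampered = xor(ct[:BLOCK], mask) + ct[BLOCK:]
pt2 = cbc_decrypt(key, iv, tampered)

# Plaintext block 1 carries exactly the attacker's chosen bit-flip,
# while block 0 decrypts to garbage; nothing flags the modification.
assert pt2[BLOCK:] == xor(pt[BLOCK:], mask)
assert pt2[:BLOCK] != pt[:BLOCK]
```

Our real attack built on this kind of controlled modification of protected inner packets, combined with how the Linux kernel processed the resulting malformed traffic; the snippet above shows only the underlying malleability that an integrity check would prevent.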
In your opinion, what is the appropriate approach to take when announcing a vulnerability? What important lessons have you learned during your vulnerability disclosure process?
We worked through NISCC, a UK government agency, and they were able to put us in touch, through their channels, with a large number of vendors and consumers of IPsec. We also discussed things with people in the IETF, to make sure our understanding of the standards was correct. This approach gave all parties some time to assess the impact of our work for their products and deployments ahead of the official vulnerability announcement from NISCC and the release of our research paper describing the work.
We found the vendors to be largely responsive and cooperative, and I think they appreciated the opportunity to work things through in advance. For some vendors, there was no problem: their products didn't allow the encryption-only setting to be selected; others had more work to do.
At the same time, we were getting useful feedback on the real-world implications of our research, which ultimately made our research paper a better-informed piece of work. This benefit was somewhat unexpected for us, so one valuable lesson was not to underestimate the value of working with the community of implementors and users before going public with your research. The proof that this worked in our favour is that our paper was accepted for presentation at Eurocrypt 2006, a major international conference in cryptography held in St. Petersburg in May.
In general, what is your take on the full disclosure of vulnerabilities? Should the vendors have the final responsibility?
This is a hard one for me, as I don't have direct experience of working on the vendor side. However, software should be a product like any other, and I think the seller of any product ultimately has the responsibility to make sure it's fit for purpose. Most software companies understand that perfectly well nowadays, and big strides have been made in recent years.