'Responsible Disclosure' and Security Vulnerabilities

A friend questioned my publicly announcing a security vulnerability in the Ontario Science Centre website today. I ended up writing a long email about the ethics - such as they are - of disclosing security vulnerabilities, and thinking a lot about what I did and how I might have done it better. What I wrote came entirely from memory, from things I'd read and internalized during a 22-year career in System Administration and Dev/Ops: it's a subject I've thought about a lot. But I did a bit of research tonight: plenty has been written on the subject, and the term I was reminded of, and favour, is Responsible Disclosure (Wikipedia).

Here's what happens: a hacker finds a security flaw, by accident or on purpose. Now we find out whether they're a white hat or a black hat (or one of the multifarious shades of grey), because a white hat tries to report the flaw to the programmer responsible, while a black hat attempts to siphon private and (for them) hopefully lucrative information - such as credit card numbers and the names attached to them - from the breach.

But how do you responsibly report a security flaw? The consensus is that you should notify the responsible party privately, giving them time to create and distribute a fix. Ideally, they will then announce the now-fixed issue as a warning to others, crediting the notifying party with assisting them. Some large companies offer bug bounties (monetary rewards) for exactly this behaviour, and have a prescribed protocol for handling every step of the process.

Being a white hat is rarely so simple. Some companies attempt to sue the white hat who notified them, and short-sighted laws like the American DMCA make it possible for them to occasionally win (the DMCA makes it illegal in many circumstances to do reverse engineering or security research on software - which means the only people willing to look for security flaws are those already willing to break the law). Some companies ignore you. Some take months and months to issue a fix, while there's a known security flaw in their software out in the wild.

What to do then if the company doesn't respond to the notification of a security vulnerability? There are many (contentious) opinions on this, but I agree with what appears to be the general consensus: the white hat/researcher should eventually publicly reveal the flaw so that users of the software will be aware of the problem and either mitigate the issue or stop using the product. The amount of time to wait is also contentious, and dependent on the type of product, the severity of the problem, and whether or not there are known exploits in the wild.

As mentioned in my blog entry about the OSC, I make - and secure - websites for a living. For reasons I no longer remember (although I was going to the Science Centre shortly after), I ran their site through the Qualys SSL Labs SSL Server Test, and it failed spectacularly. The site was still offering incredibly old protocols, and it was susceptible to both the DROWN and POODLE attacks - known vulnerabilities in the obsolete SSLv2 and SSLv3 protocols - that would let bad actors break its encryption.

So I tried to notify them right away. And here the ethics get a little hazy, because I use a web browser that's half crippled by my security-madness, with JavaScript disabled on many sites ... so when I submitted their "Contact Us" form, I may never have actually sent it. Which would explain their lack of response. But I think I did send it, and the ensuing six weeks of silence made me self-righteous and pissy, and I eventually wrote up their failure on my blog.
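SSL Labs does a far more thorough job than anything I'd hand-roll, but to give a sense of the kind of check involved, here's a minimal Python sketch that probes which TLS versions a server will negotiate (the hostname is a placeholder, not the OSC's). It can't test for SSLv2 or SSLv3 - the protocols behind DROWN and POODLE - because a modern OpenSSL build refuses to speak them at all, which is part of why finding them enabled on a live site is so alarming.

    import socket
    import ssl

    # Placeholder host for illustration only - substitute a site you're allowed to test.
    HOST = "example.org"
    PORT = 443

    # Protocol versions to probe, oldest first.
    VERSIONS = {
        "TLS 1.0": ssl.TLSVersion.TLSv1,
        "TLS 1.1": ssl.TLSVersion.TLSv1_1,
        "TLS 1.2": ssl.TLSVersion.TLSv1_2,
        "TLS 1.3": ssl.TLSVersion.TLSv1_3,
    }

    for name, version in VERSIONS.items():
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False       # we only care about protocol support here,
        ctx.verify_mode = ssl.CERT_NONE  # not certificate validity
        ctx.minimum_version = version    # pin the handshake to exactly one version
        ctx.maximum_version = version
        try:
            with socket.create_connection((HOST, PORT), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=HOST):
                    print(f"{name}: accepted")
        except OSError:                  # ssl.SSLError is a subclass of OSError
            print(f"{name}: rejected")

These days a healthy configuration generally rejects everything older than TLS 1.2; a site that still answers on TLS 1.0, or worse, is overdue for attention.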

Did I do right? I think so - although I wonder if I actually sent that message. I hope I was in the ballpark. The OSC did right: they fixed the problem once the right person heard about it, and I'm very happy about that. So I guess it worked out.