I read a thought-provoking article yesterday by Derrick Ashong entitled “The Truth About Transparency – Why WikiLeaks is Bad For All of Us”. In the article, Ashong argues that transparency of sensitive information is a good thing in specific circumstances. Exposing war crimes, for example, is a good use of “forced openness”. However, releasing TOO much information about things like email communications, private thoughts of world leaders, etc., is going overboard in his opinion. In other words, some stuff is better left unexposed. This got me thinking about similar debates we’ve had in infosec about openness, whether related to vendors, code, data breaches, general information sharing, etc. Here are a few examples of common “transparency debates” in our field:
- Full Disclosure: This is likely the most prevalent example of transparency in infosec, and there are strong arguments on both sides. Should “security researchers” actually publish information about software or other flaws, purportedly to keep vendors honest and on their toes, or to drive the community to fix things in a timely manner? Many would argue that this has been somewhat effective over the last 10-12 years, but others may disagree. Clouding the issue is another common occurrence: researchers grasping at their 15 minutes of fame by presenting flaws and information publicly, regardless of whether the issue has actually been fixed.
- Data breach disclosure: This is legally mandated in many cases now, depending on who is affected and what data is actually breached. The general thought here is twofold – first, make sure affected people know what has happened. Second, bring some public awareness to the issue. On the first count, we’re succeeding. But on the second? Other than fines and penalties related to compliance or industry regulations, people do not seem to care too much.
- Security community data sharing: Sites like the SANS Internet Storm Center come to mind. By sharing information about things we see, attacks that are happening, malware and other malicious code, we’re better able to prepare for things headed our way. In general, I think most agree this is a “good thing”. Many organizations are hesitant to share data, though, even anonymously. A new (and awesome) entrant here is the recently unveiled Verizon VERIS framework. This aims to gather data about specific incidents (anonymously, of course), and the Verizon Risk team then crunches this data to provide some interesting stats to the community, akin to the work they’ve done in past years with their Data Breach reports. Will this hold water? Time will tell, but I’m hoping so.
- Audit/Assessment statements: As more and more organizations look to use outsourced providers, ranging from simple outsourcing to cloud-based services, we’re being asked to extend our security policies and standards out into 3rd-party organizations. To feel at all comfortable doing this, security professionals routinely ask for some verification of the security controls in place, either something they can touch/feel/assess themselves or (more commonly) a recognized format that indicates assessment by an objective auditor. These often come in the form of SAS70 audits or ISO 27000 certifications. The community is HUGELY skeptical of these, however, particularly SAS70. Why? Well, we usually don’t get to define the controls or the scope, so it’s less than “open” in many ways. And that tends to make us nervous.
So, back to the initial question – is “transparency” good or bad for infosec? Well, although I hate to give a stock consulting answer, I think “it depends”. In this case, I think what influences the answer is the ultimate goal of the openness – is it shaming vendors or trying to self-promote? Probably bad. Is it helping people protect themselves, or arming the security community to better look out for issues? Probably good. I think anything that teaches us to learn from mistakes is a good thing, too, and we need more sharing and openness that leads to that.