Archive for November, 2010

Transparency in Infosec: Good or Bad?

November 30th, 2010

I read a thought-provoking article yesterday by Derrick Ashong entitled “The Truth About Transparency – Why WikiLeaks is Bad For All of Us”. In the article, Ashong argues that transparency of sensitive information is a good thing in specific circumstances. Exposing war crimes, for example, is a good use of “forced openness”. However, releasing TOO much information about things like email communications, private thoughts of world leaders, etc. is going overboard, in his opinion. In other words, some stuff is better left unexposed. This got me to thinking about similar debates we’ve had in infosec about openness, whether related to vendors, code, data breaches, general information sharing, etc. Here are a few examples of common “transparency debates” in our field:

  • Full Disclosure: This is likely the most prevalent example of transparency in infosec, and there are strong arguments on both sides. Should “security researchers” actually publish information about software and other flaws, purportedly to keep vendors honest and on their toes, or to drive the community to fix things in a timely manner? Many would argue that this has been somewhat effective over the last 10-12 years, but others may disagree. Another common occurrence that clouds this issue is security researchers who grasp at their 15 minutes of fame by presenting flaws and information publicly, regardless of whether the issue has actually been fixed.
  • Data breach disclosure: This is legally mandated in many cases now, depending on who is affected and what data is actually breached. The general thought here is twofold: first, make sure affected people know what has happened; second, bring some public awareness to the issue. On the first count, we’re succeeding. But on the second? Other than fines and penalties related to compliance or industry regulations, people do not seem to care too much.
  • Security community data sharing: Sites like the SANS Internet Storm Center come to mind. By sharing information about things we see, attacks that are happening, malware and other malicious code, we’re better able to prepare for things headed our way. In general, I think most agree this is a “good thing”. Many organizations are hesitant to share data, though, even anonymously. A new (and awesome) entrant here is the recently unveiled Verizon VERIS framework. This aims to gather data about specific incidents (anonymously, of course), and the Verizon Risk team then crunches this data to provide some interesting stats to the community, akin to the work they’ve done in past years with their Data Breach reports. Will this hold water? Time will tell, but I’m hoping so.
  • Audit/Assessment statements: As more and more organizations look to use outsourced providers, ranging from simple outsourcing to cloud-based services, we’re being asked to extend our security policies and standards out into third-party organizations. To feel at all comfortable doing this, security professionals routinely ask for some verification of the security controls in place – something they can either touch/feel/assess themselves or, more commonly, receive in a recognized format that indicates assessment by an objective auditor. These often come in the form of SAS70 audits or ISO 27001 certifications. There is HUGE skepticism around these from the community, however, particularly around SAS70. Why? Well, we usually don’t get to define the controls or the scope, so it’s less than “open” in many ways. And that tends to make us nervous.

So, back to the initial question – is “transparency” good or bad for infosec? Well, although I hate to give a stock consulting answer, I think “it depends”. In this case, I think what influences the answer is the ultimate goal of the openness – is it shaming vendors or trying to self-promote? Probably bad. Is it helping people protect themselves, or arming the security community to better look out for issues? Probably good. I think anything that teaches us to learn from mistakes is a good thing, too, and we need more sharing and openness that leads to that.

Categories: Information Security

Infosec: Not Just Technology, Thank You

November 25th, 2010

So, it’s Thanksgiving, and you’re supposed to talk about what you’re thankful for. Well, I had a bit of an epiphany today while mulling this topic over, and given that I’m avoiding some incredible (not) forced in-law activity for the day, it seems as good a time as any to throw it out there.

People commonly ask me how I got into information security. I have a pretty good story there, but it’s too long to get into here. The other question that always seems to accompany this one, at least when I tell people how passionate I am about my chosen field, is “What do you enjoy about it?” or “Why do you like it?” Today, perhaps for the first time, I figured that out.

It’s not the technology. Although I’m a proud geek who enjoys breaking things and failing to put them back together properly, that’s not enough to sustain passion, for me at least. It’s the people. To be more specific, it’s the nature of people that infuses everything infosec, and that, coupled with the technology element, is what gets me fired up in the morning to go to work.

I’ve blogged before about how infosec differs from many other common disciplines within the sphere of IT, especially in the sense that we have real attackers trying to cause us harm all the time. This is one side of things. We get to constantly strategize about how enemies will try to manipulate our systems, what they are after, how they operate, etc. The flip side of this is also fascinating – human nature. We have to contend with people’s innate desire to click things, open doors, answer phones, give away or lose information, etc. This is a shared burden between us and all of IT, really, but it’s never going to go away. We will always have people trying to scam us or steal from us, and we will always have people who can be exploited for that very purpose. My first degree is in Psychology. I over-analyze things, to say the least, and people are fascinating to analyze.

So today, on Thanksgiving, I realized that I have a perfect field for my personality, talents, and interests. Technology changes constantly, and is challenging for that reason. People are exactly the same as they’ve always been, and this is exactly why they’re challenging. Together? A perfect storm of things to work on, likely forever. And for this, and the fact that I fell into this field, I am thankful.

Categories: Information Security, Musings

“That’s Too Hard” Syndrome

November 19th, 2010

How many things DON’T we do because they’re “too hard”? For years, we’ve all thrown around “security best practices,” some of which are deeply embedded in the common infosec psyche (separation of duties) and others of which are totally subjective (Web application firewalls). I’ve had lots of discussions with smart people recently, though, that make me wonder – what “big gaps” are left? Have we solved most of the fundamental technical issues that plague security? I’ll go out on a limb and say yes. I think we have.

Keep in mind, there’s a big difference between solving the fundamental issue and implementing the solution. For example, we can inspect network traffic of any kind – there are plenty of tools (proxies, deep application-inspection firewalls like Palo Alto’s, etc.). However, getting those tools in place, working properly, and tuned correctly is a different issue. What about really high speeds? It’s easy to miss things there, too. That aside, though, what are we *capable* of doing that most of us are not, simply because it’s a lot of work? There are a few areas I can think of right off the bat:

  • Application whitelisting: Even in a limited fashion (i.e., NOT replacing A/V entirely, just augmenting it), most people aren’t doing this. Reasons I’ve heard include “too many different profiles”, “politics”, “maintaining changing applications”, and too many others to count. Everyone in infosec acknowledges that A/V is not working well anymore (if it ever really did). We have solutions that could shore up our malware defenses, protect us from client-side exploits, etc. But…too hard. (The first sketch after this list shows the core idea.)
  • Secure coding: A big part of the problem here is that developers are not security people. They don’t think the same way most of the time, and no amount of effort will likely change that. However, there are some fundamental flaws that keep showing up, time and time again. Buffer overflows and lack of bounds checking. Lack of input validation. Excessive privilege use. Dumb-ass comments and info in code. Josh Corman and crew are trying valiantly to instill some sense of responsibility with the whole Rugged movement, but it’s still a difficult thing to pull off in multiple ways. First, security people usually hate dealing with developers. Reason? Developers typically don’t like us. Second, developers are focused on functionality and speed, NOT security. Changing those priorities may take more than random acts of kindness. Random acts of ass-kicking may be more warranted. (The second sketch below shows how small one of these fixes really is.)
  • Encryption: Encrypting hard drives with sensitive data should not be optional. We have SO MANY TOOLS to do this with now that it should be viewed as a mandatory exercise for organizations that give even the semblance of a %&*#. However, the deployment and management of this (and of course the cost) often puts this one into the “too hard” bucket as well. (See the third sketch below for how short the Linux version of this is.)
  • Outbound network ACLs and filtering: This is not, in fact, hard. But in order to do this, people will have to get off their ASSES, put DOWN the %*&#(% MOUNTAIN DEW, and go put a few rules in the ASA, Check Point, Fortinet, or whatever they’re running. “But it will break things!” Uhhhh, really? You’re going to “break” outbound IRC traffic? You really NEED people connecting outbound to SSH? If nothing else, filter on all IP ranges other than your own NAT’d space, so no one can spoof from inside. Filter out file sharing stuff, and unused or unallocated address space. Do NOT just allow everything out; that borders on irresponsible, if you ask me. (The fourth sketch below translates this into actual rules.)
  • Anal-retentive change management: Yeah, there should be an exception process. Shit happens. But unless you support a cowboy culture in IT, this should be at the very top of security’s priority list. You need an approval workflow, audit trails, and details on what will happen, when it will happen, why it is happening, and what folks plan to do if things do not go as planned. Again, anything less than this in a production environment is flat-out pathetic. (The last sketch below shows just how little a ticket has to capture.)
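
To make the whitelisting point concrete, here is a toy sketch of the core idea – compare a binary’s hash against an allowlist and default-deny everything else. Real products enforce this in the kernel, not in a script; the hash and paths here are placeholders, not anything from a real policy.

```python
import hashlib
import subprocess
import sys

# Placeholder allowlist: SHA-256 hashes of approved binaries. In a real
# deployment this is a centrally managed, signed policy, not a hardcoded set.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path):
    """Hash the binary on disk so it can be compared against the allowlist."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def run_if_whitelisted(path, *args):
    """Default-deny: refuse to execute anything not explicitly approved."""
    if sha256_of(path) not in APPROVED_HASHES:
        sys.exit("BLOCKED: " + path + " is not on the whitelist")
    subprocess.call([path] + list(args))
```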
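
On the secure coding front, a minimal sketch of how cheap input validation can be, using Python’s built-in sqlite3. The table and the injected string are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_unsafe(name):
    # DON'T: string concatenation lets "x' OR '1'='1" dump the whole table.
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'").fetchall()

def lookup_safe(name):
    # DO: a bound parameter is treated strictly as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_unsafe("x' OR '1'='1"))  # every row comes back -- injected
print(lookup_safe("x' OR '1'='1"))    # empty list -- payload is inert
```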
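
For the encryption bullet, this is roughly all the Linux/dm-crypt version amounts to, wrapped in Python to keep one language across these sketches. The device, mapper name, and mount point are assumptions, and luksFormat destroys whatever is on the partition:

```python
import subprocess

DEVICE = "/dev/sdb1"   # hypothetical partition holding sensitive data
MAPPER = "securedata"  # arbitrary name for the unlocked mapping

def run(cmd):
    subprocess.check_call(cmd)

# One-time setup: turn the raw partition into a LUKS container (destructive!).
run(["cryptsetup", "luksFormat", DEVICE])

# Day-to-day: unlock the container, then mount the decrypted mapping.
run(["cryptsetup", "luksOpen", DEVICE, MAPPER])
run(["mount", "/dev/mapper/" + MAPPER, "/mnt/secure"])
```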
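
For outbound filtering, every vendor has its own syntax, so here is the idea expressed as iptables rules on a Linux gateway (again driven from Python for consistency). The interface name and internal range are assumptions; the allowed ports should be whatever your business actually uses:

```python
import subprocess

INTERNAL_NET = "10.0.0.0/8"  # assumed internal (NAT'd) address space
WAN_IF = "eth0"              # assumed outside interface on the gateway

def rule(*args):
    subprocess.check_call(["iptables"] + list(args))

# Anti-spoofing: only forward outbound traffic sourced from our own space.
rule("-A", "FORWARD", "-o", WAN_IF, "!", "-s", INTERNAL_NET, "-j", "DROP")

# Kill the obvious badness explicitly: outbound IRC and unsanctioned SSH.
rule("-A", "FORWARD", "-o", WAN_IF, "-p", "tcp", "--dport", "6667", "-j", "DROP")
rule("-A", "FORWARD", "-o", WAN_IF, "-p", "tcp", "--dport", "22", "-j", "DROP")

# Allow what is actually needed, then default-deny everything else.
rule("-A", "FORWARD", "-o", WAN_IF, "-p", "tcp", "--dport", "80", "-j", "ACCEPT")
rule("-A", "FORWARD", "-o", WAN_IF, "-p", "tcp", "--dport", "443", "-j", "ACCEPT")
rule("-A", "FORWARD", "-o", WAN_IF, "-p", "udp", "--dport", "53", "-j", "ACCEPT")
rule("-A", "FORWARD", "-o", WAN_IF, "-j", "DROP")
```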
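
And on change management, a bare-bones sketch of what a ticket should capture. The field names and the two-approver rule are my own invention for illustration, not any particular framework’s:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ChangeRequest:
    """The minimum a ticket should capture before anything touches production."""
    what: str           # exactly what is changing
    when: datetime      # the scheduled window
    why: str            # justification for making the change
    rollback_plan: str  # what happens if things do not go as planned
    requested_by: str
    approvals: List[str] = field(default_factory=list)  # audit trail of sign-offs

    def ready_to_execute(self):
        # Invented policy for illustration: two sign-offs before execution.
        return len(self.approvals) >= 2
```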

There are plenty of others. What am I missing? What other areas do people write off as “too hard” that we should fight harder to get done?

Categories: Information Security