
Quick Thought: Monitoring Data Exfiltration to the Cloud

September 10th, 2010

Truth be told, this thought was sparked by my friend Rob Rounsavall at Terremark while we were presenting at the SANS Virtualization and Cloud Computing Security Summit in DC last month.

The question is simple: given our concerns about cloud security (whether providers meet our basic policies and practices, let alone compliance requirements), can we allow business unit IT teams, developers, and others to use “pay with a credit card and get started now” types of cloud services? The answer is likely no…but that’s much like telling people they shouldn’t speed in their cars. Without some enforcement mechanism, they’ll do it anyway. So if we’re concerned about this, simply writing a “policy” may not help us, especially in large, distributed organizations.

So what kinds of outbound detection/blocking are folks doing (if anything)?

1. Snort or other IDS rules for sites or specific content elements associated with these cloud services? Something like:

alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"Cloud Madness!"; flow:to_server,established; content:"terremark.com"; nocase; classtype:policy-violation; sid:31337; rev:1;)

Yes, I know this is more of a “pseudo” rule. Just a thought.

2. More traditional content filtering like Websense?

3. Proxy or DLP filtering?
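
On the proxy front, a crude first pass could be as simple as a deny (or log-only) ACL for known “pay with a credit card” provider domains. Here’s a rough sketch using Squid purely as an illustration; the domain list is hypothetical and would have to come from your own inventory of the providers you actually care about:

# squid.conf sketch: block (or at least log) outbound requests to self-service cloud providers
# illustrative domain list only; substitute your own, and place this above your final http_access allow rules
acl cloud_providers dstdomain .terremark.com .amazonaws.com .rackspacecloud.com
http_access deny cloud_providers

Whether you deny outright or just allow-and-report is a policy call, but the proxy already sees every destination, which makes it a natural enforcement point; the same domain list could just as easily feed a DLP or egress-filtering policy.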

Rob asked the crowd if they were doing this, and no one really had much to say, so I’m assuming it’s not something many are thinking of…yet.

Categories: Information Security
  1. September 21st, 2010 at 05:27

    Hi Dave –

    In a perfect world (where’s that, will we ever get there?), all my outbound packets would match a “whitelist”, or drop to the floor.

    In an imperfect world (a much more interesting place, don’tcha think?), web content categorizers/filters are used primarily for legal purposes, the “reasonable man” defense, since any URL can be a misleading indicator 99.9% of the time.

    Similarly, DLP filters can be a good story to tell, but do little to address the mega-threat of what I’ve been calling “outside-out data leakage” but what the popular press refers to as social networking tools.

    Which brings me to the much-vaunted topic of proxies (especially the reverse style). I’m a big fan of PULL, and of reflecting out only that which is requested (be it app or data). Any kind of VPN solution feels more and more to me like an attack tool nowadays. IMO, though, the world is still waiting for a truly excellent proxying device.

    Microsoft’s ISA, which began with such promise, fell rapidly into stale irrelevance. The jury is still out on appliances from Cisco and the other well-capitalized big names. We’ll see, but I’m skeptical that a truly good tactical solution will ever emerge for enterprise implementation, for one very compelling reason: the cloudfrastructurers and ISPs (and their big brother handlers) want to sell us this service (probably on a monthly basis, and for obscene sums in the aggregate). One could argue, reluctantly, that they are best positioned to perform this function.

    There, I got through all that without resorting to use of the word “sustainable” once.

    – Guy
