Keeping an eye on those in power has always been a staple of relatively open governments and well-organized IT shops. Let’s focus on the latter, given that a discussion of the former could easily lead to rants. Visit EFF for more info on THAT area.
Keeping an eye on IT people with greater privilege levels has always been a challenge. Obviously, this could extend to NON-IT staff as well (Enron, anyone?), but in the information security division, we’re often dealing with abuse of privileges related to something or someone in IT. I really see four distinct levels of privilege monitoring that need to be considered:
- The System Level: This is the realm of SysAdmins, who actually manage systems and make changes to them. Often, these teams will have Administrator or Root privileges to groups of platforms.
- The Application Level: This level pertains to the DBAs and Developers of the world, who may have some degree of control over systems by extension of their control over the critical apps *running* on the system.
- The Network-Infrastructure Level: This level relates to the network “plumbing”, or pieces and parts that hold the environment together. Network admins fall into this category.
- The Backbone or Service Provider Level: Plumbing on a “macro” scale.
Most of us tend to focus on the first three levels in our organizations. We’re all using the traditional mechanisms to accomplish this, too – tools like “su” and “sudo” for *nix systems, UAC and “RunAs” for recent Windows versions, and logs, logs, logs. Applications and network device OSs have their own mechanisms, too, most similar in nature to tools like “su” and “sudo”.
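The “logs, logs, logs” part is where most of the actual watching happens. As a minimal sketch of what that looks like in practice, here’s a small Python snippet that pulls sudo invocations out of syslog-style auth log lines. The sample log text, hostname, and usernames are invented for illustration, and real log paths and formats vary by distro and syslog configuration:

```python
import re

# Hypothetical excerpt of a Debian-style /var/log/auth.log.
# Real formats vary by distro and syslog configuration.
SAMPLE_LOG = """\
Mar  3 10:14:01 web01 sudo:  dshack : TTY=pts/0 ; PWD=/home/dshack ; USER=root ; COMMAND=/usr/sbin/service nginx restart
Mar  3 10:15:22 web01 sshd[2211]: Accepted publickey for dshack from 10.0.0.5
Mar  3 10:16:45 web01 sudo:  dshack : TTY=pts/0 ; PWD=/home/dshack ; USER=root ; COMMAND=/bin/cat /etc/shadow
"""

SUDO_RE = re.compile(r"sudo:\s+(?P<who>\S+) : .*USER=(?P<as>\S+) ; COMMAND=(?P<cmd>.+)")

def sudo_events(log_text):
    """Yield (invoking_user, target_user, command) for each sudo entry."""
    for line in log_text.splitlines():
        m = SUDO_RE.search(line)
        if m:
            yield m.group("who"), m.group("as"), m.group("cmd")

for who, as_user, cmd in sudo_events(SAMPLE_LOG):
    print(f"{who} ran as {as_user}: {cmd}")
```

The point isn’t the parsing itself – it’s that somebody (or something) has to actually read these entries and notice when a SysAdmin suddenly starts cat-ing /etc/shadow.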
What about the backbone level, though? What can we do to exert “control” over what passes through? We’ve certainly got end-to-end encryption, but that may not be practical for everything. Simply monitoring Web browsing habits can reveal a lot about us, and much of this traffic is totally open. Recently, this very issue came up in Europe, as reported by BBC News. With all the talk about Cloud Computing, and sending more data and transactions outside our traditional IT infrastructures, we should all be concerned with what access people have to our private and sensitive data, habits, etc. Another issue: how do I know for certain that my private data is deleted after I request that it be removed from some Web site/service? All good questions. There are good and bad aspects of “watching” – for example, I don’t particularly care for my government spying on me (especially in the name of “anti-terrorism”. Sheesh.) But keeping an eye on those who are in positions of trust and authority? All for it.
***Just gave my “History of Hacking” talk here in Tysons Corner, and it reminded me that I really liked this post from last year on a different site. So I am re-posting it here…enjoy.***
So I just read this:
In the article, Ronald Bartels brings the old school h@x0rama flooding back with his discussion of hacking voice systems and voice services. I liked his analogy of Kevin Mitnick and Steve Wozniak (aka “Berkeley Blue”), where he makes the point that system hackers seem to get into more trouble than those who abuse the phone system. Probably true, except all the early hackers messed with the phone system and now no one gives much of a damn since the crown jewels are usually elsewhere.
Overall, though, he is right in saying that lots of today’s most common voice infrastructure gear, PBXs and the like, are essentially sitting ducks right out of the gate. Well, of course! They’re complicated computer systems more than anything else, and we all know about those dang complicated computers – they’re just not secure! Not out of the box, anyway. This easily leads to a Shackleford Soapbox rant about the merits of basic system hardening and lockdown – everybody pays lip service to it, few do it, even fewer do it well. BUT….
We won’t go there today. Instead, the most interesting part of Bartels’ posting is the discussion of detecting fraud and other criminal activity through Voice IDS/IPS and behavioral rules. This, to me, is where the real intellectual stimulation happens. I am a sucker for any discussion of behavioral monitoring, because a) we are NOT good at it in corporate America yet, and b) there are MANY complicated facets to consider. Let’s break this down a bit:
- Could a simple pattern of “war dialing” or “demon dialing” be easily detected? You bet. A sequential series of inward dial attempts from one or very few sources would be highly suspect, and this is obviously NOISE that you do not want. Even if you know the probability of anything answering is minimal, still a good idea to block noise if you can.
- Voice speaker recognition signatures? OK, I get it. Record the voice, match the recorded profile to a set of origin numbers, yeah. Cool! Practical? Ummmm, no. Dude, there are like 4 companies in the world that give a damn about Voice profiling. With signatures to boot. Geez – big market!
- Pure behavior patterns make sense. Junk coming from a number or series of numbers? Got it. Strange dial times/directions? Yep, got that too. Vulnerability/exploit signatures specific to the phone network and equipment? OK, still with you. But here is the $29 million question: who the hell is going to LOOK at any of it? And analyze it? And baseline it, track it, diff it, etc? Simple answer: NO ONE. That’s right, 99.9% of companies do not have the incident planning and response capabilities needed to even begin to do this, let alone analyze it and understand it.
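To show how simple the war-dialing rule from the first bullet really is, here’s a minimal sketch in Python: flag any calling number whose destinations include a long sequential run. The threshold, the call-record format, and the phone numbers are all invented for illustration – this is not how any real Voice IDS represents CDRs:

```python
from collections import defaultdict

# Invented threshold: how many consecutive destination numbers from one
# source before we call the pattern suspicious.
SEQ_RUN_THRESHOLD = 5

def suspicious_sources(cdrs, threshold=SEQ_RUN_THRESHOLD):
    """cdrs: iterable of (source_number, dest_number) integer pairs.
    Returns the set of sources whose dialed destinations contain a
    sequential run at least `threshold` long."""
    dialed = defaultdict(set)
    for src, dst in cdrs:
        dialed[src].add(dst)
    flagged = set()
    for src, dsts in dialed.items():
        nums = sorted(dsts)
        run = best = 1
        for a, b in zip(nums, nums[1:]):
            run = run + 1 if b == a + 1 else 1
            best = max(best, run)
        if best >= threshold:
            flagged.add(src)
    return flagged

# One source sweeping a block of numbers, another making normal scattered calls.
records = [(5551000, 7025550100 + i) for i in range(8)] + \
          [(5552000, 7025550123), (5552000, 7025559876)]
print(suspicious_sources(records))  # only the sweeping source gets flagged
```

The detection logic is trivial – which is exactly the point above. The hard part isn’t writing the rule; it’s having someone on staff who collects the records, baselines them, and actually looks at what gets flagged.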
In too many penetration tests to count, I’ve busted into some system or another using war dialing techniques. There really is too much crap out there that isn’t secure, left open for vendor support staff or remote applications or whatever else. But given the proliferation of behavior-based security analysis on traditional data networks (sarcasm: there is not a whole lot of this being done), the notion that security managers will even RANK this on the budget-spend-o-meter is not realistic.