
“I’m Not a Coder” may not fly forever

June 23rd, 2011

So, I’ve spent the past month traveling all over the place, teaching and working with clients. I’ve taught two groups of people in the Middle East and Europe how to demolish Web applications. And it has been unbelievably fun, trust me. 🙂 However, I’ve become attuned to something that I think could be a problem in the infosec space in the near future: most security people are not coders. Now, I’m not admonishing those of you who are network types. Hell, I’ve really got more of a network background than anything. But I know C++, spent some time writing it, can crawl through Java, and can whip up some Perl and Python when I need to. And JavaScript? Yeah, I’ve got that too. What I’m finding when working with infosec teams around the globe, though, is that there’s a bit of apathy toward coding skills. Well, you heard it here, folks:

90% of your security problems are related to bad code, somewhere down the line.

And being a paranoid type, and a bit of a worrier about THINGS, I fear we’re losing some Kung Fu. What does the next generation of security folks look like? From what I can see, they’re even LESS inclined to code. This, in my opinion, is a problem. The 2011 Verizon DBIR highlights malware and hacking, both of which usually come down to a patch, a flaw, a vulnerability. A piece, or pieces, of bad code. The number of Web application-related flaws keeps climbing, particularly XSS (SQLi is steady, even down slightly, yay). We need to understand code, period. Here are a few reasons why:

  • Your organization’s developers need help. Think it’s hard to convince the rank and file of your organization that security is important? Coders are in many cases under WAY more pressure to deliver projects, so security almost always takes a back seat. Help them.
  • You need to understand what vulnerabilities mean, and what exploits are doing. That may include a bit of code.
  • You need to crank out some scripts, or write a few simple programs, during security assignments (particularly pen tests); see the quick sketch below.
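
To make that last bullet concrete, here’s the kind of throwaway script I mean: a bare-bones TCP connect scan in Python. This is just a sketch, not a tool; the target address and port list are placeholders, not anything from a real engagement.

    import socket

    # Quick-and-dirty TCP connect scan: the sort of five-minute script
    # you end up needing mid-assessment. Host and ports are placeholders.
    target = "192.0.2.10"
    for port in (21, 22, 25, 80, 443, 8080):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(1.0)
        try:
            s.connect((target, port))
            print("open: %d" % port)
        except (socket.timeout, socket.error):
            pass
        finally:
            s.close()

Nothing fancy, and nmap does it better, but being able to knock this out on demand is exactly the skill in question.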

These are just some ideas to get you started. But if you’re one of those security folks who routinely convince themselves they don’t need any coding skills, you really need to develop some. This is, in fact, a career development thing. Forget that latest shiny vendor widget. Learn some fundamentals. Here are a few suggestions to get you started if you are new to this, or maybe even just rusty:

There’s plenty more. I have books on Ruby, Perl, and lots of other languages. Pick one you like! These are just some that are easy to work with and may help you ease back into the world of programming. I, for one, am not a talented programmer, and never claim to be. But I can pull it off, and I *get* code. There’s a solid chance you need to, as well.


Categories: Information Security
  1. June 23rd, 2011 at 13:13 | #1

    Spot on. Over time, I’ve come to the opinion that what clients/people REALLY need from their security consultants/staff is not someone to point out the problem, but to help them understand WHY it is a problem, and HOW to fix it. They need to be educated! After all, that’s why they made the security mistakes in the first place, right?

    If we can’t give them that, we’re not doing a whole lot more than regurgitating, rephrasing, or interpreting the results of our scans, pentests and hacks.

  2. June 23rd, 2011 at 22:16 | #2

    Good one! This is how I think of it:

    Developer = {Writing, documenting, debugging} source code.

    Security Engineer = Scan (Developer) + Scan (Tester)

    Scan => Security, Context, Audit… and a lot more

  3. Mike
    June 24th, 2011 at 04:54 | #3

    Couldn’t agree more. I’m old school, having started with the TRS-80 (“Trash-80”) and then Apple back in the ’80s. I can attribute my success in security to having a solid understanding of code. Those early days gave me roots in low-level programming (assembler, etc.). I wouldn’t call myself a coder either, but concepts like bit shifts, offsets, and memory allocation are pervasive in exploits.
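
    To illustrate the kind of thing Mike means, here’s a rough Python sketch; the bytes are invented, but reading fields at fixed offsets and shifting bits out of them is the everyday mechanics of exploit and file-format work.

        import struct

        # Pull two little-endian uint32 fields out of a binary blob at a
        # fixed offset, then bit-shift out the top byte of the first one.
        blob = b"\x41\x42\x43\x44\x10\x00\x00\x00"   # made-up data
        magic, length = struct.unpack_from("<II", blob, 0)
        high_byte = (magic >> 24) & 0xFF
        print(hex(magic), length, hex(high_byte))    # 0x44434241 16 0x44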

    My own observation has been that there is too narrow an emphasis on networking in the security field. I have seen extremes where there appears to be a belief that “if it ain’t IPS or firewall, it’s not security”.

    The network is important as it’s usually the conduit, but as you note, most issues are around bad code. Protecting the network with multiple devices is not defense in depth.

    Just as our field has multiple domains, it’s important for the practitioners to be grounded in the fundamentals of both networking and software.

    Q

  4. Clint
    June 24th, 2011 at 21:20 | #4

    I’ve been thinking similar things for some time now, in particular for web applications. In my role at Symantec I lead our Pen Testing team. I also get involved in helping Symantec respond to some of the major breaches we see. Many customers, it seems, have difficulty addressing vulnerabilities for which there is no vendor “patch”. Take standard SQL Injection, for example. Info Sec teams who learn their companies’ custom web apps are vulnerable seem to approach the dev teams with a requirement to “patch” them, because they don’t understand the fundamental coding issues that lead to these flaws. Instead of talking about proper input validation in internally developed apps, or possibly using bound parameters or stored procedures, they look to see that a version number was incremented by a vendor patch.
    For this reason I would also add things like .asp and .php to your list of things for security folks to get familiar with.
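
    Clint’s bound-parameters point is easy to show. Here’s a minimal Python/sqlite3 sketch (table and values invented): with a placeholder, attacker input stays a plain value and never becomes part of the SQL.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

        payload = "' OR '1'='1"   # classic injection string

        # Vulnerable pattern (don't do this): concatenation puts the
        # payload into the SQL grammar itself.
        #   conn.execute("SELECT * FROM users WHERE name = '" + payload + "'")

        # Safe pattern: the ? placeholder binds the payload as a value.
        rows = conn.execute("SELECT * FROM users WHERE name = ?", (payload,))
        print(rows.fetchall())    # [] since no user is literally named that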

  5. June 25th, 2011 at 08:55 | #5

    “I used to code” won’t fly either. Not even “I know some Java and object orientation” in my opinion.

    I wrote on this topic in February:
    http://appsandsecurity.blogspot.com/2011/02/security-people-vs-developers.html
    http://appsandsecurity.blogspot.com/2011/02/another-owasp-paperware-project-anyone.html

    Either you know how to code for real or not. If you know coding and software development you’ll be able to see where security fits in and to communicate with other developers on how to fix holes and avoid them in the future.

    Knowing modern software development isn’t about for loops, inheritance, static void main() or how to pick good variable names (I know you didn’t think so either). You have to dig into dependency injection and mixins, unit testing, refactoring, API design, version control, configuration mgmt and modern IDEs. You don’t have to be _good_ at those things, but not knowing about them (== not having used them in practice) means you’ll be spotted as a non-coder in 10 seconds and – *bam* – developers don’t listen to you anymore.
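
    For readers who have never touched any of those, a unit test is the smallest artifact on that list. A minimal Python sketch, with an invented toy function under test:

        import unittest

        def sanitize(s):
            # Toy function under test: strip angle brackets from input.
            return s.replace("<", "").replace(">", "")

        class SanitizeTest(unittest.TestCase):
            def test_strips_angle_brackets(self):
                self.assertEqual(sanitize("<script>"), "script")

        if __name__ == "__main__":
            unittest.main()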

    So after reading the excellent intro books suggested above, develop a web app, put it into production, then read “97 Things Every Programmer Should Know”:
    http://programmer.97things.oreilly.com/wiki/index.php/Contributions_Appearing_in_the_Book

  6. June 25th, 2011 at 10:24 | #6

    Security Consultant/Pentester that cannot code? -> WHITEHAT SCRIPTKIDDIE <- (this is already a 5M7X trademark!) ;P

  7. June 25th, 2011 at 11:09 | #7

    Amen brother.

  8. June 25th, 2011 at 13:19 | #8

    I proposed a can-you-code test a few months ago. Put the person alone in a room with a clean install of a modern OS. A day/week/month later, check if he/she has produced a working system (time depends on complexity of assignment). Nobody to call, no document deliverables, no Powerpoint deliverables either – just him/her, the computer, the assignment and teh Internetz. 🙂

  9. admin
    June 26th, 2011 at 08:36 | #9

    @John Wilander
    Hey John, thanks for posting, and you certainly make some good points. I disagree with you on “used to be a coder”, though. I, in fact, “used to be a coder”. I have no desire to be a coder today, but I know HOW to code. I read code, I can write code, etc. Is it good to understand the operational aspects of Dev and QA? Sure. But I don’t think most security people will be current “coders”, nor do I necessarily think that’s the right thing to be. I tell developers right off I am not a coder. What I am, however, is a badmotherf**ker who a) will spot defects, and b) cannot be bullshitted.

  10. June 26th, 2011 at 20:40 | #10

    @admin Bullshit detection is paramount here IMO. If you can’t look at code and realize that passwords are being hardcoded, or that the developer is encoding instead of encrypting, or notice whether auth tokens are being checked by ALL the necessary pages (not just the default one after login, or a subset), you’re going to miss some important stuff when you stumble upon code in the course of your job.
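
    On the “ALL the necessary pages” point, the usual fix is to centralize the check. A hedged sketch using Flask (route and session names invented): one before_request hook guards every page, so no single page can forget to check.

        from flask import Flask, session, redirect, request

        app = Flask(__name__)
        app.secret_key = "replace-me"   # placeholder

        @app.before_request
        def require_login():
            # Runs before EVERY request, not just a hand-picked few.
            if request.endpoint not in ("login", "static") and "user" not in session:
                return redirect("/login")

        @app.route("/login")
        def login():
            session["user"] = "demo"    # stand-in for a real credential check
            return "logged in"

        @app.route("/account")
        def account():
            return "hello, %s" % session["user"]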

    By the way, I think all this applies equally to databases. Bad code is letting the bad guy in, but the data they’re after is in a database. Gotta learn some SQL also. For example, even if the code looks great, the webapp DB account should never be able to select CCN from transactions if the webapp only needs to perform inserts and updates! Code & DB go hand-in-hand, folks.
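
    And the database half of that advice often amounts to a one-off grant. A sketch of the least-privilege idea run through Python/psycopg2 (PostgreSQL assumed; the account and table names are invented):

        import psycopg2  # assumes a PostgreSQL backend

        # Hypothetical admin connection; the web app's own account is 'webapp'.
        conn = psycopg2.connect("dbname=shop user=dba")
        cur = conn.cursor()
        # The app only inserts/updates transactions, so it never gets SELECT
        # on card data, even if its code is later found to be injectable.
        cur.execute("REVOKE ALL ON transactions FROM webapp")
        cur.execute("GRANT INSERT, UPDATE ON transactions TO webapp")
        conn.commit()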

  11. June 28th, 2011 at 07:13 | #11

    Google’s Python class is also badass: http://code.google.com/edu/languages/google-python-class/

  12. TheOtherGeoff
    June 29th, 2011 at 14:48 | #12

    err… maybe I’m just restating it, but I take issue with the idea that security is something ‘layered on’ by some other person (“I am a coding GOD, you mere code reviewer”).

    90% of your security problem is a QA problem.
    – Either you didn’t define the security requirement well enough for the coder to understand
    – or you didn’t have a testing methodology in place to identify defects DURING development.

    Then we as non-developer security people (who should be considered the ‘owners’ of the security requirements) really just need to review the process (USDA PRIME) and sample a deep subset during UAT (pen test, dynamic scan, whatever).

    HOWEVER,

    THE OTHER 90% of your security problem is a people problem.
    – You have the wrong (or improperly educated/motivated) people coding and testing (yourself included),
    – The operations team is not doing things in a secure manner
    (“What, actually _look_ at these logs? Sorry, we can’t do that and operate the system and make the 5pm tee time!”)
    – The user of the code is doing something insecure (passwords suck… I’ll write this one down. Or use the defaults, or use the same as my email admin account, which is the same as my register.com account)
    – The user of the system is actually the bad guy, or working for the bad guy
    (“For $200 a person, can you get me a list of all people who bought at least 10 frobits from your company last year?”)

    This is the harder problem to solve… and often times the bigger issue.

  13. admin
    June 30th, 2011 at 02:15 | #13

    @TheOtherGeoff
    I call bullshit. Sorry that you “take issue” with security being layered on, but welcome to reality. If coders could build in security themselves more often, we wouldn’t have to layer, now would we? Also, pushing this to QA undermines the notion that developers could just, um, GIVE A SHIT and try to code right, which includes little things like validating and filtering input and defining buffers well.
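
    (And those “little things” really are little. A minimal allowlist-validation sketch in Python; the field and pattern are invented:)

        import re

        # Allowlist validation: accept only what a username may look like,
        # instead of trying to blocklist every dangerous character.
        USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

        def valid_username(value):
            return bool(USERNAME_RE.match(value))

        print(valid_username("deadwood"))        # True
        print(valid_username("' OR '1'='1"))     # False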

    I have nothing against coders at all, and I get that they are motivated differently and have pressures. And maybe the wrong people are coding/testing (I am not a skilled coder. I freely admit this. I just know how to read and write it enough to be a good security professional). I do agree with the people problem, and I’m starting to feel like preaching and teaching is increasingly hopeless. But that’s just me being negative – I will keep on trying, since I think it’s the right thing to do until someone comes up with a “dumbass-proof” IT and application infrastructure.

  14. TheOtherGeoff
    June 30th, 2011 at 18:52 | #14

    You can call bullshit. I don’t take issue with it. I just realize that security is requirements, and insecurity is requirements not met. Closing that gap is a measure of quality, and including security decisions (threat/risk management) with quality management (what bugs get fixed first) is basically how business works. Or should work. I’m just saying that if you don’t play in their process (SDLC), you’ll be viewed as a pedantic outsider.

    I didn’t say push this to QA (the group); I said this is a ‘QA problem’. I should have been clearer, but from an ‘enlightened’ SW management point of view… all things bad are qa (lower case) problems not addressed before release.

    I’m actually in violent agreement with you (read my comment again). Most developers do give a shit, but in the absence of security requirements (they are under pressure to deliver to requirements, in most well-managed SDLC orgs), there is no requirements owner. If you’re saying that developers own the security requirements, that’s fine and good, but most are not allowed to (separation of duties, GLBA/SOX sorts of risk controls, independent execution and testing, that sort of thing). I’m saying coders are forced by management (a people problem) to deliver key function first, then fix the bugs. If the bugs are security bugs… who owns them? If there is no requirements owner, there is no requirements/defect prioritization at the management meeting asking “what do we code first?” That is my reality… and yes, I have a real job as part of a team ‘securing’ some $300 billion under management and a million bank accounts, where the CTO will occasionally send me and the developer a four-line note: “Security requirement understood; risk understood; requirement waived to make launch deadline… I own the risk.”

    I understand reality… I live it every day too. I’m just saying it’s better for us to give coders requirements than to try to teach them security. The good ones will get it anyway, and the bad ones are a sinkhole of teaching effort; best to let the team get the requirements, and those coders who can’t code securely will be outed by the team, since everyone understands which coders make the team look bad on the big (defect-tracking) board.

    Just sayin’.
