Today’s post is all about Control 6 of the CSIS 20 Critical Security Controls – Application Software Security (the last post pertained to Control 5).  Here I’ll explore the 50 requirements I’ve parsed out of the control (I used the PDF version, but the online version is here) and offer my thoughts on what I’ve found [*].

Key Takeaways

  1. Implement a Software Development Lifecycle (SDLC).  This might be better posed as a Software Procurement Lifecycle, because the Control pertains to operating application-level software as well as acquiring (i.e. buying or building) application-level software.
  2. Add security attributes to your SDLC. It’s not enough to have an SDLC.  You need to ensure that your SDLC is 1) performing the right activities, with 2) qualified personnel.  You can do static code analysis as part of your automated build/release process, but that’s not a substitute for eyes-on code reviews.  Further, eyes-on code reviews by unknowledgeable personnel won’t catch what you want them to.
  3. Enlist QA to test for basic application security holes. How many organizations out there have QA personnel who have been trained to attack application-level software?  How many can analyze your SSL implementation?  Your app-specific PKI deployment?  Your input sanitization?  Train up your QA personnel to handle the “basics” of security testing your applications and, if the organization is so inclined, get a team of security assessors to do the heavy lifting.  I suspect most organizations don’t need heavy lifting, but do need trained personnel to handle the basics.
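To make those “basics” concrete, here is a minimal sketch (in Python, with hypothetical payloads and helper names) of the kind of check a trained QA engineer could automate: probing whether an application echoes attack input back unescaped, which is the telltale sign of a reflected XSS hole.

```python
import html

# A few classic reflected-XSS probes a QA tester might submit as input.
XSS_PROBES = [
    "<script>alert(1)</script>",
    '"><img src=x onerror=alert(1)>',
]

def reflects_unescaped(response_body: str, probe: str) -> bool:
    """True if the raw probe appears verbatim (unescaped) in the response."""
    return probe in response_body

# Simulated responses: one that escapes user input, one that does not.
safe_response = "You searched for: " + html.escape(XSS_PROBES[0])
unsafe_response = "You searched for: " + XSS_PROBES[0]

assert not reflects_unescaped(safe_response, XSS_PROBES[0])
assert reflects_unescaped(unsafe_response, XSS_PROBES[0])
```

A real test would submit the probes through the application’s actual input fields, but the pass/fail logic is this simple, and it’s well within reach of any QA team.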

Potential Areas Of Improvement

  • Split and refocus. There are two facets to this Control, and I think they would be better called out separately.  The first facet is the operational perspective that applies to all application-level software, not just the software you may have developed in-house.  The second facet pertains to how you develop in-house software.  If you create a Software Procurement Lifecycle, you can use that against your in-house development and then have a Software Development Lifecycle to support in-house development.
  • Yet again, define terms. After reading through the control requirements a couple of times and really thinking about what they mean, I still don’t understand one term: third-party-procured.  If anyone has a clue, I’m all ears.
  • Provide examples on using standards, or pointers to examples. Both CWE and CAPEC were referenced in the requirements as being beneficial for development and test-tracking.  The CWE use case is straightforward – we have a good taxonomy in the CWE dictionary at MITRE, and more shops should use it.  Using CAPEC for test-tracking probably isn’t as straightforward to most who will be held to this Control, so an example would be useful.  Additionally, and perhaps most importantly: if CAPEC is good to use for test-tracking, is there a tool that leverages CAPEC?  Development shops aren’t likely to track things in raw XML.

Requesting Feedback On

  • Requirement 1: What Web Application Firewalls do you find useful, if any, and why?
  • Requirement 6: Non-Web Application Firewalls – know of any?  If so, are they decent?  Or is this just a pitch at categorizing the application layer as something new?

Requirement Listing

  1. Description: Protect web applications by deploying web application firewalls (WAFs) that inspect all traffic flowing to the web application for common web application attacks.
    • Notes: This is a prescription for a specific security tool category. Mileage may vary between vendors – does anyone out there have advice as to what does and does not work? Remember to add your solution to your asset inventory and configuration assessment tasks.
  2. Description: Web application protection must include cross-site scripting attacks.
    • Notes: If you’re creating a checklist to meet the requirement above, then you should add this to the list. Of course, XSS is something that’s critical to look for. A mitigation to this requirement, though, might exist through securing your SDLC.
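As a sketch of what that SDLC-side mitigation looks like in practice: encode user-supplied data before it reaches the page, so markup a user supplied is displayed rather than executed. In Python, for example (the function name is illustrative):

```python
import html

def render_comment(user_comment: str) -> str:
    # Encode user input before embedding it in HTML so that any
    # markup the user supplied is displayed, not executed.
    return "<p>" + html.escape(user_comment) + "</p>"

# The <script> tag comes out as inert &lt;script&gt;... text.
print(render_comment("<script>alert('xss')</script>"))
```

Most web frameworks perform this encoding in their template layers; the SDLC’s job is to make sure nothing bypasses it.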
  3. Description: Web application protection must include SQL injection attacks.
    • Notes: If you’re creating a checklist to meet the requirement above, then you should add this to the list. Of course, SQL injection is something that’s critical to look for. A mitigation to this requirement, though, might exist through securing your SDLC.
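The canonical SDLC-side mitigation here is parameterized queries. A minimal sketch using Python’s built-in sqlite3 module (table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # The placeholder (?) keeps user input as data, never as SQL,
    # so inputs like "' OR '1'='1" cannot alter the query.
    cur = conn.execute("SELECT name, role FROM users WHERE name = ?", (name,))
    return cur.fetchall()

assert find_user("alice") == [("alice", "admin")]
assert find_user("' OR '1'='1") == []  # injection attempt matches nothing
```

A WAF can screen for injection patterns at the perimeter, but parameterization removes the vulnerability class at the source.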
  4. Description: Web application protection must include command injection attacks.
    • Notes: If you’re creating a checklist to meet the requirement above, then you should add this to the list. Some examples here would be nice (again, post-RSA, I’m limited on research time). A mitigation to this requirement, though, might exist through securing your SDLC.
  5. Description: Web application protection must include directory traversal attacks.
    • Notes: If you’re creating a checklist to meet the requirement above, then you should add this to the list. A mitigation to this requirement, though, might exist through securing your SDLC.
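For directory traversal, the SDLC-side mitigation is to normalize any user-influenced path and verify it stays inside the intended base directory. A minimal sketch (POSIX-style paths; the base directory is hypothetical):

```python
import posixpath

UPLOAD_ROOT = "/var/www/uploads"  # hypothetical base directory

def safe_join(base: str, user_path: str) -> str:
    # Normalize the combined path and refuse anything that escapes base,
    # e.g. "../../etc/passwd".
    candidate = posixpath.normpath(posixpath.join(base, user_path))
    if candidate != base and not candidate.startswith(base + "/"):
        raise ValueError("directory traversal attempt: " + user_path)
    return candidate

assert safe_join(UPLOAD_ROOT, "report.pdf") == "/var/www/uploads/report.pdf"
try:
    safe_join(UPLOAD_ROOT, "../../etc/passwd")
    raise AssertionError("traversal was not blocked")
except ValueError:
    pass  # blocked, as expected
```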
  6. Description: For applications that are not web-based, specific application firewalls should be deployed if such tools are available for the given application type.
    • Notes: Does anyone actually have a list of non-Web-based application firewalls? There are probably some out there, but given that this post is happening the week after RSA, I – quite honestly – haven’t had the time to do this bit of research.
  7. Description: If the traffic is encrypted, the device should either sit behind the encryption or be capable of decrypting the traffic prior to analysis.
    • Notes: I’m going to assume that this applies to both Web and non-Web applications. The most secure architectures will be unable to decrypt everything, but that should still be OK. The goal here is to find the middle ground appropriate for the application and mission that balances security with usability/function.
  8. Description: If neither option is appropriate, a host-based web application firewall should be deployed.
    • Notes: In other words, the Web/non-Web application firewall should not reside on the host, but may if there is no other solution.
  9. Description: At a minimum, explicit error checking should be done for all input.
    • Notes: This should be a common-sense activity by now, but input validation is still a problem today. I think that is, in part, because it’s not necessarily straightforward and reference architectures are not widely available. I am aware of Shiro for the Java world, and I’m sure an analogous feature set exists in the .Net world as well. I’ve noticed that Xcode provides easy methods of performing input validation.
  10. Description: Whenever a variable is created in source code, the size and type should be determined.
    • Notes: A guard against overflows. It’s interesting to me that buffer overflows and integer overflows exist today, but they do. This is something you should include in your SDLC – perform the checks using code reviews, and static and runtime analysis.
  11. Description: When input is provided by the user, it should be verified that it does not exceed the size or the data type of the memory location in which it is stored or moved in the future.
    • Notes: Again, it’s interesting to know that these sorts of things are problematic, but they’re here to stay – we can’t get rid of them entirely, given the languages we use and the fact that humans are, well, human – we make mistakes. The best mitigation against this is a formalized SDLC with good code review and automated analysis.
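As a sketch of the explicit size-and-type check this requirement asks for (the field name and byte limit are illustrative):

```python
MAX_USERNAME_BYTES = 32  # the "size of the memory location" we allot

def store_username(record: dict, value) -> None:
    # Explicit type check: only strings are acceptable here.
    if not isinstance(value, str):
        raise TypeError("username must be a string")
    # Explicit size check against the allotted storage before storing.
    if len(value.encode("utf-8")) > MAX_USERNAME_BYTES:
        raise ValueError("username exceeds %d bytes" % MAX_USERNAME_BYTES)
    record["username"] = value

record = {}
store_username(record, "alice")
assert record["username"] == "alice"
```

In C or C++ the same idea means checking lengths before every copy into a fixed-size buffer; in managed languages, it means enforcing bounds at the trust boundary rather than assuming the runtime will save you.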
  12. Description: Test in-house-developed Web applications for common security weaknesses using automated remote web application scanners prior to deployment.
    • Notes: If I’m not mistaken, this requirement is making an argument for QA to learn a thing or two about penetration testing. How many of your QA people have, say, a SANS pen-test certification? Probably few, if any. Invest in this, and you’ll be well on your way to making better software over time, which is important to everyone – including your organization.
  13. Description: Test third-party-procured Web applications for common security weaknesses using automated remote web application scanners prior to deployment.
    • Notes: If you’re not inclined to do this – say you don’t have in-house QA that would live up to the previous requirement – then you can always ask to talk to the vendor’s development manager about their SDLC. Ask point blank how they perform security testing, whether they do code reviews, how often they find and fix vulnerabilities before they release, and so on.
  14. Description: Test in-house-developed Web applications for common security weaknesses using automated remote web application scanners whenever updates are made to the application.
    • Notes: This is speaking directly to regression testing for security issues. There’s really no difference between this and traditional regression testing; in fact, the theory is the same. If you update your system, please ensure that you’ve not introduced any new issues.
  15. Description: Test third-party-procured Web applications for common security weaknesses using automated remote web application scanners whenever updates are made to the application.
    • Notes: Again, if you’re not inclined to do this or don’t have in-house QA that can do it for you, get on the phone with your vendor and grill them about how they do regression and whether that regression includes security-specific tests. If it doesn’t always include security-specific tests, find out why and what the threshold for inclusion is.
  16. Description: Test in-house-developed Web applications for common security weaknesses using automated remote web application scanners on a regular recurring basis.
    • Notes: New things are being done all the time in the attacker space. Provided you can leverage automated solutions, you will be well served to use a scanner that is periodically updated with new attacks. Otherwise, performing this scan and reviewing outputs is wasteful.
  17. Description: Test third-party-procured Web applications for common security weaknesses using automated remote web application scanners on a regular recurring basis.
    • Notes: I’m honestly not sure what this means. Third-party-procured Web applications could be cloud providers, Web applications that you use in-house that were procured by a third party, or Web applications that you’ve inherited from a merger or acquisition. The bottom line is that if you’ve got Web applications upon which your business processes rely, then you should test them appropriately.
  18. Description: Organizations should understand how their applications behave under denial of service attacks.
    • Notes: Do you have a good understanding of how your Web applications can fall prey to DoS attacks? Is the application Internet-facing or internal-only? This is really a requirement that says: Test your service for load and have an executable plan in place for when something goes wrong.
  19. Description: Organizations should understand how their applications behave under resource exhaustion attacks.
    • Notes: Again, this is a requirement that says: Test your service for load/resource constraints and have an executable plan in place for when something goes wrong.
  20. Description: System error messages should not be displayed to end-users (output sanitization).
    • Notes: A simple data leakage mitigation, which can be troublesome for your support group. If you need to meet in the middle, create an error code mapping from the platform/internals to something you can share with an end-user who will likely call support at some point anyway.
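That error-code mapping can be as simple as a lookup table. A sketch (the exception names and codes are hypothetical):

```python
# Hypothetical mapping from internal exception types to opaque,
# user-safe error codes that support staff can translate back.
ERROR_CODES = {
    "DatabaseError": "E-1001",
    "TimeoutError": "E-1002",
    "PermissionError": "E-1003",
}

def user_facing_error(exc: Exception) -> str:
    # Log the full details internally (not shown); show the end-user
    # only an opaque code -- never a stack trace or hostnames.
    code = ERROR_CODES.get(type(exc).__name__, "E-0000")
    return "Something went wrong (code %s). Please contact support." % code

# Internal details like the database hostname never reach the user.
print(user_facing_error(TimeoutError("db.internal.example:5432 timed out")))
```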
  21. Description: Maintain separate environments for production and nonproduction systems.
    • Notes: I find myself wondering how this works with DevOps – is there such a thing as nonproduction systems in most cases? Maybe. Certainly for critical systems you’re going to want to have some non-production system you can use to test. Of course, this, too, is riddled with potential problems, especially if you have a system that processes sensitive information – your mock data needs to resemble the real-world without needing to be real data.
  22. Description: Developers should not typically have unmonitored access to production environments.
    • Notes: I again find myself wondering how this works in DevOps – really need to get some time to learn more about this – but the idea is that your developers shouldn’t have access to production that isn’t monitored, and the converse is true as well. Don’t let your ops people have access to the source. It’s separation of duties, designed to force collusion to game the system.
  23. Description: Test in-house-developed web and other application software for coding errors and malware insertion prior to deployment using automated static code analysis software.
    • Notes: A lot of software development shops leverage open source software, which is a point I want to get across here. Do not assume the open source community has found everything it needs to find – open source ideology aside, if you have the source and a static analysis tool, scan the source! Your SDLC should inform the scanner about what to look for, so you should probably have a good SDLC in place first. And don’t just take the scanner and go to town – understand what it is you’re doing before you make matters worse.
  24. Description: Test third-party-procured web and other application software for coding errors and malware insertion prior to deployment using automated static code analysis software.
    • Notes: Obviously, I broke one sentence into several requirements to be precise. Everything I said for the previous requirement applies to this one and explicitly covers open source software (but there’s that third-party-procured term again).
  25. Description: Such testing must include back doors.
    • Notes: This requirement is related to the previous two. If you’re looking at scanners, determine whether they can detect back doors. If they can’t, or you’re told they can be configured to do what you want, you may be in need of a security professional who knows what to look for. Consider a consultant if this is not a full-time gig.
  26. Description: If source code is not available, these organizations should test compiled code using static binary analysis tools.
    • Notes: While not necessarily as good as source scanning, binary scanning can be fruitful and is, perhaps, better than nothing. Again, you’re going to need to get in the weeds (or hire someone who can) to get the job done appropriately.
  27. Description: In particular, input validation and output encoding routines of application software should be carefully reviewed and tested.
    • Notes: To me, this is a restatement of previous requirements, though it might speak more to code review. Validate your input, sanitize your output, use an SDLC with checks and balances.
  28. Description: For applications that rely on a database, organizations should conduct a configuration review of both the operating system housing the database and the database software itself, checking settings to ensure that the database system has been hardened using standard hardening templates.
    • Notes: This requirement has been common sense for some time and probably not needed here. If you’re performing configuration management (i.e. Control 3), then you’re already ensuring that the OS and database are appropriately configured.
  29. Description: All systems that are part of critical business processes should also be tested.
    • Notes: This one seems oddly out of place. But, to parse through it anyway: you’re going to need to first identify your critical business processes and prioritize them. In priority order, you can associate assets with your critical business processes, and then determine the application assets you need to test. It just seems that this would be covered by other controls in some way.
  30. Description: Ensure that all software development personnel receive training in writing secure code for their specific development environment.
    • Notes: This is absolutely critical. It is easy to create applications these days – most of the time it’s a matter of wiring components and extending frameworks. But if your developers don’t understand how they can create insecure code, they’re going to create it. Create a culture of security-mindedness around your SDLC and give them the training they need. If you’re strapped for resources, try a train-the-trainer approach.
  31. Description: Sample scripts, libraries, components, compilers, or any other unnecessary code that is not being used by an application should be uninstalled or removed from the system.
    • Notes: This is really basic configuration management, so I’m not sure that it really belongs here. This requirement seems redundant to that which we’ve already covered in Control 3 – know the purpose of your assets and get rid of anything they don’t need to support business process.
  32. Description: Organizations can also use CWE to determine which types of weaknesses they are most interested in addressing and removing.
    • Notes: This and the following requirement are good nods to effective use of existing standards. CWE is a good taxonomy to use if you’re developing software, and I recommend taking a look at it. At the same time, I have no idea how many SDLC tools speak CWE. Hopefully more than a few.
  33. Description: When evaluating the effectiveness of testing for these weaknesses, MITRE’s Common Attack Pattern Enumeration and Classification can be used to organize and record the breadth of the testing for the CWEs and to enable testers to think like attackers in their development of test cases.
    • Notes: This (and the previous) requirement is a good example of how standards can be leveraged by any development organization. CAPEC will, at first, appear daunting. Take it slow, get your QA folks to look at it, and see just how effective it can be to track what you’ve done in the past and how effective the testing has been.
  34. Description: The system must be capable of detecting and blocking an application-level software attack.
    • Notes: This is not a good metric. By this metric, you could (though not reasonably) have a WAF in place that detects but one attack and still pass. The letter of the law matters from time to time, and more often than not with auditors, from what I understand. If you’re looking for a better type of metric, consider thresholds of detection over time. For example, your system should detect any XSS, SQLi, or command injection attack immediately, and should detect new attacks within 30 days of disclosure (or something like that).
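That threshold-over-time idea is easy to operationalize: track each attack technique’s public disclosure date against the date your system could first detect it, and flag anything outside the window. A sketch (the 30-day SLA is the example figure from above, not anything mandated by the Control):

```python
from datetime import date, timedelta

# Example SLA: detection capability must exist within 30 days of disclosure.
DETECTION_SLA = timedelta(days=30)

def within_sla(disclosed: date, detected: date,
               sla: timedelta = DETECTION_SLA) -> bool:
    """True if detection capability arrived within the SLA window."""
    return timedelta(0) <= (detected - disclosed) <= sla

assert within_sla(date(2011, 3, 1), date(2011, 3, 20))      # 19 days: pass
assert not within_sla(date(2011, 3, 1), date(2011, 5, 1))   # 61 days: fail
```

Reported monthly, this tells you something actionable about your WAF vendor and your own tuning cadence, which "can it block an attack" never will.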
  35. Description: The system must generate an alert or send e-mail to enterprise administrative personnel within 24 hours of detection and blocking.
    • Notes: Your WAF and other detection mechanisms need to alert the appropriate personnel, which means that you’re going to have to tell the system which personnel are appropriate (this should, by now, be familiar). The best way to do this is to have an up-to-date user directory with appropriately defined roles. I’d rather do that work once, wouldn’t you?
  36. Description: All Internet-accessible web applications must be scanned on a weekly or daily basis.
    • Notes: You might, at first, think that scanning this frequently and in the absence of any system updates is out of place. It’s not. The idea here is that attackers will leverage your system if they can, and you have some responsibility to detect when this happens. Are you going to want to perform static code analysis? Probably not – you shouldn’t have source on the production server anyway. You’re going to want to scan for things like XSS regularly, and in as automated and non-intrusive a manner as possible.
  37. Description: All Internet-accessible web application scans must alert or send an e-mail to administrative personnel within 24 hours of completing a scan.
    • Notes: Every scan you run needs to send an alert. This does two things: 1) it ensures you know when something fails, and 2) it ensures you know the scan is happening. I think there are probably better, less communication-intense methods of knowing that a routine scan is running (check your configurations and validate with log reviews, for example).
  38. Description: If a scan cannot be completed successfully, the system must alert or send e-mail to administrative personnel within one hour indicating that the scan has been unsuccessful.
    • Notes: This seems to be common sense. If the scanner is broken, you want to know. Treat that as though it’s an incident, by the way, because it – strictly speaking – is an incident. What you need to do is determine whether it’s benign or hostile.
  39. Description: Every 24 hours after that point, the system must alert or send e-mail about the status of uncompleted scans, until normal scanning resumes.
    • Notes: This is the nag requirement, and it’s a good one. Ensure that your system is configured to nag you to correct the problems that need to be corrected – and this is an important monitoring control.
  40. Description: Additionally, all high-risk vulnerabilities in Internet-accessible web applications identified by web application vulnerability scanners must be mitigated (by either fixing the flaw or implementing a compensating control) within 15 days of discovery of the flaw.
    • Notes: Here you’re going to be required to keep track of what you’re doing to address discovered vulnerabilities. You can probably use the same process you do for Vulnerability Assessment (Control 4). Depending on your solution, 15 days might be tight, so it’s a good thing if you define and document this process ahead of time. Also, keep track of who needs to do what so you can tally costs of mitigation over time. This information can contribute to business decisions on new, similar application implementations as part of maintenance costs.
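Tracking the 15-day window is a small amount of bookkeeping. A sketch of the deadline arithmetic your ticketing or vulnerability-management process needs to enforce:

```python
from datetime import date, timedelta

REMEDIATION_WINDOW = timedelta(days=15)  # per the requirement

def remediation_deadline(discovered: date) -> date:
    """Date by which a high-risk finding must be fixed or compensated."""
    return discovered + REMEDIATION_WINDOW

def overdue(discovered: date, today: date) -> bool:
    # True if the finding has blown past its 15-day window.
    return today > remediation_deadline(discovered)

assert remediation_deadline(date(2011, 3, 1)) == date(2011, 3, 16)
assert overdue(date(2011, 3, 1), date(2011, 3, 20))
assert not overdue(date(2011, 3, 1), date(2011, 3, 10))
```

The same calculation serves requirements 41 and 42 – only the source of the finding (scanner, static analysis, database configuration review) differs.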
  41. Description: Additionally, all high-risk vulnerabilities in Internet-accessible web applications identified by static analysis tools must be mitigated (by either fixing the flaw or implementing a compensating control) within 15 days of discovery of the flaw.
    • Notes: See my comment for the previous requirement.
  42. Description: Additionally, all high-risk vulnerabilities in Internet-accessible web applications identified by automated database configuration review tools must be mitigated (by either fixing the flaw or implementing a compensating control) within 15 days of discovery of the flaw.
    • Notes: See my comment for the previous requirement.
  43. Description: To evaluate the implementation of Control 6 on a monthly basis, an evaluation team must use a web application vulnerability scanner to test for each type of flaw identified in the regularly updated list of the “25 Most Dangerous Programming Errors” by MITRE and the SANS Institute.
    • Notes: Don’t necessarily limit yourself to the list provided by MITRE and SANS. It’s a good list, but understand what’s appropriate for your organization before blindly using the list. Note also that this requirement speaks of a pen testing team. This is another cost of maintaining the application over time.
  44. Description: The scanner must be configured to assess all of the organization’s Internet-accessible web applications to identify such errors.
    • Notes: None.
  45. Description: The evaluation team must verify that the scan is detected within 24 hours and that an alert is generated.
    • Notes: None.
  46. Description: In addition to the web application vulnerability scanner, the evaluation team must also run static code analysis tools and database configuration review tools against Internet-accessible applications to identify security flaws on a monthly basis.
    • Notes: None.
  47. Description: The evaluation team must verify that all high-risk vulnerabilities identified by the automated vulnerability scanning tools or static code analysis tools have been remediated or addressed through a compensating control (such as a web application firewall) within 15 days of discovery.
    • Notes: None.
  48. Description: The evaluation team must verify that application vulnerability scanning tools have successfully completed their regular scans for the previous 30 cycles of scanning by reviewing archived alerts and reports to ensure that the scan was completed.
    • Notes: None.
  49. Description: If a scan was not completed successfully, the system must alert or send e-mail to enterprise administrative personnel indicating what happened.
    • Notes: None.
  50. Description: If a scan could not be completed in that timeframe, the evaluation team must verify that an alert or e-mail was generated indicating that the scan did not finish.
    • Notes: None.

Footnotes

A method and format explanation can be found at the beginning of Control 1.

Editor’s Note: This article was written by a former contributor to The State of Security who now resides with a non-profit group with an excellent reputation. We thank him for his opinions and perspective, and wish we could acknowledge him directly for his outstanding efforts on this series.


