Archive for the ‘Posts’ Category

Speaking at AppSec USA 2011!


I’ll be speaking at AppSec USA 2011, the national OWASP conference, tomorrow, 9/22/2011! If you’re here in Minneapolis, come down to the convention center and you can see me and many other more illustrious speakers. I’ll be discussing Behavioral Security Modeling, the first tool to be developed for Behavioral Information Security.

AppSec USA 2011

Written by JohnB

September 21st, 2011 at 6:30 pm

Posted in Posts

Introduction to Behavioral Information Security Presentation



I recently gave an early version of my presentation on Behavioral Information Security to a local security group here in Minneapolis. By request, I’m publishing a PDF version of those slides here. Although I have been delayed, I will be publishing parts 2 and 3 of the Behavioral Information Security introduction, which will provide additional detail not found in the slides.

Behavioral Information Security: An Introduction

Written by JohnB

June 28th, 2011 at 3:06 am

Posted in Posts

CVSS, Patches, and Vulnerability Management


Jack Jones posted a critique of CVSS to his RiskAnalys.is blog this morning. I’m a big fan of FAIR, and his criticism of CVSS is valid, but I just don’t see how even a “fixed” version of CVSS will ever be practically useful.

The CVSS scoring methodology creates a score to measure the “risk” of a vulnerability, presumably to help people or automated systems prioritize a response, usually installing a patch. Jack, who has made measuring risk his career, is well qualified to assess how well CVSS works. He rightly points out problems with the CVSS calculation, including its use of arbitrary weighted values and its math on ordinal scales, and suggests that FAIR might provide an alternative that fixes these problems. Even if the RMI delivers a CVSS alternative, I’m not convinced that a vulnerability scoring tool that accurately measures risk has practical value.

With regard to frequency/probability of loss, CVSS focuses on the likelihood of attacker success from a couple of different angles, but never addresses the frequency/likelihood of an attack occurring in the first place.

Maybe I’ve just led a sheltered life, but I’ve never found CVSS terribly useful. Having been both a Windows administrator and the guy responsible for reviewing the latest vulnerabilities and patches, I can say that for most IT staff, vulnerability management can be broken down into two basic steps:

  1. Wait for the monthly Microsoft patch release
  2. Deploy the patches

CVSS doesn’t even matter in the most basic case. Small companies without a dedicated IT staff don’t need CVSS, since their vendors will tell them which patches are important. If they’re good, they’ll deploy Microsoft patches automatically, and if they’re really good, they’ll even have reporting on how well systems are patched. Even so, the majority will still be vulnerable, since the non-OS vendors’ applications (Adobe Reader and Flash) won’t be up to date, partly because they can’t take advantage of Microsoft’s (or Apple’s) built-in updating mechanism. (Although Apple is changing this with the Mac App Store.)

For those companies fortunate enough to have security staff dedicated to running a vulnerability management program, CVSS still doesn’t help. The more advanced version of VM really breaks down to four steps:

  1. Wait for the monthly Microsoft (or other vendor’s) patch release
  2. Determine how quickly the patches need to be deployed
  3. Deploy the patches
  4. Scan your systems to find systems that aren’t patched

CVSS might be able to help with step 2, but in practice it doesn’t matter. At most, there are really four different speeds to deploy patches: Emergency, Accelerated, Normal, and Eventually. Emergency deployments are typically in response to an attack; as in, “Drop everything and put everybody on it, SQL Slammer has taken down our network!” No help from CVSS here – you’ll know when it’s an emergency. Which leaves us with Accelerated (let’s put forth an extra effort to get the patch deployed faster), Normal (deploy on the normal schedule), and Eventually (security doesn’t care when this patch gets deployed). CVSS in theory helps decide which of these three to pick, but in my opinion, it fails to answer the key questions that are most helpful in deciding how hard to push on the gas.

There’s a cost associated with each patch we track within our VM system. Each patch means we spend more time on reviewing, deploying, scanning, and re-deploying. To manage this cost effectively, we need to remember why we’re managing vulnerabilities in the first place: the bad guys are trying to break in. For the majority of internal systems (desktops and servers), all we really care about is whether or not the bad guys, most often represented by malware, can get onto the system. Attacks that only matter after you’re already in don’t really need to be fixed, since once the enemy has a foothold, it’s pretty much game over; there are too many ways, especially on Windows systems, to take over, and it’s really expensive to fix. Information leakage vulnerabilities do matter, but again, if you’ve already got an attacker on your internal network, you’ve got bigger problems. Focusing on what’s actually exploited reduces vulnerabilities to two classes: the attacks the bad guys use to break into systems (unauthenticated network attacks and client-side desktop/browser attacks), and everything else. Again, CVSS doesn’t help here. The cost of patching is high enough that “everything else” should be automatically relegated to deploy Eventually (don’t care), leaving a decision on whether or not an Accelerated deployment is called for.

Factoring risk into the decision of whether or not to push a patch faster than normal is a good idea, but CVSS leaves out the single most important factor in judging the risk: the likelihood that an attacker will exploit the vulnerability. This omission is excusable, since predicting how likely an attack is to happen is an educated guess at best. Prediction is hard enough that it’s best to use a simple rule of thumb: if there are exploits in the wild – the bad guys are actively exploiting the vulnerability – then do an Accelerated deployment. Otherwise, go with Normal.
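To make the rule of thumb concrete, here’s a minimal sketch of the triage logic in Python (the `Vuln` record and its fields are hypothetical, not from any real VM product):

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    breaks_in: bool          # unauthenticated network attack or client-side
                             # desktop/browser attack? (how the bad guys
                             # actually get onto a system)
    exploited_in_wild: bool  # are the bad guys actively exploiting it?

def deployment_speed(v: Vuln) -> str:
    """Pick a patch deployment speed using the simple rules above.
    Emergency isn't decided here - you'll know when it's an emergency."""
    if not v.breaks_in:
        return "Eventually"   # "everything else" is automatically relegated
    if v.exploited_in_wild:
        return "Accelerated"  # exploits in the wild: push on the gas
    return "Normal"           # deploy on the regular schedule

# A client-side browser flaw with public exploits gets Accelerated.
print(deployment_speed(Vuln(breaks_in=True, exploited_in_wild=True)))
```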

I may be missing other use cases for CVSS, or missing the point entirely, but for what seems to be its main use case, vulnerability management, CVSS fails to deliver practical value. Instead of building complicated scoring systems, simple rules based on knowledge of the attackers nicely solve the patch management prioritization problem.

Written by JohnB

February 11th, 2011 at 6:51 pm

Posted in Posts

Behavioral Information Security Part 1: The Failure of Contemporary InfoSec


The different areas of knowledge and expertise within Information Security can fairly be divided into four categories: Physical, Technical, Policy, and People.

Four Realms of Security

Physical security is very well understood: it has been part of human knowledge for as long as there have been people, we’re generally good at it, and it hasn’t fundamentally changed since the days when computer security meant guards, gates, and guns.

The story is similar for Technical security – firewalls, anti-virus, access controls, tokens, etc. have been around since the advent of timesharing systems created the need for the password. We’re not as good at technical security as we are at physical security, but we’re pretty good. There’s always room for improvement, of course, but this is clearly an area of strength for the profession – just look at the skills listed in a typical security job posting: Firewall, IDS/IPS, Network Administration, Active Directory, Anti-Virus, etc.

On Policy, which includes written policy, governance, program organization, etc., we are weaker; most companies now have a security policy, although not all do, and as we know, those policies aren’t always well written or well implemented. Still, we do have established policy frameworks, like the ISO 27000 series, and other tools to address problems of policy and governance.

For the final category, People, we’ve largely failed as a profession. Historically, we’ve tried to force people to adapt to the technology we built, and then blamed the user when they fail to use it properly – the talking point is, “people are stupid, and you can’t fix stupid.” Security Awareness training, one of the few tools we have to address people problems, has been and continues to be poorly executed. At best, Awareness explains security rules well enough that we can fire people when they break them; at worst, it is a series of posters asking people to “do good things,” with no evidence that it is even effective. Although we have started to improve, our understanding of human/computer interaction is poor, and we do little, if anything, to understand the motivation and behavior of external attackers and internal personnel alike.

The failure of InfoSec to address people is understandable, considering the origins of our profession. Before Information Security, there was computer security – the interaction of people, computers, and information. IT professionals became IT Security professionals, a path that is still typical for many Information Security professionals today. However, by changing from IT Security to Information Security, we expanded our purview to the interaction of people and information, moving beyond the world of Physical and Technical controls.

Computers to People Venn Diagram

Being computer professionals first, we kept our focus on what was important in computer security: the information and the technology protecting it. The failures of security to address issues in the people realm are largely due to this information-centric model. A new, people-centric model is needed to develop the tools we need.

In the next article, I’ll introduce the Behavioral Information Security model, which places people at the center, a philosophical shift in thinking that can help us tackle the “people problem.”

Written by JohnB

January 27th, 2011 at 1:53 am

Posted in Posts

Why Password Rotation is Bad


Nearly everyone in the US workforce who uses a desktop or laptop computer as part of their job has now had to face the security requirement to change their password every 90 days, or sometimes more often. Government and financial institutions have generally required password rotation for as long as there have been computer terminals, since it was believed to improve security. Other organizations typically started requiring people to change their passwords in response to external auditors’ demands, originally driven by the Sarbanes-Oxley Act (SOX) and, more recently, the Payment Card Industry Data Security Standard (PCI-DSS). Financial auditors required this because it was a generally accepted “best practice,” and PCI specifically requires it (DSS 1.2, requirement 8.5.9: “Change user passwords at least every 90 days”). People frequently complained about the new rule, but rarely tried to challenge the assertion that it was “good security.”

Most security practitioners I’ve come across don’t question the value of changing passwords, but a few experienced professionals do quietly challenge the folk wisdom. Gene Spafford, who analyzed the very first Internet worm (in 1988), is one of those professionals, and he challenges the notion that password rotation improves security in a 2006 blog post. There are several ways passwords can be compromised today:

  • Phishing; which is just another way of saying that someone tricks you into providing them your password.
  • Sniffing; eavesdropping, either by monitoring passwords as they are sent over a network (at the local coffee shop), or by installing software on a computer to monitor keystrokes (leveraging viruses or security flaws).
  • Stealing; finding the password somewhere, which could be written down, stored in a Word document, or even in (computer) memory.
  • Sharing; when someone voluntarily gives you their password.
  • Guessing; either by being clever and deducing someone’s password (maybe it’s the name & birthday of their only child), or by systematically trying passwords until all possible combinations are exhausted. Guessing at random generally doesn’t work very well, since most password systems are designed to limit successive failures to prevent this type of attack. My iPhone, and most Blackberries, will automatically erase after 10 failed attempts.
  • Password cracking; on well-designed systems, passwords are stored hashed, and while you can’t figure out a password from a hash, if you can get a copy of someone’s hashed password, you can guess as many times as you want, using your own computer to calculate hashes from potential passwords (a minimal sketch of this loop follows the list). Finding the right password is only a matter of time.
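As a toy illustration of why offline cracking works – assuming an unsalted SHA-256 password store, which is simpler than what real systems use – the attacker’s guessing loop looks something like this in Python:

```python
import hashlib

def stored_hash(password: str) -> str:
    """How a toy, unsalted system might store a password: a SHA-256 hash.
    With a copy of the hash, an attacker can guess offline, as fast as
    their hardware allows, with no lockout to stop them."""
    return hashlib.sha256(password.encode()).hexdigest()

stolen = stored_hash("Password1")  # stand-in for a hash stolen from a system

# The cracking loop: hash candidate passwords until one matches.
for guess in ["letmein", "123456", "qwerty", "Password1"]:
    if stored_hash(guess) == stolen:
        print(f"cracked: {guess}")
        break
```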

Changing your password offers no protection against any of these threats except password cracking – in every other case, someone else has your password. Changing passwords will help protect against cracking, because if you change your password before it can be cracked, it will still be safe. This suggests a possible origin of this folk wisdom, as explained by Gene: “As best as I can find, some DoD contractors did some back-of-the-envelope calculation about how long it would take to run through all the possible passwords using their mainframe, and the result was several months. So, they (somewhat reasonably) set a password change period of 1 month as a means to defeat systematic cracking attempts.”

Rotating passwords every month made sense 30 years ago, when trying to break a password took several months, but that’s no longer true. Modern tools can crack a password in much less time. For passwords that meet typical requirements, say, 8 characters long, with at least one upper case letter, lower case letter, and a number, there are 218 trillion combinations. That seems like a lot, but with a $500 graphics card and a free cracking tool, a modern desktop can check over 600 million passwords per second, covering all possible combinations in just over 4 days. More sophisticated attacks that take advantage of weaknesses in how people choose passwords, or how some systems store passwords, can crack many passwords in seconds.
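The arithmetic, worked out in Python:

```python
# 8 characters drawn from upper case + lower case letters + digits:
# 26 + 26 + 10 = 62 possible symbols per character.
keyspace = 62 ** 8
print(f"{keyspace:,} combinations")  # 218,340,105,584,896 - ~218 trillion

rate = 600_000_000                   # guesses/second, the GPU rate cited above
days = keyspace / rate / 86_400      # 86,400 seconds in a day
print(f"{days:.1f} days")            # ~4.2 days to try every combination
```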

One response would be to require still longer passwords, with requirements that force them to be more random, which would increase cracking time. How long is long enough for current computing power? Today, NIST recommends a minimum encryption key length of 128 bits. To match that strength, a random password of upper & lower case letters and numbers would have to be 22 characters long; if you include all of the characters on the keyboard, you can shrink this to 20. Since people don’t do a good job of picking random passwords, getting to 128 bits is much harder if you let them pick their own: using a NIST estimate, a “strong” password would have to be over 100 characters long. Even matching a 64-bit key is hard for people; it would still require a 48-character password.
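Both calculations are easy to check; here’s a sketch in Python using the base NIST SP 800-63 (Appendix A) entropy estimate for user-chosen passwords (ignoring the appendix’s bonus bits for composition rules and dictionary checks):

```python
import math

def random_length(bits: float, symbols: int) -> int:
    """Characters of a truly random password needed to match a key of
    `bits` strength, drawn from an alphabet of `symbols` characters."""
    return math.ceil(bits / math.log2(symbols))

print(random_length(128, 62))  # 22 - upper & lower case letters and digits
print(random_length(128, 95))  # 20 - all printable keyboard characters

def user_chosen_bits(length: int) -> float:
    """NIST's estimate for user-chosen passwords: 4 bits for the first
    character, 2 bits each for characters 2-8, 1.5 bits each for 9-20,
    and 1 bit per character after that."""
    bits = 0.0
    for i in range(1, length + 1):
        bits += 4 if i == 1 else 2 if i <= 8 else 1.5 if i <= 20 else 1
    return bits

print(user_chosen_bits(48))   # 64.0 - matching a 64-bit key takes 48 characters
print(user_chosen_bits(112))  # 128.0 - matching 128 bits takes over 100
```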

My complaint with password rotation goes beyond the fact that rotating passwords offers little or no security benefit. The traditional defenses against password cracking – requiring longer passwords, more frequent rotation, and greater complexity – all make passwords harder for people to use, which can actually be harmful. Consider how people respond to stronger password requirements: memorizing passwords is a hard task for the human brain, and longer passwords, with less time to memorize them before being forced to change, only make the task harder.

To make things easier, some will employ helpful tricks, like turning a phrase into a random-looking string of letters & numbers, and others will use tricks that make passwords easier to guess. “Password1” is more than 8 characters long and has at least one upper case letter, lower case letter, and a number, so it will meet most password policies, but it is a really bad password. To cope with the requirement to change passwords, people would pick a base password and then just change the last number… “Password1, Password2, Password1, …” Adding the new requirement that new passwords can’t match the past 10 (or more) passwords just forced people to become a little more clever: “Password9, Password10, Password11, …” Over the past few years, I’ve asked people what they do when they have to change their password, and nearly everyone, including many security people, uses some method of a base password with a rotating number at the end (some are more clever and put it at the beginning). Only one non-security person picked a completely new password every time, and as far as I can tell, that was mainly because she believed it was important to keeping her job. Of course, some will give up and just write their password down. This isn’t such a bad thing if they keep it in a safe place, like their wallet, but that doesn’t always happen. Security policies can make things worse by telling people never to write down their password – they should instead encourage them to keep it safe if they must.

All of these things make passwords weaker and more vulnerable to attack. People use computers to get work done, and if the security gets in the way, they will find a way around it. Instead of forcing people to change their behavior to fit the technology, password systems could be redesigned to take advantage of human strengths and idiosyncrasies. For example, although memorizing a password is initially difficult, it becomes easier if it is used frequently over a long period of time. Over time, as repetitive tasks, like typing your password or driving your car, are mastered, they can be done without thinking. I once forgot my ATM PIN, and had to go to the cash machine and watch how I entered it on the keypad to remember it. Typing a well-known password follows a predictable rhythm, one that is unique to an individual, much like a telegraph operator’s “fist.” A system that measured the timing of keystrokes in addition to the password itself would be much more effective at identifying a person, which is the goal of the password. At first, the system would have to accept the password alone, but over time, it could learn a person’s unique password signature. Allowing for some variation and change over time would reduce failures. Sudden changes in typing speed, perhaps due to injury, would still require a password reset, but most systems are already set up to allow for that.
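As a much-simplified sketch of the idea – not Lucent’s patented method; the profile model and tolerance here are invented for illustration – a system could compare the inter-keystroke timing of a login attempt against an average learned from earlier successful logins:

```python
from statistics import mean

def timing_matches(profile, attempt, tolerance=0.25):
    """Accept the attempt if each interval between keystrokes (in seconds)
    is within `tolerance` (as a fraction) of the average learned from the
    profile. A real system would model per-interval variance and adapt the
    profile over time; this only checks against the mean."""
    avg = [mean(samples) for samples in zip(*profile)]
    if len(attempt) != len(avg):
        return False  # wrong number of keystrokes
    return all(abs(got - want) <= tolerance * want
               for got, want in zip(attempt, avg))

# Intervals recorded from two earlier successful logins (one list each).
profile = [[0.21, 0.35, 0.18, 0.40], [0.19, 0.33, 0.20, 0.38]]

print(timing_matches(profile, [0.20, 0.34, 0.19, 0.39]))  # True - same rhythm
print(timing_matches(profile, [0.60, 0.10, 0.55, 0.90]))  # False - wrong "fist"
```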

Such a system would not only be easier to use, but would also be stronger: it would add enough randomness to the password to prevent cracking and guessing, and sharing passwords would be stopped, as would some forms of password stealing – knowing only the password text would not allow you to authenticate. It wouldn’t completely prevent sniffing and phishing, but it would make such attacks more difficult, and would defeat most current methods. The system would also lower costs, primarily by reducing forgotten-password calls to the help desk. Not surprisingly, someone else already thought of this idea; the patent was filed by Lucent in 1999.

Unfortunately, Lucent’s patent has never been implemented. For a new password authentication mechanism to be effective, it would need to be integrated into the operating system (i.e. Windows), something that Microsoft has little economic incentive to do. Open source projects might want to add the feature, but they tend to avoid patented technology. So, until the patent expires and the folk wisdom changes, we’re probably stuck with what we have. In the meantime, the next time an auditor tells you to rotate your passwords, ask “why?”

Written by JohnB

October 23rd, 2010 at 3:55 pm

Posted in Posts

Freeware vs Payware: pick the product that best meets your needs


Is Open Source software more secure or less secure than Closed Source software? Usually, when people ask or answer this question, they are comparing free, open source software that is developed by a team of mostly volunteer collaborators against commercial software developed by for-profit companies; I will use the terms Open Source and Commercial to distinguish the two.

The relative security of open source and commercial software was a topic of considerable debate within the security community starting in 1999-2000. The proponents of open source software typically claimed that their software was more secure because it was free to be reviewed by anyone on the internet; volunteer security researchers and programmers could find and fix security problems better than traditional software publishers could. The commercial software publishers more or less argued the reverse; that criminals would find and take advantage of security flaws in open source software because it was freely available. By 2004, however, the debate was generally settled within the security community, and neither side won.

What security professionals found was that there are security advantages and disadvantages to both open source (free) software and commercial (closed source) software. Starting in 2001 with the release of the “Code Red” worm, vandals, and later criminals, began to take advantage of security flaws in both commercial and open source software on a large scale. Looking at the attacks since 2001, it’s clear that there are advantages to both open source and commercial software, but what’s most important is how we manage the risk of software vulnerabilities. In general, open source software has more vulnerabilities made public, because they are easier to find, but they are typically patched more quickly, since the development process allows for faster changes. Commercial software has fewer public vulnerabilities, but it can take much longer for fixes to be developed and released. For both open source and commercial software, the best thing you can do to protect against attacks is to quickly deploy software fixes when they are released. Proper configuration & maintenance has proven more important to security than how the software is developed.

In almost all cases, security fixes for both commercial and open source software are released before criminals start taking advantage of the flaws they fix. Good security practices that reduce the impact of security flaws, and good maintenance practices that deploy fixes quickly, provide the best protection against attack. “Zero-day” attacks, so named because they happen ‘0 days’ after the vulnerability is discovered, before the flaw can be fixed, are still uncommon, and they affect both open source & commercial software. The biggest factor in 0-day attacks seems to be the number of people using the software, without regard to how it is developed. (This does tend to favor open source software, but only because it is usually not in widespread use.) And if you have good general security, there’s not much more you can do to protect against a 0-day attack.

Publishers can improve their software development to reduce how frequently flaws are found, and also to make it less likely that attackers can take advantage of the flaws, but these practices are well known and can be used by both commercial and open source projects. OpenBSD, a free UNIX operating system, has followed strict development and design standards for many years, and as a result has had very few flaws. Microsoft started its Security Development Lifecycle in 2004, and largely as a result, the number of flaws in Vista, and now Windows 7, has steadily declined.

What’s really important is to buy the product you need. Unless you’re buying a security product, like a firewall, you’re buying something to meet a business need; security is only a secondary concern. I was recently asked if there are any security showstoppers when purchasing software. My response was, “no, not really, unless they do something stupid.” When comparing products, some will have better security than others, but most of the time, security weaknesses aren’t bad enough to stand in the way of picking the best product, and usually, better products have better security. The best way to make sure you understand a product’s security weaknesses is to ask a security expert before you purchase, so you know the security costs of both installing and maintaining the system.

After you’ve purchased the product, spend time to understand & configure the security features of what you’ve bought, following the advice of your security expert – unless what they tell you would prevent you from using the product; in that case, find a new expert. Ongoing maintenance is just as critical, if not more so. Be sure to commit time for applying critical updates, including receiving update notifications, as well as for any security administration. Configuration errors and missing patches affect all software, and good maintenance practices will prevent both.

If you outsource part or all of your IT, the decision remains the same. When you’re hiring a vendor to provide and support an application or other technology, it’s most important to find the vendor that best meets your business needs and practices. As with products, better vendors usually have better security, and good vendor management practices will also mean better security. Setting clear expectations of your business’s security requirements and doing due diligence are key; after all, security requirements are really just a specific type of business requirement. For software security, whether your vendor chooses open source or commercial packages, the question remains the same: how well does your vendor maintain the software? Are they monitoring for and regularly applying security updates? Are they configuring the software properly? Again, have your security expert review the vendor’s security program, and if they don’t meet your standards, find a new vendor.

For both open source and commercial software, the key to success is proper configuration and maintenance; good system management, or vendor management, will keep your applications and systems secure.

Written by JohnB

October 16th, 2010 at 12:00 am

Posted in Posts