I started Transvasive Security in 2008 and offered consulting services until taking a full time job in 2012. This site is an archive of my work prior to 2015; I currently blog at information-safety.org and provide security consulting through my new company, Security Differently.
Jack Jones posted a
critique
of CVSS to his
RiskAnalys.is blog this morning. I’m a big
fan of FAIR, and his
criticism of CVSS is valid, but I just don’t see how even a “fixed”
version of CVSS will ever be practically useful.
The CVSS scoring methodology creates a score to measure the “risk” of a
vulnerability, presumably to help people or automated systems prioritize
a response, usually installing a patch. Jack, who has made measuring
risk his career, is well qualified to assess how well CVSS works. He
rightly points out problems with the CVSS calculation, including its use of
arbitrary weighted values and its application of math to ordinal scales, and
suggests that FAIR might provide an alternative that fixes these
problems. Even if the RMI delivers a CVSS alternative, I’m not convinced
that a vulnerability scoring tool that accurately measures risk has
practical value.
With regard to frequency/probability of loss, CVSS focuses on the
likelihood of attacker success from a couple of different angles, but
never addresses the frequency/likelihood of an attack occurring in the
first place.
Maybe I’ve just led a sheltered life, but I’ve never found CVSS
terribly useful. Having been both a Windows administrator and the guy
responsible for reviewing the latest vulnerabilities and patches, I can
say that for most IT staff, vulnerability management can be broken down
into 2 basic steps:
Wait for the monthly Microsoft patch release
Deploy the patches
CVSS doesn’t even matter in the most basic case. Small companies without
a dedicated IT staff don’t need CVSS, since their vendors will tell
them which patches are important. If they’re good, they’ll deploy
Microsoft patches automatically, and if they’re really good, they’ll
even have reporting on how well systems are patched. Even so, the
majority will still be vulnerable since the non-OS vendors’ applications
(Adobe Reader and Flash) won’t be up to date, partly because they can’t take
advantage of Microsoft’s (or Apple’s) built-in updating mechanism. (Although
Apple is changing this with the Mac App Store.)
For those companies fortunate enough to have security staff dedicated
to running a vulnerability management program, CVSS still doesn’t help.
The more advanced version of VM really breaks down to 4 steps:
Wait for the monthly Microsoft (or other vendor’s) patch release
Determine how quickly the patches need to be deployed
Deploy the patches
Scan your systems to find systems that aren’t patched
CVSS might be able to help with step 2, but in practice it doesn’t
matter. At most, there are really four different speeds to deploy
patches: Emergency, Accelerated, Normal, and Eventually.
Emergency deployments are typically in response to an attack; as in,
“Drop everything and put everybody on it, SQL Slammer has taken down our
network!” No help from CVSS here, you’ll know when it’s an emergency.
Which leaves us with Accelerated (let’s put forth an extra effort to
get the patch deployed faster), Normal (deploy on the normal
schedule), and Eventually (security doesn’t care when this patch gets
deployed). CVSS in theory helps decide which of these three to pick, but
in my opinion, it fails to answer the key questions that are most
helpful in determining how hard to push on the gas.
There’s a cost associated with each patch we track within our VM system.
Each patch means we spend more time on reviewing, deploying, scanning,
and re-deploying. To manage this cost effectively, we need to remember
why we’re managing vulnerabilities in the first place. The bad guys are
trying to break in. For the majority of internal systems (desktops and
servers), all we really care about is whether or not the bad guys, most
often represented by malware, can get on to the system. Attacks that
matter after you’re already in don’t really need to be fixed, since once
the enemy has a foothold, it’s pretty much game over; there are too many
ways, especially on Windows systems, to take over, and it’s really
expensive to fix. Information leakage vulnerabilities do matter, but
again, if you’ve already got an attacker on your internal network,
you’ve got bigger problems. Focusing on what’s actually exploited
reduces vulnerabilities to two classes: the attacks the bad guys use to
break into systems (unauthenticated network attacks and client-side
desktop/browser attacks), and everything else. Again, CVSS doesn’t help
here. The cost of patching is high enough that “everything else” should
be automatically relegated to an Eventually (don’t care) deployment, leaving
only the decision of whether or not an Accelerated deployment is called for.
Factoring in risk in deciding whether or not to push a patch faster than
normal is a good idea, but CVSS leaves out the single most important
factor in judging the risk: the likelihood that an attacker will exploit
the vulnerability. This omission is excusable, since predicting how
likely an attack will happen is an educated guess at best. Predicting is
hard enough that it’s best to use a simple rule of thumb: if there are
exploits in the wild – the bad guys are actively exploiting the
vulnerability – then do an Accelerated deployment. Otherwise, go with
Normal.
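To make that concrete, here’s a toy sketch of the triage rule in Python; the function and field names are my own invention, not part of CVSS or any vulnerability scanner.

    def deployment_speed(vuln):
        """Toy triage rule; the field names are illustrative, not from any standard."""
        breaks_in = vuln.get("unauthenticated_network") or vuln.get("client_side")
        if not breaks_in:
            return "Eventually"   # "everything else" waits for the normal maintenance cycle
        if vuln.get("exploited_in_the_wild"):
            return "Accelerated"  # the bad guys are already using it, so push harder
        return "Normal"           # deploy on the regular patch schedule

    # Emergency isn't computed here; you'll know it when your network is under attack.
    print(deployment_speed({"client_side": True, "exploited_in_the_wild": True}))  # Accelerated
    print(deployment_speed({"unauthenticated_network": True}))                     # Normal
    print(deployment_speed({"local_privilege_escalation": True}))                  # Eventually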
I may be missing other use cases for CVSS, or missing the point
entirely, but for what seems to be its main use case, vulnerability
management, CVSS fails to deliver practical value. Instead of building
complicated scoring systems, simple rules based on knowledge of the
attackers nicely solve the patch management prioritization problem.
The different areas of knowledge and expertise within Information
Security can fairly be described in four categories: Physical,
Technical, Policy, and People.
Physical security is very well understood: it has been part of human
knowledge for as long as there have been people, we’re generally good at
it, and it hasn’t fundamentally changed since the days when computer
security meant guards, gates, and guns.
The story is similar for Technical security – firewalls, anti-virus,
access controls, tokens, etc. – which has been around since timesharing
systems created the need for the password. We’re not as good
at technical security as physical security, but we’re pretty good.
There’s always room for improvement, of course, but this is clearly an
area of strength for the profession – just look at the skills listed
for a typical security job posting: Firewall, IDS/IPS, Network
Administration, Active Directory, Anti-Virus, etc.
On Policy, which includes written policy, governance, program
organization, etc., we are weaker; most companies now have a security
policy, although not all do, and as we know, the policies aren’t always
well written or well implemented. Still, we do have established policy
frameworks, like the ISO 27000 series and other tools to address
problems of policy and governance.
For the final category, people, we’ve largely failed as a profession.
Historically, we’ve tried to force people to adapt to the technology we
built, and then blame the user when they fail to use it properly – the
talking point is, “people are stupid, and you can’t fix stupid.”
Security Awareness training, one of the few tools we have to address
people problems, has been and continues to be poorly executed. At best,
Awareness explains security rules well enough so that we can fire people
when they break them, and at worst it is a series of posters asking people
to “do good things,” with no evidence that it is even effective.
Although we have started to improve, our understanding of human/computer
interaction is poor, and we do little, if anything, to understand the
motivation and behavior of both external attackers as well as internal
personnel.
The failure of InfoSec to address people is understandable, considering
the origins of our profession. Before Information Security, there was
computer security – the interaction of people, computers, and
information. The IT professionals became IT Security professionals, a
path that is still typical for many Information Security professionals
today. However, by changing from IT Security to Information Security, we
expanded our purview to the interaction of people and information,
moving beyond the world of Physical and Technical controls.
Being computer professionals first, our focus remained on what was
important in computer security: the information and the technology
protecting it. The failures of security to address issues in the people
realm are largely due to this information-centric model. A new, people-centric
model is needed to develop the tools we lack.
In the next article, I’ll introduce the Behavioral Information Security
model, which places people at the center, a philosophical shift in
thinking that can help us tackle the “people problem.”
Nearly everyone in the US workforce who uses a desktop or laptop
computer as part of their job has now had to face the security
requirement to change their password every 90 days, or sometimes more
often. Government and financial institutions have generally required
password rotation for as long as there have been computer terminals,
since it was believed to improve security. Other organizations typically
started requiring people to change their passwords in response to
external auditors’ demands, originally in response to the Sarbanes-Oxley
Act (SOX), and more
recently, the Payment Card Industry Data Security Standard
(PCI-DSS).
Financial auditors required this since it was a generally accepted “best
practice,” and PCI specifically requires it (DSS 1.2, 8.5.9: “Change user passwords at least every 90 days”). People frequently
complained about the new rule, but rarely tried to challenge the
assertion that it was “good security.”
Most security practitioners I’ve come across don’t question the value of
changing passwords, but a few experienced professionals do quietly
challenge the folk wisdom. Gene
Spafford, who analyzed the
very first Internet worm (in 1988), is one of those professionals, and
challenges the notion that password rotation improves security in a
2006 blog
post.
There are several ways passwords can be compromised today:
Phishing; which is just another way of saying that someone tricks
you into providing them your password.
Sniffing; eavesdropping, either by monitoring passwords as they are
sent over a network (at the local coffee shop) or by installing
software on a computer to monitor keystrokes (leveraging viruses or
security flaws).
Stealing; finding the password somewhere, which could be written
down, stored in a Word document, or even in (computer) memory.
Sharing; when someone voluntarily gives you their password.
Guessing; either by being clever and deducing someone’s password
(maybe it’s the name & birthday of their only child), or by
systematically trying passwords until all possible combinations are
exhausted. Guessing at random generally doesn’t work very well,
since most password systems are designed to limit successive
failures to prevent this type of attack. My iPhone, and most
Blackberries, will automatically erase after 10 failed attempts.
Password cracking; on well-designed systems, passwords are stored
hashed, and while you can’t figure out a password from a hash, if
you can get a copy of someone’s hashed password, you can guess as
many times as you want, using your own computer to calculate hashes
from potential passwords. Finding the right password is only a
matter of time (see the sketch just after this list).
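As a minimal sketch of that last point (my own example, using a made-up four-character password and an unsalted SHA-256 hash purely to keep the run time short), offline cracking looks roughly like this:

    import hashlib
    import itertools
    import string

    # Illustrative only: an unsalted SHA-256 hash standing in for a stolen password hash.
    stolen_hash = hashlib.sha256(b"cab1").hexdigest()

    def crack(target_hash, alphabet=string.ascii_lowercase + string.digits, max_len=4):
        """Try every candidate up to max_len characters; return the one that matches."""
        for length in range(1, max_len + 1):
            for candidate in itertools.product(alphabet, repeat=length):
                guess = "".join(candidate)
                if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                    return guess
        return None

    print(crack(stolen_hash))  # prints "cab1" after roughly 140,000 guesses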
Changing your password offers no protection against any of these
threats, except password cracking – in every other case, someone else
has your password. Changing passwords will help protect against
cracking, because if you change your password before it can be cracked,
it will still be safe. This suggests a possible origin of this folk
wisdom, as explained by Gene: “As best as I can find, some DoD
contractors did some back-of-the-envelope calculation about how long it
would take to run through all the possible passwords using their
mainframe, and the result was several months. So, they (somewhat
reasonably) set a password change period of 1 month as a means to defeat
systematic cracking attempts.”
Rotating passwords every month made sense 30 years ago, when trying to
break a password took several months, but that’s no longer true. Modern
tools can crack a password in much less time. For passwords that
meet typical
requirements, say, 8
characters long, with at least one upper case letter, lower case letter,
and a number, there are 218 trillion combinations. That seems like a
lot, but with a $500 graphics
card and
a free cracking tool, a modern
desktop can check over 600 million passwords per second, covering all
possible combinations in just over 4 days. More sophisticated attacks
that take advantage of weaknesses in how people choose passwords, or how
some systems store passwords, can crack many passwords in seconds.
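The arithmetic behind those figures is easy to check; this quick sketch assumes the same 62-character alphabet and 600-million-guesses-per-second rate quoted above:

    chars = 26 + 26 + 10            # upper case + lower case + digits = 62
    combinations = chars ** 8       # every possible 8-character password
    rate = 600_000_000              # guesses per second on a GPU-equipped desktop

    print(f"{combinations:,} combinations")           # 218,340,105,584,896
    print(f"{combinations / rate / 86400:.1f} days")  # about 4.2 days to try them all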
One response would be to require still longer passwords, with
requirements that force them to be more random, which would increase
cracking time. How long is long enough for current computing power?
Well, today NIST recommends a minimum encryption key length of 128 bits.
To match the same level of strength, a random password of upper & lower
case letters and numbers would have to be 22 characters long; if you
include all of the characters on the keyboard, you can shrink this to 20.
Since people don’t do a good job of picking random passwords,
getting to 128 bits is much harder if you let them pick their own. Using a
NIST estimate, a “strong” password would have to be over 100 characters
long. Even matching a 64-bit key is hard for people; it would still
require a 48-character password.
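For truly random passwords, the equivalence above is simple entropy math, as the sketch below shows; the 100-plus and 48-character figures for human-chosen passwords come from NIST’s much lower entropy-per-character estimates, which aren’t reproduced here.

    import math

    def length_to_match(key_bits, alphabet_size):
        """Characters of a truly random password needed to match key_bits of strength."""
        return math.ceil(key_bits / math.log2(alphabet_size))

    print(length_to_match(128, 62))  # upper + lower case letters and digits -> 22 characters
    print(length_to_match(128, 95))  # every printable keyboard character    -> 20 characters
    print(length_to_match(64, 62))   # matching only a 64-bit key            -> 11 characters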
My complaint with password rotation goes beyond the fact that rotating
passwords offers little or no security benefit. The traditional defenses
against password cracking – requiring longer passwords, more frequent
rotation, and greater complexity – all make passwords harder for people to
use, which can actually be harmful. Consider how people
respond to stronger password requirements: memorizing passwords is a
hard task for the human brain. Longer passwords and less time to
memorize them before being forced to change only make the task harder.
To make things easier, some will employ helpful tricks, like turning a
phrase into random-looking letters & numbers, and others will use
tricks that make passwords easier to guess. “Password1” is more than 8
characters long, has at least one upper case letter, lower case letter,
and a number, so it will meet most password policies, but is a really
bad password. To cope with the requirement to change passwords, people
would pick a base password, and then just change the last number…
“Password1, Password2, Password1, …” Adding the new requirement that
new passwords couldn’t match the past 10 (or more) passwords just forced
people to become a little more clever; “Password9, Password10,
Password11, …” Over the past few years, I’ve asked people what they do
when they have to change their password, and nearly everyone, including
many security people, uses some method of a base password with a rotating
number at the end (some are more clever and put it at the beginning).
Only one non-security person picked a completely new password every
time, and as far as I can tell, that was mainly because she believed it
was important in keeping her job. Of course, some will give up and just
write their password down. This isn’t such a bad thing, if they keep it
in a safe place, like their wallet, but that doesn’t always happen.
Security policies can make things worse by telling people never to write
down their password – policies should instead encourage people to keep it
safe if they must write it down.
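To underline the “Password1” example above, here’s a toy version of a typical complexity check (my own sketch of the usual rules, not any particular product’s policy engine); terrible passwords pass it easily:

    import re

    def meets_typical_policy(password):
        """At least 8 characters, with an upper case letter, a lower case letter, and a digit."""
        return (len(password) >= 8
                and re.search(r"[A-Z]", password) is not None
                and re.search(r"[a-z]", password) is not None
                and re.search(r"[0-9]", password) is not None)

    print(meets_typical_policy("Password1"))   # True, despite being a really bad password
    print(meets_typical_policy("Password11"))  # True: the incrementing-number trick passes too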
All of these things make passwords weaker, and more vulnerable to
attack. People use computers to get work done, and if the security gets
in the way, they will find a way around it. Instead of forcing people to
change their behavior to fit the technology, password systems could be
redesigned to take advantage of human strengths and idiosyncrasies. For
example, although memorizing a password is initially difficult, it
becomes easier if it is used frequently over a long period of time. Over
time, as repetitive tasks like typing your password or driving your
car are mastered, they can be done without thinking. I once forgot my
ATM PIN, and had to go to the cash machine and watch how I entered it on
the keypad to remember. Typing a well-known password follows a
predictable rhythm, and one that is unique to an individual, much like a
telegraph operator’s
“fist.” A system that
measured the timing of keystrokes in addition to the password itself
would be much more effective at identifying a person, which is the goal
of the password. At first, the system would have to accept the password
alone, but over time it could learn a person’s unique password signature.
Allowing for some variation and change over time would reduce failures.
Sudden changes in typing speed, perhaps due to injury, would still
require a password reset, but most systems are already set up to allow
for that.
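A rough sketch of how such a system might work (entirely hypothetical, and not how the Lucent patent actually describes it): record the intervals between keystrokes on each successful login, and once enough samples have been collected, only accept attempts whose rhythm stays close to the learned profile.

    import statistics

    def verify_rhythm(profile, attempt, tolerance=0.35):
        """profile: keystroke intervals from past logins; attempt: this login's intervals (seconds)."""
        if len(profile) < 5:
            return True                    # still learning: accept on the password alone
        means = [statistics.mean(samples) for samples in zip(*profile)]
        deviation = statistics.mean(abs(a - m) / m for a, m in zip(attempt, means))
        return deviation <= tolerance      # within ~35% of this person's usual rhythm

    profile = [[0.21, 0.34, 0.18, 0.40], [0.19, 0.31, 0.20, 0.38],
               [0.22, 0.33, 0.17, 0.42], [0.20, 0.36, 0.19, 0.39],
               [0.21, 0.32, 0.18, 0.41]]
    print(verify_rhythm(profile, [0.20, 0.33, 0.18, 0.40]))  # True: matches the owner's rhythm
    print(verify_rhythm(profile, [0.05, 0.06, 0.05, 0.06]))  # False: right password, wrong rhythm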
Such a system would not only be easier to use, but would also be
stronger: it would add enough randomness to the password to prevent
cracking and guessing, and it would stop password sharing, as well as
some forms of password stealing, since knowing only the password text would
not allow you to authenticate. It wouldn’t completely prevent sniffing
and phishing, but it would make such attacks more difficult, and would
defeat most current methods. The system would also lower costs,
primarily by reducing forgotten password calls to the help desk. Not
surprisingly, someone else already thought of this idea; the
patent was filed by
Lucent in 1999.
Unfortunately, Lucent’s patent has never been implemented. For a new
password authentication mechanism to be effective, it would need to be
integrated into the operating system (i.e., Windows), something that
Microsoft has little economic incentive to do. Open source projects
might want to add the feature, but they tend to avoid patented
technology. So, until the patent expires, and the folk wisdom changes,
we’re probably stuck with what we have. In the meantime, the next time
an auditor tells you to rotate your passwords, ask “why?”