Transvasive Security

the human factor

Some random ideas from RSA 2012

This was my first time at RSA; I had always managed to find an excuse to avoid it, especially because it always seemed to be a really big conference. It is. Really big: the largest vendor floor I’ve seen at a security conference. One of the speakers, Misha Glenny, mentioned that Information Security is a $100 billion industry worldwide that, despite the recession, is growing at 6-8% annually in the developed world and 10-15% in the developing world. I feel fortunate to have ended up in a field that is both interesting and in demand. By some counts, attendance was in excess of 20,000 people, although many of those were likely free “expo only” passes. “Big Data” was the most-hated buzzword of the conference, eclipsing “APT.” My overall impression: we’re all still struggling with mostly the same issues.

I spent my first day at RSA (I arrived early) at Mini-Metricon 6.5, a series originally started by Andrew Jaquith, who literally wrote the book on security metrics. It was an all-day pre-conference session with a good group of interested security professionals. The talks were short, but they led to some of the best highlights of the conference.

Highlights from the talks:

  • Bob Rudis and Albert Yin of Liberty Mutual, and John Streufert of DHS (formerly of the State Department), spoke on their experiences with vulnerability reporting – more on that later.
  • Steve Kruse and Bill Pankey spoke on Assessing User Awareness. I liked their approach of testing awareness by presenting mock security scenarios and scoring them based on appropriate behavioral responses.
  • Jennifer Bayuk’s survey of Security SMEs provided a good consensus on what’s important in Information Security.
  • Andrew Jaquith talked about What We Can Learn from Everyday Metrics. Now I know why Perimeter has such great reports!
  • The day was capped off with the awards for the Best and Worst Data-Driven Security Reports of 2011. Aligning perfectly with how I would have voted, the “Best” winner was the Verizon DBIR, and the “Worst” was the Ponemon Institute’s 2010 US Cost of a Data Breach. Larry, Larry, Larry.

By far the biggest idea of the day, and of the conference, was seeing again, for the first time, the work John Streufert’s team did at the US Department of State developing iPost, the centerpiece of their Continuous Risk Monitoring program. I believe I saw John’s presentation once before, but for whatever reason, I missed the point the first time around. Seeing his presentation at Metricon, especially after Liberty Mutual’s Bob Rudis and Albert Yin spoke about “Using Peer Pressure to Improve Security KPIs,” I finally understood the value of iPost.

Bob and Albert spoke briefly about their experiences with reporting metrics on vulnerability scans: at first they weren’t very successful, but when they changed their reporting approach to show two key factors, the reports gained real traction (a sketch of this kind of report follows the list):

  • Show how vulnerability scores change over time, and
  • Show the relative performance of different departments.
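
Here is what such a report might look like in miniature; the departments, months, and scores below are all invented for illustration:

    # Hypothetical scan results: (scan_month, department, weighted vulnerability score)
    from collections import defaultdict

    scans = [
        ("2012-01", "Ops", 412), ("2012-02", "Ops", 389), ("2012-03", "Ops", 245),
        ("2012-01", "Web", 530), ("2012-02", "Web", 360), ("2012-03", "Web", 198),
        ("2012-01", "Finance", 150), ("2012-02", "Finance", 162), ("2012-03", "Finance", 140),
    ]

    # Factor 1: show how each department's score changes over time.
    trend = defaultdict(dict)
    for month, dept, score in scans:
        trend[dept][month] = score
    for dept, by_month in sorted(trend.items()):
        history = " -> ".join(f"{m}: {s}" for m, s in sorted(by_month.items()))
        print(f"{dept:8} {history}")

    # Factor 2: rank departments against each other on the latest scan.
    latest = max(month for month, _, _ in scans)
    ranking = sorted((row for row in scans if row[0] == latest), key=lambda row: row[2])
    print(f"\nRanking for {latest} (lower is better):")
    for rank, (_, dept, score) in enumerate(ranking, 1):
        print(f"  #{rank} {dept}: {score}")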

While Liberty Mutual demonstrated good reporting, the State Department took it to a whole new level of sophistication. John has been honored as a security leader for his work, an honor well deserved. State created a “risk market”: it weighted all vulnerabilities with carefully chosen values, scored each embassy, and converted each embassy’s score into a letter grade, A through F, graded on a curve. The iPost reporting tool allowed individual embassies to quickly drill down and identify the vulnerabilities that were the largest contributors to their scores.

The effects were dramatic: in the first 12 months of the program, State saw an 89% reduction in vulnerabilities at domestic sites and a 90% reduction at foreign sites. The beauty of their method is that through the risk market, the security staff were able to communicate both the vulnerabilities that needed to be fixed and, through the weightings, the relative importance of fixing them, while giving the teams full discretion over when and how they fixed the problems. State even used this to their advantage: during the Aurora attacks, they raised the score of MS10-018 to 40 times normal, which drove patch compliance from 20% to 85% in six days. As an economist, I was struck by how an engineered marketplace could drive results more effectively than central planning.
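
The details of iPost’s scoring formula weren’t in the talk, but the mechanics are easy to sketch. Everything below (the weights, sites, and finding counts) is invented for illustration, with a 40x urgency multiplier applied the way State described:

    # Hypothetical weights per finding type; raising a weight is how the security
    # team signals urgency (State reportedly used a 40x multiplier for MS10-018).
    weights = {"missing_critical_patch": 10.0, "weak_password": 8.0, "open_share": 3.0}
    weights["ms10_018_unpatched"] = 10.0 * 40  # urgency multiplier

    # Hypothetical per-site findings: {site: {finding_type: count}}
    findings = {
        "Embassy A": {"missing_critical_patch": 12, "open_share": 4},
        "Embassy B": {"ms10_018_unpatched": 3, "weak_password": 7},
        "Embassy C": {"open_share": 1},
    }

    def site_score(site_findings):
        """Weighted sum of findings: the site's position in the 'risk market'."""
        return sum(weights.get(f, 1.0) * n for f, n in site_findings.items())

    def grade_on_curve(scores):
        """Rank sites best (lowest) to worst and spread letter grades across the ranking."""
        ranked = sorted(scores, key=scores.get)
        grades = "ABCDF"
        return {site: grades[min(i * len(grades) // len(ranked), len(grades) - 1)]
                for i, site in enumerate(ranked)}

    scores = {site: site_score(f) for site, f in findings.items()}
    for site, grade in grade_on_curve(scores).items():
        print(f"{site}: score={scores[site]:.0f}, grade={grade}")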

Bottom line: when comparing departments to each other, the social pressure had a big effect on patching rates. Although I’ve lost the reference, this was the approach NASA took in the late ’90s when they started one of the first vulnerability management programs. NASA security staff reported on vulnerability rates by department, which led to competition to see who could get the lowest score. The State Department approach was similar, delivering a report broken down by department (embassy) to everyone, so ambassadors could see their performance relative to their peers. I credit the success of the vulnerability program I started in 2001 in large part to the report we developed, which was also broken down by department.

The experiences of Liberty Mutual, the State Department, and my own program share some additional key factors that I believe led to our success: we worked with the teams responsible for fixing the vulnerabilities before launching the management report, and we made sure they understood how they could improve their “score” and that they were actually able to do so. In our program, I spent considerable time working with the engineers and even generated two reports: an early report for the engineers, and a later report (after a second scan) that went to senior management, giving the engineers time to fix issues before the report went to their boss. I firmly believe that this is the formula for vulnerability management.

Moving on to the actual conference, the keynotes on day one were about what I expected, and largely idea-free. The afternoon was better, and I attended the Risk Management Smackdown II and Vulnerability panels, but didn’t pick up much new material.

Dan Kaminsky is always a good speaker, and I attended his random-ideas talk on Wednesday morning. I liked his point that passwords actually work very well (thank you very much), partly because they’re so cheap to implement. I really liked his analysis of DNSSEC vs. SSL: he hypothesizes that DNSSEC will eventually replace the SSL Certificate Authorities in validating website addresses, because there are fewer trust relationships for companies to manage – with DNS, there’s only one entity that we need to trust for .com, which is much easier (and therefore cheaper) than trusting every CA.
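
One concrete realization of this idea, being standardized around this time as DANE, publishes certificate associations in DNS TLSA records. As a rough illustration (not from Dan’s talk), here’s a sketch using the third-party dnspython library; it assumes a DNSSEC-validating resolver, without which the record proves nothing:

    import dns.resolver  # third-party: dnspython

    def fetch_tlsa(host, port=443, proto="tcp"):
        """Fetch the DANE TLSA records that pin certificates for host:port in DNS."""
        name = f"_{port}._{proto}.{host}"
        try:
            answers = dns.resolver.resolve(name, "TLSA")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []  # no DANE statement published for this service
        # usage=3 ("DANE-EE") pins the server's end-entity certificate itself,
        # with no Certificate Authority in the loop at all.
        return [(r.usage, r.selector, r.mtype, r.cert.hex()) for r in answers]

    for record in fetch_tlsa("example.com"):  # hypothetical domain
        print(record)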

The rest of the day was less memorable, but I enjoyed the B-Sides panel with Amit Yoran, Kevin Mandia, Ron Gula and Roland Cloutier – although nobody could really answer my question on how to set up a cyberintelligence capability, I did learn about Mandiant’s OpenIOC, which is promising.

David Brooks gave the final keynote of the day, and spoke about topics from his new book, The Social Animal. David is an entertaining and engaging speaker, and while he didn’t directly relate the ideas from his book to security, I bought the book on the strength of his talk (I haven’t read it yet). Both the talk and the book draw from the same contemporary research in brain science, cognitive theory, and behavioral economics that has heavily influenced my work in Behavioral Information Security.

I spent the bulk of Thursday in a small group discussing risk management. It’s both an old and a new area for Information Security, and what I noticed most is how difficult the field is. We’ve come a long way from ALE (Annualized Loss Expectancy; the textbook formula is reproduced below), but there’s still no shortage of problems to solve. If you’re interested in helping, I would encourage you to join SIRA, the Society of Information Risk Analysts, and get involved. The first SIRA conference, SIRACon, will be held in St. Paul, MN the day before Secure360.
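
For anyone who hasn’t run into it, the classic ALE calculation is:

    % Annualized Loss Expectancy, the classic quantitative risk formula:
    \[
      \mathrm{SLE} = \text{Asset Value} \times \text{Exposure Factor},
      \qquad
      \mathrm{ALE} = \mathrm{SLE} \times \mathrm{ARO}
    \]

where SLE is the Single Loss Expectancy and ARO is the Annualized Rate of Occurrence; the usual criticism is that none of these inputs can be estimated reliably.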

Friday was a short day. I liked Dave Aitel’s talk on organizations as cyberweapons (think The Pirate Bay and WikiLeaks), and Misha Glenny’s commentary on “Understanding the Social Psychology of Hackers,” where he made the case that there is a difference between “Hackers,” who are largely motivated by solving puzzles and follow an escalating path into criminal activity, and “Social Engineers,” who are motivated only by criminal financial gain. The final keynotes were Hugh Thompson and Tony Blair. Tony said virtually nothing about security but was an excellent speaker, and Hugh did two interviews: one with Daniel Gardner, the author of The Science of Fear (a book I’ve read and highly recommend), and one with Frank Luntz, the master of word manipulation, an interview that can only be described as bizarre.

And that was it for RSA 2012. If all goes well, you can look forward to a recap of SIRACon and Secure360 in May!

On Money Mules and Credential Theft

A Threatpost article, “Money Mules, Not Customers, The Real Victims of Bank Fraud,” and the paper it references caught my attention today. The premise of the paper is that, due to banking regulations and how banks react to fraudulent online transactions affecting consumer accounts, the criminals are effectively stealing not from consumers, but from the “money mules” they recruit to move the stolen money. Brian Krebs, a journalist and blogger who writes about the online criminal underground and information security issues on his blog, Krebs on Security, posted a comment criticizing the authors’ conclusions, specifically calling out that the main victims of banking credential theft are small and mid-size business owners, who are liable for losses and have lost significant amounts of money. I’ve reposted my reply in part below. I largely agree with Brian; however, I do think the authors raise good points about the difficulty of moving money through the banking system, and about the critical role mules play in online bank fraud.

@Brian,

Your point on the fraud losses to small and mid-size business owners with corporate banking accounts is spot-on, and while the paper makes it clear they are mainly addressing the consumer problem, it’s a fair criticism that they’re glossing over a significant portion of online banking fraud, and that they misrepresent the facts by citing the instances in which fraudulent transactions on commercial accounts were reversed, and not the transactions that couldn’t be reversed.

However, I do believe the paper raises an excellent point about online consumer banking fraud, and online banking fraud in general. It is difficult to transfer money out of accounts, and the mules really do bear much of the risk: as you have noted, they rarely get paid, and they sometimes may not realize that what they’re doing is illegal. The authors’ point on the low black-market value of stolen credentials relative to account value does indicate that extracting money is difficult and unlikely to succeed. Even though their argument (that the way banks resolve fraudulent transfers means attackers are effectively stealing from the mules) only applies to consumer accounts, I welcome the suggestion that we attack the problem at other points in the chain, and not just at passwords. We may do better to disrupt online banking fraud by putting more effort into making mule recruitment harder.

I would also raise a point not yet covered in the article or the comments: I take issue with the authors’ comments on liability. The auto rental and identity theft insurance markets have little bearing on banks’ decision to offer zero-dollar liability; the reality is that when consumers’ liability is limited by regulation to $50, offering to cover the extra $50 is trivially inexpensive. When banks aren’t legally obligated to bear liability, they quite willingly shift it to the account holder, as is the case for US commercial bank accounts. I for one would very much like to see regulators force the issue and limit liability for at least small and mid-size businesses, since they’re simply not equipped to handle this type of fraud on their own.

Threat Modeling

Recently, I read and commented on a series of posts at The New School blog: Threat Modeling Fails In Practice, On Threat Modeling, and Yet More On Threat Modeling: A Mini-Rant. After reading both sides of the argument, I concluded that while threat modeling can be helpful, we need to find a better way that doesn’t require us to brainstorm. Imagining the threats begets imaginary threats. I strongly believe that because of our cognitive errors in estimating risk, brainstorming threats is a mistake, and will inevitably lead to guessing what the threats will be, guesses that are at best only slightly better than random chance.

To that end, I believe that some of my recent work in Behavioral Security Modeling (BSM) may be part of the solution. Threat modeling needs to be deconstructed and integrated directly into the software development life cycle (SDLC). Some of the benefits provided by threat modeling in general, and STRIDE specifically, include identifying missing requirements and potential quality and safety issues, something BSM is designed to help with, and I’ve got some ideas on how to address the other elements.
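
As one illustration of how threat enumeration can be made mechanical rather than brainstormed, Microsoft’s STRIDE-per-element variant maps each data-flow-diagram element type to a fixed set of threat categories. A minimal sketch (the application elements are hypothetical):

    # STRIDE-per-element: a fixed mapping from DFD element type to the threat
    # categories that apply to it, so threats are enumerated mechanically
    # instead of brainstormed.
    STRIDE = {
        "external_entity": ["Spoofing", "Repudiation"],
        "process":         ["Spoofing", "Tampering", "Repudiation",
                            "Information disclosure", "Denial of service",
                            "Elevation of privilege"],
        "data_store":      ["Tampering", "Information disclosure",
                            "Denial of service"],
        "data_flow":       ["Tampering", "Information disclosure",
                            "Denial of service"],
    }

    # Hypothetical elements from a web application's data-flow diagram.
    elements = [("browser", "external_entity"),
                ("web server", "process"),
                ("user database", "data_store"),
                ("login request", "data_flow")]

    for name, etype in elements:
        for threat in STRIDE[etype]:
            print(f"Consider: {threat} against {name}")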

Work is slowly progressing on the BSM white paper that I am using to develop and refine the ideas from my original Behavioral Security Modeling presentation, and I’ve enlisted a collaborator with strong application development experience. We’ve already discussed threat modeling, and if it’s not directly addressed in our white paper or the presentation (we’ll be speaking at Secure360!), it certainly will be in the framework we’re building behind the scenes.