Transvasive Security

the human factor

Some random ideas from RSA 2012

This was my first time at RSA; I had always managed to find an excuse to avoid it, especially because it always seemed like a really big conference. It is. Really big – the largest vendor floor I’ve seen at a security conference. One of the speakers, Misha Glenny, mentioned that Information Security is a $100 Billion industry worldwide and, despite the recession, is growing at 6-8% annually in the developed world and 10-15% in the developing world. I feel fortunate to have ended up in a field that is both interesting and in demand. By some counts, attendance was in excess of 20,000 people, although many of those were likely free “expo only” passes. “Big Data” was the most-hated buzzword of the conference, eclipsing “APT.” My overall impression: we’re all still struggling with mostly the same issues.

I spent my first day at RSA (I arrived early) at Mini-Metricon 6.5, which was originally started by Andrew Jaquith, who literally wrote the book on security metrics. It was an all-day pre-conference session with a good group of interested security professionals. Talks were short, but led to some of the best highlights of the conference.

Highlights from the talks:

  • Bob Rudis and Albert Yin of Liberty Mutual, and John Streufert of DHS (formerly of the State Department), spoke on their experiences with vulnerability reporting – more on that later.
  • Steve Kruse and Bill Pankey spoke on Assessing User Awareness. I liked their approach of testing awareness by presenting mock security scenarios and scoring them based on appropriate behavioral responses.
  • Jennifer Bayuk’s survey of Security SMEs provided a good consensus on what’s important in Information Security.
  • Andrew Jaquith talked about What We Can Learn from Everyday Metrics. Now I know why Perimeter has such great reports!
  • The day was capped off with the awards for the Best and Worst Data-Driven Security Reports of 2011. Aligning perfectly with how I would have voted, the “Best” winner was the Verizon DBIR, and the “Worst” was the Ponemon Institute’s 2010 US Cost of a Data Breach. Larry, Larry, Larry.

By far the biggest idea of the day, and of the conference, was seeing again, for the first time, the work John Streufert’s team at the US Department of State did developing iPost, the centerpiece of their Continuous Risk Monitoring program. I believe I saw John’s presentation once before, but for whatever reason, I missed the point the first time around. Seeing his presentation at Metricon, especially after Liberty Mutual’s Bob Rudis and Albert Yin spoke about “Using Peer Pressure to Improve Security KPIs,” I understood the value of iPost.

Bob and Albert spoke briefly about their experiences with reporting metrics on vulnerability scans: at first they had little success, but once they changed their reporting approach to highlight two key factors (sketched in code after the list), the results improved dramatically:

  • Show how vulnerability scores change over time, and,
  • Show the relative performance of different departments.
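
If it helps to see what that kind of report boils down to, here’s a toy sketch in Python; the department names, weeks, and scores are invented for illustration and have nothing to do with Liberty Mutual’s actual data:

```python
# Illustrative sketch only: departments, weeks, and scores are made up.
from collections import defaultdict

# Hypothetical weekly scan results: (week, department, total vulnerability score)
scans = [
    ("2012-W01", "Claims", 420), ("2012-W01", "Underwriting", 310),
    ("2012-W02", "Claims", 455), ("2012-W02", "Underwriting", 280),
    ("2012-W03", "Claims", 390), ("2012-W03", "Underwriting", 240),
]

# Factor 1: show how each department's score changes over time.
trend = defaultdict(list)
for week, dept, score in scans:
    trend[dept].append((week, score))

for dept, points in trend.items():
    first, last = points[0][1], points[-1][1]
    change = 100.0 * (last - first) / first
    print(f"{dept}: {first} -> {last} ({change:+.0f}%)")

# Factor 2: show relative performance, ranked best (lowest score) to worst.
latest_week = max(week for week, _, _ in scans)
latest = sorted((score, dept) for week, dept, score in scans if week == latest_week)
for rank, (score, dept) in enumerate(latest, start=1):
    print(f"{rank}. {dept}: {score}")
```

The point is less the code than the framing: a trend line tells a team whether it is improving, and a ranking tells it how it compares to its peers.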

While Liberty Mutual demonstrated good reporting, the State Department took it to a whole new level of sophistication. John has been honored as a security leader for his work, an honor well deserved. State created a “risk market” by weighting all vulnerabilities with carefully chosen values, scoring each embassy, and rating each embassy’s score with a letter grade, A-F, graded on a curve. The iPost reporting tool allowed individual embassies to quickly drill down to identify the vulnerabilities that were the largest contributors to their scores.

The effects were dramatic: in the first 12 months of the program, State saw an 89% reduction in vulnerabilities at domestic sites and a 90% reduction at foreign sites. The beauty of their method is that, through the risk market, the security staff could communicate both which vulnerabilities needed to be fixed and, through the weightings, the relative importance of fixing them, while giving the teams full discretion over when and how they fixed the problems. State even used this to their advantage: during the Aurora attacks, they raised the score of MS10-018 to 40 times normal, which drove patch compliance from 20% to 85% in six days. As an economist, I was struck by how an engineered marketplace could drive results more effectively than central planning.
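
To make the mechanism concrete – and this is purely my own illustration, not State’s actual iPost scoring – here is a minimal sketch of a weighted score graded on a curve, including what happens when a single weight is raised:

```python
# Purely illustrative: the vulnerability IDs, weights, counts, and grade
# cutoffs below are my own invention, not the State Department's values.
weights = {"MS10-018": 1.0, "MS08-067": 9.0, "weak-password": 3.0}

# Hypothetical open-finding counts per embassy.
embassies = {
    "Embassy A": {"MS10-018": 12, "weak-password": 4},
    "Embassy B": {"MS08-067": 2, "MS10-018": 3},
    "Embassy C": {"weak-password": 20},
}

def score(findings, weights):
    """Weighted sum of open findings: higher means more residual risk."""
    return sum(weights[v] * count for v, count in findings.items())

def grade_on_curve(scores):
    """Assign A-F by rank within the peer group (a simple curve)."""
    ranked = sorted(scores, key=scores.get)          # best (lowest) first
    letters = "ABCDF"
    return {name: letters[min(i * len(letters) // len(ranked), 4)]
            for i, name in enumerate(ranked)}

scores = {name: score(f, weights) for name, f in embassies.items()}
print(grade_on_curve(scores))

# Raising a single weight (as State did during Aurora) immediately reshuffles
# priorities for every embassy, without any central mandate on how to fix it.
weights["MS10-018"] *= 40
scores = {name: score(f, weights) for name, f in embassies.items()}
print(grade_on_curve(scores))
```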

Bottom line: when departments were compared to each other, the social pressure had a big effect on patching rates. Although I’ve lost the reference, this was the approach NASA took in the late ’90s when it started one of the first vulnerability management programs: NASA security staff reported vulnerability rates by department, which led to competition over who could get the lowest score. The State Department approach was similar, delivering a report broken down by department (embassy) to everyone, so ambassadors could see their performance relative to their peers. I credit the success of the vulnerability program I started in 2001 in large part to the report we developed, which was also broken down by department.

The experiences of Liberty Mutual, the State Department, and my own program share some additional key factors that I believe led to our success: we worked with the teams responsible for fixing the vulnerabilities before launching the management report, made sure they understood how they could improve their “score,” and made sure they were actually able to do so. In our program, I spent considerable time working with the engineers and even generated two reports: an early report for the engineers, and a later report (after a second scan) that went to senior management, giving the engineers time to fix issues before the report reached their boss. I firmly believe this is the formula for vulnerability management.
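
For what it’s worth, the two-report idea is simple enough to express as a sketch; the function names and findings below are hypothetical, not the actual reports from my program:

```python
# Hedged sketch of the two-report workflow described above: an early report
# for engineers after the first scan, then a management report after a second
# scan that shows only what is still open. Names and data are invented.
def engineer_report(scan1):
    """Everything found in the first scan goes straight to the fixing teams."""
    return sorted(scan1)

def management_report(scan1, scan2):
    """Only findings still open in the second scan reach senior management."""
    return sorted(set(scan1) & set(scan2))

scan1 = {"web01:MS10-018", "db02:weak-password", "web01:openssl-outdated"}
scan2 = {"db02:weak-password"}  # engineers fixed the rest before the re-scan

print("To engineers:", engineer_report(scan1))
print("To management:", management_report(scan1, scan2))
```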

Moving on to the actual conference, the keynotes on day one were about what I expected, and largely an idea-free environment. The afternoon was better, and I attended the Risk Management Smackdown II and Vulnerability panels, but didn’t get much new material.

Dan Kaminsky is always a good speaker, and I attended his random-ideas talk on Wednesday morning. I liked his point that passwords actually work very well (thank you very much), partly because they’re so cheap to implement. I really liked his analysis of DNSSEC vs. SSL: he hypothesizes that DNSSEC will eventually replace the SSL Certificate Authorities in validating website addresses, because there are fewer trust relationships for companies to manage – with DNS, there is only one entity we need to trust for .com, which is much easier (and therefore cheaper) than trusting every CA.

The rest of the day was less memorable, but I enjoyed the B-Sides panel with Amit Yoran, Kevin Mandia, Ron Gula and Roland Cloutier – although nobody could really answer my question on how to set up a cyberintelligence capability, I did learn about Mandiant’s OpenIOC, which is promising.

David Brooks gave the final keynote of the day, speaking about topics from his new book, The Social Animal. David is an entertaining and engaging speaker, and while he didn’t directly relate the ideas from his book to security, I bought the book on the strength of his talk (I haven’t read it yet). Both the book and the talk draw from the same contemporary research in brain science, cognitive theory, and behavioral economics that has heavily influenced my work in Behavioral Information Security.

I spent the bulk of Thursday in a small group discussing risk management. It’s both an old and new area for Information Security, and what I noticed most is how difficult the field is. We’ve come a long way from ALE, but there’s still no shortage of problems to solve. If you’re interested in helping, I would encourage you to join SIRA, the Society of Information Risk Analysts, and get involved. The first SIRA conference, SIRACon, will be held in St Paul, MN the day before Secure360.

Friday was a short day. I liked Dave Aitel’s talk on organizations as cyberweapons – think The Pirate Bay and WikiLeaks – and Misha Glenny’s commentary on “Understanding the Social Psychology of Hackers,” where he made the case that there is a difference between “Hackers,” who are largely motivated by solving puzzles and follow an escalating path into criminal activity, and “Social Engineers,” who are motivated purely by criminal financial gain. The final keynotes were Hugh Thompson and Tony Blair. Tony said virtually nothing about security but was an excellent speaker, and Hugh did two interviews: one with Daniel Gardner, the author of The Science of Fear (a book I’ve read and highly recommend), and one with Frank Luntz, the master of word manipulation, an interview that can only be described as bizarre.

And that was it for RSA 2012. If all goes well, you can look forward to a recap of SIRACon and Secure360 in May!