Archive for the ‘Posts’ Category

Behavioral Security Modeling Secure360 Presentation

Copyright © 2012 Brophey Consulting and Transvasive Security. All rights reserved.

Here are the slides from our talk at Secure360 on our recently published white paper, Behavioral Security Modeling: Functional Security Requirements.

We’ll provide a link to the video when it becomes available.

Written by JohnB

May 8th, 2012 at 11:47 am

Posted in Posts

Behavioral Security Modeling: Functional Security Requirements

Copyright © 2012 Brophey Consulting and Transvasive Security. All rights reserved.

In my Behavioral Security Modeling talk at OWASP AppSec USA 2011, I promised a white paper on BSM. Since then, I enlisted the aid of Karl Brophey, a friend who has a wealth of experience in software development and architecture, and the result of our collaboration is finally complete! I’m pleased to formally announce the release of the first BSM white paper, “Behavioral Security Modeling: Functional Security Requirements.” Karl and I will be speaking about the paper today at Secure360 in St Paul. Hope to see you there!

Abstract:

Defining functional security requirements is a key component of Behavioral Security Modeling, a method to improve security by accurately modeling human/information interactions in social terms. The paper proposes a practical, SDLC-agnostic method for gathering functional security requirements by establishing limits on interactions through a series of questions designed to identify, clarify, and uncover hidden constraints. Five categories of constraints are presented, along with advice and “requirement patterns” to facilitate discussions with stakeholders and translate business needs into unambiguous security requirements. General advice on improving constraints, implementation considerations, security actions, quality assurance, and documenting postconditions is also discussed.

Version 1.0 disclaimer: this white paper attempts to formally capture our collective knowledge on how to effectively define functional security requirements. The next step is to test the theory by implementing the approach in a number of application development environments.

Paper:

Behavioral Security Modeling: Functional Security Requirements

Written by JohnB

May 8th, 2012 at 5:31 am

SIRACon: Organization of Risk Management Programs

Copyright © 2012 Transvasive Security. All rights reserved.

I spoke today (May 7, 2012) at SIRACon, the first ever conference of the Society of Information Risk Analysts. Here is the description I submitted for the talk – it is fairly close to the final product:

Effective, established Risk Management practices fall into two major categories: management of risk due to accidental damage (safety) and management of risk due to threats (protection). This talk will present the case that these are two distinct methodologies, and all information risk management should be divided into protection functions (like the Secret Service) and safety functions (like the Aviation Industry), staffed by different people if possible, due to the differences in approach, available data, threat behavior, and the cognitive biases of the risk analysts themselves.

I’ve uploaded copies of the talk to my site: Organizing Risk Management Programs, Or, What I learned from the Aviation Industry and the US Secret Service.

I really enjoyed the day’s talks and appreciated all the different perspectives; they all help with our still-immature business of information risk analysis and information risk management.

I believe there will also be a video of my talk; I’ll post a link once it becomes available.

Written by JohnB

May 7th, 2012 at 12:00 am

Upcoming Talks in 2012

Copyright © 2012 Transvasive Security. All rights reserved.

I’m pleased to announce three upcoming speaking engagements in 2012!

First, I’ve been busy working with Karl Brophey on the Behavioral Security Modeling whitepaper I promised back in September 2011 at OWASP AppSec USA here in Minneapolis. Karl has a wealth of experience in software development and architecture, and we will be publishing the paper and giving a presentation at Secure360 in St Paul on May 8. If you are going, make sure to register for the Secure360 Run/Walk for ECHO!

Second, I’ll also be speaking the day before (on May 7) at SIRACon, the first-ever conference of the Society of Information Risk Analysts, on “Organizing Risk Management Programs, or, What I Learned from the Secret Service and the Aviation Industry,” where I will make the case for splitting up risk management into two separate functions: information protection (like the Secret Service), and information safety (like the airline industry). While I’m excited to be speaking, I’m even more excited to see the other talks, given by Risk Management thought leaders from around the country.

Finally, I just learned today that my proposal for the ISC2 Security Congress in Philadelphia was accepted, and I’ll be speaking on September 10 on “Defending Against Attacks by Modeling Threat Behaviors,” which will demonstrate how knowledge of attacker behaviors can be used to evaluate and improve application and infrastructure design. It’s my attempt to improve upon traditional threat modeling. The ISC2 Security Congress is co-located with the ASIS International conference, and I’m looking forward to attending talks from the world of physical security.

Written by JohnB

April 20th, 2012 at 12:19 am

Some random ideas from RSA 2012

This was my first time at RSA; I had always managed to find an excuse to avoid it, especially because it always seemed to be a really big conference. It is. Really big: the largest vendor floor I’ve seen at a security conference. One of the speakers, Misha Glenny, mentioned that Information Security is a $100 billion industry worldwide and, despite the recession, is growing at 6-8% annually in the developed world and 10-15% in the developing world. I feel fortunate to have ended up in a field that is both interesting and in demand. By some counts, attendance was in excess of 20,000 people, although many of those were likely free “expo only” passes. “Big Data” was the most-hated buzzword of the conference, eclipsing “APT.” My overall impression: we’re all still struggling with mostly the same issues.

I spent my first day at RSA (I arrived early) at Mini-Metricon 6.5, which was originally started by Andrew Jaquith, who literally wrote the book on security metrics. It was an all-day pre-conference session with a good group of interested security professionals. Talks were short, but led to some of the best highlights of the conference.

Highlights from the talks:

  • Bob Rudis and Albert Yin of Liberty Mutual, and John Streufert, DHS (formerly State Dept) spoke on their experiences with vulnerability reporting – more on that later.
  • Steve Kruse and Bill Pankey spoke on Assessing User Awareness. I liked their approach of testing awareness by presenting mock security scenarios and scoring them based on appropriate behavioral responses.
  • Jennifer Bayuk’s survey of Security SMEs provided a good consensus on what’s important in Information Security.
  • Andrew Jaquith talked about What We Can Learn from Everyday Metrics. Now I know why Perimeter has such great reports!
  • The day was capped off with the awards for the Best and Worst Data-Driven Security Reports of 2011. Aligning perfectly with how I would have voted, the “Best” winner was the Verizon DBIR, and the “Worst”: Ponemon Institute 2010 US Cost of a Data Breach. Larry, Larry, Larry.

By far the biggest idea of the day, and of the conference, was seeing again, for the first time, the work John Streufert’s team at the US Department of State did developing iPost, the centerpiece of their Continuous Risk Monitoring program. I believe I saw John’s presentation once before, but for whatever reason, I missed the point the first time around. Seeing his presentation at Metricon, especially after Liberty Mutual’s Bob Rudis and Albert Yin spoke about “Using Peer Pressure to Improve Security KPIs,” I understood the value of iPost.

Bob and Albert spoke briefly about their experiences with reporting metrics on vulnerability scans: at first they weren’t very successful, but when they changed their reporting approach to show two key factors, the results improved dramatically:

  • Show how vulnerability scores change over time, and,
  • Show the relative performance of different departments.
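As a rough illustration of those two factors, a minimal report along these lines might look like the sketch below (every department name and count here is invented for illustration):

```python
# Hypothetical sketch of the two reporting factors: how each department's
# score changes over time, and where it stands relative to its peers.

history = {  # department -> open-vulnerability counts per scan, oldest first
    "finance": [120, 95, 60],
    "it":      [80, 85, 70],
    "hr":      [40, 35, 20],
}

# Most recent count per department
latest = {dept: scans[-1] for dept, scans in history.items()}

# Rank departments best-first (fewest open vulnerabilities wins)
for rank, dept in enumerate(sorted(latest, key=latest.get), start=1):
    change = history[dept][-1] - history[dept][0]
    print(f"{rank}. {dept}: {latest[dept]} open vulns ({change:+d} since first scan)")
```

Publishing the ranking to everyone is what creates the peer pressure; the trend column shows each team whether its own work is paying off.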

While Liberty Mutual demonstrated good reporting, the State Department took it to a whole new level of sophistication. John has been honored as a security leader for his work, an honor well deserved. State created a “risk market” by weighting all vulnerabilities with carefully chosen values, scoring each embassy, and grading each embassy’s score on a curve with a letter grade, A-F. The iPost reporting tool allowed individual embassies to quickly drill down to identify the vulnerabilities that were the largest contributors to their scores.

The effects were dramatic: in the first 12 months of the program, State saw an 89% reduction in vulnerabilities at domestic sites and a 90% reduction at foreign sites. The beauty of their method is that, through the risk market, the security staff were able to communicate both the vulnerabilities that needed to be fixed and, through the weightings, the relative importance of fixing them, while giving the teams full discretion over when and how they fixed the problems. State even used this to their advantage: during the Aurora attacks, they raised the score of MS10-018 to 40 times normal, which drove patch compliance from 20% to 85% in 6 days. As an economist, I was struck by how an engineered marketplace could drive results more effectively than central planning.
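A toy sketch of the risk-market mechanics described above (the weights, embassy names, and grading cutoffs here are invented; the real iPost scoring is far more sophisticated):

```python
# Hypothetical "risk market": each vulnerability carries a chosen weight,
# each site's score is the sum of the weights of its open vulnerabilities,
# and sites are graded A-F on a curve against their peers.

def site_score(open_vulns, weights):
    """Sum the weights of a site's open vulnerabilities."""
    return sum(weights[v] for v in open_vulns)

def grade_on_curve(scores):
    """Assign A-F by rank: a lower score (fewer/lighter vulns) is better."""
    grades = "ABCDF"
    ranked = sorted(scores, key=scores.get)  # best (lowest) score first
    return {
        site: grades[min(i * len(grades) // len(ranked), len(grades) - 1)]
        for i, site in enumerate(ranked)
    }

# Illustrative weights; raising one (as State did for MS10-018 during
# Aurora) immediately pushes it to the top of every site's fix list.
weights = {"MS10-018": 40.0, "MS11-003": 1.0, "CVE-2011-0611": 3.0}

sites = {
    "embassy_a": ["MS11-003"],
    "embassy_b": ["MS10-018", "CVE-2011-0611"],
    "embassy_c": ["CVE-2011-0611"],
}

scores = {site: site_score(vulns, weights) for site, vulns in sites.items()}
print(grade_on_curve(scores))
```

The point of the weights is that they are the communication channel: security staff set priorities once, centrally, and every site’s grade then reflects those priorities without anyone dictating how or when each site remediates.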

Bottom line: when comparing departments to each other, the social pressure had a big effect on patching rates. Although I’ve lost the reference, this was the approach that NASA took in the late ’90s when they started one of the first vulnerability management programs. NASA security staff reported on vulnerability rates by department, which led to competition to see who could get the lowest score. The State Department approach was similar, delivering a report broken down by department (embassy) to everyone, so ambassadors could see their performance relative to their peers. I credit the success of the vulnerability program I started in 2001 in large part to the report we developed, which was also broken down by department.

The experiences of Liberty Mutual, the State Department, and my own program share some additional key factors that I believe led to our success: we worked with the teams responsible for fixing the vulnerabilities before launching the management report, made sure they understood how they could improve their “score,” and confirmed that they were able to do so. In our program, I spent considerable time working with the engineers and even generated two reports: an early report for the engineers, and a later report (after a second scan) that went to senior management, giving the engineers time to fix issues before the report went to their boss. I firmly believe that this is the formula for vulnerability management.

Moving on to the actual conference, the keynotes on day one were about what I expected, and largely an idea-free environment. The afternoon was better, and I attended the Risk Management Smackdown II and Vulnerability panels, but didn’t get much new material.

Dan Kaminsky is always a good speaker, so I attended his random-ideas talk on Wednesday morning. I liked his point that passwords actually work very well (thank you very much), partly because they’re so cheap to implement. I really liked his analysis of DNSSec vs. SSL: he hypothesizes that DNSSec will eventually replace the SSL Certificate Authorities in validating website addresses, because there are fewer trust relationships for companies to manage; with DNS, there’s only one entity we need to trust for .com, which is much easier (and therefore cheaper) than trusting every CA.

The rest of the day was less memorable, but I enjoyed the B-Sides panel with Amit Yoran, Kevin Mandia, Ron Gula and Roland Cloutier – although nobody could really answer my question on how to set up a cyberintelligence capability, I did learn about Mandiant’s OpenIOC, which is promising.

David Brooks gave the final keynote of the day, and spoke about topics from his new book, The Social Animal. David is an entertaining and engaging speaker, and while he didn’t directly relate the ideas from his book to security, I bought the book on the strength of his talk (I haven’t read it yet). Both the talk and the book draw from the same contemporary research in brain science, cognitive theory, and behavioral economics that has heavily influenced my work in Behavioral Information Security.

I spent the bulk of Thursday in a small group discussing risk management. It’s both an old and new area for Information Security, and what I noticed most is how difficult the field is. We’ve come a long way from ALE, but there’s still no shortage of problems to solve. If you’re interested in helping, I would encourage you to join SIRA, the Society of Information Risk Analysts, and get involved. The first SIRA conference, SIRACon, will be held in St Paul, MN the day before Secure360.

Friday was a short day. I liked Dave Aitel’s talk on organizations as cyberweapons (think The Pirate Bay and Wikileaks), and Misha Glenny’s commentary on “Understanding the Social Psychology of Hackers,” where he made the case that there is a difference between “Hackers,” who are largely motivated by solving puzzles and follow an escalating path into criminal activity, and “Social Engineers,” who are motivated only by criminal financial gain. The final keynotes were Hugh Thompson and Tony Blair. Tony said virtually nothing about security but was an excellent speaker, and Hugh did two interviews: one with Daniel Gardner, the author of The Science of Fear (a book I’ve read and highly recommend), and one with Frank Luntz, the master of word-manipulation, an interview that can only be described as bizarre.

And that was it for RSA 2012. If all goes well, you can look forward to a recap of SIRACon and Secure360 in May!

Written by JohnB

April 6th, 2012 at 2:22 pm

On Money Mules and Credential Theft

Copyright © 2012 Transvasive Security. All rights reserved.

A Threatpost article, “Money Mules, Not Customers, The Real Victims of Bank Fraud,” and the paper it references caught my attention today. The premise of the paper is that due to banking regulations and how banks react to fraudulent online transactions affecting consumer accounts, the criminals are effectively stealing not from consumers, but from the “money mules” they recruit to move the stolen money. Brian Krebs, a journalist and blogger who writes about the online criminal underground and information security issues on his blog, Krebs on Security, posted a comment criticizing the authors’ conclusions, specifically calling out that the main victims of theft of banking credentials are small and mid-size business owners, who are liable for losses and have lost significant amounts of money. I’ve reposted my reply in part below. I largely agree with Brian; however, I do think the authors raise good points about the difficulty of moving money through the banking system, and about the critical role mules play in online bank fraud.

@Brian,

Your point on the fraud losses to small and mid-size business owners with corporate banking accounts is spot-on, and while the paper makes it clear they are mainly addressing the consumer problem, it’s a fair criticism that they’re glossing over a significant portion of online banking fraud, and that they misrepresent the facts by citing the instances in which fraudulent transactions on commercial accounts were reversed, and not the transactions that couldn’t be reversed.

However, I do believe the paper raises an excellent point about online consumer banking fraud, and online banking fraud in general. It is difficult to transfer money out of accounts; the mules really do bear much of the risk, rarely get paid (as you have noted), and sometimes may not realize what they’re doing is illegal. Their point on the low black-market value of stolen credentials relative to account value does indicate that extracting money is difficult and unlikely to succeed. Even though their rationale (that the way banks resolve fraudulent transfers means attackers are effectively stealing from the mules) applies only to consumers, I welcome the suggestion that we attack the problem at other points in the chain, and not just passwords. We may do better to disrupt online banking fraud by putting more effort into making mule recruitment harder.

I would also raise a point not yet covered in the article or the comments: I take issue with the authors’ comments on liability. The auto rental and identity theft insurance markets have little bearing on banks’ decision to offer zero-dollar liability; the reality is, when consumers’ liability is limited by regulation to $50, offering the extra $50 is trivially inexpensive. When banks aren’t legally obligated to bear liability, they quite willingly shift it to the account holder, as is the case for US commercial bank accounts. I for one would very much like to see regulators force the issue and limit liability for at least small and mid-size businesses, since they’re simply not equipped to handle this type of fraud on their own.

Written by JohnB

March 28th, 2012 at 10:10 pm

Threat Modeling

Copyright © 2012 Transvasive Security. All rights reserved.

Recently, I read and commented on a series of posts at The New School blog: Threat Modeling Fails In Practice, On Threat Modeling, and Yet More On Threat Modeling: A Mini-Rant. After reading both sides of the argument, I concluded that while threat modeling can be helpful, we need to find a better way that doesn’t require us to brainstorm. Imagining the threats begets imaginary threats. I strongly believe that, because of our cognitive errors in estimating risk, brainstorming threats is a mistake and will inevitably lead to guessing what the threats will be, guesses that are at best only slightly better than random chance.

To that end, I believe that some of my recent work in Behavioral Security Modeling (BSM) may be part of the solution. Threat modeling needs to be deconstructed and integrated directly into the software development life cycle (SDLC). Some of the benefits provided by threat modeling in general, and STRIDE specifically, include identifying missing requirements and potential quality/safety issues, something that BSM is designed to help with, and I’ve got some ideas on how to address the other elements.

Work is slowly progressing on the BSM white paper that I am using to develop and refine the ideas from my original Behavioral Security Modeling presentation, and I’ve enlisted a collaborator with strong application development experience. We’ve already discussed threat modeling, and if it’s not directly addressed in our white paper or the presentation (we’ll be speaking at Secure360!), it certainly will be in the framework we’re building behind the scenes.

Written by JohnB

February 8th, 2012 at 9:16 pm

Video from AppSec USA 2011 now available

Copyright © 2011 Transvasive Security. All rights reserved.

OWASP has posted video from my talk at AppSec USA 2011. I haven’t built up the nerve to watch it yet (who likes to watch themselves?), so I can’t say how good it is, but hopefully it is interesting and informative. Update: it seems the video is just slides & audio, which is probably a good thing. Second update: I’ve been told I do appear in the video; I probably should watch more of it before updating.

Behavioral Security Modeling Video

I encourage you to peruse the talks list and watch the talks you may have missed (if you were able to attend), or anything that looks interesting (if you were not). This was my first experience with OWASP, and I have to say I was impressed by both the openness and the professionalism. Thanks to everyone in OWASP MSP who helped make AppSec 2011 a great success!

Written by JohnB

November 17th, 2011 at 12:28 am

Introduction to Behavioral Information Security Presentation (updated)

Copyright © 2011 Transvasive Security. All rights reserved.

I spoke yesterday at the local (Minnesota) chapter of ISSA as a last-minute replacement for David Bryan. I want to thank MN ISSA for the opportunity to speak; I thought the talk generated some good discussion. Here are the slides from the talk; they’re an updated version of what I posted in June.

Behavioral Information Security: An Introduction

I also want to thank Kevin Flanagan from RSA for his excellent talk on the RSA breach. For me, it served as a reminder on the critical security controls needed to protect against attacks, both sophisticated and unsophisticated. It was telling that most of the things on his summary of critical security controls were already in existence 10 years ago.

Updated: MN ISSA has posted a video of my talk

Written by JohnB

November 17th, 2011 at 12:00 am

Behavioral Security Modeling Presentation

Copyright © 2011 Transvasive Security. All rights reserved.

Here are the slides from my talk at AppSec USA 2011.

Behavioral Security Modeling: Eliminating Vulnerabilities by Building Predictable Systems

Written by JohnB

September 26th, 2011 at 8:18 pm
