One of my dissertation’s main contributions was a new hazard analysis technique called the “Systematic Analysis of Faults and Errors,” or SAFE. Hazard analysis techniques are, as I wrote about in 2014, structured ways of reasoning about and (typically) documenting the ways that things can go wrong in a given system. While traditionally these techniques have focused on safety problems — e.g., a part breaking down, or a communication link becoming overloaded — there is a growing recognition that security should be considered as well.
That’s not to say that developers of safety-critical systems hadn’t previously considered security important (although in some frightening cases that was true), but rather that the degree to which safety and security problems can be discovered by the same analysis technique is an active area of investigation. I raised the idea that SAFE could potentially address this overlap in the ‘Future Work’ section of my dissertation, and it fit in nicely with some work being done at the SEI. As a result, my PhD advisor, one of my committee members, and I turned that idea into a new paper that was recently accepted at the 4th International Workshop on Software Assurance (SAW).
The paper has three main ideas:
- It introduces a foundational, unified set of error/effect concepts based on Dolev and Yao’s seminal attacker model. These concepts are mapped to both the (network) security domain and the system safety domain, so we believe they can serve as the basis of any analysis technique that addresses the overlap of security and safety concerns.
- Their use is demonstrated in SAFE’s Activity 1, which considers how — irrespective of cause — a core set of error/effect concepts can be used to guide analysis of a component’s behavior when things go wrong.
- Attacker models (like Dolev and Yao’s) explicitly specify attacker capabilities. We demonstrate how SAFE’s Activity 2, which considers the effects of faults not caused by network-based stimuli, can use these attacker models to parameterize the analysis — making explicit assumptions about attacker/environmental actions that were previously implicit.
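To make the second and third ideas a bit more concrete, here is a minimal sketch (not from the paper — the names and the specific mapping are my own illustrative assumptions) of what it might look like to enumerate Dolev-Yao-style attacker capabilities and relate each one to a comparable safety-domain error concept, so that an analysis can be parameterized by the attacker model you assume:

```python
# Illustrative sketch only: hypothetical names, not the paper's actual
# formalization. Dolev and Yao's model gives the network attacker a
# fixed set of capabilities; each maps naturally onto a classic
# safety-domain error type.
from enum import Enum, auto

class Capability(Enum):
    """Network attacker capabilities in the Dolev-Yao model."""
    INTERCEPT = auto()  # read any message on the network
    DROP = auto()       # remove a message in transit
    MODIFY = auto()     # alter a message's contents
    INJECT = auto()     # fabricate and send new messages
    REPLAY = auto()     # resend a previously observed message

# One plausible mapping onto well-known safety error types
# (omission, commission, value, and timing errors).
SAFETY_ANALOG = {
    Capability.DROP: "omission error (expected message never arrives)",
    Capability.INJECT: "commission error (unexpected message arrives)",
    Capability.MODIFY: "value error (message content is wrong)",
    Capability.REPLAY: "timing error (stale message arrives late)",
    Capability.INTERCEPT: "no direct safety analog (confidentiality only)",
}

def effects_to_analyze(attacker_capabilities):
    """Given the capabilities of an assumed attacker, return the
    error/effect concepts the analysis would need to consider."""
    return [SAFETY_ANALOG[c] for c in attacker_capabilities]

# Parameterizing the analysis by attacker model: assuming a weaker
# attacker (say, one that can only drop traffic) yields a smaller
# space of effects to analyze than assuming full Dolev-Yao power.
print(effects_to_analyze([Capability.DROP, Capability.MODIFY]))
```

The point of a sketch like this is the parameterization: the assumption about what the attacker (or environment) can do is stated explicitly up front, rather than being implicit in which failure cases the analyst happens to think of.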
The workshop is in Reggio Calabria, Italy, so I’m headed over there at the end of August. I’m really looking forward to the trip, and the chance to talk about this work with other people working in the area.