A Trip To Reggio Di Calabria

This was a view from the conference’s break area — that’s the Mediterranean and Sicily in the distance.

I recently went to Reggio di Calabria, a city in southern Italy, to present a paper I wrote at a workshop / conference. The trip was good, and (predictably, for southern Italy in August) quite hot.

I spent an afternoon and evening in Rome before taking a train south to Reggio Calabria. While in Rome, I did a little bit of sightseeing and visited the National Gallery of Ancient Art, ate at some local cafes, and tried to nap away the jetlag. Once in Reggio Calabria, I:

  1. Learned about the custom of leaving hotel keys at the front desk;
  2. Tried, without success, to figure out whether I could actually drink the tap water: no one would say it was unsafe, but everyone recommended just buying bottled water;
  3. Then tried, with some success, to figure out the meanings of the different types of bottled water, or at least “oligomineral” water;
  4. Visited the National Museum of Magna Graecia, which was way more impressive than I had even hoped for; and
  5. Was totally blown away by the horizon every time I looked at it. The Mediterranean with Sicily in the background consistently looked like a postcard.

The conference I was at also had an organized outing to the town of Scilla, where we walked through the fishing village / district Chianalea and climbed to the top of the ancient Castello Ruffo.

The only other notable event was getting my debit card skimmed at an ATM in Rome on my return trip, although I wouldn’t find out about that for a couple of weeks. Despite that, it was a great trip — I watched a number of really interesting talks, met great people, and of course the food (especially the gelato!) and scenery were just incredible.

Outdoors in Rome

National Gallery of Ancient Art

Reggio Di Calabria

Scilla, Italy

National Museum of Magna Graecia

 

A sign directing travelers in Rome’s Leonardo da Vinci Airport to the international check-in

Simultaneous Analysis of Safety and Security of a Critical System

This post was originally published on the SEI Insights Blog.

As computers become more powerful and ubiquitous, software and software-based systems are increasingly relied on for business, governmental, and even personal tasks. While many of these devices and apps simply increase the convenience of our lives, some—known as critical systems—perform business- or life-preserving functionality. As they become more prevalent, securing critical systems from accidental and malicious threats has become both more important and more difficult. In addition to classic safety problems, such as ensuring hardware reliability, protection from natural phenomena, etc., modern critical systems are so interconnected that security threats from malicious adversaries must also be considered. This blog post is adapted from a new paper two colleagues (Eugene Vasserman and John Hatcliff, both at Kansas State University) and I wrote that proposes a theoretical basis for simultaneously analyzing both the safety and security of a critical system.

Why is this an issue now?

One common way of determining the safety of a critical system is to perform what’s called a hazard analysis. There are a number of traditional hazard-analysis techniques; two of the most popular are failure modes and effects analysis (FMEA) and fault tree analysis (FTA). The development of these traditional techniques predates modern levels of interconnectivity, so most versions of the techniques do not explicitly address security concerns. This omission has been a problem in a number of domains ranging from industrial control systems, attacked by Stuxnet; to the smart power grid, which faces a number of challenges; to even personal medical devices that face attacks now that they expose functionality over wireless radios and the Internet.

What’s being done about it?

This rise in security threats has led some researchers to adapt traditional hazard-analysis techniques to include or focus on security-related issues. Christoph Schmittner and his colleagues showed how FMEA can be used for security analysis, and Philip Brooke and Richard Paige have demonstrated the use of FTA for secure system design.

But the field of hazard analysis research is moving in other directions as well. In addition to the inclusion or exclusion of security concerns, a second dimension of hazard analysis is the incorporation of systems theory.

Nancy Leveson from the Massachusetts Institute of Technology is perhaps the biggest proponent of the use of systems theory, which advocates a more holistic approach. Systems theory—as opposed to the analytic-reduction style of analysis used in the traditional scientific method—has been integrated into a new causality model and hazard analysis technique:

  • Systems-Theoretic Accident Model and Processes (STAMP) – A new causality model (initially presented here) that incorporates systems theory and shifts the focus from preventing safety problems to enforcing safety constraints.
  • Systems Theoretic Process Analysis (STPA) – A new hazard analysis technique based on STAMP. Note that while STPA did not directly address security concerns in its initial version, Leveson’s student William Young has begun work on a security derivative known as STPA-Sec.

Others are working in this area as well: Friedberg et al. developed their own security-and-safety derivative of Leveson’s technique called STPA-SafeSec. And, as part of my Ph.D. research before I joined the SEI, I created a refinement of STPA that’s focused on hardware- and software-based subsystems called the Systematic Analysis of Faults and Errors (SAFE).

Table 1: Safety and Security Analysis Techniques

                      | Safety-Focused Techniques | Security-Aware / Focused Techniques
Traditional Causality | FMEA, FTA                 | FMEA (Sec), FTA (Sec)
System Theoretic      | STPA                      | STPA-Sec, STPA-SafeSec, SAFE

What’s New in our Approach?

For this paper, our approach differed from previous efforts in that we were not attempting to prescribe an exact series of steps for analysts to follow when analyzing their systems. Rather, we examined the basis of one of the key elements shared by most analysis techniques: their use of guidewords. Guidewords are high-level concepts or terms that guide analysts to consider particular ways that a system can fail. These exist in both safety-analysis techniques—STPA has its Step 1 and Step 2 terms, FMEA has failure modes, and HAZOP is based around finding deviations from design intent using terms like Late and More—and security-analysis techniques (STRIDE is centered around the terms that make up its eponymous acronym). When most techniques were created, though, their guidewords were chosen in an ad hoc manner, rather than being directly traceable to existing literature.

Our work showed how SAFE, which is a guideword-agnostic analysis technique, could be used with a set of terms derived from one of the classic adversary models used in security. That is, guidewords can be supplied to SAFE at runtime as parameters (a concept we refer to in the paper as parametricity), rather than being ad hoc and essentially inseparable from the technique itself. The classic security model we used is the one proposed in 1983 by Danny Dolev and Andrew C. Yao that describes the actions an adversary could potentially take if the analyzed system communicates over a non-trusted network. For many systems, even those that do not use the Internet, this is a reasonable assumption: keeping an entire network perfectly secure is often prohibitively hard. What’s more, the attack types that arise from the Dolev-Yao model are so foundational that they can map cleanly (if informally, in this work) to concepts from both system safety and network security. Table 2 shows this mapping:

Table 2: Mapping between system safety, Dolev-Yao, and network security concepts

System Safety        | Dolev-Yao       | Network Security
None                 | Read            | Violate Privacy
Corrupt Value        | Modify Existing | Craft Arbitrary Packets
Late/Dropped Message | Delay/Drop      | Increased Latency/Packet Loss
Early Message        | Craft and Send  | Impersonate, Deny Service

The adversary described by Dolev and Yao’s model controls a compromised component on a network. It can read any message, modify messages before they are received by their intended recipient, delay those messages (possibly indefinitely, effectively dropping the message), and craft/send custom messages to impersonate legitimate users of the network.
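To make the mapping and the idea of parametricity a little more concrete, here is a minimal sketch (not from the paper, and not the API of any SAFE tooling) of how the Dolev-Yao capabilities in Table 2 could be encoded as a guideword set and handed to a guideword-agnostic analysis step as a runtime parameter. The enum members, the analyze_component function, and the component name are all illustrative assumptions.

```python
from enum import Enum

# Hypothetical encoding of the Dolev-Yao capabilities from Table 2; the names
# and the analysis stub below are illustrative, not any published SAFE API.
class DolevYaoGuideword(Enum):
    READ = "Read"                        # safety: none; security: violated privacy
    MODIFY_EXISTING = "Modify Existing"  # safety: corrupt value
    DELAY_OR_DROP = "Delay/Drop"         # safety: late/dropped message
    CRAFT_AND_SEND = "Craft and Send"    # safety: early message; security: impersonation

def analyze_component(component: str, guidewords) -> list[str]:
    """Guideword-agnostic step: produce one failure scenario to examine per term."""
    return [f"{component}: consider an input affected by '{g.value}'" for g in guidewords]

# The guideword set is supplied as a parameter, so the same procedure could be
# rerun with STRIDE- or HAZOP-style terms instead of the Dolev-Yao set.
for prompt in analyze_component("pump command input", DolevYaoGuideword):
    print(prompt)
```

The point of the sketch is only that the guideword set lives outside the analysis procedure, which is what lets it be swapped for another source of terms.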

Payoffs

We believe that there are a number of benefits to a guideword-based, safety- and security-aware component-focused analysis like SAFE.

State space compression diagram
Figure 1: Compressing the input space

 

  1. Merging safety and security—Perhaps most obviously, a safety and security co-analysis like STPA-SafeSec or SAFE benefits from simultaneously considering both aspects in a couple of key ways:
    1. Reduced overhead—Rather than perform separate analyses for safety and security, performing a single co-analysis means less rework. Many basic problems should be discovered by both analysis techniques, and that duplicated effort can mean less time is spent finding more subtle problems.
    2. Fewer problems get missed—As Friedberg et al. discuss, a system’s safety and security properties often depend on and interact with one another; considering them separately can mean these interactions get missed.
  2. Analysis space reduction—A SAFE analysis moves backwards through a system, starting with actuation points (i.e., the components a system uses to affect its environment), and then considers the sources of the inputs to those actuation points. As there are myriad ways inputs to a component can fail, considering them all can be prohibitively hard. Instead, we advocate and explain how classifying inputs—using an effects-focused guideword set like the one derived from Dolev and Yao’s model—can reduce the number of errors to analyze, effectively compressing the analysis space. Figure 1 shows this graphically for a time- and value-sensitive input: the input’s value, its timing, or both can be wrong. Rather than focusing on the precise magnitude of the delay or value error, however, simply classifying the input as “too late” or “too high” produces a manageable number of failure cases (a small sketch of this classification follows this list).
  3. Independence of effects-based analyses—Note that the input failure classifications from Figure 1 say nothing about the cause of the failure. That is, whether an input message is delayed by a network failure, an active adversary, or even a problem in a component’s hardware, the result—delayed input—is the same. This means a component can, to some degree, be analyzed independently of the components that produce its input. In other words, regardless of what causes a failure, we can typically guarantee that its effects will manifest in one of the ways contained in our guideword set.
  4. System safety and formal methods—There is an exciting connection between our work and similar work by John Rushby taking place in pure software systems using bounded model checking. At a high level, both techniques allow designers to derive the assumptions that their systems/components rely on for safe operation.
  5. Explicit incorporation of security model—Explicitly incorporating an adversary model has two benefits:
    1. Knowing what to expect—Most obviously, it guides system designers to consider the actions the adversary is able to take. That is, since the Dolev-Yao model says an adversary can snoop on messages sent on the network, the need for encryption immediately becomes clear.
    2. Knowing what not to expect—Less obvious, but just as important, is what an adversary model rules out. The Dolev-Yao model, for example, describes a network-based adversary with no direct access to components themselves. Making this explicit means it can be discharged as a formal environmental requirement (i.e., an assumption) such as “physical security for the component must be provided.”
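Here is a rough sketch of the classification idea from Figure 1, assuming a hypothetical time- and value-sensitive input. The thresholds, class names, and the classify_input function are illustrative choices made for the example, not part of SAFE itself.

```python
# Hypothetical sketch of the input-space compression in Figure 1: rather than
# reasoning about every possible magnitude of value or timing error, classify
# a received input into a few guideword-style buckets. The thresholds and
# class names are illustrative assumptions, not values from the paper.
def classify_input(value: float, arrival_time_s: float,
                   value_range=(0.0, 10.0), time_window_s=(0.0, 0.5)) -> set[str]:
    classes = set()
    low, high = value_range
    if value < low:
        classes.add("value too low")
    elif value > high:
        classes.add("value too high")
    earliest, latest = time_window_s
    if arrival_time_s < earliest:
        classes.add("too early")
    elif arrival_time_s > latest:
        classes.add("too late")
    return classes or {"nominal"}

# A badly delayed, out-of-range sample still collapses to just two failure
# classes ("value too high" and "too late"), no matter how late or how wrong it is.
print(classify_input(value=42.0, arrival_time_s=3.0))
```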

Evaluation

PCA Interlock Overview
Figure 2: An integrated medical application

In the initial presentation of SAFE, the evaluation was based on an analysis of hazards in a system of interconnected medical devices and governing software. The motivation and details of the distributed medical application aren’t germane to this blog post, but a high-level overview is provided in Figure 2. For this work, we adapted the previous analysis from my dissertation: we selected a single element of the system and repeatedly re-analyzed it using SAFE with different guideword sets derived from a range of sources. These included the following (the columns of Table 3): the Dolev-Yao model, Avižienis et al., STPA-SafeSec, and STPA / STPA-Sec.

Our evaluation was based on the likelihood that an analyst, following the process of SAFE, would detect hazards leading to various design improvements. These possible improvements include the following (a rough sketch combining several of them appears after the list):

  • Alarming—The system should be able to issue an alarm to notify a clinician of a problem it cannot independently resolve.
  • Timeouts—The critical part of the system should enforce minimum message inter-arrival times to prevent message overload.
  • Timestamps—Messages are time sensitive, so they should be timestamped to prevent late (or early) messages from being acted upon.
  • Hashed/Signed Messages—Messages should be cryptographically signed and hashed to prevent forgery or modification.
  • Encrypted Messages—Messages should be encrypted to prevent snooping on potentially sensitive message values.
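As a rough illustration (not code from the paper or from the medical system itself), the sketch below shows how several of these improvements (timeouts, timestamps, and hashed/signed messages) might be combined into a single message-acceptance check. The constants, the shared-key scheme, and the accept_message function are assumptions made for the example; key management, replay counters, alarming, and encryption are left out.

```python
import hashlib
import hmac
import time

# Hypothetical message-acceptance check combining several of the improvements
# above: a minimum inter-arrival time ("Timeouts"), a freshness check
# ("Timestamps"), and an HMAC tag ("Hashed/Signed Messages"). All constants
# are illustrative assumptions.
MIN_INTERARRIVAL_S = 0.1
MAX_MESSAGE_AGE_S = 2.0
SHARED_KEY = b"demo-key-for-illustration-only"

_last_accepted_s = 0.0

def accept_message(payload: bytes, timestamp_s: float, tag: bytes) -> bool:
    global _last_accepted_s
    now = time.time()
    if now - _last_accepted_s < MIN_INTERARRIVAL_S:
        return False  # arriving too fast: possible message overload
    if abs(now - timestamp_s) > MAX_MESSAGE_AGE_S:
        return False  # too late (or claims to be from the future)
    expected = hmac.new(SHARED_KEY, payload + repr(timestamp_s).encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False  # forged or modified in transit
    _last_accepted_s = now
    return True
```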

Of course, none of these design improvements are particularly novel, but this exercise wasn’t intended to come up with clever or unintuitive solutions to subtle problems in the medical system’s design. Rather, we were interested in finding a set of guidewords that would consistently suggest the broadest set of improvements.

Guidewords aren’t used in a vacuum, and hazard analysis isn’t a computerized process. Our evaluation was thus necessarily somewhat subjective—see Table 3 for the full result. We rated whether an analysis would be likely to suggest an improvement (denoted with a “✓”), might suggest the improvement (denoted with a “?”), or would likely not suggest the improvement (denoted with a “✗”). Of course, a particularly skilled or experienced system designer/analyst might come up with the design improvements regardless of the guideword set used; the terms are used only to guide analysts to think about particular classes of errors.

Table 3: Evaluation
Dolev-Yao | Avižienis et al. | STPA-SafeSec | STPA / STPA-Sec
Alarms
Timeouts ? ?
Timestamps
Negative Ticket Values ? ? ? ?
Hashed / Signed Messages ?
Encryption

Next Steps

In the next few months, we’ll explore how some of the foundational ideas from this work can integrate with ongoing projects here at the SEI. One promising direction is the integration of hazard/security analysis and semiformal architecture models such as those built in the SEI’s popular Architecture Analysis and Design Language (AADL). Not only does SAFE have an AADL-based implementation, but the SEI has also developed the Architecture-Led Safety Analysis (ALSA) technique. David Gluch, a fellow SEI researcher, and I are looking at how this technique might be adapted to also address security concerns; we expect to produce a technical note in a few months that describes what we’ve learned so far.

I’m also particularly interested in automating analyses, so that domain experts can most efficiently leverage their personal expertise and not have to learn a lot of computer science/hazard-analysis theory. To that end, I think the links between SAFE’s style of backwards-chaining analysis and Rushby’s assumption synthesis are particularly promising, and I want to continue exploring overlaps in that area as well.

Additional Resources

The paper on which this blog post is based, authored by Sam Procter, Eugene Vasserman, and John Hatcliff, was presented at SAW ’17, the 4th International Workshop on Software Assurance.

“This post (“Material”) was originally published as part of the SEI Blog series and is owned by Carnegie Mellon University and used with permission. This Material is provided on an “as-is” “as available” basis without any warranties and solely for your personal viewing and use. By viewing, downloading and/or using this video and related materials, you agree that you have read and agree to CMU’s terms of use (http://www.sei.cmu.edu/legal/index.cfm) for use of this Material.”

I got a job!

The Software Engineering Institute

After finishing my PhD, I got a job working at the Software Engineering Institute as an “Architecture Researcher.” I’m on the Architecture Led Modeling and Analysis team, where I’m privileged to work with people like Peter Feiler and others on — as the name implies — (software) architecture-centric modeling tools/techniques and analyses.

In some ways, the work will be very similar to what I did in graduate school —  safety assurance and hazard analysis techniques are a huge part of the work that the group does, so I’ve been able to hit the ground running. I’m working on other projects as well, of course, but nothing too far from the topics I studied at K-State. As the various projects that I work on become more mature, I’ll post more about them both here and on my research page.

Now that I’ve been here for a year (though it’s gone quickly!), I’ve found that there’s a lot to like about this job:

  • We’re part of Carnegie Mellon, which is a world-class university with a particularly strong computer science program. Being near that — attending talks, having the opportunity to take classes, etc. — is a real benefit.
  • There are no teaching responsibilities. I enjoyed the course that I helped teach, and I am definitely interested in teaching more in the future, but really being able to dig into research full time is wonderful.
  • I’ve grown to really like living in Pittsburgh! I have a walking commute, which is pretty great, and it’s a fun town with a lot going on, but it’s not so huge that I feel totally lost. I could say a lot more about Pittsburgh — and I hope to in a future post — but the upshot is that although I initially wasn’t sure what to expect, I’ve come to like it a lot.

SAFE and Secure: Deeply Integrating Security in a New Hazard Analysis

One of my dissertation’s main contributions was a new hazard analysis technique called the “Systematic Analysis of Faults and Errors” or SAFE. Hazard analysis techniques are, as I wrote about in 2014, structured ways of reasoning about and (typically) documenting the ways that things can go wrong in a given system. While traditionally these techniques have focused on safety problems — e.g., a part breaking down, or a communication link becoming overloaded — there is a growing recognition that security should be considered as well.

That’s not to say that developers of safety-critical systems hadn’t previously considered security important (although in some frightening cases that was true) but rather that the degree to which safety and security problems can be discovered by the same analysis technique is an active area of investigation. I referenced the idea that this overlap could potentially be addressed by SAFE in the ‘Future Work’ section of my dissertation, and it fit in nicely with some work that was being done at the SEI. As a result, my PhD advisor, one of my committee members, and I turned the results of that idea into a new paper that was recently accepted at the 4th International Workshop on Software Assurance (SAW).

The paper has three main ideas:

  1. It introduces a foundational, unified set of error/effect concepts based on Dolev and Yao’s seminal attacker model. These concepts are mapped to both the (network) security domain and the system safety domain, so we believe they can serve as the basis of any analysis technique that addresses the overlap of security and safety concerns.
  2. Their use is demonstrated in SAFE’s Activity 1, which considers how — irrespective of cause — a core set of error/effect concepts can be used to guide analysis of a component’s behavior when things go wrong.
  3. Attacker models (like Dolev and Yao’s) explicitly specify attacker capabilities. We demonstrate how SAFE’s Activity 2, which considers the effects of faults not caused by network-based stimuli, can use these attacker models to parameterize the analysis — making explicit assumptions about attacker/environmental actions that were previously implicit.
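To illustrate the third idea, here is a small hypothetical sketch of how an attacker model, treated as a set of capabilities, can turn everything it excludes into an explicit, recorded environmental assumption. The capability names and the environmental_assumptions helper are assumptions made for the example, not part of SAFE or the paper.

```python
# Hypothetical illustration: treat an attacker model as a set of capabilities;
# anything the model excludes becomes an explicit environmental assumption
# that the analysis records rather than leaving implicit.
DOLEV_YAO = {"read messages", "modify messages", "delay/drop messages", "craft/send messages"}
CAPABILITIES_CONSIDERED = DOLEV_YAO | {"physical access to the component", "insider at the console"}

def environmental_assumptions(attacker_model: set[str]) -> list[str]:
    """Capabilities outside the chosen model become documented assumptions."""
    return [f"Assumed absent (must be ensured externally): {capability}"
            for capability in sorted(CAPABILITIES_CONSIDERED - attacker_model)]

for line in environmental_assumptions(DOLEV_YAO):
    print(line)
```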

The workshop is in Reggio Calabria, Italy, so I’m headed over there at the end of August. I’m really looking forward to the trip, and the chance to talk about this work with other people working in the area.

I finished my PhD!

My dissertation defense (thanks to Dr. Bill Hsu for the photo)

On June 17, I successfully defended my doctoral research. Last week, I received final approval of my dissertation. It feels pretty weird to say it, but I now have a PhD, and am “Dr. Procter.”

I’m finding it hard to summarize the entirety of my grad school experience, so I won’t try. Rather, I’ll say that it was on the whole a very positive, enriching time and what I will miss the most are the people I met and spent time with while I was in Manhattan. I imagine that sounds sort of cheesy, but it’s true: most of the time I spent in my office reading papers or hammering out code or revising a draft has already started to blur together. Conversations with people — in the graduate school, at conferences, or just out and about in Manhattan (or Lawrence or Kansas City, where I frequently found myself) — remain quite vivid, though.

Another view of the defense (thanks to Dr. Bill Hsu for the photo)

In the big scheme of things, it of course hasn’t been very long: only a couple of months have passed since I was in front of everybody trying to explain what I’d spent the last 4.5 years doing. I’ll write another post soon about what I’m up to now (preview: I got a job and moved east!), but for now, I’ll just say that while I’m happy to be done, I already find myself missing a number of things about Manhattan (like Varsity Donuts!).

A trip to Delft

Delft and the Oude Kerk (Old Church) as viewed from Nieuwe Kerk (New Church)

In July I wrote about a paper covering my lab’s recent work with a concept called “Error Type Refinement.” Then, in late September, I got to travel to the city of Delft in the Netherlands and present the work. I presented at the ASSURE workshop, which was co-located with this year’s SAFECOMP conference. The workshop and conference were great — the presentations were really interesting, and I got to meet a lot of people working in the field (many of whom I knew previously only as names on papers that I had read!)

I also had a lot of fun exploring Delft — the city is very old, and has a lot of really cool history, including a couple of neat churches — the Nieuwe Kerk (or New Church, though construction actually started in 1396) and the Oude Kerk (or Old Church, which dates to 1246). I also got to see more modern buildings (like the University library, which is a giant cone built into a hillside) as well as very small ones at Madurodam, which is a theme park that has 1:25 scale replicas of famous Dutch buildings, roads, ships and trains.

It was really nice to have nearly everything in the city easily accessible by foot, though I felt like I was always at risk of getting run over by people on bicycles, which was quite a change coming from the US. The canals (and accompanying ducks and geese) around which the sidewalks and streets were laid out were also a very pleasant change from the more mundane Kansas.

All in all, it was a great trip, and I learned a lot — both about safety critical computing, and about the history of the Netherlands. You can check out the photos I took below!

Delft and its University

Nieuwe Kerk — Exterior

Nieuwe Kerk — Interior

Madurodam

Oude Kerk

Nieuwe Meer in Amsterdam

Error Type Refinement for Assurance of Families of Platform-Based Systems

The fault type hierarchy for timing-related errors, extended first for MAP Apps in general, and then for the PCA Interlock App

Last time I wrote about my work, I mentioned that we were using the architectural modeling language AADL to describe a particular type of distributed, compositional medical applications called Medical Application Platform (MAP) apps. One neat aspect of AADL is that the core language — which describes hardware and software architectures — can be extended by language annexes to cover other, related aspects of a system like its behavior (via the creatively-named “Behavior Annex”) or failure-related aspects (via the “Error Modeling” annex or “EM”). The EM has a number of neat features, not the least of which is its error type system.

Briefly, the idea behind the EM is that you can describe problems with your system, like a sensor failing and producing values that are wrong, or a communications channel delaying or dropping a message. These problems, termed “errors” in the EM, are made machine-readable by being described in a type system. The EM allows errors to be related to one another by three refinement mechanisms: extension, renaming, and aggregation — so you can actually create a full type lattice, if you’d like. The EM even comes with a standard set of errors, called the error library.

In a new paper, we looked at using the refinement mechanisms in the context of MAPs to specialize these fault types first by a component’s architectural category — the idea being that there are a finite number of categories in a MAP system: things like software application, network controller, medical device, etc. — and then by a component’s actual implementation. Component implementations, of course, have very concrete faults, because we know the actual message types various components will be sending and receiving. The list of errors produced by this refinement can then help a developer when it comes time to perform a hazard analysis (like FTA, FMEA, or STPA), regardless of which analysis or style (i.e., top-down or bottom-up) is being used.
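As a loose analogy (written in Python rather than the EM annex’s own syntax), the sketch below mimics the extension mechanism: a library-level timing error is specialized first for a MAP architectural category and then for one concrete app. All of the class names are illustrative assumptions, not types from the standard error library.

```python
# Loose Python analogy (not EM annex syntax) for the "extension" refinement
# mechanism: a generic, library-level timing error is specialized first for a
# MAP architectural category and then for one concrete app.
class TimingRelatedError:
    """Library level: an item arrives outside its expected time window."""

class MapAppLateDelivery(TimingRelatedError):
    """Category level: a MAP app receives a clinical input later than its period allows."""

class PcaInterlockLateReading(MapAppLateDelivery):
    """Implementation level: the PCA Interlock app gets a monitor reading too late to act on."""

# Walking the hierarchy recovers the refinement chain, which is the kind of
# concrete error list an analyst could consult during FTA, FMEA, or STPA.
print([cls.__name__ for cls in PcaInterlockLateReading.__mro__[:-1]])
```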

The paper was accepted to the upcoming ASSURE workshop in Delft, The Netherlands (co-located with SAFECOMP), so I’ll be headed out there in September to give a talk on it. Hopefully we’ll get some good discussion from the other workshop attendees, plus it’ll be fun to check out the presentations at the main conference, not to mention the city of Delft itself.

A trip to Lübeck, Germany

The Lübeck town hall

My work with medical devices took me to Lausanne, Switzerland, last month, and since I was already halfway around the world, my advisor and I decided it made sense to take a trip up north to Lübeck, Germany, to visit the medical device manufacturer Dräger. The work I did there was really cool — in contrast to the conference in Lausanne, where I was the only person talking about medical devices, the Dräger work focused on nothing else. Also, I had a little over an hour for my presentation instead of 12 minutes.

I really enjoyed meeting some people who work in medical device connectivity research at Dräger, and I hope that we can continue to interact in the future. I also really enjoyed exploring Lübeck, which, like Lausanne, had a number of neat historic buildings. Communicating with people was definitely easier in Germany than it was in Switzerland (with the exception of one very insistent fellow in a train station), so while there were still challenges — mostly related to getting around: the mass transit is impressive, but it’s also sort of difficult to understand at first — it was all in all pretty smooth.

I managed to grab another geocache in a really scenic downtown park, eat a ton of marzipan, and get (briefly) addicted to currywurst. I also went to a number of very cool churches, saw the famous city gate (“Holstentor”), and even walked through a puppet museum. You can see some photos of the churches and outdoor areas below!


Lübeck’s Churches


Lübeck’s Old Town

A trip to Lausanne, Switzerland

Lausanne, as seen from the Sauvabelin Tower

I recently mentioned that a paper I wrote got accepted to MEMOCODE, a conference in Lausanne, Switzerland. Having never been out of the country before, I found the prospect of a trip to Europe really exciting. It was also a little imposing — I would be traveling alone to a place with no knowledge of the local language (in this case, French). Unlike my trip to DC, where I was most anxious about my presentation, for this trip I was actually more anxious about the typically unimportant details — getting from one place to another, ordering food, etc.

I won’t say that those anxieties didn’t turn out to be at least somewhat warranted; the trip was at times quite challenging. At different times, I…

  • Accidentally ordered a dish served with a raw egg on top
  • Had to sprint through airports to make connections (in both Heathrow and O’Hare)
  • Had conversations entirely in French (thanks entirely to a phrasebook app on my phone)

But the benefits definitely outweighed the negatives.  The trip was an amazing experience, and the conference went really well — my presentation was unfortunately quite short (I was given only 12 minutes), but it went well and I got some good questions / discussion. I also really enjoyed seeing the sights in the area — the cathedral is amazing, as is the nearby Sauvabelin tower / park / forest (where I even found a geocache!).  On top of that, the food (particularly the fondue and the pasta) was delicious, and the local wines were also really good.

All that said, though, I’m happy to be home for a while, and back to business-as-usual.  You can check out the pictures I took below!


Montbenon


Lausanne Cathedral


Ouchy


Sauvabelin Park & Tower

An Architecturally-Integrated, Systems-Based Hazard Analysis for Medical Applications

An example STPA-style control loop, annotated with the subset of EMV2 and AADL properties from the paper

A few months ago, I wrote about my recent work on defining a subset of the language AADL to specify the architecture of bits of software (apps) that would run on medical application platforms (MAPs). Since then, I’ve been working on how developers can use these semi-formal architectural descriptions to do useful things. The first “useful thing” is integrating hazard analysis annotations with these architectural descriptions — that is, specifying how things could go wrong in the app.

Structured hazard analyses have been performed for over half a century now (some dating back to the late 1940s!) but in some ways are still the same as they were back then, in that they are still unintegrated with the system under analysis — that is, any analysis performed would live in a separate document (often a Word file). In programming terms, this would be like having a system’s documentation be separate from the implementation, which isn’t nearly as useful as techniques like Doxygen or Javadoc, where everything is more tightly integrated.

So, after looking at a number of hazard analysis techniques, I (and others in my research lab) settled on the relatively new, systems-focused Systems Theoretic Process Analysis (STPA). From there, we looked at tailoring it to the medical application development process, and how that tailored process could be integrated with the architecture specifications from our previous work. The result of this effort was a paper, which was recently accepted to the 2014 ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE) in Lausanne, Switzerland. I’m really excited to go and present.

My advisor has described the paper as “incredibly dense” (I blame page limits), so in the next few months I’ll be expanding it into a whitepaper that will hopefully be much clearer, and will be of use to our research partners in regulatory agencies.