An Architecturally-Integrated, Systems-Based Hazard Analysis for Medical Applications

An example STPA-style control loop, annotated with the subset of EMV2 and AADL properties from the paper

A few months ago, I wrote about my recent work on defining a subset of the language AADL to specify the architecture of bits of software (apps) that would run on medical application platforms (MAPs). Since then, I’ve been working on how developers can use these semi-formal architectural descriptions to do useful things. The first “useful thing” is integrating hazard analysis annotations with these architectural descriptions — that is, specifying how things could go wrong in the app.

Structured hazard analyses have been performed for over half a century (some date back to the late 1940s!) but in some ways they are still the same as they were back then: they remain unintegrated with the system under analysis — that is, the analysis lives in a separate document (often a Word file). In programming terms, this would be like keeping a system’s documentation separate from its implementation, which isn’t nearly as useful as tools like Doxygen or Javadoc, where everything is tightly integrated.

So, after looking at a number of hazard analysis techniques, I (and others in my research lab) settled on the relatively new, systems-focused Systems Theoretic Process Analysis (STPA). From there, we looked at tailoring it to the medical application development process, and at how that tailored process could be integrated with the architecture specifications from our previous work. The result of this effort was a paper, which was recently accepted to the 2014 ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE) in Lausanne, Switzerland. I’m really excited to go and present.

My advisor has described the paper as “incredibly dense” (I blame page limits), so over the next few months I’ll be expanding it into a whitepaper that will hopefully be much clearer, and of use to our research partners in regulatory agencies.

Tying A Build Together with Jenkins

Recently I wrote about the project I’m working on, and mentioned the range of technologies used in support of that effort. Since then, I’ve written about the building, testing and documentation tools I used, but today I’d like to discuss how everything is tied together using Jenkins. Jenkins is a tool that enables continuous integration — the practice of integrating all the parts of a project every time code is committed. It has a huge number of options and plugins and is crazily configurable, so I’ll just be talking about how I used it in the development of the MDCF Architect.

Building with Jenkins

Building a project with Maven in Jenkins is super straightforward — so much so that it’s arguably too vanilla to blog about. Since the MDCF Architect uses GitHub for its source control, I used the aptly-named “Git Plugin” for Jenkins. Once that was installed, I just pointed it at the repository URL and set Jenkins to poll git periodically. If new changes are found, they’re pulled, built, tested, and reported on. One thing I particularly like is the “H” option in the polling schedule — it lets the server determine a time (via a hashing function) to query git and start a build. This avoids the problem of a bunch of projects all trying to run at common times (e.g., the top of the hour) without forcing developers to set particular times for each project.
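For example, a schedule like the following polls roughly every fifteen minutes, with the exact minute picked by the hash (the interval itself is just an illustration, not what my project necessarily uses):

```
# Jenkins "Schedule" field for SCM polling: check git about every 15
# minutes, at a hash-determined offset rather than exactly on :00, :15, ...
H/15 * * * *
```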

My project’s Jenkins build configuration — 1 of 2

After polling git, I have one build step — invocation of two top-level Maven targets under the “uploadToRepo” profile, which triggers upload of the plugin’s executable and documentation. Also, since running the MDCF Architect test suite requires a UI thread, one additional build step is needed — running a VNC server. This can be tricky for Jenkins (which runs headlessly), but it’s solved by the Xvnc Plugin. I found this post really helpful in setting up this part of my project.

My project’s Jenkins build configuration — 2 of 2

Testing with Jenkins

The project’s tests are run by the “install” Maven target, and two post-build steps collate the testing and coverage reports. The first of these steps, JUnit test report collation, requires no plugin — you just have to tell Jenkins where it can find the report .xml files. The second step, generation of coverage reports from JaCoCo, is provided by the “JaCoCo Plugin.” Execution of the project under JaCoCo results in some binary “.exec” files that contain the coverage data — you have to tell the plugin where these files are, as well as the .class and .java files that your project builds to / from.  You can also set coverage requirements, though I chose not to.

My project’s Jenkins test configuration

Once everything is set up, your project will be building, testing, and deploying automatically, leaving you free to do other things, or just stare at some lovely graphs. Let me know if you have any feedback or questions in the comments section!

Some lovely graphs related to my project’s tests

Automating all Aspects of a Build with Maven Plugins

I’ve mentioned in recent posts that I wrote some software called the MDCF Architect for my research, and along with the implementation (an Eclipse plugin), I also built a number of supporting artifacts — things like developer-targeted documentation and testing with coverage information. Integrating these (and other) build features with Maven is often pretty straightforward because a lot of functionality is available as Maven plugins. So, today, I’m going to discuss how I configured three fairly common Maven plugins: “Exec,” “JaCoCo,” and “Wagon.”

Integrating Maven & Sphinx

Sphinx is a tool for generating developer-targeted documentation. I wrote about some extensions I made to it earlier this week, but today I’m going to talk about how I automated the documentation generation process. Initially I used the sphinx-maven plugin, though it uses an older version of Sphinx that was missing some features I needed. The plugin’s documentation has a page on how to update the built-in version of Sphinx, but I had some trouble getting everything to update correctly. Pull requests have been created that would solve this and other issues, but the plugin looks to be abandoned (or at least on hiatus).

So, since the native plugin wasn’t going to work, I needed to go to my backup plan — which meant running Sphinx via an external program call. Fortunately, this is easy to do with Mojo’s exec-maven-plugin, but on the other hand it means that the build now has an external dependency on Sphinx. I decided this was something I had to live with, and hooked the generation of documentation into the package phase of the Maven build. I also hooked Sphinx’s clean into the clean phase of the Maven build so that there wouldn’t be a ton of extra files laying around that required manual deletion. Here’s the relevant pom.xml snippet:
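In sketch form (the source and output paths below are assumptions, not the project’s actual layout):

```xml
<!-- Sketch: running Sphinx as an external program via exec-maven-plugin.
     Directory paths are placeholders. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <executions>
    <!-- Generate HTML documentation during the package phase -->
    <execution>
      <id>generate-documentation</id>
      <phase>package</phase>
      <goals><goal>exec</goal></goals>
      <configuration>
        <executable>sphinx-build</executable>
        <arguments>
          <argument>-b</argument>
          <argument>html</argument>
          <argument>${project.basedir}/src</argument>
          <argument>${project.basedir}/target/docs</argument>
        </arguments>
      </configuration>
    </execution>
    <!-- Remove generated files during the clean phase -->
    <execution>
      <id>clean-documentation</id>
      <phase>clean</phase>
      <goals><goal>exec</goal></goals>
      <configuration>
        <executable>make</executable>
        <workingDirectory>${project.basedir}</workingDirectory>
        <arguments><argument>clean</argument></arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Note that this is exactly where the external dependency shows up: the build machine needs sphinx-build (and make) on its path.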

Integrating Maven & JaCoCo

I think that code coverage is really useful for seeing how well your tests are doing, and after looking at some of the options, I settled on using JaCoCo. One thing I really like about it is that it uses Java Agents to instrument the code on the fly — meaning that (unlike when I was an undergraduate) you don’t have to worry about mixing up your instrumented and uninstrumented code. JaCoCo works by first recording execution trace information (in a .exec file) and then interpreting it, along with your project’s .java and .class files, to (typically) produce standalone reports. Since I’ll be building / testing via Jenkins, I just generated the execution traces and used the Jenkins JaCoCo plugin’s built-in report format.

I had a bit of a tricky time figuring out where exactly I should be using the JaCoCo plugin — I first tried putting it in my test project’s build configuration (pom.xml), but that meant that I only got coverage of the testing code itself instead of the business logic. Then I put it in the main plugin’s project, only to find that since that project didn’t have any tests (since the tests are in their own project) I had no coverage information at all. Finally I put the JaCoCo configuration in the top-level pom.xml (and none of the individual project files) and still had no execution information.  Turns out, both the Tycho testing plugin and JaCoCo modify the JVM flags when tests are run, and so you have to manually integrate them. Once I did that, everything finally started working.

I ended up with this in my top-level pom.xml:
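In rough form (the destination path is a placeholder; the property name matches the variable the testing plugin reads):

```xml
<!-- Sketch: JaCoCo's prepare-agent goal builds the JVM agent flags and
     stores them in a property instead of applying them directly, so they
     can be merged with Tycho's own flags. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>prepare-agent</id>
      <goals><goal>prepare-agent</goal></goals>
      <configuration>
        <!-- Where the binary execution trace ends up -->
        <destFile>${project.basedir}/target/jacoco.exec</destFile>
        <!-- Expose the agent flags under this property name -->
        <propertyName>sureFireArgLine</propertyName>
      </configuration>
    </execution>
  </executions>
</plugin>
```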

And this configuration for the Tycho Surefire (testing) plugin in the test project’s pom.xml (the custom flags I needed for Surefire are in the sureFireArgLine variable):
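Something along these lines (a sketch; the version property is assumed to be defined in the parent pom):

```xml
<!-- Sketch: Tycho Surefire picks up the JaCoCo agent flags via the
     property set by prepare-agent, so both tools' JVM flags coexist. -->
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-surefire-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <!-- Manually merge in the JaCoCo agent flags -->
    <argLine>${sureFireArgLine}</argLine>
    <!-- The MDCF Architect tests need a UI thread -->
    <useUIHarness>true</useUIHarness>
    <useUIThread>true</useUIThread>
  </configuration>
</plugin>
```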

Deploying Artifacts with Maven Wagon

Maven Wagon enables developers to automatically upload the outputs of their builds to other servers. In my case, I wanted to post both the update-site (that is, an installable version of my plugin) and the developer documentation I was generating. It took significant fiddling to get everything running correctly, but most of this was a result of the environment I’m working in — no matter what I did, it kept requesting a manually entered password. It turns out that the authentication methods used by my target server were non-standard, and it took a while to figure out how to get around that. I first found that I had to use Wagon’s external SSH interface, since some of the required authentication steps weren’t possible with the built-in client. I then ended up using an SSH key for authentication on my personal machine (and any non-buildserver device) and exploited the fact that the buildserver user has (restricted) write access to the web-facing directories.

Once authentication was hammered out, the plugin configuration was nested inside a profile element that could be activated via Maven’s -P switch:
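A sketch of that profile (the server id, directories, and URL below are placeholders, not the project’s real values):

```xml
<!-- Sketch: an upload profile activated with "mvn ... -P uploadToRepo".
     scpexe:// URLs use the external ssh client, which requires the
     wagon-ssh-external extension to be declared in the build as well. -->
<profile>
  <id>uploadToRepo</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>wagon-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>upload-update-site</id>
            <phase>deploy</phase>
            <goals><goal>upload</goal></goals>
            <configuration>
              <serverId>project-webserver</serverId>
              <fromDir>${project.build.directory}/site</fromDir>
              <url>scpexe://webserver.example.edu/var/www/update-site</url>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```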

So that wraps up three of the trickier plugins I used when automating MDCF Architect builds. As always, the full build configurations are available on GitHub, and let me know if you have any questions or feedback in the comment section!

Documenting a language using a custom Sphinx domain and Pygments lexer

Recently I’ve been looking at the software engineering tools / techniques I used when engineering the MDCF Architect (see my original post). Today I’m going to talk about Sphinx and Pygments — tools used by my research lab for developer-facing documentation.  Both of these tools work great “out of the box” for most setups, but since my project uses the somewhat-obscure programming language AADL, quite a bit of extra configuration was needed for everything to work correctly.

Sphinx is a documentation-generating tool that was originally created for the Python language documentation, though it can now support a number of languages / other features through the sphinx-contrib project. It uses reStructuredText, which I found to be totally usable after I took some time to poke around at a few examples. Since your documentation will probably have lots of code examples, it uses Pygments to provide syntax highlighting. Pygments supports a crazy-huge number of languages, which is probably one reason why it’s one of the most popular programs for syntax highlighting.

But, what do you do when you want to document a language that isn’t supported by either Sphinx or Pygments?  You add your own support, of course! Though it took quite a bit of digging / trial-and-error, I added a custom domain to Sphinx and a custom lexer for Pygments, and integrated the whole process so generating documentation is still just one step.

A Custom Sphinx Domain

Before I get into discussing how I made a custom Sphinx domain, let me first back up and explain what exactly a domain (in Sphinx parlance) is.  A full explanation is available from the Sphinx website, but the short version is that a domain provides support for documenting a programming language — primarily by enabling grouping and cross-linking of the language’s constructs. So, for example, you could specify a function name and its parameters, and get a nicely formatted description in your documentation (the example formatting has been somewhat wordpress-ified, but it gives an idea):

Description: Threads correspond to individual units of work such as handling an incoming message or checking for an alarm condition
Contained Elements:
  • features (port) — The process-level ports this thread either reads from or writes to.

There isn’t a lot of documentation for creating a custom Sphinx domain, but there are a lot of examples in the sphinx-contrib project. All of these examples, though, are built to produce a standalone, installable package that will make the domain available for use on a particular machine. Unfortunately, this would greatly complicate the distribution process of my software — anyone who wanted to build the project (including documentation) from source would have to install a bunch of extra stuff. Plus, this installation would need to be repeated on each of the build machines my research lab uses (there are nearly 20 of them, and all installation has to go through the already overworked KSU CIS IT) and any changes would mean repeating the entire process. Instead, I decided to just hook the custom domain into my Sphinx installation, and it turned out this was pretty easy to do. There are two steps: 1) develop the custom domain, and 2) add it to Sphinx.

Developing the Domain

I got started by using the GNU Make domain, by Kay-Uwe Lorenz, as a template; I found it to be quite understandable. From there I sort of hacked in some dependencies from the sphinx-contrib project (and imported others) until I had enough to use the  custom_domain  class. Then it was just the configuration of a few names, template strings, and the fields used by the AADL elements I wanted to document.  Fields, which make up the bulk of the domain specification, come in three kinds — Fields, GroupedFields, and TypedFields. Fields are the most basic elements, GroupedFields add the ability for fields to be grouped together, and TypedFields enable both grouping and a type specification.  I didn’t find a lot of documentation online, but the source is available, and pretty illustrative if you’re stuck.

Now you can use elements from these domains in your documentation pretty easily:
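For example (the directive and field names here are illustrative stand-ins for the ones my AADL domain actually defines):

```rst
.. aadl:thread:: PulseOx_Thread

   Checks incoming SpO2 values for an alarm condition.

   :feature incoming_spo2: The process-level port this thread reads from.
```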

A Custom Pygments Lexer

The Pygments documentation has a pretty thorough walkthrough of how to write your own lexer.  Using that (and the examples in the lexer file) I was able to write my own lexer with relatively little frustration.  When it came time to use my lexer in Sphinx, though, I ran into a problem similar to the one I had with the domain — in the typical use case, the lexer would have to be installed into an existing Pygments installation before the documentation could be built.  Fortunately, like domains, lexers can be provided directly to Sphinx (assuming Pygments is installed somewhere, that is).

Developing the Lexer

Pygments lexer development using the RegexLexer class is pretty straightforward — you essentially define a state machine in which regular expressions match your tokens (i.e., your lexemes) and govern transitions between lexer states. Here’s an excerpt of the full lexer:
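The excerpt was along these lines (a reconstruction sketch, not the project’s actual lexer; the keyword list is heavily abbreviated):

```python
from pygments.lexer import RegexLexer, words
from pygments.token import Comment, Keyword, Name, Punctuation, Text

class AadlLexer(RegexLexer):
    """A minimal AADL lexer sketch; the real lexer covers much more of the language."""
    name = 'AADL'
    aliases = ['aadl']
    filenames = ['*.aadl']

    tokens = {
        # Each entry pairs a regular expression with the token type it emits;
        # more complex lexers also transition between named states here.
        'root': [
            (r'--.*?$', Comment.Single),          # AADL line comments
            (words(('package', 'public', 'system', 'process', 'thread',
                    'features', 'properties', 'end'), suffix=r'\b'),
             Keyword),                            # abbreviated keyword list
            (r'[a-zA-Z][a-zA-Z0-9_]*', Name),     # identifiers
            (r'[:;,.(){}\[\]]', Punctuation),
            (r'\s+', Text),
        ],
    }
```

Feeding a snippet through `AadlLexer().get_tokens(...)` yields `(token_type, text)` pairs, which is all Sphinx needs for highlighting.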

Once available, using your lexer to describe an example is even more straightforward; you simply use the  :language:  directive:
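For instance, pulling in a source file with literalinclude (the path here is a placeholder):

```rst
.. literalinclude:: snippets/example.aadl
   :language: aadl
```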

Putting it all together

Once you have your domain and lexer built, you just need to make Sphinx aware of them. Put the files somewhere accessible (I have mine in a util folder that sits at the top level of my documentation) and use the  sphinx.add_lexer("name", lexer)  and  sphinx.add_domain(domain)  functions in the  setup(sphinx)  function in your conf.py file:
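Something like the following (the module and class names are assumptions; the real ones are in the repository):

```python
# conf.py -- Sphinx configuration (excerpt)
import os
import sys

# Make the util folder (which holds the domain and lexer files) importable
sys.path.insert(0, os.path.abspath('./util'))

from aadl_lexer import AadlLexer    # assumed module / class names
from aadl_domain import AadlDomain

def setup(sphinx):
    # Register the custom pieces with this Sphinx build only --
    # no system-wide installation required.
    sphinx.add_lexer('aadl', AadlLexer())
    sphinx.add_domain(AadlDomain)
```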

You can see an example of what this all looks like over at the MDCF Architect documentation, and you can see the full domain and lexer files on the MDCF Architect GitHub page.

Building an Eclipse Plugin with Maven Tycho

In a recent post, I wrote about my current research project: a restricted subset of AADL and a translator that converts from this subset to Java. Since AADL has a pretty nice Eclipse plugin in OSATE2, I think it’s pretty natural to build on top of that. Not only does this make for an easy workflow (no leaving your development environment when it’s time to run the translator) but I got a lot of things “for free” — like tokenization, parsing, and AST traversal. Since good engineering practices are pretty strongly encouraged / outright required in my research lab (yesterday’s post discussed testing), that meant I would need to learn how to build an Eclipse plugin automatically, so that everything could be automated using Jenkins (our continuous integration tool).

Integrating Tycho

Other projects in my research lab had used SBT for automated building, so that was where I started out. Unfortunately, there isn’t a lot of SBT support for building Eclipse plugins (there’s this project, but it seems to be abandoned), so after some googling, I ran across Maven’s Tycho plugin. It seemed to support everything I wanted, though with no Maven experience I found the learning curve a bit steep. I then ran across this tutorial, which really got everything rolling. I would find myself coming back to this article every time I wanted to automate another feature, like testing.

The basic idea behind Tycho is that it enables Maven to read build information (dependencies, contents of source and binary builds, etc.) from your plugin’s plugin.xml and MANIFEST.MF files. This greatly simplifies Maven’s build configuration specifications (the pom.xml files), since all you need to do is tell Tycho what kind of a project you’re building, and it takes it from there. For example, the entirety of my main plugin project’s configuration file is only 14 lines:
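A plausible reconstruction of that file (the group and artifact ids here are stand-ins, not the project’s real coordinates):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>edu.ksu.cis.projects.mdcf</groupId>
    <artifactId>parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>
  <artifactId>aadl-translator</artifactId>
  <packaging>eclipse-plugin</packaging>
</project>
```

Shared configuration (like the Tycho version and repositories) lives in the parent pom, which is why so little needs to be said here.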

Note in particular the packaging element (line 13), which specifies the type of artifact — other options I used were eclipse-feature, eclipse-update-site, and eclipse-test-plugin.

Using Maven

I won’t attempt to re-create the Vogella tutorial here, but I do want to mention a couple of general Maven things I learned:

  • I found the ability to use Eclipse’s P2 update sites as repositories (from which dependencies can be pulled) really helpful.  Since OSATE2 isn’t available from Maven’s main repository, I initially thought I’d have to somehow add (and maintain!) a bazillion .jar files to my build configurations.  Instead, I was able to simply use:
  • I put off learning about / using profiles as long as I could. Profiles let you specify how your build should change in different contexts, depending on things like the host OS, command line switches, etc. I probably should have learned about them sooner since they’re so powerful, but I’m glad I worked to generalize the build as much as possible, because they’re definitely a tool that can be overused.
    • When it was time to learn about profiles, I mostly used random examples from StackOverflow for the actual code, but I thought this article was particularly good on the philosophy behind profiles, and the “Further Reading” section has a lot of good references.
  • The two different profiles I did use were:
    1. A host-os activated profile to enable a Mac OS X specific option that’s required if the testing needs a UI thread (<argLine>-XstartOnFirstThread</argLine>).
    2. A command-line activated switch to trigger uploading (using Maven Wagon’s ext-ssh provider) of the generated update site and documentation.
  • Since my tests relied on some functionality present only in OSATE2, I had to declare the providing plugin’s id as an extra dependency for my tests to run. That meant adding the following to my test project’s pom.xml file:
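A sketch of that extra-dependency declaration (the bundle id here is an assumption; this uses Tycho’s target-platform-configuration mechanism for extra requirements):

```xml
<!-- Sketch: declaring an OSATE2 bundle as an extra requirement so it is
     present when the plug-in tests run. -->
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>target-platform-configuration</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <dependency-resolution>
      <extraRequirements>
        <requirement>
          <type>eclipse-plugin</type>
          <id>org.osate.xtext.aadl2.properties.ui</id>
          <versionRange>0.0.0</versionRange>
        </requirement>
      </extraRequirements>
    </dependency-resolution>
  </configuration>
</plugin>
```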


Ultimately, the build files I ended up with represent the most up-to-date working state of my Maven / Tycho knowledge. They’re all available on GitHub. Let me know if you have any feedback in the comments!

Eclipse Plug-In Testing with JUnit Plug-In Tests

I recently mentioned that my current research project is a subset of AADL and an associated Eclipse plug-in which translates from that subset into Java.  Since both my advisors and I are interested in following recommended software engineering practices, I needed to figure out how to programmatically test my plug-in’s functionality. Unfortunately, testing an Eclipse plug-in can be sort of complicated, since some of your code may depend on Eclipse itself — either Eclipse services or the UI — running.  Fortunately, Eclipse’s Plug-in Development Environment (PDE) provides a launcher for JUnit tests that makes the process more straightforward.

Plug-In Test Run Configuration
The JUnit Plug-In Test Run Configuration

The functionality I relied on in OSATE2 (the AADL-focused Eclipse distribution) was, unfortunately, deeply tied to the UI thread. This meant that I needed to launch Eclipse as part of my test suite, initialize the project(s) I needed for compilation, and then run my tests. Unlike some of the other tasks I’d eventually work on, I didn’t find any super-clear tutorials on this stuff, so while it wasn’t super difficult, I had to sort of hack my way through it.

The testing vision

At a high-level, the basic outline of what I needed to do was (steps 2-4 are repeated for each test):

  1. Initialize the environment (using JUnit’s @BeforeClass annotation)
    1. Execute a command (which creates a built-in project) in the running version of Eclipse provided by the launcher
    2. Create a test project
    3. Add XText and AADL natures to the project
    4. Mark the built-in project as a dependency of the test project
    5. Create folders and copy in source files
    6. Build the project
  2. Run pre-test setup (using JUnit’s @Before annotation)
  3. Run the test (using JUnit’s @Test annotation)
    1. Specify which files are needed for this test
    2. Run the translator on the specified files
    3. Inspect the model and compare it to expected values
  4. Run post-test teardown (using JUnit’s @After annotation)

I ended up structuring my test suite so that one class contained all the initialization logic common to each test, and the actual tests were divided among a number of files according to their functionality. This is pretty easy to do with the  @RunWith  and  @Suite.SuiteClasses  JUnit annotations:
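As a self-contained sketch (the test classes here are trivial stand-ins for the real MDCF Architect ones, which live in their own files):

```java
import org.junit.Test;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

import static org.junit.Assert.assertTrue;

public class AllTests {
    // Stand-in test classes, grouped by the functionality they exercise.
    public static class PortTests {
        @Test public void portsTranslate() { assertTrue(true); }
    }

    public static class PropertyTests {
        @Test public void propertiesTranslate() { assertTrue(true); }
    }

    // The suite class: @RunWith and @Suite.SuiteClasses tie the pieces together.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({ PortTests.class, PropertyTests.class })
    public static class TranslatorTestSuite { }

    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(TranslatorTestSuite.class);
        System.out.println(result.getRunCount() + " tests, "
                + result.getFailureCount() + " failures");
    }
}
```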

Lessons Learned

As I built and tweaked my test suite, I learned a number of things that may help other people working on plug-in tests:

  • In my list of steps, steps 1 and 2 should not be combined. This is because OSATE2 uses an XtextResourceSet to store the files contained in a project, and that class does substantial behind-the-scenes caching. I was unable to get around this caching, and I probably shouldn’t even have been trying to defeat the optimizations in the first place — there’s no reason to recreate the various files that are re-used between tests.
  • All AADL projects can use certain built-in properties. These properties are typically created by running an OSATE2 command (via a right-click menu). I found the command’s id by sifting through the various OSATE2 components’ plugin.xml files, and ran it (this code will need to be within a try / catch block):
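The call was roughly of this shape (the command id shown is a stand-in; the real one came from OSATE2’s plugin.xml files):

```java
// Sketch: executing an Eclipse command programmatically via the handler
// service.  The command id is an assumption.  executeCommand throws several
// checked exceptions, hence the try / catch mentioned above.
IHandlerService handlerService = (IHandlerService) PlatformUI.getWorkbench()
        .getService(IHandlerService.class);
handlerService.executeCommand("org.osate.ui.addAadlResources", null);
```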

    The downside to this code is that it relies on the Eclipse UI. The PDE’s JUnit Plug-In test launch configuration is smart in that it won’t launch a UI if it doesn’t need to, so using the UI should be avoided if possible. Unfortunately, the functionality of this command couldn’t be recreated without getting seriously hack-y.
  • Forgetting step 1.3 (adding project natures) will lead to some really screwy errors.  Natures are easy to add — using IProjectDescription — once you have their ids, which are again found by sifting through plugin.xml files:
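In sketch form (the nature ids below are my best reconstruction of the Xtext and AADL ids; verify them against the relevant plugin.xml files):

```java
// Sketch: attach the Xtext and AADL natures to the freshly-created project.
IProjectDescription description = testProject.getDescription();
description.setNatureIds(new String[] {
    "org.eclipse.xtext.ui.shared.xtextNature",  // assumed Xtext nature id
    "org.osate.core.aadlnature"                 // assumed AADL nature id
});
testProject.setDescription(description, new NullProgressMonitor());
```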
  • Same thing with not compiling the project — it sounds basic, but since my translator doesn’t explicitly require Xtext validation / compilation, I didn’t know that it would be required.  Fortunately, once the project is defined, it’s just a single command:
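That single command looks something like this (a sketch using the standard IProject API):

```java
// Force a full build of the test project so Xtext validation / compilation runs.
testProject.build(IncrementalProjectBuilder.FULL_BUILD, new NullProgressMonitor());
```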

The full testing package, including the initialization code, is available over on GitHub. Let me know if you have any questions or suggestions in the comments below!