I finished my PhD!

My dissertation defense (thanks to Dr. Bill Hsu for the photo)

On June 17, I successfully defended my doctoral research. Last week, I received final approval of my dissertation. It feels pretty weird to say it, but I now have a PhD, and am “Dr. Procter.”

I’m finding it hard to summarize the entirety of my grad school experience, so I won’t try. Rather, I’ll say that it was on the whole a very positive, enriching time, and that what I will miss most are the people I met and spent time with while I was in Manhattan. I imagine that sounds sort of cheesy, but it’s true: most of the time I spent in my office reading papers or hammering out code or revising a draft has already started to blur together. Conversations with people, whether in the graduate school, at conferences, or just out and about in Manhattan (or Lawrence or Kansas City, where I frequently found myself), remain quite vivid, though.

Another view of the defense (thanks to Dr. Bill Hsu for the photo)

In the big scheme of things, it of course hasn’t been very long: only a couple of months have passed since I was in front of everybody trying to explain what I’d spent the last 4.5 years doing. I’ll write another post soon about what I’m up to now (preview: I got a job and moved east!), but for now, I’ll just say that while I’m happy to be done, I already find myself missing a number of things about Manhattan (like Varsity Donuts!).

A trip to Delft

Delft and the Oude Kerk (Old Church) as viewed from Nieuwe Kerk (New Church)

In July I wrote about a paper covering my lab’s recent work with a concept called “Error Type Refinement.” Then, in late September, I got to travel to the city of Delft in the Netherlands and present that work. I presented at the ASSURE workshop, which was co-located with this year’s SAFECOMP conference. The workshop and conference were great — the presentations were really interesting, and I got to meet a lot of people working in the field (many of whom I previously knew only as names on papers I had read!).

I also had a lot of fun exploring Delft — the city is very old, and has a lot of really cool history, including a couple of neat churches — the Nieuwe Kerk (or New Church, though construction actually started in 1396) and the Oude Kerk (or Old Church, which dates to 1246). I also got to see more modern buildings (like the University library, which is a giant cone built into a hillside) as well as very small ones at Madurodam, which is a theme park that has 1:25 scale replicas of famous Dutch buildings, roads, ships and trains.

It was really nice to have nearly everything in the city easily accessible by foot, though I felt like I was always at risk of getting run over by people on bicycles, which was quite a change coming from the US. The canals (and accompanying ducks and geese) around which the sidewalks and streets were laid out were also a very pleasant change from the more mundane Kansas.

All in all, it was a great trip, and I learned a lot — both about safety critical computing, and about the history of the Netherlands. You can check out the photos I took below!

Delft and its University

Nieuwe Kerk — Exterior

Nieuwe Kerk — Interior

Madurodam

Oude Kerk

Nieuwe Meer in Amsterdam

Error Type Refinement for Assurance of Families of Platform-Based Systems

The fault type hierarchy for timing-related errors, extended first for MAP Apps in general, and then for the PCA Interlock App

Last time I wrote about my work, I mentioned that we were using the architectural modeling language AADL to describe a particular type of distributed, compositional medical applications called Medical Application Platform (MAP) apps. One neat aspect of AADL is that the core language — which describes hardware and software architectures — can be extended by language annexes to cover other, related aspects of a system like its behavior (via the creatively-named “Behavior Annex”) or failure-related aspects (via the “Error Modeling” annex or “EM”). The EM has a number of neat features, not the least of which is its error type system.

Briefly, the idea behind the EM is that you can describe problems with your system, like a sensor failing and producing values that are wrong, or a communications channel delaying or dropping a message. These problems, termed “errors” in the EM, are made machine-readable by being described in a type system. The EM allows errors to be related to one another by three refinement mechanisms: extension, renaming, and aggregation — so you can actually create a full type lattice, if you’d like. The EM even comes with a standard set of errors, called the error library.
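
To make that a bit more concrete, here is a small sketch of what an EM error type library using all three mechanisms might look like (the package and type names are invented for illustration; this is not a model from our actual work):

```aadl
package Example_MAP_Errors
public
annex EMV2 {**
  error types
    -- extension: specialize a type from the standard error library
    MAPTimingError : type extends ErrorLibrary::TimingError;
    LateMAPMessage : type extends MAPTimingError;
    -- renaming: give a library type a local, domain-specific name
    DroppedMAPMessage renames type ErrorLibrary::ItemOmission;
    -- aggregation: group several types into a type set
    MAPCommunicationErrors : type set {LateMAPMessage, DroppedMAPMessage};
  end types;
**};
end Example_MAP_Errors;
```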

In a new paper, we looked at using these refinement mechanisms in the context of MAPs to specialize fault types first by a component’s architectural category — the idea being that there are a finite number of categories in a MAP system: things like software application, network controller, medical device, etc. — and then by a component’s actual implementation. Component implementations, of course, have very concrete faults, because we know the actual message types the various components will be sending and receiving. The list of errors produced by this refinement can then help a developer when it comes time to perform a hazard analysis (like FTA, FMEA, or STPA), regardless of which analysis is being performed or whether it works top-down or bottom-up.

The paper was accepted to the upcoming ASSURE workshop in Delft, the Netherlands (co-located with SAFECOMP), so I’ll be headed out there in September to give a talk on it. Hopefully we’ll get some good discussion from the other workshop attendees, plus it’ll be fun to check out the presentations at the main conference, not to mention the city of Delft itself.

A trip to Lübeck, Germany

The Lübeck town hall

My work with medical devices took me to Lausanne, Switzerland last month, and since I was already halfway around the world, my advisor and I decided that a trip north to Lübeck, Germany, to visit the medical device manufacturer Dräger made sense. The visit was really cool — in contrast to the conference in Lausanne, where I was the only person talking about medical devices, the work at Dräger focuses on nothing else. Also, I had a little over an hour for my presentation instead of 12 minutes.

I really enjoyed meeting some people who work in medical device connectivity research at Dräger, and I hope that we can continue to interact in the future. I also really enjoyed exploring Lübeck, which, like Lausanne, had a number of neat historic buildings. Communicating with people was definitely easier in Germany than it was in Switzerland (with the exception of one very insistent fellow in a train station) so while there were still challenges — mostly related to getting around: the mass transit is impressive, but it’s also sort of difficult to understand at first — it was all in all pretty smooth.

I managed to grab another geocache in a really scenic downtown park, eat a ton of marzipan, and get (briefly) addicted to currywurst. I also went to a number of very cool churches, saw the famous city gate (the “Holstentor”), and even walked through a puppet museum. You can see some photos of the churches and outdoor areas below!


Lübeck’s Churches


Lübeck’s Old Town

A trip to Lausanne, Switzerland

Lausanne, as seen from the Sauvabelin Tower

I recently mentioned that a paper I wrote got accepted to MEMOCODE, a conference in Lausanne, Switzerland. Having never been out of the country before, I found the prospect of a trip to Europe really exciting. It was also a little imposing — I would be traveling alone to a place where I had no knowledge of the local language (in this case, French). Unlike my trip to DC, where I was most anxious about my presentation, for this trip I was actually more anxious about the typically unimportant details — getting from one place to another, ordering food, etc.

I won’t say that those anxieties didn’t turn out to be at least somewhat warranted; the trip was at times quite challenging. At different times, I…

  • Accidentally ordered a dish served with a raw egg on top
  • Had to sprint through airports to make connections (in both Heathrow and O’Hare)
  • Had conversations entirely in French (thanks to a phrasebook app on my phone)

But the benefits definitely outweighed the negatives.  The trip was an amazing experience, and the conference went really well — my presentation was unfortunately quite short (I was given only 12 minutes), but it went well and I got some good questions / discussion. I also really enjoyed seeing the sights in the area — the cathedral is amazing, as is the nearby Sauvabelin tower / park / forest (where I even found a geocache!).  On top of that, the food (particularly the fondue and the pasta) was delicious, and the local wines were also really good.

All that said, though, I’m happy to be home for a while, and back to business-as-usual.  You can check out the pictures I took below!


Montbenon


Lausanne Cathedral


Ouchy


Sauvabelin Park & Tower

An Architecturally-Integrated, Systems-Based Hazard Analysis for Medical Applications

An example STPA-style control loop, annotated with the subset of EMV2 and AADL properties from the paper

A few months ago, I wrote about my recent work on defining a subset of the language AADL to specify the architecture of bits of software (apps) that would run on medical application platforms (MAPs). Since then, I’ve been working on how developers can use these semi-formal architectural descriptions to do useful things. The first “useful thing” is integrating hazard analysis annotations with these architectural descriptions — that is, specifying how things could go wrong in the app.

Structured hazard analyses have been performed for over half a century now (some date back to the late 1940s!), but in some ways they are still the same as they were back then, in that they remain unintegrated with the system under analysis — that is, any analysis performed lives in a separate document (often a Word file). In programming terms, this would be like keeping a system’s documentation separate from its implementation, which isn’t nearly as useful as tools like Doxygen or Javadoc, where everything is more tightly integrated.

So, after looking at a number of hazard analysis techniques, I (and others in my research lab) settled on the relatively new, systems-focused Systems Theoretic Process Analysis (STPA).  From there, we looked at tailoring it to the medical application development process, and at how that tailored process could be integrated with the architecture specifications from our previous work. The result of this effort was a paper, which was recently accepted to the 2014 ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE) in Lausanne, Switzerland. I’m really excited to go and present.

My advisor has described the paper as “incredibly dense” (I blame page limits), so in the next few months I’ll be expanding it into a whitepaper that will hopefully be much clearer, and will be of use to our research partners in regulatory agencies.

Tying A Build Together with Jenkins

Recently I wrote about the project I’m working on, and mentioned the range of technologies used in support of that effort. Since then, I’ve written about the building, testing, and documentation tools I used, but today I’d like to discuss how everything is tied together using Jenkins. Jenkins is a tool that enables continuous integration — the practice of integrating all the parts of a project every time code is committed. It has a huge number of options and plugins, and is crazily configurable, so I’ll just be talking about how I used it in the development of the MDCF Architect.

Building with Jenkins

Building a project with Maven in Jenkins is super straightforward — so much so that it’s arguably too vanilla to blog about. Since the MDCF Architect uses GitHub for its source control, I used the aptly-named “Git Plugin” for Jenkins. Once that’s installed, I just pointed it at the repository URL and set Jenkins to poll git periodically.  If new changes are found, they’re pulled, built, tested, and reported on. One thing I particularly like is the “H” option in the polling schedule — it lets the server determine a time (via some hashing function) to query git / start a build. This avoids the problem where a bunch of projects would all try to run at common times (e.g., the top of the hour) without forcing developers to set particular times for each project.
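
For reference, the polling schedule uses cron-style syntax; the entries below are examples rather than my actual configuration:

```
# Poll git for new commits. "H" lets Jenkins pick a stable but arbitrary
# value per job, so jobs do not all fire at the same moment.
H/15 * * * *     # roughly every fifteen minutes
H H(0-7) * * *   # or: once a night, some time between midnight and 8am
```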

My project’s Jenkins build configuration — 1 of 2

After polling git, I have one main build step — invocation of two top-level Maven targets under the “uploadToRepo” profile, which triggers upload of the plugin’s executable and documentation. Also, since running the MDCF Architect test suite requires a UI thread, one additional build step is needed — running a VNC server. This can be tricky for Jenkins (which runs headlessly), but it’s solved by the Xvnc Plugin. I found this post to be really helpful in setting up this part of my project.
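
The Maven build step itself boils down to the equivalent of a single command line; mine was something along these lines (the exact goals are a guess from memory):

```
mvn clean install -P uploadToRepo
```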

My project’s Jenkins build configuration — 2 of 2

Testing with Jenkins

The project’s tests are run by the “install” Maven target, and two post-build steps collate the testing and coverage reports. The first of these steps, JUnit test report collation, requires no plugin — you just have to tell Jenkins where it can find the report .xml files. The second step, generation of coverage reports from JaCoCo, is provided by the “JaCoCo Plugin.” Execution of the project under JaCoCo results in some binary “.exec” files that contain the coverage data — you have to tell the plugin where these files are, as well as the .class and .java files that your project builds to / from.  You can also set coverage requirements, though I chose not to.
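
The values I used were close to the plugin defaults, roughly the following (the exact patterns depend on your project layout):

```
# JUnit report XMLs (no extra plugin needed)
**/target/surefire-reports/*.xml

# JaCoCo plugin: execution data, class files, and source files
**/target/*.exec
**/target/classes
**/src/main/java
```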

My project’s Jenkins test configuration

Once everything is set up, your project will be building, testing, and deploying automatically, leaving you free to do other things, or just stare at some lovely graphs. Let me know if you have any feedback or questions in the comments section!

Some lovely graphs related to my project’s tests

Automating all Aspects of a Build with Maven Plugins

I’ve mentioned in recent posts that I wrote some software called the MDCF Architect for my research, and along with the implementation (an Eclipse plugin), I also built a number of supporting artifacts — things like developer-targeted documentation and testing with coverage information. Integrating these (and other) build features with Maven is often pretty straightforward, because a lot of functionality is available as Maven plugins. So, today, I’m going to discuss how I configured three fairly common Maven plugins: “Exec,” “JaCoCo,” and “Wagon.”

Integrating Maven & Sphinx

Sphinx is a tool for generating developer-targeted documentation.  I wrote about some extensions I made to it earlier this week, but today I’m going to talk about how I automated the documentation generation process.  Initially I used the sphinx-maven plugin, though it uses an older version of Sphinx that was missing some features I needed.  The plugin’s documentation has a page on how to update the built-in version of sphinx, but I had some trouble getting everything to update correctly.  Pull requests have been created that would solve this and other issues, but the plugin looks to be abandoned (or at least on hiatus).

So, since the native plugin wasn’t going to work, I needed to go to my backup plan — which meant running Sphinx via an external program call. Fortunately, this is easy to do with Mojo’s exec-maven-plugin, though it does mean that the build now has an external dependency on Sphinx. I decided this was something I had to live with, and hooked the generation of documentation into the package phase of the Maven build. I also hooked Sphinx’s clean into the clean phase of the Maven build so that there wouldn’t be a ton of extra files lying around that required manual deletion. Here’s the relevant pom.xml snippet:
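
Something along these lines (the documentation directories are placeholders; the real file is in the repository):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <executions>
    <!-- generate the HTML docs during the package phase -->
    <execution>
      <id>build-sphinx-docs</id>
      <phase>package</phase>
      <goals><goal>exec</goal></goals>
      <configuration>
        <executable>sphinx-build</executable>
        <arguments>
          <argument>-b</argument>
          <argument>html</argument>
          <argument>${project.basedir}/docs</argument>          <!-- placeholder source dir -->
          <argument>${project.build.directory}/docs</argument>  <!-- placeholder output dir -->
        </arguments>
      </configuration>
    </execution>
    <!-- blow away the generated docs during the clean phase -->
    <execution>
      <id>clean-sphinx-docs</id>
      <phase>clean</phase>
      <goals><goal>exec</goal></goals>
      <configuration>
        <executable>make</executable>
        <workingDirectory>${project.basedir}/docs</workingDirectory>
        <arguments>
          <argument>clean</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
```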

Integrating Maven & JaCoCo

I think that code coverage is really useful for seeing how well your tests are doing, and after looking at some of the options, I settled on using JaCoCo. One thing I really like about it is that it uses Java agents to instrument the code on the fly — meaning that (unlike when I was an undergraduate) you don’t have to worry about mixing up your instrumented and uninstrumented code. JaCoCo works by first recording execution trace information (in a .exec file) and then interpreting it, along with your project’s .java and .class files, to (typically) produce standalone reports. Since I’ll be building / testing via Jenkins, I just generated the execution traces, and used the Jenkins JaCoCo plugin’s built-in report format.

I had a bit of a tricky time figuring out where exactly I should be using the JaCoCo plugin — I first tried putting it in my test project’s build configuration (pom.xml), but that meant that I only got coverage of the testing code itself instead of the business logic. Then I put it in the main plugin’s project, only to find that since that project didn’t have any tests (since the tests are in their own project) I had no coverage information at all. Finally I put the JaCoCo configuration in the top-level pom.xml (and none of the individual project files) and still had no execution information.  Turns out, both the Tycho testing plugin and JaCoCo modify the JVM flags when tests are run, and so you have to manually integrate them. Once I did that, everything finally started working.

I ended up with this in my top-level pom.xml:
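
Roughly the following; the jacoco.agentArgLine property name is just a placeholder for whatever you wire into the Surefire configuration below:

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>prepare-jacoco-agent</id>
      <goals><goal>prepare-agent</goal></goals>
      <configuration>
        <!-- write the agent's JVM flags into a named property (name is
             illustrative) instead of the default argLine, which Tycho
             does not pick up on its own -->
        <propertyName>jacoco.agentArgLine</propertyName>
        <destFile>${project.build.directory}/jacoco.exec</destFile>
      </configuration>
    </execution>
  </executions>
</plugin>
```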

And this configuration for the Tycho Surefire (testing) plugin in the test project’s pom.xml (the custom flags I needed for Surefire are in the sureFireArgLine variable):
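
Again roughly; the property names are placeholders, and sureFireArgLine needs an empty default in the pom’s properties so the reference does not break when no extra flags are set:

```xml
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-surefire-plugin</artifactId>
  <configuration>
    <!-- combine the JaCoCo agent flags with any platform-specific flags
         (e.g. -XstartOnFirstThread on OS X); names are illustrative -->
    <argLine>${jacoco.agentArgLine} ${sureFireArgLine}</argLine>
    <useUIHarness>true</useUIHarness>
  </configuration>
</plugin>
```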

Deploying Artifacts with Maven Wagon

Maven Wagon enables developers to automatically upload the outputs of their builds to other servers.  In my case, I wanted to post both the update site (that is, an installable version of my plugin) and the developer documentation I was generating. It took significant fiddling to get everything running correctly, but most of this was a result of the environment I’m working in — no matter what I did, it kept requesting a manually entered password.  It turns out that the authentication methods used by my target server were non-standard, and it took a while to figure out how to get around that. I first found that I had to use Wagon’s external SSH interface, since some of the required authentication steps weren’t possible with the built-in client. I then ended up using an SSH key for authentication on my personal machine (and any non-buildserver device) and exploited the fact that the buildserver user has (restricted) write access to the web-facing directories.

Once authentication was hammered out, the plugin configuration was nested inside a profile element that could be activated via Maven’s -P switch:
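
The shape of it was roughly this (the server id, URL, and paths are placeholders, and the wagon-ssh-external extension has to be declared at the top level of the pom rather than inside the profile):

```xml
<!-- in <build><extensions> at the top level: the external ssh provider -->
<extension>
  <groupId>org.apache.maven.wagon</groupId>
  <artifactId>wagon-ssh-external</artifactId>
  <version>2.6</version>
</extension>

<!-- and the profile itself, activated with -P uploadToRepo -->
<profile>
  <id>uploadToRepo</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>wagon-maven-plugin</artifactId>
        <executions>
          <!-- push the built update site to the web server -->
          <execution>
            <id>upload-update-site</id>
            <phase>install</phase>
            <goals><goal>upload</goal></goals>
            <configuration>
              <serverId>lab-webserver</serverId>                         <!-- placeholder -->
              <fromDir>${project.build.directory}/repository</fromDir>
              <includes>**</includes>
              <url>scpexe://www.example.edu/var/www/mdcf-architect</url> <!-- placeholder -->
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```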

So that wraps up three of the trickier plugins I used when automating MDCF Architect builds.  As always, the full build configurations are available on github, and let me know if you have any questions or feedback in the comment section!

Documenting a language using a custom Sphinx domain and Pygments lexer

Recently I’ve been looking at the software engineering tools / techniques I used when engineering the MDCF Architect (see my original post). Today I’m going to talk about Sphinx and Pygments — tools used by my research lab for developer-facing documentation.  Both of these tools work great “out of the box” for most setups, but since my project uses the somewhat-obscure programming language AADL, quite a bit of extra configuration was needed for everything to work correctly.

Sphinx is a documentation-generating tool that was originally created for the Python language documentation, though it can now support a number of languages / other features through the sphinx-contrib project.  It uses reStructuredText, which I found to be totally usable after I took some time to poke around at a few examples. Since your documentation will probably have lots of code examples, Sphinx uses Pygments to provide syntax highlighting. Pygments supports a crazy-huge number of languages, which is probably one reason why it’s one of the most popular programs for syntax highlighting.

But, what do you do when you want to document a language that isn’t supported by either Sphinx or Pygments?  You add your own support, of course! Though it took quite a bit of digging / trial-and-error, I added a custom domain to Sphinx and a custom lexer for Pygments, and integrated the whole process so generating documentation is still just one step.

A Custom Sphinx Domain

Before I get into discussing how I made a custom Sphinx domain, let me first back up and explain what exactly a domain (in Sphinx parlance) is.  A full explanation is available from the Sphinx website, but the short version is that a domain provides support for documenting a programming language — primarily by enabling grouping and cross-linking of the language’s constructs. So, for example, you could specify a function name and its parameters, and get a nicely formatted description in your documentation (the example formatting has been somewhat wordpress-ified, but it gives an idea):

thread
Description: Threads correspond to individual units of work such as handling an incoming message or checking for an alarm condition
Contained Elements:
  • features (port) — The process-level ports this thread either reads from or writes to.
Properties:

There isn’t a lot of documentation for creating a custom Sphinx domain, but there are a lot of examples in the sphinx-contrib project. All of these examples, though, are built to produce a standalone, installable package that will make the domain available for use on a particular machine.  Unfortunately, this would greatly complicate the distribution process of my software — anyone who wanted to build the project (including documentation) from source would have to install a bunch of extra stuff.  Plus, this installation would need to be repeated on each of the build machines my research lab uses (there are nearly 20 of them, and all installation has to go through the already overworked KSU CIS IT), and any changes would mean repeating the entire process. Instead, I decided to try to just hook the custom domain into my Sphinx installation, and it turned out this was pretty easy to do.  There are two steps: 1) develop the custom domain, and 2) add it to Sphinx.

Developing the Domain

I got started by using the GNU Make domain, by Kay-Uwe Lorenz, as a template; I found it to be quite understandable. From there I sort of hacked in some dependencies from the sphinx-contrib project (and imported others) until I had enough to use the  custom_domain  class. Then it was just the configuration of a few names, template strings, and the fields used by the AADL elements I wanted to document.  Fields, which make up the bulk of the domain specification, come in three kinds — Fields, GroupedFields, and TypedFields. Fields are the most basic elements, GroupedFields add the ability for fields to be grouped together, and TypedFields enable both grouping and a type specification.  I didn’t find a lot of documentation online, but the source is available, and pretty illustrative if you’re stuck.
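
My actual domain is built with the custom_domain helper, but the pieces underneath are standard Sphinx classes; a stripped-down sketch using those classes directly (directive and field names are invented for illustration) looks something like this:

```python
from sphinx import addnodes
from sphinx.directives import ObjectDescription
from sphinx.domains import Domain, ObjType
from sphinx.util.docfields import Field, GroupedField, TypedField


class AadlThreadDirective(ObjectDescription):
    """Documents an AADL thread (rendered like the 'thread' example above)."""

    doc_field_types = [
        # A plain Field: a single, ungrouped piece of information
        Field('description', label='Description', has_arg=False,
              names=('description',)),
        # A GroupedField: repeated entries collected under one heading
        GroupedField('feature', label='Contained Elements', names=('feature',)),
        # A TypedField: grouped entries that also carry a type
        TypedField('property', label='Properties', names=('property',),
                   typenames=('proptype',)),
    ]

    def handle_signature(self, sig, signode):
        # The "signature" is just the element's name
        signode += addnodes.desc_name(sig, sig)
        return sig


class AadlDomain(Domain):
    """A minimal domain for documenting AADL elements."""
    name = 'aadl'
    label = 'AADL'
    object_types = {'thread': ObjType('thread', 'thread')}
    directives = {'thread': AadlThreadDirective}
```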

Now you can use elements from the domain in your documentation pretty easily:
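
Something like this, using the made-up names from the sketch above:

```rst
.. aadl:thread:: PulseOxReader

   :description: Handles incoming pulse oximeter messages.
   :feature SpO2In: The process-level port this thread reads from.
   :feature AlarmOut: The process-level port this thread writes to.
```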

A Custom Pygments Lexer

The Pygments documentation has a pretty thorough walkthrough of how to write your own lexer.  Using that (and the examples in the other.py lexer file) I was able to write my own lexer with relatively little frustration.  When it came time to use my lexer in Sphinx, though, I ran into a problem similar to the one I had with the domain — in the typical use case, the lexer would have to be installed into an existing Pygments installation before the documentation could be built.  Fortunately, like domains, lexers can be provided directly to Sphinx (assuming Pygments is installed somewhere, that is).

Developing the Lexer

Pygments lexer development using the RegexLexer class is pretty straightforward — you essentially just define a state machine, with regular expressions that match your tokens (i.e., your lexemes) and govern transitions between lexing states. Here’s an excerpt of the full lexer:
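
Something like this (heavily abbreviated, and the keyword list in the real lexer is much longer):

```python
from pygments.lexer import RegexLexer
from pygments.token import Comment, Keyword, Name, Punctuation, Text


class AADLLexer(RegexLexer):
    """A (very) abbreviated lexer for AADL."""
    name = 'AADL'
    aliases = ['aadl']
    filenames = ['*.aadl']

    tokens = {
        'root': [
            (r'--.*?$', Comment.Single),   # AADL line comments
            (r'\b(package|public|system|process|thread|port|features|'
             r'properties|annex|end)\b', Keyword),
            (r'[a-zA-Z][\w.]*', Name),     # identifiers and dotted references
            (r'[(){}\[\];:,.=>+-]', Punctuation),
            (r'\s+', Text),
        ],
    }
```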

Once available, using your lexer to describe an example is even more straightforward; you simply use the :language: option:
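
For example, with a literalinclude (the file name here is made up):

```rst
.. literalinclude:: snippets/pulse-ox-thread.aadl
   :language: aadl
   :linenos:
```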

Putting it all together

Once you have your domain and lexer built, you just need to make Sphinx aware of them. Put the files somewhere accessible (I have mine in a util folder that sits at the top level of my documentation) and use the sphinx.add_lexer("name", lexer) and sphinx.add_domain(domain) functions in the setup(sphinx) function in your conf.py file:
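
In my case that amounts to something like this (the module names are placeholders for wherever the domain and lexer actually live):

```python
import os
import sys

# make the util/ folder (which holds the domain and lexer files) importable
sys.path.insert(0, os.path.abspath('util'))

from aadl_domain import AadlDomain  # placeholder module names
from aadl_lexer import AADLLexer


def setup(sphinx):
    sphinx.add_domain(AadlDomain)
    sphinx.add_lexer('aadl', AADLLexer())
```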

You can see an example of what this all looks like over at the MDCF Architect documentation, and you can see the full domain and lexer files on the MDCF Architect github page.

Building an Eclipse Plugin with Maven Tycho

In a recent post, I wrote about my current research project: a restricted subset of AADL and a translator that converts from this subset to Java. Since AADL has a pretty nice Eclipse plugin in OSATE2, I think it’s pretty natural to build on top of that. Not only does this make for an easy workflow (no leaving your development environment when it’s time to run the translator) but I got a lot of things “for free” — like tokenization, parsing, and AST traversal. Since good engineering practices are pretty strongly encouraged / outright required in my research lab (yesterday’s post discussed testing), that meant I would need to learn how to build an Eclipse plugin automatically, so that everything could be automated using Jenkins (our continuous integration tool).

Integrating Tycho

Other projects in my research lab had used SBT for automated building, so that was where I started out. Unfortunately, there isn’t a lot of SBT support for building Eclipse plugins (there’s this project, but it seems to be abandoned), so after some googling, I ran across Maven’s Tycho plugin. It seemed to support everything I wanted, though with no Maven experience I found the learning curve a bit steep. I then ran across this tutorial, which really got everything rolling. I would find myself coming back to this article every time I wanted to automate another feature, like testing.

The basic idea behind Tycho is that it enables Maven to read build information (dependencies, contents of source and binary builds, etc.) from your plugin’s plugin.xml and MANIFEST.MF files. This greatly simplifies Maven’s build configuration specifications (the pom.xml files), since all you need to do is tell Tycho what kind of project you’re building, and it handles everything from there.  For example, the entirety of my main plugin project’s configuration file is only 14 lines:
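
A reconstruction with placeholder IDs (the real file is on GitHub) looks like this; the parent pom and the plugin’s own manifest carry all the real information:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.example.mdcf</groupId>  <!-- placeholder ids -->
    <artifactId>mdcf-architect-parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>
  <artifactId>mdcf-architect-plugin</artifactId>
  <packaging>eclipse-plugin</packaging>
</project>
```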

Note in particular the packaging element (line 13), which specifies the type of artifact — other options I used were eclipse-feature, eclipse-update-site, and eclipse-test-plugin.

Using Maven

I won’t attempt to re-create the Vogella tutorial here, but I do want to mention a couple of general Maven things I learned:

  • I found the ability to use Eclipse p2 update sites as repositories (from which dependencies can be pulled) really helpful.  Since OSATE2 isn’t available from Maven’s main repository, I initially thought I’d have to somehow add (and maintain!) a bazillion .jar files to my build configurations.  Instead, I was able to simply declare the OSATE2 update site as a p2-layout repository (see the first snippet after this list).
  • I put off learning about / using profiles as long as I could. Profiles let you specify how your build should change in different contexts, depending on things like the host OS, command line switches, etc. I probably should have learned about them sooner since they’re so powerful, but I’m glad I worked to generalize the build as much as possible, because they’re definitely a tool that can be overused.
    • When it was time to learn about profiles, I mostly used random examples from StackOverflow for the actual code, but I thought this article was particularly good on the philosophy behind profiles, and the “Further Reading” section has a lot of good references.
  • The two different profiles I did use were:
    1. A host-OS-activated profile to enable a Mac OS X-specific option that’s required if the testing needs a UI thread (<argLine>-XstartOnFirstThread</argLine>).
    2. A command-line activated switch to trigger uploading (using Maven Wagon‘s ext-ssh provider) of the generated update site and documentation.
  • Since my tests relied on some functionality present only in OSATE2, I had to declare the providing plugin’s id as an extra dependency for my tests to run. That meant adding an extra requirement to my test project’s pom.xml file (the second snippet below).
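
As promised, the p2 repository declaration looked roughly like this (the URL is a placeholder for the actual OSATE2 update site):

```xml
<repositories>
  <!-- treat an Eclipse p2 update site as a source of dependencies -->
  <repository>
    <id>osate2</id>
    <layout>p2</layout>
    <url>https://www.example.org/osate2/updates</url>  <!-- placeholder URL -->
  </repository>
</repositories>
```

And the extra test requirement looked something like this (the bundle id is illustrative; it is whichever OSATE2 plugin provides the functionality the tests need):

```xml
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>target-platform-configuration</artifactId>
  <configuration>
    <dependency-resolution>
      <extraRequirements>
        <!-- pull the needed OSATE2 bundle into the test runtime -->
        <requirement>
          <type>eclipse-plugin</type>
          <id>org.osate.aadl2</id>  <!-- illustrative bundle id -->
          <versionRange>0.0.0</versionRange>
        </requirement>
      </extraRequirements>
    </dependency-resolution>
  </configuration>
</plugin>
```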

     

Ultimately, the build files I ended up with represent the most up-to-date working state of my Maven / Tycho knowledge.  They’re all available on github. Let me know if you have any feedback in the comments!