Tying A Build Together with Jenkins

Recently I wrote about the project I’m working on, and mentioned the range of technologies used in support of that effort. Since then, I’ve written about the building, testing, and documentation tools I used, but today I’d like to discuss how everything is tied together using Jenkins. Jenkins is a tool that enables continuous integration — the practice of integrating all the parts of a project every time code is committed. It has a huge number of options and plugins and is crazily configurable, so I’ll just be talking about how I used it in the development of the MDCF Architect.

Building with Jenkins

Building a project with Maven in Jenkins is super straightforward — so much so that it’s arguably too vanilla to blog about. Since the MDCF Architect uses GitHub for its source control, I used the aptly-named “Git Plugin” for Jenkins. Once that was installed, I just pointed it at the repository URL and set Jenkins to poll Git periodically. If new changes are found, they’re pulled, built, tested, and reported on. One thing I particularly like is the “H” option in the polling schedule — it lets the server determine a time (via some hashing function) to query Git / start a build. This avoids the problem where a bunch of projects would all try to run at common times (e.g., the top of the hour) without forcing developers to set particular times for each project.
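For reference, a schedule like the one below (the frequency is arbitrary; pick whatever suits your project) checks for new commits roughly every fifteen minutes, with Jenkins choosing the exact minute via that hash:

    # Poll SCM schedule: roughly every 15 minutes, at a minute Jenkins picks
    H/15 * * * *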

Jenkins-BuildConfig
My project’s Jenkins build configuration — 1 of 2

After polling Git, I have one build step — invocation of two top-level Maven targets under the “uploadToRepo” profile, which triggers upload of the plugin’s executable and documentation. Also, since running the MDCF Architect test suite requires a UI thread, one additional build step is needed — running a VNC server. This can be tricky for Jenkins (which runs headlessly), but it’s solved by the Xvnc Plugin. I found this post to be really helpful in setting up this part of my project.
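In the job’s Maven “Goals and options” field, that first step boils down to something along these lines (assuming the two top-level targets are clean and install, which matches how the tests get run below):

    clean install -P uploadToRepo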

Jenkins-BuildConfig2
My project’s Jenkins build configuration — 2 of 2

Testing with Jenkins

The project’s tests are run by the “install” Maven target, and two post-build steps collate the testing and coverage reports. The first of these steps, JUnit test report collation, requires no plugin — you just have to tell Jenkins where it can find the report .xml files. The second step, generation of coverage reports from JaCoCo, is provided by the “JaCoCo Plugin.” Execution of the project under JaCoCo results in some binary “.exec” files that contain the coverage data — you have to tell the plugin where these files are, as well as the .class and .java files that your project builds to / from.  You can also set coverage requirements, though I chose not to.
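Concretely, those report settings are just a handful of Ant-style path globs over the workspace; as a rough sketch (the field names and exact paths are approximate and depend on your project layout):

    JUnit test result XMLs:      **/target/surefire-reports/*.xml
    JaCoCo .exec files:          **/target/*.exec
    Path to class directories:   **/target/classes
    Path to source directories:  **/src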

Jenkins-TestConfig
My project’s Jenkins test configuration

Once everything is set up, your project will be building, testing, and deploying automatically, leaving you free to do other things, or just stare at some lovely graphs. Let me know if you have any feedback or questions in the comments section!

Jenkins-Graphs
Some lovely graphs related to my project’s tests

Automating all Aspects of a Build with Maven Plugins

I’ve mentioned in recent posts that I wrote some software called the MDCF Architect for my research, and along with the implementation (an Eclipse plugin) I also built a number of supporting artifacts — things like developer-targeted documentation and testing with coverage information. Integrating these (and other) build features with Maven is often pretty straightforward because a lot of functionality is available as Maven plugins. So, today, I’m going to discuss how I configured three fairly common Maven plugins: “Exec,” “JaCoCo,” and “Wagon.”

Integrating Maven & Sphinx

Sphinx is a tool for generating developer-targeted documentation.  I wrote about some extensions I made to it earlier this week, but today I’m going to talk about how I automated the documentation generation process.  Initially I used the sphinx-maven plugin, but it bundles an older version of Sphinx that was missing some features I needed.  The plugin’s documentation has a page on how to update the built-in version of Sphinx, but I had some trouble getting everything to update correctly.  Pull requests have been created that would solve this and other issues, but the plugin looks to be abandoned (or at least on hiatus).

So, since the native plugin wasn’t going to work, I needed to go to my backup plan — which meant running Sphinx via an external program call. Fortunately, this is easy to do with Mojo’s exec-maven-plugin, though it does mean that the build now has an external dependency on Sphinx. I decided this was something I had to live with, and hooked the generation of documentation into the package phase of the Maven build. I also hooked Sphinx’s clean into the clean phase of the Maven build so that there wouldn’t be a ton of extra files lying around that required manual deletion. Here’s the relevant pom.xml snippet:
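(What follows is a sketch rather than the verbatim file: it assumes sphinx-build and make are on the build machine’s path, the docs directory name is a placeholder for wherever the Sphinx sources live, and the plugin version is illustrative.)

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <version>1.3.2</version>
      <executions>
        <!-- Generate the HTML documentation during the package phase -->
        <execution>
          <id>generate-docs</id>
          <phase>package</phase>
          <goals>
            <goal>exec</goal>
          </goals>
          <configuration>
            <executable>sphinx-build</executable>
            <arguments>
              <argument>-b</argument>
              <argument>html</argument>
              <argument>${basedir}/docs</argument>
              <argument>${basedir}/target/docs</argument>
            </arguments>
          </configuration>
        </execution>
        <!-- Run the Sphinx Makefile's clean target during the clean phase -->
        <execution>
          <id>clean-docs</id>
          <phase>clean</phase>
          <goals>
            <goal>exec</goal>
          </goals>
          <configuration>
            <executable>make</executable>
            <workingDirectory>${basedir}/docs</workingDirectory>
            <arguments>
              <argument>clean</argument>
            </arguments>
          </configuration>
        </execution>
      </executions>
    </plugin>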

Integrating Maven & JaCoCo

I think that code coverage is really useful for seeing how well your tests are doing, and after looking at some of the options, I settled on using JaCoCo. One thing I really like about it is that it uses Java agents to instrument the code on the fly — meaning that (unlike when I was an undergraduate) you don’t have to worry about mixing up your instrumented and uninstrumented code. JaCoCo works by first recording execution trace information (in a .exec file) and then interpreting it, along with your project’s .java and .class files, to (typically) produce standalone reports. Since I’d be building / testing via Jenkins, I just generated the execution traces and used the Jenkins JaCoCo plugin’s built-in report format.

I had a bit of a tricky time figuring out where exactly I should be using the JaCoCo plugin — I first tried putting it in my test project’s build configuration (pom.xml), but that meant I only got coverage of the testing code itself instead of the business logic. Then I put it in the main plugin’s project, only to find that, because that project doesn’t have any tests (they live in their own project), I had no coverage information at all. Finally I put the JaCoCo configuration in the top-level pom.xml (and none of the individual project files) and still had no execution information.  It turns out that both the Tycho testing plugin and JaCoCo modify the JVM flags when tests are run, so you have to integrate them manually. Once I did that, everything finally started working.

I ended up with this in my top-level pom.xml:
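(A sketch rather than the verbatim file; the plugin version and destination path are illustrative, and the actual configuration is on GitHub.)

    <plugin>
      <groupId>org.jacoco</groupId>
      <artifactId>jacoco-maven-plugin</artifactId>
      <version>0.7.1.201405082137</version>
      <executions>
        <execution>
          <id>prepare-agent</id>
          <goals>
            <goal>prepare-agent</goal>
          </goals>
          <configuration>
            <!-- Where the execution trace (.exec) data ends up -->
            <destFile>${project.build.directory}/jacoco.exec</destFile>
            <!-- Expose the agent's JVM flags so Tycho Surefire can pick them up -->
            <propertyName>sureFireArgLine</propertyName>
          </configuration>
        </execution>
      </executions>
    </plugin>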

And this configuration for the Tycho Surefire (testing) plugin in the test project’s pom.xml (the custom flags I needed for Surefire are in the sureFireArgLine variable):
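(Again a sketch: the tycho.version property is assumed to be defined in the parent pom, and the useUIHarness setting reflects the test suite’s UI-thread requirement.)

    <plugin>
      <groupId>org.eclipse.tycho</groupId>
      <artifactId>tycho-surefire-plugin</artifactId>
      <version>${tycho.version}</version>
      <configuration>
        <!-- Splice in the JaCoCo agent flags set by prepare-agent -->
        <argLine>${sureFireArgLine}</argLine>
        <!-- The MDCF Architect tests need a UI thread -->
        <useUIHarness>true</useUIHarness>
      </configuration>
    </plugin>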

Deploying Artifacts with Maven Wagon

Maven Wagon enables developers to automatically upload the outputs of their builds to other servers.  In my case, I wanted to post both the update site (that is, an installable version of my plugin) and the developer documentation I was generating. It took significant fiddling to get everything running correctly, but most of that was a result of the environment I’m working in — no matter what I did, it kept requesting a manually entered password.  It turns out that the authentication methods used by my target server were non-standard, and it took a while to figure out how to get around that. I first found that I had to use Wagon’s external SSH interface, since some of the required authentication steps weren’t possible with the built-in client. I then ended up using an SSH key for authentication on my personal machine (and any non-buildserver device) and exploited the fact that the buildserver user has (restricted) write access to the web-facing directories.

Once authentication was hammered out, the plugin configuration was nested inside a profile element that could be activated via Maven’s -P switch:
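(The sketch below shows the shape of that profile; the server id, URL, and directory names are placeholders, and the wagon-ssh-external extension that provides the scpexe:// protocol is declared in the main build section.)

    <!-- In the main <build> section: the external-ssh provider for Wagon -->
    <extensions>
      <extension>
        <groupId>org.apache.maven.wagon</groupId>
        <artifactId>wagon-ssh-external</artifactId>
        <version>2.6</version>
      </extension>
    </extensions>

    <!-- Activated with, e.g., mvn clean install -P uploadToRepo -->
    <profiles>
      <profile>
        <id>uploadToRepo</id>
        <build>
          <plugins>
            <plugin>
              <groupId>org.codehaus.mojo</groupId>
              <artifactId>wagon-maven-plugin</artifactId>
              <version>1.0-beta-5</version>
              <executions>
                <execution>
                  <id>upload-update-site</id>
                  <phase>install</phase>
                  <goals>
                    <goal>upload</goal>
                  </goals>
                  <configuration>
                    <serverId>lab-webserver</serverId>
                    <fromDir>${project.build.directory}/repository</fromDir>
                    <includes>**</includes>
                    <!-- scpexe:// makes Wagon shell out to the system ssh/scp -->
                    <url>scpexe://www.example.edu/var/www/mdcf-architect</url>
                  </configuration>
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>
      </profile>
    </profiles>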

So that wraps up three of the trickier plugins I used when automating MDCF Architect builds.  As always, the full build configurations are available on GitHub, and let me know if you have any questions or feedback in the comments section!

Documenting a language using a custom Sphinx domain and Pygments lexer

Recently I’ve been looking at the software engineering tools / techniques I used when engineering the MDCF Architect (see my original post). Today I’m going to talk about Sphinx and Pygments — tools used by my research lab for developer-facing documentation.  Both of these tools work great “out of the box” for most setups, but since my project uses the somewhat-obscure programming language AADL, quite a bit of extra configuration was needed for everything to work correctly.

Sphinx is a documentation-generating tool that was originally created for the Python language documentation, though it can now support a number of languages / other features through the sphinx-contrib project.  It uses reStructuredText, which I found to be totally usable after I took some time to poke around at a few examples. Since documentation usually includes lots of code examples, Sphinx uses Pygments to provide syntax highlighting. Pygments supports a crazy-huge number of languages, which is probably one reason why it’s one of the most popular programs for syntax highlighting.

But, what do you do when you want to document a language that isn’t supported by either Sphinx or Pygments?  You add your own support, of course! Though it took quite a bit of digging / trial-and-error, I added a custom domain to Sphinx and a custom lexer for Pygments, and integrated the whole process so generating documentation is still just one step.

A Custom Sphinx Domain

Before I get into discussing how I made a custom Sphinx domain, let me first back up and explain what exactly a domain (in Sphinx parlance) is.  A full explanation is available from the Sphinx website, but the short version is that a domain provides support for documenting a programming language — primarily by enabling grouping and cross-linking of the language’s constructs. So, for example, you could specify a function name and its parameters, and get a nicely formatted description in your documentation (the example formatting has been somewhat wordpress-ified, but it gives an idea):

thread
Description: Threads correspond to individual units of work such as handling an incoming message or checking for an alarm condition
Contained Elements:
  • features (port) — The process-level ports this thread either reads from or writes to.
Properties:

There isn’t a lot of documentation on creating a custom Sphinx domain, but there are a lot of examples in the sphinx-contrib project. All of these examples, though, are built to produce a standalone, installable package that makes the domain available for use on a particular machine.  Unfortunately, this would greatly complicate the distribution process of my software — anyone who wanted to build the project (including documentation) from source would have to install a bunch of extra stuff.  Plus, this installation would need to be repeated on each of the build machines my research lab uses (there are nearly 20 of them, and all installations have to go through the already overworked KSU CIS IT), and any changes would mean repeating the entire process. Instead, I decided to just hook the custom domain into my Sphinx installation, and it turned out this was pretty easy to do.  There are two steps: 1) develop the custom domain, and 2) add it to Sphinx.

Developing the Domain

I got started by using the GNU Make domain, by Kay-Uwe Lorenz, as a template; I found it to be quite understandable. From there I sort of hacked in some dependencies from the sphinx-contrib project (and imported others) until I had enough to use the  custom_domain  class. Then it was just the configuration of a few names, template strings, and the fields used by the AADL elements I wanted to document.  Fields, which make up the bulk of the domain specification, come in three kinds — Fields, GroupedFields, and TypedFields. Fields are the most basic elements, GroupedFields add the ability for fields to be grouped together, and TypedFields enable both grouping and a type specification.  I didn’t find a lot of documentation online, but the source is available, and pretty illustrative if you’re stuck.
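Here is a heavily abbreviated sketch of what that configuration looks like. The custom_domain keyword arguments follow the GNU Make domain example from sphinx-contrib, and the element and field names below are a simplified subset of what the real AADL domain defines:

    # aadl_domain.py -- abbreviated sketch of an AADL domain built on
    # sphinxcontrib.domaintools (argument names follow the GNU Make example)
    from sphinx.util.docfields import Field, GroupedField, TypedField
    from sphinxcontrib.domaintools import custom_domain

    AADLDomain = custom_domain(
        'AADLDomain',
        name='aadl',
        label='AADL',
        elements=dict(
            thread=dict(
                objname='Thread',
                indextemplate='pair: %s; Thread',
                fields=[
                    # A basic, stand-alone field
                    Field('description', label='Description', has_arg=False,
                          names=('description',)),
                    # A typed field: grouped entries that also carry a type
                    # (this is what renders as "features (port) -- ...")
                    TypedField('element', label='Contained Elements',
                               names=('element',), typenames=('type',)),
                    # A grouped field: multiple entries listed together
                    GroupedField('property', label='Properties',
                                 names=('property',)),
                ],
            ),
            # ...other elements (process, port, system, etc.) omitted...
        ),
    )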

Now you can use elements from this domain in your documentation pretty easily.
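Here’s a rough sketch of what that markup looks like in reStructuredText; the aadl:thread directive and the field names come from the domain sketch above, so treat them as illustrative:

    .. aadl:thread:: PulseOx_Logic_Thread

       :description: Checks incoming pulse oximetry values for alarm conditions.
       :element features: The process-level ports this thread reads from or writes to.
       :type features: port
       :property Dispatch_Protocol: Sporadic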

A Custom Pygments Lexer

The Pygments documentation has a pretty thorough walkthrough of how to write your own lexer.  Using that (and the examples in the other.py lexer file) I was able to write my own lexer with relatively little frustration.  When it came time to use my lexer in Sphinx, though, I ran into a problem similar to the one I had with the domain — in the typical use case, the lexer would have to be installed into an existing Pygments installation before the documentation could be built.  Fortunately, like domains, lexers can be provided directly to Sphinx (assuming Pygments is installed somewhere, that is).

Developing the Lexer

Pygments lexer development using the RegexLexer class is pretty straightforward — you essentially just define a state machine with regular expressions that govern transitions between the various tokens (i.e., your lexemes). The full lexer is available on GitHub.
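Here’s a condensed sketch of its overall shape; the keyword list is abbreviated and the token rules are simplified:

    # aadl_lexer.py -- condensed sketch of a Pygments lexer for AADL
    from pygments.lexer import RegexLexer
    from pygments.token import (Comment, Keyword, Name, Operator,
                                Punctuation, Text)

    class AADLLexer(RegexLexer):
        name = 'AADL'
        aliases = ['aadl']
        filenames = ['*.aadl']

        tokens = {
            'root': [
                (r'\s+', Text),
                # AADL comments run from -- to the end of the line
                (r'--.*?$', Comment.Single),
                # Abbreviated keyword list
                (r'\b(package|public|system|process|thread|port|data|'
                 r'features|properties|implementation|end)\b', Keyword),
                (r'=>|->|::', Operator),
                (r'[(){};:,.]', Punctuation),
                (r'[a-zA-Z][a-zA-Z0-9_]*', Name),
            ],
        }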

Once available, using your lexer to describe an example is even more straightforward; you simply use the  :language:  option:
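(For instance, with a literalinclude; the file name here is just an example.)

    .. literalinclude:: snippets/PulseOx_Logic.aadl
       :language: aadl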

Putting it all together

Once you have your domain and lexer built, you just need to make Sphinx aware of them.  Put the files somewhere accessible (I have mine in a util folder that sits at the top level of my documentation) and use the  sphinx.add_lexer("name", lexer)  and  sphinx.add_domain(domain)  functions in the  setup(sphinx)  function in your conf.py file:
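(A sketch of that wiring, assuming the lexer and domain modules from the earlier sketches live in that util folder.)

    # conf.py (excerpt) -- registering the custom lexer and domain
    import os
    import sys

    # Make the util folder (which holds the lexer and domain modules) importable
    sys.path.insert(0, os.path.abspath('util'))

    from aadl_lexer import AADLLexer
    from aadl_domain import AADLDomain

    def setup(sphinx):
        # Older Sphinx releases expect a lexer *instance* here;
        # newer ones also accept the class itself
        sphinx.add_lexer('aadl', AADLLexer())
        sphinx.add_domain(AADLDomain)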

You can see an example of what this all looks like over at the MDCF Architect documentation, and you can see the full domain and lexer files on the MDCF Architect GitHub page.

Building an Eclipse Plugin with Maven Tycho

In a recent post, I wrote about my current research project: a restricted subset of AADL and a translator that converts from this subset to Java. Since AADL has a pretty nice Eclipse plugin in OSATE2, it felt natural to build on top of that. Not only does this make for an easy workflow (no leaving your development environment when it’s time to run the translator), but it also got me a lot of things “for free” — like tokenization, parsing, and AST traversal. Since good engineering practices are pretty strongly encouraged / outright required in my research lab (yesterday’s post discussed testing), I needed to learn how to build the Eclipse plugin from the command line so that everything could be automated using Jenkins (our continuous integration tool).

Integrating Tycho

Other projects in my research lab had used SBT for automated building, so that was where I started out. Unfortunately, there isn’t a lot of SBT support for building Eclipse plugins (there’s this project, but it seems to be abandoned), so after some googling I ran across Maven’s Tycho plugin. It seemed to support everything I wanted, though with no Maven experience I found the learning curve a bit steep. I then found this tutorial, which really got everything rolling; I came back to it every time I wanted to automate another feature, like testing.

The basic idea behind Tycho is that it enables Maven to read build information (dependencies, contents of source and binary builds, etc.) from your plugin’s plugin.xml and MANIFEST.MF files. This greatly simplifies Maven’s build configuration files (the pom.xml files), since all you need to do is tell Tycho what kind of project you’re building, and it takes it from there.  For example, the entirety of my main plugin project’s configuration file is only 14 lines:
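(Reconstructed here as a sketch rather than the verbatim file; the parent reference and the group / artifact ids are placeholders.)

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                                 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>edu.ksu.cis.projects.mdcf</groupId>
        <artifactId>aadl-translator-parent</artifactId>
        <version>0.1.0-SNAPSHOT</version>
      </parent>
      <artifactId>edu.ksu.cis.projects.mdcf.aadl-translator</artifactId>
      <packaging>eclipse-plugin</packaging>
    </project>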

Note in particular the packaging element (line 13), which specifies the type of artifact — other options I used were eclipse-feature , eclipse-update-site , and eclipse-test-plugin .

Using Maven

I won’t attempt to re-create the Vogella tutorial here, but I do want to mention a couple of general Maven things I learned:

  • I found the ability to use Eclipse’s P2 update sites as repositories (from which dependencies can be pulled) really helpful.  Since OSATE2 isn’t available from Maven’s main repository, I initially thought I’d have to somehow add (and maintain!) a bazillion .jar files to my build configurations.  Instead, I was able to simply declare OSATE2’s update site as a P2 repository (see the sketch at the end of this list).
  • I put off learning about / using profiles as long as I could. Profiles let you specify how your build should change in different contexts, depending on things like the host OS, command line switches, etc. I probably should have learned about them sooner since they’re so powerful, but I’m glad I worked to generalize the build as much as possible, because they’re definitely a tool that can be overused.
    • When it was time to learn about profiles, I mostly used random examples from StackOverflow for the actual code, but I thought this article was particularly good on the philosophy behind profiles, and the “Further Reading” section has a lot of good references.
  • The two different profiles I did use were:
    1. A host-OS-activated profile to enable a Mac OS X specific option that’s required if the testing needs a UI thread (<argLine>-XstartOnFirstThread</argLine>).
    2. A command-line activated switch to trigger uploading (using Maven Wagon‘s ext-ssh provider) of the generated update site and documentation.
  • Since my tests relied on some functionality present only in OSATE2, I had to declare the providing plugin’s id as an extra dependency for my tests to run. That meant adding the following to my test project’s pom.xml file:
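    (One way to express that, sketched below, is through the tycho-surefire-plugin’s dependencies parameter; the OSATE2 plugin id shown is illustrative.)

    <plugin>
      <groupId>org.eclipse.tycho</groupId>
      <artifactId>tycho-surefire-plugin</artifactId>
      <version>${tycho.version}</version>
      <configuration>
        <dependencies>
          <!-- Pull the OSATE2 plugin that provides the needed functionality
               into the test runtime; 0.0.0 means "any available version" -->
          <dependency>
            <type>eclipse-plugin</type>
            <artifactId>org.osate.xtext.aadl2.properties.ui</artifactId>
            <version>0.0.0</version>
          </dependency>
        </dependencies>
      </configuration>
    </plugin>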

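And the P2 repository declaration promised in the first bullet looks roughly like this (the URL is a placeholder for the OSATE2 update site):

    <repositories>
      <repository>
        <id>osate2-updates</id>
        <!-- The p2 layout is what lets Tycho resolve dependencies
             from an Eclipse update site -->
        <layout>p2</layout>
        <url>http://osate-updates.example.org/</url>
      </repository>
    </repositories>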

Ultimately, the build files I ended up with represent the most up-to-date working state of my Maven / Tycho knowledge.  They’re all available on GitHub. Let me know if you have any feedback in the comments!

Eclipse Plug-In Testing with JUnit Plug-In Tests

I recently mentioned that my current research project is a subset of AADL and an associated Eclipse plug-in which translates from that subset into Java.  Since both my advisors and I are interested in following recommended software engineering practices, I needed to figure out how to programmatically test my plug-in’s functionality. Unfortunately, testing an Eclipse plug-in can be sort of complicated, since some of your code may depend on Eclipse itself (either its services or the UI) being up and running.  Fortunately, Eclipse’s Plug-in Development Environment (PDE) provides a launcher for JUnit tests that makes the process more straightforward.

Plug-In Test Run Configuration
The JUnit Plug-In Test Run Configuration

The functionality I relied on in OSATE2 (the AADL-focused Eclipse distribution) was, unfortunately, deeply tied to the UI thread. This meant that I needed to launch Eclipse as part of my test suite, initialize the project(s) I needed for compilation, and then run my tests. Unlike some of the other tasks I’d eventually work on, I didn’t find any super-clear tutorials on this stuff, so while it wasn’t super difficult, I had to sort of hack my way through it.

The testing vision

At a high level, the basic outline of what I needed to do was (steps 2-4 are repeated for each test):

  1. Initialize the environment (using JUnit’s @BeforeClass annotation)
    1. Execute a command (which creates a built-in project) in the running version of Eclipse provided by the launcher
    2. Create a test project
    3. Add XText and AADL natures to the project
    4. Mark the built-in project as a dependency of the test project
    5. Create folders and copy in source files
    6. Build the project
  2. Run pre-test setup (using JUnit’s @Before annotation)
  3. Run the test (using JUnit’s @Test annotation)
    1. Specify which files are needed for this test
    2. Run the translator on the specified files
    3. Inspect the model and compare it to expected values
  4. Run post-test teardown (using JUnit’s @After annotation)

I ended up structuring my test suite so that one class contains all the initialization logic common to the tests, with the actual tests divided among a number of files according to their functionality. This is pretty easy to do with the  @RunWith  and  @Suite.SuiteClasses  JUnit annotations:
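(A sketch of that suite class; the member class names are illustrative, and the real AllTests.java is linked below.)

    // AllTests.java -- sketch of the suite that ties the test classes together
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        SystemTranslatorTest.class,   // illustrative member class names
        ProcessTranslatorTest.class,
        ThreadTranslatorTest.class
    })
    public class AllTests {
        // The initialization logic shared by all of the tests lives here,
        // in methods annotated with @BeforeClass / @AfterClass.
    }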

Lessons Learned

As I built and tweaked my test suite, I learned a number of things that may help other people working on plug-in tests:

  • In my list of steps, steps 1 and 2 should not be combined. This is because OSATE2 uses an XtextResourceSet to store the files contained in a project, and that class does substantial behind-the-scenes caching. I was unable to get around this caching, and I probably shouldn’t even have been trying to defeat the optimizations in the first place — there’s no reason to recreate the various files that are re-used between tests.
  • All AADL projects can use certain built-in properties.  These properties are typically created by running an OSATE2 command (via a right-click menu). I found the command’s id by sifting through the various OSATE2 components’ plugin.xml files, and ran it from within a try / catch block (see the sketch after this list).

    The downside to this approach is that it relies on the Eclipse UI.  The PDE’s JUnit Plug-In test launch configuration is smart in that it won’t launch a UI if it doesn’t need to, so using the UI should be avoided if possible.  Unfortunately, the functionality of this command couldn’t be recreated without getting seriously hack-y.
  • Forgetting step 1.3 (adding project natures) will lead to some really screwy errors.  Natures are easy to add — using IProjectDescription — once you have their ids, which are again found by sifting through plugin.xml files (this is also shown in the sketch after this list).
  • Same thing with not compiling the project — it sounds basic, but since my translator doesn’t explicitly require Xtext validation / compilation, I didn’t realize it would be needed.  Fortunately, once the project is defined, it’s just a single call (also in the sketch below).
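Pulling those three pieces together, here’s a sketch of the relevant initialization helpers. The command and nature ids are the kind of thing you dig out of plugin.xml files, so the ones shown are illustrative placeholders:

    // Sketch of the project-initialization helpers described above; the command
    // and nature ids are illustrative placeholders.
    import java.util.Arrays;

    import org.eclipse.core.resources.IProject;
    import org.eclipse.core.resources.IProjectDescription;
    import org.eclipse.core.resources.IncrementalProjectBuilder;
    import org.eclipse.core.runtime.NullProgressMonitor;
    import org.eclipse.ui.PlatformUI;
    import org.eclipse.ui.handlers.IHandlerService;

    public class TestProjectSetup {

        /** Run the OSATE2 command that creates the built-in property sets.
         *  This relies on the Eclipse UI, so the launcher must start a workbench. */
        public static void createBuiltInProject() {
            IHandlerService handlerService = (IHandlerService) PlatformUI
                    .getWorkbench().getService(IHandlerService.class);
            try {
                handlerService.executeCommand(
                        "org.osate.example.predeclaredPropertiesCommand", // placeholder id
                        null);
            } catch (Exception e) {
                // ExecutionException, NotDefinedException, etc.
                throw new RuntimeException("Couldn't create the built-in project", e);
            }
        }

        /** Add the Xtext and AADL natures to the freshly created test project. */
        public static void addNatures(IProject project) throws Exception {
            IProjectDescription description = project.getDescription();
            String[] natures = description.getNatureIds();
            String[] newNatures = Arrays.copyOf(natures, natures.length + 2);
            newNatures[natures.length] = "org.eclipse.xtext.ui.shared.xtextNature";
            // The AADL nature id below is a guess; check the OSATE2 plugin.xml files
            newNatures[natures.length + 1] = "org.osate.core.aadlnature";
            description.setNatureIds(newNatures);
            project.setDescription(description, new NullProgressMonitor());
        }

        /** Build the project so Xtext validation / compilation actually happens. */
        public static void buildProject(IProject project) throws Exception {
            project.build(IncrementalProjectBuilder.FULL_BUILD, new NullProgressMonitor());
        }
    }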

The full testing package is available over on GitHub, and most of the interesting initialization code can be found in AllTests.java. Let me know if you have any questions or suggestions in the comments below!

Building a truly waterproof geocache

A friend whose family has access to a fairly large pond invited me and a group of friends out for a day of swimming and boating. While out enjoying a lovely Kansas day, I found a spot on the pond — an island — that I thought would make a perfect spot for a geocache. In general I’m a fan of caches that are fun to get to but not super hard to find (no one wants to hike 8 miles and then spend three hours looking under rocks) so this island would be perfect — the cache wouldn’t be too hard to find, but it wouldn’t be a park and grab either.

While a lot of places sell “waterproof” containers, most are built to get splashed, or maybe briefly dunked. But when you’re hiding a geocache in a pond where it could potentially be submerged for months at a time, not to mention frozen in the winter and baked by the sun in the summer, you’ll need a stronger build. I decided to make my cache out of PVC, and I wanted to explain how I went about doing that. I should preface this post by saying that I really have no experience with plumbing in general, so this post is written assuming no expertise, and if you see anything that can be improved let me know.

Parts list

These are just the parts I used; most of them could probably be swapped for similar items — in particular, the ginormous tent stakes were only necessary because this cache is going into some pretty deep mud:

Here’s a picture of the parts that went into my geocache (except instead of thread seal tape, I have a small tube of pipe dope in this picture — I ended up returning it and getting some tape before I built the geocache):

Geocache Parts
The parts used to make my geocache

Building the Geocache

Once you have the parts, putting everything together is pretty straightforward. There are really only three steps:

  1. Cut the PVC pipe to a more usable size.  I chose around 8″ because that would give me enough room to store the pencil and Lego guy.
  2. Glue the end cap and the thread adapter onto the pipe.  Being a PVC noob, I found this video pretty helpful. Basically, you…
    1. Apply some primer to both the male and female sides of the joint.
    2. Apply some cement to both sides.
    3. Join the two sides of the joint with a 1/4 turn and hold them together for a few seconds (the video says 5ish, the cement can says 30) so they don’t come apart.
  3. Secure the steel cable to both the pipe and the stakes.
    1. I used a noose knot to secure the cable to the pipe, and made sure it was tight enough that it wouldn’t slip over the caps. Though the steel cable is kind of tough to work with, its inflexibility helps make it harder to slide over the cap end.
    2. Once you have the knot tied, apply one of the clamps so it can’t come undone — basically you can just clamp it right next to the knot.
    3. I used another noose knot to secure the cable to the stake, although I think the knot choice is probably a lot more flexible here. I was originally planning on tying the knot through the loop at the top of the stake, but if you look closely there are actually small holes just below the top of the stakes, and the cable I had fit perfectly through these so I ran the cable through them instead.
    4. Finally, use the other clamp to secure this second knot.

Once all of this is done, you just need to write / print a “Geocache Note” (see the right-hand column of the Hide & Seek a Geocache page). Since I was using waterproof paper, I hand wrote my note.  Here’s a picture of the trimmed, glued-together cache and note (I should’ve taken this picture after I tied the cable on — sorry that step is not photographed here):

Geocache Build
The geocache after being shortened and glued together

Placing the Geocache

Once everything is built, putting the geocache in the ground is the easy part.  Some friends and I found a nice spot on the side of the island — an 8″ vertical, muddy wall — that wouldn’t get stepped on (whether you were walking on the island or wading through the pond) and wouldn’t interfere with boating. Then all we did was drive in the stakes, wind the geocache’s cable around the hooks on their tops (similar to the cord on a vacuum cleaner), and call it good.

Geocache Home
The geocache’s new home

So that’s all there is to it! If you have any questions or feedback, let me know in the comments!

Using a subset of AADL to define medical application architectures

Code Gen Vision
The driving vision behind the MDCF Architect: An app’s architecture is specified in AADL, translated to Java and XML skeletons, and then run on a compatible platform.

Late last year (October-ish) I began working on a way to specify the software architecture of applications (apps) that run on medical application platforms (MAPs).  The specification takes the form of a subset of the Architecture Analysis & Design Language (AADL) and some supporting tooling — namely a plugin for OSATE2 (an Eclipse distribution which supports the editing of AADL) that translates from AADL to runnable MAP (aka Java) code. This work, referred to as the “MDCF Architect,” is part of my research here at K-State, and in fact this task constitutes a large part of my research proficiency exam — the (oral) examination all KSU PhD students must complete in order to become PhD candidates.

The work reached some good milestones in April, culminating in a paper submission to the Software Engineering in Healthcare workshop at this year’s meeting of Foundations of Health Information Engineering and Systems. My paper was accepted, and in about a month I’ll get to go to Washington DC and give my first conference presentation — I’m pretty excited.

Working with AADL, Eclipse, and a host of supporting technologies necessary for sound software engineering (Jenkins for continuous integration, JUnit for testing, Maven for building, Sphinx for documentation, Pygments for code highlighting, JaCoCo for code coverage) was pretty cool, and it’s one of the main reasons I enjoy studying in the SAnToS lab at K-State: not only am I working on the science part of computer science, but the ideas we work on get translated into real-world, publicly distributed tools.  So, while it’s not exactly likely that anyone outside of academia would find this work super interesting (yet!), the project is open-source (under the EPL) and freely available.