I’ve mentioned in recent posts that I wrote some software called the MDCF Architect for my research. Along with the implementation (an Eclipse plugin), I also built a number of supporting artifacts: things like developer-targeted documentation and tests with coverage reporting. Integrating these (and other) build features with Maven is often pretty straightforward because a lot of functionality is available as Maven plugins. So, today, I’m going to discuss how I configured three fairly common Maven plugins: “Exec,” “JaCoCo,” and “Wagon.”
Integrating Maven & Sphinx
Sphinx is a tool for generating developer-targeted documentation. I wrote about some extensions I made to it earlier this week, but today I’m going to talk about how I automated the documentation generation process. Initially I used the sphinx-maven plugin, but it bundles an older version of Sphinx that was missing some features I needed. The plugin’s documentation has a page on updating the built-in version of Sphinx, but I had trouble getting everything to update correctly. Pull requests exist that would solve this and other issues, but the plugin looks to be abandoned (or at least on hiatus).
So, since the native plugin wasn’t going to work, I needed to go to my backup plan: running Sphinx via an external program call. Fortunately, this is easy to do with Mojo’s exec-maven-plugin, though it means the build now has an external dependency on Sphinx. I decided that was something I could live with, and hooked documentation generation into the package phase of the Maven build. I also hooked Sphinx’s clean into the clean phase of the Maven build so there wouldn’t be a ton of extra files lying around that required manual deletion. Here’s the relevant pom.xml snippet:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.3</version>
  <executions>
    <execution>
      <id>sphinx-clean</id>
      <phase>clean</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>make</executable>
        <workingDirectory>${basedir}/src/site/sphinx</workingDirectory>
        <arguments>
          <argument>clean</argument>
        </arguments>
      </configuration>
    </execution>
    <execution>
      <id>sphinx-gen-html</id>
      <phase>package</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>make</executable>
        <workingDirectory>${basedir}/src/site/sphinx</workingDirectory>
        <arguments>
          <argument>html</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
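One consequence of shelling out to make is that the build will fail outright on machines without Sphinx installed. A way to soften that, sketched below, is to guard each execution with exec-maven-plugin’s skip parameter; note that the skip.sphinx property name is my own invention for this sketch, not something from the actual MDCF Architect build.

```xml
<!-- Hypothetical sketch: let machines without Sphinx opt out of doc generation.
     The skip.sphinx property is made up for illustration; exec-maven-plugin's
     <skip> parameter is what actually disables the execution. -->
<properties>
  <skip.sphinx>false</skip.sphinx>
</properties>

<!-- ...then, inside each <execution>'s <configuration>: -->
<configuration>
  <skip>${skip.sphinx}</skip>
  <executable>make</executable>
  <workingDirectory>${basedir}/src/site/sphinx</workingDirectory>
</configuration>
```

With something like this in place, a developer without Sphinx could still run `mvn package -Dskip.sphinx=true` and get everything except the documentation.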
Integrating Maven & JaCoCo
I think code coverage is really useful for seeing how well your tests exercise your code, and after looking at some of the options, I settled on JaCoCo. One thing I really like about it is that it uses Java agents to instrument the code on the fly, meaning that (unlike when I was an undergraduate) you don’t have to worry about mixing up your instrumented and uninstrumented code. JaCoCo works by first recording execution trace information (in a .exec file) and then interpreting it, along with your project’s .java and .class files, to (typically) produce standalone reports. Since I’ll be building and testing via Jenkins, I just generated the execution traces and used the Jenkins JaCoCo plugin’s built-in report format.
I had a bit of a tricky time figuring out where exactly the JaCoCo plugin belonged. I first tried putting it in my test project’s build configuration (pom.xml), but that gave me coverage of the testing code itself rather than the business logic. Then I put it in the main plugin’s project, only to find that, since that project has no tests (the tests live in their own project), I had no coverage information at all. Finally I put the JaCoCo configuration in the top-level pom.xml (and none of the individual project files) and still had no execution information. It turns out that both the Tycho testing plugin and JaCoCo modify the JVM flags when tests are run, so you have to integrate them manually. Once I did that, everything finally started working.
I ended up with this in my top-level pom.xml:
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>${jacoco.version}</version>
  <executions>
    <execution>
      <id>integration-agent-prep</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>prepare-agent-integration</goal>
      </goals>
      <configuration>
        <destFile>${basedir}/../jacoco-integration.exec</destFile>
      </configuration>
    </execution>
  </executions>
</plugin>
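The prepare-agent goals work by writing the agent’s JVM options into a Maven property, which you then splice into your test runner’s argLine. As I understand it, JaCoCo picks the property name tycho.testArgLine automatically when it detects Tycho’s eclipse-test-plugin packaging; if you’d rather not rely on that detection, the plugin’s propertyName parameter lets you name the property explicitly. A sketch (the execution is otherwise identical to the one above):

```xml
<!-- Sketch: naming the agent property explicitly instead of relying on
     JaCoCo's Tycho packaging detection. -->
<configuration>
  <destFile>${basedir}/../jacoco-integration.exec</destFile>
  <propertyName>tycho.testArgLine</propertyName>
</configuration>
```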
And this configuration for the Tycho Surefire (testing) plugin in the test project’s pom.xml (the custom flags I needed for Surefire are in the sureFireArgLine variable):
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-surefire-plugin</artifactId>
  <version>${tycho.version}</version>
  <configuration>
    <testSuite>edu.ksu.cis.projects.mdcf.aadl-translator-test</testSuite>
    <testClass>edu.ksu.cis.projects.mdcf.aadltranslator.test.AllTests</testClass>
    <useUIHarness>true</useUIHarness>
    <argLine>${sureFireArgLine} ${tycho.testArgLine}</argLine>
  </configuration>
</plugin>
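The actual contents of sureFireArgLine depend on what your tests need, and I’m not showing the real flags here. Purely as a hypothetical, a definition in the test project’s properties section might look like this:

```xml
<properties>
  <!-- Hypothetical flags for illustration only; substitute whatever
       JVM options your test suite actually requires. -->
  <sureFireArgLine>-Xms256m -Xmx1024m</sureFireArgLine>
</properties>
```

Because the Tycho argLine concatenates this property with tycho.testArgLine, the custom flags and JaCoCo’s agent options end up on the same test JVM.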
Deploying Artifacts with Maven Wagon
Maven Wagon lets developers automatically upload the outputs of their builds to other servers. In my case, I wanted to post both the update site (that is, an installable version of my plugin) and the developer documentation I was generating. It took significant fiddling to get everything running correctly, but most of that was a result of the environment I’m working in: no matter what I did, Maven kept prompting for a manually entered password. It turned out that the authentication methods used by my target server were non-standard, and it took a while to figure out how to work around them. I first found that I had to use Wagon’s external SSH interface, since some of the required authentication steps weren’t possible with the built-in client. I then used an SSH key for authentication on my personal machine (and any non-buildserver device) and relied on the fact that the buildserver user has (restricted) write access to the web-facing directories.
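Using the external SSH interface means the build needs the scpexe:// protocol provider registered as a build extension, since Maven doesn’t ship it by default. A sketch (the version number here is illustrative; use whatever release is current):

```xml
<!-- wagon-ssh-external provides the scpexe:// protocol, which shells out
     to the system's own ssh/scp binaries instead of using a Java client. -->
<build>
  <extensions>
    <extension>
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-ssh-external</artifactId>
      <version>2.6</version>
    </extension>
  </extensions>
</build>
```

Delegating to the system ssh is also what makes the key-based authentication described above work: whatever the command-line client can negotiate, the build can too.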
Once authentication was hammered out, the plugin configuration was nested inside a profile element that could be activated via Maven’s -P switch:
<profile>
  <id>uploadToSite</id>
  <properties>
    <scp.repourl>scpexe://myserver.edu/path/to/updatesite/</scp.repourl>
    <repo.path>${project.build.directory}/site/</repo.path>
  </properties>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>wagon-maven-plugin</artifactId>
        <version>1.0-beta-5</version>
        <executions>
          <execution>
            <id>upload</id>
            <phase>install</phase>
            <goals>
              <goal>upload</goal>
            </goals>
            <configuration>
              <fromDir>${repo.path}</fromDir>
              <url>${scp.repourl}</url>
              <serverId>sftp-repository</serverId>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
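The serverId value refers to a matching server entry in the builder’s settings.xml, which is where the credentials live (keeping them out of the project’s pom.xml). A sketch, assuming key-based authentication; the username and key path are placeholders, not my actual configuration:

```xml
<!-- Sketch of the ~/.m2/settings.xml entry matching serverId above.
     Username and key path are hypothetical placeholders. -->
<settings>
  <servers>
    <server>
      <id>sftp-repository</id>
      <username>builduser</username>
      <privateKey>${user.home}/.ssh/id_rsa</privateKey>
    </server>
  </servers>
</settings>
```

With the profile defined and the server entry in place, running `mvn install -P uploadToSite` performs the upload as part of the install phase.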
So that wraps up three of the trickier plugins I used when automating the MDCF Architect builds. As always, the full build configurations are available on GitHub; let me know in the comments if you have any questions or feedback!