CI with Jenkins for Javascript: Part 3: Scheduling and reporting

In Part One, we set up a Jenkins server and some unit testing. In Part Two, we added some static analysis tools to our build. But we’re still manually running all this, even if it’s all tied together now. Let’s talk about some of the features Jenkins brings to the table.

Building automatically

Our code release pipeline is going through some revisions to make better use of branching, so I have the good fortune of being able to detail for you two different build setups for two different branching strategies. Today I will detail our old style, and in a future post, I will detail the updates we made to support a more branch-heavy system.

Our original strategy involved a branch for each codebase representing our demo environment; to promote a project to demo, the code would be merged into the demo branch using Subversion. This is the easier strategy to set up, because you always know where in the repository to point Jenkins.

The first change we made was to symlink the location where Jenkins checks out our repo to a network share on a demo server. This allows Jenkins to check out the code directly to the server, where it can then run the unit tests. That was the simplest way for us to get the code onto our servers, but there are many ways you can go about this step, including using FTP or SSH to update the server; if you have many servers you want Jenkins to update, that’s probably the best way to do it. We used a symlink because it plays nicely with Jenkins’ preferred build pipeline: first it checks out the code, then it runs the tests, then it deploys to other servers. Our code does not need to be compiled before being deployed, and Jenkins was not running on a machine configured as a ColdFusion server, so by checking out the code directly onto a server, we had it up and running as fast as possible.

Once you’ve figured out your deployment strategy, you’re ready to trigger Jenkins to automatically build based on code promotion. There are two strategies to accomplish this task: polling and a post-commit hook. Polling is the easiest to set up; there’s literally a checkbox under “Build Triggers” called “Poll SCM”. This allows you to set up a poll schedule using a syntax similar to the one used to configure cron jobs; for example, to poll every fifteen minutes, you use the string “H/15 * * * *”. This can be configured without ever leaving Jenkins, and it will only build when there are new changes.

Post-commit hooks require some work in Subversion. With this strategy, you configure Subversion to notify Jenkins whenever a commit is pushed. I didn’t do this myself, but there are some details in the Subversion plugin notes about how you might set it up. Honestly, the more I read about it, the less interesting it looked. Polling every ten minutes or so gives my organization all the responsiveness it needs; remember, I’m talking about major code promotions to demo that happen no more often than about once a day.

 

Information Radiators

So, you have your Jenkins server pointed to your repo. It’s polling every fifteen minutes, and it reports out on the unit tests, linting results, and code complexity. You’re feeling pretty proud of yourself: this is a nice spiffy setup, capable of giving a good sense of the long-term health of the project.

Too bad nobody looks at it.

Oh sure, you can give them the dashboard link. Maybe one or two of them will poke at it every week or so. For a while. Until they get bored and wander off. IT people are humans too, and humans are notoriously averse to reading anything or seeking out information on their own. How can you make the information more in-their-face?

One answer is to present the information in a pretty easily-understood graph or chart and display that on a monitor in the hallway. As people walk past it, the information is thrust into their face, and they tend to stop and take a look at it. The nicer the visualization, the more likely people will stop to look at it and accidentally ingest the information you’re trying to get across 🙂

Jenkins has a lovely API for retrieving information about a build: on any page, add “/api” to the end. If you just add /api, it gives you a description of the formats you can retrieve the API information in; to get the JSON data, add /api/json to any page. For human readability, add “?pretty=true”. You can also get the data in XML format using the same method.
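For instance, the unit-test results for a job’s most recent build live under its testReport endpoint, and you can poke at them straight from a browser console. The server and job names here are made up:

$.getJSON(
    "http://jenkins.example.com/job/my-project/lastCompletedBuild/testReport/api/json?jsonp=?",
    function (report) {
        // passCount, failCount, and skipCount are the fields the views below rely on
        console.log(report.passCount, report.failCount, report.skipCount);
    }
);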

With that in mind, I wrote a quick JavaScript app that polls Jenkins for unit-test data, using Backbone to abstract away the details. The model is something like:

var TestResult = Backbone.Model.extend({
    baseURL: "",
    build: "lastCompletedBuild",
    url: function() {
        return this.baseURL + "/job/" +  this.id + "/" + this.build + "/testReport/api/json?jsonp=?";
    }
});

And the view something like:

var PlatformView = Backbone.View.extend({
    initialize: function(options) {
        this.options = options;
        this.model.on("change", this.render, this);
        this.model.on("error", this.renderErrorState, this)
    },
    render: function() {
        var tpl = Handlebars.compile($("#platformTemplate").html());
        var data = this.model.toJSON();
        var ts = new Date(this.options.timestamp);
        data.timestamp = ts.getMonth() + 1 + "-" + ts.getDate() + "-" + ts.getFullYear() + " " + ts.getHours() + ":" + (ts.getMinutes() < 10 ? "0" : "") + ts.getMinutes();
        var html = tpl(data);
        $(this.el).html(html);

        var model_id = this.model.get("id");
        var chartdata = [
                {label: "Pass", value: this.model.get("passCount")},
                {label: "Fail", value: this.model.get("failCount")},
                {label: "Skip", value: this.model.get("skipCount")}
            ];
        testResultsPieChart.drawTestResultsGraph(chartdata,"#" + model_id + "-chart");
        return this;
    },
    renderErrorState : function() {
        var tpl = Handlebars.compile($("#errorTemplate").html());
        var data = this.model.toJSON();
        var html = tpl(data);
        $(this.el).html(html);

        var model_id = this.model.get("id");
        var chartdata = [
                {label: "Pass", value: 0},
                {label: "Fail", value: 1},
                {label: "Skip", value: 0}
            ];
        testResultsPieChart.drawTestResultsGraph(chartdata,"#" + model_id + "-chart");
    }
});

Where testResultsPieChart uses the d3 library to convert the data into a pie chart. I tossed all this into a basic Bootstrap page, because I’m not much of a designer 🙂 The result ends up looking like:

jenkins_radiator
Note that one project had managed to break their test runner while I was taking this screenshot. You’ll see that result if qUnit never finishes executing.
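For completeness, the glue that ties the models and views together is only a few more lines; something like the following, where the server URL, job names, container ids, and refresh interval are all placeholders for your own setup:

var jobs = ["project-one", "project-two"]; // one Jenkins job per platform

jobs.forEach(function (jobName) {
    var model = new TestResult({ id: jobName });
    model.baseURL = "http://jenkins.example.com";

    var view = new PlatformView({
        model: model,
        el: "#" + jobName,      // a container div per job on the page
        timestamp: Date.now()
    });

    var refresh = function () {
        view.options.timestamp = Date.now();
        model.fetch();          // fires "change" on success, "error" on failure
    };

    refresh();
    setInterval(refresh, 15 * 60 * 1000); // re-poll every fifteen minutes
});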

 

And that’s where I’d gotten to when someone told me we were changing our branching strategy to remove the idea of a single demo branch 😀 Moving goalposts keeps life interesting.

CI with Jenkins for Javascript: Part 2: Static Analysis

Part one

So. We’re up, we’re unit testing, we’re publishing results. But unit testing is only as good as the tests themselves, and that depends heavily on the programmers’ ability to write good tests. Maybe we want more than that. Maybe we want a metric that isn’t essentially self-reported. Maybe we want static analysis.

What is Static Analysis

Static Analysis is a category of testing techniques that covers any metric of code that can be collected without executing the code. These techniques can be used to measure code against an agreed-upon standard without needing anything more from a developer than the code they’ve already written. This is a great way to check in on the quality of the code when your developers are already overworked, stressed, and up against the wall; they can put off writing unit tests until “later”, but they can’t prevent you from looking at the code and evaluating it. Obviously, you don’t want to just spring new metrics on people, but once a standard is in place, holding people to it shouldn’t be unreasonable.

 

Linting with ESLint

The most common static analysis tool you’ll hear JS developers talking about is linting. If you already know about this, feel free to skip down to the practical section, but if you’re picturing the fuzzy stuff that comes out of your dryer after you wash a load of blankets, allow me to dispel the confusion a little. The basic metaphor comes from using a lint roller to clean little bits of lint (or cat hair) off a sweater so you look neater and more presentable. Linting, therefore, is the act of cleaning up little stylistic issues to make the overall code look neat and tidy.

The most widely known linter is JSLint; I believe that’s where the name came from in the first place. You can test out how JSLint works at their website. Notice the checkboxes below the input box; JSLint is configurable, but not overly so. It was designed to enforce the Crockford Conventions, which some JS developers hold to be the best possible standard for Javascript code style. However, as with all things in JS land, the “standard” is hotly debated, and in many places rejected entirely. Therefore, for linting, I prefer a tool called ESLint. Every single rule in ESLint is configurable; at the minimum, this means it has three levels of enforcement: Do not enforce, Warn, or Error. Many rules also have configurable options, such as whether spaces should be before a comma, after a comma, both, or neither.
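To make that concrete, an ESLint configuration file is just JSON: 0 means do not enforce, 1 means warn, 2 means error, and many rules take extra options. The rules picked here are arbitrary examples, not a recommended standard:

{
    "env": {
        "browser": true
    },
    "rules": {
        "semi": 2,
        "no-unused-vars": 2,
        "comma-spacing": [1, { "before": false, "after": true }],
        "camelcase": 0
    }
}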

So let’s say you’ve got ESLint installed, talked with your team, and come up with a configuration file that enforces your standards. We can fairly simply add that into our gruntfile for Jenkins to execute, using a package like grunt-eslint. However, we now have a problem. Unlike grunt-qunit-junit, grunt-eslint does NOT allow for writing to a file. We’d have to pipe the output, and that includes any output from grunt itself, which might make our file no longer conform to the desired output format without more massaging. So I prefer to install eslint as a standalone console application, as detailed here.
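Getting the standalone tool onto the build box is a one-liner (assuming Node is already installed on it), which puts an eslint command on the path for Jenkins to call:

npm install -g eslint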

Now our build has two steps:

 

jenkins_eslint

That command line breaks down as follows:

  • eslint calls the linter
  • -c eslint.conf points it to our custom configuration file
  • -f checkstyle outputs the results in checkstyle format. This can be other formats like jslint, junit, or tap, but I found the checkstyle plugin to be to my liking.
  • The file paths indicate which files should be linted. Here I’m only linting the models and views for my project
  • > lintresults.xml is the Linux way to redirect the output into a file. This can be any file.
  • || echo is, as with last time, a way to ensure that the build does not fail when linting fails. Again, the reporting plugin will take care of marking the build as unstable when the linting fails. Without this, linting errors will prevent Jenkins from moving on to the unit tests.
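Put together, the shell build step ends up being a single line along these lines (the source paths are placeholders for wherever your own models and views live):

eslint -c eslint.conf -f checkstyle src/models/*.js src/views/*.js > lintresults.xml || echo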

We can then use the Checkstyle plugin (or any other plugin that can process Checkstyle reports) to display the results:

jenkins_checkstyle

 

And voila!

jenkins_checkstyle_output

 

Complexity with Plato

Another static analysis that can be useful to shed some light on code quality is complexity analysis. Now, if you’re planning to write angry comments, please keep in mind that all of these metrics measure one aspect of quality, and I don’t believe any of them are infallible be-all end-all measures. But complexity can tell you a little about what parts of your application are going to be more troublesome to maintain.

The most common metric for complexity is cyclomatic complexity. This is a rough measure of code complexity created in 1976 by Thomas McCabe, defined as the number of linearly independent paths through the source code. Basically, this tells you how much branching, looping, and nesting is present in a piece of code. Lower is easier to understand and maintain, but obviously, code with a complexity of 1 doesn’t do very much that’s interesting at all; it contains no branching, so it executes the same single path every time, no matter what input it’s given. Your basic “Hello World” program has a cyclomatic complexity of 1; FizzBuzz tends to be around 6 or so.
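To get a feel for how the number accrues, here’s a FizzBuzz with its decision points tallied by hand. Exact scores shift a little depending on how you write it and on whether the tool also counts && and ||, which is why FizzBuzz lands “around 6” rather than at one fixed value:

function fizzBuzz(n) {
    var out = [];
    for (var i = 1; i <= n; i++) {           // +1: loop condition
        if (i % 15 === 0) {                  // +1
            out.push("FizzBuzz");
        } else if (i % 3 === 0) {            // +1
            out.push("Fizz");
        } else if (i % 5 === 0) {            // +1
            out.push("Buzz");
        } else {
            out.push(String(i));
        }
    }
    return out;   // 4 decision points + 1 base path = cyclomatic complexity 5
}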

Another metric of complexity is Halstead Complexity. This is a more robust set of measures proposed in 1977 by Maurice Halstead. These are calculated by counting the distinct operators, distinct operands, total operators, and total operands in the code, then combining those counts to produce a slew of numbers. One such number is the difficulty index, which is half the number of distinct operators, times the total number of operands, divided by the number of distinct operands. In theory, this measures how difficult the code is to maintain over time.
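Written out as code, the difficulty index from that paragraph is just this bit of arithmetic (a sketch of the formula, not any particular tool’s API):

// Halstead difficulty: (distinct operators / 2) * (total operands / distinct operands)
function halsteadDifficulty(distinctOperators, distinctOperands, totalOperands) {
    return (distinctOperators / 2) * (totalOperands / distinctOperands);
}

halsteadDifficulty(10, 7, 21); // 10 distinct operators, 7 distinct operands, 21 total operands => 15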

As both of these metrics are strongly correlated with lines of code, the Maintainability Index seeks to relate them to each other and to the LOC to get an overall quick-and-dirty number to represent how difficult code is to maintain. This index was created in 1991 by Paul Oman and Jack Hagemeister, and it ranges from negative infinity to a “perfect” score of 171, achieved only by an empty file with 0 lines of code. They proposed that code scoring above about 65 should be considered easy to maintain.

These metrics are all measured by JSComplexity, a tool written by Phil Booth to easily measure the complexity of JavaScript code. The command-line version of the tool is complexity-report, and there’s a nicely formatted HTML reporter built on it called Plato. On top of that we have a Grunt wrapper called grunt-plato, which we can use to generate an HTML report that can be included in Jenkins automatically. Still with me? 🙂

The grunt setup is pretty straightforward, as before. We can add it to our existing file with a few lines:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
[...]
    plato: {
      complexity: {
        options: {
          jshint: false
        },
        files: {
          'reports': ['../src/source/model/*/*.js', '../src/source/ui/*/*.js']
        }
      }
    }
  });
  // These plugins provide necessary tasks.
  grunt.loadNpmTasks('grunt-contrib-qunit');
  grunt.loadNpmTasks('grunt-qunit-junit');
  grunt.loadNpmTasks('grunt-plato');
  // Default task.
  grunt.registerTask('default', ['plato','qunit_junit','qunit']);
};

I’ve turned off JSHint because I’m using ESLint above. If you like JSHint, you can leave it included and skip the whole section above about checkstyle.

We already have the grunt file being run by Jenkins, so we just add the report like so:

jenkins_plato_output

And voila! It’ll appear on the left as a link:

jenkins_sidebar

Which takes you right to the report.

Conclusions

There’s a lot more in the world of static analysis that I’d love to be able to show you. There are tools to generate dependency analyses, tools to find common bugs, tools for finding duplicated or dead code, tools to find potential security holes… but unfortunately, the tooling for javascript is rather limited. Compiled languages are always easier to analyse than interpreted ones, and strongly typed languages are easier to analyse than weakly typed ones. Frankly, though, with all the wonders javascript developers are able to produce, I have to wonder: does the community really care about quality? Maybe the tooling is limited because beyond linting and maybe some complexity, javascript developers aren’t interested in writing these kinds of tools.

Maybe I’ll write some myself.

CI with Jenkins for Javascript: Part 1: Unit Testing

In a lot of ways, the Javascript world feels like it’s trapped in the year 2k: the dot com bubble is swelling huge, and nobody has time for best practices, it’s time to reinvent everything and strike it rich. As an SQA professional, it’s immensely frustrating to outline a technique and be told “Javascript doesn’t do that.” (That’s one of three answers that ought to be banned from a webdev’s vocabulary; the other two are “I think jQuery does that” and “Maybe with Node?” Protip: using the latest shiny library is no substitute for using your brain. But I digress.)

Anyway, so let’s set the scene: A young, frazzled SQA professional, trying to get a sandbox install of Jenkins full of shiny things to prove to the Directors that really, we do need more tools, we’re not just being lazy. Jenkins’ install was a tale for another blog, but it’s up and running. Now what?

 

Unit testing with QUnit

The first thing, the key component for any sort of continuous-testing exercise (never mind the integration part for now, this is only a demo) is to automate the unit testing. In my case, we used QUnit for our tests, which is pretty standard. Or was it? Since we don’t do TDD and we’re backporting testing into legacy apps that weren’t built with testability in mind, I ended up putting ColdFusion to work for us. I created a series of drop-down menus, customized for each platform we were testing, that would let you drill down to a specific component to test (model, view, library element, et cetera). It would then read a JSON file to find any dependencies that were required (yes, the developers were given a lecture about minimizing dependencies. No, that doesn’t mean our legacy apps would detangle themselves magically overnight. Yes, they really insist on testing views with the real Handlebars templates stored in separate files. Sure, why not.) and include them on the page, then the item under test, then the test code.

How do I make Jenkins do this? The QUnit “way” seems to be to generate this page on the fly, but there ended up being quite a bit of logic implemented in ColdFusion I didn’t want to remake. And why should I? Add an “all” option to the dropdown and I had the exact result-set I wanted to include in Jenkins. What I really needed was a way to hit that page and retrieve the results.

Enter Grunt.

Grunt is an automation system made to run JavaScript tasks, particularly in a Node environment. I found it works pretty well to treat it like Ant or Maven in your JavaScript stack: migrate the nitty-gritty logic to grunt, then execute the grunt script from Jenkins. I didn’t see a plugin to do this, so I used the shell plugin to execute grunt. That gets the tests run and result files generated.

To get Grunt installed, you need Node (and the Node Package Manager). Once you have a working install of Node, you can install grunt with npm install -g grunt, which does a global install of grunt using the Node Package Manager. Of course, this isn’t enough to get running. You then have to install the command-line interface for grunt, which is packaged separately (because nothing shiny can be simple): npm install -g grunt-cli.

The Tao of Node involves creating projects, much like you do with Java and Eclipse. We’re not actually building anything here, there’s no asset pipeline involved at this stage, but you still have to have a project. So we create one. The simplest way to create a Node package is with npm init, which will create a file called “package.json”. You probably never need to edit this file directly, but you can if you want. It’s just a JSON file.

The Tao of Grunt then involves a file you’ll be editing heavily: “Gruntfile.js”. This is where the instructions for Grunt go. These instructions come in two flavors: a list of configuration options for the specific plugin you’re using, and a list of plugins to activate for a given build target. This is kind of like Ant, but with JSON instead of XML. Your basic gruntfile looks like:

module.exports = function(grunt) {

  // Project configuration.
  grunt.initConfig({
   //JSON for config options here
  });

  // Load the plugins
  grunt.loadNpmTasks('grunt-sometask');

  // Default task(s).
  grunt.registerTask('default', ['sometask']);

};

For this use case, we want to use grunt-contrib-qunit to run my tests and snag the results. So we set that up first. Since I want to use an existing runner, I use the urls option to pass in the URL of the runner. My gruntfile then looks like:

module.exports = function(grunt) {

  // Project configuration.
  grunt.initConfig({
   qunit: {
      all: {
        options: {
          urls: [
             'http://[redacted]/testRunner/index.cfm?component=all'
          ]
        }
      }
    },
  });

  // Load the plugins
  grunt.loadNpmTasks('grunt-contrib-qunit');

  // Default task(s).
  grunt.registerTask('default', ['qunit']);

};

From the command line, I can then run sudo grunt from the folder with the gruntfile and bam, there’s my results. (Make sure there’s no authentication required to get to your test runner. That’s for the advanced class. Also, if you find someone teaching the advanced class, I’d love to sign up 🙂 ).
Of course, I can’t actually do that until I install grunt-contrib-qunit. Thankfully, these plugins are all available via npm. This is probably a good time to make an aside note about how npm works. See, we installed grunt globally, because we want it to be available to multiple Jenkins projects, but you’re not supposed to do that often. The better way to install dependencies is without the -g flag and with the --save-dev flag. This will do two things:

  • Download the module into the project’s node_modules folder
  • Add the module to your package.json file

So in this case, we want to do an npm install grunt-contrib-qunit --save-dev. Do that for any other plugin I discuss and you’ll be up and running in no time.
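After a couple of those installs, the generated package.json looks something like this; the name is whatever you gave npm init, and the version ranges are whatever npm resolved at install time:

{
  "name": "jenkins-test-runner",
  "version": "0.0.1",
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-contrib-qunit": "~0.5.2",
    "grunt-qunit-junit": "~0.1.0"
  }
}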
Now, that’s all fine and dandy, except that QUnit prints its output in a human-readable format to the screen. We want the output in a Jenkins-readable file instead. Jenkins can read xUnit, TAP, and HTML reports with the help of some readily-available plugins, so basically anything standard will do; luckily, there’s another grunt plugin that takes the output from grunt-contrib-qunit and massages it into JUnit format before saving it to a file. It’s called grunt-qunit-junit, because naming conventions are weird.

module.exports = function(grunt) {

  // Project configuration.
  grunt.initConfig({
  qunit_junit: {
        options: {
          "dest":"../test-reports"
        }
    },
   qunit: {
      all: {
        options: {
          urls: [
             'http://[redacted]/testRunner/index.cfm?component=all'
          ]
        }
      }
    },
  });

  // Load the plugins
  grunt.loadNpmTasks('grunt-contrib-qunit');
  grunt.loadNpmTasks('grunt-qunit-junit');

  // Default task(s).
  grunt.registerTask('default', ['qunit_junit','qunit']);

};

Note that qunit-junit wants to be loaded before qunit. This will write the output in JUnit format to a folder called test-reports. Make sure that folder is writable by Jenkins!
Speaking of Jenkins…

jenkins_grunt

The or (||) and the echoed output aren’t strictly necessary; what they do is allow Jenkins to continue building even when the unit tests fail. Later, in the reporting step, the build will be marked “unstable” if the tests failed, but without this, a failing test run fails the build outright and none of the later steps execute.
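In other words, the shell build step boils down to something like this (the echoed message is arbitrary), run from wherever the Gruntfile lives:

grunt || echo "Unit tests failed"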

Speaking of reporting, here’s how I configured the JUnit reporter for Jenkins:

jenkins_junit

And voila! Click “build” and you’ll see your results right away.