Teatime: Maintainable Code, MV*, and Backbone

Welcome back to Teatime! This is a weekly feature in which we sip tea and discuss some topic related to quality. Feel free to bring your tea and join in with questions in the comments section.

Tea of the week: Royal Milk Tea. In Japan, when they want black tea, they buy it powdered with milk powder already added so they don’t have to go buy milk. This stuff is super simple to make since it’s instant, but it tastes amazing. You can often find it at an Asian grocery store if you happen to have one in your area.

Today’s topic: Maintainable code with MV*

Today we’re venturing into the area of maintainability. There are a lot of design patterns out there to increase the maintainability of your code, but none are as widespread in the web world as MV*.

What? MV*?

The MV[something] family of patterns (abbreviated here as MV*, where * is a wildcard character) consists of a Model, a View, and… something else. The most common types are as follows:

  • MVC: Model, View, Controller. This is the classic model, the basic template that the others are following.
  • MVVM: Model, View, View-Model
  • MVP: Model, View, Presenter

Backbone, a popular MV* framework for Javascript, provides tools to create Models and Views, but leaves the third piece up to you. We’ll have a look at each of those pieces in the next few sections.

Models

The model contains the data the application needs to work. Generally, this is an Object in an OOP paradigm; you might have, say, a Product model that knows the description and price of a single product, and can work out the sales tax given the country, or some such. This is independent of any particular presentation, and should be global to your application. The model layer contains the business logic pertaining to the domain; for example, a Car model knows it can only have four Tires attached, but a Truck might have 18.

The thing to keep in mind here is that the model is a layer like any other, and can contain more than just your specific object models. It can also contain data mappers that store the models into a database, services to translate models into a different format, data in the database held in a normalized format, and the code needed to reconstruct those model objects out of the normalized data. There’s a lot going on here, and all of it can contain business logic that needs to be tested.

Views

Views are responsible for providing an interface to the user to interact with the data. In a classic server-based web app, your view is HTML, maybe a little injected Javascript. In a single-page Javascript app, you’ll have View objects that construct widgets on the DOM, plus the HTML that’s actually injected. In a Windows Presentation Foundation app, you’ll have some XAML (an XML dialect) defining your view.

View-models

This confused me for the longest time until I realized what the name was trying to imply. The Model layer models your application; the View-Model is a model of your View. Everything you need to render a view — the data, the methods, the callbacks — lives in the view-model, and your actual view — the DOM, or whatever — just renders what’s in the view-model. This is the pattern Knockout.js uses: the DOM is the view, and your methods bind to it from the view-model. You never write code to update the view; it just happens automatically.
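
Here’s a minimal sketch of that idea using Knockout’s standard API; the view-model fields and the search behavior are invented for illustration, not from a real application:

function SearchViewModel() {
    var self = this;
    // Observables are bound to DOM nodes via data-bind attributes;
    // Knockout re-renders those nodes whenever the values change.
    self.query = ko.observable('');
    self.results = ko.observableArray([]);
    self.doSearch = function () {
        // Updating the observable array is all it takes; no manual
        // code to touch the DOM.
        self.results.push(self.query());
    };
}

ko.applyBindings(new SearchViewModel());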

Controllers

The controller represents the logic of the application (or page, in a multi-page js app). It determines what models are needed and what views should be displayed based on the context. For example, in ASP.NET, you might have a bit of HTML (your view) showing a record from the Users table (your Model). When you click the edit pencil, the view dynamically changes out to a form (a different view) showing the same record. When you click save, it swaps back. The controller contains this sort of logic.

In an MVP or MVVM setup without a controller, the logic is often handled by REST-style “routes”: one page has one view and one model, composed directly; to do anything with that page, you make another HTTP request, which returns another view with another model.

Practical Example: Backbone

Let’s take a look at one Javascript library that’s become very popular in recent years. These code samples are from cdnjs, by the way, which is a great place to learn about Backbone. Backbone can be used in an MVC style very easily: your controller summons up views and models and composes the two together.[1] Each model maps to one endpoint in the API layer, which serves up that model’s data; for example, this might be a model for a user:

var UserModel = Backbone.Model.extend({
    urlRoot: '/user',
    defaults: {
        name: '',
        email: ''
    }
});

// Create an instance and save it; since the model has no id yet,
// Backbone will POST the attributes to /user. (The details here
// are example data for illustration.)
var user = new UserModel();
var userDetails = { name: 'Ada', email: 'ada@example.com' };
user.save(userDetails, {
    success: function (user) { alert(JSON.stringify(user.toJSON())); }
});

You can see here how we extend the basic model to create our own, and only override what we want to use. The urlRoot tells it where to look for the API backend; the defaults tell us what properties we expect to see back as well as providing sensible defaults in case a new object is created during the course of our application.

A view might look something like this:

var SearchView = Backbone.View.extend({
    initialize: function(){
        this.render();
    },
    render: function(){
        // Compile the underscore template, then render it into our element.
        // (In Underscore 1.7+, _.template returns a function you call with data.)
        var template = _.template( $("#search_template").html() );
        this.$el.html( template({}) );
    },
    events: {
        "click input[type=button]": "doSearch"
    },
    doSearch: function( event ){
        // Button clicked; the clicked element is available as event.currentTarget
        alert( "Search for " + $("#search_input").val() );
    }
});

var search_view = new SearchView({ el: $("#search_container") });

Here we have an element ($el) we will bind our template to; that’s usually passed in when the view is instantiated. We also have a template here, in this case an underscore template. We then render the template when we’re asked to render.

We use Backbone’s events array to register for certain events on our element, and when the button is clicked, we perform an action (in this case, an alert).
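
To round out the picture, here’s a hedged sketch of the third piece in a Backbone app: a Backbone.Router playing the controller role, composing a model and a view. The route, the UserView, and the fetch-then-render flow are illustrative assumptions, not part of the original samples:

// A Backbone.Router acting as the "controller": it decides which
// model and view a route needs and composes them together.
// The route and UserView here are hypothetical.
var AppRouter = Backbone.Router.extend({
    routes: {
        "user/:id": "showUser"
    },
    showUser: function (id) {
        var user = new UserModel({ id: id });
        user.fetch({
            success: function (model) {
                // Hand the fetched model to a view bound to a container.
                new UserView({ el: $("#user_container"), model: model });
            }
        });
    }
});

new AppRouter();
Backbone.history.start();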

Do you think this code is easier to maintain than the usual giant closure full of methods? Why or why not? Have you seen any maintenance nightmares recently you think could have been sorted out by using MV* better?


[1]: This assumes a REST backend, which we can talk about another day.

Teatime: The Three Ways of Devops

Welcome back to Teatime! This is a weekly feature in which we sip tea and discuss some topic related to quality. Feel free to bring your tea and join in with questions in the comments section.

Tea of the week: I’m feeling herbal today, chilling out with a refreshing cup of mint tea. I grow my own mint, but you can find good varieties at your local grocery store in all probability. I like mine with honey and lemon. 

Today’s topic: The Three Ways of Devops

DevOps is one of my favorite topics to touch on. I firmly feel that the path to better quality software lies through the DevOps philosophy, even more so than through good QA. After all, finding bugs in finished software can only do so much; it’s better to foster a mindset in developers that prevents bugs from occurring in the first place.

What is DevOps?

DevOps is essentially a cultural phenomenon that aims to build a working relationship and harmony between development, QA, and operations. You can’t buy devops; you can’t hire devops. You have to shift your culture to follow the tao of devops, one step at a time. Anyone trying to sell you instant devops is lying, and you should avoid them like the plague. In that way, it’s a lot like Agile 🙂

DevOps is a logical extension of Agile, in fact. It grew out of ideas made famous by a talk at the Velocity conference called “10+ Deploys Per Day: Dev and Ops Cooperation at Flickr” by John Allspaw and Paul Hammond, and was codified in The Phoenix Project, a book I suggested reading a few weeks back.

You can see in the below graphic how it sort of extends the agile cycle into an infinity symbol:

For teams that are already agile, the jump is pretty straightforward: the definition of “done” now means, not just tested, but released, working in production. It adds a mindset that devs should absolutely care about how their code is doing in production, that the feedback loop should go all the way to the end user and back to the dev.

The First Way

Let me explain each way with a graphic designed by Gene Kim, one of the authors of The Phoenix Project:

The first way is the left-to-right flow of development to release. It promotes systems thinking, trying to get people in each stage of the way (us QA folks are about halfway along that line) to consider not just their own piece in the assembly line, but the line as a whole, maximizing for global efficiency rather than local. What good is testing quickly if the devs can’t get you another release until two weeks after you finish testing? What good is finding and fixing a ton of bugs if the user won’t see the fixes for two years? What good is a million passing tests if the application crashes after three hours in prod?

The principles necessary here to enact the first way are:

  • Small batch sizes. Don’t release huge projects once a year, release small chunks of functionality regularly.
  • Continuous build. If it doesn’t build, it won’t go to prod.
  • Continuous integration. If your code builds but not when your neighbor’s code is added, it’ll blow up in production if it’s only ever tested in isolation.
  • Continuous deployment. Get the code on a server ASAP so it can be tested.
  • Continuous testing. Unit tests, integration tests, functional tests; we want to know ASAP when something’s broken.
  • Limit the amount of work in progress; too many balls in the air means you never know what’s moving and what’s stalled.

The biggest rule of the first way: never pass defects downstream. Fix them where you find them.

The Second Way

The first way is the left-to-right flow of software; the second way concerns the right-to-left flow of information back to development:

Information needs to flow back from QA to Dev, back from staging to dev, back from deployment to dev, back from production to dev. The goal is to prevent problems from recurring once we’ve already solved them. We need to foster faster detection of problems, and faster recovery when they occur. As QA professionals, we know you have to create quality at the source; you can’t test it in. In order to do that, we need to embed knowledge where it can do the most good: at the source.

In order to achieve the second way, you need to:

  • Stop the production line when tests fail. Getting that crucial information back to development is key, and fixing problems before moving on to new development is crucial to ensure quality.
  • Elevate the improvement of work over the work itself. If there’s a tool you could spend 2 hours building that would save 10 hours per week, why wouldn’t you spend the two hours and build it?
  • Fast automated test suites. Overnight is too slow; we need information ASAP, before the dev’s attention span wanders.
  • Shared goals and shared pain between dev and ops. When ops has a goal to keep the system up, dev should be given the same goal; when the system goes down, devs should help fix it. Their knowledge can be highly useful in an incident.
  • Pervasive production information. Devs should know at all times what their code is doing in the wild, and be keeping an eye on things like performance and security.

The Third Way

We have a cycle, it’s very agile, we’re testing all the things; what more could we possibly learn? The Third Way, it turns out, is fostering a culture of experimentation and repetition in order to prepare for the unexpected and continue to innovate.

We have two competing concerns here in the third way, the yin and yang of innovation: how do we take risks without hurting our hard-won stability? The key is repetitive practice. It’s the same principle behind fire drills: when you do something regularly, it becomes second nature, and you’re ready to handle it when it happens for real. That leaves you free to experiment and take risks, falling back on your well-practiced habits to save you when something goes wrong.

Key practices are:

  • Promote risk taking over mindless order taking
  • Create an environment of high trust over low trust
  • Spend 20% of every cycle on nonfunctional requirements, such as performance and security concerns
  • Constantly reinforce the idea that improvements are encouraged, even celebrated. Bring in cake when someone achieves something. Make it well known that they’ve done something good.
  • Run recovery drills regularly, making sure everyone knows how to recover from common issues
  • Warm up with code kata (or test kata) daily

 

Where are you in your journey toward DevOps? Have you begun? What can you take from this back to your daily life?

Teatime: Performance Testing Basics with jMeter

Welcome back to Teatime! This is a weekly feature in which we sip tea and discuss some topic related to quality. Feel free to bring your tea and join in with questions in the comments section.

Tea of the week: Black Cinnamon by Sub Rosa tea. It’s a nice spicy black tea, very like a chai, but with a focus on cinnamon specifically; perfect for warming up on a cold day.

Today’s topic: Performance Testing with jMeter

This is the second in a two-part miniseries about performance testing. After giving my talk on performance testing, my audience wanted to know about jMeter specifically, as they had some ideas for performance testing. So I spent a week learning a little bit of jMeter, and created a Part 2. I’ve augmented this talk with my learnings since then, but I’m still pretty much a total novice in this area 🙂

And, as always when talking about performance testing, I’ve included bunnies.

What can jMeter test?

jMeter is a very versatile application; it works at the raw TCP/IP layer, so it can test out both your basic websites (by issuing a GET over HTTP) and your API layer (with SOAP or XML-RPC calls). It also ships with a JDBC connector, so it can test out your database performance specifically. It also comes with basic configuration for testing LDAP, email (POP3, IMAP, or SMTP), and FTP protocols. It’s pretty handy that way!

A handy bunny

Setting up jMeter

The basic unit in jMeter is called a test plan; you get one of those per file, and it outlines what will be run when you hit “go”. In that plan, you have the next smaller unit: a thread group. Thread groups contain one or more actions and zero or more reporters or listeners that report out on the results. A test plan can also have listeners or reporters that are global to the entire plan.

The thread group controls the number of threads allocated to the actions underneath it; in layman’s terms, how many things it does simultaneously, simulating how many users. There are also settings for a ramp-up time (how long it takes to go from 0 users/threads to the total number) and the number of executions. The example in the documentation lays it out like so: if you want 10 users to hit your site at once, and you have a ramp-up time of 100 seconds, each thread will start 10 seconds after the previous one, so that after 100 seconds you have 10 threads going at once, performing the same action.

Actions are implemented via a unit called a “Controller”. Controllers come in two basic types: samplers and logic controllers. A sampler sends a request and waits for the response; this is how you tell the thread what it’s doing. A logic controller performs some basic logic, such as “only send this request once ever” (useful for things like logging in) or “alternate between these two requests”.

Multiple bunnies working together

You can see here an example of some basic logic:

[screenshot: the jMeter test plan described below]

In this example, once only, I log in and set my dealership (required for this request to go through successfully). Then, in a loop, I send a request to our service (here called PQR), submitting a request for our products. I then verify that there was a successful return, and wait 300 milliseconds between requests (to simulate the interface doing something with the result). In the thread group, I have it set to one user, ramping up in 1 second, looping once; this is where I’d tweak it to do a proper load test, or leave it like that for a simple response check.

Skeptical bunny is skeptical

In this test, I believe I changed it to 500 requests and then ran the whole thing three times until I felt I had enough data to take a reasonable average. The graph results listener gave me a nice, easy way to see how the results were trending, which gave me a feel for whether or not the graph was evening out. My graph ended up looking something like this:

The blue line is the average; breaks are where I ran another set of tests. The purple is the median, which you can see is basically levelling out here. The red is the deviation from the average, and the green is the requests per minute.

Good result? Bad result? Bunnies.

Have you ever done anything with jMeter, readers? Any tips on how to avoid those broken graph lines? Am I doing everything wrong? Let me know in the comments 🙂

Teatime: Testing Application Performance

Welcome back to Teatime! This is a weekly feature in which we sip tea and discuss some topic related to quality. Feel free to bring your tea and join in with questions in the comments section.

Tea of the week: Still on a chai kick from last week, today I’m sipping on Firebird’s Child Chai from Dryad Teas. I first was introduced to Dryad Tea at a booth at a convention; I always love being able to pick out teas in person, and once I’d found some good ones I started ordering online regularly. It’s a lovely warm chai, with a great kick to it. 

Today’s topic: Testing Application Performance

This is the first in a two-part miniseries about performance testing. The first time I gave this talk, it was a high-level overview of performance testing, and when I asked (as I usually do) if anyone had topic requests for next week, they all wanted to know about jMeter. So I spent a week learning a little bit of jMeter, and created a Part 2.

This talk, however, remains high-level. I’m going to cover three main topics:

  • Performance Testing
  • Load Testing
  • Volume Testing

I have a bit of a tradition when I talk about performance testing: I always illustrate my talks with pictures of adorable bunnies. After all, bunnies are fast, but also cute and non-threatening. Who could be scared of perf tests when they’re looking at bunnies?

White Angora Bunny Rabbit
Aww, lookit da bunny!

Performance Testing

Performance testing is any testing that assesses the performance of the application. It’s really an umbrella over the other two categories, in that load testing and volume testing are both types of performance testing. However, when used without qualifiers, we’re typically talking about measuring the response time under a typical load, to determine how fast the application will perform in the average, everyday use case.

You can measure the entire application, end to end, but it’s often valuable to instead test small pieces of functionality in isolation. Typically, we do this the same way a user would: we make an HTTP request (for a web app), or a series of HTTP requests, and measure how long it took to come back. Ideally, we do this a lot of times, to simulate a number of simultaneous users of our site. For a desktop application, instead of simulating “average load”, we are concerned with average hardware: we run the application on an “average” system and measure how long it takes to, say, repaint the screen after a button click.
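
To make that concrete, here’s a minimal sketch of timing a series of requests in Node.js; the URL and the sample count are placeholders, and a real tool would track percentiles rather than just an average:

// Time a handful of sequential GET requests and report the average.
// The URL and iteration count are placeholder values.
const https = require('https');

function timeRequest(url) {
    return new Promise((resolve) => {
        const start = Date.now();
        https.get(url, (res) => {
            res.resume(); // drain the body so the request completes
            res.on('end', () => resolve(Date.now() - start));
        });
    });
}

async function run() {
    const times = [];
    for (let i = 0; i < 20; i++) {
        times.push(await timeRequest('https://example.com/'));
    }
    const avg = times.reduce((a, b) => a + b, 0) / times.length;
    console.log('average response time:', avg.toFixed(1), 'ms');
}

run();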

But what target do you aim for? The Nielsen Norman Group outlined some general guidelines:

  • One tenth of a second response time feels like the direct result of a user’s action. When the user clicks a button, for example, the button should animate within a tenth of a second, and ideally, the entire interaction should complete within that time. Then the user feels like they are in control of the situation: they did something and it made the computer respond!
  • One second feels like a seamless interaction. The user did something, and it made the computer go do something complicated and come back with an answer. It’s less like moving a lever or pressing a button, and more like waiting for an elevator’s door to close after having pressed the button: you don’t doubt that you made it happen, but it did take a second to respond.
  • Ten seconds and you’ve entirely lost their attention. They’ve gone off to make coffee. This is a slow system.
Baby Bunny Rabbit
Bunny!

Load Testing

Load testing is testing the application under load. In this situation, you simulate the effects of a number of users all using your system at once. This generally ends up having one of two goals: either you’re trying to determine what the maximum capacity of your system is, or you’re trying to figure out if the system gracefully degrades when it exceeds that maximum capacity. Either way, you typically start with a few users and “ramp up” the number of users over a period of time. You should figure out before you begin what the maximum capacity you intend to have is, so you know if you’re on target or need to do some tuning.
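
As a toy illustration of that ramp-up pattern, here’s a Node.js sketch with made-up numbers; real load tools like jMeter manage this far more carefully:

// Ramp up one simulated "user" per second until we hit a target,
// each user looping GET requests with a little think time.
// All numbers and the URL are invented for illustration.
const https = require('https');

const TARGET_USERS = 10;
const RAMP_INTERVAL_MS = 1000;
const URL = 'https://example.com/';

function simulateUser(id) {
    (function loop() {
        const start = Date.now();
        https.get(URL, (res) => {
            res.resume();
            res.on('end', () => {
                console.log('user ' + id + ': ' + (Date.now() - start) + ' ms');
                setTimeout(loop, 300); // think time between requests
            });
        });
    })();
}

let started = 0;
const ramp = setInterval(() => {
    simulateUser(++started);
    if (started >= TARGET_USERS) clearInterval(ramp);
}, RAMP_INTERVAL_MS);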

Like the previous tests, this can be done at any layer; you can fire off a ton of requests at your API server, for example, or simulate typical usage of a user loading front-end pages that fire requests at the API tier. Often, you’ll do a mix of approaches: you’ll generate load using API calls, then simulate a user’s degraded experience as though they were browsing the site.

There’s an interesting twist on load testing where you attempt to test for sustained load: what happens to your application over a few days of peak usage, rather than a few hours of load with downtime in between? This can sometimes catch interesting memory leaks and so forth that you wouldn’t catch in a shorter test.

Bunny under load

Volume Testing

While load testing handles the case of a large amount of users at once, volume testing is more focused on the data: what happens when there’s a LOT of data involved. Does the system slow down finding search results when there’s millions or billions of records? Does the front-end slow to a crawl when the API returns large payloads of data? Does the database run out of memory executing complex query plans when there’s a lot of records?
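
A crude way to get a feel for that kind of data growth is to time the same operation at increasing record counts. Here’s a sketch with invented data; a real volume test would exercise the actual database and API instead:

// Time a linear search as the record count grows by 10x each step.
// The records and the search term are invented for illustration.
function makeRecords(n) {
    const records = [];
    for (let i = 0; i < n; i++) {
        records.push({ id: i, name: 'record-' + i });
    }
    return records;
}

for (const n of [1e4, 1e5, 1e6]) {
    const records = makeRecords(n);
    const start = Date.now();
    records.filter((r) => r.name.includes('999'));
    console.log(n + ' records: ' + (Date.now() - start) + ' ms');
}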

A high volume of bunnies

Do you load test at your organization? How have you improved your load testing over the years? What challenges do you face?