Selenium Grid on Docker and Vagrant: Part 2

Last time we got Vagrant configured to run a single VM with three docker containers: a Selenium Grid hub, a Chrome node, and a Firefox node. This is a good start, but I wanted to configure a Selendroid node to round out the browser selection. That’s when things got a little… messy.

So I investigated how the Docker images I was already using were constructed, and discovered a few key points:

  • Docker images are defined by Dockerfiles in the same way Vagrant VMs are defined by Vagrantfiles. The format and syntax are totally different, but both are more-or-less human-readable flatfiles that explain how to set up a system. So far so good.
  • Dockerfiles are nestable, and in fact, are often nested. The ones from Selenium HQ have a clear hierarchy. This pleased me, because I figured it gave me a nice stable base to work on: my file, like the Chrome and Firefox files, would inherit from the node-base image, but with tweaks specific to Selendroid.

So here’s the top of my Dockerfile:

FROM selenium/node-base:2.53.0
MAINTAINER Bgreen <[redacted]>

USER root

I cracked open the Chrome Dockerfile and the Dockerfile reference guide and got reading. It looked pretty straightforward at first: just write a bash script, but stick “RUN” in front of it. Spoiler alert: as I started working on my own script, I learned that this was entirely the wrong way to go about writing a Dockerfile. Docker has a lot of useful commands other than “RUN”, and it wasn’t long before I was breaking apart my scripts as I learned how to put the Dockerfile together properly.
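The distinction that eventually clicked for me: RUN executes while the image is being built and bakes its result into the image, while CMD names the process that runs when a container starts from that image. A toy contrast (the image name and file names here are illustrative, not from this project):

```dockerfile
FROM ubuntu:14.04

# Build time: runs once, and the result is baked into the image
RUN apt-get update && apt-get install -y curl

# Copies a file from the build context into the image
COPY start.sh /opt/bin/start.sh

# Sets an environment variable for later build steps and for containers
ENV GREETING hello

# Run time: this is what executes when a container starts from the image
CMD ["/opt/bin/start.sh"]
```

Mixing up RUN and CMD matters: a long-running server launched with RUN blocks the build forever, whereas one launched with CMD runs when the container does.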

Looking at the Selendroid grid instructions and the Selendroid getting started guide, there were three major steps:

  • Install the JDK
  • Install the Android SDK
  • Install Selendroid and start it in grid mode

Step 1 appeared to be done already, by virtue of the Node-Base dockerfile. This was a dangerous and ultimately wrong assumption, but it was one I worked with for over a day before I realized my mistake. It turns out, the JRE was installed in the base image… under the name openJDK. Nice.

Java installation:

#===============
# JAVA
#===============
RUN apt-get update && apt-get install -y openjdk-8-jdk
RUN ls -l /usr/lib/jvm/
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64

Now it was time to install the Android SDK. And here I ran into the first massive bunch of problems. I spent several hours fighting with the system, making small tweaks, before I realized I’d accidentally installed Android Studio instead of Android SDK and had to start over.

I originally was going to do a wget followed by a tar, but then I learned about Docker’s ADD command. The ADD command takes a file located in the same directory structure as the Dockerfile and copies it into the directory structure inside the container. If the file is a tarball, it is untarred into a folder as it is copied, removing the need to write an explicit tar command (a major plus, as tar commands are always annoying to write). I chose to download the tar into the local file structure to avoid the network hit, and used the ENV command to set ANDROID_HOME the same way I set JAVA_HOME:

#===============
# Android SDK
#===============
ADD android-sdk_r24.4.1-linux.tgz /opt/selenium/
ENV ANDROID_HOME=/opt/selenium/android-sdk-linux
ENV PATH=${PATH}:${ANDROID_HOME}/tools:${ANDROID_HOME}/platform-tools
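What ADD does with a local tarball can be sketched in plain shell (scratch paths here, not the real SDK):

```shell
#!/bin/sh
# Simulate what `ADD android-sdk_r24.4.1-linux.tgz /opt/selenium/` does:
# the tarball is extracted into the destination as it is copied.
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/android-sdk-linux/tools"
echo demo > "$workdir/android-sdk-linux/tools/android"
tar -czf "$workdir/sdk.tgz" -C "$workdir" android-sdk-linux

# The equivalent of ADD's copy-plus-implicit-untar step:
mkdir -p "$workdir/opt/selenium"
tar -xzf "$workdir/sdk.tgz" -C "$workdir/opt/selenium"
test -f "$workdir/opt/selenium/android-sdk-linux/tools/android" && echo "extracted"
```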

However, upon installation, there is no such folder as ANDROID_HOME/platform-tools. This is because it is only created once you fire up the Android SDK tool and begin downloading SDKs to develop with. So I figured I’d do RUN android update sdk --no-ui. Then I learned you have to accept the license agreement. So I updated my code to the very idiomatic RUN yes | android update sdk --no-ui. And well… the results were mildly amusing, but not what I hoped for:

Do you accept the license 'google-gdk-license-35dc2951' [y/n]:
Unknown response 'y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y
y

Dear maintainers of linux software: Don’t break the ‘yes’ command! Sincerely, Bay.

Thanks to Stack Overflow, I found this:

#===============
# Android SDK
#===============
ADD android-sdk_r24.4.1-linux.tgz /opt/selenium/
ENV ANDROID_HOME=/opt/selenium/android-sdk-linux
ENV PATH=${PATH}:${ANDROID_HOME}/tools:${ANDROID_HOME}/platform-tools

#The following downloads the platform-tools folder
RUN ( sleep 5 && while [ 1 ]; do sleep 1; echo y; done ) \
    | android update sdk --no-ui --all \
    --filter tool,platform-tools,android-23,build-tools-23.0.3
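The sleep-and-echo loop works where yes did not because it emits one answer per second instead of flooding the prompt reader; the SDK tool apparently consumes its y/n answers in a way that chokes on a firehose of “y”. In miniature (the prompt loop here is a stand-in, not the real SDK tool):

```shell
#!/bin/sh
# A stand-in prompt loop: consumes exactly one answer per question, like the
# SDK's license prompts.
ask() {
    for q in license-1 license-2 license-3; do
        read -r reply
        echo "accept $q? -> $reply"
    done
}
# Throttled answers, one per loop pass, mirroring the Dockerfile's while loop
# (bounded at three passes here instead of running forever):
answers() { i=0; while [ $i -lt 3 ]; do echo y; i=$((i+1)); done; }
result=$(answers | ask)
echo "$result"
```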

That got us to our next section: installing Selendroid. It seemed pretty simple:

#===============
# Selendroid
#===============
ADD selendroid-standalone-0.17.0-with-dependencies.jar /opt/selenium/selendroid.jar
ADD selendroid-grid-plugin-0.17.0.jar /opt/selenium/selendroid-grid.jar

RUN java -jar /opt/selenium/selendroid.jar
RUN java -Dfile.encoding=UTF-8 -cp "/opt/selenium/selendroid-grid.jar:/opt/selenium/selendroid.jar" org.openqa.grid.selenium.GridLauncher -capabilityMatcher io.selendroid.grid.SelendroidCapabilityMatcher -role hub -host 127.0.0.1 -port 4444

But it didn’t work. And this, dear reader, is where I was stuck for hours, tearing my hair out in frustration. There were three errors. The first, it seems, is a red herring: there’s nothing actually wrong here. (So why is it marked “SEVERE”? Bad usability, Selendroid!)

    android: SEVERE: Error executing command: /opt/selenium/android-sdk-linux/build-tools/23.0.3/aapt remove /tmp/android-driver7255065332626262791.apk META-INF/NDKEYSTO.RSA
    android: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1)
    android:    at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:377)
    android:    at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:160)
    android:    at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:147)
    android:    at io.selendroid.standalone.io.ShellCommand.exec(ShellCommand.java:49)
    android:    at io.selendroid.standalone.android.impl.DefaultAndroidApp.deleteFileFromWithinApk(DefaultAndroidApp.java:112)
    android:    at io.selendroid.standalone.builder.SelendroidServerBuilder.deleteFileFromAppSilently(SelendroidServerBuilder.java:133)
    android:    at io.selendroid.standalone.builder.SelendroidServerBuilder.resignApp(SelendroidServerBuilder.java:148)
    android:    at io.selendroid.standalone.server.model.SelendroidStandaloneDriver.initApplicationsUnderTest(SelendroidStandaloneDriver.java:172)
    android:    at io.selendroid.standalone.server.model.SelendroidStandaloneDriver.<init>(SelendroidStandaloneDriver.java:94)
    android:    at io.selendroid.standalone.server.SelendroidStandaloneServer.initializeSelendroidServer(SelendroidStandaloneServer.java:63)
    android:    at io.selendroid.standalone.server.SelendroidStandaloneServer.<init>(SelendroidStandaloneServer.java:52)
    android:    at io.selendroid.standalone.SelendroidLauncher.launchServer(SelendroidLauncher.java:65)
    android:    at io.selendroid.standalone.SelendroidLauncher.main(SelendroidLauncher.java:117)

The second drove me nuts because it outright lied to me. The file it complained about not having was right there, with executable permissions, owned by root (which I was operating as)!

INFO: Executing shell command: /opt/selenium/android-sdk-linux/build-tools/23.0.3/aapt remove /tmp/android-driver2951817352746346830.apk META-INF/MANIFEST.MF
Jul 06, 2016 8:22:29 AM io.selendroid.standalone.io.ShellCommand exec
SEVERE: Error executing command: /opt/selenium/android-sdk-linux/build-tools/23.0.3/aapt remove /tmp/android-driver2951817352746346830.apk META-INF/MANIFEST.MF
java.io.IOException: Cannot run program "/opt/selenium/android-sdk-linux/build-tools/23.0.3/aapt" (in directory "."): error=2, No such file or directory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
        at java.lang.Runtime.exec(Runtime.java:620)

It turns out that the “No such file or directory” was coming from inside the named executable, not referring to the executable itself: aapt is a 32-bit binary, and the 32-bit libraries it needs were missing from the image. I installed the dependencies I’d missed using RUN apt-get update && apt-get install -y lib32stdc++6 lib32z1.
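That misdirection generalizes: a “No such file or directory” error for a file that plainly exists usually means something the file needs (its loader or interpreter) is missing, not the file itself. The same effect can be reproduced in miniature with a script whose interpreter does not exist:

```shell
#!/bin/sh
# A file that exists and is executable can still fail with a not-found error
# if something it needs is missing; here, a shebang pointing at a
# nonexistent interpreter.
status=0
workdir=$(mktemp -d)
printf '#!%s/no-such-interpreter\necho hi\n' "$workdir" > "$workdir/prog"
chmod +x "$workdir/prog"
"$workdir/prog" 2>/dev/null || status=$?
echo "prog exists: $(test -f "$workdir/prog" && echo yes), yet running it failed (status $status)"
```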

Now I had a different problem involving the keytool:

Jul 07, 2016 6:17:04 AM io.selendroid.standalone.io.ShellCommand exec
INFO: Executing shell command: /usr/lib/jvm/java-8-openjdk-amd64/bin/keytool -genkey -v -keystore /home/seluser/.android/debug.keystore -storepass android -alias androiddebugkey -keypass android -dname CN=Android Debug,O=Android,C=US -storetype JKS -sigalg MD5withRSA -keyalg RSA -validity 9999
Jul 07, 2016 6:17:06 AM io.selendroid.standalone.io.ShellCommand exec
SEVERE: Error executing command: /usr/lib/jvm/java-8-openjdk-amd64/bin/keytool -genkey -v -keystore /home/seluser/.android/debug.keystore -storepass android -alias androiddebugkey -keypass android -dname CN=Android Debug,O=Android,C=US -storetype JKS -sigalg MD5withRSA -keyalg RSA -validity 9999
org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1)
        at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:377)
        at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:160)
        at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:147)
        at io.selendroid.standalone.io.ShellCommand.exec(ShellCommand.java:49)
        at io.selendroid.standalone.builder.SelendroidServerBuilder.signTestServer(SelendroidServerBuilder.java:277)
        at io.selendroid.standalone.builder.SelendroidServerBuilder.resignApp(SelendroidServerBuilder.java:154)
        at io.selendroid.standalone.server.model.SelendroidStandaloneDriver.initApplicationsUnderTest(SelendroidStandaloneDriver.java:172)
        at io.selendroid.standalone.server.model.SelendroidStandaloneDriver.<init>(SelendroidStandaloneDriver.java:94)
        at io.selendroid.standalone.server.SelendroidStandaloneServer.initializeSelendroidServer(SelendroidStandaloneServer.java:63)
        at io.selendroid.standalone.server.SelendroidStandaloneServer.<init>(SelendroidStandaloneServer.java:52)
        at io.selendroid.standalone.SelendroidLauncher.launchServer(SelendroidLauncher.java:65)
        at io.selendroid.standalone.SelendroidLauncher.main(SelendroidLauncher.java:117)


This simply claims that the process exited with a failure; neither the stack trace nor the error message is useful. When I tried to execute the command myself, it complained about an invalid command-line option, which told me the -dname value needed quotes around it. I couldn’t change how Selendroid invoked keytool, though I could generate my own keystore and pass it in. However, I also had the third error to deal with:

SEVERE: Error building server: io.selendroid.standalone.exceptions.ShellCommandException: Error executing shell command: /usr/lib/jvm/java-8-openjdk-amd64/bin/jarsigner -sigalg MD5withRSA -digestalg SHA1 -signedjar /tmp/resigned-android-driver694668080026603748.apk -storepass android -keystore /root/.android/debug.keystore /tmp/android-driver694668080026603748.apk androiddebugkey
Exception in thread "main" java.lang.RuntimeException: io.selendroid.standalone.exceptions.ShellCommandException: Error executing shell command: /usr/lib/jvm/java-8-openjdk-amd64/bin/jarsigner -sigalg MD5withRSA -digestalg SHA1 -signedjar /tmp/resigned-android-driver694668080026603748.apk -storepass android -keystore /root/.android/debug.keystore /tmp/android-driver694668080026603748.apk androiddebugkey
        at io.selendroid.standalone.server.model.SelendroidStandaloneDriver.initApplicationsUnderTest(SelendroidStandaloneDriver.java:175)
        at io.selendroid.standalone.server.model.SelendroidStandaloneDriver.<init>(SelendroidStandaloneDriver.java:94)
        at io.selendroid.standalone.server.SelendroidStandaloneServer.initializeSelendroidServer(SelendroidStandaloneServer.java:63)
        at io.selendroid.standalone.server.SelendroidStandaloneServer.<init>(SelendroidStandaloneServer.java:52)
        at io.selendroid.standalone.SelendroidLauncher.launchServer(SelendroidLauncher.java:65)
        at io.selendroid.standalone.SelendroidLauncher.main(SelendroidLauncher.java:117)
Caused by: io.selendroid.standalone.exceptions.ShellCommandException: Error executing shell command: /usr/lib/jvm/java-8-openjdk-amd64/bin/jarsigner -sigalg MD5withRSA -digestalg SHA1 -signedjar /tmp/resigned-android-driver694668080026603748.apk -storepass android -keystore /root/.android/debug.keystore /tmp/android-driver694668080026603748.apk androiddebugkey
        at io.selendroid.standalone.io.ShellCommand.exec(ShellCommand.java:56)
        at io.selendroid.standalone.builder.SelendroidServerBuilder.signTestServer(SelendroidServerBuilder.java:296)
        at io.selendroid.standalone.builder.SelendroidServerBuilder.resignApp(SelendroidServerBuilder.java:154)
        at io.selendroid.standalone.server.model.SelendroidStandaloneDriver.initApplicationsUnderTest(SelendroidStandaloneDriver.java:172)
        ... 5 more
Caused by: io.selendroid.standalone.exceptions.ShellCommandException:

This executable was missing altogether, and rightly so: I couldn’t find it on the filesystem. And that was when I realized my “JDK” was a JRE; jarsigner ships with the JDK, not the JRE. Installing the proper JDK, shown above, took care of both of those errors. Lessons learned.
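The keytool half of this, in particular, boils down to shell word-splitting: -dname CN=Android Debug,O=Android,C=US without quotes is three arguments, not two, so keytool sees a stray option it doesn’t recognize. A quick check:

```shell
#!/bin/sh
# Count how many arguments the shell would actually hand to keytool.
set -- -dname CN=Android Debug,O=Android,C=US     # unquoted, as in the failing log
unquoted=$#
set -- -dname "CN=Android Debug,O=Android,C=US"   # quoted: -dname plus one value
quoted=$#
echo "unquoted: $unquoted args; quoted: $quoted args"
```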

(As a side note: one thing I really like about Docker is the caching strategy. It only re-installed the Android SDKs if I changed that step or an earlier one, preferring the cached version while I worked on the later steps, which saved me a ton of time and frustration.)

So now we have a working (sort of) Dockerfile! Two problems left:

  • There are no emulators available for Selendroid. Oops!
  • The Dockerfile starts the Selenium server and then hangs forever, because it doesn’t return from that command. TBD

Now, normally you’d fire up the Android Studio GUI and create yourself an AVD for Selendroid to use, but I’m doing it all the hard way, via the command line in my Dockerfile. The first thing I have to do is download an ABI to make an AVD out of:

#The following downloads the platform-tools folder and the ABI
RUN ( sleep 5 && while [ 1 ]; do sleep 1; echo y; done ) \
    | android update sdk --no-ui --all \
    --filter tool,platform-tools,android-23,sys-img-x86-android-23,build-tools-23.0.3

And then, I create an AVD out of it. Note that we are asked one question I couldn’t get rid of using command-line flags, so I used the “echo” command to send a newline and accept the default option (no hardware profile):

#Create AVD. Echo sends a newline and nothing else here, for accepting the default to the question asked.
RUN echo | android create avd --name Default --target android-23 --abi x86

Now before it hangs forever, it clearly states:

android: INFO: Shell command output
android: -->
android: Available Android Virtual Devices:
android:     Name: Default
android:     Path: /root/.android/avd/Default.avd
android:   Target: Android 6.0 (API level 23)
android:  Tag/ABI: default/x86
android:     Skin: WVGA800
android: <--
android:

Progress made!

It was then that I began to really dig into the nitty-gritty of how the base image starts the Selenium server. It seems that Selenium HQ chose to use a shell script to wrangle a series of environment variables; since they know the product better than I do, I went down the same path and created my own version of their script, modified for Selendroid:

#!/bin/bash

source /opt/bin/functions.sh

java ${JAVA_OPTS} -jar /opt/selenium/selendroid.jar -keystore /home/seluser/debug.keystore &
NODE_PID=$!

# Note the @ prefix on --data: without it, curl POSTs the literal path string rather than the file contents
curl -H "Content-Type: application/json" -X POST --data @/opt/selenium/config.json http://$HUB_PORT_4444_TCP_ADDR:$HUB_PORT_4444_TCP_PORT/grid/register

trap shutdown SIGTERM SIGINT
wait $NODE_PID

You can see how much shorter it is; Selendroid is weird in that it doesn’t seem to take most of the config options required, and requires me to manually curl the config to the hub node.
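For reference, what gets curled to /grid/register is a standard Grid 2 node registration payload, with a capabilities list and a configuration block. Mine looked roughly like the sketch below; the configuration field names are the standard Grid 2 ones, but the Selendroid capability values are my best guess, so verify them against what your hub actually reports:

```json
{
  "capabilities": [
    {
      "browserName": "selendroid",
      "maxInstances": 1,
      "seleniumProtocol": "Selenium"
    }
  ],
  "configuration": {
    "proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
    "maxSession": 1,
    "port": 5555,
    "register": true,
    "registerCycle": 5000,
    "hubHost": "hub",
    "hubPort": 4444
  }
}
```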

A note: be sure to save this script with Unix line endings. If you don’t, you’ll get a very strange error reading “[8] System error: no such file or directory”, and that’s awful to figure out because it’s so generic. I also got really comfortable SSHing into the underlying VM to run “docker rm -f” at this point, because the container was building fine but erroring out at runtime, which left name conflicts behind when I tried to rebuild.
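The reason that error is so unhelpful: with Windows line endings, the kernel reads the shebang as “/bin/bash\r”, and there is no interpreter by that name, so you get the same misleading not-found failure as before. Checking for and stripping carriage returns is quick (this is essentially all dos2unix does):

```shell
#!/bin/sh
# Detect and strip CRLF line endings from a script.
set -e
f=$(mktemp)
printf '#!/bin/bash\r\necho hi\r\n' > "$f"    # simulate a file saved on Windows
before=$(tr -cd '\r' < "$f" | wc -c)          # count carriage-return bytes
tr -d '\r' < "$f" > "$f.unix" && mv "$f.unix" "$f"
after=$(tr -cd '\r' < "$f" | wc -c)
echo "CRs before: $before, after: $after"
```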

At this point, the container built but did not run successfully. That means the debugging strategy changes from carefully inspecting the Vagrant output to reading the container’s logs with “docker logs selenium-selendroid”. Here I found several of our older problems resurfacing, which was incredibly frustrating; I was sure those had been resolved already. It was all stupid little fixes, like generating the keystore in a known location as root and then passing it into Selendroid (an approach I had tried earlier but found unnecessary, and one that is already accounted for in the shell script above), or making sure to generate the AVD as the same user that would run Selendroid so it ended up in the right location.

At this point we have a working Selendroid Docker container! But… it doesn’t register with the hub correctly. Also, the hub’s web console isn’t accessible, which makes debugging tricky. So I’m taking a breather, because it’s been three days and I’m frustrated. We’ll return in part 3 to make this fully functional. Feel free to comment if you have tips and tricks!

Our current dockerfile:

FROM selenium/node-base:2.53.0
MAINTAINER Bgreen <[redacted email]>

USER root

#===============
# JAVA
#===============
RUN apt-get update && apt-get install -y openjdk-8-jdk
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64

#Keystore generation is broken somehow?
RUN /usr/lib/jvm/java-8-openjdk-amd64/bin/keytool -genkey -v -keystore /home/seluser/debug.keystore \
    -storepass android -alias androiddebugkey -keypass android \
    -dname "CN=Android Debug, O=Android, C=US" -storetype JKS -sigalg MD5withRSA \
    -keyalg RSA -validity 9999

RUN chown seluser /home/seluser/debug.keystore

#===============
# Android SDK
#===============
ADD android-sdk_r24.4.1-linux.tgz /opt/selenium/
ENV ANDROID_HOME=/opt/selenium/android-sdk-linux
ENV PATH=${PATH}:${ANDROID_HOME}/tools:${ANDROID_HOME}/platform-tools

#The following downloads the platform-tools folder and the ABI
RUN ( sleep 5 && while [ 1 ]; do sleep 1; echo y; done ) \
    | android update sdk --no-ui --all \
    --filter tool,platform-tools,android-23,sys-img-x86-android-23,build-tools-23.0.3

#========================
# Selenium Configuration
#========================
COPY config.json /opt/selenium/config.json

#========================
# Extra libraries
#========================
RUN apt-get update && apt-get install -y lib32stdc++6 lib32z1
RUN apt-get install -y curl

#===============
# Selendroid
#===============
ADD selendroid-standalone-0.17.0-with-dependencies.jar /opt/selenium/selendroid.jar
ADD selendroid-grid-plugin-0.17.0.jar /opt/selenium/selendroid-grid.jar

COPY startSelendroid.sh /opt/bin/
RUN chmod +x /opt/bin/startSelendroid.sh

#===============
# Start the grid
#===============
USER seluser

#Create AVD. Echo sends a newline and nothing else here, for accepting the default to the question asked.
RUN echo | android create avd --name Default --target android-23 --abi x86

CMD ["/opt/bin/startSelendroid.sh"]


Selenium Grid on Docker and Vagrant: Part 1

I’ve been putting together a quick proof-of-concept here at work about how we could use Docker to run a Selenium Grid. I’m not sure we’ll go that route, but I was curious how it could be done.

One of the main advantages of doing this sort of rough proof in Vagrant is that it becomes very portable. At the end of the day, I have a mini testing cloud I can run my tests against — and any member of my team can check out a few files and have their own mini testing cloud. It’s pretty neat, and it means that even if we decide against implementing this on a larger scale, I get some value out of it in years to come.

I’ll assume you’re passingly familiar with vagrant already, and have at least read the getting started docs. I was an absolute newbie to Docker when I started, so this discussion will assume no prior Docker knowledge. If you do know Docker, feel free to tell me how wrong I am in the comments section 🙂

I went down the path of using a Docker Provisioner for an hour or so before I realized that was the wrong path: I want to use the Docker Provider. The way to think of this is like a series of super tiny VMs which have to live on a giant VM in much the same way lily pads decorate the top of a pond. Docker as a provider can manage the whole set of lily pads and knows nothing about the pond; Docker as a provisioner can add a lily pad to your existing pond ecosystem without making as many waves.

So we have a secret VM, and a series of explicit Docker containers. Now, this was a proof of concept, but I actually care what OS that secret VM uses; if it’s not compatible with RHEL 6, then I won’t be able to make a good case for it in the end. Lots of shiny new toys only work on Ubuntu, after all.

Vagrant by default picks the tiniest OS it can find, just enough to support the containers on top. Usually that’s a good decision, but as we just discussed I want that secret VM to be CentOS 6 instead. This is where things get a little difficult: to specify your own VM to use, you give Vagrant another Vagrantfile.

Because Vagrantfiles need to be called “Vagrantfile”, you have to create a subfolder; mine is “dockerHost/Vagrantfile” for lack of better terminology. I also wanted to limit the amount of RAM Virtualbox would eat up, and enable networking (this will become important later). Try to think through what you’ll need, because every time you need to destroy and recreate this box, it’s going to suck and feel like it takes forever.

My dockerHost vagrantfile:

Vagrant.configure("2") do |config|
    # Every Vagrant development environment requires a box. You can search for
    # boxes at https://atlas.hashicorp.com/search.
    config.vm.box = "bento/centos-6.7"


    # Create a forwarded port mapping which allows access to a specific port
    # within the machine from a port on the host machine. In the example below,
    # accessing "localhost:8080" will access port 80 on the guest machine.
    config.vm.network "forwarded_port", guest: 80, host: 8088

    # Create a private network, which allows host-only access to the machine
    # using a specific IP.
    config.vm.network "private_network", ip: "192.168.33.10"


    # Provider-specific configuration so you can fine-tune various
    # backing providers for Vagrant. These expose provider-specific options.
    config.vm.provider "virtualbox" do |vb|
        # Customize the amount of memory on the VM:
        vb.memory = "1024"

        # enable network features
        vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
        vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
    end

    # Docker provisioner will install docker when given no options
    # This prepares the box to be a base image for the docker-related script
    config.vm.provision "docker"

    # The following line terminates all ssh connections. Therefore
    # Vagrant will be forced to reconnect.
    # That's a workaround to have the docker command in the PATH
    config.vm.provision "shell", inline:
        "ps aux | grep 'sshd:' | awk '{print $2}' | xargs kill"

    # Below are to fix an issue with docker provisioning
    config.vm.provision "shell", inline: "sudo chmod 777 /var/lib/docker "

end

A few things to point out:

  • I needed to enable network features, but not right away; that’s a later addition, for things we won’t get to until part 2.
  • Docker as a provisioner comes back into the mix in a rather unintuitive way. When given no images to load or dockerfiles to build, the provisioner simply installs Docker and exits. This makes a very easy, platform-agnostic way to install Docker. I was halfway through a shell script to do the provisioning when I learned this, and frankly, I just didn’t want to bother learning how to install Docker. On the other hand, this step takes forEVER to run, so you don’t want to recreate the VM often.
  • Docker isn’t available as a command until the ssh has been kicked out and reconnected. This is probably a Vagrant bug. I found the workaround listed above and stopped looking, because I didn’t want to spend more time on this than necessary.
  • The last line isn’t needed until part 2 of this series, but if you plan to build your own docker images, you probably want it.

When this is run with “vagrant up”, it creates a VM that has Docker installed. You probably want to test this before moving on, but once you do, you won’t need to explicitly start this again.

So let’s go to our upper-level Vagrantfile. I looked around and very quickly found some Docker images I want to use out of the box: https://github.com/SeleniumHQ/docker-selenium. The first one to get running is the hub node, the central node for our grid. We configure Docker like any other provider:

Vagrant.configure("2") do |config|
    # The most common configuration options are documented and commented below.
    # For a complete reference, please see the online documentation at
    # https://docs.vagrantup.com.

    # Skip checking for an updated Vagrant box
    config.vm.box_check_update = false

    # Always use Vagrant's default insecure key
    config.ssh.insert_key = false

    # Disable synced folders (prevents an NFS error on "vagrant up")
    config.vm.synced_folder ".", "/vagrant", disabled: true

    # Configure the Docker provider for Vagrant
    config.vm.provider "docker" do |docker|

        # Define the location of the Vagrantfile for the host VM
        # Comment out this line to use default host VM
        docker.vagrant_vagrantfile = "dockerHost/Vagrantfile"

        # Specify the Docker image to use
        docker.image = "selenium/hub"

        # Specify a friendly name for the Docker container
        docker.name = 'selenium-hub'
    end
end

Here we can see:

  • I’ll confess I stole that synced-folders workaround from another tutorial. It’s probably cargo-culting, since I never ran into that issue myself; on the other hand, I’m not using shared folders here, and neither should you. If you need shared folders, use them in the lower level. If you need to move files into your container, that should be done using Docker’s native utilities for filesystem manipulation, which will be covered in part 2.
  • The vagrantfile for the host VM is the vagrantfile we built above, the centOS one.
  • The image to use is just the name of the image. Much like vagrant boxes, this will search the central repository and find the right container image to use, so don’t worry about this unless it fails.
  • The friendly name is used in the log output, so make it something you’ll recognize.

Once that launches successfully, the hard part is done: we now have a container on top of a custom VM. Now we just add nodes, which are also provided from the same source. Of course, now we’re moving from a single-machine setup to a multi-machine setup, so we use the multi-machine namespace tools Vagrant provides. We also should probably open port 4444 so that we can actually connect to the grid from our proper host machine.

# Parallelism will damage the links 
ENV['VAGRANT_NO_PARALLEL'] = 'yes'

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
    # The most common configuration options are documented and commented below.
    # For a complete reference, please see the online documentation at
    # https://docs.vagrantup.com.

    # Skip checking for an updated Vagrant box
    config.vm.box_check_update = false

    # Always use Vagrant's default insecure key
    config.ssh.insert_key = false

    # Disable synced folders (prevents an NFS error on "vagrant up")
    config.vm.synced_folder ".", "/vagrant", disabled: true

    config.vm.define "hub" do |hub|
        # Configure the Docker provider for Vagrant
        hub.vm.provider "docker" do |docker|

            # Define the location of the Vagrantfile for the host VM
            # Comment out this line to use default host VM
            docker.vagrant_vagrantfile = "dockerHost/Vagrantfile"

            # Specify the Docker image to use
            docker.image = "selenium/hub"

            # Specify port mappings
            # If omitted, no ports are mapped!
            docker.ports = ['4444:4444']

            # Specify a friendly name for the Docker container
            docker.name = 'selenium-hub'
        end
    end

    #We can parallel now
    ENV['VAGRANT_NO_PARALLEL'] = 'no'
    config.vm.define "chrome" do |chrome|
        # Configure the Docker provider for Vagrant
        chrome.vm.provider "docker" do |docker|

            # Define the location of the Vagrantfile for the host VM
            # Comment out this line to use default host VM that is
            # based on boot2docker
            docker.vagrant_vagrantfile = "dockerHost/Vagrantfile"

            # Specify the Docker image to use
            docker.image = "selenium/node-chrome:2.53.0"

            # Specify a friendly name for the Docker container
            docker.name = 'selenium-chrome'

            docker.link('selenium-hub:hub')
        end
    end

    config.vm.define "firefox" do |firefox|
        # Configure the Docker provider for Vagrant
        firefox.vm.provider "docker" do |docker|

            # Define the location of the Vagrantfile for the host VM
            # Comment out this line to use default host VM that is
            # based on boot2docker
            docker.vagrant_vagrantfile = "dockerHost/Vagrantfile"

            # Specify the Docker image to use
            docker.image = "selenium/node-firefox"

            # Specify a friendly name for the Docker container
            docker.name = 'selenium-firefox'

            docker.link('selenium-hub:hub')
        end
    end
end

Some things to note:

  • We use docker.link to link the nodes to the hub. This is a very Dockery thing, so I’m not entirely sure of the implications yet, but essentially, it pokes a bit of a hole in the container walls so that the processes in one container can see another container. This link creates our little network of grid nodes, allowing the nodes to register with the grid.
  • We can’t create the hub and the nodes in parallel, because the nodes need to link to the hub and the hub may not be started yet when they try to register. I tried to turn parallel back on after the hub was created but I don’t think it actually works. Oh well. Maybe move the hub to its own machine that’s always up and only control the nodes with Docker?
  • You can pin to a specific version of the container, as I did for chrome, or you can leave it at the latest, as I did for firefox. There’s no reason I did them both differently except that I was testing out options to become more comfortable with the setup.
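One concrete effect of docker.link worth knowing: the link injects environment variables named after the alias into the linked container, and those HUB_PORT_4444_TCP_* variables are exactly what the node startup scripts use to find the hub. A sketch with simulated values (the real values are assigned by Docker at run time):

```shell
#!/bin/sh
# What a node container linked with alias "hub" sees: link-generated
# environment variables. The address below is made up for illustration.
HUB_PORT_4444_TCP_ADDR=172.17.0.2
HUB_PORT_4444_TCP_PORT=4444
# A node startup script builds its registration endpoint from them:
register_url="http://$HUB_PORT_4444_TCP_ADDR:$HUB_PORT_4444_TCP_PORT/grid/register"
echo "$register_url"
```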

If you only need to test Chrome and Firefox, you can easily see how you can set up a small grid network this way. If you use vagrant heavily already with a cloud or private-cloud setup, you can just plug and play, replacing the virtualbox stuff with your provider of choice.

What about testing IE? Well, I started to put something together with the modern.ie VMs, as separate VMs that would be launched alongside the Docker provider and plugged back into the hub, but I ultimately abandoned that course of action. We wouldn’t use Vagrant for that task in a real setup; we’d just keep a permanent VM around to serve parallel requests for testing IE.

Instead, what interested me more was Android testing with Selendroid. There was no Docker image for Selendroid, however… yet. Docker as a provider also lets you build your own custom image, so that’s what I set out to do. Unfortunately, that doesn’t work yet. To Be Continued!
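For the curious, the Vagrant Docker provider’s hook for this is build_dir, which points at a directory containing a Dockerfile to build instead of pulling a published image. A minimal sketch of what the Selendroid node definition would look like (the selendroid directory name is my own; the Dockerfile inside it is the subject of the next post):

```ruby
config.vm.define "selendroid" do |selendroid|
    selendroid.vm.provider "docker" do |docker|
        docker.vagrant_vagrantfile = "dockerHost/Vagrantfile"

        # Build a custom image from a local Dockerfile
        # instead of pulling one from a registry
        docker.build_dir = "./selendroid"

        docker.name = 'selenium-selendroid'
        docker.link('selenium-hub:hub')
    end
end
```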

Teatime: Containers and VMs

Welcome back to Teatime! This is a weekly feature in which we sip tea and discuss some topic related to quality. Feel free to bring your tea and join in with questions in the comments section.

Tea of the week: Ceylon by Sub Rosa Tea. This is a nice, basic, bold tea, very astringent; it’s great for blending so long as you don’t choose delicate flavors to blend with. It really adds a kick!

Today’s Topic: Containers and virtualization

Today, I’m going to give you a brief overview of a technology I think might be helpful when running a test lab. So often as testers we neglect to follow trends in development; we figure, devs love their fancy toys, but the processes for testing software really don’t change, so there’s no need to pay much heed to what they’re doing. Too often we forget that, especially as automation engineers, we are writing software and using software and immersing ourselves in software just like they are. So it’s worth taking the time to attend tooling talks from time to time, see if there’s anything worth picking up.

Vagrant

A tool I’ve picked up and put down a few times over the past year or so is Vagrant. Vagrant makes it very easy to provision VMs; you can store the configuration for the server needed to run software right alongside the source code or binaries. Adopting a system in which developers keep the Vagrantfiles up to date and testers use them to spin up test instances ensures that every test we run is on a valid system configuration, and both teams know what the supported configurations entail.

At a high level, the workflow is simple:

  1. Create a Vagrantfile
  2. On the command line, type “vagrant up”
  3. Wait for your VM to finish booting
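On a machine with Vagrant and a provider installed, the steps above map to a couple of commands (vagrant init just writes a skeleton Vagrantfile for you; you can also create one by hand, as in the sample below):

```shell
vagrant init hashicorp/precise64   # step 1: generate a minimal Vagrantfile
vagrant up --provider=virtualbox   # steps 2-3: boot and provision the VM
vagrant ssh                        # log into the running box
```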

In order for this to work, however, you have to have what’s called a “provider” configured with Vagrant. This is the specific VM technology you’re using at your workplace; in my experiments, I’ve used VirtualBox, but if you’re already using something like VMware or a cloud provider like AWS for your test lab, there are integrations with those systems as well.

When creating the Vagrantfile, you first select a base image to use. Typically, this will be a machine with a given version of a given OS and possibly some software that’s more complex to install (to save time). HashiCorp, the makers of Vagrant, provide a number of base machines that can be used, or you can create your own. This of course means that every VM you bring up has the same OS and patch level to begin with.

The next step is provisioning the box with the specific software you’re using. This is where you would install your application, any dependencies it has, and any dependencies of those dependencies, and so on. Since everything is installed automatically, everything is installed at the same version and with the same configuration, making it really easy to load up a fresh box with a known good state. Provisioning can be as simple as a handful of shell scripts, or it can use any of a number of provisioning systems, such as Chef, Ansible, or Puppet.

Here is a sample vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

$provisionScript = <<SCRIPT
  #Node & NPM
  sudo apt-get install -y curl
  curl -sL https://deb.nodesource.com/setup | sudo bash -  #We have to install from a newer location, the repo version is too old
  sudo apt-get install -y nodejs
  sudo ln -s /usr/bin/nodejs /usr/bin/node
  cd /vagrant
  sudo npm install --no-bin-links
SCRIPT


# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "hashicorp/precise64"

  config.vm.provider "virtualbox" do |v|
    v.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"]
  end
  config.vm.network "private_network", ip: "192.168.33.11"
  
  #Hosts file plugin
  #To install: vagrant plugin install vagrant-hostsupdater
  #This will let you access the VM at servercooties.local once it's up
  config.vm.hostname = "servercooties.local"
  
  config.vm.provision "shell",
  inline: $provisionScript

end

I left a good deal of the tutorial text in place, just in case I needed to reference it. We’re using Ubuntu Precise Pangolin 64-bit as the base box, distributed by HashiCorp, and I use a plugin that modifies my hosts file so that I can always find the machine in my browser at a known hostname. The provision script is just a simple shell script embedded within the config; I’ve placed it at the top so it’s easy to find.

One other major feature that I haven’t yet played with is the ability for a single Vagrantfile to bring up multiple machines. If your cluster generally consists of, say, two web servers, a database server, and a load balancer, you can encode that all in a single vagrantfile to bring up a fresh cluster on demand. This makes it simple to bring up new testing environments with just one command.
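A multi-machine Vagrantfile for that hypothetical cluster might be sketched like this (machine names, box, and IPs are illustrative, not from a real setup):

```ruby
Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise64"

  # Two web servers, a database server, and a load balancer,
  # all defined in one file and brought up with a single `vagrant up`
  (1..2).each do |i|
    config.vm.define "web#{i}" do |web|
      web.vm.network "private_network", ip: "192.168.33.1#{i}"
    end
  end

  config.vm.define "db" do |db|
    db.vm.network "private_network", ip: "192.168.33.20"
  end

  config.vm.define "lb", primary: true do |lb|
    lb.vm.network "private_network", ip: "192.168.33.30"
  end
end
```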

Docker

I haven’t played much with Docker, but everyone seems to be raving about it, so I figured I’d touch on it as an alternative to Vagrant. Docker takes the metaphor of shipping containers, which revolutionized the shipping industry by abstracting away the handling of specific types of goods from the underlying business of moving goods around, and extends it to software. Before standard shipping containers, different goods were packed differently, required different packaging materials to keep them safe, and shipped in different amounts and weights; cargo handlers had to learn all these things, and merchants were a little wary of trusting their precious goods to someone who was less experienced. The invention of the standard shipping container changed all that: shipping companies just had to understand how to load and transport containers, and it was up to the manufacturers to figure out how to pack them. Docker does the same thing for software: operations staff just have to know how to deploy containers, while it’s up to the application developers to understand how to pack them.

Inside a Docker container, the application, its dependencies, and its required libraries reside, all pinned to the right versions and nestled inside the container. Outside, the operating system and any system-wide dependencies can be maintained by the operations staff. When it’s time to upgrade, they just remove the existing container and deploy the new one over top. Different containers with different versions of the same dependency can live side by side; each one can only see its own contents and the host’s contents.
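For instance, two containers pinned to different versions of the same runtime can run on one host without conflicting, since each sees only its own filesystem (the image names and tags here are made-up placeholders, purely to illustrate the idea):

```shell
# Each container ships its own runtime version; the host installs neither,
# and neither container can see the other's libraries
docker run -d --name legacy-app mycompany/app:node-0.10
docker run -d --name new-app    mycompany/app:node-4
```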

And thus, we reach the limit of my knowledge of Docker. Do you have more knowledge? Do you have experience with Vagrant? Share in the comments!