@grkvlt
Last active August 29, 2015 14:10
Clocker Acceptance Testing

Clocker Configuration

The following sections list the location configuration options to be used when testing Clocker releases. These options must be combined, giving 2 x 6 x 3 = 36 separate configurations. The default setup is a remote Ubuntu server running Brooklyn, with Clocker deployed to Ubuntu 14.04 VMs in SoftLayer London. Results are to be recorded in a spreadsheet, and successful completion of all configurations is required for a release GO or NO-GO decision, with performance to be included as a success criterion once a baseline has been established.
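The full test matrix can be enumerated mechanically, for example to generate the rows of the results spreadsheet. A minimal sketch in Python, using shorthand names for the configurations listed in the sections below:

```python
from itertools import product

# Shorthand labels for the configurations listed below.
servers = ["Local OSX 10.10 laptop", "Remote Ubuntu 14.04 VM"]
providers = ["SoftLayer", "Amazon EC2", "Google Compute Engine",
             "HP Helion", "CloudStack", "Pre-provisioned Linux"]
oses = ["Ubuntu 12.04", "Ubuntu 14.04", "CentOS 6.5"]

# Cartesian product: 2 x 6 x 3 = 36 configurations to record.
matrix = list(product(servers, providers, oses))
```

Each tuple in `matrix` corresponds to one row of the results spreadsheet.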

Sign-off on a release must be by both Clocker Architect and Lead Engineer @grkvlt and Cloudsoft VP Engineering @aledsage.

Brooklyn Server Location

  1. Local OSX 10.10 Laptop
  2. Remote Ubuntu 14.04 VM

Server Configuration

Minimum specifications for the Brooklyn server.

  • Java 1.7.0 JVM
  • 8 GiB RAM
  • 4 core 2 GHz CPU

Other Brooklyn server configurations may be tested if time and resources are available.

Docker Cloud Provider Location

  1. SoftLayer London and San Jose
  2. Amazon EC2 Dublin and California
  3. Google Compute Engine
  4. HP Helion
  5. CloudStack
  6. Pre-provisioned Linux servers

Cloud Operating System

  1. Ubuntu 12.04
  2. Ubuntu 14.04
  3. CentOS 6.5

The VMs or servers used must have the following minimum specification, available on EC2 as the c3.xlarge instance type.

  • 8 GiB RAM
  • 4 core 2.0 GHz CPU

Testing

All automated unit tests must pass, giving a green build on the Travis CI build server. An integration test suite should be configured and executed for all tagged release branches, as a condition for creating the release page on GitHub.

NOTE The current test coverage is extremely low, and must be greatly increased for this to be of value.

For each of the test configurations, the following blueprints (the YAML files will be made available separately) should be deployed and validated, and the list of features should also be exercised. To check the feature list, a Riak cluster should be used on its own, without a web application.

Blueprints

  1. Tomcat Webapp
  2. Riak Cluster with JBoss and Nginx
  3. Couchbase Cluster with Pillowfight and Scaling
  4. Node.JS and Redis TODO app
  5. Cassandra Cluster
  6. Push Diffusion and DNS server
  7. MySQL from Dockerfile
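As an illustration of the blueprint format only, a minimal sketch of the Tomcat webapp blueprint; the location name and WAR URL here are placeholders, and the canonical YAML files are distributed separately:

```yaml
# Hypothetical sketch; the canonical blueprint YAML is supplied separately.
name: tomcat-webapp-test
location: my-docker-cloud    # assumed name of the Clocker Docker location
services:
- type: brooklyn.entity.webapp.tomcat.TomcatServer
  brooklyn.config:
    war: http://example.com/sample.war   # placeholder WAR URL
```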

Features

  1. Headroom scaling
  2. Placement strategies
  3. Console
  4. Application shutdown
  5. Rebind and persistence

Performance

Measure CPU and RAM usage per container entity for a simple Java application entity with JMX sensors. Use profiling tools, and time application startup and scaling operations. Full performance testing should also include determining a Brooklyn baseline standard against which comparisons can be made.
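The timing measurements can be captured with a small harness; a minimal sketch, where the timed operation is a placeholder (a real run would invoke the Brooklyn REST API to deploy or scale the blueprint under test):

```python
import time

def timed(operation, label):
    """Run an operation, returning its label and wall-clock duration in seconds."""
    start = time.monotonic()
    operation()
    return label, time.monotonic() - start

# Placeholder operation standing in for an application startup or scaling call.
label, seconds = timed(lambda: time.sleep(0.01), "app-startup")
```

Recording such `(label, seconds)` pairs across releases gives the baseline against which later comparisons can be made.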
