Switching to Jenkins–SVN Revision Number as the Build Number

One of the things I really liked about the version of CCNet we were using was our custom build labeller.  It appended the SVN revision number to a base string.  Here is a snapshot of the Recent Builds widget on one of our Release Build reports:

[Screenshot: the Recent Builds widget, showing builds labelled with the SVN revision number]

This build server was working on the 5.2.1 branch, and the most recent build was of SVN revision 40468.  This allows developers to easily communicate which build their changes or fixes are in.  A tester, for example, can easily tell which build they want to download, install, and test…  That is very convenient, yes?

Jenkins does not have this feature.  In all fairness, neither did CCNet; we had to write a plugin.  In this case I found a Groovy plugin for Jenkins that I used to script up a solution.  The documentation for the plugin can be found here.

I made the first build step a system Groovy step and added the following code:

[Screenshot: the system Groovy build step containing the script walked through below]

Executing as a system Groovy step means the script runs in the same JVM as Jenkins, giving it access to all of the Jenkins objects…so we can alter the build display name.  The first two lines import the Jenkins (Hudson) packages.  The javadocs for what is available are located here.

import hudson.model.*
import hudson.util.*

The next line is a neat little trick that will give you a reference to the current build.

def build = Thread.currentThread().executable

We will first use this build object to get the workspace folder path.  This is the directory the build executes in.  We need this piece of information so that we can run the subversion “info” command in the root of the workspace.

def workspace = build.getWorkspace()

In Groovy, if you want to specify the directory from which to execute a shell command and you want to capture the command’s output, the easiest way is to use the Ant exec task, like so:

def ant = new AntBuilder()
ant.exec(executable: "svn", outputproperty: "output", dir: workspace){
    arg(line: "info")
}

svnInfo = ant.project.getProperty("output")

That captured the output of the svn “info” command into the variable svnInfo.  Now we can use a regular expression to extract the revision we are currently on.  Here is some example output from an svn “info” command:

C:\Projects\Chapter33\Trunk\Build>svn info
Path: .
URL: https://va33-repo01/svn/Chapter33/Trunk/Build
Repository Root: https://va33-repo01/svn/Chapter33
Repository UUID: f1ce2e10-74e2-f14b-9613-4d7166fa18d4
Revision: 40498
Node Kind: directory
Schedule: normal
Last Changed Author: bassettt
Last Changed Rev: 40484
Last Changed Date: 2012-03-14 16:24:37 -0400 (Wed, 14 Mar 2012)

We want to extract from all that the Last Changed Rev value, and we can do that with this code:

def pattern = /Last\s+Changed\s+Rev:\s+(\d+)/
def matcher = (svnInfo =~ pattern)

def buildLabel = 'Dev-' + matcher[0][1]

We take the extracted value (the first capture group of the first match), in this example 40484, and set a variable named buildLabel to “Dev-40484”.  Lastly, we set the Jenkins build display name.

println 'setting build label for this build'

build.setDisplayName(buildLabel)
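
Putting it all together, here is the entire system Groovy step assembled from the snippets above; the comments are mine:

import hudson.model.*
import hudson.util.*

// Reference to the currently executing build
def build = Thread.currentThread().executable

// The workspace directory the build is executing out of
def workspace = build.getWorkspace()

// Run "svn info" in the workspace root and capture its output
def ant = new AntBuilder()
ant.exec(executable: "svn", outputproperty: "output", dir: workspace){
    arg(line: "info")
}
def svnInfo = ant.project.getProperty("output")

// Pull the Last Changed Rev value out of the output
def pattern = /Last\s+Changed\s+Rev:\s+(\d+)/
def matcher = (svnInfo =~ pattern)
def buildLabel = 'Dev-' + matcher[0][1]

// Set the build display name, e.g. "Dev-40484"
println 'setting build label for this build'
build.setDisplayName(buildLabel)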

This results in a Build History widget that looks like this:

[Screenshot: the Build History widget showing builds labelled with the new display names, e.g. Dev-40484]

 

If you are familiar with Jenkins you might ask, why not just use the Build Name Setter Plugin?  I would have, but the svn env var Jenkins sets is often incorrect, as documented here (I too see this bug).  So I wrote my own solution…  I also use variations on this to show versions of the application as they move through the build pipeline.  Instead of grabbing the build name from subversion, downstream builds grab it from the triggering upstream build; a sketch of that follows below.  There are lots of interesting uses for this.
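
Here is a minimal sketch of that upstream variation, assuming the downstream job is triggered directly by the upstream build (the lookup logic here is illustrative, not the exact script I use):

import hudson.model.*

def build = Thread.currentThread().executable

// Find the upstream build that triggered this one
def cause = build.getCause(Cause.UpstreamCause)
if (cause != null) {
    def upstreamJob = Hudson.instance.getItemByFullName(cause.upstreamProject)
    def upstreamBuild = upstreamJob.getBuildByNumber(cause.upstreamBuild)

    // Reuse the upstream build's label, e.g. "Dev-40484"
    build.setDisplayName(upstreamBuild.getDisplayName())
}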

March 15, 2012 Continuous Integration

Continuous Integration Principles–Shared Read/Write Servers are Bad

At the beginning of most projects I have been on, the default starting position has been that dev and test each get their own environment to share.  Developers share the dev env and testers share the test env.  I think we need to change this.  Shared environments are good for only a select set of scenarios, and development and testing are not among them.

If shared environments have been the norm, what has changed to allow us to go a different route?  Two things: first, the increased performance of workstations and laptops, and second, improved automation in the dev/test workspace.  Increased hardware performance allows us to run more servers in a local workspace.  On my laptop I can easily run several WebLogic servers and an Oracle database with resources left over for an IDE and other dev/test tools.  Not so long ago this was impossible.  Managing all my servers, the applications running on them, and one or more databases would leave little time to do anything else on a fast-changing development project.  Automation alleviates this time sink.  Providing push-button, headless automation to set up, deploy, and manage these servers is key.  This too was thought impossible not too long ago, yet fully automated deployments are more and more common these days.

Okay, so maybe it is possible, you say.  Why would I want to do this?  Haven’t shared environments been working…

No, I don’t think they have been working.  The whole purpose behind this change is to enable developers and testers to test the application more easily, to spend less time identifying and recovering from collisions, and, for developers specifically, to spend less time chasing the version of the shared env.  When several people share an environment they can easily collide with each other.  There are many types of collision; they depend on the architecture of the application under test as well as its dependencies.  Many of these types of collision revolve around data.  There are schemes to minimize data collisions, yet no scheme is foolproof.  If every developer and tester has a private environment to work in, several things become possible:

  • testers and developers can easily roll back to any version of the application to replicate a bug, or return to the last working version
  • testers and developers can execute automated tests at will, or any kind of test for that matter; no more collisions due to data, execution resources, etc…
  • no one is forced to update to a new version of the application; in shared envs updates are normally done nightly, so, for example, a developer can work uninterrupted on a bug through one day and into the next, i.e. no more chasing the shared env version…

All of these things will dramatically increase the productivity of a team!

At this point you may think that I am claiming that a whole environment should be hosted locally on a developer/tester workspace.  I am not.  I only think that all read/write servers should be local.  Read-only servers could, and most of the time should, be shared.  Remember that what counts is how you interact with the server.  If your application only ever reads from it, then you should treat it as a read-only server.

If you search around the internet on this subject, you will mostly find that it has been written about from the database point of view.

Most of the issues that have been documented around shared database servers also affect shared web services, EJBs, .NET Remoting, REST, etc…

Just in case you are not yet convinced, let’s try some systems thinking.  This situation of shared servers or resources suggests we take a look at the archetype “Tragedy of the Commons”.  The following is taken from the site http://www.systems-thinking.org:

 

Tragedy of the Commons

The Tragedy of the Commons structure represents a situation where, to produce growth, two or more Reinforcing Loops are dependent on the availability of some common limited resource.

A’s activity interacts with the resources available adding to A’s results. A’s results simply encourage more of A’s activity. The same sequence plays out for B’s activity. And, the more resources used the greater the results. This simply encourages A and B to use more resources.

A’s activity and B’s activity combine to produce some total activity. This total activity subtracts from the resources available. The extent of the resources available being defined by the resource limit.

Total activity continues until it completely depletes the resources available. When this happens A’s results and B’s results stop growing as there are no more resources to use. What makes this structure even worse is that whoever figures out the structure first, A or B, wins because they use all the resources before the other has a chance to. This structure is often referred to as "All for one and none for all."

Managing the Structure

This structure repeatedly appears in organizational contexts where a service organization supports the success of multiple departments who fail to support the service organization in return. There are two strategies for dealing with this structure, one more effective than the other.

  • The most effective strategy for dealing with this structure is to wire in feedback paths from A’s results and B’s results to the resource limit, so that as A and B use resources, their results promote the availability of additional resources.
  • The alternate, and less effective, strategy for dealing with this structure is to add an additional resource to control the use of resources by A and B. This strategy limits the overall potential results of the structure to the predefined resource limit. It also adds additional resources to the equation, and probably results in endless disputes as to the fairness associated with the allocation of resources. While not really the most appropriate strategy, this is the one most often used, out of ignorance I would suspect.

In our case the resource can be replenished in a couple of ways, depending on how it was or is being depleted.  A scorch-and-rebuild of the data will replenish a data depletion.  If the server resources (CPU, memory, IO, etc…) were depleted, then simply curtailing the overuse will replenish the server.  I find the solutions this site offers lacking; its presentation reads as if it were a syllogism.  An easy third solution is to dedicate a resource per user.  This removes some of the limiting factor from the system, and all of the shared aspect, leaving a set of independent reinforcing loops, one per user.  In this case both A and B get their own resource.

March 11, 2012 Uncategorized

Congressional Testimony of Government Success with Agile

The following are snippets from the link below:

http://veterans.house.gov/prepared-statement/prepared-statement-hon-roger-w-baker-assistant-secretary-information-and


Witness Testimony of Hon. Roger W. Baker, Assistant Secretary for Information and Technology and Chief Information Officer, U.S. Department of Veterans Affairs

 

Hearing on 03/11/2012:

 

Introduction

Chairman Johnson, Ranking Member Donnelly, members of the Subcommittee: thank you for inviting me to testify regarding the Department of Veterans Affairs’ (VA) Information Technology (IT) strategy for the 21st century.  I appreciate the opportunity to discuss VA’s plans, actions, and accomplishments that will position VA’s IT organization as a 21st century leader in the Federal Government.

 

  1. Product Delivery

IT is an enabler to the implementation of the Secretary’s 16 Transformational Initiatives, which cannot be executed without newly developed IT products.  These initiatives are key to improving VA’s services to Veterans, and IT investments have allowed us to deliver products or plan for on-time delivery of the following programs:

  • Successful, on-time delivery of the critical G.I. Bill project. VA successfully converted all processing of new Post-9/11 GI Bill claims to the Long Term Solution (LTS) prior to the commencement of the Fall 2010 enrollment process.  Since installation, processing with the new system has been excellent, with no significant “bugs” encountered.  The Veterans Benefits Administration claims processors like the new system and find it easier and more efficient to use.  By dramatically changing its development processes, adopting the Agile methodology for this project, VA also dramatically changed its system development results;

Agile development

A primary driver of our success under PMAS has been the adoption of incremental development.  Every project at VA, without exception, must deliver functionality to its users at least every six months.  Several of our most important projects, including the GI Bill and VBMS, have adopted Agile development methodologies. Whereas PMAS addresses the planning and management aspects of short, incremental delivery, the Agile development methodology provides the technical management guidance of how to turn project requirements into working software quickly and in collaboration with the customer.  

Agile development is important to the VA because it encourages continuous input from our customers.  In agile projects, all the development priorities are set by the customer, which ensures that the work is performed in the order of importance.  To increase the likelihood of success, large projects are broken down into small but valuable increments, each of which could potentially be a candidate for release.  This is consistent with our PMAS delivery requirements.  Lastly, agile development requires continuous quality assurance throughout the entire development effort, further ensuring high quality deliverables.

Agile software development methodologies are an effective means of improving the predictability, quality, and transparency of software products and their development. At the core of Agile is the iterative work process. Business problems are broken down into small increments of delivery that are tangible products that can be reviewed and verified regularly by business stakeholders. By constantly incorporating feedback, the software that is essential to solving the business problem is created in partnership with stakeholders and any miscommunications, revisions, or changes in business needs can be accommodated quickly and with little rework. The quality of software is kept high throughout the development process as the product in development is kept as close to a production-ready state as possible with each release increment. In addition, prior to the start of each increment, business stakeholders and the development team agree upon which features or requirements are to be satisfied during that increment thus ensuring that the most important work is completed first.

Contrary to popular belief, the successful Agile program requires great rigor as it is essentially a process based on statistical analysis. Every work product (software or otherwise) is defined, broken down and estimated. As work progresses, these work products are carefully tracked on a daily basis and results of progress are published to the team and stakeholders (and any other authorized, interested party) to provide complete transparency. The result of this hyper-transparency is that problems in the development process are identified early and changes, regardless of their origin, can be accommodated quickly and efficiently.

I am honored to work on this project.  We have accomplished a significant number of releases over the course of the project.  I am proud to say that we truly are practicing Agile.  This is the first project where I have gotten to implement automated deployments all the way to production!  I hope the next project I work on is as rewarding as this one.

Uncategorized
