March 11, 2012

Continuous Integration Principles–Shared Read/Write Servers are Bad

At the beginning of most projects that I have been on, the default starting position has been that dev and test each get their own environment to share: developers share the dev env and testers share the test env.  I think we need to change this.  Shared environments are good for only a select set of scenarios, and development and testing are not among them.

If shared environments have been the norm, what has changed to allow us to go a different route?  Two things: first, the increased performance of workstations and laptops, and second, improved automation in the dev/test workspace.  Increased hardware performance allows us to run more servers in a local workspace.  On my laptop I can easily run several WebLogic servers and an Oracle database with resources left over for an IDE and other dev/test tools.  Not so long ago this was impossible.  Managing all of those servers, the applications running on them, and one or more databases by hand would leave little time to do anything else on a fast-changing development project.  Automation alleviates this time sink.  Providing push-button, headless automation to set up, deploy, and manage these servers is key.  This too was thought impossible not too long ago, yet fully automated deployments are more and more common these days.
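To make the "push button" claim concrete, here is a minimal sketch of what such an automation entry point might look like.  The script names (start_db.sh, create_schema.sh, start_appserver.sh, deploy.sh) are hypothetical placeholders, not from any particular project; the point is only that one command stands up the whole local environment.

```python
#!/usr/bin/env python3
"""Minimal sketch of a push-button local environment bootstrap.

Assumed (hypothetical) layout: the project ships shell scripts under
./scripts/ that start the database, create the schema, start the app
server, and deploy a build.  Swap in whatever your tooling provides.
"""
import subprocess
import sys

STEPS = [
    ["./scripts/start_db.sh"],          # bring up the local database
    ["./scripts/create_schema.sh"],     # create schema and load seed data
    ["./scripts/start_appserver.sh"],   # start the local app server
    ["./scripts/deploy.sh", "latest"],  # deploy the latest build into it
]

def run(cmd):
    """Run one step and stop the bootstrap if it fails."""
    print(f"--> {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"step failed: {' '.join(cmd)}")

if __name__ == "__main__":
    for step in STEPS:
        run(step)
    print("local environment is up")
```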

Okay, so maybe it is possible, you say.  Why would I want to do this?  Haven’t shared environments been working…

No, I don’t think they have been working.  The whole purpose behind this change is to let developers and testers test the application more easily, spend less time identifying and recovering from collisions, and, for developers specifically, spend less time chasing the version running in the shared env.  When several people share an environment they can easily collide with each other.  There are many types of collision, and which ones you see depends on the architecture of the application under test as well as its dependencies.  Many of them revolve around data.  There are schemes to minimize data collisions, yet no scheme is foolproof.  If every developer and tester has a private environment to work in, several things become possible:

  • testers and developers can easily roll back to any version of the application to replicate a bug, or return to the last working version (a sketch of this follows the list)
  • testers and developers can execute automated tests at will, or any kind of test for that matter, with no more collisions over data, execution resources, etc…
  • no one is forced to update to a new version of the application; in shared envs updates are normally done nightly, but with a private env a developer can work uninterrupted on a bug through one day and into the next, i.e. no more chasing the shared env version…
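As a rough illustration of the roll-back point above, here is a sketch of a tiny "deploy any version" helper.  The artifacts/&lt;version&gt;/app.ear layout and the deploy.sh script are assumptions made for the example; substitute whatever your build and deploy tooling actually produces.

```python
#!/usr/bin/env python3
"""Sketch: pin a private environment to any application version.

Hypothetical layout: built artifacts live under ./artifacts/<version>/app.ear
and ./scripts/deploy.sh installs one of them into the local app server.
"""
import subprocess
import sys
from pathlib import Path

def deploy_version(version: str) -> None:
    artifact = Path("artifacts") / version / "app.ear"
    if not artifact.exists():
        sys.exit(f"no build found for version {version}")
    # Redeploying overwrites only *this* workspace; nobody else is affected.
    subprocess.run(["./scripts/deploy.sh", str(artifact)], check=True)
    print(f"workspace now running version {version}")

if __name__ == "__main__":
    deploy_version(sys.argv[1] if len(sys.argv) > 1 else "latest")
```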

All of these things will dramatically increase the productivity of a team!

At this point you may think that I am claiming the whole environment should be hosted locally in a developer/tester workspace.  I am not.  I only think that all read/write servers should be local.  Read-only servers could, and most of the time should, be shared.  Remember that what counts is how you interact with the server: if your application only ever reads from it, then you should treat it as a read-only server.
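One way to keep that distinction honest is to make it explicit in configuration.  The sketch below is hypothetical: the service names, URLs, and "rw"/"ro" modes are invented for illustration, but they show how read/write dependencies can be forced to localhost while read-only reference data stays on shared hosts.

```python
"""Sketch of one way to encode the read/write vs. read-only split.

All names and hosts below are made up for the example; the idea is that
anything the application writes to resolves to the local workspace, while
purely read-only dependencies may stay shared.
"""
SERVICES = {
    # read/write: always local to this workspace
    "app_db":        {"url": "jdbc:oracle:thin:@localhost:1521/XE", "mode": "rw"},
    "order_service": {"url": "http://localhost:7001/orders",        "mode": "rw"},
    # read-only: safe to share because the application never mutates them
    "country_codes": {"url": "http://shared-ref-data/countries",    "mode": "ro"},
    "tax_tables":    {"url": "http://shared-ref-data/tax",          "mode": "ro"},
}

def endpoints(mode: str):
    """Return the endpoints the application should use for the given mode."""
    return {name: cfg["url"] for name, cfg in SERVICES.items() if cfg["mode"] == mode}

if __name__ == "__main__":
    print("local read/write:", endpoints("rw"))
    print("shared read-only:", endpoints("ro"))
```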

If you search around the internet on this subject you will mostly find that it has been written about from the database point of view.  Here are a few good examples:

Most of the issues that have been documented around shared database servers affect shared web services, EJBs, .NET Remoting, REST, etc… just as much.

Just in case you are not yet convinced, let’s try some systems thinking.  This situation of shared servers or resources indicates that we should take a look at the archetype “Tragedy of the Commons”.  The following is taken from the site http://www.systems-thinking.org:

 

Tragedy of the Commons

The Tragedy of the Commons structure represents a situation where, to produce growth, two or more Reinforcing Loops are dependent on the availability of some common limited resource.

A’s activity interacts with the resources available adding to A’s results. A’s results simply encourage more of A’s activity. The same sequence plays out for B’s activity. And, the more resources used the greater the results. This simply encourages A and B to use more resources.

A’s activity and B’s activity combine to produce some total activity. This total activity subtracts from the resources available. The extent of the resources available being defined by the resource limit.

Total activity continues until it completely depletes the resources available. When this happens A’s results and B’s results stop growing as there are no more resources to use. What makes this structure even worse is that whoever figures out the structure first, A or B, wins because they use all the resources before the other has a chance to. This structure is often referred to as "All for one and none for all."

Managing the Structure

This structure repeatedly appears in organizational contexts where a service organization supports the success of multiple departments who fail to support the service organization in return. There are two strategies for dealing with this structure, one more effective than the other.

  • The most effective strategy for dealing with this structure is to wire in feedback paths from A’s results and B’s results to the resource limit, so that as A and B use resources their results promote the availability of additional resources.
  • The alternate, and less effective, strategy for dealing with this structure is to add an additional resource to control the use of resources by A and B. This strategy limits the overall potential results of the structure to the predefined resource limit. It also adds an additional resource to the equation, and probably results in endless disputes as to the fairness associated with the allocation of resources. While not really the most appropriate strategy, this is the one most often used — out of ignorance I would suspect.

In our case the resource can be replenished in a couple of ways, depending on how it was or is being depleted.  A scorch and rebuild of the data will replenish a data depletion.  If the server resources, CPU, memory, IO, etc… were depleted, then simply curtailing the over-usage will replenish the server.  I find this site’s offered solutions lacking; its presentation reads as if those two strategies exhausted the possibilities.  An easy third solution is to dedicate a resource per user.  This removes some of the limiting factor from the system and all of the shared aspect, leaving a set of independent reinforcing loops, one per user.  In this case both A and B get their own resource.
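A scorch and rebuild of a private environment’s data is easy to automate.  The sketch below assumes hypothetical drop/create/seed scripts keyed by the workspace owner, so each user replenishes only their own schema and never touches anyone else’s.

```python
#!/usr/bin/env python3
"""Sketch of a 'scorch and rebuild' data reset for a private environment.

Assumed (hypothetical) helpers: ./scripts/drop_schema.sh,
./scripts/create_schema.sh, and ./scripts/load_seed_data.sh, each taking
the workspace owner's name so every user rebuilds only their own schema.
"""
import getpass
import subprocess

def scorch_and_rebuild(owner: str) -> None:
    for script in ("drop_schema.sh", "create_schema.sh", "load_seed_data.sh"):
        subprocess.run([f"./scripts/{script}", owner], check=True)
    print(f"schema for {owner} rebuilt from seed data")

if __name__ == "__main__":
    scorch_and_rebuild(getpass.getuser())
```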
