Bridging the IT – Development Gap

In the last year I have increasingly run into a new character in application development shops – IT.

IT is not taking over coding tasks, but they are certainly taking a much more active role regarding where application development tasks get run.   This is no surprise, as software development has an insatiable appetite for compute resources.  Whether it is vast clusters for testing, dozens of machines to run ALM tools, or the incessant requests for “just one more box,” development teams are always asking for something.  Multiply this by the number of development teams in your company, and it is easy to see why this is a big job for IT.

IT’s response is increasingly to centralize where development tasks are run. The logic is that with a cloud of development resources, they can share them more efficiently across groups and offer their customers (development) more resources than those groups would otherwise get. However, IT’s goals are largely different from those of the development teams. IT organizations measure their success by how efficient, smooth, and cost-effective their compute environment is. They want a large, identically configured, scalable, uninterruptible environment that consistently delivers established levels of performance to the downstream customer. In other words, the more they can make things the same, the more efficient they can be.

On the surface, these goals are at odds with development. Development teams are measured on software output and quality, not on resource efficiency. Each development group often has unique needs to achieve peak productivity and quality. The matrix of test environments never shrinks, in conflict with IT’s desire to standardize. If environment customizations and optimizations make development more effective (which they often do), development teams want their own resources, even if it means they get fewer of them.

How do you bridge these competing goals? The wrong answer is compromise: finding a midpoint just useless enough to each party’s goals that everyone ends up unhappy.

The right answer is to define an environment that can deliver against both goals simultaneously. Let IT provision the compute cloud – this is where homogeneity and efficiency shine. It allows IT to meet development’s peak resource demands by sharing across large pools of compute resources while reducing cost. Virtualization is an important ingredient in the solution because it meets IT’s need for homogeneity and development’s need for configuration specialization. However, virtualization is not enough. What is really needed to bridge the gap is a framework that allows development to maintain control of what processes get run, who runs them, and how and when they end up on the compute cloud.
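As a rough illustration of how those pieces fit together, here is a minimal sketch (the host names, image names, and classes are hypothetical, not any particular product’s API): IT owns a pool of interchangeable hosts, each development team keeps control of what runs by naming its own VM image and command per step, and placement onto whichever host is free happens automatically.

```python
import queue
import threading

# Hypothetical sketch: IT owns a pool of identical hosts; each development
# team keeps control of *what* runs by naming its own VM image per step.
# Names and classes here are illustrative, not a real product's API.

class HostPool:
    """A fixed pool of interchangeable hosts managed by IT."""
    def __init__(self, hostnames):
        self._free = queue.Queue()
        for h in hostnames:
            self._free.put(h)

    def acquire(self):
        # Blocks until some host frees up -- all teams share the whole pool.
        return self._free.get()

    def release(self, host):
        self._free.put(host)

def run_step(pool, team, image, command):
    """Run one team-defined step on whichever pooled host is free."""
    host = pool.acquire()
    try:
        # A real system would boot `image` on `host` and run `command`
        # inside it; here we only print the placement decision.
        print(f"[{team}] {command!r} -> {image} on {host}")
    finally:
        pool.release(host)

pool = HostPool(["host01", "host02", "host03"])
steps = [
    ("team-a", "rhel5-build-image", "make all"),
    ("team-b", "win2k3-test-image", "run_tests.bat"),
    ("team-a", "rhel5-build-image", "make test"),
]
threads = [threading.Thread(target=run_step, args=(pool, *s)) for s in steps]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The point of the toy is the split: the pool looks uniform from IT’s side, while every step still carries its own team’s environment.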

Is this possible?  Our most successful customers have used ElectricCommander to do just this.

For IT, ElectricCommander enables software development to happen in one large, scalable, reliable development cloud. For development, teams get all of the control they need, only with a heck of a lot more compute resources.

Automating around scarcity by using virtual resources

[posted on behalf of Usman Muzaffar, who is on a long flight with no WiFi]

Here’s a sobering truth that shows up often in software automation: people are way better at sharing stuff than computers are. For example: say you have a scarce resource, like a box with special hardware or a service with serial access. You’re tasked with automating a software build/test/release workflow, and part of it needs to talk to this One Big And Fancy Thing. Do you try to teach your build script good playground behavior, so it automatically knows when to wait politely (and when, as deadlines approach, it should bully its way to the top of the slide), or do you declare this problem out-of-scope, and just provide the hook to let the team manage access manually?

The default answer is: *don’t automate*, for two reasons. First: letting people handle it means no extra work. More importantly, because we’ve been doing it our whole lives, we’re actually pretty good at adapting to environments where we have to share things, whether that’s roads or restrooms or rack space. A small number of people on the same team with similar goals will usually self-organize around a few ground rules with a minimum of fuss. One clear and crisply delivered directive at a weekly team meeting (“OK guys, the new 32-way sol box is for the full server test suite, so give that priority and check with each other before you use it for other stuff”) is often all it takes.

Second: technically, getting the semantics of shared simultaneous access right is a notorious pain in the neck. As in any software automation system, there’s no credit for a partial answer: it’s a net loss if your script still needs a babysitter for the corner cases. That means your solution needs to take selection and queuing and load into account, have mechanisms for priority and pre-emption, and be smart about busted network connections. At its core, it usually boils down to something awfully close to multithreaded programming, with the usual challenges in that space around semaphores, locks, deadlocks, and races. Great stuff in a CS course or maybe your server’s ConnectionPool class — rathole alert in your build and test system!
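To make the rathole concrete, here is a hedged sketch of the very first step down the do-it-yourself path, assuming everything runs inside a single Python process: one lock serializing access to the scarce device. Even this toy leaves out priority, pre-emption, access from other machines, and recovery when the current holder dies.

```python
import threading
import time

# A hedged sketch of the do-it-yourself path: serialize access to one scarce
# device with a lock. Everything here is hypothetical, and note how much is
# still missing: priority, pre-emption, cross-machine access, and recovery
# when the current holder crashes.

big_fancy_thing = threading.Lock()   # only protects threads in *this* process

def run_test(name, seconds):
    with big_fancy_thing:            # wait politely; first come, first served
        print(f"{name}: acquired the device")
        time.sleep(seconds)          # stand-in for driving the real hardware
        print(f"{name}: done, releasing")

jobs = [threading.Thread(target=run_test, args=(f"test-{i}", 1)) for i in range(3)]
for j in jobs:
    j.start()
for j in jobs:
    j.join()
```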

So, largely with good reason, the automation train comes to a screeching halt right here. It’s just not worth the effort to build a system that’s going to manage the synchronization for parallel access to scarce resources. In other words: when shared resources wind up in the software production system, people show up next to them, and that sucks all the fun (and potential efficiency gains) out of automation. What to do?

One thing worth investigating is tooling that can handle this for you. Solving this was a key goal for our ElectricCommander product. Commander lets you describe your job as a series of command-line steps, and each step can be specified to run on a resource. A resource is simply a system that we’ll remotely execute commands on, and it comes with a sack full of infrastructure goodies you’d expect: pooling, exclusive reservation, broadcast, security, access control, load balancing, and fault tolerance. As a user of the system, you specify what you want to run and where you want to run it, press the ‘Go’ button, and Commander does the rest, queuing steps when resources are oversubscribed and efficiently scheduling around your other constraints. Nice!
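As a rough illustration of that model (hypothetical field names and a toy dispatcher, not actual ElectricCommander syntax), a job is just an ordered list of steps, each naming the pool of resources it should run on, and a step waits in line whenever its pool is oversubscribed:

```python
import queue

# Hypothetical, simplified job description -- an illustration of the model
# described above, not actual ElectricCommander syntax.
job = {
    "name": "nightly-build",
    "steps": [
        # Each step is a command line plus the pool of resources it should
        # run on; the dispatcher queues a step while its pool is busy.
        {"command": "make clean all",      "pool": "linux-build"},
        {"command": "run_unit_tests.sh",   "pool": "linux-test"},
        {"command": "run_system_tests.sh", "pool": "big-iron"},
    ],
}

# Each pool is just a queue of free machine names.
pools = {name: queue.Queue() for name in ("linux-build", "linux-test", "big-iron")}
pools["linux-build"].put("build01")
pools["linux-test"].put("test01")
pools["big-iron"].put("sol-32way")

def dispatch(job, pools):
    """Toy dispatcher: run each step on a free machine from its pool."""
    for step in job["steps"]:
        machine = pools[step["pool"]].get()      # blocks while oversubscribed
        try:
            print(f"{job['name']}: running {step['command']!r} on {machine}")
        finally:
            pools[step["pool"]].put(machine)

dispatch(job, pools)
```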

Then one day a customer asked us how they could automatically control access to a piece of hardware that simulated network traffic critical to the product’s system test. This wasn’t a gadget we could install software on; indeed, we couldn’t directly connect to it at all, so Commander couldn’t treat it as a resource. But it soon became evident that we could solve this just as elegantly with a simple tweak to the approach. Fundamentally, we needed the ability to specify that a step 1) needed access to the device, 2) had to block when that access wasn’t available, and 3) once it had acquired access, held on to it until it was done. If something could just take care of this synchronization and queuing, the test could connect to the traffic simulator directly and simply execute as if invoked manually.

In other words: the problem called for a *subset* of Commander resources;  ignore half the stuff in the goodie sack (remote login, execution, fault tolerance, etc.) and you’re left with a general purpose resource access and acquisition facility. We set up dummy resources (good old 127.0.0.1, always up and ready for this sort of game!), injected them into the workflow and configured the job to hang on to them as long as it was talking to the traffic simulator. It worked beautifully: each test run was guaranteed to get just the access it needed, and for the first time, the customer had safe, parallel end-to-end automation for the full test cycle.
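Here is a hedged, miniature version of the pattern, with a named Python semaphore standing in for the dummy resource (all names and helpers here are hypothetical): the step acquires the virtual resource before it talks to the device, holds it for the duration, and releases it afterward, so the queuing comes for free.

```python
import threading
from contextlib import contextmanager

# Hedged sketch of a "virtual resource": just a name with a capacity,
# acquired before the step talks to the real device and released afterward.
# (Hypothetical helper; Commander models this with its own resource objects,
# not Python semaphores.)
_virtual = {"traffic-sim": threading.Semaphore(1)}   # capacity 1 = exclusive

@contextmanager
def virtual_resource(name):
    sem = _virtual[name]
    sem.acquire()            # queue here if someone else holds it
    try:
        yield
    finally:
        sem.release()        # always hand it back, even if the test blows up

def system_test(test_id):
    with virtual_resource("traffic-sim"):
        # The test connects to the traffic simulator directly, exactly as if
        # invoked by hand; the wrapper only supplies the queuing.
        print(f"test {test_id}: driving the traffic simulator")

threads = [threading.Thread(target=system_test, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```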

More importantly, this design pattern, since dubbed Virtual Resources, opened a whole new realm of possibilities. Once you start looking for them, there are *lots* of shared things in a software system that aren’t compute hosts, and they’re all threatening or overcomplicating automation in some way or another. We’ve used Virtual Resources to manage access to database tables, SCM labels, virtual machines, filesystem repositories, and flaky external systems that don’t like more than one client talking to them, and our customers keep showing us new ways. It’s a great example of how the core of a clean design — a resource is something a job can request and relinquish — was readily adapted to a wider set of problems around Software Production Automation.

Private clouds: more than just buzzword bingo

A friend pointed me to a blog post in which Ronald Schmelzer, an analyst at ZapThink, asserts that the term “private cloud” is nothing more than empty marketing hype. Ironically, he proposes that we instead use the term “service-oriented cloud computing.” Maybe I’m being obtuse, but “service-oriented” anything is about the most buzz-laden term I’ve heard in the last five years. Seriously, have you read the SOA article on Wikipedia? It’s over 5,000 words long, chock-a-block full of the “principles of service-orientation” like “autonomy” and “composability.” What a joke!

Let me see how many words I need to define private clouds. It’s a centralized infrastructure supplied by a single organization’s IT department that provides virtualized compute resources on demand to users within that organization. Let’s see, that’s… 21 words. Not bad, but I bet if you’re like me, you’re probably looking at that and thinking that it still doesn’t make much sense, so let me give you a concrete example.

On the Proper Setup of a Virtual Build Infrastructure Part 2

In this post I will cover some additional findings from our deployment and tie them back to deploying ElectricAccelerator in a virtual environment.

You can find part one of this article here.


On the Proper Setup of a Virtual Build Infrastructure Part 1

At Electric Cloud, we’ve identified some best practices as we’ve gone through the setup of a virtual infrastructure to support our own engineering needs. If you’ve already gone through this exercise, the following may seem trivial. Or perhaps you have other nuggets of wisdom to share. (If so, please do!) Either way, I hope these tips will improve the experience of setting up a virtual build infrastructure.

As a first step toward success, establish reasonable goals for what can/should be done in a virtual environment. Also, make sure you have a rollout plan that allows for proper testing (both functional and performance).

In this part I will describe our setup and what we learned from our deployment. The second part will wrap up some additional findings and relate them to our ElectricAccelerator product.


Do more (computing) with less (hardware) (people) (money)

Imagine if you discovered your colleagues only work 4 hours a day. You thought everyone was working as hard as you until you started monitoring what they did all day. To your surprise, many of them were idle for hours at a time, just sitting still waiting for someone to give them work. And when they did work it was in 10 minute bursts separated by more waiting around. I think you would be upset if this were true. You should only get a full day’s pay if you do a full day’s work, right?