Bridging the IT – Development Gap

In the last year I have increasingly run into a new character in application development shops – IT.

IT is not taking over coding tasks, but they are certainly taking a much more active role regarding where application development tasks get run.   This is no surprise, as software development has an insatiable appetite for compute resources.  Whether it is vast clusters for testing, dozens of machines to run ALM tools, or the incessant requests for “just one more box,” development teams are always asking for something.  Multiply this by the number of development teams in your company, and it is easy to see why this is a big job for IT.

IT’s response is to increasingly look at centralizing where development tasks are run.  The logic is that with a cloud of development resources, they can get more efficiency in sharing them across groups and offer their customers (development) more resources than they would otherwise get.  However, IT’s goals are largely different from those of the development teams.  IT organizations measure their success by how efficient, smooth, and cost-effective their compute environment is. They want a large, identically configured, scalable, uninterruptible environment that consistently delivers established levels of performance to the downstream customer.  In other words, the more that they can make things the same, the more efficient they can be.

On the surface, these goals are at odds with development’s.   Development teams are measured on software output and quality, not on resource efficiency.  Each development group often has unique needs to achieve peak productivity and quality.  The matrix of test environments never shrinks, in conflict with IT’s desire to standardize.  If environment customizations and optimizations make development more effective (which they often do), development teams want their own resources, even if it means they get fewer of them.

How do you bridge these competing goals?  The wrong answer is compromise: finding a midpoint that serves each party’s goals just poorly enough that everyone ends up unhappy.

The right answer is to define an environment that can deliver against both goals simultaneously.  Allow IT to provision the compute cloud – these are the areas where homogeneity and efficiency shine.  This allows IT to meet development’s needs for peak resource demands by sharing across large pools of compute resources while reducing cost.  Virtualization is an important ingredient in the solution because it meets IT’s need for homogeneity and development’s need for configuration specialization.   However, virtualization is not enough.  What is really needed to bridge the gap is a framework that allows development to maintain control of what processes get run, who runs them, and how and when they end up on the compute cloud.
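
To make that framework idea concrete, here is a minimal, purely hypothetical sketch in Python (the Machine/Job/Pool names are mine, and this is not ElectricCommander’s API): IT provisions and tags a shared pool of machines, development defines the jobs and the configurations they require, and a dispatcher matches the two.

    # Hypothetical sketch only -- not ElectricCommander's API.
    # IT provisions and tags the machines; development defines the jobs.
    from dataclasses import dataclass

    @dataclass
    class Machine:
        name: str
        tags: set                 # capabilities IT configured, e.g. {"linux", "gcc4"}
        busy: bool = False

    @dataclass
    class Job:
        owner: str                # the development team that owns the job
        command: str              # what to run; IT never edits this
        needs: set                # the configuration the job requires

    class Pool:
        """IT owns the machines; development decides what runs on them."""
        def __init__(self, machines):
            self.machines = machines

        def dispatch(self, job):
            for m in self.machines:
                if not m.busy and job.needs <= m.tags:
                    m.busy = True
                    return f"{job.owner}: '{job.command}' on {m.name}"
            return f"{job.owner}: queued until a machine with {job.needs} is free"

    pool = Pool([Machine("host-01", {"linux", "gcc4", "8gb"}),
                 Machine("host-02", {"windows", "msvc9"})])
    print(pool.dispatch(Job("team-payments", "make -j8 test", {"linux", "gcc4"})))

The division of labor is the whole point: the pool looks homogeneous to IT, while each team’s jobs carry their own configuration requirements onto it.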

Is this possible?  Our most successful customers have used ElectricCommander to do just this.

For IT, ElectricCommander enables software development to happen in one large, scalable, reliable development cloud.  For development, teams get all of the control they need, only with a heck of a lot more compute resources.

Subbuilds: build avoidance done right

I’ve heard it said that the best programmer is a lazy programmer. I’ve always taken that to mean that the best programmers avoid unnecessary work, by working smarter and not harder; and that they focus on building only those features that are really required now, not allowing speculative work to distract them.

I wouldn’t presume to call myself a great programmer, but I definitely hate doing unnecessary work. That’s why the concept of build avoidance is so intriguing. If you’ve spent any time on the build speed problem, you’ve probably come across this term. Unfortunately, it’s been conflated with the single technique implemented by tools like ccache and ClearCase wink-ins. I say “unfortunate” for two reasons: first, those tools don’t really work all that well, at least not for individual developers; and second, the technique they employ is not really build avoidance at all, but rather object reuse. But because the term has been co-opted and associated with such lackluster results, many people have become dismissive of build avoidance.

Subbuilds are a more literal, and more effective, approach to build avoidance: reduce build time by building only the stuff required for your active component. Don’t waste time building the stuff that’s not related to what you’re working on now. It seems so obvious I’m almost embarrassed to be explaining it. But the payoff is anything but embarrassing. On my project, after making changes to one of the prerequisite libraries for the application I’m working on, a regular incremental build takes about 10 minutes; a subbuild incremental takes just 77 seconds:

Standard incremental:  609s
Subbuild incremental:   77s

Not bad! Read on for more about how subbuilds work and how you can get SparkBuild, a free gmake- and NMAKE-compatible build tool, so you can try subbuilds yourself.
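
If you want to picture what a subbuild is actually doing, here is a small Python sketch of the idea (the component graph is invented, and this is not how SparkBuild is implemented): walk the dependency graph from the component you are actively working on, and build only that component and its transitive prerequisites.

    # Sketch of the subbuild idea; the component graph below is invented.
    # component -> the components it depends on
    DEPS = {
        "app":     ["libnet", "libcore"],
        "libnet":  ["libcore"],
        "libcore": [],
        "tools":   ["libcore"],   # unrelated to "app"; a subbuild of "app" skips it
        "docs":    [],
    }

    def subbuild_targets(active):
        """Return the active component plus its transitive prerequisites,
        prerequisites first."""
        ordered, seen = [], set()
        def visit(component):
            if component in seen:
                return
            seen.add(component)
            for dep in DEPS[component]:
                visit(dep)
            ordered.append(component)
        visit(active)
        return ordered

    for component in subbuild_targets("app"):
        print("make -C", component)   # the incremental builds a subbuild would run

A full build would grind through tools and docs as well; the subbuild never even looks at them, which is where the savings above come from.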

Public Clouds

Today we announced integrations and compatibility with public cloud computing – specifically Amazon EC2. Cloud computing is a hot topic right now, and rightly so. It provides an easy-to-deploy, cost-effective, scalable, on-demand computing infrastructure – very timely, given shrinking or frozen IT budgets. I can’t count the number of customers who tell me that compute infrastructure is their #1 bottleneck. At Electric Cloud we have years of experience with internal or “private” clouds (after all, it’s in our name). We help customers set up private clouds, some with hundreds of machines, to accelerate and automate their software build and test tasks. It made sense for us to add public clouds to the mix. You can read the press release here.
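
To give a flavor of the on-demand part, here is a rough sketch using boto3 (the AMI ID, instance type, and counts are placeholders, and this is not our integration code): burst a set of build and test workers onto EC2 for a big run, then terminate them when you are done.

    # Rough sketch: burst extra build/test machines onto EC2, then release them.
    # The AMI ID, instance type, and counts below are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch ten worker machines for a large build/test run.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI with your toolchain baked in
        InstanceType="c5.large",
        MinCount=10,
        MaxCount=10,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "build-test-burst"}],
        }],
    )
    instance_ids = [i["InstanceId"] for i in resp["Instances"]]

    # ... dispatch builds and tests to the new machines here ...

    # Terminate the instances when the run finishes, so you only pay for what you used.
    ec2.terminate_instances(InstanceIds=instance_ids)

The appeal is exactly what customers describe: when compute infrastructure is the bottleneck, the pool can grow for an afternoon and shrink back when the run is over.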

Our customers gave us some interesting use cases for using our products in combination with the public cloud. Here are some of their ideas:

Enabling Agile Software Development

Agile is great.  It seems that everyone is either adopting or talking about it (of course many of those will probably be perpetually adopting and talking).  However, not everyone is succeeding with Agile.  One reason is that in order for Agile to work well you need highly experienced developers who are familiar with a broad range of skills and the processes involved in software development.  These developers (often the most experienced) are in high demand and short supply.  So what can be done to help ensure that your Agile team is successful even with fewer highly experienced developers?  Use tools to export the highly specialized knowledge that those developers bring to the table, and to bridge the gap between the most and least experienced developers on your team.


Organizational Cartography: what kind of development shop are you?

Image courtesy Norman B. Leventhal Map Center at the BPL

If you’ve ever worked on a significant software development project, you’ve probably looked at some part of your process and said ‘we have to change this!’ More often than not the topic is the software production process: that problematic build-test-release cycle. We can all identify places where our process is slow or broken; so why do some initiatives succeed while others fail miserably? If you’re going to make a change that affects six people: stop reading – just go do it! Sixty? You’ll have to get buy-in from colleagues. Six hundred? You’re going to have to talk to your Director of Engineering. Navigating those waters benefits from drawing a map: you should know who your change will impact, how it will benefit them, and where the low-hanging fruit are. Over my four years with Electric Cloud I’ve successfully (and sometimes not so successfully!) sailed the process-improvement trade route; in this article I’m going to give you a heuristic for profiling your own team and developing a navigation map.