Interesting posts on DevOps and private/hybrid clouds

Three recent blogs/articles about DevOps and private/hybrid clouds:

First, Mike Maciag, our CEO, published an interesting article at CM Crossroads: How the Rise of DevOps and the Private Cloud Will Change Development in 2011.

Second, by George Watts of CA: Pragmatic Cloud: ‘Please check your egos at the door’.

Third, by Bob Aiello of CM Crossroads: Behaviorally Speaking – Devops 101.

Give them a read.

Bridging the IT – Development Gap

In the last year I have increasingly run into a new character in application development shops – IT.

IT is not taking over coding tasks, but they are certainly taking a much more active role regarding where application development tasks get run.   This is no surprise, as software development has an insatiable appetite for compute resources.  Whether it is vast clusters for testing, dozens of machines to run ALM tools, or the incessant requests for “just one more box,” development teams are always asking for something.  Multiply this by the number of development teams in your company, and it is easy to see why this is a big job for IT.

IT’s response is increasingly to look at centralizing where development tasks are run.  The logic is that with a cloud of development resources, they can get more efficiency by sharing them across groups and offer their customers (development) more resources than they would otherwise get.  However, IT’s goals are largely different from those of the development teams.  IT organizations measure their success by how efficient, smooth, and cost-effective their compute environment is. They want a large, identically configured, scalable, uninterruptible environment that consistently delivers established levels of performance to the downstream customer.  In other words, the more they can make things the same, the more efficient they can be.

On the surface, these goals are at odds with development.   Development teams are measured on software output and quality, not on resource efficiency.  Each development group often has unique needs to achieve peak productivity and quality.  The matrix of test environments never shrinks, in conflict with IT’s desire to standardize.  If environment customizations and optimizations make development more effective (which they often do), teams want their own resources, even if it means they get fewer of them.

How do you bridge these competing goals?  The wrong answer is compromise: finding a midpoint that serves each party’s goals just poorly enough that everyone ends up unhappy.

The right answer is to define an environment that can deliver against both goals simultaneously.  Allow IT to provision the compute cloud – these are the areas where homogeneity and efficiency shine.  This allows IT to meet development’s needs for peak resource demands by sharing across large pools of compute resources while reducing cost.  Virtualization is an important ingredient in the solution because it meets IT’s need for homogeneity and development’s need for configuration specialization.   However, virtualization is not enough.  What is really needed to bridge the gap is a framework that allows development to maintain control of what processes get run, who runs them, and how and when they end up on the compute cloud.

Is this possible?  Our most successful customers have used ElectricCommander to do just this.

For IT, ElectricCommander enables software development to happen in one large, scalable, reliable development cloud.  For development, teams get all of the control they need, only with a heck of a lot more compute resources.
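
To make the “framework” idea a bit more concrete, here is a deliberately tiny sketch of what it boils down to. This is plain Python, purely illustrative, and not any product’s actual API; every name in it is made up. IT owns a homogeneous pool of agents, each development team describes its own jobs and the constraints they need, and a dispatcher decides where each job lands.

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One machine (often a VM) in the IT-managed shared pool."""
    name: str
    tags: set = field(default_factory=set)   # e.g. {"linux", "gcc4", "16gb"}
    busy: bool = False

@dataclass
class Job:
    """A development-owned task: the team decides what runs and what it needs."""
    team: str
    command: str
    requires: set = field(default_factory=set)

def dispatch(job: Job, pool: list[Agent]) -> Agent | None:
    """Place a job on any idle agent that satisfies the team's constraints."""
    for agent in pool:
        if not agent.busy and job.requires <= agent.tags:
            agent.busy = True
            print(f"[{job.team}] {job.command!r} -> {agent.name}")
            return agent
    return None  # no capacity right now; a real system would queue the job

# IT provisions (and can freely rebuild or grow) the shared pool...
pool = [
    Agent("cloud-01", {"linux", "gcc4", "16gb"}),
    Agent("cloud-02", {"linux", "jdk6"}),
    Agent("cloud-03", {"windows", "vs2010"}),
]

# ...while each team keeps control of what it runs and where it is allowed to land.
dispatch(Job("team-a", "make test", {"linux", "gcc4"}), pool)
dispatch(Job("team-b", "msbuild app.sln", {"windows", "vs2010"}), pool)

The point of keeping those two halves separate is that IT can rebuild or expand the pool without touching the job definitions, and teams can change what they run without filing a ticket.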

The Cloud Two-Step: How do you know what Dev/Test processes to run in the Cloud?

I just came across a piece that Bernard Golden wrote in his CIO blog entitled Dev/Test in the Cloud: Rules for Getting it Right.   He makes a lot of good points, including one that matches what we see the most successful enterprise development shops doing with the cloud: “Treat the cloud as an extension, not a separation.”

Unfortunately, he does not point out which dev/test tasks should be put up in the cloud; he simply says “dev/test tasks,” as if it is obvious which ones to migrate.  Let’s see if we can leverage his work to figure out which dev/test tasks are cloud-ready in two steps.

Private clouds: more than just buzzword bingo

A friend pointed me to a blog in which Ronald Schmelzer, an analyst at ZapThink, asserts that the term “private cloud” is nothing more than empty marketing hype. Ironically, he proposes that we instead use the term “service-oriented cloud computing.” Maybe I’m being obtuse, but “service-oriented” anything is about the most buzz-laden term I’ve heard in the last five years. Seriously, have you read the SOA article on Wikipedia? It’s over 5,000 words long, chock-a-block with “principles of service-orientation” like “autonomy” and “composability”. What a joke!

Let me see how many words I need to define private clouds. It’s a centralized infrastructure supplied by a single organization’s IT department that provides virtualized compute resources on demand to users within that organization. Let’s see, that’s… 23 words. Not bad, but if you’re like me, you’re probably looking at that and thinking it still doesn’t make much sense, so let me give you a concrete example.

Getting data to the cloud

One of the problems facing cloud computing is the difficulty in getting data from your local servers to the cloud. My home Internet connection offers me maybe 768 Kbps upstream, on a good day, if I’m standing in the right place and nobody else in my neighborhood is home. Even at the office, we have a fractional T1 connection, so we get something like 1.5 Mbps upstream. One (just one!) of the VM images I use for testing is 3.3 GB. Pushing that up to the cloud would take about five hours under ideal conditions!
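
If you want to check my arithmetic, here is the back-of-the-envelope version (decimal gigabytes, and assuming the link actually delivers its rated upstream speed, which it never does):

def upload_hours(size_gb: float, uplink_mbps: float) -> float:
    """Hours to push size_gb gigabytes over an uplink rated at uplink_mbps megabits/second."""
    megabits = size_gb * 8 * 1000        # GB -> gigabits -> megabits (decimal units)
    return megabits / uplink_mbps / 3600

print(f"home,   768 Kbps: {upload_hours(3.3, 0.768):.1f} hours")   # ~9.5 hours
print(f"office, 1.5 Mbps: {upload_hours(3.3, 1.5):.1f} hours")     # ~4.9 hours

So five hours is the office number; from home it is closer to ten.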

I don’t know what the solution to this problem is, yet, but it’s definitely something a lot of people are working on. I thought I’d point out a couple of interesting ideas in this area. First is the Fast and Secure Protocol, a TCP replacement developed by Aspera and now integrated with Amazon Web Services. The basic idea is to improve transmission rates by eliminating some of the inefficiencies in TCP. In theory this will allow you to more reliably achieve those “ideal condition” transfer rates, and if their benchmarks are to be believed, they’ve done just that. However, all this does is help me ensure that transferring my VM image really does take “only” 5 hours — so I guess that’s good, but this doesn’t seem like a revolution.
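
To see why a single TCP stream leaves bandwidth on the table in the first place, here is the classic Mathis et al. back-of-the-envelope formula for loss-limited TCP throughput (my own illustration, not anything Aspera has published): the cap depends only on packet size, round-trip time, and loss rate, no matter how big the pipe is.

from math import sqrt

def tcp_cap_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Mathis et al. approximation: throughput <= (MSS / RTT) * sqrt(3/2) / sqrt(p)."""
    bytes_per_sec = (mss_bytes / (rtt_ms / 1000)) * sqrt(1.5) / sqrt(loss_rate)
    return bytes_per_sec * 8 / 1e6       # bytes/second -> megabits/second

print(f"{tcp_cap_mbps(1460, 80, 0.001):.1f} Mbps")   # ~5.7 Mbps: cross-country link, 0.1% loss
print(f"{tcp_cap_mbps(1460, 200, 0.01):.1f} Mbps")   # ~0.7 Mbps: intercontinental link, 1% loss

A protocol that does its own rate control over UDP doesn’t halve its sending rate on every lost packet, which, as I understand it, is how FASP gets closer to the link’s rated speed.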

From my perspective, a more interesting idea is LBFS, the low-bandwidth filesystem. This is a network filesystem, like NFS, but expressly designed for use over “skinny” network connections. It was developed several years ago at MIT, but I hadn’t heard of it until today, so I imagine many of you probably haven’t either. The most interesting idea in LBFS is that you can reduce the amount of data you transfer by exploiting commonalities between different files or different versions of the same file. Basically, you compute a hash for every block of every file that is transferred, and then you only send blocks that haven’t already been sent. On the client side, it takes the list of hashes and uses them to reassemble the file. This can give you a dramatic reduction in bandwidth requirements. For example, consider PDB files, the debugging information generated by the Visual C++ compiler: every time you compile another object referencing the same PDB, new symbols are added to it and some indexes are updated, but most of the data remains unchanged.
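
Here is a toy version of that block-hashing idea, using fixed-size blocks and an in-memory cache just to keep it short (the real LBFS picks block boundaries with Rabin fingerprints precisely so that inserting a few bytes doesn’t shift every block that follows, and it speaks an actual protocol rather than returning Python tuples):

import hashlib

BLOCK_SIZE = 8 * 1024   # fixed-size blocks for simplicity; LBFS uses content-defined boundaries

def blocks(data: bytes):
    for i in range(0, len(data), BLOCK_SIZE):
        yield data[i:i + BLOCK_SIZE]

def send(data: bytes, already_there: dict) -> list:
    """Return the 'wire' form: a hash per block, plus bytes only for blocks the other side lacks."""
    wire = []
    for block in blocks(data):
        digest = hashlib.sha1(block).hexdigest()
        if digest in already_there:
            wire.append(("ref", digest))             # receiver reassembles this block from its cache
        else:
            wire.append(("data", digest, block))     # ship the new block and remember it
            already_there[digest] = block
    return wire

cache = {}
v1 = b"A" * 32_000 + b"symbols-v1" + b"B" * 32_000
send(v1, cache)                                      # first transfer: every block is new
v2 = b"A" * 32_000 + b"symbols-v2" + b"B" * 32_000   # one small change, like a PDB update
shipped = sum(len(item[2]) for item in send(v2, cache) if item[0] == "data")
print(f"second transfer shipped {shipped} of {len(v2)} bytes")   # one 8 KB block out of ~64 KB

In the PDB scenario, only the blocks holding the new symbols and the updated indexes would go over the wire; everything else is just a hash.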

Like I said, I don’t know what the solution to this problem is, but there are already some exciting ideas out there, and I’m sure we’ll see even more as cloud computing continues to evolve.