Using Markov Chains to Generate Test Input

One challenge that we’ve faced at Electric Cloud is how to verify that our makefile parser correctly emulates GNU Make. We started by generating test cases based on a close reading of the gmake manual. Then we turned to real-world examples: makefiles from dozens of open source projects and from our customers. After several years of this we’ve accumulated nearly two thousand individual tests of our gmake emulation, and yet we still sometimes find incompatibilities. We’re always looking for new ways to test our parser.

One idea is to generate random text and use that as a “makefile”. Unfortunately, truly random text is almost useless in this regard, because it doesn’t look anything like a real makefile. Instead, we can use Markov chains to generate random text that is very much like a real makefile. When we first introduced this technique, we uncovered 13 previously unknown incompatibilities — at the time that represented 10% of the total defects reported against the parser! Read on to learn more about Markov chains and how we applied them in practice.
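To give a flavor of how simple the core technique is, here is a minimal word-level sketch in Python. The function names build_chain and generate are just illustrative, and a real makefile generator would need to preserve newlines and tab characters instead of splitting on whitespace, since those are significant in makefile syntax:

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        """Map each run of `order` consecutive tokens to the tokens seen after it."""
        tokens = text.split()
        chain = defaultdict(list)
        for i in range(len(tokens) - order):
            chain[tuple(tokens[i:i + order])].append(tokens[i + order])
        return chain

    def generate(chain, length=50):
        """Random-walk the chain from a randomly chosen starting state."""
        state = random.choice(list(chain))
        out = list(state)
        for _ in range(length):
            followers = chain.get(state)
            if not followers:
                break
            out.append(random.choice(followers))
            state = tuple(out[-len(state):])
        return " ".join(out)

    # Train on an existing makefile (the path is illustrative) and emit pseudo-makefile text.
    with open("Makefile") as f:
        print(generate(build_chain(f.read())))

Because the chain only ever emits token sequences it has actually seen in real makefiles, the output is gibberish that nonetheless exercises the same constructs a real makefile would.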

Getting data to the cloud

One of the problems facing cloud computing is the difficulty in getting data from your local servers to the cloud. My home Internet connection offers me maybe 768 Kbps upstream, on a good day, if I’m standing in the right place and nobody else in my neighborhood is home. Even at the office, we have a fractional T1 connection, so we get something like 1.5 Mbps upstream. One (just one!) of the VM images I use for testing is 3.3 GB. Pushing that up to the cloud would take about five hours under ideal conditions!
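For the curious, the back-of-the-envelope arithmetic behind that estimate looks roughly like this (decimal gigabytes, ignoring protocol overhead):

    size_gb = 3.3                        # VM image size in gigabytes
    uplink_mbps = 1.5                    # upstream bandwidth in megabits per second

    size_megabits = size_gb * 1000 * 8   # GB -> megabits (decimal units)
    hours = size_megabits / uplink_mbps / 3600
    print(f"{hours:.1f} hours")          # roughly 4.9 hours under ideal conditions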

I don’t know what the solution to this problem is, yet, but it’s definitely something a lot of people are working on. I thought I’d point out a couple of interesting ideas in this area. First is the Fast and Secure Protocol, a TCP replacement developed by Aspera and now integrated with Amazon Web Services. The basic idea is to improve transmission rates by eliminating some of the inefficiencies in TCP. In theory this will allow you to more reliably achieve those “ideal condition” transfer rates, and if their benchmarks are to be believed, they’ve done just that. However, all this does is help me ensure that transferring my VM image really does take “only” 5 hours — so I guess that’s good, but this doesn’t seem like a revolution.

From my perspective, a more interesting idea is LBFS, the low-bandwidth filesystem. This is a network filesystem, like NFS, but expressly designed for use over "skinny" network connections. It was developed several years ago at MIT, but I hadn't heard of it until today, so I imagine many of you probably haven't either. The most interesting idea in LBFS is that you can reduce the amount of data you transfer by exploiting commonality between different files, or between different versions of the same file. Basically, the sender computes a hash for every block of every file to be transferred, and only sends the blocks the other side hasn't already seen; the receiver uses the list of hashes to reassemble the file from blocks it already has cached plus the few that were actually sent. This can give you a dramatic reduction in bandwidth requirements. For example, consider PDB files, the debugging information generated by the Visual C++ compiler: every time you compile another object referencing the same PDB, new symbols are added to it and some indexes are updated, but most of the data remains unchanged, so only a small fraction of the blocks would actually need to cross the wire.
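Here is a toy Python sketch of that hash-and-reassemble idea. It uses fixed-size blocks and SHA-1 purely for illustration; LBFS itself picks block boundaries based on file contents (Rabin fingerprints), so that an insertion doesn't shift every subsequent block, and the names send_file and receive_file are mine, not part of the LBFS protocol:

    import hashlib

    BLOCK_SIZE = 8192   # toy fixed-size blocks; LBFS uses content-defined boundaries

    def blocks(data, size=BLOCK_SIZE):
        return [data[i:i + size] for i in range(0, len(data), size)]

    def send_file(data, cache):
        """Return (recipe, new_blocks): the hashes describing the file, plus
        only those blocks the other side hasn't seen before."""
        recipe, new_blocks = [], {}
        for block in blocks(data):
            digest = hashlib.sha1(block).hexdigest()
            recipe.append(digest)
            if digest not in cache:
                new_blocks[digest] = block
        return recipe, new_blocks

    def receive_file(recipe, new_blocks, cache):
        """Reassemble the file from cached blocks plus the newly received ones."""
        cache.update(new_blocks)
        return b"".join(cache[d] for d in recipe)

    cache = {}                                   # blocks the receiver already holds
    v1 = b"A" * 20000 + b"B" * 20000
    recipe, new = send_file(v1, cache)
    assert receive_file(recipe, new, cache) == v1

    v2 = b"A" * 20000 + b"C" * 20000             # same first half, different second half
    recipe2, new2 = send_file(v2, cache)
    print(f"{len(new2)} of {len(recipe2)} blocks transferred")   # unchanged blocks are not resent

In the second transfer only the blocks covering the changed second half are sent; the unchanged blocks are reconstructed from the cache, which is exactly the savings the PDB example above is getting at.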

Like I said, I don’t know what the solution to this problem is, but there are already some exciting ideas out there, and I’m sure we’ll see even more as cloud computing continues to evolve.

ElectricAccelerator Machine Configuration Tips, Part 2

In my last post, I presented some higher-level criteria for deciding what kind of machines to pick and how to set them up. This time I will offer some additional, more concrete tips for machine setup. This is a fairly loose collection of things I have found useful.
