Sat, Apr. 23rd, 2005, 10:16 pm
Version Control Shenanigans

For those of you who have been following the recent Bitkeeper shenanigans, I'm now going to give the inside scoop on what's happening in free distributed version control.

There's one main style of interface to distributed version control, which Darcs, Monotone, and Codeville all share. Arch and its kin have a much clunkier UI, which basically makes them non-contenders for projects like the Linux kernel. There are some other potentially competitive projects: Vesta, which is extremely mature and powerful but currently doesn't do merging; Bazaar-ng, which is currently mostly vaporware; and svk, which I think has recently switched from an arch-style to a darcs-style interface, though I can't say for sure. There are probably other systems worth at least a mention, but for now I'll stick to discussing the (to my knowledge) most mature and promising systems.

A new system is Git (more of a version control back end than a version control system), which Linus Torvalds hacked together and initially called a 'stop-gap' measure, but which he now appears to be getting quite excited about and is thinking may be a good long-term solution. Git was originally supposed to be simple and fast, but new developments are heading it in the direction of being not so simple and not so fast, and its network protocol is currently a disaster. The good news is that Git is basically a ripoff of a bunch of the architectural ideas behind Monotone, which are good ideas, so it can't be as big a disaster as, say, Subversion; but it's currently extraordinarily similar to a very old version of Monotone, which makes parallel development of Git and Monotone seem like a waste of resources. The initially stated reason for the new system was that Monotone was too slow, but just a little bit of optimization has made Monotone many times faster than it was before, and most of the remaining performance difference is caused by sanity checks, which can simply be turned off. There are perfectly reasonable long-term strategies which involve separate Git development for the time being, and I'll get to those later, but we in the distributed version control world really have no idea what Linus is thinking at this point.

Darcs, Monotone, and Codeville implement one each of the known ways to approach distributed merge - patch commutation, three-way merge, and two-way merge with history, respectively. Most other projects wind up using three-way merge, although it's the least powerful of the three. Darcs and Codeville were both motivated by the invention of their merge algorithms, and neither of the approaches has to my knowledge been invented independently.
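To make the most common of these concrete, here's a minimal sketch of a three-way merge over lists of lines, using Python's standard difflib. The function names are mine, and this is a toy: real implementations (Monotone's included) emit conflict markers, treat identical changes on both sides as agreement, and handle plenty of cases this ignores.

```python
import difflib

def changed_regions(base, other):
    """Regions (i1, i2) of base that were replaced by other[j1:j2]."""
    sm = difflib.SequenceMatcher(None, base, other)
    return [(i1, i2, other[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != 'equal']

def three_way_merge(base, a, b):
    """Merge line lists a and b against their common ancestor base.

    Each side's changes relative to base are applied; overlapping
    changes from the two sides are reported as a conflict.
    """
    regions = sorted(changed_regions(base, a) + changed_regions(base, b))
    merged, pos, last_end = [], 0, -1
    for i1, i2, repl in regions:
        if i1 < last_end:  # this change overlaps the previous one
            raise ValueError("conflict near base line %d" % i1)
        merged.extend(base[pos:i1])  # unchanged lines before this region
        merged.extend(repl)          # one side's replacement text
        pos = last_end = i2
    merged.extend(base[pos:])
    return merged
```

The key property is visible even in the toy: each side's edits are located by diffing against the common ancestor, so non-overlapping changes combine cleanly and only genuinely overlapping ones conflict.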

Now, for some comparisons -

Darcs is lacking in hash-based history, which means that past versions can't be reproduced and that an interloper could easily change what the history says. This is a major missing feature. There's currently talk of making Darcs use Git as a back end, which would give it the hash-based history, but Git's whole-file view of the universe isn't a terribly good match for Darcs's patch-based one. I think a more Codeville-like history would be a much better fit, but doing inference of patches on the client side based on the history would be a workable solution. Darcs also suffers from some extremely bad asymptotic runtimes in algorithms it uses, which turn up in cases that aren't terribly uncommon. This is currently being worked on, but it's an area of actual research rather than simple optimization. Because of these problems, Darcs isn't really ready for prime time yet, but may be in the not too distant future. Darcs's big advantage right now is that it has very good, extensive support for cherry-picking, a feature which is planned for Codeville but not implemented yet, and I'm not sure whether it's planned for Monotone.

Monotone is a fairly mature, mostly traditional three-way-merge-based system with a hash-based history. It has a decent network protocol and rudimentary (but far from complete) support for renames. (Git doesn't have support for renames, a hard-to-change architectural decision which was made when it was supposed to be a quick hack temporary solution.) Monotone's merge algorithm isn't as good as Codeville's, and there's been some talk of making Monotone use Codeville's merge algorithm, but that's an involved topic whose future no one is sure of. Monotone also supports some nice certification functionality, whose importance is unclear and which could be added to other systems. A hash-based history gets a lot of security to begin with, and the certs don't carry over between format changes, so they're causing a fair amount of possibly unnecessary pain for the time being.

Codeville is also fairly mature, and is having the last few rough edges polished up right now. It has a good network protocol (technically, it will in about two weeks), good (but not quite complete) support for renames, a well-done hash-based history, and the best merging of any available system. There's a subtle architectural distinction in the history approaches of Monotone and Codeville - Monotone records the secure hashes of all old versions, while Codeville records the changes from the old hashed versions. Monotone's approach is less simple in the end. The problem is that for efficient transfer over the wire you need to pass deltas rather than full copies, so you need to cache the deltas, or generate them on the fly, and they need to be integrity-checked as they come down, which means a whole lot of hash checking of intermediate versions. The on-paper advantage of storing complete copies is that if you cache them on disk you don't need to run regeneration code when making a new checkout, but that proves to be invalid in practice because the operation's performance is completely dominated by the number of hard drive seeks it has to do, which leads to some oddities like a Codeville checkout being faster than a cp -a. Optimizing the number of seeks is an interesting subject which is beyond the scope of this entry.
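The delta-chain idea can be sketched in a few lines of Python: the receiver applies each delta in turn and re-hashes every intermediate version against the recorded hash, which is exactly the per-version hash checking described above. Everything here is illustrative - it is not Codeville's actual delta or wire format.

```python
import difflib
import hashlib

def sha(lines):
    """Content hash of one full version (joined line list)."""
    return hashlib.sha1("\n".join(lines).encode()).hexdigest()

def make_delta(old, new):
    """A delta is a list of ('copy', i1, i2) ranges into the old
    version plus ('insert', lines) ops - roughly what goes on the wire
    instead of a full copy."""
    delta = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, old, new).get_opcodes():
        if tag == 'equal':
            delta.append(('copy', i1, i2))
        else:
            delta.append(('insert', new[j1:j2]))
    return delta

def apply_delta(old, delta):
    out = []
    for op in delta:
        if op[0] == 'copy':
            out.extend(old[op[1]:op[2]])
        else:
            out.extend(op[1])
    return out

def receive(base_lines, chain):
    """chain: list of (delta, expected_hash). Rebuild and hash-check
    every intermediate version - the integrity checking that a
    delta-based history requires."""
    cur = base_lines
    for delta, expected in chain:
        cur = apply_delta(cur, delta)
        if sha(cur) != expected:
            raise ValueError("integrity check failed")
    return cur
```

The cost the post describes is visible here: every hop in the chain forces a full reconstruction and re-hash, which is why "just store full copies" sounds simpler on paper even though deltas are what you want on the wire.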

As you've probably gathered, Git's quick hack nature is readily apparent, even ignoring that it hasn't even gotten started on implementing merge yet. A hopeful sign is the development of Cogito, which is a front end to Git with a reasonable interface. If everyone starts using Cogito, then it would be a simple matter to make a Codeville- or Monotone-based back end which was command-identical to Git, but also had renames, a non-sucky network protocol, and decent merging. That depends, of course, on people actually using Cogito as their standard Git interface, which is mostly dependent on what Linus wants, and like I said, we don't currently have any idea what Linus is thinking.

Monotone and Codeville have been growing closer over time. Whether they'll reach a complete unification at some point is an involved topic which hasn't been fully explored.

You can read much of the discussion which has been happening around version control systems at loglibrary. #codeville isn't logged yet though.

The old Linux kernel history is now readily available from SourcePuller, which was written by Tridge. It was the writing of this (extremely simple) script which caused the BitKeeper license to get yanked. Ironically, SourcePuller has helped with damage control from the BitKeeper license yanking by making the full old history (not just the linearized version available from CVS) available. Also ironically, Git is currently using rsync for its network protocol, and rsync was written by Tridge. Rsync is actually quite dated, and the wrong tool for that particular job in any case, but that's a whole other subject.

Full disclosure, of course, is that I'm the founder of Codeville and a current contributor (although most of the work these days is done by Ross). While this makes me a bit biased, it's also resulted in me having a much better sense of what's going on.

By coincidence, the Monotone and Codeville web sites are hosted on the same server.

I could go more into some of the personalities involved and implementation details of some of the systems (like what languages they use) but I've already spent way more time on this than intended, so that's all for now.

Tue, Jul. 17th, 2007 12:52 am (UTC)

First of all, I'd like to say that this 'I'm smarter than you' bullshit is really sickening. I was distinctly on edge in that earlier discussion, mostly because Linus was being such a jerk. The blog post you link to is propagating the same crap, by framing all events since then in the context of picking out who it shows is smarter.

I haven't worked on bazaar - they just used some code I wrote. Some code which, I'd like to repeat, everybody else can and should use as well, including git.

By any reasonable standard, both git and codeville are failures - git is hardly used outside of the linux kernel, and codeville is hardly used at all. It's a bit of an unfair comparison, in that git has had huge amounts of resources plowed into it, while codeville has had hardly any. I hadn't written any code on it for a while at the time that argument happened, and haven't since. I will readily admit that codeville had some serious issues (and probably still does) which make it inappropriate for significant use.

There were basically two arguments going on there, one having to do with architectural scalability and performance, and one having to do with merge algorithms.

The argument about performance was basically that Linus claimed that git was super-fast and super-scalable. That was, in fact, incorrect. Since then, the guts of how git behaves have been completely re-done so that the 'fast' operations are fast because they simply make note of what happened, and then you're expected to run a batch process where all the heavy lifting is done periodically. This is a perfectly reasonable trade-off, and one which I wish everything else would do, but far from a vindication of git's architecture - it's an approach which could be taken in any system. Git still has some nasty performance problems, by the way, basically having to do with networking (mercurial does it right).
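The "note it now, do the heavy lifting later" trade-off is easy to illustrate: the fast operation just appends a record of what happened, and a periodic batch pass does the expensive deduplication and indexing. This is a toy sketch of the pattern only - it is not git's actual object store.

```python
import hashlib

class Store:
    """Sketch of 'fast ops just record; a batch job repacks later'.

    commit() is cheap because it only appends to a journal; repack()
    later pays the real cost (here, deduplicating into a
    content-addressed index) in one batch pass.
    """
    def __init__(self):
        self.journal = []   # cheap append-only log of raw snapshots
        self.packed = {}    # hash -> content, built lazily by repack()

    def commit(self, content):
        self.journal.append(content)  # O(1): just note what happened

    def repack(self):
        # All the heavy lifting happens here, periodically, not on
        # every commit: hash, deduplicate, and index the journal.
        for content in self.journal:
            key = hashlib.sha1(content.encode()).hexdigest()
            self.packed[key] = content
        self.journal.clear()
```

As the comment says, nothing about this approach is specific to any one system's architecture - any version control tool could defer its expensive work to a batch pass this way.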

The other argument was over what sort of merge algorithm is the correct one. Since that old argument I've done quite a bit more work on merge algorithms, and it turns out that there are basically two features, implicit cherry-picking and implicit undo, and you have to pick one. I did a lot of good work on how to implement implicit cherry-picking, but it turns out, disappointingly, that the vast majority of projects want implicit undo, which means three-way merge. There are a lot of details of exactly how the three-way merge is done, which have had significant improvements made to them - hence the code adopted by bazaar.

The subtle edge case has to do with implicit moves between files. Basically Linus advocates a system which will take sloppy patches and try to apply them wherever they seem to fit if the changed file is no longer present. This was hardly a new idea, nor is it terribly hard to implement, but the real question is whether you trust it not to screw up, or suddenly change how it behaves during development. Obviously it can work fine in a few simple cases, but whether it can screw up has a lot to do with how branched a project can get and how frequently such features are used. The way development works in the linux kernel it hasn't become a problem (or at least, if it has, no one's complained about it), but it isn't something I'd advocate using as default behavior for all projects, at least not without it differentiating between proper and heuristic merges, and warning you about exactly what it's going to do before a heuristic merge. Even then there are cases which will get borked no matter what, for example if someone moves and extensively changes a file as a single operation.
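The heuristic itself is simple to sketch: score each surviving file by content similarity to the missing one, and refuse to guess below a threshold - the point where a "warn before a heuristic merge" step would hook in. The function name and threshold here are illustrative, not any real system's behavior.

```python
import difflib

def best_move_candidate(missing_text, tree, threshold=0.6):
    """Guess where a vanished file went, by content similarity.

    tree: {path: text} of files now present. Returns the most similar
    path, or None when nothing clears the threshold, in which case a
    cautious tool would warn rather than silently redirect the patch.
    """
    best_path, best_score = None, threshold
    for path, text in tree.items():
        score = difflib.SequenceMatcher(None, missing_text, text).ratio()
        if score > best_score:
            best_path, best_score = path, score
    return best_path
```

The failure mode the comment describes falls straight out of this: a file that was moved and extensively rewritten in one operation may score below any sane threshold, so no setting of the knob saves you.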

There's a list of possible approaches to file moving, all of which have their pluses and minuses, and none of which is a panacea. The one git uses is the simplest and most expedient; it isn't the one I advocate, but it can be made workable for a lot of projects.

There's more version control theory I really ought to post about; it turns out that if you decide to go with three-way merge, it's possible to get much more coherent branch organization than total anarchy.