Bram Cohen (bramcohen) wrote,

Git Can't Be Made Consistent

This post complains about Git lacking eventual consistency. I have a little secret for you: Git can't be made to have eventual consistency. Everybody seems to think the problem is a technical one, of complexity vs. simplicity of implementation. They're wrong. The problem is semantics. Git follows the semantics which you want 99% of the time, at the cost of having some edge cases which it's inherently just plain broken on.

When you make a change in Git (and Mercurial) you're essentially making the following statement:

This is the way things are now. Forget whatever happened in the past, this is what matters.


Which is subtly and importantly different from what a lot of people assume it should be:

Add this patch to the corpus of all changes which have ever been made, and are what defines the most recent version.


The example linked above has a lot of extraneous confusing stuff in it. Here's an example which cuts through all the crap:

  A
 / \
B   B
|
A


In this example, one person changed a file's contents from A to B, then back to A, while someone else changed A to B and left it that way. The question is: What to do when the two heads are merged together? The answer deeply depends on the assumed semantics of what the person meant when they reverted back to A. Either they meant 'oops, I shouldn't have committed this code to this branch' or they meant 'this was a bad change, delete it forever'. In practice people mean the former the vast majority of the time, and its later effects are much more intuitive and predictable. In fact it's generally a good idea to make the separate branch with the change to B at the same time as the reversion to A is done, so further development can be done on that branch before being merged back in later. So the preferred answer is that it should clean merge to B, the way 3-way merge does it.
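
Concretely, as git commands (a minimal sketch; the file name, the branch name, and the commit messages are illustrative, and newer versions of git may call the default branch 'main' instead of 'master'):

    git init demo && cd demo
    echo A > file && git add file && git commit -m 'A'   # the common ancestor
    git branch other                                     # second head forks off at A
    echo B > file && git commit -am 'change A to B'
    echo A > file && git commit -am 'revert back to A'
    git checkout other
    echo B > file && git commit -am 'change A to B, and keep it'
    git merge master   # merge base is the root A; theirs matches the base, ours is B
    cat file           # prints B, the clean 3-way merge result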

Unfortunately, this decision comes at significant cost. The biggest problem is that it inherently gives up on implicit cherry-picking. I came up with some magic merge code which allowed you to cut and paste small sections of code between branches, and the underlying version control system would simply figure out what you were up to and make it all work, but nobody seemed much interested in that functionality, and it unambiguously forced the merge result in this case to be A.

A smaller problem, but one which seems to perturb people more, is that there are some massively busted edge cases. The worst one is this:

  A
 / \
B   B
|   |
A   A


Obviously in this case both sides should clean merge to A, but what if people merge like this?

  A
 / \
B   B
|\ /|
A X A
|/ \|


Because of the cases we just went over, they should clean merge to B. What if they are then merged with each other? Since both sides are the same, there's only one thing they can merge to: B

  A
 / \
B   B
|\ /|
A X A
|/ \|
B   B
 \ /
  B


Hey, where'd the A go? Everybody reverted their changes from B back to A, and then via the dark magic of merging the B came back out of the ether, and no amount of further merging will get rid of it again!
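
For the skeptical, the whole diagram can be replayed as a short script (again a sketch: master plays the role of one branch and y the other, as before your git may call the default branch 'main', and the tags xB and yB are just convenient names for the pre-revert commits):

    git init demo && cd demo
    echo A > file && git add file && git commit -m 'A'
    git branch y                                      # y forks off at A
    echo B > file && git commit -am 'A to B' && git tag xB
    echo A > file && git commit -am 'revert to A'
    git checkout y
    echo B > file && git commit -am 'A to B' && git tag yB
    echo A > file && git commit -am 'revert to A'
    git merge xB      # the criss-cross: base is the root A, so this cleanly merges to B
    git checkout master
    git merge yB      # same on the other side; both heads now contain B
    git merge y       # both sides say B, so the final merge can only say B
    cat file          # prints B, even though everybody reverted to A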

The solution to this problem in practice is Don't Do That. Having multiple branches which are constantly pulling in each other's changes at a slight lag is bad development practice anyway, so people treat their version control system nicely and cross their fingers that the semantic tradeoff they made doesn't ever cause problems.
26 comments
Git does not attempt to achieve "eventual consistency" because that requires an authoritative notion of The Truth, and Git was explicitly designed to AVOID that.

This is not a design flaw; this is the result of explicitly avoiding what you want.

Authority, in Git, is a *social* matter, not a technical one. And blind merging is explicitly frowned upon -- you should know what the heck you're merging, and be able to make intelligent decisions about it.

But in the event you want to simply disregard a branch, and make a different one "win" when doing a merge, git DOES provide that mechanism: "git merge -s ours".
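
For instance, to record a merge with some other branch while keeping the current branch's content exactly as it is (branch name illustrative):

    git merge -s ours other-branch   # ties the histories together, discards their changes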

In short: Stop treating Git like SVN -- it was never meant to be that, and projecting your desires onto it does not make its unwillingness to meet those desires a "flaw"! If you can't suss out how to establish a social structure that provides the authoritative "single source of truth" you want -- or find it distasteful that you need to address that at the social level -- then clearly Git does not meet your needs and you should find a tool that does.

That said, nitpicking about history-tracking as a lament for why what you want will never be -- it just misses the point. Badly. Git will never be what you want because it was designed to NOT be that, period.
If you find it so important to not have blind merging, I suggest you do all merges by using diff and manually selecting which hunk wins in each case. That will free you from the confines of a tool which actually keeps track of things for you.
I think Monotone's mark-merge escalates all these decisions to the user, FWIW.
I believe you're correct, and also that mark-merge will get horribly over-conservative in situations where two different branches keep pulling old versions of the other one for an extended period of time. It seems like nothing supports that use case well, and no one has ever really complained about it.

Deleted comment

I believe the terms you would like are 'commutative and associative'. What I mean by 'eventual consistency' is that if everybody eventually pulls in all the same history, they'll all wind up at the same value (assuming no merge conflicts). At least that's what I think I mean, I'm just trying to use the same terminology as earlier posts.

I suspect that every layer is a spoiler for eventual consistency, by the way. It just plain won't happen unless you do some very artificial canonical reordering of changes when new history comes in, and that can result in the codebase bizarrely jumping around in some edge cases.
Your sixth sentence would read much better as:
Git follows the semantics which you want 99% of the time, at the cost of having some edge cases upon which it's inherently just plain broken.
Those are grammatical rules up with which I will not put.
> In this example, one person changed a file's contents from A to B, then back to A, while someone else changed A to B and left it that way. The question is: What to do when the two heads are merged together?

Report a merge conflict.

Going back to the basics: what are the semantics of a 3-way merge? Well, we have three snapshots, the base and two branches. We make diffs between the base and the branches. What it actually means is that we reconstruct the programmers' actions from the snapshots: they added these lines, modified these lines, deleted these lines.

Then we either merge the diffs and produce the merged snapshot, or discover a conflict: both programmers modified the same line, or one deleted the line another modified, or both added some lines in the same place.
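
You can poke at that 3-way primitive directly with git merge-file, which takes the two sides and the base as plain files; here it is replaying the revert example from the post (file names illustrative):

    printf 'A\n' > base
    printf 'A\n' > ours     # we changed A to B and then back to A
    printf 'B\n' > theirs   # they changed A to B and kept it
    git merge-file -p ours base theirs   # prints B: ours matches base, so theirs wins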

And if we look at it this way, then the root of the problem is that git (as well as Mercurial, SVN, etc.) takes a shortcut when computing the diffs to be fed into the 3-way merge, and that sometimes produces incorrect/inconsistent results.

An example of git doing it wrong: http://pastebin.com/SxmwpFkY

If I run the script, switch to master and do "git diff master~2", git produces an incorrect diff:
@@ -3,3 +3,11 @@ B
 C
 D
 E
+G
+G
+G
+A
+B
+C
+D
+E
I mean, it's a correct diff between the two snapshots, but this is not what I did.

But when I run "git blame", it produces the correct "diff":
6584854c 1) A
6584854c 2) B
6584854c 3) C
6584854c 4) D
6584854c 5) E
c36e1ff8 6) G
c36e1ff8 7) G
c36e1ff8 8) G
^b43cba4 9) A
^b43cba4 10) B
^b43cba4 11) C
^b43cba4 12) D
^b43cba4 13) E

Here it examines all commits leading to the current one, and deduces the position of the lines from the initial commit (^b43cba4) correctly.

And that's it. Normal diffs are not associative, given successive snapshots a, b, c, diff(diff(a, b), c) != diff(a, diff(b, c)) != diff(a, c). While the output of `blame` is associative (except maybe for deleted lines).
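
The information loss is easy to see with plain diff on the post's A/B/A example (file names illustrative):

    printf 'A\n' > s1 && printf 'B\n' > s2 && printf 'A\n' > s3
    diff s1 s3       # empty output: the direct diff has forgotten the round trip
    diff s1 s2       # the step-by-step diffs still record the change...
    diff s2 s3       # ...and its reversal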

So it seems that if git (and hg, and svn) used the output of a blame-like algorithm for merging, then the order of automatic merges wouldn't matter. And the problem becomes a purely technical one: how to make this blame-like algorithm fast enough.

(By the way, of course the basic step, the reconstruction of the programmer's actions from the difference between snapshots, is not infallible. But when you use it on two adjacent snapshots it's good enough, and pretty transparent. As the distance between the snapshots being diffed increases, measured in intermediate commits, the probability of getting it wrong grows.)
Showing a conflict in that example would be clearly broken behavior. It could result in repeatedly showing the exact same merge conflict over and over again, between the exact same values, on later merges.
> It could result in repeatedly showing the exact same merge conflict over and over again, between the exact same values, on later merges.

Why? Any merge establishes definitive ancestries for each line of code, and when you are merging "A - B - A" with something, you are supposed to tell it that the conflicting lines come from the base snapshot, not from your "reversal". In fact, when you want to revert a commit, instead of re-committing the previous version you should merge with it, I think.

Was that the problem that you were thinking about?


Dear Bram:

One man's "confusing crap" is another man's "nice example". I have always found your revert-based example to be, while technically interesting, not the sort of thing that I imagine running into a lot in practice. (Like you say, If it hurts when you do that then don't do that!) Also I tend to get confused when you get to the criss-cross scenario.

On the other hand my bugfix-based example that you link to at the top illustrates an issue that is relevant to pretty much every merge. The only reason people don't notice it in practice is that usually the "fuzzy target selection" algorithm gets lucky. That's the one in which you search for a hunk in the target which is near where the original hunk was located or has some of the same neighboring lines of code as the original hunk had.

Anyway, I'm kind of irritated that you alluded to my nice example (or possibly to Russell O'Connor's extension of it) as "confusing" and "crap". If you can think of a simplification or a clarification of the bugfix-based example, I would be interested to see it. Your revert-based example is not that, though--it is a different thing.

Regards,

Zooko
The problem with dealing directly with the positioning example is that my argument is completely semantic: I'm basically saying 'maybe the user really did mean for it to be a completely fresh version, and just ignore the history'. Which is basically an argument in favor of fuzzy matches in general. Examples where there's a lot more editing in the interim make it much more likely that the user really didn't follow all the line moves and simply wants the fuzzy match.

My point here is about the higher-level thesis - that consistent merges are just plain impossible. An argument can be made for it in the line ordering case as well, but it's a weaker one, hence my use of just this example.
On Reddit, a user quite reasonably asks:

> Having multiple branches which are constantly pulling in each other's changes at a slight lag is bad development practice anyway

Wait, ain't that the scenario for which DVCS are meant?

Are we misunderstanding what you meant?

You should generally have a master, and have branches pull from master frequently and sync to master occasionally; when they do sync to master, it should be off the most recent version.
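
In git terms that workflow is roughly the following (a sketch, with a hypothetical 'feature' branch):

    git checkout feature && git merge master   # pull from master frequently
    git checkout master && git merge feature   # sync back occasionally; having just
                                               # pulled, this lands on the latest master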