Who we are

We are the developers of Plastic SCM, a full version control stack (not a Git variant). We work on the strongest branching and merging you can find, and a core that doesn't choke on huge binaries and repos. We also develop the GUIs, merge tools and everything else needed to give you a complete version control stack.

If you want to give it a try, download it from here.

We also code SemanticMerge, and the gmaster Git client.

Unscientific 4.0 benchmark test

Wednesday, April 27, 2011, 5 Comments

As you all know by now, we're working hard on Plastic SCM 4.0, the upcoming release where we're trying to bring together all the suggestions we've received from our user base over the last few years (especially the "big ones", which can't easily be included in minor releases), together with a number of "hard-core" changes to the branching and merging engine and the replication (distributed) engine.

Hence 4.0 will come with a number of new features, including a new GUI and a new Distributed Branch Explorer, but also greatly improved performance. And that's exactly what I'll be sharing today: how 4.0 performs in some simple operations against well-known DVCSs.

I asked the team to gather some numbers after reading the following tweet from Eric Sink (the brain behind Veracity and the great Vault).

So, we tried a similar scenario on a Dell XPS 13 laptop (4GB RAM, 7200rpm HD... well, you can find the full specs online; not a beast!) running Windows 7, using two different data sets: a small repo (similar to the one used in the tweet above) and a "huge" one (similar to the ones used in the gaming industry).
And here are the results, first for the small one (add+commit of 2752 files, 45MB):

- hg: 7.8s
- git: 3.6s
- plastic: 3.4s
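For anyone who wants to reproduce the small test, a rough sketch of the kind of loop we ran could look like the following. This is a hypothetical reconstruction, not our actual script: the file count, file sizes and commit identity below are placeholders, and the equivalent `hg` and `cm` (Plastic) runs follow the same add+commit pattern.

```shell
# Hypothetical reproduction sketch: time an add+commit of freshly
# generated files in a brand-new Git repository.
set -e
N=50                                   # illustrative; the small test used 2752 files / 45MB
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
i=0
while [ "$i" -lt "$N" ]; do
  head -c 1024 /dev/urandom > "file$i.bin"   # ~1 KB of random data per file
  i=$((i + 1))
done
start=$(date +%s)
git add .
git -c user.name=bench -c user.email=bench@example.com \
    commit -q -m "initial import"
end=$(date +%s)
echo "add+commit of $N files took $((end - start))s"
```

Wall-clock timing with `date +%s` is coarse (one-second resolution), which is fine for the multi-second runs above; for finer measurements you would wrap the add+commit in `time` instead.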

It is a really tiny test, but we're happy to already outperform the other DVCSs.
Versions used: Plastic 4.0.192 (internal release) with the SQLite backend, Hg 1.6.2, Git 1.7.4.msysgit.0.

And now let's try a much bigger test, using 192k files in 33k directories, 5.82GB in total. I'm happy to announce that under these circumstances Plastic does even better (our goal is to become the best DVCS at handling big files). 192k files, 5.82GB:

- hg: 1563s
- git: 1256s
- plastic: 601s



  1. Looks good. I'd be interested in seeing how performance compares to Perforce with large repositories. We use Perforce at my work, but I'm really keen on convincing the guys to switch us to Plastic if it can handle large repos efficiently.


  2. Check-in is slower? And how does 4.0 compare to Plastic 3.0? (How much improvement over the current version?)

  3. @broccula: this test is interesting (especially since we outperform competitors :P), but it is clearly not that important under real circumstances (OK, during evaluation it could be, but in real life you're not going to be doing this on a daily basis).

    What I can tell you is that P4, in this scenario (and it is very easy to check; we can add results later this week), runs much, much, much slower... And comparing with SVN would simply be overkill.

    But, if you're interested in a real life scenario, showing how Plastic outperforms P4 under real load, take a look at http://codicesoftware.blogspot.com/2010/07/version-control-scalability-shoot-out.html

  4. @André: no, checkin is not slower, take into account that Git adds data to the repository (well, the .git directory) during add, something Hg and Plastic don't do, so they "save time" for checkin.

    Also, it is important to note that while Git is "just" writing to a local directory, Plastic is using the network to send data, plus a database (SQLite), so the performance gain is more at the design level than at the implementation level... (Also, it's C against C#, and C# winning the battle... especially with HUGE repos.)

    That being said, I'd never say "just" again when talking about Git, since it is obviously an excellent, really excellent, piece of software. The Git team is much bigger than ours, and they obviously have exceptional hackers...

  5. Any chance that we could see these benchmarks expanded to include Bazaar? We have some people who really like bzr, except that it's so slow for large repos (10K+ files, 1GB+ total store). It could win some converts here...