Command line Plastic Testing...

Sunday, May 06, 2007 · Pablo Santos · 2 Comments

I already wrote about the way we do testing at DDJ a few months ago, when I first introduced PNUnit (Parallel NUnit).
We are currently using 3 layers of tests to check Plastic SCM:


* The first one is conventional unit testing using NUnit. Here we check the basics: database storage, merging, basic security, selector resolution and so on.
* The second one is PNUnit testing, which is what I'll be talking about today, so let's jump to the last one.
* The third one is GUI testing: we use AutomatedQA's TestComplete to create and run graphical tests, with very good results. We use TestComplete to check all the functionality the GUI tool provides, and we repeat the tests under different configurations: W2K, XP, W2003 and different database backends. We also try different authentication setups: Active Directory, LDAP (running on a Solaris SPARC server) or just local user names. The good thing about TestComplete is that you can also create "concurrent" tests: you launch a client in one box, another client in another box and a server on a third one, and you can synchronize them to perform combined actions. The weak point here is that TestComplete doesn't run on Linux...
Another key part of our testing environment is VMware (www.vmware.com): to try different setups (except for performance tests) we use virtual machines. This way it is easy to start testing from a certain known configuration, release after release.
We run NUnit, PNUnit and a reduced set of the GUI tests each time we finish a certain task. If they all pass, the task is marked as finished.
Once a week we create a new internal release, and then the same tests are run again, but now also trying different OS combinations, as I mentioned before.
There is a fourth testing layer that we use for stress testing: an external 50-Xeon cluster (we rent it by the hour) that we use to put Plastic under simulated heavy load: with 50 CPUs you can really launch lots of clients against a single server... so this is a good benchmark for our system.
Ok, but why did we go for PNUnit in the first place? And what is it exactly?
Most of the time counting on a unit test framework like NUnit is enough, but what if you want to do the following:
- Start a server
- Launch a client and perform several commands: add a couple of files, check them in and recover their contents to verify they are correctly stored. With Plastic you would type something like:

$ cm add file00.c file01.c
$ cm ci file00.c file01.c
$ cm getfile file00.c#br:/main#LAST -file=filetocheck
$ cm getfile file01.c#br:/main#LAST -file=filetocheck2

And then you would check the contents of filetocheck and filetocheck2.
Ok, you could do that with plain NUnit: you could start the server core from code, and also a client, and perform the commands. But most likely, instead of really "typing the commands", you would be using the internal APIs, which is good, but not exactly the same. And even more: what if now I want to start the server on one machine and the client on another one?
Ok, you could just create some sort of shell script and try to automate the process, but:
* It won't (most likely) be portable between Windows and Linux
* You won't have all the NUnit conveniences (like asserts, test results and so on)
* What about synchronization? You need to wait for the server to be correctly started before launching the client, otherwise the test will fail...
And these were basically the reasons why we decided to implement PNUnit at the end of 2005, as the core of our internal testing system.
PNUnit is some sort of NUnit wrapper, and it provides:
* A way to start tests remotely on different machines (provided they are running the PNUnit "agent")
* A way to gather test results from the multiple testing machines
* Synchronization facilities (a barrier mechanism) so that you can easily synchronize different tests
* A method to configure "test cases" based on XML.
The last point is very important because this way you can create many different test configurations based on the same code. Let's go back to the simple example that just adds two files; it could be named AddTest. Imagine we also have a test that just launches the Plastic server, named ServerTest. Both tests are implemented in the assembly cmtest.dll, and by specifying the right configuration file we can create different test combinations, running the same code on the same machine (for instance as smoke tests for developers) or on different ones (checking real network operations).
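
To give an idea of the format, here is a sketch of a possible configuration file for that scenario. TestConf and Machine are described right below; the rest of the element names follow the sample configurations shipped with PNUnit, and the machine names, ports and test parameters are purely illustrative:

<TestGroup>
  <ParallelTests>
    <ParallelTest>
      <Name>AddScenario</Name>
      <Tests>
        <!-- the server side of the scenario -->
        <TestConf>
          <Name>server</Name>
          <Assembly>cmtest.dll</Assembly>
          <TestToRun>ServerTest</TestToRun>
          <Machine>serverbox:8080</Machine>
        </TestConf>
        <!-- the client side: the params map to wkbasepath and servername -->
        <TestConf>
          <Name>client</Name>
          <Assembly>cmtest.dll</Assembly>
          <TestToRun>AddTest</TestToRun>
          <Machine>clientbox:8080</Machine>
          <TestParams>
            <string>c:\wkspaces</string>
            <string>serverbox:8084</string>
          </TestParams>
        </TestConf>
      </Tests>
    </ParallelTest>
  </ParallelTests>
</TestGroup>

Changing both Machine entries to localhost runs the same code as a single-box smoke test; a PNUnit agent listening on each machine executes the tests, and the launcher reads this file and coordinates the whole run.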

Inside the TestConf block there is a param named Machine which specifies where the test has to be run. So configuring different scenarios is really, really simple.

Well, but what does the real "test code" look like? We have built some utility code to wrap our testing code, so that when we write a new test we don't have to care about barriers and so on (although from time to time it still has to be done, to create special scenarios). The test launching code looks like this:

public static void RunTest(string methodTestName, ExecuteTestDelegate test)
{
    string testName = PNUnitServices.Get().GetTestName();
    try
    {
        string[] testParams = PNUnitServices.Get().GetTestParams();
        string wkbasepath = testParams[0];
        string servername = testParams[1];

        PNUnitServices.Get().WriteLine("The client should wait until the server starts");
        PNUnitServices.Get().InitBarrier(Names.ServerBarrier);
        PNUnitServices.Get().InitBarrier(Names.EndBarrier);

        // wait for the server to start
        PNUnitServices.Get().EnterBarrier(Names.ServerBarrier);

        // execute the test
        test(testName, servername, wkbasepath);

        // notify the end
        PNUnitServices.Get().EnterBarrier(Names.EndBarrier);
        CmdRunner.TerminateShell();
    }
    catch( Exception e )
    {
        PNUnitServices.Get().WriteLine(
            string.Format("{0} {1} FAILED, exception {2}",
                methodTestName, testName, e.Message));
        throw;
    }
}

As you can see, we initialize a couple of barriers: this means telling the "test coordinator" (the "launcher") that it will have to handle two different barriers. Then we enter ServerBarrier: until the server reaches this same point, the client won't be able to proceed... so we make sure everything is ok when the client starts running. Then we execute the test, and at the end we pass through EndBarrier, notifying the end of the scenario.
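
The server side of the scenario enters the same barriers from the other end. What follows is only a sketch of what such a ServerTest could look like: the PNUnitServices calls mirror the client code above, while StartPlasticServer and StopPlasticServer are hypothetical helpers standing in for whatever actually boots the server.

public static void RunServer()
{
    // register the same barriers with the launcher
    PNUnitServices.Get().InitBarrier(Names.ServerBarrier);
    PNUnitServices.Get().InitBarrier(Names.EndBarrier);

    // hypothetical helper: bring the Plastic server up
    StartPlasticServer();

    // entering ServerBarrier releases the client waiting on it
    PNUnitServices.Get().EnterBarrier(Names.ServerBarrier);

    // wait here until the client finishes the scenario
    PNUnitServices.Get().EnterBarrier(Names.EndBarrier);

    // hypothetical helper: shut the server down
    StopPlasticServer();
}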

And what does the "real test code" look like?


private void DoAddCheckInCheckOut(
    string testName,
    string servername,
    string wkbasepath)
{
    string wkpath = null;
    try
    {
        string repname = "mainrep-" + testName + "-DoAddCheckInCheckOut";
        wkpath = TestHelper.CreateRepAndWorkspaceWithSelector(
            testName, servername, wkbasepath,
            repname,
            new SelectorTest(SelectorTypes.SELECTOR, new string[] {repname}));

        // check out the parent directory
        CmdRunner.ExecuteCommand(
            string.Format("cm co . -wks={0}", servername), wkpath);

        // add a file
        string filepath = Path.Combine(wkpath, FILE_NAME);
        FSHelper.WriteFile(filepath, FILE_CONTENT);
        CmdRunner.ExecuteCommand(
            string.Format("cm add {0} -wks={1}", filepath, servername), wkpath);

        // check in
        CmdRunner.ExecuteCommand(
            string.Format("cm ci {0} -wks={1}", filepath, servername), wkpath);

        // check out
        CmdRunner.ExecuteCommand(
            string.Format("cm co {0} -wks={1}", filepath, servername), wkpath);

        // check the file content
        Assert.IsTrue(FSHelper.ReadFile(filepath) == FILE_CONTENT,
            "The file {0} doesn't have the expected content", filepath);

        // check in the parent dir
        CmdRunner.ExecuteCommand(
            string.Format("cm ci . -wks={0}", servername), wkpath);
    }
    finally
    {
        // clean up the workspace
        if( wkpath != null )
            FSHelper.DeleteDirectory(wkpath);
    }
}
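
A method like this gets plugged into RunTest from a plain NUnit fixture. The wiring below is a sketch, assuming ExecuteTestDelegate matches the (testName, servername, wkbasepath) signature used above:

[Test]
public void AddCheckInCheckOut()
{
    // RunTest reads the XML params, handles the barriers
    // and finally invokes our delegate
    RunTest("AddCheckInCheckOut",
        new ExecuteTestDelegate(DoAddCheckInCheckOut));
}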

So, using specific testing classes (like CmdRunner), we end up "typing commands" from code, and we can easily add new test cases to check Plastic's functionality.

The good thing is that adding a new test case is so simple that we always create a PNUnit test before fixing a bug, so we somehow follow "test driven development" not only when adding new code but also when fixing...
Pablo Santos
I'm the CTO and Founder at Códice.
I've been leading Plastic SCM since 2005. My passion is helping teams work better through version control.
I had the opportunity to see teams from many different industries at work while I helped them improve their version control practices.
I really enjoy teaching (I've been a university professor for 6+ years) and sharing my experience in talks and articles.
And I love simple code. You can reach me at @psluaces.

2 comments:

  1. Hello,

    I am running Selenium grid tests using PNUnit to achieve parallel execution. (Reference: https://testingbot.com/support/getting-started/pnunit.html)

    I am running them locally using VMs, not on TestingBot.

    I somehow succeeded in executing them on a small scale, but when I move to a large scale I get an "agent.exe has stopped working" error.

    A snapshot of the error with the agent console stack trace (agentstoppedworking.png): http://postimg.org/image/4rhzait9r/

    Need help on solving the problem. Thanks in advance.

    Thanks,
    Shailesh

    1. Hi Shailesh,

      We're using the agent on a daily basis for thousands of tests. It might be that you have a much older version, by the way.

      Since it is part of NUnit, you could probably ask on their official list.
