Test Budgeting

I ran across this concept in an internal discussion group a while back and thought it would be worth sharing. Arlo Belshee is the coiner of the phrase, as far as I know. The idea is simply that you come up with a threshold of time that you find acceptable for your tests to run in, one which still allows you to maintain a TDD workflow. Below is Arlo’s recommended test budget.
Test Budget
  1. Up to 3 tests that take longer than 2 seconds.
  2. Up to 100 tests that take longer than 0.02 seconds.
  3. Infinite tests that take less than 0.02 seconds.
I also think that you should keep a ratio between the three: #1 should comprise no more than 1% of your tests and #2 no more than 10%. The idea is that as you’re writing unit tests you need to design your code in such a way that it is easily testable, in order to stay within your test budget. If you write slow tests early you end up hitting a wall, and the friction caused by the tests degrades the value proposition of TDD.
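
One way to keep yourself honest (my own sketch, not part of Arlo’s budget) is to fail any test that blows its allotment. Here is a minimal helper, assuming NUnit; the TestBudget and WithinBudget names and the 20 ms threshold are purely illustrative:

using System;
using System.Diagnostics;
using NUnit.Framework;

public static class TestBudget
{
    // Fails the calling test if the supplied action exceeds its time budget.
    public static void WithinBudget(TimeSpan budget, Action test)
    {
        Stopwatch stopwatch = Stopwatch.StartNew();
        test();
        stopwatch.Stop();

        Assert.IsTrue(
            stopwatch.Elapsed <= budget,
            "Test took {0}, which exceeds its budget of {1}.",
            stopwatch.Elapsed, budget);
    }
}

[TestFixture]
public class BudgetedTests
{
    [Test]
    public void SomeFastTest()
    {
        // A "category 3" test: it must finish in under 0.02 seconds.
        TestBudget.WithinBudget(TimeSpan.FromMilliseconds(20), () =>
        {
            // ... exercise the code under test here ...
        });
    }
}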

On Premature Optimization

I have heard the phrase “premature optimization is the root of all evil” many times but have never had a chance to consciously put it to the test before. Meta# has a few critical execution paths where performance is a big concern and which have a large impact on how fast the overall parsing process is.

However, I intentionally ignored all performance concerns up to this point, choosing to trust the wisdom that says to avoid premature optimization. I finally got to a point where most of the main features I wanted were in place and I had some very good test coverage (it turns out I had 91% coverage the first time I ran the code coverage tool). So I decided to embark upon a journey of performance optimization.

Before/After
Before: Tests: 662, Failures: 0, Skipped: 1, Time: 32.402 seconds
After: Tests: 665, Failures: 0, Skipped: 0, Time: 16.977 seconds

I’d say that it was a huge success! The three new tests actually parse all of the .g files in meta# again and track their performance, which means that the slowest tests are now run twice and the whole run still takes about half the time it did before.
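
The tracking tests themselves are nothing fancy; they are essentially a stopwatch wrapped around the grammar parsing. I won’t reproduce the real ones here, but the shape is roughly this (the Grammars folder and the ParseGrammar call are stand-ins, not the actual meta# API):

using System;
using System.Diagnostics;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class GrammarPerformanceTests
{
    [Test]
    public void ParseEachGrammarFile_TrackAndBoundTheTime()
    {
        foreach (string path in Directory.GetFiles("Grammars", "*.g"))
        {
            Stopwatch stopwatch = Stopwatch.StartNew();
            ParseGrammar(File.ReadAllText(path));
            stopwatch.Stop();

            // Track the per-file numbers so regressions show up between runs.
            Console.WriteLine("{0}: {1}ms", Path.GetFileName(path), stopwatch.ElapsedMilliseconds);

            // Keep each file inside a generous ceiling (budget category #1).
            Assert.IsTrue(
                stopwatch.Elapsed < TimeSpan.FromSeconds(2),
                "{0} took {1} to parse.", path, stopwatch.Elapsed);
        }
    }

    private static void ParseGrammar(string source)
    {
        // ... invoke the real parser here ...
    }
}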

I can tell you that when I first went to look into where to do optimizations I almost panicked. I thought my code was already perfect and that the performance flaw was in the design itself; I had a moment of crisis. But it turned out there was plenty of low-hanging fruit ready for optimization.

So I’m officially a believer in avoiding premature optimization at this point. I would add that I relied heavily on an excellent unit test base to prove that my changes still worked, and that is a crucial piece of being able to make these kinds of systemic changes.

Also, I used the excellent TestDriven.NET performance tools to give me my data. I highly recommend it. You just right-click a test and select Run Test -> Performance. It gives you a very detailed report and the ability to find your slowest calls very quickly. Optimize and try again! A very clean cycle.

Does process kill developer passion?

http://radar.oreilly.com/2011/05/process-kills-developer-passion.html

In general I agree with most of what he says, such as

In short, you’re spending a lot of your time on process, and less and less
actually coding the applications.

… having to shoehorn in shims to make unit tests work has reduced the
readability of the code.

Disaffected programmers write poor code, and poor code makes management add
more process in an attempt to “make” their programmers write good code. That
just makes morale worse, and so on.

The blind application of process best practices across all development is
turning what should be a creative process into chartered accountancy with a side
of prison.

And as an aside, if you’re going to say you’re practicing agile development,
then practice agile development! A project where you decide before you
start a product cycle the features that must be in the product, the ship date,
and the assigned resources is a waterfall project.

However, I strongly disagree with this:

But, for example, maybe junior (or specialized) developers should be writing
the unit tests, leaving the more seasoned developers free to concentrate on the
actual implementation of the application.

But I would like to say that I really do love TDD. As I work on this new version of MetaSharp I am driving it with tests as best I can. Tests are critical for verifying that your code is actually correct and that some new feature doesn’t break something you have already done. That being said, it’s really only useful in a project where you already know where you are going. When I first started MetaSharp there was a lot of experimentation and plenty of dead ends, and when I wrote a lot of tests it was mostly just wasted effort to undo and redo them; it was a pain in the ass, frankly. But after a lot of prototyping and experimenting I finally decided that I knew where I wanted to be and started over.

In this new iteration I have been writing as many tests as I can without slowing my momentum down too much, and the thing is, when you know the domain and where you need to end up the tests do not slow you down at all. It’s excellent in that scenario. So I think if you go into a coding phase with a prototyping mentality then, meh, maybe TDD is more of a hindrance, but seriously, be prepared to throw it all away. The quality just won’t be high enough without extensive tests. It’s not a foundation to build too much on top of.

But TDD is really only part of the story. I’m not in a position to dictate the development process for my team at work, so I’m trying to analyze it and figure out what I like and don’t like, because you can learn about as much from what doesn’t work as you do from what does. I feel like we’re essentially waterfall even though we use scrum terminology; there are some things managers just can’t live without!

If I had it my way, if I had developers working under me, I would see my role as essentially a buffer. I would be as much of a dev as I could manage, but the difference would be that I would also buffer my devs from unnecessary meetings from higher up. I would be the one to gather information from external groups and filter down what is important to whom. I would gather my knowledge of their progress by asking them in person, one at a time, and by being active in the daily process of work items. I would encourage them to communicate directly and continually with each other rather than set up a mandatory scrum meeting. To me scrum is like a “spray and pray” information dispersal method: it’s a waste of time for most people in the room every time someone else is speaking. I would encourage pair programming, and I would be the one, as much as possible, to maintain the database of work items and keep the devs’ pipelines full.

Also, integration sprints? Waste of time; continuous integration should fix that. Planning sprints? The dev lead should be continuously doing that. At some point bugs end up at the top of the developers’ pipelines simply because of their importance, and therefore you are continuously fixing them rather than dedicating one period of time to bugs and another to features. In fact the whole idea of a sprint seems arbitrary to me. Just always be working on what’s next. At each step, with each feature, your application should be working. Bugs simply rising to the top of the pipeline is basically equivalent to an integration sprint in my mind.

Code reviews? I despise gated commits. Code reviews should be done post-commit. The dev manager should just look at the list of commits and start reviewing them. Peers shouldn’t really need to do formal code reviews because they should be in constant communication with the people they are working closely with. If there is a smell in something that was committed then start a discussion; there’s no reason it couldn’t be fixed post-commit. I’m assuming we’re using good source control tools that allow us to merge and branch confidently.

I could go on and on, but I’m still working out these ideas; I have never really said these thoughts out loud before, except perhaps over a pint of beer with a friend.

UnitDriven v0.0.5 Available

http://unitdriven.codeplex.com/releases/view/46068

I applied some updates to UnitDriven and released a new version recently. The updates provide some nicer nesting of namespaces in the test runner UI as well as improved disabling of ‘Run’ buttons and correctly functioning Timeouts.

Also the update is paired with updates to StatLight so you can run your Silverlight unit tests in a purely automated fashion.

Also, if you find yourself trying to remember why you would want to use UnitDriven instead of one of the other unit test frameworks for Silverlight, here are the main features.

  • First class support for asynchronous tests (a quick sketch follows this list).
  • Supports the ability to author identical tests for Silverlight and .NET (file linking).
  • Parallels either MSTest or NUnit seamlessly.
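
As a taste of the first point, an asynchronous test in UnitDriven looks roughly like the sketch below; it is adapted from the examples in my earlier post on asynchronous testing (further down the page), and the worker body is just a placeholder:

[TestClass]
public class AsyncExample : TestBase
{
    [TestMethod]
    public void WorkerCompletesWithoutError()
    {
        UnitTestContext context = GetContext();

        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (o, e) =>
        {
            // Do some asynchronous processing…
        };
        worker.RunWorkerCompleted += (o, e) =>
        {
            // Exceptions thrown inside Try are captured and reported
            // on the test thread instead of hanging the run.
            context.Assert.Try(() =>
            {
                if (e.Error != null)
                    throw e.Error;
            });
        };
        worker.RunWorkerAsync();

        // Blocks in .NET, returns immediately in Silverlight.
        context.Complete();
    }
}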

NUnit for .NET 4.0

I recently upgraded one of my projects to .NET 4.0 only to find that NUnit would no longer run my tests. Rather than waiting for the next version of NUnit to ship with a .NET 4 build, I decided to download the source code and build it myself. I’m including a zip file with the required solution and project files to build NUnit yourself.

This should tide you over until the official build is released.

http://cid-dfcd2d88d3fe101c.skydrive.live.com/embedrowdetail.aspx/blog/justnbusiness/vs2010.zip

Build Steps

  1. Download the latest source code of NUnit
  2. Place the unzipped contents of the zip file at $(NUnit)\solutions\
  3. Open nunit.sln and build
    1. You will get some build errors on one of the assemblies but most of the core assemblies will work fine.
  4. Copy the newly built assemblies into your application’s lib folder (a quick smoke test for the rebuilt assemblies is sketched below).
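
Once the assemblies are in place, a trivial smoke test that leans on a .NET 4-only API is a quick way to confirm the rebuilt NUnit is really running your code on CLR 4. This one is just an illustration, not something included in the zip:

using System;
using NUnit.Framework;

[TestFixture]
public class Net4SmokeTest
{
    [Test]
    public void RunsOnTheNet4Runtime()
    {
        // String.IsNullOrWhiteSpace is new in .NET 4, so this fixture
        // won't even compile against an earlier framework version.
        Assert.IsTrue(string.IsNullOrWhiteSpace("   "));

        // CLR 4 reports a major version of 4 (e.g. 4.0.30319).
        Assert.AreEqual(4, Environment.Version.Major);
    }
}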

Specter – A Specification DSL for Boo

Specification frameworks are a great way to enable a smooth approach to test-driven development. A nice and simple specification framework called Specter has been created specifically for Boo, and it includes a simple and elegant internal specification DSL. If you’re curious to learn about specification frameworks, Boo, or DSLs, you should check out Specter for an elegant and useful example of all three.

Here is a really great video of the step-by-step process:

Unit Testing Asynchronous Code in Silverlight

My current job is a very enjoyable one. I have the pleasure of working for Rocky Lhotka on CSLA 3.6 and CSLA Light (more specifically CSLA Light, but the two definitely bleed together).

CSLA Light is a project where we are trying to create a version of CSLA that will run on Silverlight. If you’re interested in hearing more details about this you should check out Rocky’s blog, since it is the most up-to-date authority on CSLA development progress.

One of the problems we ran into right away was the ability to unit test our Silverlight code. Unit testing on Silverlight presents a number of limitations that are not present in a standard .NET application. We initially tried using the unit test framework provided by Microsoft but found it impossible to test anything that involved a WCF service call due to threading issues.

To help illustrate the problem, consider the following test:

[TestClass]
public class TestExample
{
    [TestMethod]
    public void Example()
    {
        ManualResetEvent mre = new ManualResetEvent(false);
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (o, e) =>
        {
            // Do some processing…
        };
        worker.RunWorkerCompleted += (o, e) =>
        {
            mre.Set();
        };
        worker.RunWorkerAsync();

        mre.WaitOne();
    }
}

This test simulates a unit of work that is performed asynchronously. If you run this test in Silverlight, what will happen? Also, suppose there is a bug in your DoWork method and an exception is thrown; what will happen? I’ll get back to this in a moment.

One of the features of CSLA is something called the “Data Portal”, a concept that has been preserved in CSLA Light with only some slight differences; primarily, all network calls in Silverlight must be made asynchronously. The Data Portal is the mechanism your CSLA business objects must use to transfer data across network-separated application tiers. In CSLA 3.6 an asynchronous Data Portal has been created to provide parity with Silverlight, not to mention the fact that it’s just plain cool.

One interesting thing to know about Silverlight is that whenever you use WCF to make a network call it will dispatch that call onto the UI thread. I believe this is actually a limitation of the browser rather than of Silverlight itself; it must be piggybacking on top of the browser’s XmlHttpRequest functionality and therefore suffers from the same limitations.

This is a major problem for the Silverlight MSTest framework! Since your test is running on the UI thread, if it tries to make a WCF call the call’s completion will need to be dispatched back onto that same UI thread, and you will end up with a deadlock. The above test will not work in Silverlight because we have used a ManualResetEvent to hold the UI thread, and the worker’s completion event is itself dispatched onto the UI thread.
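
To make the deadlock concrete, here is the same trap sketched with a WCF proxy instead of a BackgroundWorker; MyServiceClient and GetData are hypothetical names, but any generated Silverlight proxy behaves this way:

// This code runs on the UI thread inside the Silverlight test framework.
ManualResetEvent done = new ManualResetEvent(false);

MyServiceClient proxy = new MyServiceClient();   // hypothetical generated WCF proxy
proxy.GetDataCompleted += (o, e) => done.Set();  // completion is raised on the UI thread
proxy.GetDataAsync();

// Blocks the UI thread, so GetDataCompleted can never be dispatched: deadlock.
done.WaitOne();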

So to respond to this we came up with a lightweight unit testing framework that lets you easily test asynchronous code in Silverlight and that accompanies NUnit or MSTest in .NET. The project is on CodePlex and it is called UnitDriven. It is designed to let you test asynchronous code in both Silverlight and .NET; in fact, the test code we are writing for CSLA is identical for both CSLA and CSLA Light despite the various differences in the environments. Here is an example of how you would write the previous test, addressing all of the questions I posed, using UnitDriven:

[TestClass]
public class TestExample : TestBase
{
    [TestMethod]
    public void Example()
    {
        UnitTestContext context = GetContext();
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (o, e) =>
        {
            // Do some processing…
        };
        worker.RunWorkerCompleted += (o, e) =>
        {
        };
        worker.RunWorkerAsync();

        context.Complete();
    }
}

The subtle difference in this approach is simply that the UnitTestContext object blocks the thread for you, or not, depending on the framework your test is running in, and that an AsyncAsserter manages getting exceptions back to the test thread for you.

In the first example you would end up with a deadlock in Silverlight, and if your DoWork method threw an exception in either framework it would be treated as an unhandled exception and would cause your test to hang. With UnitDriven we are able to manage this easily by using Assert.Try(…):

[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void ExpectedExceptionExample()
{
    UnitTestContext context = GetContext();
    BackgroundWorker worker = new BackgroundWorker();
    worker.RunWorkerCompleted += (o, e) =>
    {
        // catches exception here and passes to the context.
        context.Assert.Try(() =>
        {
            throw new InvalidOperationException();
        });
        context.Assert.Fail();
    };
    worker.RunWorkerAsync();

    // When the context is triggered it will find the
    // exception and re-throw it in .NET
    // and simply pass it back in Silverlight.
    context.Complete();
}

For some working examples check out our AsyncTests.cs on CodePlex.

Happy Testing!