- Up to 3 tests that take longer than 2 seconds.
- Up to 100 tests that take longer than 0.02 seconds.
- Any number of tests that take less than 0.02 seconds.
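As a rough illustration, a budget like this can even be enforced by the test itself. Here is a sketch using classic NUnit assertions and System.Diagnostics.Stopwatch; ParseAllGrammarFiles is a hypothetical stand-in for whatever slow work you are timing, not an actual Meta# API:

```csharp
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class PerformanceBudgetTests
{
    // Hypothetical stand-in for the real work under test,
    // e.g. parsing all of the .g files.
    private static void ParseAllGrammarFiles()
    {
        // ... do the expensive work here ...
    }

    [Test]
    public void ParsingStaysUnderBudget()
    {
        Stopwatch watch = Stopwatch.StartNew();
        ParseAllGrammarFiles();
        watch.Stop();

        // One of the "up to 3 slow tests": a 2 second budget.
        Assert.Less(watch.ElapsedMilliseconds, 2000L);
    }
}
```

A test like this doubles as a regression guard: if a future change blows the budget, the suite fails instead of silently getting slower.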
I have heard the phrase “premature optimization is the root of all evil” many times but had never had a chance to consciously put it to the test before. Meta# has a few critical execution paths where performance is a major concern, and they have a large impact on how fast the overall parsing process runs.
Up to this point, however, I intentionally ignored all performance concerns, choosing to trust the wisdom that says to avoid premature optimization. I finally got to a point where most of the main features I wanted were in place and I had some very good test coverage (it turns out I had 91% coverage the first time I ran the code coverage tool). So I decided to embark upon a journey of performance optimization.
Before: Tests: 662, Failures: 0, Skipped: 1, Time: 32.402 seconds
After: Tests: 665, Failures: 0, Skipped: 0, Time: 16.977 seconds
I’d say that it was a huge success! The three new tests are actually parsing all of the .g files in Meta# again and tracking their performance, which means the slowest tests are now run twice and the whole run still takes about half the time it did before.
I can tell you that when I first went looking for places to optimize, I almost panicked. I thought my code was perfect and that the performance flaw was in the design itself; I had a moment of crisis. But it turned out there were tons of low-hanging fruit ready for optimization.
So I’m officially a believer in avoiding premature optimization at this point. I would add that I relied heavily on an excellent base of unit tests to prove that my changes still worked, and that is a crucial piece of being able to make these types of systemic changes.
Also, I used the excellent TestDriven.NET performance tools to gather my data, and I highly recommend them. You just right-click a test and select Run Test -> Performance. It gives you a very detailed report and lets you find your slowest calls very quickly. Optimize and try again! A very clean cycle.
In general I agree with most of what he says, such as:
In short, you’re spending a lot of your time on process, and less and less
actually coding the applications.
… having to shoehorn in shims to make unit tests work has reduced the
readability of the code.
Disaffected programmers write poor code, and poor code makes management add
more process in an attempt to “make” their programmers write good code. That
just makes morale worse, and so on.
The blind application of process best practices across all development is
turning what should be a creative process into chartered accountancy with a side
And as an aside, if you’re going to say you’re practicing agile development,
then practice agile development! A project where you decide before you
start a product cycle the features that must be in the product, the ship date,
and the assigned resources is a waterfall project.
However, I strongly disagree with this:
But, for example, maybe junior (or specialized) developers should be writing
the unit tests, leaving the more seasoned developers free to concentrate on the
actual implementation of the application.
But I would like to say that I really do love TDD; as I work on this new version of MetaSharp I am driving it with tests as best I can. Tests are critical for verifying that your code is actually correct and that some new feature doesn’t break something you have already done. That being said, TDD is really only useful in a project where you already know where you are going. When I first started MetaSharp there was a lot of experimentation and plenty of dead ends, and when I wrote a lot of tests it was mostly just wasted effort to undo and redo them; it was a pain in the ass, frankly.

But after a lot of prototyping and experimenting I finally decided that I knew where I wanted to be and started over. In this new iteration I have been writing as many tests as I can without slowing my momentum down too much, and the thing is, when you know the domain and where you need to end up, the tests do not slow you down at all. It’s excellent in that scenario. So if you go into a coding phase with a prototyping mentality then, meh, maybe TDD is more of a hindrance, but seriously, be prepared to throw that code away. The quality just won’t be high enough without extensive tests. It’s not a foundation to build too much on top of.
But TDD is really only part of the story. I’m not in a position to dictate the development process on my team at work, so I’m trying to analyze it and figure out what I like and don’t like, because you can learn about as much from what doesn’t work as from what does. I feel like we’re essentially waterfall even though we use scrum terminology; there are some things managers just can’t live without!
If I had it my way, if I had developers working under me, I would see my role as essentially a buffer. I would be as much of a dev as I could manage, but the difference would be that I would also buffer my devs from unnecessary meetings from higher up. I would be the one to gather information from external groups and filter down what is important to whom. I would gather my knowledge of their progress by asking them in person, one at a time, and by being active in the daily process of work items. I would encourage them to communicate directly and continually with each other rather than set up a mandatory scrum meeting. To me scrum is like a “spray and pray” information dispersal method: it’s a waste of time for most people in the room whenever someone else is speaking. I would encourage pair programming, and I would be the one, as much as possible, to maintain the database of work items and keep the devs’ pipelines full.
Also, integration sprints? A waste of time; continuous integration should fix that. Planning sprints? The dev lead should be doing that continuously. At some point bugs end up at the top of the developers’ pipelines simply because of their importance, and therefore you are continuously fixing them rather than dedicating one period of time to bugs and another to features. In fact the whole idea of a sprint seems arbitrary to me. Just always be working on what’s next. At each step, with each feature, your application should be working. Bugs simply rising to the top of the pipeline is basically equivalent to an integration sprint in my mind.
Code reviews? I despise gated commits. Code reviews should be done post-commit. The dev manager should just look at the list of commits and start reviewing them. Peers shouldn’t really need to do formal code reviews because they should be in constant communication with the people they are working closely with. If there is a smell in something that was committed, then start a discussion; there is no reason it couldn’t be fixed post-commit. I’m assuming we’re using good source control tools that allow us to merge and branch confidently.
I could go on and on, but I’m still working out these ideas; I have never really said these thoughts out loud before, except perhaps over a pint of beer with a friend.
I applied some updates to UnitDriven and released a new version recently. The updates provide nicer nesting of namespaces in the test runner UI, as well as improved disabling of ‘Run’ buttons and correctly functioning timeouts.
Also, the update is paired with updates to StatLight, so you can run your Silverlight unit tests in a purely automated fashion.
Also, if you find yourself trying to remember why you would want to use UnitDriven instead of one of the other unit testing frameworks for Silverlight, here are the main features:
- First class support for asynchronous tests.
- Supports the ability to author identical tests for Silverlight and .NET (file linking).
- Parallels either MSTest or NUnit seamlessly.
I recently upgraded one of my projects to .NET 4.0 only to find that NUnit would no longer run my tests. Rather than wait for the next version of NUnit to ship with a .NET 4 build, I decided to download the source code and build it myself. I’m including a zip file with the required solution and project files to build NUnit yourself.
This should tide you over until the official build is released.
- Download the latest source code of NUnit
- Place the unzipped contents of this folder at $(NUnit)\solutions\
- Open nunit.sln and build
- You will get some build errors for one of the assemblies, but most of the core assemblies will build fine.
- Copy the newly built assemblies into your application’s lib folder.
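If you prefer to build from the command line instead of opening the solution in Visual Studio, the steps above might look something like this from a Windows command prompt. All of the paths here are assumptions about where you unzipped the source and where your project lives; adjust them to your own layout, and note that the $(NUnit) placeholder stands in for your NUnit source directory:

```
rem Assumed location of the unzipped NUnit source tree.
set NUNIT=C:\src\nunit

rem Place the unzipped solution and project files at $(NUnit)\solutions\
xcopy /E /I nunit-net4-files %NUNIT%\solutions

rem Build nunit.sln with the .NET 4.0 version of MSBuild.
%windir%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe %NUNIT%\solutions\nunit.sln

rem Copy the built assemblies into your application's lib folder
rem (the output path under solutions\ depends on the project settings).
xcopy /Y %NUNIT%\solutions\bin\*.dll C:\myproject\lib\
```

As noted above, expect a few errors on one assembly; as long as the core framework assemblies come out of the build, you have what you need.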