On Premature Optimization

I have heard the phrase “premature optimization is the root of all evil” many times, but I had never had a chance to consciously put it to the test before. Meta# has a few critical execution paths where performance is a big concern and has a large impact on how fast the overall parsing process runs.

Up to this point I had intentionally ignored all performance concerns, choosing to trust the wisdom that says to avoid premature optimization. I finally got to a point where most of the main features I wanted were in place and I had some very good test coverage (it turns out I had 91% coverage the first time I ran the code coverage tool). So I decided to embark upon a journey of performance optimization.

Before: Tests: 662, Failures: 0, Skipped: 1, Time: 32.402 seconds
After: Tests: 665, Failures: 0, Skipped: 0, Time: 16.977 seconds

I’d say that it was a huge success! The three new tests actually parse all of the .g files in Meta# again and track their performance, which means the slowest tests are now run twice and the whole run takes about half the time it did before.
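For a rough idea of what a test like that could look like, here is a minimal sketch using NUnit and Stopwatch. The MetaSharpParser type, the Grammars directory, and the five-second budget are hypothetical stand-ins, not Meta#’s actual API.

    using System;
    using System.Diagnostics;
    using System.IO;
    using NUnit.Framework;

    [TestFixture]
    public class GrammarPerformanceTests
    {
        [Test]
        public void ParseAllGrammarFiles_CompletesWithinBudget()
        {
            var stopwatch = Stopwatch.StartNew();

            // Parse every .g file in the grammars directory (hypothetical path).
            foreach (var file in Directory.GetFiles("Grammars", "*.g"))
            {
                var parser = new MetaSharpParser();   // hypothetical parser type
                parser.Parse(File.ReadAllText(file));
            }

            stopwatch.Stop();

            // Fail the test if parsing regresses past an agreed time budget.
            Assert.Less(stopwatch.ElapsedMilliseconds, 5000,
                "Parsing the .g files took longer than the performance budget.");
        }
    }

A test like this doubles as a regression alarm: if a future change blows the time budget, the build fails rather than the slowdown going unnoticed.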

I can tell you that when I first went looking for places to optimize, I almost panicked. I had thought my code was perfect and that the performance flaw was in the design itself; I had a moment of crisis. But it turned out there was plenty of low-hanging fruit ready for optimization.
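As a hypothetical illustration of what that kind of low-hanging fruit often looks like in a parser (this is not Meta#’s actual code), consider building up token text by string concatenation versus taking a single substring:

    using System;

    static class TokenText
    {
        // Before: concatenation allocates a new string on every character,
        // which is O(n^2) work while scanning a long token.
        public static string Slow(string input, int start, int end)
        {
            string result = "";
            for (int i = start; i < end; i++)
                result += input[i];
            return result;
        }

        // After: a single Substring call does the same job in one allocation.
        public static string Fast(string input, int start, int end)
        {
            return input.Substring(start, end - start);
        }
    }

Fixes of this shape change no behavior at all, which is exactly why good test coverage makes them cheap to apply.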

So at this point I’m officially a believer in avoiding premature optimization. I would add that I relied heavily on an excellent unit test base to prove that my changes still worked, and that is a crucial piece of being able to make these kinds of systemic changes.

Also, I used the excellent TestDriven.NET performance tools to give me my data. I highly recommend them. You just right-click a test and select Run Test -> Performance. It gives you a very detailed report and lets you find your slowest calls very quickly. Optimize and try again! A very clean cycle.

Programming and Scaling

tele-TASK Video: Programming and Scaling.

If you’ve heard me talk about DSLs but just haven’t quite been sold on the idea yet, watch this video. In fact, watch it anyway. Dr. Alan Kay gives a very inspirational and interesting speech about the past, present and future of Computer Science, technological innovation and creativity. The grand finale ties all of his ideas together in a beautiful example of the power of domain specific languages.

I found myself nodding throughout this entire presentation, and even though I didn’t know where it was going I could see how it applied to my own personal research in Meta#. Thank you, Dr. Kay. I may never need to spend my time explaining the whys of DSLs again; I will simply point people to this presentation.