Language Workbench in Visual Studio

[Screenshot: the foo.g grammar compiled and used to parse text in the grammar console inside Visual Studio]

This is far from polished and complete, but here you can see the grammar defined in foo.g being dynamically compiled and used to parse the text in the grammar console. You can also see how the [Keyword] attribute is used to generate syntax highlighting in the grammar window, as well as automatic error reporting for the errant ‘!’.

Next up is visualization of the node graph produced by the parse as well as error information. Very promising progress though!

Error until and Error unless

I’ve been working on improving error handling in meta# for the last couple of weeks. Previously there was no support for it at all: your code would either parse correctly or it would not.

So I cracked open the Purple Dragon book, dug into Martin Fowler’s DSL book, asked on the OMeta forums and read about how some other grammar tools do error handling.

It would probably have helped to have someone sit down and show me, but I had a really hard time understanding exactly what the various solutions were. Even without fully grasping the specifics, though, I think I got the general idea and was able to add two new error handling semantics to meta#.

Parsing errors all boil down to a single type of problem: the thing that is next in the stream is not what you were expecting. I decided that you could look at this problem in two different ways: either something you expected is missing, or something you didn’t expect is present.

So I added two new pattern semantics, “error unless X” and “error until X”, where X is any pattern.

Error Unless

“Error unless” is like saying that something you expected is missing. In this case an error will be logged, no input will be consumed from the stream, and null will be returned rather than fail. This lets whatever rule uses this semantic still project and give you a very specific message, and if the following code is well formed the parser can even recover without any additional errors. This is very useful for a missing ‘;’ at the end of a statement.
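To make the behavior concrete, here is a minimal C# sketch of how “error unless” could work in a combinator-style parser. This is illustrative only, not meta#’s actual implementation; the Pattern, ParseContext and Result types are all assumptions.

    using System.Collections.Generic;

    // Illustrative types only; these are not meta#'s real API.
    delegate Result Pattern(ParseContext ctx);

    class ParseContext
    {
        public string Input;
        public int Position;
        public List<string> Errors = new List<string>();
        public void LogError(string message) => Errors.Add(message);
    }

    class Result
    {
        public object Node;
        public bool Failed;
    }

    static class ErrorPatterns
    {
        // "error unless X": if X fails to match, log an error, consume
        // no input, and return null (success with no node) instead of
        // failing, so the enclosing rule can still project.
        public static Pattern ErrorUnless(Pattern x, string expected) => ctx =>
        {
            int start = ctx.Position;
            Result r = x(ctx);
            if (!r.Failed)
                return r; // X matched normally; nothing to report.

            ctx.LogError($"Expected {expected} at position {start}.");
            ctx.Position = start; // rewind: consume nothing
            return new Result { Node = null, Failed = false };
        };
    }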

[Screenshots: an “error unless” example grammar and the resulting error message]

Error Until

“Error until” is like saying that something you were not expecting is present. In this case all of the input will be consumed until the X pattern is matched, an error will be reported, and fail will be returned. This is very good for syncing to the next close bracket, because it reads all of the unknown input and treats it as an error.
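Continuing the illustrative types from the sketch above (again, an assumption about the semantics rather than meta#’s real code), “error until” might look like this:

    // "error until X": consume input until X matches, report the
    // skipped span as an error, and return fail so the caller can
    // sync to a known point such as a closing bracket.
    public static Pattern ErrorUntil(Pattern x, string syncPoint) => ctx =>
    {
        int start = ctx.Position;
        while (ctx.Position < ctx.Input.Length)
        {
            int mark = ctx.Position;
            if (!x(ctx).Failed)
            {
                // Found the sync point; everything skipped was unexpected.
                ctx.LogError($"Unexpected input from {start} to {mark}, " +
                             $"synced on {syncPoint}.");
                return new Result { Failed = true };
            }
            ctx.Position = mark + 1; // skip one character and retry
        }
        ctx.LogError($"Unexpected input from {start} to end of input.");
        return new Result { Failed = true };
    };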

[Screenshots: an “error until” example grammar and the resulting error message]

LogError

One last way to report errors is to call context.LogError(…) from within a projection. That way you can handle more complex cases and log arbitrary error messages. I might have to expand the API for this, but here is an example of how I’m using it so far.

[Screenshot: calling context.LogError from within a projection]
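For reference, here is a hedged sketch of the idea, using the same illustrative types as the snippets above; the actual projection signature in meta# is an assumption here.

    // Inside a hypothetical projection: validate the parsed text and
    // log an arbitrary, domain-specific message through the context.
    static Result ProjectNumber(ParseContext ctx, string text)
    {
        int value;
        if (!int.TryParse(text, out value))
        {
            ctx.LogError($"'{text}' is not a valid number literal.");
            return new Result { Node = null, Failed = false };
        }
        return new Result { Node = value, Failed = false };
    }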

On Premature Optimization

I have heard the phrase “premature optimization is the root of all evil” many times but have never had a chance to consciously put it to the test before. Meta# has a few critical execution paths where performance is a very big concern, and their speed has a large impact on how fast the overall process of parsing goes.

I intentionally ignored all performance concerns up to this point, however, choosing to trust in the wisdom that says to avoid premature optimization. I finally got to a point where most of the main features I wanted were in place and I had some very good test coverage (it turns out I had 91% coverage the first time I ran the code coverage tool). So I decided to embark upon a journey of performance optimization.

Before: Tests: 662, Failures: 0, Skipped: 1, Time: 32.402 seconds
After: Tests: 665, Failures: 0, Skipped: 0, Time: 16.977 seconds

I’d say that it was a huge success! The three new tests actually parse all of the .g files in meta# again and track their performance, which means that the slowest tests are now run twice and the whole run still takes about half the time it did before.

I can tell you that when I first went to look into where to do optimizations I almost panicked. I thought my code was perfect and that the performance flaw was in the design itself; I had a moment of crisis. But it turned out there was plenty of low-hanging fruit ready for optimization.

So at this point I’m officially a believer in avoiding premature optimization. I should add that I relied heavily on an excellent base of unit tests to prove that my changes still worked; that is a crucial piece of being able to make these kinds of systemic changes.

Also, I used the excellent TestDriven.NET performance tools to give me my data. I highly recommend them. You just right-click a test and select Run Test -> Performance. It gives you a very detailed report and the ability to find your slowest calls very quickly. Optimize and try again! A very clean cycle.

Programming and Scaling

tele-TASK Video: Programming and Scaling.

If you’ve heard me talk about DSLs but just haven’t quite been sold on the idea yet, watch this video. In fact, watch it anyway. Dr. Alan Kay gives a very inspirational and interesting speech about the past, present and future of Computer Science, technological innovation and creativity. The grand finale ties all of his ideas together in a beautiful example of the power of domain specific languages.

I found myself nodding throughout this entire presentation, and even though I didn’t know where it was going I could see how it applied to my own personal research in meta#. Thank you Dr. Kay; I may never need to spend my time explaining the whys of DSLs again, I will simply point people to this presentation.