Response to Ted Neward at Oredev

I was just listening to the Ted Neward talk on .NET Rocks, recorded during Oredev a while back. It’s a very interesting discussion and I would definitely recommend listening to it. Ted, Carl and Richard discuss a variety of things such as Oslo, functional programming and DSLs. There were a few things I wanted to comment on since I spend a lot of time thinking about DSLs; perhaps I can add to the discussion.

Here’s a little paraphrase of something Ted boldly proclaims towards the beginning of the conversation:

The next 5 to 10 years will be about programming languages… the renaissance of the programming language.

I couldn’t agree more.

There is a lot said in the talk and I agree with almost everything. But there was one question left outstanding that I would like to try to respond to. I don’t remember who asked it exactly, but the generalized question was “what is the business need DSLs are trying to solve?”.

Here is my response as succinctly as I can put it:

 

Constraint increases scalability.

 

This maxim has a multitude of implications, because increasing scalability means code that is easier to maintain, understand and author. It also means code that is consistent and accurate while still increasing productivity. Many factors go into scalability and I think, in general, DSLs are the solution to the broader problem, because when you think about it, the most powerful aspect of a DSL is the fact that it is a constrained universe.

So, to put it in more practical terms: I think as projects become more and more complex we will need DSLs for them to even be possible. This idea was partly given to me by a talk I heard by Roman Ivantsov at the Lang.NET symposium, related to the failure rate of extremely large projects.

I would also add that while DSLs may be necessary for larger projects, they may be merely helpful for smaller ones. But in a competitive market, any little advantage you can gain in productivity is usually a critical factor in the overall success of your projects. After all, most general purpose languages we use today could be thought of as DSLs over machine code itself; the domain in this case is simply programming. Another way to think about it is to say that a while loop is just a construct to help us safely use the jump instruction. But nobody today would argue that using a popular general purpose language is worse than manually writing assembly code. The sorts of productivity boosts given to you by a modern programming language are undeniable, and nearly everyone takes advantage of them.
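To make the while-loop point concrete, here’s a small illustration of my own in C#: both halves compute the same sum, but the while form constrains the jump so you can’t misuse it.

    using System;

    // Summing 0..9 with a structured loop: the compiler manages the jump targets.
    int total = 0, i = 0;
    while (i < 10)
    {
        total += i;
        i++;
    }

    // The same logic written with explicit jumps, closer to what the machine runs.
    // This is legal C#, but nothing stops a stray goto from skipping setup code.
    int total2 = 0, j = 0;
    check:
    if (j < 10)
    {
        total2 += j;
        j++;
        goto check;
    }

    Console.WriteLine($"{total} {total2}"); // both print 45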

But back to the idea of constraint as a good thing. We normally think of flexibility in a programming language as a good thing, but I would like to go ahead and claim that the opposite may well be true. Perhaps flexibility was necessary in languages of the past because of the general nature of their purposes, as well as the slowness with which they could change. There are entire classes of solutions that require lots of flexibility in order to be solved by a general purpose language.

However, think of the value that design patterns bring to a general purpose language. As you’re implementing your application you start noticing patterns, places where you tend to need to do the same thing over and over, and you may abstract those into a design pattern. Unfortunately, due to the general nature of the languages we tend to use, it is very easy to stray from the patterns we have created and do something that breaks them, through ignorance, or haste, or whatever. This usually results in an increase in complexity and maintenance cost. And there is nothing more permanent than a temporary fix.

For example, just because you’ve gone to great pains to create a Data Access Layer and Business Logic Layer, it doesn’t mean that somebody won’t spin up a SqlConnection in the middle of your view and screw everything up. There are ways to mitigate this: code reviews, static analysis, education, etc., but these are all simply other forms of constraint. What if the very language you were using to create your application didn’t even have the capability to stray from the accepted design patterns? What if your view was written in a declarative DSL specifically for authoring views, where accessing data was completely agnostic to the implementation, and interpreted at design time or runtime to always go through the accepted channels? This is how a DSL can increase scalability.
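To sketch what I mean (with hypothetical names of my own, CustomerViewBad and ICustomerData, not anyone’s real API), here is the difference in C#. A true view DSL would enforce the second shape at the language level rather than by discipline:

    using System.Data.SqlClient;

    // The anti-pattern: a view that reaches straight past the DAL and BLL.
    // Nothing in the language stops this; only convention and code review do.
    public class CustomerViewBad
    {
        public string LoadCustomerName(int id)
        {
            using (var connection = new SqlConnection("Server=...;Database=..."))
            {
                connection.Open();
                // ... raw SQL in the presentation layer, bypassing the accepted channels
                return "...";
            }
        }
    }

    // A constrained alternative: this view can only talk to an abstraction, so
    // "spin up a SqlConnection" is not even expressible here.
    public interface ICustomerData
    {
        string GetCustomerName(int id);
    }

    public class CustomerViewGood
    {
        private readonly ICustomerData data;
        public CustomerViewGood(ICustomerData data) { this.data = data; }
        public string LoadCustomerName(int id) { return data.GetCustomerName(id); }
    }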

 

Design patterns are DSLs.

 

Anywhere you can abstract your application into a design pattern, you should be able to create a DSL to express that design pattern. Additionally, those DSLs should be reusable and implementable in any application. An interesting example of this is the Axum programming language, specifically designed to solve the problem of concurrency by creating a language constrained to enable writing concurrent code safely. Under the hood, the code created is something you could have written manually in any general purpose language, but the more constrained and declarative we can be about such things, the less error prone the underlying code will be. It also helps us easily understand and implement highly complex code, which increases productivity. Even the smartest developers have a hard time getting concurrency right; we really need constraint in this domain because it’s incredibly easy to do something unsafe.
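To illustrate the shape of that constraint, here is a sketch in C# using System.Threading.Channels (this is not Axum syntax): agents that share no mutable state and communicate only through channels. Axum enforces that isolation in the compiler, whereas here it is merely discipline.

    using System;
    using System.Threading.Channels;
    using System.Threading.Tasks;

    // Two "agents" that share no mutable state and talk only via a channel.
    class AgentSketch
    {
        static async Task Main()
        {
            var channel = Channel.CreateUnbounded<int>();

            // Producer agent: owns its own state and only publishes messages.
            var producer = Task.Run(async () =>
            {
                for (int i = 0; i < 5; i++)
                    await channel.Writer.WriteAsync(i);
                channel.Writer.Complete();
            });

            // Consumer agent: sees nothing but what arrives on the channel.
            await foreach (var message in channel.Reader.ReadAllAsync())
                Console.WriteLine($"received {message}");

            await producer;
        }
    }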

There are a few things we need in the programming community in order to make any of this feasible, which we are currently lacking. I have been working on MetaSharp specifically to solve some of these issues, but it has a long way to go. Here is a brief list, off the top of my head, of problems needing to be solved by a DSL tool (a sketch of what the AST piece might look like follows the list):

  • An open, transparent, extensible, language-agnostic compiler.
  • A common, extensible AST.
  • An excellent, runtime-based grammar parser.
  • Common transformations.
  • IDE support, for debugging transformations as well as author-time feedback and visualization.
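As promised, here is a sketch of what the common, extensible AST might look like. The names (Node, ITransform) are hypothetical, not MetaSharp’s actual API; the point is the agreement, because once front ends, transformations and back ends share one tree they can be mixed and matched across DSLs.

    using System.Collections.Generic;

    // A deliberately minimal node: every language front end produces these,
    // and every transformation and back end consumes them.
    public class Node
    {
        public string Kind { get; }                  // e.g. "ClassDecl", "BinaryExpr"
        public List<Node> Children { get; } = new List<Node>();
        public Node(string kind) { Kind = kind; }
    }

    // A common transformation is then just tree-to-tree, which is what makes
    // independent tools composable into one pipeline.
    public interface ITransform
    {
        Node Apply(Node root);
    }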

I could go on and on… but I look forward to our future of programming languages. In the near future we may finally be equipping ourselves with the right tools for the jobs at hand.

Structured Procrastination

I was in a big argument last Friday at Pracna where I took the position that “laziness is a virtue” (I’ll save that for another post).

This article on Structured Procrastination might be a better way of expressing what I was trying to say, or at least an alternate way.

Specifically:

“Procrastinators often follow exactly the wrong tack. They try to minimize their commitments, assuming that if they have only a few things to do, they will quit procrastinating and get them done. But this goes contrary to the basic nature of the procrastinator and destroys his most important source of motivation. The few tasks on his list will be by definition the most important, and the only way to avoid doing them will be to do nothing. This is a way to become a couch potato, not an effective human being.”

haha!

Competing With Your Own Product

This seems more like a business focused subject than a strictly programming related topic and as such I feel obligated to add a disclaimer: I’m not really qualified to talk about this subject with any authority but this is a thought I’ve been having for a while so I thought I’d just throw it out there. Also, these are totally my opinions and not necessarily the opinions of my employer. With that out of the way I’ll get to what I’m really trying to say.

It seems like there is a pretty consistent pattern in the software world: someone creates something really clever and innovative, then after a short time, as the implementation of that program begins to mature and the ideas of how it should work become well known, the actual application gets bogged down with backwards compatibility concerns and increasing complexity, slowing its velocity.

It seems like maintaining that compatibility and reusing that source base become a necessity to retain current users, so you end up stuck between a rock and a hard place as you try to innovate and change without changing too much too fast.

What’s really interesting is that your competitors, not burdened with backwards compatibility or existing codebases, are free to create their own implementation of what they envision to be a more ideal solution to the problem your application is trying to solve… and they have a tendency to actually do it much better.

The cycle is almost Darwinian, and it takes quite a special application to resist the inevitable undertow over time. The classic application I think about when I’m pondering these ideas is Lotus Notes, though I think it’s true of nearly every piece of software ever created. As far as I understand it, Lotus Notes was one of the first document editors and spreadsheet applications; then came Office not too long after. And while it’s only my opinion, I think it’s clear which is really the king. My limited experience with Lotus Notes was of a worn-down, buggy, ugly, highly idiosyncratic application not intended for use by mere mortals.

You could potentially make the same argument for Internet Explorer: first there was Netscape Navigator, then there was Internet Explorer, and now there is Firefox. While what is “better” is still largely subjective, it’s easy to see the same pattern: competitors, free from backwards compatibility, can innovate very quickly and overtake their more aged competition.

So my main point of this post is to suggest that it’s important to identify when an application’s velocity is suffering, and also to suggest that becoming your own competitor might be necessary for survival. By this I don’t mean that your current application should be dropped suddenly, but that it could be healthy to start up a completely parallel effort free from all of the malaise affecting your current application. If your competitor can do it then so can you… in fact, if you don’t, it could be fatal. While your aged application begins to fade gracefully into maintenance mode, you should begin to divert resources fully towards the successor (Darwinian metaphors galore!).

I think there are a couple of reasons it may be hard for companies to come to this conclusion: (a) they take it as a sign of weakness, and (b) they make the mistake of thinking their software is their most valuable asset. My arguments to these two points are related. I believe it’s actually the developers of the software who are the real assets, and by creating your own competing application you can reuse the truly important aspect of the software: the developers. Bringing all of that domain knowledge with you and starting from a clean slate could only result in amazing things, and it’s not a sign of weakness to show intelligent, proactive development for the future. After all, if you don’t do it, some other company will.

Obviously, from a pragmatic perspective you can’t afford to do this for every release. Likewise, why bother with a thriving, well-liked application in its prime? I think the key here is that dying, slow-moving, bogged-down applications need to know when to let go and start over.

From a more micro perspective I think the DRY principle is related and brings up some interesting thoughts. As a programmer, the DRY principle has been hammered into my head since the very beginning of my education, but at some point you have to come to the conclusion that reuse can decrease value when the thing you’re trying to reuse is done poorly. I oftentimes think about the DRY principle as simply the output of a given candidate for reuse, for example the thought process “if we have libraryX and its task is to do X, then from now on, whenever we need to do X, we can reuse this library”. Well, this sounds good in principle, but how libraryX does X is just as important as the result. You are not repeating yourself if you do X differently.

The DRY principle says Do Not Repeat Yourself, which does not necessarily mean Do Reuse Yourself.

I would love to hear the thoughts of others on this topic.

Staged Pipelines

In an effort to make the MetaSharp pipelines more powerful I’m about to add the concepts of stages and connectors. I’ve been thinking about it a bit and I drew up some diagrams to help me express how the pattern should work.

At a high level it’s pretty simple: for every pipeline there are multiple stages, and for each stage there are multiple steps. Each stage has one or many input connectors and one or many output connectors, which connect to the next stage of the pipeline.

[Diagram: a pipeline composed of stages, each stage composed of steps, linked by input and output connectors]

With this in mind there are four possible types of stages, defined by their input and output connectors. Stages must be chained together with matching input and output connections. You want multiple types because certain kinds of operations are simply not possible to do simultaneously, while others are completely isolated and perfectly acceptable to run asynchronously.

[Diagram: the four stage types, defined by their input and output connectors]

Many to Many

For each input value a complete inner pipeline of steps is created, meaning each input value from the previous stage will be processed by the same steps. Each inner pipeline will run asynchronously and should not communicate with the others. The stage will complete when all of its inner pipelines have finished running.

[Diagram: many-to-many stage]
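Here is a minimal sketch of those semantics in C# (hypothetical names, not the real MetaSharp API): every input runs through the same steps in its own isolated inner pipeline, and the stage completes only when all of them have finished.

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    static class ManyToManyStage
    {
        public static Task<string[]> RunAsync(string[] inputs, Func<string, string>[] steps)
        {
            // One isolated inner pipeline per input, all running asynchronously.
            var innerPipelines = inputs.Select(input => Task.Run(() =>
                steps.Aggregate(input, (value, step) => step(value))));

            // The stage completes when every inner pipeline has completed.
            return Task.WhenAll(innerPipelines);
        }
    }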

1 to 1

This type of stage will accept one input value and produce one output value. It will create exactly one chain of steps and execute synchronously.

[Diagram: one-to-one stage]

1 to Many

This type of stage will accept one input value and have exactly one chain of steps but will produce many output values.

[Diagram: one-to-many stage]

Many to One

This type of stage will accept many values and run them all through exactly one chain of steps.

[Diagram: many-to-one stage]

 

From this I should be able to make any type of compilation pipeline imaginable. For example a typical pipeline might be something like this:

  • Parse files
  • Combine AST
  • Resolve References
  • Generate Assembly

In which case you might end up with the following stages:

  • M:M, Parse files all at once
  • M:1, Combine the ASTs into one tree.
  • 1:1, Resolve and transform the tree.
  • 1:1, Transform into IL

You could also quite easily imagine that last stage transforming into multiple objects or multiple files or something like that. The good news is that I think this shouldn’t actually be that complicated: the pipeline simply deals with connecting stages, and each stage has a very simple strategy for processing its steps. The real work will lie in implementing the stages, but even then each stage is completely modular and singularly focused.
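To make that concrete, here is a rough sketch of how those four stages might chain together. The names are hypothetical and strings stand in for files, ASTs and IL; none of this is actual MetaSharp API.

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class CompilationPipelineSketch
    {
        static async Task Main()
        {
            string[] files = { "a.ms", "b.ms" };

            // M:M stage: parse all the files at once, each in its own inner pipeline.
            string[] asts = await Task.WhenAll(
                files.Select(file => Task.Run(() => Parse(file))));

            // M:1 stage: combine the ASTs into one tree.
            string combined = Combine(asts);

            // 1:1 stages: resolve references, then transform into IL, synchronously.
            string il = Emit(Resolve(combined));

            Console.WriteLine(il);
        }

        static string Parse(string file) => $"ast({file})";
        static string Combine(string[] asts) => string.Join(" + ", asts);
        static string Resolve(string ast) => $"resolved({ast})";
        static string Emit(string ast) => $"il({ast})";
    }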

An Alternative to the Building Construction Metaphor for Software Development

I am currently reading “The Pragmatic Programmer” while riding the bus to work in the mornings. It’s pretty good, and earlier today I read something I thought was especially interesting, something I hadn’t thought about before at all, so I would like to share it here.

This excerpt is on the subject of refactoring (Chapter 6, pg. 184). He begins with the standard metaphor of building construction for the process of software development, which I found interesting because I have heard this exact metaphor several times from various software architects. But then he goes on to say:

“Well, software doesn’t quite work that way. Rather than construction, software is more like gardening – it is more organic than concrete. You plant many things in a garden according to an initial plan and conditions. Some thrive, others are destined to end up as compost. You may move plantings relative to each other to take advantage of the interplay of light and shadow, wind and rain. Overgrown plants get split or pruned, and colors that clash may get moved to more aesthetically pleasing locations. You pull weeds, and you fertilize plantings that are in need of some extra help. You constantly monitor the health of the garden, and make adjustments (to the soil, the plants, the layout) as needed.”
Which is an amazing metaphor. I’ve never been able to quite put my finger on what I didn’t like about the construction metaphor, but for lack of anything better I’ve been unable to refute it. The organic metaphor appeals to me much more. There have been a lot of good things in this book, but this is the first thing I have read that is a completely new idea to me, so I thought I would share it with you.

Appearance in CoDe Magazine

If you haven’t already taken a look at the Nov/Dec 2008 issue of CoDe Magazine, I would highly recommend it 😉 On my last gig at Magenic I had the pleasure of working for Rocky Lhotka, Sergey Barskiy and Nermin Dibek on CSLA Light. Along the way we managed to crank out an article for CoDe Magazine related to the work we were doing. Here is a link to the article online: Using CSLA .NET for Silverlight to Build Line-of-Business Applications.

I got a copy of this magazine at the last Twin Cities Code Camp and didn’t even know that I was a co-author of one of the articles in it! It wasn’t until the following Monday that a coworker of mine pointed out that I was in the magazine, and he only knew because he recognized my picture. That was pretty funny.

Now that I’m famous if anyone wants me to autograph their copy of CoDe magazine just let me know!