C# as seen by languages with Type Inference…

via @HamletDRC

[Image: C# as seen by languages with type inference]

 

Ever since I started playing around with Boo, this is exactly how I feel about declaring types for my variables…

C#

IFoo foo = (IFoo)Fetch(arg);

 

Boo

foo = Fetch(arg) as IFoo

 

IFoo only ever needs to be written once on any given line. And if Fetch returned IFoo it wouldn’t need to be written at all. Being explicit is great and all, but being redundant is absolutely not.

Ship It!

[Photo: my Ship It award]

 

Because my phone sucks so bad it’s a tiny picture, but this is my first Ship It award from Microsoft, for being a part of the team that shipped Expression Studio 3. Also, even though it looks like copper it’s actually a metallic silver. The inscription reads:

Ship It

Every time a product ships, it takes us one step closer to the vision: empower people through great software, any time, any place and on any device. Thanks for the lasting contribution you have made to Microsoft History.

Steve Ballmer    Bill Gates

Justin Chase

 

Thanks Steve and Bill!

Response to Ted Neward at Oredev

I was just listening to Ted Neward’s talk on .NET Rocks, recorded at Oredev a while back. It’s a very interesting discussion and I would definitely recommend listening to it. Ted, Carl and Richard discuss a variety of things such as Oslo, functional programming and DSLs. There were a few things I wanted to comment on, since I spend a lot of time thinking about DSLs; perhaps I can add to the discussion.

Here’s a paraphrase of something Ted boldly proclaims towards the beginning of the conversation:

The next 5 to 10 years will be about programming languages… the renaissance of the programming language.

I couldn’t agree more.

A lot of things were said and I agree with almost everything, but there was one outstanding question I would like to try to respond to. I don’t remember exactly who asked it, but the general question was: “what is the business need DSLs are trying to solve?”

Here is my response as succinctly as I can put it:

 

Constraint increases scalability.

 

This maxim has a multitude of implications, because increasing scalability means code that is easier to maintain, understand, and author. It also means code that is consistent and accurate while still increasing productivity. Many factors go into scalability, and I think DSLs are, in general, the solution to the broader problems, because when you think about it, the most powerful aspect of a DSL is that it is a constrained universe.

So to put it in more practical terms, I think that as projects become more and more complex we will need DSLs for them to even be feasible. This idea was partly planted by a talk by Roman Ivantsov that I heard at the Lang.NET symposium, on the failure rate of extremely large projects.

I would also add that while DSLs may be necessary for larger projects, they may be merely helpful for smaller ones. But in a competitive market, any little advantage you can gain in productivity is usually a critical factor in the overall success of your projects. After all, most general purpose languages we use today could be thought of as DSLs over machine code itself; the domain in this case is simply programming. Another way to think about it is to say that a while loop is just a construct to help us safely use the jump instruction. But nobody today would argue that using a popular general purpose language is worse than manually writing assembly code. The productivity boost a modern programming language gives you is undeniable, and nearly everyone takes advantage of it.
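To make that concrete, here is a small C# sketch of my own (purely illustrative). The two loops below do the same thing, but the while version makes it impossible to jump into the middle of the loop from somewhere unexpected:

using System;

// The constrained form: entry and exit points are fixed by the language.
int i = 0;
while (i < 10)
{
    Console.WriteLine(i);
    i++;
}

// The same loop hand-written with goto. It works, but nothing stops
// other code in the method from jumping to Top with j in a bad state.
int j = 0;
Top:
if (j < 10)
{
    Console.WriteLine(j);
    j++;
    goto Top;
}

The while loop is a constraint over the jump instruction, and nobody misses the flexibility it takes away.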

But back to the idea of constraint as a good thing. We normally think of flexibility in a programming language as a good thing, but I would like to go ahead and claim that the opposite may well be true. Perhaps flexibility was necessary in past languages because of their general-purpose nature, as well as how slowly they could change. There are entire classes of solutions that require lots of flexibility in order to be solved by a general purpose language.

However, think of the value design patterns bring to a general purpose language. As you implement your application you start noticing places where you need to do the same thing over and over, and you may abstract that into a design pattern. Unfortunately, due to the general nature of the languages we tend to use, it is very easy to stray from the patterns we have created and break them, whether out of ignorance, haste, or whatever. This usually results in increased complexity and maintenance cost. And there is nothing more permanent than a temporary fix.

For example, just because you’ve gone to great pains to create a Data Access Layer and Business Logic Layer, it doesn’t mean that somebody won’t spin up a SqlConnection in the middle of your view and screw everything up. There are ways to mitigate this (code reviews, static analysis, education, etc.) but these are all simply other forms of constraint. What if the very language you were using to create your application didn’t even have the capability to stray from the accepted design patterns? What if your view were written in a declarative DSL designed specifically for authoring views, where data access was completely agnostic to the implementation and interpreted at design time or runtime to always go through the accepted channels? This is how a DSL can increase scalability.
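To make the failure mode concrete, here is a contrived C# sketch (the names are mine, invented for illustration, not from any real codebase):

using System.Data.SqlClient;

// The view everyone agreed would only talk to the Business Logic Layer...
public class OrdersView
{
    public void Render()
    {
        // ...but nothing in C# forbids this. Only convention, review and
        // static analysis stand between the pattern and a stray connection
        // opened in the middle of the presentation layer.
        using (var connection = new SqlConnection("Server=...;Database=Orders;"))
        {
            connection.Open();
            // query and render directly, bypassing every layer
        }
    }
}

A view DSL, by contrast, would simply have no syntax for opening connections; the only way to get data would be through whatever channel the DSL’s interpreter provides.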

 

Design patterns are DSLs.

 

Anywhere you can abstract your application into a design pattern, you should be able to create a DSL to express that pattern. Additionally, those DSLs should be reusable and implementable in any application. An interesting example of this is the Axum programming language, designed specifically to solve the problem of concurrency by constraining the language so that concurrent code is safe. Under the hood, the generated code is something you could have written manually in any general purpose language, but the more constrained and declarative we can be about such things, the less error prone the underlying code will be. It also helps us understand and implement highly complex code, which increases productivity. Even the smartest developers have a hard time getting concurrency right; we really need constraint in this domain because it’s incredibly easy to do something unsafe.
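As a tiny C# illustration of just how easy it is to do something unsafe (this is my sketch, not Axum code):

using System;
using System.Threading;

int counter = 0;

// Two threads each increment a shared counter 100,000 times.
// counter++ is a read-modify-write, so increments get lost and the
// final value is almost never 200,000.
ThreadStart work = () => { for (int i = 0; i < 100_000; i++) counter++; };
var t1 = new Thread(work);
var t2 = new Thread(work);
t1.Start(); t2.Start();
t1.Join(); t2.Join();
Console.WriteLine(counter);

// The safe version routes every update through one constrained channel:
// Interlocked.Increment(ref counter);

Nothing in the language flags the first version as wrong; a sufficiently constrained language could make it inexpressible in the first place.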

There are a few things the programming community needs to make any of this feasible, which we are currently lacking. I have been working on MetaSharp specifically to solve some of these issues, but it has a long way to go. Here is a brief list, off the top of my head, of problems a DSL tool needs to solve:

  • An open, transparent, extensible, language agnostic compiler.
  • A common, extensible AST (see the sketch after this list).
  • An excellent, runtime-based grammar parser.
  • Common transformations.
  • IDE support, for debugging transformations as well as author-time feedback and visualization.
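To make a couple of these more concrete, here is a hypothetical C# sketch of a common, extensible AST with pluggable transformations. All of these names are invented for illustration; this is not MetaSharp’s actual API:

// A minimal common AST: every language front end produces these nodes.
public abstract class Node
{
    public abstract T Accept<T>(INodeVisitor<T> visitor);
}

public sealed class Literal : Node
{
    public object Value { get; set; }
    public override T Accept<T>(INodeVisitor<T> visitor) => visitor.Visit(this);
}

public sealed class BinaryExpression : Node
{
    public string Operator { get; set; } // e.g. "+", "*"
    public Node Left { get; set; }
    public Node Right { get; set; }
    public override T Accept<T>(INodeVisitor<T> visitor) => visitor.Visit(this);
}

// A "common transformation" is then just a visitor, so new languages and
// new transformations can be plugged in without touching the compiler.
public interface INodeVisitor<T>
{
    T Visit(Literal node);
    T Visit(BinaryExpression node);
}

The point is not this particular shape, but that the AST and its transformations become shared infrastructure that any DSL can target.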

I could go on and on… but I look forward to our future of programming languages. In the near future we may finally be equipping ourselves with the right tool for the job at hand.

Structured Procrastination

I was in a big argument last Friday at Pracna, where I took the position that “laziness is a virtue” (I’ll save that for another post).

This article on Structured Procrastination might be a better way of expressing what I was trying to say, or at least an alternate way.

Specifically:

“Procrastinators often follow exactly the wrong tack. They try to minimize their commitments, assuming that if they have only a few things to do, they will quit procrastinating and get them done. But this goes contrary to the basic nature of the procrastinator and destroys his most important source of motivation. The few tasks on his list will be by definition the most important, and the only way to avoid doing them will be to do nothing. This is a way to become a couch potato, not an effective human being.”

haha!

Competing With Your Own Product

This is more of a business focused subject than a strictly programming related topic, and as such I feel obligated to add a disclaimer: I’m not really qualified to talk about this subject with any authority, but it’s a thought I’ve been having for a while, so I thought I’d just throw it out there. Also, these are totally my opinions and not necessarily the opinions of my employer. With that out of the way, I’ll get to what I’m really trying to say.

It seems like there is a pretty consistent pattern in the software world: someone creates something really clever and innovative, then after a short time, as the implementation begins to mature, ideas about how it should work become well known, yet the actual application gets bogged down with backwards compatibility concerns and increasing complexity, slowing its velocity.

Maintaining that compatibility and reusing that source base becomes a necessity for keeping current users, so you end up stuck between a rock and a hard place, trying to innovate and change without changing too much too fast.

What’s really interesting is that your competitors, not burdened with backwards compatibility or an existing codebase, are free to create their own implementation of what they envision to be a more ideal solution to the problem your application is trying to solve… and they have a tendency to actually do it much better.

The cycle is almost Darwinian, and it takes quite a special application to resist the inevitable undertow over time. The classic application I think about when pondering these ideas is Lotus Notes, though I think it’s true of nearly every piece of software ever created. As far as I understand it, Lotus Notes was one of the first document editing and spreadsheet applications; then came Office not too long after. And while it’s only my opinion, I think it’s clear which is really the king. My limited experience with Lotus Notes was of a worn down, buggy, ugly, highly idiosyncratic application not intended for use by mere mortals.

You could potentially make the same argument for Internet Explorer: first there was Netscape Navigator, then Internet Explorer, and now there is Firefox. While what is “better” is still largely subjective, it’s easy to see the pattern: competitors, free from backwards compatibility, can innovate very quickly and overtake their more aged competition.

So the main point of this post is to suggest that it’s important to identify when an application’s velocity is suffering, and that becoming your own competitor might be necessary for survival. By this I don’t mean that your current application should be dropped suddenly, but that it could be healthy to start up a completely parallel effort, free from all of the malaise affecting your current application. If your competitor can do it then so can you… in fact, if you don’t, it could be fatal. While your aged application fades gracefully into maintenance mode, you can begin to divert resources fully towards the successor (Darwinian metaphors galore!).

I think there are a couple of reasons it may be hard for companies to come to this conclusion: A) they take it as a sign of weakness, and B) they make the mistake of thinking their software is their most valuable asset. My arguments to these two points are related. I believe it’s actually the developers of the software who are the real assets, and by creating your own competing application you get to reuse the truly important aspect of the software: the developers. Bringing all of that domain knowledge with you and starting from a clean slate can only result in amazing things, and it’s not a sign of weakness to show intelligent, proactive development for the future. After all, if you don’t do it, some other company will.

Obviously, from a pragmatic perspective, you can’t afford to do this for every release. Likewise, why bother with a thriving, well liked application in its prime? I think the key is that dying, slow moving, bogged down applications need to know when to let go and start over.

From a more micro perspective, I think the DRY principle is related here and brings up some interesting thoughts. As a programmer, the DRY principle has been hammered into my head since the very beginning of my education, but at some point you have to come to the conclusion that reuse can decrease value when the thing you’re trying to reuse is done poorly. I often think about the DRY principle simply in terms of the output of a given candidate for reuse. For example, the thought process goes: “if we have libraryX and its task is to do X, then from now on, whenever we need to do X we can reuse this library.” This sounds good in principle, but how libraryX does X is just as important as the result. You are not repeating yourself if you do X differently.

The DRY principle says Do Not Repeat Yourself, which does not necessarily mean Do Reuse Yourself.

I would love to hear the thoughts of others on this topic.