This seems more like a business-focused subject than a strictly programming-related topic, and as such I feel obligated to add a disclaimer: I'm not really qualified to talk about this subject with any authority, but it's a thought I've been having for a while, so I thought I'd just throw it out there. Also, these are entirely my opinions and not necessarily the opinions of my employer. With that out of the way, I'll get to what I'm really trying to say.
There seems to be a pretty consistent pattern in the software world: someone creates something really clever and innovative, and after a short time, as the implementation begins to mature and the ideas of how it should work become well known, the actual application gets bogged down with backwards-compatibility concerns and increasing complexity, slowing its velocity.
Maintaining that compatibility and reusing that codebase becomes a necessity to retain current users, so you end up stuck between a rock and a hard place as you try to innovate without changing too much too fast.
What's really interesting is that your competitors, not burdened with backwards compatibility or an existing codebase, are free to create their own implementation of what they envision to be a more ideal solution to the problem your application is trying to solve… and they have a tendency to actually do it much better.
The cycle is almost Darwinian, and it takes quite a special application to resist the inevitable undertow over time. The classic example I think about when I'm pondering these ideas is Lotus Notes, though I think it's true of nearly every piece of software ever created. As far as I understand it, Lotus Notes was one of the first email and collaboration platforms (and its sibling, Lotus 1-2-3, one of the first spreadsheets); then Microsoft Office came along not too long after. And while it's only my opinion, I think it's clear which is really the king. My limited experience with Lotus Notes was with a worn-down, buggy, ugly, highly idiosyncratic application not intended for use by mere mortals.
You could potentially make the same argument for Internet Explorer: first there was Netscape Navigator, then there was Internet Explorer, and now there is Firefox. While what is "better" is still largely subjective, it's easy to see the pattern: competitors, free from backwards compatibility, can innovate very quickly and overtake their more aged competition.
So the main point of this post is to suggest that it's important to identify when an application's velocity is suffering, and that becoming your own competitor might be necessary for survival. I don't mean that your current application should be dropped suddenly, but that it could be healthy to start a completely parallel effort free from all of the malaise affecting your current application. If your competitor can do it, then so can you… in fact, if you don't, it could be fatal. While your aged application fades gracefully into maintenance mode, you can divert resources fully towards the successor (Darwinian metaphors galore!).
I think there are a couple of reasons it may be hard for companies to come to this conclusion: a) they take it as a sign of weakness, and b) they make the mistake of believing their software is their most valuable asset. My arguments to these two points are related. I believe it's actually the developers of the software who are the real assets, and by creating your own competing application you can reuse the truly important aspect of the software: the developers. Bringing all of that domain knowledge with you and starting from a clean slate can only result in amazing things, and it's not a sign of weakness to show intelligent, proactive development for the future. After all, if you don't do it, some other company will.
Obviously, from a pragmatic perspective, you can't afford to do this for every release. Likewise, why bother with a thriving, well-liked application in its prime? I think the key is that dying, slow-moving, bogged-down applications need to know when to let go and start over.
From a more micro perspective, I think the DRY principle is related and brings up some interesting thoughts. As a programmer, the DRY principle has been hammered into my head since the very beginning of my education, but at some point you have to come to the conclusion that reuse can decrease value when the thing you're trying to reuse is done poorly. I often think about the DRY principle purely in terms of the output of a given candidate for reuse, for example: "if we have libraryX and its task is to do X, then from now on, whenever we need to do X, we can reuse this library." That sounds good in principle, but how libraryX does X is just as important as the result. You are not repeating yourself if you do X differently.
The DRY principle says Don't Repeat Yourself, which does not necessarily mean Do Reuse Yourself.
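To make that concrete, here's a minimal sketch in Python of what I mean (the names and scenario are hypothetical, just for illustration). Suppose our legacy "libraryX" computes word frequencies by slurping an entire file into memory. If a new use case has to handle files far too large for memory, writing a second implementation is not "repeating yourself" — the result is the same, but the how is genuinely different:

```python
from collections import Counter

# Hypothetical legacy "libraryX": does X by loading the whole file
# into memory at once -- fine for small inputs, fatal for huge ones.
def word_counts_legacy(path: str) -> Counter:
    with open(path) as f:
        text = f.read()  # the entire file is held in memory here
    return Counter(text.split())

# Same *result*, different *how*: this version streams line by line,
# so memory use stays roughly constant regardless of file size.
def word_counts_streaming(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path) as f:
        for line in f:  # only one line in memory at a time
            counts.update(line.split())
    return counts
```

Blindly "reusing" the legacy function just to avoid repetition would bake its worst constraint into every new caller, which is exactly the kind of reuse that decreases value.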
I would love to hear the thoughts of others on this topic.