Wednesday, February 21, 2007

Why do software tools still suck?

Why is it that software tools have only inched along in the last 15 years? Fifteen years ago we were using similar debuggers and text editors. Fifteen years ago the software development cycle looked strikingly similar to today's. So where is the innovation in software development?

Well, there has been a good amount of innovation in software development; it just hasn't been in the tools. In the past fifteen years we've seen the rise of managed code like Java and Microsoft's CLR. We've seen software modeling languages like UML. We've also seen the rise of powerful scripting languages, starting with VBScript and now Python, PHP, and the rest. More recently we've started seeing static analysis tools like Coverity and Klocwork.

Of all of the technologies listed above, the only one that could accurately be described as a tool is static analysis (which I'll get to later). Everything else is a language or meta-language. So the real innovation in the past 15 years has been languages and the extra features they support. The motivation for language innovation has been the travesty that is C/C++. Of the thousands of languages in existence, C/C++ is probably the most hated.

The newer languages try to build on all of the good things in C/C++ and strip out all of the crappy parts. Some of their advantages are easier memory management, syntactic sugar, and the libraries they supply.

Without question, the real advance in these languages is their libraries. Nearly every new language that comes out also comes with a huge set of powerful libraries that make application development much, much easier. Nuff said about that.
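To make that concrete, here's a minimal sketch in Java (the class name is mine): counting the lines in a text file using nothing but the standard library, a task that in portable C means hand-rolling buffering and error handling.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class LineCount {
        public static void main(String[] args) throws IOException {
            // BufferedReader and FileReader handle the buffering and
            // character decoding you'd write yourself in C.
            BufferedReader reader = new BufferedReader(new FileReader(args[0]));
            int lines = 0;
            while (reader.readLine() != null) {
                lines++;
            }
            reader.close();
            System.out.println(lines + " lines");
        }
    }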

Complexity may be managed a bit better in the newer languages because they push programmers toward object-oriented design. However, object-oriented spaghetti code is often more unreadable than a flat design, precisely because OO programming gives the impression of modularity where there is none.

The newer memory management models are also not fool-proof. In many languages you can still make the same class of pointer mistakes you made in C/C++; the difference is that you get a programmer-friendly exception instead of corrupted memory and a crash. An end user doesn't know or care about the difference between an uncaught exception and a crash.
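A minimal Java sketch of that point (the class name is mine): the runtime turns a bad dereference into a NullPointerException instead of letting it scribble over memory, but if nothing catches it, the program still just dies as far as the user is concerned.

    public class Crash {
        public static void main(String[] args) {
            String s = null;
            // The runtime intercepts the null dereference and throws a
            // NullPointerException with a stack trace -- helpful for the
            // programmer, but to the end user the program simply died.
            System.out.println(s.length());
        }
    }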

Some languages have built-in types to handle multi-threaded constructs, but the only real advance in multi-threading has been the use of monitors. I have yet to meet anyone who uses monitors in practice, and they are really just syntactic sugar, not a real advance in programming. Most importantly, when you write a multi-threaded application you still get the same race conditions and deadlocks that you got in C/C++.
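Here's a sketch of that claim in Java (the class and field names are mine): two threads bump a shared counter without holding any monitor, and updates are silently lost, exactly the race you'd get in C.

    public class Race {
        static int counter = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable work = new Runnable() {
                public void run() {
                    for (int i = 0; i < 100000; i++) {
                        // Read-modify-write without synchronization: both
                        // threads can read the same value, so one update
                        // is lost.
                        counter++;
                    }
                }
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start();
            b.start();
            a.join();
            b.join();
            // Usually prints less than 200000. Wrapping the increment in
            // synchronized (Race.class) { counter++; } -- Java's built-in
            // monitor -- fixes this one, but nothing forces you to do it.
            System.out.println(counter);
        }
    }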

It isn't enough to focus on languages. If we really want to advance software development to the next level and create more reliable, feature-packed software faster, then we need better tools. The main focus should be handling complexity, error diagnosis (aka debugging), and multi-threading.

I'm going to stress multi-threading because, if the semiconductor industry continues on its current track, in a couple of years we're not going to be able to write single-threaded applications. Processors will come with many cores, and we're going to have to learn to take advantage of them whether we like it or not.

So do I have suggestions on how to get us out of this rut? Of course I do....