Taking the long view

I’ve been reading some books that revolve around taking long-term perspectives. How Buildings Learn, by Stewart Brand, looks a few centuries into the past to see how some buildings adapt to changing circumstances and some don’t. The Clock of the Long Now, by the same author, inspires us to look many millennia into the future. The Art of the Long View by Peter Schwartz is about scenario planning, a way to identify multiple possible futures and create options for them.

What might we get if we applied a long view to software development? What if we built software not just for tomorrow, but for the coming several hundred years, the way we build railroads and bridges? Marc Evers pointed me to an article by Dan Bricklin, Software That Lasts 200 Years, in which he identifies some possible answers to these questions. He argues, amongst other things, that no single company will be able to support an application for that long (and as applications are networked, that also means supporting the connections to other applications) and that in fact a durable kind of support ecosystem is needed. This has interesting consequences for, e.g., the future of funding Open Source. As he puts it in his conclusion: ‘Open source software discussion should be about keeping the trains running on time and not just saying it should run on Linux. The discussions should be about funding the companies needed in such an ecosystem and assuring their sources of healthy revenue. The code is not the only part of the equation, and leadership for all aspects of the ecosystem need to be addressed.’

One aspect that goes relatively unnoticed, a friend of mine suggested, is the tendency of some companies (e.g. Microsoft) to patent their data formats and even encrypt the data you created with ‘your’ word processor (if you read the licence agreement, you will know it is not yours…). This will make it extremely hard to get at that data in the future. My friend suggests putting legislation in place that forces any application that stores data to be able to export it to a plain-text format, so it can be reconstructed in the future. I think that open standards for protocols and data formats are essential for the longevity of data and applications.
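To make the export idea concrete, here is a minimal sketch in Python of what such a mandatory plain-text escape hatch might look like. The Document class and its fields are hypothetical, invented for illustration; the point is only that the data can be read back centuries later with nothing more than a text editor or a documented, open format like JSON:

```python
import json

class Document:
    """Hypothetical word-processor document, reduced to its essential data."""

    def __init__(self, title, paragraphs):
        self.title = title
        self.paragraphs = paragraphs

    def export_plain_text(self, path):
        """Write the document as human-readable plain text,
        recoverable without any special software."""
        with open(path, "w", encoding="utf-8") as f:
            f.write(self.title + "\n\n")
            f.write("\n\n".join(self.paragraphs))

    def export_json(self, path):
        """Write the document in an open, documented structure (JSON),
        so tools can be rebuilt around it in the future."""
        with open(path, "w", encoding="utf-8") as f:
            json.dump(
                {"title": self.title, "paragraphs": self.paragraphs},
                f, ensure_ascii=False, indent=2,
            )

doc = Document(
    "Taking the long view",
    ["Software should outlive the company that wrote it."],
)
doc.export_plain_text("document.txt")
doc.export_json("document.json")
```

Whatever proprietary format an application uses internally, an escape hatch like this keeps the user’s data, rather than the vendor’s software, as the durable artifact.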

It might also be necessary to archive not only the data and the programs, but also the hardware they run on. The Clock of the Long Now has an instructive example of someone trying to re-enact a virtual reality show he created on a Commodore 64. Eventually this person got lucky: volunteers had created Commodore 64 emulators, and someone had also put his virtual reality program online, complete with the manual. If we want to keep an accurate historical record for the future then, as Dan Bricklin puts it, ‘serendipitous volunteer labor must not be a major required element.’ For instance, tens of thousands of open source projects now rely on SourceForge. What happens if the company supporting it loses interest? Will only the cool projects be saved by enthusiasts?

Being occupied with agile software development, I notice a tension. On the one hand, we focus on creating software for today, not building things that are not yet needed. On the other hand, we involve all stakeholders (which might uncover long-term consequences), strive to deliver a minimal feature set (so that fewer features need to be re-created if software archaeologists would like to re-run the program), and prevent defects.
