I ran into this paper which addresses the topic of software reliability from a (to me) very surprising point of view. In a nutshell, it defends the notion that our computing model (Turing’s) ties together software complexity and unreliability, but there are fundamentally different approaches where increased complexity actually increases reliability as well. Integrated circuits and our very own brain use such approaches.
It is true that Alan Turing and Fred Brooks are among the worshipped gods of modern software engineering. Being a deeply convinced agnostic and wannabe atheist, I jumped at the opportunity to read about ways to question and (possibly) get rid of our gods.
Aside from a few remarkably funny sentences, like the politically incorrect “all connectors should be unidirectional, i.e., they should be either male (sender) or female (receiver)," the article is intriguing. The ideas are related to neural networks and self-adjusting systems, and I can even identify a touch of functional / declarative concepts in there; it’s a shame that the article never explores these parallels.
Now, my two main problems with the article are:
– I’m not sure that integrated circuits are so fundamentally complex. Size is not the same as complexity, and the fact that modern CPUs have hundreds of millions of elements does not automatically make them more complex.
– The brain may be seen as reliable in the sense that it is able to perform fairly well in very obscure, ambiguous and imprecise circumstances. However, its output is equally imprecise and, worse, unpredictable. No two people are alike and all that. But predictability and precision are among the most common traits we require of our software systems.
The Turing computing philosophy typically addresses the problems of reliability and faulty software by brute force: strict specifications and domain limitations, development protocols and verification procedures. These are not without a price: development efficiency and speed suffer greatly. It’s common for fault-safe developers to produce functionality at a tenth (or worse) of the speed of their less strict counterparts. Software Engineering techniques such as Object Orientation or Test-Driven Development try to provide helpful improvements (rather than full-fledged solutions) in software quality without incurring great costs. The old 90/10 rule again.
It is unclear whether Savain’s approach can produce a real, working and, more importantly, practical development environment and process. It is even less clear whether this model can be applied equally well in all areas of computing, or whether it would end up restricted to specific domains like pattern recognition and creative problem-solving (which is where the brain excels).
But it’s always a good idea to consider alternatives and think outside the box. Especially on a rainy day, and it’s pouring outside my window.
Edit: Reading this description of the synchronous, signal-based Esterel language should clarify some of the concepts and approaches described by Savain.
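To make the signal-based flavor a bit more concrete, here is a toy Python sketch of my own (not Esterel, and a deliberate simplification: real Esterel resolves emitted signals within the same instant, while this sketch delays them by one tick). All names here — `tick`, the reactions, the signal strings — are invented for illustration:

```python
# Toy illustration of the synchronous, signal-based idea:
# in each "instant" (tick), every reaction sees the SAME snapshot of
# present signals; what they emit becomes the input of the next instant.

def tick(present, reactions):
    """Run one synchronous instant over a fixed snapshot of signals."""
    emitted = set()
    for react in reactions:
        emitted |= react(frozenset(present))  # all reactions see one snapshot
    return emitted

# Two tiny reactions: a button press raises an alarm; the alarm rings a bell.
press_to_alarm = lambda s: {"ALARM"} if "PRESS" in s else set()
alarm_to_bell  = lambda s: {"BELL"} if "ALARM" in s else set()

signals = {"PRESS"}
trace = []
for _ in range(3):
    signals = tick(signals, [press_to_alarm, alarm_to_bell])
    trace.append(sorted(signals))

print(trace)  # [['ALARM'], ['BELL'], []]
```

The point of the sketch is the discipline, not the mechanics: components never call each other; they only react to signals present in the current instant, which is what makes the whole system's behavior deterministic tick by tick.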