Durandal wrote:
That's the desire. But anyone who's actually worked on large, complex systems will tell you that, in practice, it's simply not achievable.

No, they won't - or rather, if they do, nearly all of them will be speaking from a position of near-complete ignorance. Only a small fraction of programmers ever learn what formal methods entail, never mind actually apply them. They may honestly say that they weren't able to prove their software correct, but that says nothing about whether appropriately trained engineers could.
Durandal wrote:
Frankly, I've never put much stock in formal software engineering.

Ok, which formal methods frameworks have you used? I've used Z, did a course on VDM (never actually applied it, though) and am currently working on an intelligent code generation engine that uses techniques quite similar to formal verification (for a start, the basis is essentially a Petri net). This is the basis of my opinion that 'the methods work in principle, but they're tedious and fiddly and the tools suck'.
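For anyone who hasn't met them, the Petri net formalism itself is tiny: places hold tokens, and a transition fires when every one of its input places is non-empty. A toy C sketch of just the bare mechanism (nothing to do with my actual engine, and the 3-place net here is invented purely for illustration):

#include <stdio.h>

/* Toy Petri net: places hold token counts; a transition fires when
   every input place has at least one token, consuming one token from
   each input place and adding one to each output place. */
#define PLACES 3
#define TRANS  2

static int marking[PLACES] = {1, 0, 0};      /* initial tokens: P0=1 */
static const int input[TRANS][PLACES]  = {{1,0,0},{0,1,0}};
static const int output[TRANS][PLACES] = {{0,1,0},{0,0,1}};

static int enabled(int t) {
    for (int p = 0; p < PLACES; p++)
        if (marking[p] < input[t][p]) return 0;
    return 1;
}

static void fire(int t) {
    for (int p = 0; p < PLACES; p++)
        marking[p] += output[t][p] - input[t][p];
}

int main(void) {
    /* Fire any enabled transition until the net is dead. */
    for (int t = 0; t < TRANS; )
        if (enabled(t)) { fire(t); t = 0; }
        else t++;
    printf("final marking: %d %d %d\n",
           marking[0], marking[1], marking[2]);
    return 0;
}

The point is that the whole execution semantics is those two little functions - which is exactly why the formalism is so amenable to mechanical analysis (reachability, deadlock freedom and so on).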
Durandal wrote:
Operating systems are a prime example of this because, [...]

Operating systems are an exception; for one thing, the OS is often blamed for the failings of drivers (Windows in particular; drivers for consumer hardware are usually awful), and for another, the amount of abstraction that can be used is heavily limited by performance concerns (backwards compatibility also imposes crippling constraints). That said, it is still possible to engineer an OS with formal methods and hard reliability guarantees. It's just that no one in their right mind would do this for a major OS, because they're already very expensive and time-consuming to create from scratch. Microkernels are a step in this direction, and that concept mostly failed due to unacceptable performance and (whole-system) complexity overhead.
Durandal wrote:
[...] even if each component is tested in its own little environment and works, they can still come together to cause the system to fail catastrophically

Strictly, formal methods have to be applied to the whole system, and it's harder to do this for OSes than for nearly anything else. That said, there's much reduced scope for formally engineered components interfering with each other, even if they're developed and verified completely separately, provided each component makes suitably defensive assumptions at its interfaces.
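By 'defensive assumptions at its interfaces' I mean that the contract a component was verified against is also re-checked at its boundary, so a bug in one separately developed component can't silently corrupt another. A hypothetical C sketch (the ring-buffer and the name ring_put are made up for illustration):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical component boundary: the component is verified against
   an explicit contract, and defensively re-checks that contract at
   runtime rather than trusting its callers to honour it. */
int ring_put(uint8_t *buf, size_t cap, size_t *head, uint8_t byte) {
    /* Defensive assumptions: reject any caller state that violates
       the verified precondition instead of proceeding on trust. */
    if (buf == NULL || head == NULL || cap == 0 || *head >= cap)
        return -1;                  /* contract violated by caller   */
    buf[*head] = byte;              /* safe: *head < cap was checked */
    *head = (*head + 1) % cap;      /* postcondition: *head < cap    */
    return 0;
}

int main(void) {
    uint8_t buf[4];
    size_t head = 0;
    int ok  = ring_put(buf, sizeof buf, &head, 0xAB);   /* returns 0  */
    int bad = ring_put(NULL, sizeof buf, &head, 0xAB);  /* returns -1 */
    return (ok == 0 && bad == -1) ? 0 : 1;
}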
Durandal wrote:
(Yes, even if you've designed your operating system to segregate user-space processes from each other and from the kernel and all those other good design practices.)

Actually, formal verification eliminates the need for address space protection - this is exactly what the Java VM does (well, strictly, it does it by a combination of verification and weakened instruction set expressivity), and people have seriously proposed 'Java operating systems' that don't need memory protection. This isn't a practical proposition at this time, as it's hard to even contemplate replacing the massive investment in conventional memory-protected systems. This is why projects like Tunes will remain pie-in-the-sky theory, or at best toy prototypes, unless and until we have the tools to heavily automate massive replacement of the existing software base (with formal methods).
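The principle fits in a toy. The JVM's bytecode verifier is vastly more sophisticated than this, but the essence is the same: prove every memory access safe once, at load time, and you no longer need an MMU policing each access at runtime. A deliberately trivial C sketch, with a made-up 'instruction set' of constant-address stores:

#include <stdio.h>
#include <stdint.h>

/* Toy illustration of verification replacing hardware protection:
   a 'program' is a list of store instructions with constant addresses.
   The loader proves every address is in bounds ONCE, up front; the
   interpreter then runs with no per-access checks at all. */
enum { MEM_SIZE = 16 };

typedef struct { uint32_t addr; uint8_t value; } Store;

static int verify(const Store *prog, int n) {
    for (int i = 0; i < n; i++)
        if (prog[i].addr >= MEM_SIZE)
            return 0;               /* reject at load time */
    return 1;
}

static void run(const Store *prog, int n, uint8_t *mem) {
    for (int i = 0; i < n; i++)
        mem[prog[i].addr] = prog[i].value;   /* unchecked: proven safe */
}

int main(void) {
    uint8_t mem[MEM_SIZE] = {0};
    Store good[] = {{0, 1}, {15, 2}};
    Store bad[]  = {{0, 1}, {99, 2}};        /* would escape 'mem' */
    if (verify(good, 2)) run(good, 2, mem);
    printf("bad program accepted? %s\n", verify(bad, 2) ? "yes" : "no");
    return 0;
}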
Durandal wrote:
At the end of the day, operating systems and computers are extraordinarily complex systems. They can't be reliably predicted.

Computers are carefully engineered to be close to deterministic. Obviously, concurrent systems aren't deterministic by default, but much of the art of parallel programming deals with various mechanisms to guarantee specific required types of determinism. What you're saying is just 'humans can't always judge the effect of small changes to huge systems by intuition/experience'. Well, no shit, Sherlock. Guess what: it isn't practical to write Linux entirely by typing machine code into a hex editor, yet we do build massively complex software systems, because compilers (and to a lesser extent, IDEs) are major 'force multipliers' for the amount of complexity a programmer can handle. Meanwhile, most programmers use no verification tools beyond maybe static type checking (though ignoring even this minimal level of verification seems to be trendy these days).
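'Guaranteeing specific required types of determinism' sounds grander than it is; the simplest instance is a lock. Two threads hammering a shared counter give a different (wrong) answer on every run; add a mutex and the result is engineered back to a guaranteed constant:

#include <pthread.h>
#include <stdio.h>

/* Two threads increment a shared counter a million times each.
   Without the mutex the result is nondeterministic (lost updates);
   with it, the program deterministically prints 2000000. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* always 2000000 */
    return 0;
}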
Durandal wrote:
I've seen simple, one-line changes result in kernel panics and hugely complex changes go in without any problems. You just can't guarantee anything.

Have you tried creating a VDM model of your system and asking the prover that question?
Durandal wrote:
And it's not because software developers are incompetent or don't know how to test, it's just that achieving exhaustive coverage in your testing is impossible.

And here we see the crux of the problem. All of your 'oh, you can't be sure' whining is based on the concept of testing as the sole quality assurance mechanism. This is exactly what DW was slamming; programmers may use maths to make their systems work, but almost none of them use maths to prove their systems will work, the way engineers prove that a bridge won't collapse. This is understandable for a whole host of reasons, but claiming that it is impossible simply because your definition of 'quality assurance' is locked in to 'functional testing alone' is just ignorance.
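The difference matters even on laughably small examples. Here's a C function that would sail through any plausible hand-written test suite, yet a proof attempt on the obvious property 'for all x, my_abs(x) >= 0' fails immediately:

#include <limits.h>
#include <stdio.h>

/* Wrong despite passing every 'reasonable' test: -INT_MIN overflows
   (formally undefined behaviour in C; on typical two's-complement
   machines it wraps back to INT_MIN, which is negative). Testing only
   catches this if you happen to pick 1 value out of 2^32; a proof
   obligation flags it instantly. */
static int my_abs(int x) { return x < 0 ? -x : x; }

int main(void) {
    printf("my_abs(-5)      = %d\n", my_abs(-5));        /* 5: fine   */
    printf("my_abs(INT_MIN) = %d\n", my_abs(INT_MIN));   /* negative! */
    return 0;
}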
Durandal wrote:
There haven't been very many, and there's a good reason. For all the difficulties of CPU design and hardware logic design, the end product is a whole hell of a lot more predictable in its behavior. It's a lot easier to develop a comprehensive test suite for a CPU

Again, you are zeroing in on testing when that is not what I was talking about at all. Again, I imagine this is because you have no real experience with formal verification. CPUs are formally verified to a certain extent, both because the relatively straightforward and formal spec (the instruction set definition, mostly) makes it easier, and because CPU design teams have relatively high budgets and skill levels.
Durandal wrote:
After all, on a CPU, making sure that a given set of inputs produces the correct set of outputs pretty much covers the entire set of functionality.

It is impossible to exhaustively test even a CPU for all possible instruction sequences that create unique internal states. Once you get to 64-bit operations, it isn't even possible to exhaustively test individual functional units. What you can do is prove the design works on paper, then go on to prove that a specific set of functional tests will achieve full coverage of the physical gates - no one wants to leave this to guesswork, and no manager is going to be satisfied with 'pretty much' when billions of dollars are on the line. Unfortunately, the currently practical techniques still can't cover everything, and with the extreme pressure to get new CPU designs out on schedule, mistakes are still made.
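The back-of-envelope numbers for 'exhaustive' testing are worth seeing once. Just a 64-bit adder, ignoring internal state entirely, has 2^128 distinct input pairs:

#include <math.h>
#include <stdio.h>

/* Exhaustively testing a 64-bit adder means 2^128 input pairs. Even
   at an absurdly generous 10^12 vectors/second, that works out to
   roughly 10^19 years - which is why coverage has to be PROVEN,
   not brute-forced. */
int main(void) {
    double vectors = pow(2.0, 128);          /* ~3.4e38 input pairs */
    double per_sec = 1e12;                   /* generous test rate  */
    double seconds = vectors / per_sec;
    double years   = seconds / (3600.0 * 24 * 365);
    printf("%.2e vectors -> %.2e years\n", vectors, years);
    return 0;
}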
Durandal wrote:
Making sure all your methods return the proper values is only a small part of how the system works. You have to deal with things like threading issues, asynchronous behavior, blocking, possibly weird error conditions in the system generating exceptions, etc.

All of which can be dealt with by temporal logic constructs, in principle. As I said, the problem is just that the current tools are lagging quite a way behind what people actually want to develop.
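To give the flavour: properties like 'every request eventually gets a response' are written in temporal logic (G(req -> F resp) in LTL), and a model checker proves them over every possible interleaving of the model. This toy C checker - event names and counting semantics invented for illustration - only inspects one recorded trace, which is precisely the gap between testing and verification, but it shows what such a property means:

#include <stdio.h>

/* Sketch of a temporal property: 'every request is eventually
   followed by a response'. A real model checker proves this over ALL
   interleavings of a model; this sketch merely checks one finite
   trace, using simple counting to match requests with later responses. */
enum Event { REQ, RESP, OTHER };

static int eventually_responds(const enum Event *trace, int n) {
    int pending = 0;                 /* requests awaiting a response */
    for (int i = 0; i < n; i++) {
        if (trace[i] == REQ)  pending++;
        if (trace[i] == RESP && pending > 0) pending--;
    }
    return pending == 0;             /* property holds on this trace */
}

int main(void) {
    enum Event ok[]  = {REQ, OTHER, RESP, REQ, RESP};
    enum Event bad[] = {REQ, RESP, REQ, OTHER};
    printf("ok trace:  %s\n", eventually_responds(ok, 5)  ? "holds" : "fails");
    printf("bad trace: %s\n", eventually_responds(bad, 4) ? "holds" : "fails");
    return 0;
}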
Durandal wrote:
I shudder to think how many components in any operating system would just puke themselves if malloc() suddenly started failing.

This is simply an argument for ensuring that malloc() always works. Though >90% of the time, automatic memory management is more appropriate (yes, kernels are an exception) and, not coincidentally, more amenable to formal verification.
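And the discipline those puking components skip is about ten lines: either handle allocation failure at the call site, or fail loudly and immediately instead of letting a NULL pointer wander off and detonate somewhere unrelated. The common C idiom (often called xmalloc, though the name is just convention):

#include <stdio.h>
#include <stdlib.h>

/* Checked allocation: never let a failed malloc() propagate as a
   wild NULL pointer. Crash here, not thirty stack frames later. */
static void *xmalloc(size_t n) {
    void *p = malloc(n);
    if (p == NULL) {
        fprintf(stderr, "out of memory (%zu bytes)\n", n);
        abort();
    }
    return p;
}

int main(void) {
    char *buf = xmalloc(64);
    snprintf(buf, 64, "allocation checked");
    puts(buf);
    free(buf);
    return 0;
}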
Durandal wrote:
The problem is that so much software development is done by amateurs. Anyone with a computer can pretty much dive right in. And the trend in the industry is to make development more easily accessible, which encourages more amateurs to enter.

Well, yes, granted. I think a major part of that is that the microcomputer revolution of the 80s moved too fast (much faster than even the rise of electronic engineering) for a professional structure similar to other professions' to take hold. The dot-com boom certainly didn't help matters either.
Durandal wrote:
And there is no trend in mechanical engineering that has the effect of making entry into the field easier.

Not yet, but I'm surprised at the progress currently being made in '3D printing' systems, well before nanoscale machinery can become relevant.