The article is a straightforward and easy read with fascinating results on error rates in both C and FORTRAN code. The study used both static analysis of the code and a runtime comparison of two implementations of the same algorithms acting on the same input data with the same parameters. It is arguable that the results are not limited to those two languages.
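To make the runtime-comparison half of that methodology concrete, here is a minimal C sketch of the idea: feed identical input to two independently written routines that compute "the same" result and measure how far their answers diverge. The two summation routines, the input, and the tolerance are my own illustrative assumptions, not the paper's actual test harness.

/* Minimal sketch of the runtime-comparison idea: run two independently
 * written implementations of the same calculation on the same input and
 * measure how far their answers diverge. The routines and tolerance are
 * illustrative assumptions, not the paper's code. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N       1000000
#define REL_TOL 1e-12

/* "Version A": straightforward left-to-right summation. */
static double sum_naive(const double *x, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* "Version B": Kahan compensated summation of the same data. */
static double sum_kahan(const double *x, size_t n)
{
    double s = 0.0, c = 0.0;
    for (size_t i = 0; i < n; i++) {
        double y = x[i] - c;
        double t = s + y;
        c = (t - s) - y;
        s = t;
    }
    return s;
}

int main(void)
{
    static double data[N];
    for (size_t i = 0; i < N; i++)
        data[i] = sin((double)i) * 1e-3;   /* identical input for both versions */

    double a = sum_naive(data, N);
    double b = sum_kahan(data, N);

    /* Relative disagreement between the two "independent" results. */
    double denom = fmax(fabs(a), fabs(b));
    double rel = (denom > 0.0) ? fabs(a - b) / denom : 0.0;

    printf("A=%.17g  B=%.17g  relative difference=%.3g\n", a, b, rel);
    return (rel > REL_TOL) ? EXIT_FAILURE : EXIT_SUCCESS;
}

Even these two tiny, individually "correct" routines can disagree in the last digits, which hints at why two independently written packages rarely agree exactly once the calculations get large.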
The error rates were not a surprise; similar rates have been demonstrated over and over again in typical software. The types of errors were interesting, as was the impact that the sum total of those errors can have. In effect:
"....
these 2 experiments suggest that the results of scientific calculations involving significant amounts of software should be treated with the same measure of disbelief as an unconfirmed physical experiment"
That is not a cheap dilemma to resolve.
Once we start talking about independent verification of complex software calculations, the cost in man-hours and money goes up drastically. Yet I think it is obvious that the results of this research point strongly toward that verification being necessary.
Another interesting result is that there appears to be a clear relationship between the complexity of the language specification and the number of holes for a programmer to fall into. That is probably not a surprise, but it is clearly not something that many language developers pay much attention to: in general, most programming languages produced these days have far more language rules than their predecessors.
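As a contrived illustration (not taken from the paper) of what such a "hole" can look like in C, here are two behaviours that are perfectly well defined by the language rules yet rarely match what the programmer intended, and that static analysis readily flags:

/* Contrived illustration (not from the paper) of statically detectable
 * faults invited by a complex language specification. Both behaviours
 * below are well defined by the C standard, yet neither is what a
 * programmer typically intends. */
#include <stdio.h>

int main(void)
{
    /* 1. Signed/unsigned comparison: -1 is converted to unsigned and
     *    becomes a huge value, so the "obvious" branch never runs. */
    unsigned int count = 0;
    int sentinel = -1;
    if (count > sentinel)
        printf("count exceeds sentinel\n");        /* never printed */
    else
        printf("surprise: 0 > -1 is false here\n");

    /* 2. Silent narrowing: a double assigned to a float quietly loses
     *    precision that a later calculation may depend on. */
    double precise = 0.1234567890123456;
    float  stored  = precise;                      /* implicit conversion */
    printf("lost precision: %.16g -> %.16g\n", precise, (double)stored);

    return 0;
}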
The author argues that adherence to programming standards in scientific software is laughable. I leave it to your imagination and judgment how applicable that conclusion is to your own workplace.
Here is a pointer to the PDF article: THE T-EXPERIMENTS: ERRORS IN SCIENTIFIC SOFTWARE. Enjoy!