Verification and validation practices championed by functional safety, security and coding standards place considerable emphasis on showing how much of an application under test has been exercised. Experience shows that code demonstrated to perform correctly is considerably less likely to fail in the field. And yet almost without exception, the focus of this laudable endeavour is on the high-level source code – whether that is written in C, C++, or another language. Such an approach places a great deal of faith in the ability of the compiler to create object code that faithfully reproduces what the developers intended.
It is inevitable that the control and data flow of object code will not be an exact mirror of the source code from which it was derived, and so proving that all source code paths can be exercised reliably does not prove the same thing of the object code. Worse, and despite their undeniable value, source code low-level tests (sometimes known as “unit tests”) can also be misleading because the object code derived from the compilation of a function wrapped in a test harness can differ markedly from that generated in the context of a complete system.
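To make that gap concrete, consider a minimal sketch in C (a hypothetical example, not taken from this paper; the function name and values are invented for illustration). A single short-circuited decision in the source typically compiles to two separate conditional branches in the object code, so a test set that achieves full statement coverage of the source can still leave object-code branches unexercised:

    /* Hypothetical illustration: one source-level decision, but typically
     * two conditional branches in the generated object code.             */
    int within_limits(int pressure, int temperature)
    {
        /* A single decision in the source... */
        if ((pressure > 100) && (temperature < 50))
        {
            return 1;
        }
        return 0;
    }

    /* Object code a compiler might plausibly emit (illustrative pseudo-assembly):
     *
     *       cmp   pressure, #100
     *       ble   .L_false        ; branch 1: short-circuit on first operand
     *       cmp   temperature, #50
     *       bge   .L_false        ; branch 2: second operand tested separately
     *       mov   r0, #1
     *       ret
     *   .L_false:
     *       mov   r0, #0
     *       ret
     */

With this sketch, the two test cases (pressure = 150, temperature = 20) and (pressure = 50, temperature = 20) exercise every source statement and both outcomes of the source decision, yet the second object-code branch is never taken because the first operand short-circuits the evaluation. Only coverage measured at the object code level exposes that untested path.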
Uniquely amongst the functional safety standards across the sectors, DO-178C shines a spotlight on the potential for dangerous inconsistencies between developer intent and executable behaviour – and even then, it is not difficult to find advocates of workarounds with clear potential to leave those inconsistencies undetected. However such approaches are excused, the fact remains that the differences between source and object code can have devastating consequences in ANY critical application. This paper explains why object code verification, or OCV (Figure 1), represents best practice for any system for which there are dire consequences associated with failure – and indeed, for any system where only best practice is good enough.
Figure 1: Applying object code verification with the LDRA tool suite