Dynamic analysis techniques (which involve the execution of some or all of the code) are helpful both in software unit testing and in system integration and testing.
Test procedures need to be authored, reviewed, and executed to ensure that the software unit meets its design criteria and contains no undesired functionality. Unit tests can then be executed on the target hardware or in a simulated environment, based on the verification plan and verification specification. Once the test procedures are executed, actual outputs are captured and compared with the expected results. Pass/fail results are then reported, and the software safety requirements are verified accordingly.
Integration testing is designed to ensure that when the units are working together in accordance with the software architectural design, they meet the related specified requirements. In practice, these integration tests typically involve the verification of functional, safety and cybersecurity functions.
It is desirable for all dynamic testing to use environments which correspond closely to the target environment and hence test dependencies between hardware and software. However, that is not always practical. One approach involves developing the tests in a simulated environment and then, once proven, re-running them on the target.
Neither standard insists that any of these tests deploy software test tools. However, once again, such tools are capable of making the test process far more efficient.
Figure 6 shows how the software interface is exposed at the function scope, allowing the user to enter inputs and expected outputs as the basis for a test harness, which is compiled and executed on the target hardware. Actual outputs are then captured and compared with those expected.
SAE J3061 recommends performing penetration (or “pen”) testing and fuzz testing in accordance with the software security requirements. Pen testing is usually a system-level test technique, performed late in the lifecycle.
Robustness testing is closely related to fuzz testing, and is complementary to it in that it can be performed at unit or integration level, and can generate structural coverage metrics. Both robustness and fuzz testing are designed to test the response to unexpected inputs. Robustness testing can incorporate techniques such as boundary value analysis, conditional value analysis, error guessing, and error seeding.
Figure 6: Performing requirements-based unit testing using the LDRA tool suite
Structural Coverage Metrics
System and unit tests can also operate in tandem, so that (for instance) coverage can be generated for most of the source code through a dynamic system test and complemented by unit tests that exercise normally inaccessible code, such as defensive constructs (Figure 7).
Figure 7: Examples of representations of structural coverage within the LDRA tool suite