It’s been claimed that interoperability testing is essential. It’s easy to see how people burned by incompatible (yet supposedly standards compliant) products might conclude this. It’s also easy to prove, however, that interoperability testing is not necessarily any better than compliance testing.
Compliance testing generally consists of a device under test (DUT) that’s stimulated by various inputs while the outputs are measured. But what exactly is stimulating the DUT, and what is measuring the outputs?
When the DUT is strictly software, it’s often configured into a test fixture or jig. That jig, however, can be thought of as a composition of independent components sometimes called mock objects (or simply mocks). These mocks are simulations of the other software modules the DUT interacts with; they are configured to produce the necessary stimuli, or instrumented to measure outputs. Of course, the situation is not very different when the DUT is a physical device.
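As a minimal sketch of such a jig (all names here are hypothetical, not from any particular test framework beyond Python’s standard `unittest.mock`): the DUT is a function that delegates to a downstream module, and the mock both produces the configured stimulus and records what the DUT sent it.

```python
# Hypothetical compliance-test jig: the DUT's downstream peer is a mock.
from unittest.mock import Mock

def dut_handle(request, downstream):
    """The device under test: validates a request, then delegates downstream."""
    if not request.get("id"):
        return {"status": "rejected"}
    reply = downstream.send(request)
    return {"status": "ok", "payload": reply}

# The jig: a mock standing in for the real downstream module.
downstream = Mock()
downstream.send.return_value = "pong"  # mock configured to produce the stimulus

result = dut_handle({"id": 7, "body": "ping"}, downstream)

assert result == {"status": "ok", "payload": "pong"}
# mock instrumented to measure the DUT's output:
downstream.send.assert_called_once_with({"id": 7, "body": "ping"})
```

The same shape applies to hardware: the mock is simply a signal generator on the input side and a logic analyzer on the output side.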
So compliance tests involve a DUT interacting with a bunch of mocks (that are not necessarily recognized as mocks). Now take that compliance testing setup, and replace the mocks one at a time until the DUT is interacting only with “real” devices. Isn’t that an interoperability testing setup?
The real difference between compliance and interoperability testing is the test vectors. If your interoperability testing scenarios are no different from your compliance testing scenarios, you won’t get anything extra out of the interoperability tests; they merely use a different mechanism to stimulate and monitor the DUT. In other words, if[^1] you beef up your compliance test scenarios to the point where they match good interoperability tests, you can learn everything you need to learn from compliance testing alone.
[^1]: That can be a pretty big “if”, of course.
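The point about test vectors can be made concrete with a small sketch (hypothetical names throughout): the same vectors exercise the DUT whether its peer is a mock (the compliance setup) or a real implementation (the interoperability setup), so only the stimulation mechanism differs.

```python
# Hypothetical sketch: identical test vectors, two different peers.
from unittest.mock import Mock

def dut_handle(request, downstream):
    """The device under test: validates a request, then delegates downstream."""
    if not request.get("id"):
        return {"status": "rejected"}
    return {"status": "ok", "payload": downstream.send(request)}

class RealDownstream:
    """Stands in for an actual peer implementation ("real" device)."""
    def send(self, request):
        return "pong"

# The test vectors: (stimulus, expected status).
VECTORS = [
    ({"id": 1, "body": "ping"}, "ok"),
    ({"id": None}, "rejected"),
]

def run_vectors(peer):
    """Drive the DUT with every vector against the given peer."""
    return [dut_handle(req, peer)["status"] for req, _ in VECTORS]

mock_peer = Mock()
mock_peer.send.return_value = "pong"

# Same vectors, mock peer vs. real peer: the results are identical.
assert run_vectors(mock_peer) == run_vectors(RealDownstream()) == ["ok", "rejected"]
```

If the vectors are the weak link, swapping the mock for a real peer teaches you nothing new; enriching the vectors does.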