The Food and Drug Administration (FDA) pays special attention to software because it is now embedded in a large percentage of electromedical devices, and the amount of device functionality controlled by software is continually growing. Software also controls many of a medical device manufacturer’s design, development, manufacturing, and quality processes, regardless of whether software is a part of the manufactured device.
Software failures can be invisible and difficult to detect, and thus can have disastrous consequences for the operation or quality of medical devices. For this reason, the FDA specifically requires validation of both device software and quality-system automation software. Validation activities are meant to keep defects out of the software, as well as to detect and correct any defects that do end up in it.
The FDA’s control over software used by medical device manufacturers is detailed in the Quality System Regulations (QSRs) found in FDA regulation 21 CFR 820. Software regulations focus on the development and use of two large categories: (1) software that is part of the device being manufactured and (2) software that is used to design, develop and manufacture the product or otherwise automate the quality system.
Guidelines for complying with the FDA’s regulations are published by the agency as guidance documents. These documents are updated approximately every five years, and new guidance is issued as the need arises. While compliance with the guidelines is voluntary, a device manufacturer should be prepared to explain and defend any deviation from them.
The most important FDA guidance available for the validation of software is the General Principles of Software Validation (GPSV), which can be obtained free of charge from the FDA’s website (www.fda.gov/cdrh/comp/guidance/938.pdf). It is a “must read” for all software engineers and quality engineers working with software in the medical device industry.
Validation: More Than Testing
A common misperception is that validation of software is synonymous with the testing of software. This is not at all accurate.
Federal regulation requires software validation, not software testing. Validation, by the FDA’s definition, is the “confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.”
Certainly, testing activity may be a component of validation, but note that the definition above does not use the word “test” at all. In fact, the definition mentions specifications and requirements specifically, assuming they exist, and therefore creates a de facto linkage between validation and requirements.
The GPSV describes at length the definitions of, and differences between, software validation and software verification. Only a few of the related activities would be considered test activities. Similarly, verification activities, though narrower in scope, involve reviews, evaluations, and testing activities.
Keep in mind that all verification and test activities are validation activities, but they do not make up the whole of validation. Some testing is considered a verification activity, some verification activities are not testing activities, and some testing is not verification testing. Stay mindful that validation is not the same as testing; for the remainder of this article, however, we will be talking about testing specifically.
It All Starts With Requirements
Is writing requirements a validation activity? Of course it is! What has really been verified if validation testing (i.e., verification testing) is attempted without written requirements? The GPSV points out: “Success in accurately and completely documenting software requirements is a crucial factor in successful validation of the resulting software.”
There is some room for debate on what constitutes a requirement and what constitutes design detail. While much benefit derives from solid, well-reviewed requirements quite apart from testing, the testing effort should be based on verifying the correct implementation of requirements.
Tests should test requirements.
Too often, tests are written without detailed requirements (or with requirements that lack sufficient detail), and the test developer is forced to refer to the software itself for details of how it works. The end result is a test that documents the way the software does work, not necessarily the way it is supposed to work. In essence, the requirements for the software become embedded in the test rather than in a requirements document. Many problems follow from this, but perhaps most alarming is that defects can get embedded into tests as expected behavior—making them difficult to identify as problems and guaranteeing that the behavior will be accepted forever after.
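To make the point concrete, here is a minimal sketch in Python of a requirement-based test. The requirement ID (SRS-042), the 150 bpm limit, and the `alarm_required` function are all hypothetical, invented for illustration; the essential point is that the expected results in the test are derived from the written requirement, not from observing what the code happens to do.

```python
# Hypothetical requirement, for illustration only:
#   SRS-042: "The monitor shall signal an alarm when the measured
#             heart rate exceeds 150 beats per minute."

def alarm_required(heart_rate_bpm: int) -> bool:
    """Device logic under test: alarm strictly above the 150 bpm limit."""
    return heart_rate_bpm > 150

def test_srs_042_alarm_threshold():
    # Expected values come from the written requirement above,
    # NOT from running the software and recording what it does.
    assert not alarm_required(150)  # at the limit: no alarm per SRS-042
    assert alarm_required(151)      # above the limit: alarm required

test_srs_042_alarm_threshold()
```

Had the “expected” values instead been copied from the software’s observed behavior, a defective threshold (say, one that alarms only above 155 bpm) would be silently enshrined in the test as correct—exactly the failure mode described above.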
Inadequate requirements usually result in inadequate testing.
Here are some telltale signs of inadequate requirements:
- Software developers need constant access to market and clinical specialists to understand how they want the software to work.
- Code writing begins before requirements are written or approved.
- Test authors need frequent access to the software developers to understand how the software should work.
- Formalized testing finds few if any defects, while “experts” (e.g., developers) familiar with the system find many defects in ad hoc testing. (This could also indicate inadequate testing, but usually it is a combination of poor requirements and poor testing.)
- Review of test results is time-consuming and requires “experts” familiar with the software to determine whether tests passed or failed.
- A preponderance of tests are failed by the testers but subsequently passed because the “testers didn’t understand how the software works.”
If these red flags fly, you need to go back and rewrite the requirements, making them specific enough so that they can be tested properly. Adequate requirements are the foundation of testing, and a little extra up-front effort can prevent the whole structure from collapsing later. While testing alone doesn’t amount to validation, proper validation demands testing, and testing depends on testable requirements.
This completes the first portion of my two-part series on medical device validation. In my next column, I will discuss steps to develop and implement a sound testing strategy and reveal some surprises—for instance, why it’s crucial to have inexperienced as well as experienced testers on the team. It will show you how to avoid an all-too-common problem: over-testing simple functionality while under-testing the complex functions far more likely to harbor defects.