Evolving Digital Health Impacts EP
The expanding impact of Digital Health and Digital Therapeutics is being felt by the EP sector. Remote monitoring of implantable cardioverter-defibrillators (ICDs), cardiac resynchronization therapy defibrillators (CRT-Ds), and pacemakers has been shown to improve patient outcomes. More recent work applying convolutional neural network (CNN) based artificial intelligence (AI) to the interpretation of ECG signals has been shown to improve recognition of silent atrial fibrillation (AFib) and asymptomatic left ventricular dysfunction (ALVD), among other indications.
The work of integrating the traditional tools of electrophysiology with the rapidly evolving tools of digital health is only just beginning, and the rate of change is expected to accelerate over the coming months and years. The impact on EP device manufacturers is likewise just starting to be felt. Integrating these historically stand-alone devices more fully into sophisticated cloud-based systems will require significant thought and rework to existing product platforms. Designing these new platforms to accommodate rapidly changing AI tools will add even more complexity to that rework.
These changes, felt broadly across the medical device and digital health world, are forcing a number of changes onto R&D groups with increased urgency. Recognition of the relative immaturity of the AI industry, coupled with the fact that AI essentially requires increased connectivity, has impelled the FDA to release new guidance in both areas over the past year.
The changes introduced by the new cybersecurity and AI guidance will be significant, touching every stage of the product lifecycle: development, testing, and post-deployment support.
Additionally, the mere fact that more devices will need to support cloud connectivity, not to mention the development of the cloud-based functionality, will put even more pressure on medical device R&D groups to become more efficient in their product development processes.
Across the spectrum of medical diagnostic equipment in particular, there has been an explosion of connectivity integrated into devices to support the collection of data across a network; connectivity has essentially become a required feature of nearly all modern diagnostic devices, driving very fast market change and associated growth. The rapid adoption of remote cardiac and ICD monitoring over the past few years created a market of over $5 billion in 2022, with projections that it could reach as much as $31 billion by 2028 (a CAGR of over 30%). AI-ECG is likewise projected to accelerate the ECG market from $8 billion in 2020 to more than $20 billion in 2030. This kind of market growth represents a real opportunity for companies poised to take advantage.
Unlocking this potential, however, requires more than an extension of the status quo. Connected, distributed software systems have little in common with the static, self-contained device software of old. Delivering scalable, secure, stable software connected to a fast-paced cloud ecosystem necessitates a different approach to engineering. The key challenges a medical device company must overcome include:
- Connected systems architectures
- The ability to track and monitor constantly updating dependencies
- A more effective approach to verification
- More complex and evolving regulatory requirements
All of these issues interconnect to make the successful development of remote monitors a much more complex problem. Below we introduce some of the core issues to be addressed.
Architectural support for scalability, AI integration, and stronger cybersecurity
Distributed systems have different architectural needs than self-contained control software. Systems designed to interact with a broad range of devices, often installed around the globe and frequently spanning different product types and versions, must be flexible, highly scalable, and always available. Additionally, since these new systems must support a decade or more of product evolution, the architecture needs to be designed for flexibility (extensibility, modularity), maintainability, reliability, and testability. The true test of these systems will be their ability to maintain performance and stability through an evolutionary development process.
The cloud industry, which has strong experience building large scale applications, has solved these problems through the microservices architecture pattern. Large applications are broken up into small, well-defined services on domain boundaries. Each service operates independently, with a strong API contract to dictate how it interfaces both with other services and outside callers.
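The service-boundary idea above can be sketched in a few lines. In this hypothetical example (the service name, result fields, and the trivial "detector" are all illustrative, not a real algorithm), callers depend only on an explicit contract, so the implementation behind it can be redeployed or replaced independently:

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical domain contract for an ECG-analysis microservice.
# Callers depend only on this interface, never on the implementation.
@dataclass(frozen=True)
class EcgResult:
    device_id: str
    afib_detected: bool
    confidence: float

class EcgAnalysisService(Protocol):
    def analyze(self, device_id: str, samples: list[float]) -> EcgResult:
        """Analyze one ECG strip and return a structured result."""
        ...

# One concrete service behind the contract; it can evolve independently
# as long as the contract is honored.
class ThresholdEcgService:
    def analyze(self, device_id: str, samples: list[float]) -> EcgResult:
        variability = max(samples) - min(samples) if samples else 0.0
        detected = variability > 1.5  # placeholder rule, not a real detector
        return EcgResult(device_id, detected, min(variability / 3.0, 1.0))

def run_pipeline(service: EcgAnalysisService, device_id: str, samples: list[float]) -> EcgResult:
    # The caller is written purely against the contract.
    return service.analyze(device_id, samples)
```

In a real deployment the contract would be expressed as a versioned HTTP or gRPC API rather than an in-process interface, but the design discipline is the same: the boundary, not the implementation, is what other services are allowed to know about.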
Integration of AI engines into new diagnostic systems is critical; an added challenge is that AI engines are evolving incredibly rapidly. Architecting systems to allow modular replacement of these engines will be required so that products can evolve along with the AI industry, always fielding the most capable technology. Failure to design for this inevitable change risks being locked into yesterday's technology and rapidly falling behind the competition.
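One common way to achieve this modularity is a registry or plugin pattern: the pipeline calls engines only through a stable interface, and which engine version runs is a matter of configuration. The sketch below is illustrative (the engine names and the lambda "models" are stand-ins for real inference code, not actual CNN engines):

```python
from typing import Callable

# Hypothetical engine registry. Each engine maps an ECG sample list to a
# risk score; swapping engines requires no change to calling code.
_ENGINES: dict[str, Callable[[list[float]], float]] = {}

def register_engine(name: str, infer: Callable[[list[float]], float]) -> None:
    _ENGINES[name] = infer

def classify(engine_name: str, ecg: list[float]) -> float:
    """Run whichever engine is currently configured for this deployment."""
    return _ENGINES[engine_name](ecg)

# Version 1 of an engine: a trivial stand-in for a trained CNN model.
register_engine("afib-v1", lambda ecg: sum(abs(x) for x in ecg) / max(len(ecg), 1))

# A newer engine registers under the same pipeline; selecting it is a
# configuration change, not a code change in the diagnostic system.
register_engine("afib-v2", lambda ecg: min(1.0, max(ecg, default=0.0)))
```

The same shape carries over to out-of-process engines: as long as the scoring interface stays fixed, a new model container can replace an old one without touching the rest of the system.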
Finally, a very important component that is often overlooked: the impact of architectural choices on operating costs can be extreme. Many problems of scale and availability appear easy to solve by implementing a reference architecture that has no regard for an application’s cost constraints. Economical solutions require careful up-front consideration and design. Efforts to build distributed, connected systems too often fail at scale due to financial factors.
The FDA’s March 29, 2023 announcement on “cyber devices” has shortened the compliance timeline for nearly all shipping medical devices. Any device that can connect to the cloud or the internet must now meet the new cybersecurity requirements, including field surveillance and aggressive monitoring of 3rd-party software dependencies.
The need for security in a distributed system is self-evident; however, simplistic patterns for achieving security are not sufficient. For connected systems integrating with medical devices, security considerations must be addressed at all architectural levels. Obvious concerns include connections between devices and servers, which must implement industry-standard encryption in the transport layer. But that’s just the beginning.
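The transport-layer baseline is the easy part, and modern platform libraries make it hard to get wrong if the defaults are used deliberately. A minimal sketch using Python's standard `ssl` module shows the posture a device-side client should start from (enforced certificate verification, hostname checking, and a floor on the protocol version):

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    # Modern defaults: certificate verification and hostname checking on.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # Refuse legacy protocol versions outright.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Embedded devices often use other TLS stacks, but the same three decisions (verify the peer, check the hostname, pin a minimum protocol version) apply regardless of the library.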
Within the device, software must be signed to ensure installed updates were produced by an authorized source. Access through a user interface must implement authentication and authorization, while simultaneously avoiding the risk of locking out a provider during a life-threatening emergency.
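The shape of an update-verification check is worth making concrete. Production devices use asymmetric signatures, so that the device holds only a public key and cannot forge updates; to keep this sketch self-contained on the standard library, a shared-secret HMAC stands in for the signature step (the key and payload names are hypothetical):

```python
import hashlib
import hmac

# Stand-in for the build server's signing key. In a real system this
# would be an asymmetric key pair, with only the public half on-device.
SIGNING_KEY = b"hypothetical-build-server-key"

def sign_update(payload: bytes) -> str:
    """Produce an integrity tag for an update package (build-server side)."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_update(payload: bytes, signature: str) -> bool:
    """Device-side check: refuse any package whose tag does not verify."""
    expected = sign_update(payload)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature)
```

The point of the sketch is the gate, not the primitive: the device refuses to install anything that fails verification, and the comparison itself is constant-time.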
For cloud-connected distributed systems, the industry must adopt a policy of “zero trust”, meaning it is no longer acceptable to assume that once access into the system is granted that the caller can be considered trustworthy. Instead, in order to limit the “blast radius”, each component is responsible for restricting access and validating operations. This is a key approach that can only be effective if built from the start, as retrofitting zero trust requires significant architectural rework.
Even when properly guarded against unauthorized access, a connected system can easily fall victim to a DDoS or other external attack. An incorrect system architecture and insufficient tooling will leave the system vulnerable to being taken offline by a remote attacker. The result can have life-threatening implications for patients, in addition to downtime, loss of data, and damage to the hospital’s reputation.
In addition to requiring a more robust treatment of cybersecurity within fielded products, manufacturers must demonstrate a robust cybersecurity support process. A plan to monitor each product’s cyber performance is coupled with the requirement to thoroughly vet all third-party components, along with any patches or updates to those components, throughout the entire life of the product.
More detailed management of all the components used in cloud systems is an increasingly critical requirement for device manufacturers. Nearly all cloud systems are built on off-the-shelf software stacks that comprise numerous 3rd party dependencies. Simple single page applications built using most popular web frameworks may include hundreds, sometimes thousands, of libraries once the dependency tree is fully explored. Usually well maintained and open source, many of these libraries are in a constant state of flux due to security patches and bug fixes. A “finished” application can become almost hopelessly out of date in a matter of months without the right maintenance strategy. This represents a fundamental change to software maintenance for device manufacturers. Annual, or even quarterly updates are insufficient to keep pace with components that must constantly respond to security issues and other defects.
The FDA’s new guidance makes it clear that manufacturers will be responsible for vetting and managing the risks associated with 3rd-party software components. These risks are constantly evolving as issues are discovered over time. Developing processes to keep pace with change is more than just good hygiene for a manufacturer; it is now a requirement. Updates must be made by engineering staff on a regular basis, complete with regression testing and a staged deployment strategy to avoid production outages. This will require continuous monitoring and a commitment to investing in maintenance, as well as adoption of new DevOps philosophies.
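The core of such a process is mechanical: compare the pinned dependency set against a feed of known advisories, on every build. Real pipelines use dedicated tools (for example, scanners over public vulnerability databases); the sketch below shows only the shape of the check, and the package versions and advisory entry are invented for illustration:

```python
# Hypothetical pinned dependency set, as recorded in an SBOM or lock file.
PINNED = {"examplelib": "2.28.0", "cryptolib": "41.0.3"}

# Hypothetical advisory feed: (package, version) -> issue description.
ADVISORIES = {("examplelib", "2.28.0"): "known vulnerability (hypothetical)"}

def audit(pinned: dict[str, str], advisories: dict[tuple[str, str], str]) -> list[str]:
    """Return one finding per pinned dependency with a known advisory."""
    findings = []
    for name, version in sorted(pinned.items()):
        issue = advisories.get((name, version))
        if issue:
            findings.append(f"{name}=={version}: {issue}")
    return findings
```

Wired into CI, a non-empty findings list fails the build, which is what turns dependency monitoring from a quarterly chore into a continuous control.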
The pace of change in cloud-based systems is fundamentally different from that of classic medical products, and as such verification has the potential to introduce crippling bottlenecks. The waterfall approach of executing a comprehensive test plan after completing development does not scale to a release cycle that can be measured in months or even weeks.
DevOps and test automation are key to breaking this bottleneck, but this is more than just writing test scripts. Cost-effective test strategies for complex, distributed software systems should be considered at the architectural level. While testing less well-architected systems is possible, engineering teams that attempt to shoehorn automation into existing code often produce cumbersome, unreliable test scripts that achieve poor code coverage and strain both time and budget constraints. Detailed system analysis must be performed before developing a test strategy; otherwise a manufacturer risks joining those whose ineffective automation is eventually abandoned, leaving them where they started, with expensive, ineffective manual procedures.
Additionally, the product code itself must be written in a manner that facilitates automated testing, with components having clearly defined code contracts and being capable of running in isolation. It’s important not to forget that the test scripts themselves also require their own updating, maintenance, and code reviews. This means development teams must adapt to a new set of responsibilities. Testing must become a core part of what a software team delivers, not an outside function operating on its own schedule.
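"Capable of running in isolation" has a concrete meaning in code: a component takes its dependencies as parameters instead of reaching out to live services, so an automated test can substitute a deterministic stub. The names and the 20% threshold below are illustrative only:

```python
from typing import Callable

def build_alert_checker(fetch_battery: Callable[[str], float]) -> Callable[[str], bool]:
    """Create a checker that flags devices whose battery needs replacement.

    The battery-level source is injected, so the component never depends
    directly on a live cloud backend.
    """
    def needs_replacement(device_id: str) -> bool:
        return fetch_battery(device_id) < 0.2  # illustrative 20% threshold
    return needs_replacement

# In production, fetch_battery would call the real device-data service;
# in an automated test it is replaced with a deterministic stub.
```

Components written this way can be exercised thousands of times per CI run with no test hardware or network access, which is what makes short release cycles verifiable at all.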
And finally, addressing these topics is increasingly on the FDA’s radar as well. A well-architected system, with strong deployment and field monitoring processes is crucial to the safe and effective operation of a connected medical device. A comprehensive assessment of patient risks and cyber security risks, paired with a design built around cyber security and updatability, is required for any modern connected medical device.
With concern growing over security breaches affecting medical devices, this March the FDA announced that it will immediately begin applying the cybersecurity guidelines released last year (see our Insights article https://fullspectrumsoftware.com/fda-steps-up-its-cyber-security-vigilance), with complete adoption required by this October. Creating a secure healthcare environment requires all players to adequately consider potential threats while adopting a zero-trust posture.
In the realm of AI, the FDA has also been moving quickly. It released its Software as a Medical Device (SaMD) and AI Action Plan in 2021 and confirmed its adoption last month. Moving forward, all products integrating AI capabilities will need to submit Predetermined Change Control Plans (PCCPs), which must outline the full range of anticipated future changes to the AI algorithm. Changes outside these ranges will require a full resubmission before release.
The benefits of a well-thought-out evolution plan for medical software systems have always been clear, with large reductions in costs related to support and the inclusion of new functions. The FDA has now made it even clearer that these plans will save large amounts of money and time.
The new generation of remote ICD monitors and AI-ECG devices will save enormous numbers of lives, of this we can be sure. However, profitably deploying these new devices will challenge many of the existing approaches to medical device development. The technology around distributed systems is rapidly evolving and its impact on the medical device industry is unique.
The fundamental changes to system architecture, especially for cybersecurity, AI, and risk analysis, paired with the short-cycle nature of today’s tech stacks, will create barriers to entry for monitoring equipment manufacturers. Existing product development teams must rebuild skillsets to adapt. As we have seen, this is far more than learning a new language or platform. The fundamental approach to designing, building, and maintaining a distributed system connecting medical devices represents a sea change for the industry.