Remote Cardiac Monitoring: New Challenges with the New Normal
Pressure continues to mount to reduce the use of in-hospital resources in order to improve the cost performance of healthcare. Developing more cost-effective methods of addressing cardiac-related illness, still the number one cause of death in the US, is a high priority. Remote monitoring and diagnostics are seen as providing tremendous advantages to cardiologists and patients, including the promise of earlier detection, while also keeping patients out of the hospital environment.
Across the spectrum of medical diagnostic equipment, there has been an explosion of connectivity integrated into devices to support the collection of data across a network; connectivity has essentially become a required feature of nearly all modern monitoring devices, and of most medical devices in general. Being able to provide complete patient cardiac information to cardiac care professionals, along with specific, actionable insights, has the potential to drive the Holy Grail of better outcomes and lower costs.
The rapid adoption of remote cardiac monitoring over the past few years has created a market of over $5 billion in 2022, with projections that the market could reach as much as $31 billion by 2028 (a CAGR of over 30%).
The trend toward extending managed care across both the hospital and home environment creates tremendous opportunities but also generates a new generation of significant technical challenges. Across all facets of our lives, the connection of devices to central cloud-based servers is so ubiquitous as to not even be noticed. Smartphones have integrated users so completely with web-based systems and applications that we have become inured to the problems we run into on a daily basis, from websites crashing to cyber-attacks breaching server data with such regularity that they are barely even newsworthy.
Consumers are remarkably accepting of these recurring issues in cloud-based systems. Medical devices, however, must meet FDA guidelines with regard to product reliability, patient privacy, and cyber security. Meanwhile, the rapid increase in ‘fitness devices’ that do not necessarily face the same requirements has created a problem with large numbers of false positives.
Remote cardiac monitors with more sophisticated connectivity capabilities offer an important potential improvement: data collection allows providers to improve patient outcomes by analyzing a broader set of patient patterns. Augmenting device behavior with off-device functionality and analytics, such as AI, can provide a more complete understanding of patient condition, including context of the input data.
Additionally, constant connections provide Product Managers with a wealth of data produced by connected devices to feed future innovation. Disused functionality can be replaced with new features. Workflow optimization based on real-world feedback is more easily identified and deployed. And through it all, engineers can fine tune algorithms with larger datasets to produce better, more accurate results.
Unlocking this potential, however, requires more than an extension of the status quo. Connected, distributed software systems have little in common with the static, self-contained device software of old. Delivering scalable, secure, stable software connected to a fast-paced cloud ecosystem necessitates a different approach to engineering. The key challenges a medical device company must overcome include:
- Distributed system architecture
- A more expansive view of data and system security
- Better usability for untrained users
- The ability to adapt to constantly updating dependencies
- A more nimble approach to verification
- Fast-paced development methodologies
- More complex and evolving regulatory requirements
All of these issues interconnect to make the successful development of remote monitors a much more complex problem. Below we introduce some of the core issues to be addressed.
Architectural support for scalability and inclusion of emerging technologies such as AI.
Distributed systems have different architectural needs than self-contained system software. Systems designed to interact with a broad range of devices, often installed around the globe and, most likely, with different product types and versions, must be cleverly flexible, highly scalable, and always available. And with the resulting complexity, a monolithic approach to architecture quickly becomes an inflexible maintenance and quality nightmare, inhibiting the fielding of new functionality and follow-on products.
Software companies building large scale applications in the cloud have solved these problems through the microservices architecture pattern. Large applications are broken up into small, well-defined services on domain boundaries. Each service operates independently, with a strong API contract to dictate how it interfaces both with other services and outside callers.
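A minimal sketch of such a service contract, using Python's structural typing. The service name, data fields, and the heart-rate threshold are all hypothetical; the point is that callers depend only on the contract, so the implementation behind it can be replaced without touching the rest of the system.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class EcgSummary:
    patient_id: str
    mean_heart_rate_bpm: float
    afib_suspected: bool


class AnalysisService(Protocol):
    """The contract other services program against; implementations can change freely."""

    def summarize(self, patient_id: str, heart_rates: list[float]) -> EcgSummary: ...


class SimpleAnalysisService:
    """One interchangeable implementation behind the contract."""

    def summarize(self, patient_id: str, heart_rates: list[float]) -> EcgSummary:
        mean_rate = sum(heart_rates) / len(heart_rates)
        # Placeholder rule for illustration only; a real service runs a validated algorithm.
        return EcgSummary(patient_id, mean_rate, afib_suspected=mean_rate > 100.0)


service: AnalysisService = SimpleAnalysisService()
summary = service.summarize("patient-001", [72.0, 75.0, 71.0])
```

Because the boundary is the `AnalysisService` contract rather than a shared code base, a smarter analysis engine can be swapped in behind it with no change to callers.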
Integration of AI engines into remote cardiac monitoring systems is increasingly seen as a critical evolution. Smarter analysis of input streams to better understand the context of the data and the actual patient condition is required to reduce false positives and provide a higher level of service to the hospitals.
AI engines are evolving at an incredibly rapid pace. Architecting your system to allow easy replacement of these engines will allow your product to evolve along with the AI industry, always fielding the most capable technology. Failure to design for this inevitable change risks being locked into ‘yesterday’s technology’ and falling behind the competition.
Finally, the impact of architectural choices on operating costs can be extreme. Many problems of scale and availability appear easy to solve by implementing a reference architecture that has no regard for an application’s cost constraints. Economical solutions require careful up-front consideration and design. Efforts to build distributed, connected systems can easily fail at scale due to financial factors.
The FDA’s April 8, 2022, draft guidance on Cybersecurity in Medical Devices represents only the latest step in the journey toward making security top of mind for the industry. The need for security in a distributed system has become self-evident; however, simplistic patterns for achieving security are not sufficient. Fortunately, security can be treated as the default option, but only if designed correctly from the start. Distributed systems that implement security as an afterthought often require costly rework to achieve the desired protection.
For connected systems integrating with medical devices, security considerations must be addressed at all architectural levels. Obvious concerns include connections between devices and servers which must implement industry standard encryption in the transport layer. But that’s just the beginning.
Within the device, software must be signed to ensure any updates installed were produced by an authorized source. Access through a user interface must implement authentication and authorization, while simultaneously avoiding the risk of locking out a provider during a life-threatening emergency.
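The update-signing step can be sketched as follows. This is a simplified illustration: real devices verify asymmetric signatures (e.g. Ed25519 or RSA) against a provisioned public key, while this sketch substitutes an HMAC; the key material and payload are invented for the example.

```python
import hashlib
import hmac

# HMAC stands in for the asymmetric signature a real device would verify;
# the key and payload below are hypothetical.
PROVISIONED_KEY = b"device-provisioning-key"


def sign_update(package: bytes, key: bytes) -> str:
    """Producer side: sign the update package with the signing key."""
    return hmac.new(key, package, hashlib.sha256).hexdigest()


def verify_update(package: bytes, signature: str, key: bytes) -> bool:
    """Device side: accept the update only if the signature checks out."""
    # Constant-time comparison avoids leaking signature bytes via timing.
    return hmac.compare_digest(sign_update(package, key), signature)


package = b"firmware v2.1 payload"
signature = sign_update(package, PROVISIONED_KEY)
```

A tampered package, or one signed with a different key, fails verification and is never installed.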
For cloud-connected distributed systems, the industry has adopted a policy of “zero trust”: it is no longer acceptable to assume that a caller, once allowed into the system, can be considered trustworthy. Instead, all components of a large system are responsible for restricting access and validating operations. This limits the “blast radius” of a breach, as a security hole in one part of the system does not become a problem for the whole system. In tandem with the principle of “least privilege”, this ensures that security in systems incorporating cloud services is the rule and not the exception. This approach can only be effective if built in from the start, as retrofitting zero trust requires significant rework.
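The zero-trust and least-privilege principles can be sketched with a toy signed token. In practice this role is played by standards such as JWT/OAuth scopes; here the token format, key, and scope names are all hypothetical, and the point is simply that every service re-validates the token and checks the specific permission it needs.

```python
import base64
import hashlib
import hmac
import json

VERIFICATION_KEY = b"per-service-verification-key"  # hypothetical key material


def issue_token(claims: dict) -> str:
    """Create a signed token carrying the caller's identity and scopes."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    tag = hmac.new(VERIFICATION_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + tag


def authorize(token: str, required_scope: str) -> bool:
    """Every service re-checks the token; nothing is trusted just for being 'inside'."""
    body, _, tag = token.partition(".")
    expected = hmac.new(VERIFICATION_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False  # forged or tampered token
    claims = json.loads(base64.urlsafe_b64decode(body))
    # Least privilege: grant only the specific operation the caller was scoped for.
    return required_scope in claims.get("scopes", [])


token = issue_token({"sub": "reporting-service", "scopes": ["read:ecg"]})
```

A caller scoped for `read:ecg` cannot write data, and a forged token is rejected by every service independently, containing the blast radius of any single breach.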
Closely related to security concerns, data privacy has long been established as a key architectural requirement. The topic covers multiple areas, including PHI and proprietary data. And while cardiac monitors are already focused on HIPAA concerns, the EU’s General Data Protection Regulation (GDPR) brings additional constraints when accessing the European market. Correctly architected, a cloud application can store sensitive data securely with access allowed only to authorized requesters. Developers must take care to avoid leaking sensitive data through logs, and access controls require thoughtful design to correctly gate PHI.
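Guarding logs against PHI leakage can be made systematic rather than left to developer discipline. A minimal sketch, assuming a single regex for US-style SSNs; a real deployment would redact per its own PHI data inventory (names, MRNs, dates of birth, and so on).

```python
import logging
import re

# Hypothetical pattern; a real deployment redacts per its own PHI data inventory.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


class RedactPhiFilter(logging.Filter):
    """Scrubs identifiers from log messages before they leave the process."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SSN_PATTERN.sub("[REDACTED]", str(record.msg))
        return True  # keep the (now scrubbed) record
```

Attached once with `logger.addFilter(RedactPhiFilter())`, the filter applies to every message on that logger, so a stray debug statement cannot ship PHI to a log aggregator.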
Even properly guarded against unauthorized access, a connected system can easily fall victim to a distributed denial of service (DDoS) or other external attack. Without the right system architecture and the correct tools to mitigate an attack, a system can be taken offline by a remote attacker. The result could have life-threatening implications to patients, in addition to downtime, loss of data, and damage to the company’s reputation.
Modern software systems are increasingly built on off-the-shelf software stacks that comprise countless 3rd party dependencies. A simple single page application built using any of the most popular web frameworks may include thousands of libraries once the dependency tree is fully explored. Often well maintained and open source, many of these libraries are in a constant state of flux due to security patches and bug fixes. A “finished” application can become almost hopelessly out of date in a matter of months without the right maintenance strategy. This represents a fundamental change to software maintenance for device manufacturers. Annual, or even quarterly updates are insufficient to keep pace with components that must constantly respond to security issues and other defects.
The FDA’s April 8, 2022 draft guidance makes it clear that manufacturers are ultimately responsible for vetting and managing the risks associated with 3rd party software components: “all software, including that developed by the device manufacturer… and obtained from third parties should be assessed for cybersecurity risk and that risk should be addressed.” These risks are constantly evolving as issues are discovered over time. Keeping pace with change is more than just good hygiene for a manufacturer, it is a necessity.
The risks of letting a code base become stale are twofold. First, security issues in external libraries are certain to be exploited in a matter of time. Second, and perhaps more concerning, modern libraries used in web and cloud applications move fast and are quick to introduce non-backwards compatible changes. Small, incremental upgrades to components are relatively painless, but once an application has fallen too far behind it becomes costly and time consuming to catch up. A company that waits until they have a problem at hand will almost certainly be unable to respond in a timely fashion.
The software industry has produced tools to solve these problems, with dependency scanners built into cloud platforms and source control systems that will catch components that are out of date in both software and operating systems. However, making use of these tools is far from automatic. Updates must be made by engineering staff on a regular basis, complete with regression testing and a staged deployment strategy to avoid production outages. This requires continuous monitoring, and a commitment to investing in maintenance. The DevOps philosophy that has taken root in the software industry in recent years is becoming increasingly relevant in the medical device industry.
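The core of what those scanners do can be sketched in a few lines: compare pinned dependency versions against an advisory feed of minimum safe versions. The package names and versions below are invented; real systems consume feeds such as the NVD or ecosystem-specific advisories.

```python
# Hypothetical advisory feed: package name -> first version that fixes known issues.
ADVISORIES = {"examplelib": (2, 4, 1), "otherlib": (1, 0, 9)}


def parse_version(version: str) -> tuple[int, ...]:
    """Turn '2.3.0' into (2, 3, 0) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))


def flag_stale(pinned: dict[str, str]) -> list[str]:
    """Return packages pinned below the minimum safe version."""
    return sorted(name for name, version in pinned.items()
                  if name in ADVISORIES and parse_version(version) < ADVISORIES[name])
```

Run on every build, a check like this surfaces vulnerable pins immediately; the hard part, as noted above, is the organizational commitment to act on the findings with regression testing and staged deployment.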
Given the pace of change common in distributed systems built on the cloud, verification has the potential to introduce crippling bottlenecks. The waterfall approach of executing a comprehensive test plan after completing development does not scale to a release cycle that can be measured in days or even hours.
Test automation is key to breaking this bottleneck. But this is more than just writing test scripts. Cost-effective test strategies for complex, distributed software systems should be considered at the architectural level. While it is possible to test systems that are not well architected, engineering teams that attempt to shoehorn automation into existing code often produce cumbersome, unreliable test scripts that achieve poor code coverage. A detailed system analysis should precede the test strategy; manufacturers with ineffective automation frequently abandon their test scripts, leaving them where they started, with expensive manual procedures.
Code must be written in a manner that facilitates automated testing, with components having clearly defined code contracts and being capable of running in isolation. Test scripts also require their own maintenance and code reviews. This means development teams must adapt to a new set of responsibilities. Testing must become a core part of what a software team delivers, not an outside function operating on its own schedule.
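What "capable of running in isolation" means in practice can be shown with a toy example. The function and threshold below are hypothetical; the design point is that the side effect (sending a notification) is injected as a parameter, so the decision logic can be exercised by a test with no device, network, or notification service present.

```python
from typing import Callable


def alarm_decider(heart_rate: float, threshold: float,
                  notify: Callable[[str], None]) -> bool:
    """Decision logic with its side effect injected, so it runs (and tests) in isolation."""
    if heart_rate > threshold:
        notify(f"heart rate {heart_rate} exceeds threshold {threshold}")
        return True
    return False
```

In a test, `notify` is simply a list's `append` method; in production it is the real alerting client. The same seam that makes the code testable also makes the contract between components explicit.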
The approaches described above represent a different way of thinking about software development. Engineers working in the medical device space have traditionally approached problem solving in a top-down waterfall fashion that assumes a more deliberate and carefully planned approach, with a division of labor that breaks up work across teams by function. Working at the pace required to adapt and update distributed systems is incompatible with this philosophy.
Software teams working in this environment must be cross-functional, with the skills necessary to rapidly design, build, test, and deploy small components. This has become commonplace in IT shops but is a challenge when applied to regulated medical devices. Teams must still satisfy regulatory requirements, but without falling back to old habits that assume a monolithic project. This is an uncommon combination of skills in the industry.
This change of approach extends beyond the development team. Maintaining a distributed system requires an around-the-clock focus on operations. An after-hours disruption cannot wait for Monday morning, and certainly can’t be put off until the next planned release cycle. Companies operating in this space must be prepared with the people and tools necessary to monitor systems, identify problems, and deploy solutions with a sense of immediacy.
In addition to being a good practice in general, addressing these topics is increasingly on the FDA’s radar. A well-architected system is crucial to the safe and effective operation of a connected medical device. A comprehensive assessment of patient risks and cyber security risks, paired with a design built around cyber security and updatability, is required for any modern connected medical device.
With the shift to more software intensive medical systems, the FDA is evolving its approach to regulation. FDA guidance around Software as a Medical Device (SaMD) extends many of the concerns around device software to pure software solutions, but with increased flexibility in defining which aspects of the software system should be regulated. Where critical functionality and algorithms are implemented in the software behind connected devices, manufacturers must apply the same scrutiny and process to distributed systems as would be expected on the device. This is especially true for security risks.
The concern for a security breach affecting a medical device is obvious, but the FDA in the April 8, 2022 guidance also warns of the risk to other networks by a compromised component of a connected system. Creating a secure healthcare environment requires all players to adequately consider potential threats while also adopting a zero-trust posture.
Whether connected to the public internet or limited to a hospital network, connected medical devices operate in an ecosystem rife with pitfalls and in a constant state of change. To design a system that is both effective and economical, system architects must carefully draw domain boundaries with consideration for patient and security risks. Done correctly, the maintenance burden of a large system is minimized, avoiding crippling verification and operational costs.
The new generation of remote cardiac monitors will save enormous numbers of lives, of this we can be sure. However, profitably deploying these new devices will challenge many of the existing approaches to medical device development. The technology around distributed systems is rapidly evolving and its impact on the medical device industry is unique.
The fundamental changes to system architecture, especially for cyber security, AI and risk analysis, paired with the short cycle nature of today’s tech stacks will create barriers to entry for monitoring equipment manufacturers. Existing product development teams must rebuild skillsets to adapt. As we have seen, this is far more than learning a new language or platform. The fundamental approach to designing, building, and maintaining a distributed system connecting medical devices represents a sea change for the industry.
Interested in delving into how these changes and considerations impact your organization? Contact Full Spectrum Software for a discussion.