The pressure to extend product capabilities in the healthcare information systems industry has gone through cycles over the past few decades, but it has never been higher than it is right now. A look at the HIMSS education track from ten years ago shows a focus on broader adoption of existing technology: expanded use of EHRs and HIEs, while the definition of "Meaningful Use," as promulgated under the HITECH Act, was being interpreted in a dozen different ways.
But the technology discussions mostly concerned how to leverage the newly emerging "mobile apps" and which platforms they should run on: BYOD or dedicated hardware. That was about it.
Today, there is an explosion of technology and a regulatory foot race to keep up with it that is just starting to impact the industry. A look at this year’s HIMSS educational topics gives an insight into some of those new pressures: AI and ML, Digital Health Transformation, Cybersecurity, and Data Science in Healthcare Information, just to name a few. Not covered in this year’s meeting are the newly proposed guidelines from the FDA relating to potential long-term liability in cybersecurity breaches and tighter integration with medical devices.
All of this has put, or shortly will put, more pressure on healthcare information systems developers than they have felt in quite some time. Many of today's products are built on large software codebases that have evolved, in some cases, over decades. This is not an easy starting point for responding to rapid market forces that require integrating new technology and new concepts for decision support while simultaneously hardening aging systems against malicious penetration. Clearly, new platforms and new approaches to development will be required in the very near future.
Where do we go, and how do we get there (quickly)?
The trend toward extending care management from the hospital through urgent care clinics, provider offices, and into the home has produced tremendous improvements in care and a reduction in operating costs. But that same trend has generated a new generation of significant technical challenges, and it has not yet bent the curve on overall healthcare costs.
Overcoming these challenges, however, may require more than an extension of the status quo. Delivering scalable, secure, stable software connected to a fast-paced cloud ecosystem necessitates a different approach to engineering. The key challenges a medical device or information systems company must navigate include:
• Architecting for rapidly evolving AI technology and functionality
• Managing data provenance and hygiene in an AI environment
• A more expansive view of data and system security
• Dependency management in the context of cybersecurity
• A more cost-effective approach to verification in a faster turn environment
• Fast-paced development methodologies
• More complex and evolving regulatory requirements
All of these issues interconnect to make the successful development of Digital and Connected Health Systems a much more complex problem. Below we introduce some of the core issues to be addressed:
Architectural support for scalability and inclusion of emerging technologies such as AI
Integration of AI engines into healthcare systems, especially decision support systems, is increasingly seen as a critical evolution. These new capabilities, demonstrated in simplified form by ChatGPT, will revolutionize the industry. But integrating this functionality won't be a 'one-and-done' development exercise.
AI engines are evolving at an incredibly rapid pace. New approaches will appear with frightening regularity, and it will take more than a decade for the eventual winners to be determined. Healthcare solution developers do not have the luxury of waiting for total clarity, and to avoid being locked into a non-competitive AI solution, they should design systems that explicitly plan for full replacement of the AI engine. Architecting for easy replacement allows a product to evolve along with the AI industry, always fielding the most capable technology. Failing to design for this inevitable change risks being locked into 'yesterday's technology' that will quickly fall behind the competition.
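The idea of designing for engine replacement can be sketched as a narrow interface that application code depends on, so any engine implementing the contract can be swapped in. This is a minimal illustration; the class and field names (`InferenceEngine`, `predict`, `hr`) are hypothetical, not from any particular product.

```python
from typing import Protocol


class InferenceEngine(Protocol):
    """Engine-agnostic contract. Any AI backend that implements
    predict() can be plugged in without touching clinical code."""

    def predict(self, features: dict) -> dict: ...


class RuleBasedEngine:
    """Stand-in engine; a future LLM- or ML-based engine would
    implement the same method signature."""

    def predict(self, features: dict) -> dict:
        # Toy rule: flag elevated heart rate as higher risk.
        score = 0.8 if features.get("hr", 0) > 120 else 0.1
        return {"risk": score, "engine": "rules-v1"}


class DecisionSupport:
    """Application layer depends only on the interface, so the
    engine can be replaced as the AI market evolves."""

    def __init__(self, engine: InferenceEngine):
        self._engine = engine

    def assess(self, features: dict) -> dict:
        return self._engine.predict(features)
```

Because `DecisionSupport` never names a concrete engine, replacing the AI backend becomes a deployment decision rather than a rewrite.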
In addition, the impact of architectural choices on operating costs can be extreme. Many problems of scale and availability appear easy to solve by using models deployed in the consumer and retail sectors. However, basing a system on the wrong reference architecture, for example one that has no regard for an application's cost constraints, can lead to product challenges as it expands in the marketplace. Appropriate and economical solutions require careful up-front consideration and an understanding of the downstream impacts of various architectural and technology trade-offs. Efforts to build distributed, connected systems can easily fail at scale due to unforeseen financial factors.
Modern systems are designed to be deployed globally, with high levels of interactivity from unsecured access points, and to support different product types and versions. These powerful new systems must be cleverly flexible, highly scalable, and always available. However, these same attributes complicate the development of the consistent security model the fielded environment requires, which in turn impacts usability and inhibits the fielding of new functionality and follow-on products.
The FDA's April 8, 2022, draft guidance on Cybersecurity in Medical Devices represents only the latest step in moving security to top of mind for the industry. The need for security in a distributed system has become self-evident; however, overly simplistic patterns for achieving security are not sufficient. Fortunately, security can be treated as the default option, but only if designed correctly from the start. Distributed systems that implement security as an afterthought often require costly rework to achieve the desired protection. It isn't always possible to avoid this problem in legacy systems, but it must be a core consideration for new systems and even for new functionality added to old systems.
For connected systems integrating with distributed users as well as medical devices, security considerations must be addressed at all architectural levels. Obvious concerns include connections between devices and servers, which must implement industry standard encryption in the transport layer. But that’s just the beginning.
Within any connected devices, software must be signed to ensure any updates installed were produced by an authorized source. Access through a device or web-based user interface must implement authentication and authorization, while simultaneously avoiding the risk of locking out a provider during a life-threatening emergency.
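The update-signing requirement above can be illustrated with a minimal integrity check. This sketch uses a shared-key HMAC for brevity; a production device would verify asymmetric signatures against a code-signing certificate, and the function names here are illustrative.

```python
import hashlib
import hmac


def sign_update(payload: bytes, key: bytes) -> str:
    """Produce a MAC over an update image (build-server side)."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()


def verify_update(payload: bytes, signature: str, key: bytes) -> bool:
    """Device-side check: reject any image whose MAC does not match.
    compare_digest avoids timing side channels in the comparison."""
    return hmac.compare_digest(sign_update(payload, key), signature)
```

The essential property is that the device refuses to install anything it cannot cryptographically tie back to an authorized source.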
For cloud-connected distributed systems, the industry has adopted a policy of "zero trust". This means it is no longer acceptable to assume that once a caller is allowed into the system it can be considered trustworthy. Instead, all components of a large system are responsible for restricting access and validating operations. This limits the "blast radius" of a breach, as a security hole in one part of the system does not become a problem for the whole system. In tandem with the principle of "least privilege," this ensures that security in systems incorporating cloud services is now the rule and not the exception. This is a key approach that can only be effective if built from the start, as retrofitting zero trust requires significant rework.
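A toy sketch of the zero-trust idea: each service re-checks the caller's granted scopes on every request instead of trusting an upstream gateway. The service names and scope strings here are invented for illustration.

```python
# Each service's own allow-list of scopes it will honor. In a real
# deployment these would come from a policy service or signed tokens.
ALLOWED_SCOPES = {
    "orders-service": {"orders:read", "orders:write"},
    "billing-service": {"billing:read"},
}


def authorize(service: str, token_scopes: set, required: str) -> bool:
    """Zero trust: the receiving service intersects the caller's token
    scopes with its own policy and grants only the overlap (least
    privilege), regardless of who let the caller in upstream."""
    granted = ALLOWED_SCOPES.get(service, set()) & token_scopes
    return required in granted
```

Because every component performs this check independently, compromising one service's perimeter does not confer access to the rest of the system.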
Even when properly guarded against unauthorized access, a connected system can easily fall victim to a distributed denial of service (DDoS) or other external attack. Without the right system architecture and the correct tools to mitigate an attack, a system can be taken offline by a remote attacker. The result could have life-threatening implications for patients, in addition to downtime, loss of data, and damage to the company's reputation.
Data Security Expands to AI Systems
Closely related to system security concerns, data privacy has long been established as a key architectural requirement. The topic spans multiple areas, including PHI and proprietary data. And while U.S. manufacturers are already focused on HIPAA concerns, the EU's General Data Protection Regulation (GDPR) brings additional constraints when entering the European market. Correctly architected, a cloud application can store sensitive data securely with access allowed only to authorized requesters. Developers must take care to avoid leaking sensitive data through logs, and access controls require thoughtful design to correctly gate PHI.
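The point about not leaking sensitive data through logs can be made concrete with a small redaction step applied before any record is written to application logs. The field names below are hypothetical examples of PHI, not a complete list.

```python
# Illustrative set of PHI fields; a real system would derive this
# from a data classification policy, not a hard-coded set.
PHI_FIELDS = {"name", "dob", "ssn", "mrn"}


def redact(record: dict) -> dict:
    """Return a copy of the record safe for logging: PHI values are
    masked, while operational fields pass through unchanged."""
    return {
        key: ("[REDACTED]" if key in PHI_FIELDS else value)
        for key, value in record.items()
    }
```

Routing all log output through a gate like this makes "no PHI in logs" an enforced property of the architecture rather than a convention developers must remember.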
However, as anyone in the technology industry can tell you, expanded AI capability is a function of expanded training data. What too many people fail to understand is that not all data is created equal. High-profile failures of AI systems have almost all been related to issues within the data set: unknown performance limitations resulting from poor data provenance, unseen bias contained within the data, or skew between the training data and the anticipated user base.
Data management in the new AI world will be fundamentally different than it was in the past. The healthcare industry has learned hard lessons in managing PHI over the past decade, but AI training data will need to be handled very differently. In addition to the systems used to 'promote' software forward from development through various levels of testing and ultimately to deployment, AI data will need a reversed pipeline: real-world data must be continuously collected and anonymized, propagating backward from the field through different levels of the organization until it is ultimately used in product development and testing.
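One building block of that reversed pipeline is an anonymization step applied as field data moves backward toward development. The sketch below replaces direct identifiers with a salted one-way hash so records can still be linked to each other without exposing the patient; the field names are illustrative assumptions.

```python
import hashlib

# Illustrative direct identifiers; a real pipeline would work from
# a maintained de-identification policy (e.g., HIPAA Safe Harbor).
IDENTIFIER_FIELDS = ("patient_id", "device_serial")


def anonymize(record: dict, salt: str) -> dict:
    """Return a copy of a field record with direct identifiers
    replaced by a salted SHA-256 pseudonym. The same input always
    maps to the same pseudonym, so longitudinal records for one
    patient remain linkable in the training set."""
    out = dict(record)
    for field in IDENTIFIER_FIELDS:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]
    return out
```

Keeping the salt inside the secured production boundary means the pseudonyms flowing backward to development cannot be reversed there.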
Managing this data flow will be a new paradigm for many organizations. Failure to manage it properly will result, at a minimum, in degraded AI capability and, at worst, in significant liability related to the model's operation in the field.
Modern software systems are increasingly built on combinations of off-the-shelf software stacks composed of countless third-party dependencies. A simple single-page application built using any of the most popular web frameworks may include thousands of libraries once the dependency tree is fully explored. Often well-maintained and open source, many of these libraries are in a constant state of flux due to security patches and bug fixes. A "finished" application can become almost hopelessly out of date in a matter of months without the right maintenance strategy. This represents a fundamental change to software maintenance for device and information system manufacturers. Annual, or even quarterly, updates are insufficient to keep pace with components that must constantly respond to security issues and other defects.
The FDA's April 8, 2022 draft guidance makes it clear that manufacturers are ultimately responsible for vetting and managing the risks associated with third-party software components: "all software, including that developed by the device manufacturer… and obtained from third parties should be assessed for cybersecurity risk and that risk should be addressed." These risks are constantly evolving as issues are discovered over time. Keeping pace with change is more than just good hygiene for a manufacturer; it is now a necessity.
The risks of letting a code base become stale are twofold. First, security issues in external libraries are certain to be exploited in time. Second, and perhaps more concerning, modern libraries used in web and cloud applications move fast and are quick to introduce non-backwards-compatible changes. Small, incremental upgrades to components are relatively painless, but once an application has fallen too far behind, it becomes costly and time-consuming to catch up. A company that waits until it has a problem at hand will almost certainly be unable to respond in a timely fashion.
The software industry has produced tools to solve these problems, with dependency scanners built into cloud platforms and source control systems that will catch components that are out of date in both software and operating systems. However, making use of these tools is far from automatic. Updates must be made by engineering staff on a regular basis, complete with regression testing and a staged deployment strategy to avoid production outages. This requires continuous monitoring, and a commitment to investing in maintenance. The DevOps philosophy that has taken root in the software industry in recent years is becoming increasingly relevant in the healthcare industry.
Given the pace of change common in distributed systems built on the cloud, verification has the potential to introduce crippling bottlenecks. The waterfall approach of executing a comprehensive test plan after completing development does not scale to a release cycle that can be measured in days or even hours.
Test automation is key to breaking this bottleneck, but it requires more than just writing test scripts. Cost-effective test strategies for complex, distributed software systems should be considered at the architectural level. While it is possible to test software systems that are not well architected, engineering teams that attempt to shoehorn automation into existing code often produce cumbersome, unreliable test scripts that achieve poor code coverage. Detailed system analysis should be performed before developing a test strategy; otherwise, ineffective automation ultimately leads to abandonment of the test scripts and an increasing reliance on expensive manual procedures.
Additionally, code must be written such that it facilitates automated testing, with components having clearly defined code contracts and being capable of running in isolation. Test scripts also require their own maintenance and code reviews. This means development teams must adapt to a new set of responsibilities. Testing must become a core part of what a software team delivers, not an outside function operating on its own schedule.
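The principle of components with clear contracts that can run in isolation can be shown with a small example: a component that depends on an injected clock rather than real time, so a unit test exercises it instantly and deterministically. All names here (`AlarmEscalator`, `FakeClock`) are invented for illustration.

```python
class FakeClock:
    """Test double satisfying the clock contract: anything with a
    now() method returning seconds."""

    def __init__(self, t: float):
        self._t = t

    def now(self) -> float:
        return self._t


class AlarmEscalator:
    """Escalates an alarm that has gone unacknowledged too long.
    The clock is injected, so the component runs in isolation:
    tests never sleep or depend on wall-clock time."""

    def __init__(self, clock, timeout_s: float = 30.0):
        self._clock = clock
        self._timeout = timeout_s

    def overdue(self, raised_at: float) -> bool:
        return self._clock.now() - raised_at > self._timeout
```

A test simply constructs a `FakeClock` at a chosen instant and asserts the expected behavior, which is the kind of fast, reliable automation the paragraph above calls for.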
The pressure to expand and improve the capabilities of healthcare information systems will only increase over the near future. The new capabilities will certainly create exciting new care delivery opportunities and significantly expand the current healthcare information systems market. However, this will bring with it increased development challenges; profitably deploying AI and improved cybersecurity will challenge many of the existing approaches to healthcare information systems. The technology around these systems is rapidly evolving and its impact on our industry will be unique.
The fundamental changes to system architecture, especially for cybersecurity, AI, and risk analysis, paired with the short-cycle nature of today's tech stacks, will create barriers to entry for healthcare information systems suppliers. Existing R&D teams must rebuild skillsets to adapt. As we have seen, this is far more challenging than learning a new language or platform. The fundamental approach to designing, building, and maintaining a distributed system with connected medical devices represents a sea change for the industry.
Interested in delving into how these changes and considerations impact your organization? Contact Full Spectrum Software for a discussion.