Artificial Intelligence in Medical Imaging: Architecture, Interoperability, and other Key Considerations



After a period of relative calm in the medical imaging market, we are now in the early stages of another era of rapid evolution. Efforts are underway on multiple fronts to improve the effectiveness of treatment, reduce costs, and prevent diagnostic errors. At the same time, there is a push to facilitate interoperability between different devices to drive new capabilities and value streams into many critical aspects of the imaging market. The introduction of Digital Health capabilities in medical imaging, of which Artificial Intelligence (AI) and increased interoperability are merely two, has accelerated dramatically in just the past few years, and an increasingly large share of overall clinical and product value creation now comes from the development of sophisticated software capabilities.


Integration of AI in particular is rapidly affecting the functionality of imaging systems. In addition to enhancing the ability to identify extremely subtle features, such as microfractures in the hips of elderly patients, new AI-based Virtual Native Enhancement (VNE) imaging can improve cardiac image quality over standard contrast-enhanced MRI while reducing imaging time from 30-45 minutes to just 15.


Fast Healthcare Interoperability Resources (FHIR), first released in 2012, has significantly simplified the exchange of richer data sets, enabling better and more complete use of HL7 and DICOM and enhancing interoperability between the diverse and often dispersed parts of the healthcare ecosystem: image collection, analysis, and diagnostic decision support. This stronger integration of the imaging supply chain has enabled faster adoption of cutting-edge AI and machine learning (ML) software that delivers better radiology images and improves their reading in shorter times, together with an integrated AI-powered QA process. These new software-driven capabilities have the potential to fundamentally change the industry.
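As a concrete illustration of how FHIR ties the imaging and records worlds together, a FHIR R4 ImagingStudy resource can carry a DICOM study UID alongside a reference to the patient record. The sketch below builds a minimal, illustrative subset of that resource in Python; the helper function name and example identifiers are hypothetical, though the field names and code systems follow the published FHIR R4 resource definition.

```python
import json


def make_imaging_study(patient_id: str, study_uid: str, modality: str) -> dict:
    """Build a minimal FHIR R4 ImagingStudy resource (illustrative subset
    of fields) linking a DICOM study UID to a patient record."""
    return {
        "resourceType": "ImagingStudy",
        "status": "available",
        # Modality codes are drawn from the standard DICOM code system.
        "modality": [{
            "system": "http://dicom.nema.org/resources/ontology/DCM",
            "code": modality,
        }],
        # Reference back to the patient in the EMR/EHR.
        "subject": {"reference": f"Patient/{patient_id}"},
        # The DICOM study instance UID is carried as an OID-style identifier.
        "identifier": [{
            "system": "urn:dicom:uid",
            "value": f"urn:oid:{study_uid}",
        }],
    }


study = make_imaging_study("example-123", "1.2.840.113619.2.5.999", "MR")
print(json.dumps(study, indent=2))
```

A resource like this would typically be POSTed to a FHIR server's `ImagingStudy` endpoint, letting PACS, EMR, and decision-support systems discover and retrieve the underlying DICOM study.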


How to Integrate New AI and Interoperability Functionality into Existing Products


Although AI in imaging can be controversial, Original Equipment Manufacturers (OEMs) in the imaging industry can’t afford to wait. They are moving rapidly to incorporate a broad array of new software-based functionality into their existing products, a challenging endeavor. The control software in a modern imaging system is an extremely large, complex, and often aging codebase. Determining how best to integrate new functionality into it represents a significant hurdle to rapid deployment.


The question for OEMs is two-fold: can my software system architecture expand to include this new functionality and, if so, what is the least invasive approach to re-architecting my product to include AI?


MRI and CT imagers have exceedingly complex software control systems which are fully integrated with other parts of the imaging ecosystem, including PACS, Electronic Medical Record (EMR) systems and, increasingly, treatment planning systems. Rapidly evolving AI capabilities can significantly impact both the operation of the imaging equipment and the workflows for reading the results. This represents one of the most difficult classes of technology to incorporate into medical equipment: fast-moving technology entering a slower-moving, complex, and highly regulated sector.


AI functionality is improving at an exponential rate, one that is difficult even for those dedicated to the technology to fully comprehend. The change in engine capability is measured in hundreds of percent per year – far outside the range of anything that has been integrated into healthcare systems in the past. Imaging OEMs face real-world challenges in effectively integrating these new AI capabilities into their existing products, when the new technology is evolving so fast that by the time they design in a new AI engine, it will most likely be obsolete.


OEMs are being presented with difficult decisions regarding evaluating not only the current state of different AI engines, but also predicting the rate of improvement of the various AI technologies. Betting on the wrong engine could have very negative impacts on competitive position in the very near future.


This predicament is a classic example of the need to approach the situation with systems thinking and formalized architecture assessment. The future challenge with AI will be to excise and replace core functionality while minimizing the stress on the overall system design, thereby reducing the ripple effects into other areas of the system.


With the right architecture, OEMs can position themselves to be effectively “AI technology agnostic,” allowing the AI industry to continue to evolve at its own rate while periodically integrating “best in class” capabilities into their own products. To achieve this, OEMs must properly architect their software systems now – but how?


The first step is to define the new AI capabilities in a lexicon that aligns with the system design. The new functionality must fit into the existing system within the context of the current architectural design. The AI capabilities should be integrated incrementally: at first augmenting, then ultimately replacing, current control and analytical functionality. By aligning the new functionality with the design concepts of the existing system, it is usually possible to approach the redesign in a minimalist fashion.


The next step is to assess the existing software system from an architectural perspective. Using techniques such as the Architecture Tradeoff Analysis Method (ATAM) or similar, a system can be decomposed to illuminate design tradeoffs that may either support or inhibit the inclusion of new functionality. This assessment effectively creates a roadmap of potential approaches for enhancing the system design to support integration of an AI module (or several AI modules) in a more ‘loosely coupled’ manner.


One example of the benefit: by giving each of the major components of the control system – the imager control, image data collection and reconstruction, post-acquisition processing and review, etc. – its own more robustly defined inputs and outputs, it becomes possible to wrap AI engines in a “generic AI wrapper” that insulates the system from the vagaries of specific AI implementations. As new AI approaches evolve, this approach supports swapping engines in and out.
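As a minimal sketch of this idea (all class and method names here are hypothetical, not drawn from any specific OEM codebase), the “generic AI wrapper” can be expressed as an abstract interface that the control system depends on, with concrete engines swapped behind it:

```python
from abc import ABC, abstractmethod


class AIEngine(ABC):
    """Generic wrapper interface: the control system sees only this
    contract, never a vendor-specific AI API."""

    @abstractmethod
    def enhance(self, image: list) -> list:
        """Return an enhanced reconstruction of the input image."""


class IdentityEngine(AIEngine):
    """Trivial stand-in engine, useful until a real model is integrated."""

    def enhance(self, image: list) -> list:
        return image


class ReconstructionPipeline:
    """Post-acquisition stage that depends only on the wrapper interface,
    so engines can be swapped as AI technology evolves."""

    def __init__(self, engine: AIEngine):
        self.engine = engine

    def swap_engine(self, engine: AIEngine) -> None:
        # Hot-swap the engine without touching any pipeline code.
        self.engine = engine

    def process(self, image: list) -> list:
        return self.engine.enhance(image)


pipeline = ReconstructionPipeline(IdentityEngine())
result = pipeline.process([[0.1, 0.2], [0.3, 0.4]])
```

The design choice is the classic adapter/strategy pattern: because `ReconstructionPipeline` holds only the abstract `AIEngine`, replacing an obsolete engine with a newer one touches a single wrapper class rather than rippling through the control software.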


This approach also addresses the increasingly difficult problem of interoperability. Since its creation, each of the four revisions of FHIR has improved the overall interoperability of the imaging and diagnostics workflow. But more work needs to be done by OEMs at the system design level to ease the integration of new advanced capabilities as they arise.




Inclusion of rapidly evolving AI engines presents an enormous opportunity and challenge for the medical imaging industry. Properly approached, however, this challenge can provide a significant market advantage for OEMs, especially in the mid-to-long term.

Assessing the new technology to understand its alignment with the industry lexicon, and addressing specific integration and interoperability limitations, is critical to ensuring a smooth, interoperable integration.


Formally assessing existing software systems to understand how best to incorporate AI while minimizing structural dislocation will accelerate delivery of the new functionality to market, reducing design fatigue, improving reliability, and, most importantly, providing a competitive advantage.


Full Spectrum helps healthcare companies navigate the evolution of a “device-centric” industry to a “digital health-centric” one. Let us know if we can help with yours.