Fundamentals of testing microservices architecture

Increased digital adoption has pushed the need for speed to the forefront. The ability to conceptualize, develop, and launch new products, and then iterate to improve them faster than the competition, has become a critical driver of growth and customer mindshare. The adoption of agile principles and the movement toward scrum teams are steps in this direction. Equally disruptive changes have taken place on the application front, with the monolithic two- and three-tier architectures of the late 90s giving way to architectures based on microservices.

Having a single codebase made a monolithic architecture less risky but slow to change, the exact opposite of a services-based architecture. A microservices architecture makes it easier for multiple development teams to change the codebase in parallel. Transforming an application into a distributed set of services that are highly independent, yet interdependent, makes it possible to create new services and functionality, and to modify existing services, without impacting the overall application. Teams spread across geographies and locations can deliver these changes, and it is easier for them to understand a functional module than a humongous application codebase. However, the highly distributed nature of these services also creates many more potential points of failure.

Breaking it down

At its core, a microservices architecture comprises three layers: a REST layer that exposes the service's APIs, a service layer that holds the business logic, and a database layer. A robust testing strategy needs to cover all these layers and ensure that issues do not leak into production. The further an issue travels through the delivery stages, the greater its impact, since more teams are affected. Hence the test plan must cover multiple types of testing, such as service testing, subsystem testing, client acceptance testing, and performance testing. The sections that follow outline key aspects of service-level testing and integration testing in a microservices-based application landscape.
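
To make the layering concrete, here is a minimal sketch of a single service, using Spring Boot purely as one common example; the Order domain, the endpoint, and the class names are illustrative assumptions, not a prescribed design.

```java
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// REST layer: exposes the service's API over HTTP.
@RestController
@RequestMapping("/orders")
class OrderController {
    private final OrderService service;
    OrderController(OrderService service) { this.service = service; }

    @GetMapping("/{id}")
    Order get(@PathVariable long id) { return service.find(id); }
}

// Service layer: business logic lives here.
@Service
class OrderService {
    private final OrderRepository repo;
    OrderService(OrderRepository repo) { this.repo = repo; }
    Order find(long id) { return repo.load(id); }
}

// Database layer: persistence, stubbed out here for brevity.
@Repository
class OrderRepository {
    Order load(long id) { return new Order(id, "PLACED"); }
}

record Order(long id, String status) {}
```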

In service-level testing, each service that forms part of the application architecture needs to be validated. Each service depends on other services and transmits information to them as needed. In a monolithic architecture, connections are established from one class to another within the same Java Virtual Machine (JVM), so the chances of failure are far lower. In a services architecture, however, the components are distributed, and reaching another service requires a network call, which adds complexity and new failure modes.

Functional Validation: The primary goal of service testing is to validate the functionality of the service. Key to this is understanding all the events the service handles through both internal and external APIs. At times this calls for simulating certain events to ensure the service handles them properly. Collaboration with the development team is essential to understand the incoming events the service handles as part of its functionality. A key element of functional validation, API contract testing, tests the request and response payloads along with areas such as pagination and sorting behaviors, metadata, etc.
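
As a rough illustration, the sketch below checks one such contract with JUnit 5 and Java's built-in HTTP client; the local endpoint, port, and payload fields (id, status) are assumptions for the example, not a real contract.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class OrderApiContractTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void getOrderHonoursTheAgreedContract() throws Exception {
        // Assumed: the service under test runs locally on port 8080.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/orders/42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The status code and media type are part of the contract.
        assertEquals(200, response.statusCode());
        assertTrue(response.headers().firstValue("Content-Type")
                .orElse("").startsWith("application/json"));

        // Fields that consumers depend on must stay present in the payload.
        assertTrue(response.body().contains("\"id\""));
        assertTrue(response.body().contains("\"status\""));
    }
}
```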

Compatibility: Another important aspect is recognizing and preventing backward-compatibility issues, which surface when a changed version of the service is launched and breaks existing clients running in production. Changes to API contracts need to be evaluated in detail to determine whether they are mandatory and whether they can break production clients. Adding a new attribute or parameter is usually not a breaking change; however, changes to the response payload, behavior, error codes, or datatypes can break clients, and a change in a value typically implies a change in the logic behind it as well. Such changes need to be uncovered much earlier in the service testing lifecycle.
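
One lightweight way to surface breaking changes early is to replay the new response payload against the old consumer model. The sketch below uses Jackson for this; the OrderV1 record and both sample payloads are hypothetical.

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class BackwardCompatibilityTest {

    // The consumer's current view of the payload: the "old" contract.
    record OrderV1(long id, String status) {}

    private final ObjectMapper mapper = new ObjectMapper()
            // Additions should not break old consumers, so ignore new fields.
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    @Test
    void additiveChangeIsBackwardCompatible() throws Exception {
        // The candidate release adds an "eta" attribute: a non-breaking change.
        String payload = "{\"id\":42,\"status\":\"PLACED\",\"eta\":\"2d\"}";
        OrderV1 order = mapper.readValue(payload, OrderV1.class);
        assertEquals("PLACED", order.status());
    }

    @Test
    void structuralChangeBreaksOldConsumers() {
        // The candidate release turns "status" into an object: a breaking change.
        String payload = "{\"id\":42,\"status\":{\"code\":\"PLACED\"}}";
        assertThrows(Exception.class,
                () -> mapper.readValue(payload, OrderV1.class));
    }
}
```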

Dependencies: Another area of focus is external dependencies, where both incoming and outgoing API calls are tested. Since these tests depend heavily on the availability of other services, and hence on other teams, there is a strong need to remove that dependency through mocks. Working with developers to build mocks alongside the individual services enables dependency testing without waiting for the real service to be available. It is imperative that the mocks are easily configurable without needing access to the codebase. Mocks also make automation easier, giving teams the ability to run their test suites independently with minimal configuration.
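
A minimal sketch of this idea, using WireMock (one popular HTTP-mocking library) to stand in for an unavailable downstream inventory service; the port, path, and payload are illustrative.

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class InventoryMock {

    public static void main(String[] args) {
        // Start a mock HTTP server on a known port.
        WireMockServer inventory = new WireMockServer(8089);
        inventory.start();

        // Configure the canned response without touching the real codebase.
        inventory.stubFor(get(urlEqualTo("/inventory/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"sku\":42,\"inStock\":true}")));

        // The service under test is now pointed at http://localhost:8089
        // instead of the real inventory service, so its outgoing calls can
        // be exercised even when the real dependency is unavailable.
    }
}
```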

Bringing it all together

Once each service has been tested for its functionality, the next step is to validate how the collaborating services work together end to end. Known as subsystem testing or integration testing, this stage tests the functionality the services expose as a whole. Understanding the architecture or application blueprint through discussions with the development team is paramount at this stage. Further, the tests should use real services deployed in the integration environment rather than the mocks that stood in for external dependencies earlier.

As part of integration testing, the aim is to validate that the services are wired together correctly and can talk to each other: the event streams and inter-service API calls must be configured properly for the communication channels to work. If service-level functional testing has been thorough, the chances of finding errors at this stage are minimal, since the mocks created during functional testing will already have verified each service's behavior in isolation.
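
A subsystem test therefore tends to look like the sketch below: call one real service and assert the observable effect on another. The environment URLs and the order-to-inventory flow are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class OrderToInventoryIntegrationTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void placingAnOrderReservesInventoryDownstream() throws Exception {
        // Step 1: call the order service deployed in the integration environment.
        HttpRequest placeOrder = HttpRequest.newBuilder()
                .uri(URI.create("http://orders.integration.local/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"sku\":42,\"qty\":1}"))
                .build();
        assertEquals(201, client.send(placeOrder,
                HttpResponse.BodyHandlers.ofString()).statusCode());

        // Step 2: verify that the real inventory service saw the reservation,
        // which proves the inter-service wiring, not just one service's logic.
        HttpRequest checkStock = HttpRequest.newBuilder()
                .uri(URI.create("http://inventory.integration.local/inventory/42"))
                .GET()
                .build();
        assertTrue(client.send(checkStock, HttpResponse.BodyHandlers.ofString())
                .body().contains("\"reserved\":1"));
    }
}
```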

Looking in depth, we find that testing strategies for microservices are not radically different from those adopted for a monolithic application architecture. The fundamental difference lies in how the interdependencies and communication between the services that make up the larger application are tested, to ensure that the application as a whole functions in line with expectations.

Steps to Achieve EHR/EMR Interoperability to Put Patient at the Center of Healthcare

The US healthcare system has been battling quite a few challenges as providers continue to track outbreaks and stay abreast of the latest developments on vaccines and the spread of the disease. What became glaringly evident during the pandemic was the lack of EHR/EMR interoperability, which made sifting through patient information and providing seamless, quality care difficult. Although the federal government pumped billions of dollars into accelerating the adoption of electronic health records, we are still far from meeting the information challenges clinicians face on a day-to-day basis.

A classic case in point is California. It went through a public health crisis in 2020 as the state with the second-highest number of COVID-19 cases, pinning its hopes on a robust health data exchange. As Claudia Williams, CEO of Manifest MedEx (MX), points out, “Smaller practices don’t know what kind of hospital care the patient received, they don’t know what drugs the patient is on, and they don’t have the tools to conduct that level of risk stratification.”

The Department of Health and Human Services (HHS) recently published its 2020-2025 Federal Health IT Strategic Plan based on recommendations from more than 25 federal organizations.

Quality of data, user interfaces, and usability concerns, along with the inability of data to adequately support discovery and interoperability among systems – all underline the need to have better EHR/EMR interoperability to put patients at the heart of healthcare.

It’s time we dive deeper into the challenges stakeholders are facing as they proceed towards achieving EHR/EMR interoperability and how we can work towards making it a reality.

EHR and EMR: The fundamental difference

An electronic health record (EHR) is an electronic version of a patient’s medical history that includes test results, the present illness and its history, progress notes, immunizations, medications, and more. Often confused with an electronic medical record (EMR), an EHR is much broader in scope and offers a comprehensive view of the patient’s health. An EMR also contains a medical history along with a treatment plan, but it typically pertains to one practice; the details therefore stay with that particular physician or provider and are rarely shared when the patient moves on to another.

Because an EHR travels with the patient wherever they go, it can be shared with other physicians and providers, helping them make informed decisions. An EHR thus helps maintain continuity of medical care even when a patient moves to a different facility.

But in a complex healthcare environment, EHR integrations are not easy. The EHR solutions used by different medical facilities can differ in features, capabilities, workflows, and infrastructure requirements. Seamless sharing of information will therefore be possible only when interoperability is introduced into the system, which requires stakeholders to overcome the many challenges of attaining healthcare data interoperability.

The top ones include:

  • Absence of a unique patient identifier – Little or no standardization for identifying patients makes data exchange between EMRs and EHRs extremely tedious.
  • Lack of standardized data – With multiple competing formats for collating data, the information exchanged varies from system to system, posing a barrier to analyzing, storing, and exchanging data seamlessly.
  • Slow FHIR adoption – The use of Fast Healthcare Interoperability Resources (FHIR) is recommended since it defines data formats and APIs for health record exchange, combining the best of HL7 v2, HL7 v3, and CDA while leveraging modern web service technologies. It brings agility, efficiency, and security to data exchange with proper standardization of data. However, the adoption of FHIR application programming interfaces (APIs) still has a long way to go: while FHIR apps can extract data, many lack the ability to write data back (a minimal FHIR read is sketched after this list).
  • Data privacy and security issues – Healthcare compliances such as HIPAA can impose limitations on how stakeholders share and exchange data amongst each other and third-party vendors.
  • The relatively high cost of integration – Traditional models can be a tad out of reach of small and mid-sized organizations from a cost perspective.
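
For a sense of what FHIR standardization buys in practice, the sketch below reads a Patient resource over FHIR’s REST API using plain Java; the public HAPI FHIR test server and the resource id are used purely for illustration and may not always be available.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FhirPatientRead {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // FHIR defines the URL shape: [base]/[resourceType]/[id]
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://hapi.fhir.org/baseR4/Patient/example"))
                .header("Accept", "application/fhir+json") // FHIR JSON media type
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The body is a standardized Patient resource that any FHIR-aware
        // system can parse, which is the whole point of interoperability.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```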

Interoperability for patient-centric care

Interoperability keeps patients informed at all times, irrespective of which vendor they choose. It ensures:

  • Better patient health outcomes
  • Better quality of care
  • Lower healthcare costs
  • Tailored treatments based on individual history and preferences
  • Greater patient engagement
  • Reduced ambiguity
  • Data devoid of redundancies

Interoperability initiatives should be patient-centric and revolve around improving patient care. The chief objective should be to safely and securely exchange patient information across the healthcare ecosystem where interoperability serves as the linchpin.

As Dr. Farzad Mostashari, the National Coordinator for Health Information Technology at the U.S. Department of Health and Human Services, puts it, the agency wants to ensure that “information follows the patient regardless of geographic, organizational, or vendor boundaries.”

A CHIME KLAS report suggests that 67% of providers (up from 28% in 2017) admitted they often or nearly always had access to needed patient records in 2020, while only 15% (up from 6% in 2017) believe data exchange has impacted patient care. The Cures Act and many other federal initiatives now focus on improving patient care through data sharing, and significant progress has been seen in data sharing across disparate EMRs.

The way to interoperability

There are certain milestones to reach on the road to attaining interoperability. Just as the banking sector modifies current systems instead of recreating them, EHRs too will benefit from suitably modified systems wrapped in applications and added capabilities.

Here’s what we need to do:

  • Use a population health management system – This will make providers accountable for caring for populations with common health conditions. The system will use data from various sources including EHRs, EMRs, claims, monitoring devices, etc. to give a 360-degree view to providers while helping patients with regular alerts and messages.
  • Leverage the services of Health Information Exchange (HIE) – An HIE connects healthcare organizations across the state to allow them to exchange patient data. So if a patient is admitted to an emergency room, the HIE can pull data from other care centers to give providers an accurate clinical picture of the patient, and alert them when the patient checks in to another facility.
  • Deploy health management apps designed for patients – These are typically expected to help patients aggregate their health data, get health status, track appointments, manage healthcare plans, etc.
  • Employ big data analytics systems – These systems review large amounts of data to compare the effectiveness of treatments, aid medical discovery, and analyze shifts in disease patterns, responses to diseases, and safety issues pertaining to healthcare equipment. They rely on artificial intelligence to automatically correct data inconsistencies and to handle chores such as extracting data from images and free text.
  • Integrate APIs in healthcare – APIs allow developers to build applications quickly while protecting patient data from malware and other malicious threats. They save storage space and allow users to pinpoint the exact source of data and retrieve precisely what they need. APIs thus play a pivotal role in alleviating clinical burden by helping third-party apps and programs analyze data and enhance clinical decisions. As an integral part of healthcare, they now lead the way to successful interoperability.

Tread on the road to interoperability with Trigent

It’s easy to get lost in the shuffle, but with Trigent by your side, you can surely adopt best practices to shift your focus and achieve EHR/EMR interoperability. No matter how far you are on the road to interoperability, we will take you there with the necessary solutions. A few workflow changes and technologies should get us started.

Allow us to tell you how the new interoperability standards can help your practice. Call us today.