Fundamentals of testing microservices architecture

Increased digital adoption has pushed the need for speed to the forefront. The ability to conceptualize, develop, and launch new products, and to iterate on them well ahead of the competition to gain customer mindshare, has become the critical driver of growth. The adoption of agile principles and the movement towards scrum teams for increased agility are steps in this direction. Disruptive changes have also taken place on the application front, with the 3-tier architecture of the late 90s and the subsequent 2-tier monolithic architecture giving way to one based on microservices.

Having a single codebase made the monolithic architecture less risky but slow to change, the exact opposite of a services-based architecture. A microservices architecture makes it easier for multiple development teams to change the codebase in parallel. Transforming an application into a distributed set of services that are highly independent, yet interdependent, makes it possible to create new services and functionalities, and to modify existing ones, without impacting the overall application. Teams spread across geographies can deliver these changes, and it is easier for them to understand a functional module than the humongous application codebase. However, the highly distributed nature of services also gives them a heightened ability to fail.

Breaking it down

At its core, a microservices architecture comprises three layers: a REST layer that allows the service to expose APIs, the database layer, and the service layer. A robust testing strategy needs to cover all these layers and ensure that issues do not leak into production. The further an issue travels across stages, the greater its impact, as more teams are affected. Hence the test plan must cover multiple types of testing, such as service testing, subsystem testing, client acceptance testing, and performance testing. The following paragraphs outline key aspects of service-level testing and integration testing in a microservices-based application landscape.
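To make the layers concrete, below is a minimal sketch of a single hypothetical order service, assuming Spring Boot and Spring Data JPA; every class, endpoint, and entity name here is illustrative rather than prescribed by the article.

```java
import java.util.NoSuchElementException;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// REST layer: exposes the service's API over HTTP.
@RestController
@RequestMapping("/orders")
class OrderController {
    private final OrderService service;

    OrderController(OrderService service) {
        this.service = service;
    }

    @GetMapping("/{id}")
    Order getOrder(@PathVariable long id) {
        return service.findOrder(id);
    }
}

// Service layer: business logic lives here.
@Service
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    Order findOrder(long id) {
        return repository.findById(id)
                .orElseThrow(() -> new NoSuchElementException("order " + id));
    }
}

// Database layer: persistence handled by Spring Data JPA.
// (Order is assumed to be a JPA entity defined elsewhere.)
interface OrderRepository extends JpaRepository<Order, Long> {
}
```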

In service-level testing, each service forming part of the application architecture needs to be validated. Each service has dependencies on other services and transmits information to them as needed. In a monolithic architecture, calls are made from one class to another within the same Java Virtual Machine (JVM), so the chances of failure are far lower. In a services architecture, however, the services are distributed, and reaching another service requires a network call, which adds complexity.

Functional Validation: The primary goal in services testing is validating the functionality of a service. Key to this is understanding all the events the service handles, through both internal and external APIs. At times this calls for simulating certain events to ensure they are handled properly by the service. Collaboration with the development team is essential to understand the incoming events a service handles as part of its functionality. A key element of functional validation, API contract testing, validates the request and response payloads along with areas like pagination and sorting behavior, metadata, and so on.
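As an illustration, here is a minimal contract-style test sketch using REST Assured and JUnit 5 against a hypothetical paginated GET /orders endpoint; the base URI, field names, and metadata structure are all assumptions.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.isA;
import static org.hamcrest.Matchers.notNullValue;

import org.junit.jupiter.api.Test;

// Contract-style test for a hypothetical paginated GET /orders endpoint:
// it pins down the response shape (status code, field presence and types,
// pagination metadata) rather than deep business logic.
class OrderApiContractTest {

    @Test
    void listOrdersHonoursTheContract() {
        given()
            .baseUri("http://localhost:8080")       // assumed deployment
            .queryParam("page", 0)
            .queryParam("size", 2)
            .queryParam("sort", "createdAt,desc")   // sorting behavior
        .when()
            .get("/orders")
        .then()
            .statusCode(200)
            // payload fields promised by the contract
            .body("content[0].id", notNullValue())
            .body("content[0].status", isA(String.class))
            // pagination metadata promised by the contract
            .body("page.size", equalTo(2))
            .body("page.number", equalTo(0));
    }
}
```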

Compatibility: Another important aspect is recognizing and negating backward compatibility issues. These arise when a changed version of a service is launched that breaks existing clients running in production. Changes to API contracts need to be evaluated in detail to understand whether they are mandatory and whether they can break clients in production. Adding a new attribute or parameter may not qualify as a breaking change; however, changes to response payloads, behavior, error codes, or datatypes can break clients. A change in a value typically signals a change in the logic behind it as well. Such changes need to be uncovered much earlier in the service testing lifecycle.
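A sketch of how such a compatibility check might be automated follows, again with REST Assured and JUnit 5 against a hypothetical GET /orders/{id} endpoint; the field names and datatypes are assumptions. The idea is to pin down the fields existing clients rely on, so a breaking change fails fast.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.isA;

import org.junit.jupiter.api.Test;

// Backward-compatibility check for a hypothetical new version of
// GET /orders/{id}: every field that production clients already rely on
// must keep its name and datatype. Additive fields are allowed.
class OrderApiCompatibilityTest {

    @Test
    void newVersionStillSatisfiesExistingClients() {
        given()
            .baseUri("http://localhost:8080")   // assumed deployment
        .when()
            .get("/orders/42")
        .then()
            .statusCode(200)
            // fields old clients depend on: removing or retyping any of
            // these would be a breaking change
            .body("id", isA(Integer.class))
            .body("status", isA(String.class))
            .body("total", isA(Float.class));
        // A newly added optional field (say, "discountCode") would pass
        // untouched here: additive changes do not break existing clients.
    }
}
```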

Dependencies: Another area of focus is external dependencies, where one tests both incoming and outgoing API calls. Since these depend heavily on the availability of other services, and hence on other teams, there is a strong need to remove the dependency through the use of mocks. Working with developers to insert mocks while individual services are being created enables testing of dependencies without waiting for the actual services to be available. It is imperative that the mocks be easily configurable without access to the codebase. Mocks also make automation easier, giving teams the ability to run tests independently with no extra configuration.
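One common way to achieve this is an HTTP-level mock such as WireMock, which stubs the dependency over the wire so no access to its codebase is needed. The sketch below assumes a hypothetical inventory service dependency; the port, path, and payload are illustrative.

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

// Stubs a hypothetical inventory service over HTTP so the service under
// test can be exercised without the real dependency being available.
class InventoryDependencyTest {

    static WireMockServer inventoryMock = new WireMockServer(8089);

    @BeforeAll
    static void startMock() {
        inventoryMock.start();
        // Configurable canned response: no access to the inventory
        // service's codebase is needed to change it.
        inventoryMock.stubFor(get(urlEqualTo("/inventory/42"))
            .willReturn(aResponse()
                .withStatus(200)
                .withHeader("Content-Type", "application/json")
                .withBody("{\"sku\": 42, \"inStock\": true}")));
    }

    @AfterAll
    static void stopMock() {
        inventoryMock.stop();
    }

    @Test
    void stubBehavesLikeTheRealDependency() throws Exception {
        // The service under test would point at http://localhost:8089
        // instead of the real inventory service; here we just prove the
        // stub answers as configured.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8089/inventory/42")).build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        Assertions.assertEquals(200, response.statusCode());
        Assertions.assertTrue(response.body().contains("\"inStock\": true"));
    }
}
```

Because the stub is plain configuration, a path and a canned response, testers can reshape it for edge cases, error codes, or slow responses without touching any service's source code.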

Bringing it all together

Once each service has been tested for its functionality, the next step is to validate how the collaborating services work together end to end. Known as subsystem testing or integration testing, this stage tests the functionality exposed by the services as a whole. Understanding the architecture, or application blueprint, through discussions with the development team is paramount at this stage. Furthermore, real services deployed in the integration environment should be used, rather than the mocks that stood in for external dependencies earlier.

As part of integration testing, there is a need to validate that the services are wired correctly and talking to each other. Event streams and inter-service API calls need to be configured properly so that the inter-service communication channels work as intended. If the service-level functional testing was thorough, the chances of finding errors at this stage are minimal, since the mocks created during functional testing would have ensured that each service functions properly on its own.
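For illustration, an integration-level test might exercise one end-to-end flow through the real deployed services. The sketch below assumes hypothetical order and fulfilment services in an integration environment; all URLs, endpoints, and payloads are invented for the example.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

// End-to-end flow against real services in the integration environment,
// with no mocks: an order created through the order service should become
// visible through the downstream fulfilment service.
class OrderFulfilmentIntegrationTest {

    static final String ORDERS = "http://orders.integration.example";         // assumed
    static final String FULFILMENT = "http://fulfilment.integration.example"; // assumed

    @Test
    void orderFlowsThroughToFulfilment() {
        // Step 1: create an order via the order service's public API.
        int orderId =
            given()
                .baseUri(ORDERS)
                .contentType("application/json")
                .body("{\"sku\": 42, \"quantity\": 1}")
            .when()
                .post("/orders")
            .then()
                .statusCode(201)
                .extract().path("id");

        // Step 2: the fulfilment service should have picked up the event
        // over the inter-service channel and created a shipment.
        // (A real test would poll or await here, since the inter-service
        // event is asynchronous.)
        given()
            .baseUri(FULFILMENT)
        .when()
            .get("/shipments?orderId=" + orderId)
        .then()
            .statusCode(200)
            .body("[0].status", equalTo("PENDING"));
    }
}
```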

Looking at it in depth, the testing strategies for microservices are not radically different from those adopted for a monolithic application architecture. The fundamental difference lies in how the interdependencies and communication between the multiple services that make up the larger application are tested, to ensure that the application as a whole functions in line with expectations.

The Role of QA in an Agile Model

Quality assurance and software testing have, over the years, mostly been treated as an isolated function in project development. In agile methodology, however, testing is an integral part of the software or system development lifecycle.

Agile methodology involves continuous iteration of both the development and the testing activities of a project. It requires all the developers working on a project to work in parallel with the testing team, to ensure that the customer's business requirements are met on schedule.

Among teams that do not adhere to the agile manifesto, the role of the tester is limited to writing test cases, executing them, logging defects, and verifying fixes. In agile methodology, however, the tester works as part of the development team and ensures that quality is built into the end product by working closely with the developers.

With the agile methodology gaining popularity, system testing, or application testing, has been transformed as a process, and testers today play a key role in overall project development. This requires testers to have not only strong testing skills but also good domain knowledge. Testing engineers therefore need to adjust to this new test strategy of rapidly changing requirements.

Key attributes of testers working on an Agile model are as follows:

Testers need to do a lot more than just build test cases:

In the traditional waterfall model, testers are involved at the end of the project, once coding is complete, and the QA is expected to execute the test cases to verify whether the built features match the requirements.

In the Agile model, by contrast, the QA adds more value to the project. The role is not restricted to building and executing test cases; the QA also works closely with the developers, the BA, the Product Owner, and so forth. In this scenario, the QA can write acceptance test cases for the Product Owner and interact with the Product Owner to ask questions and seek clarifications about the business requirements.

Testers need to collaborate and coordinate with the developers and the end customer:

In an Agile model, the QA tester needs to continuously provide testing feedback to the customer and, in turn, collect feedback from them after each sprint. Agile testers need to look at the product from different perspectives, i.e. end user, business, developer, and support, and to achieve this, QA needs to coordinate with all of them. In some cases, the QA tester also works as a proxy Product Owner, helping to develop the acceptance criteria for user stories.

In the Agile model, developers and testers at times collaborate to write test cases as the acceptance criteria. This coordination within the team, especially between QA and the developers, reduces ambiguity and ensures that everyone is on the same page. It also saves the developers coding time.

Testers need to have automation testing skills:

It is always an advantage if a tester knows automation testing tools. Testers with automation skills can help prepare test scripts and test plans for better coverage, which matters in an agile project with sprints of 2-4 weeks. Automation also helps during regression testing by providing quick feedback.

Every time a new build is delivered for testing, the QA can run the automation scripts and provide rapid feedback on whether the new features, as well as the old ones, are working correctly.
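As a minimal sketch of what such build-time checks can look like, the example below tags JUnit 5 tests so the CI pipeline can run them on every build; LoginService is a hypothetical class stubbed inline so the sketch compiles.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Hypothetical system under test, stubbed here so the sketch compiles.
class LoginService {
    String login(String user, String password) {
        return "correct-password".equals(password) ? "WELCOME" : "DENIED";
    }
}

// Regression checks tagged so CI can pick them up on every build.
class LoginRegressionTest {

    @Test
    @Tag("regression")
    void existingUserCanStillLogIn() {
        assertEquals("WELCOME", new LoginService().login("alice", "correct-password"));
    }

    @Test
    @Tag("regression")
    void wrongPasswordIsStillRejected() {
        assertEquals("DENIED", new LoginService().login("alice", "wrong-password"));
    }
}
```

With a standard JUnit 5 Maven setup, a command like `mvn test -Dgroups=regression` runs the tagged suite against each new build and returns feedback within minutes.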

Testers help in estimation:

The QA writes test cases and scenarios for an application covering both the happy and unhappy paths. This helps the team estimate user stories more accurately, based on the clarity gained from identifying the positive and negative flows.
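A parameterized test is one compact way to enumerate both kinds of flow explicitly. The sketch below uses JUnit 5 and a hypothetical order-quantity rule, stubbed inline so the example is self-contained.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Enumerates happy and unhappy paths for a hypothetical order-quantity
// rule; listing the flows like this is what sharpens story estimates.
class QuantityValidatorTest {

    // Stand-in for the real validation rule, kept inline for the sketch.
    static boolean isValidQuantity(int qty) {
        return qty >= 1 && qty <= 100;
    }

    @ParameterizedTest
    @CsvSource({
        "1,   true",    // happy path: minimum allowed quantity
        "100, true",    // happy path: maximum allowed quantity
        "0,   false",   // unhappy path: below the minimum
        "-5,  false",   // unhappy path: negative quantity
        "101, false"    // unhappy path: above the maximum
    })
    void coversHappyAndUnhappyPaths(int qty, boolean expected) {
        assertEquals(expected, isValidQuantity(qty));
    }
}
```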

Testers need to participate in demos:

After the completion of each sprint, a demo of all the features completed in that sprint is given to the customer and other stakeholders. A typical sprint lasts 2-4 weeks, and in such a short span everyone involved is busy completing their own work: the developers are busy developing the assigned user stories, while the QA is busy testing the released items, clarifying questions with the Product Owner, and automating the tests. In that time, the developers sometimes find it difficult to finish the complete functionality of the assigned user stories, and they often consult the QA, who has a better understanding of the application as a whole. Hence it is good practice for the QA to run the demo for the client and then field the business-related questions, leaving the developers free to handle the customer's technical queries.

Testers need to analyze the requirements:

In the Agile model, the QA is, after the BA, in a good position to analyze requirements, because the QA always uses the application from the end user's point of view. The QA therefore helps the customer by providing timely feedback based on testing experience.

The QA should be part of all retrospectives and review meetings, contributing to overall process improvement and requirement understanding.

To summarize, the QA is an important and integral part of the team, involved in all phases of the software development cycle. To put it in the right perspective, testers do more than "just testing"!