Fundamentals of testing microservices architecture

Increased adoption of digital technologies has pushed the need for speed to the forefront. The ability to conceptualize, develop, and launch new products, and to iterate on them ahead of the competition to gain customer mindshare, has become a critical driver of growth. The adoption of agile principles and the move toward scrum teams are steps in this direction. Disruptive changes have also taken place on the application front, with the tiered monolithic architectures of past decades giving way to ones based on microservices.

A single codebase made a monolithic architecture less risky but slow to change, the exact opposite of a services-based architecture. A microservices architecture makes it easier for multiple development teams to change the codebase in parallel. Transforming an application into a distributed set of services that are highly independent, yet interdependent, makes it possible to create new services and functionalities and to modify services without impacting the overall application. These changes can be made by teams spread across geographies, and it is easier for them to understand a functional module than a humongous application codebase. However, the highly distributed nature of these services also gives them many more ways to fail.

Breaking it down

At its core, a microservices architecture comprises three layers – a REST layer that allows the service to expose APIs, the service layer, and the database layer. A robust testing strategy needs to cover all these layers and ensure that issues do not leak into production. The further an issue travels across stages, the greater its impact, as more teams get affected. Hence the test plan must cover multiple types of testing, such as service testing, subsystem testing, client acceptance testing, and performance testing. The following sections outline key aspects of service-level testing and integration testing in an application landscape based on a microservices architecture.

In service-level testing, each service forming a part of the application architecture needs to be validated. Each service has dependencies on other services and transmits information to them as needed. In a monolithic architecture, connections are established from one class to another within the same Java Virtual Machine (JVM), so the chances of failure are far lower. In a microservices architecture, however, the services are distributed, and accessing another service requires a network call, which makes things more complex.

Functional Validation: The primary goal in service testing is validating the functionality of a service. Key to this is understanding all the events the service handles, through both internal and external APIs. At times this calls for simulating certain events to ensure that the service handles them properly. Collaboration with the development team is essential to understand the incoming events the service handles as part of its functionality. A key element of functional validation – API contract testing – validates the request and response payloads along with areas like pagination and sorting behavior, metadata, etc.
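
Simulating an event and asserting how the service handles it can be sketched in plain Java; the event type, service, and handler logic below are hypothetical and purely illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of event simulation in a service test: feed a synthetic
// event to the handler and assert the observable outcome. The event type
// and handler logic are hypothetical.
public class OrderServiceTestSketch {
    public record OrderCreated(String orderId, double amount) {}

    public static class OrderService {
        final List<String> auditLog = new ArrayList<>();

        public void handle(OrderCreated event) {
            // Hypothetical business rule enforced by the service.
            if (event.amount() <= 0) {
                throw new IllegalArgumentException("amount must be positive");
            }
            auditLog.add("order " + event.orderId() + " accepted");
        }
    }

    public static void main(String[] args) {
        OrderService service = new OrderService();
        service.handle(new OrderCreated("A-1", 25.0)); // simulated event
        System.out.println(service.auditLog);
    }
}
```

In a real service test, the event would arrive over the service's API or event stream rather than a direct method call, but the assertion pattern stays the same.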

Compatibility: Another important aspect is recognizing and preventing backward-compatibility issues, which arise when a changed version of a service is launched and breaks existing clients running in production. Changes to API contracts need to be evaluated in detail to determine whether they are capable of breaking clients in production. Adding a new attribute or parameter may not qualify as a breaking change; however, changes to the response payload, behavior, error codes, or datatypes can break clients, and a change in a value typically implies a change in the logic behind it as well. Such changes need to be uncovered as early as possible in the service testing lifecycle.
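
One way to catch these issues early is to diff the old and new contract definitions: any field the old contract exposes must still exist with the same datatype in the new one. A minimal sketch, where the field names and types are hypothetical:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal backward-compatibility check: any field the old contract exposes
// must still exist with the same datatype in the new contract. Adding a
// field is allowed; removing or retyping one is flagged as breaking.
// Field names and types are hypothetical.
public class ContractCompatibility {
    static Set<String> breakingChanges(Map<String, String> oldContract,
                                       Map<String, String> newContract) {
        Set<String> breaking = new HashSet<>();
        for (Map.Entry<String, String> field : oldContract.entrySet()) {
            String newType = newContract.get(field.getKey());
            if (newType == null || !newType.equals(field.getValue())) {
                breaking.add(field.getKey()); // removed or datatype changed
            }
        }
        return breaking;
    }

    public static void main(String[] args) {
        Map<String, String> v1 = Map.of("orderId", "string", "amount", "number");
        Map<String, String> v2 = Map.of("orderId", "string", "amount", "string",
                "currency", "string"); // "amount" changed type -> breaking
        System.out.println(breakingChanges(v1, v2));
    }
}
```

The same idea underlies consumer-driven contract testing tools, which automate this comparison against recorded client expectations.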

Dependencies: Another area of focus is external dependencies, where both incoming and outgoing API calls are tested. Since these tests depend heavily on the availability of other services, and hence on other teams, there is a strong need to remove that dependency through the use of mocks. Having conversations with developers and getting them to provide mocks while creating individual services enables testing of dependencies without waiting for the real service to be available. It is imperative that the mocks be easily configurable without needing access to the codebase. Mocks also make automation easier, giving teams the ability to run tests independently with minimal configuration.
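
A dependency mock can be as simple as a local HTTP stub whose response is configurable at runtime, with no access to the dependency's codebase required. A minimal sketch using the JDK's built-in HttpServer (the endpoint path and payload are hypothetical; dedicated tools like WireMock offer the same idea with more features):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal configurable mock of an external service using the JDK's
// built-in HTTP server. Endpoint and payload are hypothetical.
public class ServiceMock {
    private final HttpServer server;
    private volatile String cannedResponse = "{}"; // configurable at runtime

    public ServiceMock(int port) throws Exception {
        server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/inventory", exchange -> {
            byte[] body = cannedResponse.getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }

    public int port() { return server.getAddress().getPort(); }
    public void respondWith(String json) { cannedResponse = json; }
    public void stop() { server.stop(0); }

    public static void main(String[] args) throws Exception {
        ServiceMock mock = new ServiceMock(0); // port 0 = pick a free port
        mock.respondWith("{\"stock\": 5}");    // configured by the test, not the codebase

        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(
                        URI.create("http://localhost:" + mock.port() + "/inventory")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body()); // prints {"stock": 5}
        mock.stop();
    }
}
```

The service under test is then pointed at the mock's URL, so outgoing calls can be exercised without the real dependency being deployed.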

Bringing it all together

Once each service is tested for its functionality, the next step is to validate how the various collaborating services work together end to end. Known as subsystem testing or integration testing, this stage tests the functionality exposed by the services as a whole. Understanding the architecture or application blueprint through discussions with the development team is paramount at this stage. Further, real services deployed in the integration environment should be used, rather than the mocks used earlier for external dependencies.

As part of integration testing, there is a need to validate that the services are wired correctly and talking to each other, and that the event streams and inter-service API calls are configured properly so that inter-service communication channels work. If service-level functional testing has been thorough, the chances of finding errors at this stage are minimal, since the mocks created during functional testing would have ensured that each service functions properly on its own.

Looking in depth, we find that testing strategies for microservices are not radically different from those adopted for a monolithic application architecture. The fundamental difference lies in how the interdependencies and communication between the multiple services forming the larger application are tested, to ensure that the application as a whole functions in line with expectations.

Responsible Testing – Human centricity in Testing

Why responsibility in testing?

Consumers demand quality and expect more from products. The DevOps culture emphasizes speed and scale of releases. As CI/CD intersects with quality, it is vital to engage a human element in testing to foresee potential risks and think on behalf of the customer and the end user.

Trigent looks at testing from a multiplicity of perspectives. Our test team gets involved at all stages of the DevOps cycle, not just when the product is ready. For us, responsible testing begins early in the cycle.

Introduce the Quality factor in DevOps

A responsible testing approach goes beyond the call of pre-defined duties and facilitates end-to-end stakeholder assurance and business value creation. Processes and strategies like risk assessment, non-functional tests, and customer experiences are baked into testing. Trigent’s philosophy of Responsible Testing characterizes all that we focus on while testing for functionality, security, and performance of an application.

Risk coverage: Assessing failures and their impact early on is one of the most critical aspects of testing. We work alongside our clients’ product development teams to understand what is important to stakeholders and to evaluate and anticipate the risks involved early on, giving our testing a sharp focus.

Collaborative Test Design: We consider the viewpoints of multiple stakeholders to get a collaborative test design in place. Asking the right questions of the right people to get their perspectives helps us test better.

Customer experience: The Responsible Testing philosophy strongly underlines customer experience as a critical element of testing. We test for all the promises made at each customer touchpoint.

Test early, test often: We take the shift-left approach early in the DevOps cycle. More releases and shorter release cycles mean testing early and testing often, translating into constantly rolling out new and enhanced requirements.

Early focus on non-functional testing: We plan for non-functional testing needs at the beginning of the application life cycle. Our teams work closely with the DevOps teams to test for security, performance, and accessibility as early as possible.

Leverage automation: In our Responsible Testing philosophy, we look at automation as a means to make the process work faster and better, and to leverage tools that give better insights into testing and the areas to focus on. The mantra is judicious automation.

Release readiness: We evaluate all aspects of going to market – checking that we are operationally ready and planning for the support team’s readiness to take on the product. We also evaluate the readiness of the product itself and its behavior when it is actually released, and prepare for the subsequent changes expected.

Continuous feedback: Customer reviews and feedback speak volumes about their experience with the application. We see them as an excellent opportunity to address customer concerns in real time and offer a better product. Adopting the shift-right approach, we focus on continuously monitoring product performance and leveraging the results to improve our test focus.

Think as a client. Test as a consumer.

Responsibility in testing is an organizational trait that is nurtured into Trigent’s work culture. We foster a culture where our testers imbibe qualities such as critical thinking on behalf of the client and the customer, the ability to adapt, and the willingness to learn.

Trigent values these qualitative aspects and soft skills in a responsible tester that contribute to the overall quality of testing and the product.

Responsibility: We take responsibility for the quality of testing of the product and also for the possible business outcomes.

Communication: In today’s workplace, collaborating with multiple stakeholders and teams within and outside the organization is the reality. We emphasize not just functional skill sets but also the ability to understand people, empathize with different perspectives, and express requirements effectively across levels and functions.

Collaboration: We value good collaboration among business analysts, product owners, developers, and QA – a trait critical to understanding the product features and usage models and to working seamlessly with cross-functional teams.

Critical thinking: As drivers of change in technology, it is critical to develop a mindset of asking the right questions and anticipating future risks for the business. In the process, we focus on gathering relevant information from the right stakeholders to form deep insights about the business and the consumer. Our Responsible Testing approach keeps the customer experience at the heart of testing.

Adaptability & learning: In the constantly changing testing landscape, being able to quickly adapt to new technologies and the willingness to learn helps us offer better products and services.

Trigent’s Responsible Testing approach is a combination of technology and human intervention that elevates the user experience and the business value. To experience our Responsible Testing approach, talk to our experts for QA & Testing solutions.

Learn more about responsible testing in our webinar and about Trigent’s software testing services.

Web Application Testing with Selenium WebDriver

Selenium WebDriver is a popular open-source tool used for the automation of web applications. It offers a simple, concise programming interface and supports dynamic web pages where the elements of a page may change before the page reloads. It overcomes the limitations of Selenium RC, and its unique value proposition is its ability to drive the web browser using the browser's native support for automation.

WebDriver is the main interface in Selenium, using which tests are written. This interface is implemented by various classes, as listed below:
  • FirefoxDriver
  • ChromeDriver
  • InternetExplorerDriver
  • SafariDriver
  • RemoteWebDriver
  • EventFiringWebDriver
  • HtmlUnitDriver
  • AndroidDriver

Selenium WebDriver is available in several programming languages such as Java, C#, Ruby, Perl, Python, and PHP. We can use the programming language of our choice when using Selenium.

In this blog, I am focusing on the features provided by Selenium for automation of web applications.

Native support for web browsers

As indicated earlier, Selenium WebDriver drives the browser using the browser's native support for automation. The Firefox browser has built-in support for it; however, for Internet Explorer and Chrome we need to use third-party drivers.

Declaration and usage are as below. Note that for Chrome and Internet Explorer, the path to the driver executable must be set before the driver is instantiated:

  • Firefox
WebDriver driver = new FirefoxDriver();
  • Chrome
System.setProperty("webdriver.chrome.driver", "path to Chrome browser driver");
WebDriver driver = new ChromeDriver();
  • InternetExplorer
System.setProperty("webdriver.ie.driver", "path to IE browser driver");
WebDriver driver = new InternetExplorerDriver();

Multiple locator strategies

Selenium WebDriver provides multiple locator strategies to find elements on a web page. Some of these are listed below:

  • Id
  • Name
  • Xpath
  • CSS
  • Linktext
  • Tagname

Of all the locator types, Id is the most preferred because of its performance, followed by CSS.

Extensive set of simple commands to drive the web

Selenium WebDriver has an extensive set of commands that can be used to interact with a web page. These commands are simple and can be wrapped into reusable methods. Some common commands are as follows:

  • findElement – Finds the first matching web element on the page
  • By – Class used with a locator to find a web element
  • sendKeys – Passes text content to an input field
  • click – Performs a click action
  • submit – Performs a submit action on the page
  • getText – Gets the text content of a web element
  • getAttribute – Gets attribute values from a web element
  • getCurrentUrl – Gets the current page URL
  • switchTo() – Switches to a window, frame, or alert
  • navigate() – Navigates to a URL, forward or backward
  • get – Navigates to a web page
  • isEnabled, isDisplayed, and isSelected – Get the status of a web element

Select Class to handle dropdowns

WebDriver provides the Select class to handle dropdowns on a web page. The element can be single-select or multi-select, and several methods are available to handle each appropriately. Commonly used methods are as below:

  • selectByVisibleText, selectByValue, and selectByIndex – Select an option by its visible text, value attribute, or index
  • deselectAll – Deselect all options (multi-select only)
  • getOptions – Get all options of the dropdown
  • getFirstSelectedOption – Get the currently selected option
  • isMultiple – Check whether the dropdown allows multiple selections
Implicit and Explicit Waits

Automated tests often need to wait for something on the web page before performing the next action. Selenium WebDriver provides both Implicit and Explicit Waits for this purpose.

  • Implicit Wait tells WebDriver to keep polling for an element until the configured time is exhausted before failing. Once set, this wait applies to all element lookups across the session.
  • Explicit Wait is used in situations where we need to wait for a specific condition to occur before moving further, typically using WebDriverWait along with ExpectedConditions, where we can specify the duration in seconds. (A plain Thread.sleep pauses unconditionally for the full duration and is best avoided in favor of condition-based waits.)
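
The polling idea behind an explicit wait can be sketched without a browser; the class name, timings, and condition below are illustrative, not Selenium's actual implementation:

```java
import java.util.function.Supplier;

// Minimal polling wait, mirroring the idea behind WebDriverWait:
// repeatedly evaluate a condition until it yields a usable result
// or the timeout expires. Names and timings are illustrative.
public class PollingWait {
    public static <T> T until(Supplier<T> condition, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            T result = condition.get();
            if (result != null && !Boolean.FALSE.equals(result)) {
                return result; // condition met
            }
            Thread.sleep(pollMillis); // pause briefly before polling again
        }
        throw new IllegalStateException("Condition not met within " + timeoutMillis + " ms");
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~200 ms, like an element appearing late.
        Boolean ready = until(
                () -> System.currentTimeMillis() - start > 200 ? true : null,
                2000, 50);
        System.out.println("ready=" + ready);
    }
}
```

WebDriverWait works the same way, except the condition is an ExpectedCondition evaluated against the driver.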

Actions class for advanced interactions

Selenium WebDriver provides the Actions class to perform advanced user interactions, such as keyboard actions (key up and key down with ALT, CONTROL, or SHIFT) and mouse actions (double click, drag and drop). Several other actions are available to make automation easier and more effective.

Support for Ajax

Selenium WebDriver supports Ajax effectively and can be used to perform actions on web elements that are loaded dynamically on a page. Explicit waits with ExpectedConditions (such as visibilityOfElementLocated and invisibilityOfElementLocated) let us wait for an element to appear or disappear. These condition-based waits help us avoid placing a fixed manual delay and thus reduce the chance of script failure when the element has not appeared within the delay provided.

Support for distributed test execution

Selenium WebDriver is available in two forms

  • WebDriver
  • Selenium Server

WebDriver runs tests on the local machine, while Selenium Server helps run tests on remote machines, allowing us to distribute tests over the network across multiple machines.

Get screenshot

We can take a screenshot of the application whenever required using Selenium WebDriver. Screenshots are usually captured on failure conditions to get details about a failure scenario.

Taking a screenshot is very simple, as described below (FileUtils comes from the Apache Commons IO library):

File screenshotFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
FileUtils.copyFile(screenshotFile, new File("Location where screenshot needs to be saved"));


The main advantage of Selenium WebDriver is that a single script can be used across various operating systems and browsers. To summarize, based on this overview of Selenium WebDriver's features, you will agree that web application automation can be done effectively and easily using Selenium WebDriver. To make automation consistent and reliable, we need to build a framework utilizing all the above features along with tools such as loggers, unit testing frameworks, listeners, and reporting utilities.

Read Other Blogs on Selenium:

Getting Started – Selenium with Python Bindings

Introduction to Selenium with C Sharp

How to Get Started with Test Automation?

In the recent past, software companies have shown a preference for releasing products and applications faster, with a focused emphasis on quality assurance. To ensure both rapidity and accuracy, the testing methodology often adopted focuses on newly added features, with minimal attention paid to regression testing, as it cannot be accommodated within the limited time available.

However, based on my experience, I recommend test automation as the right choice to overcome the above challenge. Test automation is best suited for long-term projects, as automation requires time to set up before its benefits are maximized. It needs dedicated effort, time, support, and constant monitoring to be successful. In this blog, I will discuss the steps to be followed for test automation.

Deciding when to start with automation

For test automation to work efficiently, the application has to be functionally stable. If we automate an application that is still a work in progress, the automated tests can fail for reasons unrelated to regressions. Automation should therefore be launched only once the manual testing cycle has been completed. Tests developed on a functionally stable application help find regression failures and bugs easily.

Choose the right resources to perform test automation

Test automation is equivalent to software development, hence the right resources need to be chosen. Engineers with only manual testing skills should ideally not be chosen for automation, as they may not have the requisite skills. A sample list of required skill sets is as follows:

  • Automation Architect – Builds frameworks, chooses tools, and sets scripting standards.
  • Automation Engineer – Creates test scripts.

If we are able to find the right person for the automation architect's role, then most of the problems related to introducing automation in the project will be resolved.

Select the right tool

Tool selection is an important criterion for test automation, and there are various open-source and commercial tools available in the market. Some important criteria to consider when choosing a tool are:

  • Operating systems, application types to be supported
  • Object recognition capability and recovery mechanisms
  • Reports provided
  • Support for third party controls and plug-ins
  • Integration support for other test management tools
  • Customer service support

Before confirming the tool, perform a proof of concept (POC) on the application under test (AUT) to ensure that the tool is able to work with the application.

Some popular tools of my choice include Selenium WebDriver, SilkTest, TestComplete, and Telerik Test Studio. The same tools also serve as automated regression testing tools for web applications.

Build the Automated Regression Testing Framework

The test automation framework plays a vital role in test automation and is created by the automation architect. The framework is the deciding factor behind the success or failure of an automation project. At the beginning of the project, build a framework that provides the minimum capabilities required to execute tests and report results; going forward, the framework can be enhanced with additional capabilities. Scripting standards have to be established and followed for consistency.

Following are the desired capabilities to have in the framework:

  • Ability to configure the test environment and settings
  • Platform to create test scripts with externalized test data
  • Ability to configure which test cases need to be executed
  • Ability to rerun failed test cases
  • Ability to capture a screenshot and log trace on failure
  • Report that indicates the pass/fail count

I strongly believe that it is always better to build a hybrid framework with the above-mentioned capabilities.
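
As an illustration of externalizing test configuration, here is a minimal sketch using java.util.Properties; the keys, values, and class name are hypothetical, and a real framework would load these from a .properties file or the CI system:

```java
import java.io.StringReader;
import java.util.Properties;

// Minimal sketch of externalized test configuration: the framework reads
// environment details from a properties source instead of hard-coding them.
// Keys, values, and defaults are hypothetical.
public class TestConfig {
    private final Properties props = new Properties();

    public TestConfig(String source) throws Exception {
        // In practice this would be a FileReader over a .properties file.
        props.load(new StringReader(source));
    }

    public String baseUrl()      { return props.getProperty("base.url", "http://localhost:8080"); }
    public String browser()      { return props.getProperty("browser", "chrome"); }
    public boolean rerunFailed() { return Boolean.parseBoolean(props.getProperty("rerun.failed", "false")); }

    public static void main(String[] args) throws Exception {
        String sample = "base.url=https://qa.example.com\nbrowser=firefox\nrerun.failed=true";
        TestConfig cfg = new TestConfig(sample);
        System.out.println(cfg.baseUrl() + " " + cfg.browser() + " " + cfg.rerunFailed());
    }
}
```

With this in place, the same scripts can target a different environment or browser by editing a configuration file rather than the code.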

Choose and build proper tests

Choosing tests for automation is very important to derive benefits from test automation, and we need to be selective about which tests make the plan. The following criteria can be used to choose tests for automation:

  • Business-critical tests
  • Integration tests where application integration can fail easily
  • Repetitive tests which are error-prone when done manually
  • Tests that need to be run with multiple sets of data
  • Tests that need to be executed on multiple hardware and software combinations
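
A test that must run against multiple sets of data is a natural fit for a simple data-driven loop, where the data lives outside the test logic. A minimal sketch; the rule under test (a discount calculation) and the data are hypothetical:

```java
import java.util.List;
import java.util.Map;

// Minimal data-driven test sketch: the same check runs once per data set.
// The logic under test and the test data are hypothetical.
public class DataDrivenExample {
    // Hypothetical logic under test: 10% discount on orders of 100 or more.
    static double discountedTotal(double amount) {
        return amount >= 100 ? amount * 0.9 : amount;
    }

    public static void main(String[] args) {
        // Externalized test data: input amount -> expected total.
        // In a real framework this would come from a CSV or spreadsheet.
        List<Map.Entry<Double, Double>> cases = List.of(
                Map.entry(50.0, 50.0),
                Map.entry(100.0, 90.0),
                Map.entry(200.0, 180.0));

        for (Map.Entry<Double, Double> c : cases) {
            double actual = discountedTotal(c.getKey());
            if (Math.abs(actual - c.getValue()) > 1e-9) {
                throw new AssertionError("Failed for input " + c.getKey());
            }
        }
        System.out.println("All " + cases.size() + " data sets passed");
    }
}
```

Adding a new scenario then means adding a row of data, not writing a new test.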

Have continuous integration for feedback

For test automation to be effective, the test scripts need to be executed constantly and failures need to be fixed. The automated tests should be configured and executed in the continuous integration system and deliver results. Failures need to be analyzed to determine whether each one is an application bug or a script issue, and fixed accordingly.

Jenkins is a popular open-source continuous integration tool that can be used for this purpose.

Learn more about Trigent’s automation testing services.