Fundamentals of microservices architecture testing

The importance of microservices architecture testing

Increasing digital adoption has pushed the need for speed to the forefront. The ability to conceptualize, develop, and launch new products, and to iterate and improve them well ahead of the competition to win customer mindshare, has become a critical driver of growth. The adoption of agile principles and the movement towards scrum teams for increased agility are steps in this direction. Disruptive changes have also taken place on the application front, with the 3-tier architecture of the late 90s and subsequent monolithic architectures giving way to one based on microservices.

A single codebase made a monolithic architecture less risky but slow to adopt changes, the exact opposite of a services-based architecture. A microservices architecture makes it easier for multiple development teams to change the codebase in parallel. Transforming an application into a distributed set of services that are highly independent, yet interdependent, makes it possible to create new services and functionalities and to modify existing services without impacting the overall application. These changes can be made by teams spread across geographies, and it is easier for them to understand individual functional modules than a humongous application codebase. However, the highly distributed nature of these services also gives them more ways to fail.

Breaking it down – Testing strategies in a microservice architecture

At its core, a microservices architecture comprises three layers – a REST layer that allows the service to expose APIs, the service layer, and the database layer. A robust testing strategy needs to cover all these layers and ensure that issues do not leak into production. The further an issue travels across stages, the greater its impact, as more teams get affected. Hence the test plan must cover multiple types of testing, such as service testing, subsystem testing, client acceptance testing, and performance testing. The following paragraphs outline key aspects of service-level testing and integration testing in a microservices-based application landscape.

In service-level testing, each service forming part of the application architecture needs to be validated. Each service has dependencies on other services and transmits information to them as needed. In a monolithic architecture, connections are established from one class to another within the same Java Virtual Machine (JVM), so the chances of failure are far lower. In a services architecture, however, the services are distributed and must make network calls to reach one another, which makes testing more complex.

Functional Validation: The primary goal in services testing is the functionality validation of a service. Key to this is the need to understand all events the service handles through both internal as well as external APIs. At times this calls for simulating certain events to ensure that they are being handled properly by the service. Collaboration with the development team is key to understand incoming events being handled by the service as part of its functionality. A key element of functional validation – API contract testing, tests the request and response payload along with a host of areas like pagination and sorting behaviors, metadata, etc.
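The contract-testing idea above can be sketched in a few lines. The endpoint shape, field names, and pagination metadata below are hypothetical, purely for illustration:

```python
# A minimal sketch of API contract validation for a hypothetical endpoint
# whose response carries a list of items plus pagination metadata.

REQUIRED_FIELDS = {"id": int, "status": str, "total": float}
REQUIRED_META = {"page": int, "page_size": int, "total_count": int}

def validate_contract(response: dict) -> list:
    """Return a list of contract violations (empty list means the payload conforms)."""
    errors = []
    for field, expected in REQUIRED_META.items():
        if not isinstance(response.get("meta", {}).get(field), expected):
            errors.append(f"meta.{field}: expected {expected.__name__}")
    for i, item in enumerate(response.get("items", [])):
        for field, expected in REQUIRED_FIELDS.items():
            if not isinstance(item.get(field), expected):
                errors.append(f"items[{i}].{field}: expected {expected.__name__}")
    return errors

# A conforming payload produces no violations.
good = {"meta": {"page": 1, "page_size": 10, "total_count": 1},
        "items": [{"id": 7, "status": "SHIPPED", "total": 99.5}]}
# A payload with a wrong datatype ("1" instead of 1) is flagged.
bad = {"meta": {"page": "1", "page_size": 10, "total_count": 1}, "items": []}
```

A real setup would run such checks against recorded request/response pairs for every service API, including sorting and pagination behaviors.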

Compatibility: Another important aspect is recognizing and negating backward-compatibility issues. These arise when a changed version of a service is launched and breaks existing clients running in production. Changes to API contracts need to be evaluated in detail to understand whether they are mandatory and capable of breaking clients in production. Adding a new attribute or parameter may not qualify as a breaking change; however, changes to the response payload, behavior, error codes, or datatypes can break clients, since a change in a value typically implies a change in the logic behind it as well. Such changes need to be uncovered early in the service testing lifecycle.
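A hedged sketch of how breaking changes can be detected mechanically. Contracts are modelled here as simple field-to-type maps; a real setup would use a contract-testing tool, but the rule is the same: removals and datatype changes break clients, additions do not:

```python
# Detect breaking changes between two versions of an API contract,
# where each contract is a mapping of field name -> type name.

def breaking_changes(old: dict, new: dict) -> list:
    changes = []
    for field, dtype in old.items():
        if field not in new:
            changes.append(f"removed: {field}")
        elif new[field] != dtype:
            changes.append(f"type changed: {field} ({dtype} -> {new[field]})")
    return changes  # fields added in `new` are ignored: not breaking

v1 = {"id": "int", "name": "string", "price": "float"}
# v2 adds `discount` (harmless) but changes the datatype of `price` (breaking).
v2 = {"id": "int", "name": "string", "price": "string", "discount": "float"}
```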

Dependencies: Another area of focus is external dependencies, where both incoming and outgoing API calls are tested. Since these depend heavily on the availability of other services, and hence other teams, there is a strong need to remove that dependency through mocks. Working with developers to insert mocks while creating individual services enables dependency testing without waiting for the real service to be available. It is imperative that the mocks be easily configurable without needing access to the codebase. Mocks also ease automation, giving teams the ability to run tests independently with no extra configuration.
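As a sketch of configurable mocking, the snippet below stubs a hypothetical inventory dependency with Python's built-in unittest.mock; the canned responses could equally be loaded from a config file, so testers never need access to the dependency's codebase:

```python
from unittest.mock import Mock

# Canned responses; in practice these could be loaded from test data files
# so they are configurable without touching any codebase.
CANNED = {"sku-1": {"sku": "sku-1", "in_stock": 3}}

inventory_client = Mock()
inventory_client.get_inventory.side_effect = lambda sku: CANNED.get(
    sku, {"sku": sku, "in_stock": 0})

def can_fulfil(order_sku, qty, client):
    """Hypothetical service logic under test: checks stock via the (mocked) dependency."""
    return client.get_inventory(order_sku)["in_stock"] >= qty
```

The mock also records every call it receives, so outgoing API usage can be asserted on without any network access.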

Understanding Microservices Architecture Testing

Once each service is tested for its functionality, the next step is to validate how the various collaborating services work together end to end. Known as subsystem testing or integration testing, this tests the functionality exposed by the services as a whole. Understanding the architecture or application blueprint through discussions with the development team is paramount at this stage. Further, real services deployed in the integration environment should be used, rather than the mocks that stood in for external dependencies.

As part of integration testing, there is a need to validate that the services are wired correctly and talking to each other. The event streams and inter-service API calls need to be configured properly so that the inter-service communication channels work. If service-level functional testing was thorough, the chances of finding errors at this stage are minimal, since the mocks created during functional testing would have ensured that each service behaves properly on its own.

Looking in-depth, we find that the testing strategies in a microservices architecture are not extremely different from those adopted for a monolithic application architecture. The fundamental difference lies in how the interdependencies and communication between the multiple services forming the larger application are tested, to ensure that the application as a whole functions in line with expectations.

Bandwidth Testing for superior user experience – here’s how

Bandwidth testing simulates a low-bandwidth internet connection and checks how your application behaves at the desired network speed.

Consider a scenario where an application’s home page always loads within milliseconds on office premises; this may not be the case when an end-user with a slow connection accesses the application. To improve the user experience, we can simulate specific network bandwidth speeds, measure the application’s load times, and identify the specific component or service call that takes the most time and can be improved.

How to test bandwidth


A bandwidth speed test can be done using the Chrome browser. Set up the ‘Network’ panel in Chrome as per the requirements below.


  1. Go to Customize and control Google Chrome at the top right corner and click More tools, then select Developer tools
    • Or press keyboard shortcut Ctrl + Shift + I
    • Or press F12
  2. Click the ‘No throttling’ dropdown and choose the Add… option under the Custom section.
  3. Click Add custom profile.
  4. Enter a profile name, for example ‘TestApp 1 MBPS’; the Add button is enabled once a name is entered.
  5. Fill in the Download, Upload, and Latency columns and click Add.

Create one profile per target speed – for example 100 Kbps, 1 Mbps, and 2.5 Mbps – filling in the Download (kb/s), Upload (kb/s), and Latency (ms) columns to match.

Configuring Chrome is a one-time affair. Once Chrome has been configured for bandwidth speed test, use the same settings by selecting the profile name [TestApp 1 MBPS] from the No Throttling drop-down.
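For automated runs, the same throttling can be scripted. The sketch below assumes Selenium’s Chrome driver, whose set_network_conditions() takes throughput in bytes per second while the DevTools panel uses kilobits per second; the profile values here are illustrative, not taken from any real profile:

```python
# Convert a DevTools-style kb/s value to the bytes/second expected by
# Selenium's ChromeDriver (assuming 1 kb/s = 1000 bits/s).
def kbps_to_bytes_per_sec(kilobits_per_sec: float) -> int:
    return int(kilobits_per_sec * 1000 / 8)

# Example: a 'TestApp 1 MBPS' profile (1000 kb/s down, 500 kb/s up, 100 ms latency).
profile = {
    "latency": 100,  # ms
    "download_throughput": kbps_to_bytes_per_sec(1000),
    "upload_throughput": kbps_to_bytes_per_sec(500),
    "offline": False,
}

# With a live ChromeDriver session this would be applied as:
#   driver.set_network_conditions(**profile)
```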

Metrics to be collected for bandwidth testing:

  • Data transferred (KB)
  • Time taken for data transfer (seconds)

Using the Record network activity option in the Chrome browser, you can capture the above metrics.

Note: The “Record network log”/“Stop recording network log” toggle button and the “Clear” button are available in the Network panel.

It is best practice to close all non-testing applications and tools on the system, as well as other tabs in the Chrome window where the testing is performed.

Steps for recording network activity:

  1. Open Developer Tools and select the Network tab.
  2. Clear network log before testing.
  3. Make sure Disable cache checkbox is checked.
  4. Select the created network throttling profile (say ‘TestApp 1 MBPS’).
  5. Start recording for the steps to be measured as per the scenario file.
  6. Wait for the step to complete and the page to load fully before checking the results.
  7. The data transferred for each recorded step is displayed in the status bar at the bottom, in bytes/kilobytes/megabytes. Make a note of it.
  8. The time taken for the data transfer is displayed in the timeline graph, whose horizontal axis is in milliseconds. Take the maximum (approximate) value from the graph and make a note of it.

In a sample run of the Snapdeal application’s login page, a specific JS component (base.jquery111.min.js) took 4.40s to load, and while searching for a product, searchResult.min.js took 4.08s. Both can be improved for a better user experience.

This bandwidth testing process improves the user experience by identifying the specific components or API calls that take more time to load, helping developers fix them.

Your application’s performance is a major differentiator that decides whether it succeeds or fails to meet expectations. Ensure your applications are tuned for optimal performance and success.

Improve page load speed and user experience with Trigent’s testing services

Trigent’s experienced and versatile Quality Assurance and Testing team is a major contributor to the successful launch, upgrade, and maintenance of quality software used by millions around the globe. Our responsible testing practices put process before convenience, delighting stakeholders with an industry-leading Defect Escape Ratio (DER) of 0.2.

Got a question? Contact us for a consultation

Improve the quality of digital experiences with Performance Engineering

Quality at the heart of business performance

“In 2020, the key expectation is fast, reliable, and trustworthy software.” *

As businesses embrace the Agile/DevOps culture and the emphasis on CI/CD grows, quality assurance is often still seen as a standalone task, limited to validating implemented functionality. When QA and testing are an afterthought in an Agile/DevOps culture, the result is a subpar consumer experience, followed by an adverse impact on the revenue pipeline. Poor customer experience also directly impacts brand credibility and business equity. While UI/UX are the visible elements of the customer experience, product or service performance is a critical element that is often neglected. Performance testing identifies the gaps, which are then addressed through performance engineering.

Small steps, significant gains – the journey towards Performance Engineering

The deeper issue lies in the organization’s approach towards quality and testing – it is treated as an independent phase rather than a collaborative, integrated practice. Performance engineering is a set of methodologies that identifies potential risks and bottlenecks early in the development stage of the product and addresses them. While performance is clearly an essential ingredient of product quality, there is a deeper need for a change in thinking: to think proactively, anticipate issues early in the development cycle, test, and deliver a quality experience to the end consumer. An organization that makes gradual changes on its journey towards performance engineering stands to gain significantly. The leadership team, product management, engineering, and DevOps all need to take a shift-left approach towards performance engineering.

Make Performance Engineering your strategic priority today

Despite the obvious advantages, performance testing is typically a reactive measure that is addressed after the initial launch. However, organizations need to embrace performance engineering measures right from the design phase, start small, and take incremental steps towards change.

Covid-19 has rapidly changed the way consumers behave globally. Businesses caught onto remote working; consumers moved shopping, entertainment, banking, learning, and medical consultations online. Consider the quantum jump in usage triggered by the pandemic.

The dramatic increase in the use of digital services has covered decades in days.**

Companies that adopted scalability and performance centric design have moved swiftly to capture the market opportunity.

With multiple user-interfaces across sectors being the norm and the increasing complexity of digital experiences, it is critical for businesses to get it right the first time in order to gain and retain customers’ trust.

As cloud migrations continue, whether rehosting an app on IaaS or rebuilding it with a new approach, performance engineering ensures that migrated systems withstand sudden surges in usage. According to a Sogeti and Neotys report, 74% of load testing infrastructure is operated in the cloud today. Cloud infrastructure providers ensure reliability, but they may not be aware of the performance metrics that matter to the business or their impact. As organizations move from monolithic systems to distributed architectures assembled from an assortment of providers, corporate leaders need to recognize the importance of performance engineering and embrace it to deliver solutions right the first time.

Our approach to Performance Engineering philosophy

At Trigent, we put the customer experience at the heart of planning the entire testing cycle. Our performance engineering practices align with ‘metrics that matter’ to businesses in the DevOps framework. While testing identifies the gaps in performance, the onus of architecting it right lies on the DevOps engineering team with proactive inputs from QA and Testing.

Performance engineering is also a way of thinking: the ability to plan for performance at design time, right at the beginning. Beyond testing for functionality, anticipating potential bottlenecks helps us assess the process in its entirety from the start.

Asking some of these customer-centric questions early on shifts the perspective right at the outset. Ask them early, and you’re on your way to a performance engineering culture.

Parameters that matter

‘Will my application meet the defined response-time requirements of my customers?’

Consider an app that doesn’t respond within the standards the customer expects; the chances of that application staying on the customer’s phone screen are pretty slim.

‘Will the application handle the expected user load and beyond?’

An application that tested well with 10 users may fail when that number is multiplied by a thousand or two.

We take the viewpoints of multiple stakeholders, consider parameters that matter to the customer, and assess impact early on.

Customer experience matters

Performance Engineering takes into account the overall experience of the end-user and their environment.

Asking pertinent questions such as ‘Will my users experience acceptable response times, even during peak hours?’ or ‘Does the application respond quickly enough for the intended users?’ does well to anticipate potential pitfalls in network usage and latency.

‘Where are the bottlenecks in my multi-user environment?’

Understand the real environment of the user and their challenges to provide a quality user experience.

Early Focus

Non-functional aspects are integrated into DevOps, and an early focus on performance gives us insights into architectural issues.

‘How can we optimize the multi-user application before it goes live?’
‘How can we detect errors that only occur under real-load conditions?’

Quick course corrections help optimize performance and make the product market-ready. Besides faster deployment, quality assurance gives our clients an added advantage of reduced performance costs.

Architect it right

‘What system capacity is required to handle the expected load?’
‘Will the application handle the number of transactions required by the business?’

Important questions like these focus on architecting the product for performance. As part of the performance engineering methodology, our teams consistently check and validate the capabilities at the time of developing the product or scaling it up. We take the shift-left and shift-right approach to anticipate, identify, and remove bottlenecks early on. Getting the architecture right enables us to deliver and deploy a high-quality product every time.

Performance engineering done right is sure to improve the planning-to-deployment time with high-quality products. Plus, it reduces performance costs arising out of unforeseen issues. A step-by-step approach in testing makes sure organizations move towards achieving performance engineering. Talk to our experts for scalable performance engineering solutions for your business.

Learn more about Trigent software testing services.

* The State of Performance Engineering 2020 – A Sogeti and Neotys report
** Meet the next-normal consumer – A McKinsey & Company report

Poorly Tested App Crashes the Iowa Caucus

We may not know who won the Iowa caucus – but we certainly know that a poorly tested smartphone app lost.

This Monday, the Iowa caucus has nearly been rendered meaningless due to the Democratic party’s inability to reliably report and tally the results. While there have been reports that the overall caucus process and the management were deficient, the smartphone app that was used for reporting the results failed miserably.

Here is my review of this incident purely from a software testing perspective. There are a lot of valuable lessons to be learned from it.

Do not introduce new technology at a critical time. With the entire nation watching this first vote of the 2020 Presidential election, using a new app was the wrong decision. It is also unclear how much effort and time the software maker or the Democratic party put into training all the users. It is prudent and practical to try new technologies in a smaller, limited environment to ensure both the quality of the application and user adoption.

Installation Failures. It is reported that many caucus chairs were trying to install the app on the same day or the previous night. Software teams should ensure that users are able to successfully install the app on their devices well before the time of usage. This can be done with timely email reminders carrying detailed instructions, and with usage analytics collected back from the app itself. Proper app installation testing will ensure that the app can be successfully installed across a variety of devices, models, carriers, and operating systems. Also, test both clean installs and reinstalls or updates.

Too much security is as bad as too little security. I can understand the need to make the app attack-resistant, especially given the history of external influence. But we cannot build apps with so many layers of security that frustrated users abandon them.

In this case, many caucus chairs felt that the series of PINs and layers of security absolutely mucked it up. Unable to use the app, they all ended up calling the hotline. A good QA team performing usability tests can often find such issues and work to increase ease of use.

White-box, unit, and integration testing are essential. The CEO of the company that made the app said that the “problem was caused by a bug in the code that transmits results data.” QA teams should focus their attention on the interactions at the boundaries of systems. Usually this is done through robust integration testing and proper unit testing. Systems should also be tested for typical end-of-day operations, such as this transmission of results.

Test for extreme conditions. QA teams should test mobile apps for extreme conditions such as low bandwidth, congested network traffic, intermittent connectivity, and high user loads. Teams often use tools to emulate low bandwidth and high latency to test the applications.

In summary, using an experienced and skilled QA team to validate your applications on real devices under real-life conditions is essential. Trigent offers a full suite of testing services covering both mobile and cloud applications. We provide testing services that match the speed of innovation.

Getting Started with Load Testing of Web Applications using JMeter

Apache JMeter:

JMeter is one of the most popular open-source tools for load and performance testing. It simulates browser behavior, sending requests to the web or application server under different loads. Running JMeter on your local machine, you can scale up to approximately 100 virtual users, but you can go beyond 1,000,000 virtual users with CA BlazeMeter, which is essentially JMeter in the cloud.

Downloading and Running the Apache JMeter:


Since JMeter is a pure Java-based application, the system should have Java 8 or a higher version installed.

Check for Java installation: open a command prompt and type `java -version`; if Java is installed, the Java version will be displayed.


If Java is not installed, download and install Java first.

Downloading JMeter:
  • Download the latest version of JMeter from the Apache JMeter website
  • Under Binaries, click the .zip link

How to Run the JMeter:

You can start JMeter in 3 ways:

  • GUI Mode
  • Server Mode
  • Command Line

GUI Mode: Extract the downloaded zip file to any of your drives, go to the bin folder (e.g. D:\apache-jmeter-3.2\bin) and double-click the “jmeter” Windows batch file.

After that, the JMeter GUI appears.

Before you start recording the test script, configure the browser to use the JMeter Proxy.

How to configure Mozilla Firefox browser to Use the JMeter Proxy:

  • Launch the Mozilla Firefox browser–> click on Tools Menu–> Choose Options
  • In Network Proxy section –> Choose Settings
  • Select Manual Proxy Configuration option
  • Enter value for HTTP Proxy as localhost or you can enter your local system IP address.
  • Enter the port as 8080 or you can change the port number if 8080 port is not free
  • Click on OK. Now your browser is configured with the JMeter proxy server.

Record the Test Script of Web Application:

1. Add a Thread Group to the Test Plan: the Test Plan is our JMeter script, and it describes the flow of our load test.

Select the Test plan –> Right click–> Add–> Threads (Users) –> Thread Group

Thread Group:

The thread group describes the user flow and simulates how users will behave on the app.

The thread group has three important properties, which influence the load test:

  • Number of threads (users): the number of virtual users that JMeter will attempt to simulate, for example 1, 10, 20, or 50.
  • Ramp-up period (in seconds): the time allowed for the Thread Group to go from 0 to n (say 20) users, for example 5 seconds.
  • Loop count: the number of times to execute the test; 1 means the test executes once.
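The interplay of these three properties can be sketched with a little arithmetic, assuming a linear ramp-up:

```python
def load_profile(threads: int, ramp_up_s: float, loop_count: int) -> dict:
    return {
        # one new virtual user starts roughly every ramp_up_s / threads seconds
        "start_interval_s": ramp_up_s / threads,
        # each user runs the recorded scenario loop_count times
        "total_iterations": threads * loop_count,
    }

# 20 users ramped up over 5 seconds, each looping twice:
profile = load_profile(20, 5, 2)
```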
2. Add a Recording Controller to the thread group: the Recording Controller will hold all the recorded HTTP Request samples.

Select the thread group –> right click–> Add –> Logic Controller –> recording controller

3. Add the HTTP Cookie Manager to the thread group:

The HTTP Cookie Manager lets your test use cookies the way the web app does.

4. Add a View Results Tree to the thread group: the View Results Tree is used to see the status of the HTTP sample requests when the recorded script is executed.

Thread group –> Add–> Listeners–> View Result Tree

5. Add Summary Report: Summary report will show the test results of the script

Thread group –> Add –> Listeners –> Summary Report.

6. Go to the WorkBench and Add the HTTP(S) Test Script Recorder: Here you can start your test script recording.

WorkBench –> Right click –> Add–> Non Test Elements –> HTTP(S) Test Script Recorder.

Check whether port 8080 (this must be the same port set in the browser proxy settings) is available or busy on your system. If it is busy, change the port number.

7. Finally, click the Start button; when the popup appears, click OK.

8. How to Record the browsing files from the Web App:

If your test script includes an option to browse for files, keep those files in the bin folder of JMeter while recording the browse steps.

Go to the Mozilla browser and start your test, for example the login page or any navigation; do not close JMeter while recording the script. The script will be recorded under the Recording Controller.

Save the Test Plan with .jmx extension.

Run the recorded script: select the Test Plan and press Ctrl+R, or click the Start button in JMeter.

While the script is executing, a green circle is displayed at the top right corner along with a time box showing how long the script has been running. Once execution completes, the green circle turns grey.

Test Results: We can see the Test Results in many ways like, View Results Tree, Summary Report, and Aggregate Graph.

View Result Tree

Summary Report:

After executing the test script, go to the Summary Report, click ‘Save Table Data’, and save the results in .csv or .xlsx format.

Though we get test results in a graphical view, summary report, etc., executing test scripts using the JMeter GUI is not a good practice. I will discuss the execution of JMeter test scripts with the Jenkins integration tool in my next blog.

Read Other Blog on Load Testing:

Load/Performance Testing Using JMeter

Properties & Groovy Scripting in SoapUI


Properties are a central repository to store our information. A property is a named string value that can be accessed from a script.

There are two types of properties in SoapUI, namely, Default Properties and Custom (User-Defined) Properties.

Types of Properties:

Default Properties:

These are the sets of properties that come by default with every SoapUI installation. We can change the values of these properties (though not in every case) and consume them as needed.

Custom Properties:

These are the properties that the user defines as per requirements. They can be used as temporary storage for validating test results.

Also, in SoapUI, properties are defined at multiple levels: Project, Test Suite, Test Case, Test Step, and Global. We shall now look at each of these in detail.

Levels of Properties:

  • Project Properties:

Project properties are the properties that are associated with the current project. This property can be accessed by all the subsets like test suite, test case, test step, a script of the project.

Below are the Groovy scripts to get and set properties from project:

//get property
def projectProperty = testRunner.testCase.testSuite.project.getPropertyValue("projectProperty")
//set property
testRunner.testCase.testSuite.project.setPropertyValue("projectProperty", value)

Test Suite Properties:

Test suite property specifies the properties associated with the current test suite. This property can be used by its subsets like test case, test step and script of test suite.

Groovy scripts to get and set properties from Test Suite:

//get property
def testSuiteProperty = testRunner.testCase.testSuite.getPropertyValue("suiteProperty")
//set property
testRunner.testCase.testSuite.setPropertyValue("suiteProperty", value)

Test Case Properties:

Specify the properties associated with the current test case. It can be used by test step and script of the test cases.

Groovy scripts to get and set properties from Test Case:

//get property
def testCaseProperty = testRunner.testCase.getPropertyValue("caseProperty")
//set property
testRunner.testCase.setPropertyValue("caseProperty", value)

Test Step Properties:

Test step properties specify the properties associated with the current test step. It can be used by its subsets like test step, property transfer and script of the test steps.

Groovy scripts to get and set properties of Test Steps:

//get property
def testStepProperty = testRunner.testCase.getTestStepByName("Name of the Step").getPropertyValue("stepProperty")
//a test step can be fetched by index or by name
def myTestStep = testRunner.testCase.getTestStepAt(indexNumber)
def myTestStepByName = testRunner.testCase.getTestStepByName("Name of the Step")
//set property
testRunner.testCase.getTestStepByName("Name of the Step").setPropertyValue("stepProperty", value)

Global Properties:

Global properties are associated with the installed version of SoapUI. These properties can be accessed across projects, test suites, test cases, and so on.

Groovy scripts to get and set properties from Global:

//get property
def globalProperty = com.eviware.soapui.SoapUI.globalProperties.getPropertyValue("GlobalProperty")
//set property
com.eviware.soapui.SoapUI.globalProperties.setPropertyValue("GlobalProperty", value)


Properties help transfer data between levels such as test suites, test cases, and test steps. Properties can be defined through Groovy scripts, and we can also assign and retrieve property data through Groovy scripts.

Points to Remember When Performing Localization Testing

Before you delve into a specific section on localization testing, read about the philosophy of responsible testing that empowers your end to end testing practice.

Localization testing is the process for checking the localized version of a product for a particular culture or locale setting. Before performing localization testing, it is important to understand two related processes, i.e. globalization and internationalization.

Globalization testing checks whether the software can perform properly in any locale or culture. It is also used to check that the software can function with all types of international inputs. It is commonly called ‘G11N’ as there are 11 characters between the letters G and N.

Internationalization testing, on the other hand, is the process of testing software to ensure that it works uniformly across multiple regions and cultures.

Localization testing validates whether an application is fit for use in a particular location or country. It is also called ‘L10N’ as there are 10 characters between the letters L and N.

 In general terms:

  • Internationalization is a pre-requisite for localization
  • Internationalization and localization are components of globalization
  • Translation is a facet of localization

 Localization testing will always be done prior to a product or service being launched internationally.

 Given below are a few points to remember when performing localization testing:

 Pre-localization testing:

  • Earlier translated products are available for reference
  • Glossaries are available for reference and checked for consistency
  • Domain knowledge about the product is provided to testers

Regional Specification: 

  • Date and time formats change based on region
  • Phone number formats need to be formatted for a particular region
  • License and product key information obey country specific regulations
  • Currency conversions and formats are incorporated
  • Colors for symbols and UI are allocated for particular region(s)
  • Units of measurements, taxes, are in line with the norms of that country to ensure acceptability.
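As a sketch of how such region-specific checks can be automated, consider the following. The regions, formats, and phone patterns below are illustrative assumptions, not a complete rule set; a real localization suite would source its rules from CLDR or a vetted glossary.

```python
import re
from datetime import date

# Illustrative, incomplete per-region rules (assumed for this sketch).
REGION_RULES = {
    "en-US": {"date": "%m/%d/%Y", "phone": re.compile(r"^\(\d{3}\) \d{3}-\d{4}$")},
    "de-DE": {"date": "%d.%m.%Y", "phone": re.compile(r"^\+49 \d{2,5} \d{4,10}$")},
}

def format_date(d: date, region: str) -> str:
    """Render a date in the expected format for the given region."""
    return d.strftime(REGION_RULES[region]["date"])

def is_valid_phone(number: str, region: str) -> bool:
    """Check a phone number against the region's expected pattern."""
    return REGION_RULES[region]["phone"].fullmatch(number) is not None

print(format_date(date(2024, 3, 31), "en-US"))  # 03/31/2024
print(format_date(date(2024, 3, 31), "de-DE"))  # 31.03.2024
print(is_valid_phone("(555) 123-4567", "en-US"))  # True
```

A test pass for a new region then reduces to extending the rule table and re-running the same checks.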

 Language Specific Testing: 

  • Translation is an important facet of localizing content, and it is important to ensure that terminology is consistent across the user interface
  • Text is free of grammatical mistakes, translated well, and free of character corruption
  • Untranslated strings fall back to English by default
  • Translations are regularly updated for new features in the UI
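The English-fallback rule above can be sketched as a simple catalog lookup. The catalogs and message keys here are hypothetical, purely to illustrate the behaviour a tester would verify.

```python
# Hypothetical message catalogs; English acts as the base locale.
CATALOGS = {
    "en": {"greeting": "Hello", "farewell": "Goodbye"},
    "fr": {"greeting": "Bonjour"},  # "farewell" not yet translated
}

def translate(key: str, locale: str) -> str:
    """Return the localized string, falling back to English when missing."""
    return CATALOGS.get(locale, {}).get(key, CATALOGS["en"][key])

print(translate("greeting", "fr"))  # Bonjour
print(translate("farewell", "fr"))  # Goodbye (English fallback)
```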

Appearance (Visibility): 

  • Localized images are of good quality
  • Layout is consistent with the source/English version
  • Line breaks and hyphenation are correct

 Functional Verification: 

  • Basic functionality tests should be performed on the localized application
  • Hyperlinks work correctly
  • Entry fields support special characters
  • Fields are validated, for example, postal codes for target regions
  • Lists are sorted according to target language and region
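A field-validation check like the postal-code item above might be scripted as follows. The patterns shown are simplified assumptions for two regions; real postal-code rules have more edge cases.

```python
import re

# Simplified postal-code patterns (assumed for illustration).
POSTAL_PATTERNS = {
    "US": re.compile(r"^\d{5}(-\d{4})?$"),                    # 12345 or 12345-6789
    "UK": re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$"),  # e.g. SW1A 1AA
}

def is_valid_postal_code(code: str, region: str) -> bool:
    """Validate a postal code against the region's simplified pattern."""
    return POSTAL_PATTERNS[region].fullmatch(code.strip().upper()) is not None

print(is_valid_postal_code("90210", "US"))     # True
print(is_valid_postal_code("SW1A 1AA", "UK"))  # True
print(is_valid_postal_code("9021", "US"))      # False
```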


Load performance testing using JMeter

Software reliability testing, or load performance testing, is a field within software testing and performance testing services that tests a software's ability to function under given environmental conditions for a particular period of time. While there are several software testing tools for evaluating performance and reliability, I have focused on JMeter for this blog.

JMeter, or The Apache JMeter™ application as it is popularly known, is open-source software. It is a hundred percent pure Java application designed for load testing, functional testing, and performance testing.

Given below are some guidelines on using JMeter to test the reliability of a software product:

Steps to download and install JMeter for load performance testing

JMeter can be downloaded as a simple .zip file from the Apache JMeter website.

The prerequisite is to have Java 1.5 or a higher version already installed on the machine.

Unzip the downloaded file to the desired location, go to the bin folder, and double-click the jmeter.bat file.

If the JMeter UI opens up, the installation has been successful. If it does not open, Java might not be installed/configured, or the downloaded files may be corrupted.

The next step is to configure JMeter to listen to the browser. Open the command prompt in the bin location of JMeter and run the JMeter batch file. This will open up the simple UI of JMeter as already seen.

If you are accessing the internet via a proxy server, then this will not work when recording scripts. We need to bypass the proxy server, and to do this we need to start JMeter with two additional parameters: -H (proxy server name/IP address) and -P (port number). Adding these will look like the following image:

The next step is to configure the browser to 'listen' to JMeter while recording the scripts. For this blog, I will be using the Firefox browser.

  1. Open Firefox browser
  2. Go to options->Advanced
  3. Select Network tab and click on Settings
  4. A Connection Settings tab will be opened; under it, select 'Manual proxy configuration' and give the HTTP proxy as localhost and the port as 8080.

After having configured the proxy settings, we need to have the JMeter certificate installed on the required browser.

The certificate will be in the JMeter bin folder, under the file name "ApacheJMeterTemporaryRootCA.crt". Given below are the steps to be followed to install the certificate in the Firefox browser:

  1. Open Firefox browser
  2. Go to options->Advanced
  3. Select certificate tab and click on View certificates
  4. Under Authorities tab click on Import
  5. Go to bin folder of Jmeter and select the certificate under it (ApacheJMeterTemporaryRootCA.crt)
  6. Check the "Trust this CA to identify websites" option and click on OK.

Note: If you do not find the certificate in the bin folder directory, you can generate it by running the 'HTTP(s) Test Script Recorder'. The certificate will be generated when you run the recorder, so there is no need to replace it every time.

Recording of Script for web-based applications:

Steps to record a sample test script:

  1. Open up Jmeter by running Jmeter.bat
  2. Right click on Test Plan->Add->Threads(Users)->Thread Group
  3. Right Click on Thread Group->Add->Logic Controller->Recording Controller
  4. Right Click on WorkBench->Add->Non-Test Elements->HTTP(s) Test Script Recorder
  5. Right Click on HTTP(s) Test Script Recorder->Add->Listener->View Results Tree

After adding the components, Jmeter UI would look like the following image:

  1. Click on HTTP(s) Test Script Recorder and then click on Start. The JMeter certificate will be generated, with a pop-up informing you of the same. Click on the 'OK' button on the pop-up.
  2. Open up the browser for which the JMeter proxy is configured, go to the URL under test, and execute the manual test script whose performance you want to determine.
  3. Once you are done with all your manual tests, come back to the JMeter UI, click on HTTP(s) Test Script Recorder, and click on Stop. This will stop the recording.
  4. Click on the Recording Controller icon to view all your recorded HTTP samples. You can then rename the Recording Controller.
  5. You can view details of the recorded pages under the "View Results Tree" listener. Using these values, you can determine the assertions to put in your original run.
  6. Now click on "Thread Group" and configure the users you want to run. Also right-click on Thread Group->Add->Listener->Summary Report, and add View Results Tree as well.
  7. Make sure to tick the "Errors" checkbox under View Results Tree; otherwise it will take up huge memory while running your scripts, as it captures almost everything.
  8. Now you can run your recorded scripts! Just click on the Run/Play button on the top toolbar of the JMeter UI.

Analyzing the test run result:

While the scripts are running, results/timings will be captured under the "Summary Report" listener. The summary report will look like the following after your test run has been completed. You can save the report by clicking on "Save Table Data".

Below are details of each keyword:

  • average: The average response time for that particular HTTP request, in milliseconds.
  • aggregate_report_min: The minimum response time taken by the HTTP request, in milliseconds.
  • aggregate_report_max: The maximum response time taken by the HTTP request, in milliseconds.
  • aggregate_report_stddev: The standard deviation, i.e., how far samples deviated from the average response time.
  • aggregate_report_error%: The percentage of failed samples during the run.
  • aggregate_report_rate: How many requests per second the server handles. Larger is better.
  • average_bytes: The average response size in bytes. The lower the average, the greater the performance.
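As an illustration of how these summary metrics relate to raw sample timings, here is a rough sketch in Python. The arithmetic is illustrative only; JMeter's actual implementation differs in details such as how the elapsed window for throughput is measured.

```python
from statistics import mean, pstdev

def summarize(samples):
    """samples: list of (elapsed_ms, is_error) tuples for one request label."""
    times = [t for t, _ in samples]
    errors = sum(1 for _, err in samples if err)
    duration_s = sum(times) / 1000.0  # crude: assumes strictly sequential samples
    return {
        "average": mean(times),
        "min": min(times),
        "max": max(times),
        "stddev": pstdev(times),
        "error%": 100.0 * errors / len(samples),
        "throughput_per_s": len(samples) / duration_s,
    }

report = summarize([(100, False), (150, False), (200, True), (150, False)])
print(report)
```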


Web Service Testing


A web service is a means to establish communication between two or more application servers and to exchange data between the connected servers or machines.

Web services are mainly built on Service Oriented Architecture (SOA) technology and communicate using the HTTP or SOAP protocol.

Web services are platform and technology independent.

Types of Web Service

Web services can be implemented in different ways:

  • SOAP (Simple Object Access Protocol)
  • REST (Representational State Transfer Architecture)

SOAP: Simple Object Access Protocol is a standard protocol that uses the XML format for sending and receiving web service requests and responses. SOAP messages are exchanged between applications within SOAP envelopes, and the service interface is described in WSDL format.

REST: Representational State Transfer is an architectural style that runs over HTTP. REST uses GET, PUT, POST, and DELETE operation calls. REST supports both XML and JSON formats. RESTful applications use HTTP's built-in headers to carry meta-information.

Web Service Testing steps

Web service testing basically involves the following activities:

  1. Understand the WSDL file format
  2. Determine the response XML format
  3. Determine the XML request format which we need to send
  4. Determine the operations provided by the web service
  5. Send web service request and validate the response
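Steps 2–5 above can be sketched offline against a canned SOAP response. The envelope, namespace, and element names below are hypothetical stand-ins for a currency-conversion service, not taken from any real WSDL.

```python
import xml.etree.ElementTree as ET

# Canned SOAP response for a hypothetical currency-conversion operation.
RESPONSE = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ConversionRateResponse xmlns="http://example.com/currency">
      <ConversionRateResult>82.5</ConversionRateResult>
    </ConversionRateResponse>
  </soap:Body>
</soap:Envelope>"""

def extract_rate(xml_text: str) -> float:
    """Parse the SOAP body and pull out the conversion-rate value."""
    root = ET.fromstring(xml_text)
    ns = {"c": "http://example.com/currency"}
    result = root.find(".//c:ConversionRateResult", ns)
    if result is None:
        raise ValueError("ConversionRateResult missing from response")
    return float(result.text)

rate = extract_rate(RESPONSE)
assert rate > 0  # the kind of sanity check a test would perform
print(rate)  # 82.5
```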

Web Service Testing for Soap service using SoapUI (Open Source)

We will consider a currency conversion utility as an example.

  1. Launch SoapUI tool -> Go to File -> New Soap Project.
  2. Enter the project Name and the WSDL URL
  3. Click OK button.
  4. Expand the first request on the left pane and double-click on 'Request1'. It will display the SOAP request in XML format on the right pane.
  5. Enter the values for "From Currency" and "To Currency" in the request XML.
  6. Click on the Submit button.
  7. The response XML will be displayed in the right-side pane, based on the input values provided.
  8. We need to validate that the output response has the proper currency conversion based on the input currency provided.

Web Service Testing for REST service using SoapUI (Open Source)

SoapUI will create a project tree along with resources, services, methods, and the endpoint, with the input request in the editor. See below:

  • When we click on a parameter in the parameters section, the parameters used in the service are displayed in a separate popup window, as shown above.
  • Click on the Run icon. SoapUI generates the output for the given endpoint in the form of XML.
  • We need to validate that the output has the proper location for the given address.
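That last validation step can be sketched against a canned JSON payload. The response shape and field names below are assumptions, loosely modelled on a geocoding-style endpoint, purely for illustration.

```python
import json

# Canned JSON body a geocoding-style REST endpoint might return (assumed shape).
body = '{"status": "OK", "results": [{"formatted_address": "1600 Amphitheatre Pkwy"}]}'

def check_location(response_text: str, expected_fragment: str) -> bool:
    """Return True if the response is OK and contains the expected address."""
    data = json.loads(response_text)
    if data.get("status") != "OK" or not data.get("results"):
        return False
    return expected_fragment in data["results"][0]["formatted_address"]

print(check_location(body, "Amphitheatre"))  # True
print(check_location(body, "Elm Street"))    # False
```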

Some of the Other Web Service Testing Tools that can be used

  1. SoapUI Pro
  2. TestMaker
  3. WebInject
  4. SOAPSonar
  5. wizdl
  6. Stylus Studio
  7. TestingWhiz
  8. SOAtest
  9. JMeter
  10. Storm
  11. Postman
  12. vRest
  13. HttpMaster
  14. Runscope
  15. Rapise
  16. LoadUI NG Pro

Typical failures noticed during Web Service Testing

  1. Failure to handle server connections
  2. Security issues – firewall, authentication failures
  3. Performance issues where API request and response times are too high
  4. Failure to handle the expected response output
  5. SOAP and REST faults
  6. Invalid input request data
  7. Response output data not as expected

Web Services Testing Challenges:

  1. Web service call sequencing, parameter selection, and parameter combination
  2. Web services are GUI-less, which makes it difficult to provide input values
  3. It is difficult to understand web service calls and WSDL without programming knowledge
  4. Unknown exceptions thrown from the service

Why Silk Performer is the Best Performance Testing Tool for Enterprises

What is Silk Performer?

Silk Performer provides advanced facilities for running multi-user tests to validate an application’s performance under different load conditions. It provides on-demand cloud based load generation facility and simulates peak loads from distributed geographies, thereby reducing investment overhead in load-testing hardware setup and maintenance.

Silk Performer identifies client-side web page components that can be performance bottlenecks, such as image sizes not optimized for performance, or client-side resources being called serially where an asynchronous implementation may provide web page performance improvements.

  • Silk Performer provides a web page performance breakdown which can show performance bottlenecks at the web page component and client-side network levels.
  • Silk Performer provides integration facilities with APM tools such as AppDynamics and Dynatrace to resolve performance issues in code, such as memory leaks, loitering objects, etc.
  • Silk Performer allows the creation of custom load test reporting templates to suit business processes and requirements.
  • Silk Performer's performance trend dashboard shows performance changes in the application across frequent build cycles and is useful for identifying performance issues early in the development cycle.
  • By adding content verifications in test scripts, it can help identify content errors under load.
  • Silk Performer's baseline testing approach allows you to set acceptable response time SLAs and subsequently flag transactions whose response time exceeds the acceptable threshold.
  • Silk Performer provides load injector health indicators (CPU, memory, responsiveness).

Best suited for enterprise-level performance testing services

Silk Performer is a performance testing and load testing tool that can be considered for fitment in multiple scenarios, as it supports a rich set of industry-standard protocols. For example, consider the following scenario where checking performance with the available software testing tools in the market could be really difficult.

Story/Alert message feeds are sent in binary format via a TCP/IP connection, with TIBCO Enterprise Messaging Service as middleware, to a third-party NML server. The NML server converts these Story/Alert messages to NewsML format and forwards them to a BEA WebLogic server cluster, which processes each message, sends it to the CMS, and issues an output TIBCO broadcast message to signal that the Story/Alert was successfully sent. These output TIBCO broadcast messages are sent via the UDP protocol over the TIBCO Enterprise Messaging layer. Silk Performer does not support generic recording and playback of UDP-based traffic. This limitation was addressed by developing Silk Performer load test scripts in conjunction with a COM DLL interface that follows an event-driven architecture, waiting for TIBCO messages and pulsing an event when the .OUTPUT message is received.

The objective of performance testing is to measure the performance improvements incorporated for the NML and BEA application servers by capturing the processing times of each of these layers in the above architecture.

(Note: NewsML is an XML-based messaging standard designed to provide a media-independent, structural framework for multi-media news. TIBCO is a set of middleware products.)

With Silk Performer's enhanced TCP/IP support, we can record protocols at the TCP level and customize the resulting scripts using WebTcpip functions. In Silk Performer we can create a listener which waits for a specific TIBCO message to arrive. The arrival of the TIBCO message is signalled by a Windows event, pulsed by the Story_Listener user.

Silk Performer's TCP/IP TrueLog provides visual front-end diagnostics from the end-user perspective. TrueLog visually recreates the data that users provide and receive during full-scale load tests, which enables us to visually analyze the behavior of an application as errors occur. TrueLog script customization helps with a number of activities, such as session correlation: TrueLog automatically detects and customizes session-dependent information that is statically included in test scripts.

The Silk Performer sample code below shows the creation of two user groups – Story_Sender and Story_Listener:

  1. Story_Sender sends a Story/Alert request to the NML server, which should be followed by a TIBCO message coming in. Story_Sender starts a timer and is set to a wait state, waiting for a specific Windows event's state to be set to "signalled". Once the event is signalled, the timer is stopped.
  2. Story_Listener is responsible for continuously listening for incoming TIBCO messages. After receiving a new TIBCO message, its message body (containing XML data) is parsed, and the corresponding Windows event is signalled. Story_Listener is integrated via the COM interface "ISilkPerformerUser", which enables Silk Performer to wait for TIBCO messages and pulse an event when the .OUTPUT message is received.
sEvent := CreateEvent(sEventName);

// Open tcp/ip connection to myServer using port 23

MeasureStart("Timer1 : NML Process Story");
// Send story message to NML

// The Story_Listener user captures and parses the resulting story message
// to WebLogic. It then pulses "sEvent" if successful.
nSuccess := WaitForSingleObject(sEvent, 20000);
if nSuccess = WAIT_OBJECT_0 then
  MeasureStop("Timer1 : NML Process Story");
end;

MeasureStart("Timer2 : WebLogic Processing Time");
MeasureGet("Timer1 : NML Process Story", MEASURE_TIMER_RESPONSETIME, MEASURE_KIND_LAST, fValue);
Print("Timer1 : NML Process Story took " + string(fValue) + " seconds");

// Initialise hEvent to the expected .OUTPUT headline
sEventName := "TibMsg:" + sHeadLine;
hEvent := CreateEvent(sEventName);

// Listen for ".OUTPUT" with user TibOutputLstnr. It pulses "hEvent" if successful.
bSuccess := fWait_Tib("Timer2", sEventName, 40, hEvent);

if bSuccess then
  MeasureStop("Timer2 : WebLogic Processing Time");
  Print("Transaction execution took " + string(fValue) + " seconds");
else
  return 0;
end;
// fWait_Tib: Wait for a specific TIBCO message to arrive. The arrival of the
// TIBCO message is signalled by a Windows event, pulsed by the Story_Listener user.
//
// Arguments:
//   sTimer:     Name of the timer for the time measurement
//   sEventName: Name of the Windows event to wait for
//   fBound1:    Service level (in seconds) for "Green" condition
//   fBound2:    Service level (in seconds) for "Yellow" condition
//   nTimeout:   After waiting for nTimeout seconds, we give up and
//               raise an appropriate error message
//
// Return value:
//   true, if the message was received successfully
//   false, in case of a time out

var
  sMsg       : string(2000);
  sTimestamp : string(100);

begin
  if hEvent = 0 then
    // Our naming scheme for Windows events reflecting TIBCO messages starts with "TibMsg:"
    sEventName := "TibMsg:" + sEventName;
    // Get a handle to a Windows event
    hEvent := CreateEvent(sEventName);
  end;

  // Wait for this Windows event to be signalled, which will happen when
  // the TibListener user receives a TIBCO message matching this event
  if WaitForSingleObject(hEvent, nTimeout * 2000) = WAIT_OBJECT_0 then
    sMsg := "Received event!";
    fWait_Tib := true;
  else
    sMsg := "Waiting for event '" + sEventName + "' timed out after " + String(nTimeout) + " seconds";
    RepMessage("Waiting for event (" + sEventName + ") timed out after " + String(nTimeout) + " seconds", SEVERITY_ERROR);
    fWait_Tib := false;
  end;
end fWait_Tib;

We ran a series of load tests and identified transactions queuing up on the NML server due to synchronous calls. After changing to an asynchronous implementation and rerunning the load tests, we were able to achieve peak-hour load volumes for Stories and Alerts and were able to get performance measurements of the NML and BEA servers.

Silk Performer's inbuilt datastore, MSDE, was used to archive load test results from multiple test runs. With this data store, we could compare each test run against the baseline run and build a performance trend across multiple test runs for planned build cycles.

Silk Performer's custom reporting templates were useful in preparing test reports providing a breakup of response timers for the NML and BEA servers, and showing the 90th and 95th percentile response time achievements of each transaction. These test reports could easily be migrated to MS Excel and MS Word, and Silk Performer allows publishing HTML versions of performance test reports on online dashboards.


After using Silk Performer in many quality assurance scenarios, we found that it is flexible enough to be used in various scenarios to conduct performance testing, stress testing, and load testing.

Performance testing on cloud

Ensuring good application performance is crucial, especially for critical web applications that require fast cycle times and short turnaround times for newer versions. How can such applications be optimally tested without spending a fortune on tools, technology, and people, while still ensuring quality and on-time release? With tightening budgets and time-consuming processes, IT organizations are forced to do more with less.

A judicious combination of tools, the available cloud platforms, and a well-thought-out methodology provides the answer. While it is proven that open source software can reduce software development cost, there had not been much use of open source tools for testing until the cloud computing paradigm became more readily available. Until now, performance testing on a large-scale project, using test tools on dedicated hardware to model real-world scenarios, has been an expensive proposition. However, cloud testing has changed the game.

Performance testing on the cloud can be broadly classified into two categories:

  1. Cloud Infrastructure for the Test Environment – Performance testing always requires some sophisticated tool infrastructure. The test infrastructure requirements could vary from specific hardware for specific tools to the number of machines, licenses, back-ups, bandwidth, etc. In the past, getting all the required hardware was not only challenging, but in many cases performance was not adequately tested due to missing test tools. With cloud testing, one can focus on performance testing and not on the infrastructure. Any tool, be it open source like Grinder or JMeter, or a licensed product like Silk Performer, can easily be set up to run tests on the AUT (Application Under Test). Some time is required for setting up the tool, and a few trial runs are needed to ensure the load injectors (the client machines that generate load) do not cause bottlenecks. This environment may be best suited to a typical waterfall scenario, where the software is evaluated and tuned at the end of the development cycle.
  2. Cloud as a Test Tool – There are different sets of software testing tools readily available in the cloud in a SaaS model. The test tool is readily available on the cloud, so no setup is required; just subscribe and you are all set, thus saving setup time. Also, their system configuration is optimized to generate the required load without causing bottlenecks. Some of the readily available cloud test tools are LoadStorm, CloudTest by SOASTA, BrowserMob, nGrinder, Load-Intelligence (which can use JMeter in the cloud), etc. This environment is better suited to an Agile scenario, where the same tasks need to be performed in smaller iterations from the initial stage of the SDLC itself. Here, you just have the scripts ready, upload them to the cloud, run the test, and once you have the required metrics, you sign off.

Conclusion – A combination of carefully selected testing tools, QA testers, readily available cloud platforms, and a sound performance test strategy can bring the same benefits as conventional methods at a much lower cost.

JMeter Regular Expression Extractor Example

Before you delve into the details of the tool, you can get a bigger picture of the importance of performance engineering that boosts the quality of digital experiences.

In this example, we will demonstrate the use of the Regular Expression Extractor post processor in Apache JMeter. We will parse and extract a portion of the response data using a regular expression and apply it in a different sampler. Before we look at the usage of the Regular Expression Extractor, let's look at the concept.

1. Introduction – Apache JMeter

Apache JMeter is an open-source Java-based tool that enables you to perform functional, load, performance, and regression tests on an application. The application may be running on a web server or it could be standalone in nature. It supports testing on both client-server and web models containing static and dynamic resources. It supports a wide variety of protocols for conducting tests, including HTTP, HTTPS, JDBC, FTP, JMS, LDAP, SOAP, etc.

A quick look at some of the features:

  • It provides a comprehensive GUI-based workbench to play around with tests. It also allows you to work in a non-GUI mode. JMeter can also be ported onto a server, allowing you to perform tests in a distributed environment.
  • It provides the concept of templates, which are pre-defined test plans for various schemes or protocols that can be used directly to create your required test plan.
  • It enables you to build a test plan structurally, using powerful features like Thread Groups, Controllers, Samplers, Listeners, etc.
  • It provides debugging and error monitoring through effective logging.
  • It supports parameterized testing through the concept of variables.
  • It supports the creation of different flavors of test plan, including Web, Database, FTP, LDAP, Web service, JMS, Monitors, etc.
  • It allows for remote testing by having different JMeter instances running as servers across nodes, accessed from a single client application.
  • It gives you real-time test results covering metrics like latency, throughput, response times, active threads, etc.
  • It enables you to perform testing based on regular expressions, among many other features.

1.1. Regular Expression

A regular expression is a pattern-matching language that performs a match on a given value, content, or expression. A regular expression is written as a series of characters that denote a search pattern. The pattern is applied to strings to find and extract the match. Regular expression is often shortened to regex. Pattern-based searching has become very popular and is provided by all well-known languages like Perl, Java, Ruby, JavaScript, and Python. Regex is commonly used on UNIX operating systems with commands like grep, ls, and awk, and editors like ed and sed. The regex language uses meta characters like . (matches any single character), [] (matches any one character from a set), ^ (matches the start position), and $ (matches the end position), among many more, to devise a search pattern. Using these meta characters, one can write a powerful regex search pattern with a combination of if/else conditions and replace features. A full discussion of regex is beyond the scope of this article; you can find plenty of articles and tutorials on regular expressions on the net.

1.2. Regular Expression Extractor

The regular expression (regex) feature in JMeter is provided by the Jakarta ORO framework. It is modelled on the Perl5 regex engine. With JMeter, you can use a regex to extract values from the response during test execution and store them in a variable (also called a reference name) for further use. The Regular Expression Extractor is a post processor that can be used to apply a regex to response data. The matched expression derived from applying the regex can then be used dynamically in a different sampler in the test plan execution. The Regular Expression Extractor control panel allows you to configure the following fields:

Apply to: The regex extractor is applied to test results, i.e., the response data from the server. A response from the primary request is considered the main sample, while that of a sub-request is a sub-sample. A typical HTML page (primary resource) may have links to various other resources like images, JavaScript files, CSS, etc. These are embedded resources. Requests to these embedded resources produce sub-samples, while the HTML page response itself becomes the primary or main sample. You have the option to apply the regex to the main sample, to sub-samples, or to both.

Field to check: The regex is applied to the response data. Here you choose which part of the response it should match. There are various response fields available to choose from. You can apply the regex to the plain response body or to a document returned as response data. You can also apply the regex to the request and response headers, parse the URL, or apply the regex to the response code.

Reference Name: This is the name of the variable that can be referenced further in the test plan using ${}. After applying the regex, the final extracted value is stored in this variable. Behind the scenes, JMeter will generate more than one variable, depending on the match that occurred. If you have defined groups in your regex by providing parentheses (), then it will generate as many variables as there are groups. These variable names are suffixed with the letters _g(n), where n is the group number. When you do not define any grouping in your regex, the returned value is termed the zeroth group, or group 0. Variable values can be checked using the Debug Sampler, which will enable you to verify whether your regular expression worked or not.

Regular Expression: This is the regex itself that is applied to the response data. A regex may or may not have a group. A group is a subset of the string that is extracted from the match. For example, if the response data is 'Hello World' and my regex is Hello (.+)$, then it matches 'Hello World' but extracts the string 'World'. The parentheses () applied form the group that is captured or extracted. You may have more than one group in your regex; which one, or how many, to extract is configured through the use of the template. See the point below.

Template: Templates are references or pointers to the groups. A regex may have more than one group. The template allows you to specify which group value to extract by specifying the group number as $1$, or $2$, or $1$$2$ (extract both groups). From the 'Hello World' example in the point above, $0$ points to the complete matched expression, 'Hello World', while $1$ points to the string 'World'. A regex without parentheses () is matched as $0$ (the default group). Based on the template specified, that group value is stored in the variable (reference name).

Match No.: A regex applied to the response data may produce more than one match. You can specify which match should be returned. For example, a value of 2 indicates that it should return the second match. A value of 0 indicates that a random match should be returned. A negative value will return all the matches.

Default Value: The regex match is set to a variable, but what happens when the regex does not match? In such a scenario, the variable is not created or generated. However, if you specify a default value and the regex does not match, the variable is set to that default value. It is recommended to provide a default value so that you know whether your regex worked or not. It is a useful feature for debugging your test.
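To make the group, match-number, and default-value semantics concrete, here is a plain Python sketch that mimics the extractor's behaviour. This mirrors the concept only; it is not JMeter's code, and the sample response and pattern are made up.

```python
import re

response = "id=101;id=202;id=303"  # made-up response data

def extract(pattern, text, match_no=1, default="error"):
    """Roughly mirror JMeter's extractor: group 1 of the nth match, or a default."""
    matches = re.findall(pattern, text)  # each item is group 1 when one group exists
    if not matches:
        return default                   # like JMeter's Default Value field
    if match_no == 0:
        return matches[0]  # JMeter picks a random match; fixed here for determinism
    return matches[match_no - 1]

print(extract(r"id=(\d+)", response, match_no=2))  # 202
print(extract(r"user=(\w+)", response))            # error
```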

2. Regular Expression Extractor By Example

We will now demonstrate the use of the Regular Expression Extractor by configuring a regex that will extract the URL of the first article from the JCG (Java Code Geeks) home page. After extracting the URL, we will use it in an HTTP Request sampler to test the same. The extracted URL will be set in a variable.

Before installing JMeter, make sure you have JDK 1.6 or higher installed. Download the latest release of JMeter from the Apache JMeter website. At the time of writing this article, the current release of JMeter is 2.13. To install, simply unzip the archive into the home directory where you want JMeter installed. Set the JAVA_HOME environment variable to point to the JDK root folder. After unzipping the archive, navigate to the /bin folder and run the command jmeter. On Windows, you can run it from the command window. This will open the JMeter GUI window that will allow you to build the test plan.

2.1. Configuring Regular Expression Extractor

Before we configure the regex extractor, we will create a test plan with a ThreadGroup named 'Single User' and an HTTP Request sampler named 'JCG Home' pointing to the target server. For more details on creating a ThreadGroup and related elements, you can view the article JMeter Thread Group Example. The image below shows the configured ThreadGroup (Single User) and HTTP Request sampler (JCG Home). Next, we will apply the regex on the response body (main sample). When the test is executed, it will ping the web site and return the response data, which is an HTML page. This HTML page contains JCG articles, the titles of which are wrapped in <h2> tags. We will write a regular expression that will match the first <h2> tag and extract the URL of the article. The URL will be part of an anchor <a> tag. Right-click on the JCG Home sampler and select Add -> Post Processors -> Regular Expression Extractor.

The name of our extractor is ‘JCG Article URL Extractor’. We will apply the regex to the main sample and directly on the response body (the HTML page). The Reference Name or variable name provided is ‘article_url’. The regex used is <h2 .+?><a href="http://(.+?)".+?</h2>. We will not go into the details of the regex, as that is a different discussion altogether. In a nutshell, this regex matches the first <h2> tag and extracts the URL from the anchor tag. It strips the literal http:// and extracts only the server part of the URL. The extracted portion is placed in parentheses (), forming our first group. The Template field is set to $1$, which points to our first group (the URL), and the Match No. field indicates the first match. The Default Value is set to ‘error’, so if our regex fails to match, the variable article_url will hold the value ‘error’. If the regex makes a successful match, the article URL will be stored in the article_url variable.
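The extractor's behavior can be sketched in plain Java. This is an illustrative reconstruction, not JMeter's implementation: the HTML snippet and URL below are made up, and the fallback argument mirrors the Default Value field.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexExtractorSketch {
    // Same pattern as configured in the extractor; group 1 captures the
    // server part of the URL (everything after "http://").
    static final Pattern ARTICLE =
            Pattern.compile("<h2 .+?><a href=\"http://(.+?)\".+?</h2>");

    // Returns the first match of group 1, or the default value when the
    // regex does not match -- just like the Default Value field.
    static String extract(String html, String defaultValue) {
        Matcher m = ARTICLE.matcher(html);
        return m.find() ? m.group(1) : defaultValue;
    }

    public static void main(String[] args) {
        // Hypothetical response snippet shaped like the JCG article markup
        String html = "<h2 class=\"title\"><a href=\"http://www.example.com/article-1/\""
                + " rel=\"bookmark\">First Article</a></h2>";
        System.out.println(extract(html, "error"));           // the captured server part
        System.out.println(extract("<p>no h2</p>", "error")); // falls back to "error"
    }
}
```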

We will use this article_url variable in another HTTP Request sampler named JCG Article. Right click on the Single User ThreadGroup and select Add -> Sampler -> HTTP Request.

As you can see from the above, the server name is ${article_url}, which is nothing but the URL that was extracted by the previous sampler's regex. You can verify the results by running the test.

2.2. View Test Results

To view the test results, we will configure the View Results Tree listener. But before we do that, we will add a Debug Sampler to see the variables and their values generated upon executing the test. This will help you understand whether your regex successfully matched an expression or failed. Right click on the Single User ThreadGroup and select Add -> Sampler -> Debug Sampler.

As we want to debug the generated variables, set the JMeter variables field to True. Next, we will view and verify the test results using the View Results Tree listener. Right click on the Single User ThreadGroup and select Add -> Listener -> View Results Tree.

First let's look at the output of the Debug Sampler response data. It shows our variable article_url; observe that its value is the URL we extracted. The test has also generated group variables, viz. article_url_g0 and article_url_g1. Group 0 is the entire match, and group 1 is the string extracted from that match; this string is also stored in our article_url variable. The variable named article_url_g tells you the number of groups in the regex. Our regex contained only one group (note the sole parentheses () in our regex). Now let's look at the result of our JCG Article sampler:
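The _g0/_g1 distinction maps directly onto regex groups in Java. A small sketch, with a made-up input string:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GroupVariablesSketch {
    public static void main(String[] args) {
        // One capturing group, as in our extractor's regex
        Pattern p = Pattern.compile("<a href=\"http://(.+?)\">");
        Matcher m = p.matcher("<a href=\"http://www.example.com/\">");
        if (m.find()) {
            // group(0) is the whole match -> what article_url_g0 holds
            System.out.println(m.group(0));
            // group(1) is the captured part -> what article_url_g1
            // (and article_url itself) hold
            System.out.println(m.group(1));
            // groupCount() -> what article_url_g reports
            System.out.println(m.groupCount());
        }
    }
}
```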

The JCG Article sampler successfully made the request to the server URL that was extracted using regex. The server URL was referenced using ${article_url} expression.

3. Conclusion

The regular expression extractor in JMeter is one of the significant features that can help parse different types of values from different types of response indicators. These values are stored in variables that can be used as references in other threads of the test plan. The ability to devise groups in the regex, capturing portions of matches, makes it an even more powerful feature. Regular expressions are best used when you need to parse text and apply it dynamically to subsequent threads in your test plan. The objective of the article was to highlight the significance of the Regular Expression Extractor and its application in test execution.

JMeter Blog Series: JMeter BeanShell Example

Here's more about load and performance testing using JMeter.

In this example, we will demonstrate the use of BeanShell components in Apache JMeter. We will go about writing a simple test case using the BeanShell scripting language. These scripts will be part of BeanShell components that we will configure for this example. Before we look at the usage of different BeanShell components, let’s look at the concept.

1. Introduction

Apache JMeter is an open-source Java-based tool that enables you to perform functional, load, performance, and regression tests on an application. The application may be running on a Web server or it could be standalone in nature. It supports testing on both client-server and web models containing static and dynamic resources. It supports a wide variety of protocols for conducting tests, including HTTP, HTTPS, JDBC, FTP, JMS, LDAP, SOAP, etc.

A quick look at some of the features of JMeter

  • It provides a comprehensive GUI-based workbench to build and experiment with tests. It also allows you to work in a non-GUI mode, and JMeter can be deployed on servers to perform tests in a distributed environment.
  • It provides the concept of templates, which are pre-defined test plans for various schemes or protocols that can be used directly to create your required test plan.
  • It enables you to build a test plan structurally, using powerful features like Thread Groups, Controllers, Samplers, and Listeners.
  • It provides debugging and error monitoring through effective logging.
  • It supports parameterized testing through the concept of variables.
  • It supports the creation of different flavors of test plans, including Web, Database, FTP, LDAP, Web service, JMS, and Monitors.
  • It allows for remote testing, with different JMeter instances running as servers across nodes, accessed from a single client application.
  • It gives you real-time test results covering metrics like latency, throughput, response times, and active threads.
  • It enables you to perform testing based on regular expressions, among many other features.

1.1. What is BeanShell?

BeanShell is a scripting language written in Java, and is part of the JSR-274 specification. It is, in some ways, an extension of the mainstream Java language, providing scripting capabilities. It is an embedded interpreter that recognizes strongly typed Java syntax as well as scripting features like shell commands, loose types, and method closures (functions as objects). BeanShell aids in the quick development and testing of Java applications. One can use it for rapid prototyping or for quickly testing a small functionality or process. The script can also be embedded in Java code and invoked using the Interpreter API.

BeanShell can also be used as a configuration language, as it supports the creation of Java-based variables like strings, arrays, maps, collections, and objects. It also supports what are called scripting variables, or loosely typed variables. BeanShell scripts can also be written in standalone mode in an external file, which can then be loaded and executed by a Java program. BeanShell also provides UNIX-like shell programming: you can issue BeanShell commands interactively in a GUI shell and see the output instantly.

For more details on BeanShell, you can refer to the official website.

1.2. JMeter Beanshell Components

JMeter provides the following components that can be used to write BeanShell scripts:

  • BeanShell Sampler
  • BeanShell PreProcessor
  • BeanShell PostProcessor
  • BeanShell Assertion
  • BeanShell Listener
  • BeanShell Timer

Each of these components allows you to write scripts to conduct your test. JMeter executes the scripts based on the lifecycle order of the components: for example, it will first invoke the PreProcessor, then the Sampler, and then the PostProcessor, and so on. Data can be passed between these components using thread-local variables, each of which has a certain meaning and context. Every component provides you with pre-defined variables that can be used in the corresponding script.

The following table shows some of the common variables used by the BeanShell components:

Variable name   Description
ctx             Holds context information about the current thread, including the sampler and its results.
vars            A thread-local set of variables, stored in a map, shared by the BeanShell components of the same thread.
props           Variables loaded as properties from an external file stored in the classpath.
prev            Holds the last result from the sampler.
data            Holds the server response data.

2. BeanShell By Example

We will now demonstrate the use of BeanShell in JMeter. We will take a simple test case of sorting an array. We will define an array of 5 letters (a, b, c, d, e) stored in random order, sort the contents of the array, and convert them into a string. After conversion, we will remove the unwanted characters and print the final string value. It should give the output ‘abcde’.
We will make use of the following BeanShell components to implement our test case:

  • BeanShell PreProcessor – This component will define or initialize our array.
  • BeanShell Sampler – This component will sort the array and convert it into string.
  • BeanShell PostProcessor – This component will strip the unnecessary characters from the string.
  • BeanShell Assertion – This component will assert our test result (string with sorted content).

Before installing JMeter, make sure you have JDK 1.6 or higher installed. Download the latest release of JMeter from the official download page. At the time of writing this article, the current release of JMeter is 2.13. To install, simply unzip the archive into the home directory where you want JMeter installed. Set the JAVA_HOME environment variable to point to the JDK root folder. After unzipping the archive, navigate to the <JMeter_Home>/bin folder and run the command jmeter. On Windows, you can run it from a command window. This will open the JMeter GUI, which will allow you to build the test plan.

2.1. Configuring BeanShell Sampler

In this component, we will sort the array. But before we can sort the array, it needs to be initialized; you will see the initialization routine in the next section, when we create the pre-processor component. Let's first create the BeanShell Sampler component. We will write the code to sort the array after the initialization routine. Right click on the Single User ThreadGroup and select Add -> Sampler -> BeanShell Sampler.

We will name our sampler ‘Array Sorter’. The Reset Interpreter field value is retained as ‘False’. This field is only relevant when you have multiple BeanShell samplers configured or are running a sampler in a loop. A value of true resets the interpreter, creating a fresh BeanShell interpreter instance for each sampler; a value of false creates a single BeanShell interpreter that interprets the scripts of all the configured samplers. From a performance perspective, it is recommended to set this field to true if you have long-running scripts with multiple samplers. The Parameters field allows you to pass parameters to your BeanShell scripts. It is usually used with an external BeanShell script file, but if you are writing the script in this component itself, you can use the Parameters or bsh.args variable to fetch the parameters. The Parameters variable holds the parameters as a single string value (retaining spaces), while the bsh.args variable holds the parameters as a string array. For this example, we are not passing any parameters to the script. The Script file field is used when you have a BeanShell script defined in an external file; it is important to note that this overrides any script written inline in this component. We will retain the default values of all the above-mentioned fields for all the BeanShell components. Finally, the Script textbox field allows us to write scripts inline in the component itself, and lets us use certain pre-defined variables in the scripts. As you can see, there is no scripting code currently in this field; we will write the code after our array is initialized in the pre-processor component.

2.2. Configuring BeanShell PreProcessor

The BeanShell PreProcessor is the first component to be executed, before your sampler, which makes it a good candidate for initialization routines. We will initialize our array, which is to be sorted, in this component. Right click on the Array Sorter sampler and select Add -> Pre Processors -> BeanShell PreProcessor.

We will name the component ‘Array Initializer’. Let's look at the code in the Script textbox field. First, we declare and initialize the array named strArray; it is a loosely typed variable, and its values are deliberately not in order. Then we make use of the vars variable to store the array, by calling its putObject() method. The vars variable is available to all the BeanShell components that are part of this thread. We will fetch the array from the vars variable in the ‘Array Sorter’ sampler and perform the sort. In the above section we created the ‘Array Sorter’ sampler; now we will write the code to sort the array. Click on the Array Sorter sampler and write the following code in the Script textbox field:
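The ‘Array Initializer’ script itself is shown only in a screenshot, so here is a plausible reconstruction in plain Java. A HashMap stands in for JMeter's vars object (the real JMeterVariables offers putObject()/getObject() with the same semantics); the variable key strArray and the letter order are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

public class ArrayInitializerSketch {
    // Stand-in for JMeter's thread-shared 'vars' object; in a real
    // BeanShell PreProcessor you would call vars.putObject(...) instead.
    static final Map<String, Object> vars = new HashMap<>();

    public static void main(String[] args) {
        // 'Array Initializer' script body, reconstructed: an unordered
        // array of five letters, shared with the rest of the thread via vars.
        String[] strArray = {"d", "a", "e", "c", "b"};
        vars.put("strArray", strArray); // vars.putObject("strArray", strArray) in JMeter
    }
}
```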

First, we get the array using the getObject() method of the vars variable. Then we sort it using the Arrays class of Java: the sort() method of that class takes our array as a parameter and performs the sort. We then convert the array into a string by calling the Arrays.toString() method. Arrays is a utility class provided by the JDK to perform certain useful operations on array objects. We then store this sorted string as the response data through the SampleResult variable. Our sorted string will look like the following: [a, b, c, d, e].
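The sort step can be sketched in plain Java (the array values are the same assumed letters as above; the SampleResult handling is left out):

```java
import java.util.Arrays;

public class ArraySorterSketch {
    // Mirrors the 'Array Sorter' sampler script: sort the shared array in
    // place and render it as a single string, which the sampler would then
    // store as its response data.
    static String sortToString(String[] strArray) {
        Arrays.sort(strArray);
        return Arrays.toString(strArray);
    }

    public static void main(String[] args) {
        System.out.println(sortToString(new String[]{"d", "a", "e", "c", "b"}));
        // prints: [a, b, c, d, e]
    }
}
```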

2.3. Configuring BeanShell PostProcessor

The BeanShell PostProcessor will strip the unnecessary characters like ‘[’, ‘]’ and ‘,’. This component acts more like a filter. Right click on the Array Sorter sampler and select Add -> Post Processors -> BeanShell PostProcessor.

We will name the component ‘Array Filter’. The Script textbox field contains the code that strips the unnecessary characters from our string. If you recall, the string was stored as response data by the Array Sorter sampler. Here we fetch the string using the getResponseDataAsString() function of the prev variable. Next, we use the replace() method of the String class to strip the ‘[’, ‘]’ and ‘,’ characters from the string. We then store the resulting string in the vars variable; it will be used by the BeanShell Assertion component to assert the final result.
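A sketch of the filter logic in plain Java. The article names only ‘[]’ and ‘,’ as stripped characters; removing the separator spaces as well is an assumption, needed to reach ‘abcde’:

```java
public class ArrayFilterSketch {
    // Mirrors the 'Array Filter' PostProcessor: strip '[', ']', ',' and
    // the separator spaces from the sampler's response string.
    static String strip(String response) {
        return response.replace("[", "")
                       .replace("]", "")
                       .replace(",", "")
                       .replace(" ", "");
    }

    public static void main(String[] args) {
        System.out.println(strip("[a, b, c, d, e]")); // prints: abcde
    }
}
```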

2.4. Configuring BeanShell Assertion

Using this component, we will assert that the final result value is ‘abcde’. Right click on the Array Sorter sampler and select Add -> Assertions -> BeanShell Assertion.

Using the vars variable, we get the final string and store it in the finalString variable. Then we assert: if the final string does not contain the value ‘abcde’, we set the Failure variable to true and provide the failure message using the FailureMessage variable. The output of the test execution can be seen in the command window from which you started the JMeter GUI. Below is the console output after running our tests.
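The assertion script can be approximated like this; JMeter exposes Failure and FailureMessage as script variables, represented here as plain fields, and the exact message text is made up:

```java
public class SortAssertionSketch {
    // Stand-ins for the BeanShell Assertion's Failure / FailureMessage variables
    static boolean failure = false;
    static String failureMessage = "";

    // Mirrors the assertion logic: flag a failure when the filtered
    // string does not contain the expected sorted value.
    static void check(String finalString) {
        if (!finalString.contains("abcde")) {
            failure = true;
            failureMessage = "Expected sorted string 'abcde' but got: " + finalString;
        }
    }

    public static void main(String[] args) {
        check("abcde");
        System.out.println(failure ? failureMessage : "assertion passed");
    }
}
```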

3. Conclusion

The BeanShell scripting language provides scripting capabilities to the Java language. In JMeter, you can use the different BeanShell components to write test scripts and execute them. Each component is equipped with useful variables that can be used in the scripts to control the test flow. The scripting feature adds a powerful and useful dimension to the JMeter testing tool. The objective of this article was to show the usage of the common BeanShell components and how one can write test scripts to execute tests.


JMeter Blog Series: Random Variable Example

Here's the beginning of the series on load/performance testing using JMeter.

In this example, we will demonstrate how to configure Random Variable in Apache JMeter. We will go about configuring a random variable and apply it to a simple test plan. Before we look at the usage of Random variables, let’s look at the concept.

1. Introduction

Apache JMeter is an open-source Java-based tool that enables you to perform functional, load, performance, and regression tests on an application. The application may be running on a Web server or it could be standalone in nature. It supports testing on both client-server and web models containing static and dynamic resources. It supports a wide variety of protocols for conducting tests, including HTTP, HTTPS, JDBC, FTP, JMS, LDAP, SOAP, etc.
A quick look at some of the features

  • It provides a comprehensive GUI-based workbench to build and experiment with tests. It also allows you to work in a non-GUI mode, and JMeter can be deployed on servers to perform tests in a distributed environment.
  • It provides the concept of templates, which are pre-defined test plans for various schemes or protocols that can be used directly to create your required test plan.
  • It enables you to build a test plan structurally, using powerful features like Thread Groups, Controllers, Samplers, and Listeners.
  • It provides debugging and error monitoring through effective logging.
  • It supports parameterized testing through the concept of variables.
  • It supports the creation of different flavors of test plans, including Web, Database, FTP, LDAP, Web service, JMS, and Monitors.
  • It allows for remote testing, with different JMeter instances running as servers across nodes, accessed from a single client application.
  • It gives you real-time test results covering metrics like latency, throughput, response times, and active threads.
  • It enables you to perform testing based on regular expressions, among many other features.

1.1. Random Number Generation

Most programming languages today have an API for generating random numbers. The generator algorithm typically produces a sequence of numbers that is arbitrary and does not follow any order, structure, or format. The algorithm derives its randomness from a value called a seed, which drives the sequence generation: two generators with the same seed will always produce the same sequence. This seed-based approach is also termed pseudo-random number generation.
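The seed behavior is easy to observe with Java's own java.util.Random (the seed values below are arbitrary):

```java
import java.util.Arrays;
import java.util.Random;

public class SeedDemo {
    // Draw a few values from a generator initialized with the given seed
    static int[] draw(long seed, int count) {
        Random rnd = new Random(seed);
        int[] out = new int[count];
        for (int i = 0; i < count; i++) {
            out[i] = rnd.nextInt(100);
        }
        return out;
    }

    public static void main(String[] args) {
        // Two generators seeded identically produce identical sequences
        System.out.println(Arrays.equals(draw(42L, 5), draw(42L, 5))); // true
        // A different seed will (almost certainly) yield a different sequence
        System.out.println(Arrays.equals(draw(42L, 5), draw(7L, 5)));
    }
}
```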

1.2. Random Variable in JMeter

JMeter allows you to generate random number values and store them in a variable. You can do so through the Random Variable config element, which allows you to set the following parameters:

  • Variable name: You can provide the name of the variable that can be used in your test plan elements. The random value will be stored in this variable.
  • Format String: You can specify the format of the generated number. It can be prefixed or suffixed with a string. For example, if you want the generator to produce alphanumeric values, you can specify a format like SALES_000 (the 000 will be replaced with the generated random number).
  • Minimum and Maximum value: You can specify the range within which the numbers are to be generated. For example, if the minimum is set to 10 and the maximum to 50, the generator will produce numbers within that range.
  • Per Thread (User): You can specify whether the random generator will be shared by all the threads (users) or whether each thread will have its own instance of the generator. This is indicated by setting the field to false or true respectively.
  • Random Seed: You can also specify the seed value for your generator. If the same seed is used for every thread (Per Thread is set to true), then it will produce the same sequence for each thread.
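Under stated assumptions, the element's computation can be approximated in a few lines of Java. JMeter's Format String is based on a DecimalFormat-style pattern, and the range is inclusive; the quoting of the SALES_ prefix below is how plain java.text.DecimalFormat requires literal text containing pattern characters, not necessarily how JMeter's field expects it.

```java
import java.text.DecimalFormat;
import java.util.Random;

public class RandomVariableSketch {
    // Approximates the Random Variable config element: a value in
    // [min, max] (inclusive), rendered through a DecimalFormat pattern.
    static String next(Random rnd, int min, int max, String formatPattern) {
        int value = min + rnd.nextInt(max - min + 1);
        return new DecimalFormat(formatPattern).format(value);
    }

    public static void main(String[] args) {
        Random rnd = new Random(); // unseeded, like leaving Random Seed blank
        // The SALES_000 example from above: 000 is replaced by the number
        System.out.println(next(rnd, 10, 50, "'SALES_'000"));
    }
}
```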

2. Random Variable By Example

We will now configure the Random Variable config element. Finding test cases for random variables is always a tricky affair. You may have a test case that tests the random number itself: whether it is in the proper range, or whether its format is valid. Another test case could be one where you need to provide some random number as part of a URL, say an order ID (orderId=O122) or page numbers for pagination; load testing may be best suited for such URL pages. We will use the configured variable in an HTTP Request Sampler as part of the request URL. As part of this example, we will test the Java category pages (1 – 10) of the JCG website.
The page number in the URL will be fetched using the random variable.

2.1. JMeter installation and setup

Before installing JMeter, make sure you have JDK 1.6 or higher installed. Download the latest release of JMeter from the official download page. At the time of writing this article, the current release of JMeter is 2.13. To install, simply unzip the archive into the home directory where you want JMeter installed. Set the JAVA_HOME environment variable to point to the JDK root folder. After unzipping the archive, navigate to the <JMeter_Home>/bin folder and run the command jmeter. On Windows, you can run it from a command window. This will open the JMeter GUI, which will allow you to build the test plan.

2.2. Configuring Random Variable

To configure a Random Variable, we have to make use of the Config Element option. Right click on Test Plan and select Add -> Config Element -> Random Variable.

We will give the element the name ‘Page Counter Variable’. The Variable Name is ‘page_number’; the page_number variable will be used in our test plan later. Keep the Output Format blank. We will set the Minimum Value and Maximum Value fields to 1 and 10 respectively, which means the generated numbers will fall between 1 and 10 (both inclusive). Keep the Seed option blank. Retain the value of the Per Thread (User) field as False, meaning that if you configure multiple threads, all of them will use this same random generator instance.
Next, we will create a ThreadGroup named ‘Single User’ with the Loop Count set to ‘10’. We will use only 1 thread (user) for this example; you could experiment with multiple threads to simulate a load test. The main objective of this article is to show how to configure and use a random variable, so we will keep it to a simple 1-user test. A loop count of 10 will repeat the test ten times per user.

For our ThreadGroup we will create an HTTP Request sampler named ‘JCG Java Category’.

It will point to the JCG server. Set the Path value as /category/java/page/${page_number}. Notice the use of our variable ${page_number}: as this test is repeated 10 times (loop count), at runtime the page_number variable will be substituted with random values in the range of 1 to 10.
You can view the result of the test by configuring the View Results Tree listener. Run the test and you will see the following output.

As you can see, every request generates a random page value in the URL.

3. Conclusion

The random variable feature can be handy when you want to load test several pages whose URL parameter values can be substituted dynamically at runtime. You could also devise other use cases for random variables. This article provided a brief insight into the Random Variable feature of JMeter.
