Fundamentals of testing microservices architecture

Increased digital adoption has pushed the need for speed to the forefront. The ability to conceptualize, develop, and launch new products, and to iterate and improve them well ahead of the competition to gain customer mindshare, has become the critical driver of growth. The adoption of agile principles and the movement towards scrum teams for increased agility are steps in this direction. Disruptive changes have also taken place on the application front, with the 3-tier architecture of the late 90s and the 2-tier monolithic architectures that followed giving way to one based on microservices.

A single codebase made the monolithic architecture less risky but slow to change, the exact opposite of a services-based architecture. A microservices architecture makes it easier for multiple development teams to make changes to the codebase in parallel. Transforming an application into a distributed set of services that are highly independent, yet interdependent, makes it possible to create new services and functionality, and to modify existing services, without impacting the overall application. These changes can be delivered by teams spread across geographies, and it is far easier for them to understand individual functional modules than a humongous application codebase. However, the highly distributed nature of these services also gives them many more ways to fail.

Breaking it down

At its core, a microservices architecture comprises three layers – a REST layer that allows the service to expose APIs, a service layer, and a database layer. A robust testing strategy needs to cover all these layers and ensure that issues do not leak into production. The further an issue travels across stages, the greater its impact, since more teams are affected. Hence the test plan must cover multiple types of testing, such as service testing, subsystem testing, client acceptance testing, and performance testing. The paragraphs that follow outline key aspects of service-level testing and integration testing in a microservices-based application landscape.

In service-level testing, each service forming part of the application architecture needs to be validated. Each service has dependencies on other services and transmits information to them as needed. In a monolithic architecture, connections are established from one class to another within the same Java Virtual Machine (JVM), so the chances of failure are far lower. In a services architecture, however, the services are distributed, and reaching another service requires a network call, which adds complexity.

Functional Validation: The primary goal of service testing is validating the functionality of a service. Key to this is understanding all the events the service handles through both internal and external APIs. At times this calls for simulating certain events to ensure the service handles them properly. Collaboration with the development team is essential to understand the incoming events a service handles as part of its functionality. A key element of functional validation – API contract testing – tests the request and response payloads along with areas like pagination and sorting behaviors, metadata, etc.
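
As an illustration, a minimal contract check can assert on the status code, content type, and the presence of expected payload fields. The sketch below uses Java's built-in HttpClient (Java 11+); the endpoint, port, and field name are hypothetical placeholders, not part of any specific service:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class OrderServiceContractCheck {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Hypothetical endpoint of the service under test
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/orders/42"))
                    .header("Accept", "application/json")
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // Contract checks: status code, content type, and an expected payload field
            check(response.statusCode() == 200, "expected HTTP 200");
            check(response.headers().firstValue("Content-Type").orElse("").contains("application/json"),
                    "expected a JSON response");
            check(response.body().contains("\"orderId\""), "payload should expose an orderId field");
            System.out.println("Contract checks passed");
        }

        private static void check(boolean condition, String message) {
            if (!condition) throw new AssertionError(message);
        }
    }

In practice, teams typically express such checks with a test framework and a JSON parser rather than string matching, but the substance is the same: pin down the pieces of the API that clients rely on.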

Compatibility: Another important aspect is recognizing and preventing backward compatibility issues. These arise when a changed version of a service is launched and breaks existing clients running in production. Changes to API contracts need to be evaluated in detail to understand whether they are mandatory and capable of breaking production clients. Adding a new attribute or parameter may not qualify as a breaking change; however, changes to the response payload, behavior, error codes, or datatypes can break clients. A change in a value typically changes the logic behind it as well. Such changes need to be uncovered much earlier in the service testing lifecycle.
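
For instance, consider a hypothetical response payload (the field names here are purely illustrative):

    { "id": 42, "name": "Alice", "status": "ACTIVE" }

Adding a new optional attribute is usually non-breaking:

    { "id": 42, "name": "Alice", "status": "ACTIVE", "email": "alice@example.com" }

Renaming a field or changing a datatype, on the other hand, breaks any client that parses the old shape:

    { "id": "42-A", "fullName": "Alice", "status": 1 }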

Dependencies: Another area of focus is external dependencies, where one would test both incoming and outgoing API calls. Since these tests depend heavily on the availability of other services, and hence on other teams, there is a strong need to remove that dependency through the use of mocks. Having conversations with developers and getting them to insert mocks while creating individual services enables dependencies to be tested without waiting for the real service to be available. It is imperative that the mocks are easily configurable without needing access to the codebase. Mocks also make automation easier, giving teams the ability to run independently with no extra configuration.
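
Dedicated tools such as WireMock are commonly used for this; purely as a self-contained sketch, even the JDK's built-in HttpServer can stand in for an unavailable dependency. The port, path, and canned payload below are assumptions for illustration:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class InventoryServiceMock {
        public static void main(String[] args) throws Exception {
            // Stand-in for a hypothetical downstream inventory service
            HttpServer server = HttpServer.create(new InetSocketAddress(9090), 0);
            server.createContext("/inventory/42", exchange -> {
                byte[] body = "{\"sku\":42,\"inStock\":true}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
            System.out.println("Mock inventory service listening on port 9090");
        }
    }

The service under test can then be pointed at localhost:9090 through configuration alone, keeping the mock configurable without touching the codebase.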

Bringing it all together

Once each service has been tested for its functionality, the next step is to move on to validating how the various collaborating services work together end to end. Known as subsystem testing or integration testing, this tests the functionality exposed by the services as a whole. Understanding the architecture or application blueprint through discussions with the development team is paramount at this stage. Further, there is a strong need to use real services deployed in the integration environment rather than the mocks previously used for external dependencies.

As part of integration testing, there is a need to validate that the services are wired correctly and talking to each other. The event streams and inter-service API calls need to be configured properly so that the inter-service communication channels work as intended. If service-level functional testing has been thorough, the chances of finding errors at this stage are minimal, since the mocks created during functional testing will have ensured that the services themselves function properly.

Looking in depth, we find that the testing strategies for microservices are not radically different from those adopted for a monolithic application architecture. The fundamental difference lies in the way the interdependencies and communication between the multiple services forming the larger application are tested, to ensure that the application as a whole functions in line with expectations.

Poor application performance can be fatal for your enterprise: avoid app degradation with application performance testing

If you’ve ever wondered ‘what can possibly go wrong’ after creating a foolproof app, think again. The Democrats’ Iowa Caucus voting app is a case in point. The Iowa caucus post-mortem pointed to a flawed software development process and insufficient testing.

The enterprise software market is predicted to touch US$230,134.0 million in 2021, and revenue is expected to grow at a CAGR of 9.1%, leading to a market volume of US$326,285.5 million by 2025. It is important that enterprises work aggressively to get their application performance testing efforts on track, so that all the individual components that go into the making of an app respond well and deliver a better customer experience.

Banking app outages have also been rampant in recent times, putting the spotlight on the importance of application performance testing. Customers of Barclays, Santander, and HSBC suffered immensely when their mobile apps suddenly went down. It’s not as if banks worldwide are not digitally equipped: they dedicate at least 2-3 percent of their revenue to information technology, along with additional spending on building a superior IT infrastructure. What they also need is early and continuous performance testing to address and minimize the occurrence of such issues.

It is important that the application performs well not just when it goes live but later too. We give you a quick lowdown on application performance testing to help you gear up to meet modern-day challenges.

Application performance testing objectives

In general, users today have little or no tolerance for bugs or poor response times. Faulty code can also create serious bottlenecks that eventually lead to slowdowns or downtime. Bottlenecks can arise from CPU utilization, disk usage, operating system limitations, or hardware issues.

Enterprises, therefore, need to conduct performance testing regularly to:

  • Ensure the app performs as expected
  • Identify and eliminate bottlenecks through continuous monitoring
  • Identify & eliminate limitations imposed by certain components
  • Identify and act on the causes of poor performance
  • Minimize implementation risks

Application performance testing parameters

Performance testing is based on various parameters that include load, stress, spike, endurance, volume, and scalability. Resilient apps can withstand increasing workloads, high volumes of data, and sudden or repetitive spikes in users and/or transactions.

As such, performance testing ensures that the app is designed keeping peak operations in mind and all components comprising the app function as a cohesive unit to meet consumer requirements.
No matter how complex the app is, performance testing teams are often required to take the following steps:

  • Setting the performance criteria – Performance benchmarks need to be set and criteria should be identified in order to decide the course of the testing.
  • Adopting a user-centric approach – Every user is different, and it is always a good idea to simulate a variety of end-users, imagine diverse scenarios, and test the corresponding use cases. You would therefore need to factor in expected usage patterns, peak times, the length of an average session within the application, how often users use the application in a day, which screens are used most, etc.
  • Evaluating the testing environment – It is important to understand the production environment, the tools available for testing, and the hardware, software, and configurations to be used before beginning the testing process. This helps us understand the challenges and plan accordingly.
  • Monitoring for the best user experience – Constant monitoring is an important step in application performance testing. It answers the ‘what, when, and why’, helping you fine-tune the performance of the application. How long the app takes to load, how the latest deployment compares to previous ones, and how well the app performs while backend processes run are among the things you need to assess. It is important that you leverage your performance scripts well with proper correlations, and monitor performance baselines for your database to ensure it can manage fresh data loads without diluting the user experience.
  • Re-engineering and re-testing – The tests can be rerun as required to review and analyze results, and fine-tune again if necessary.

Early Performance Testing

Test early. Why wait for users to complain when you can proactively run tests early in the development lifecycle to check application readiness and performance? In the current (micro)service-oriented architecture approach, as soon as a component or an interface is built, performance testing at a smaller scale can uncover issues with respect to concurrency, response time/latency, SLAs, etc. This allows us to identify bottlenecks early and gain confidence in the product as it is being built.

Performance testing best practices

For the app to perform optimally, you must adopt testing practices that can alleviate performance issues across all stages of the app cycle.

Our top recommendations are as follows:

  • Build a comprehensive performance model – Understand your system’s capacity so you are ready for concurrent users, simultaneous requests, response times, system scalability, and user satisfaction. App load time, for instance, is a critical metric irrespective of the industry you belong to. Mobile app load times can hugely impact consumer choices, as highlighted in an Akamai study which suggests conversion rates drop by half and bounce rates increase by 6% if mobile site load time goes up from one second to three. It is therefore important that you factor in the changing needs of customers to build trust and loyalty and offer a smooth user experience.
  • Update your test suite – The pace of technology is such that new development tools will debut all the time. It is therefore important for application performance testing teams to ensure they sharpen their skills often and are equipped with the latest testing tools and methodologies.

An application may boast of incredible functionality, but without the right application architecture, it won’t impress much. Some of the best brands have suffered heavily due to poor application performance. While Google lost about $2.3 million due to the massive outage that occurred in December 2020, AWS suffered a major outage after Amazon added a small amount of capacity to its Kinesis servers.

So, the next time you decide to put your application performance testing efforts on the back burner, you might as well ask yourself ‘what would be the cost of failure’?

Tide over application performance challenges with Trigent

With decades of experience and a bunch of the finest testing tools, our teams are equipped to help you across the gamut of application performance right from testing to engineering. We test apps for reliability, scalability, and performance while monitoring them continuously with real-time data and analytics.

Allow us to help you lead in the world of apps. Request a demo now.

Improve the quality of digital experiences with Performance Engineering

Quality at the heart of business performance

“In 2020, the key expectation is fast, reliable, and trustworthy software.” *

As businesses embrace the Agile/DevOps culture and the emphasis on CI/CD grows, quality assurance is often still treated as a standalone task, limited to validating the functionality implemented. When QA and testing are an afterthought in an Agile/DevOps culture, the result is a subpar consumer experience, followed by an adverse impact on the revenue pipeline. Poor customer experience also directly impacts brand credibility and business equity. While UI/UX are the visible elements of the customer experience, product or service performance is a critical element that is often neglected. Performance testing identifies the gaps that are then addressed through performance engineering.

Small steps, significant gains – the journey towards Performance Engineering

The deeper issue lies in the organization’s approach towards quality and testing – it is considered an independent phase rather than a collaborative, integrated practice. Performance engineering is a set of methodologies that identifies potential risks and bottlenecks early in the development stage of the product and addresses them. Since performance is an essential ingredient of product quality, there is a deeper need for a change in thinking: to think proactively, anticipate issues early in the development cycle, test for them, and deliver a quality experience to the end consumer. An organization that makes gradual changes in its journey towards performance engineering stands to gain significantly. The leadership team, product management, and engineering and DevOps at different levels all need to take a shift-left approach towards performance engineering.

Make Performance Engineering your strategic priority today

Despite the obvious advantages, performance testing is typically a reactive measure that is addressed after the initial launch. However, organizations need to embrace performance engineering measures right from the design phase, start small, and take incremental steps towards change.

Covid-19 has rapidly changed the way consumers behave globally. Businesses caught onto remote working; consumers moved shopping, entertainment, banking, learning, and medical consultations online. Consider the quantum jump in usage triggered by the pandemic.

The dramatic increase in the use of digital services has covered decades in days.**

Companies that adopted scalability- and performance-centric design have moved swiftly to capture the market opportunity.

With multiple user-interfaces across sectors being the norm and the increasing complexity of digital experiences, it is critical for businesses to get it right the first time in order to gain and retain customers’ trust.

As cloud migrations continue, whether rehosting an app on IaaS or rebuilding with a new approach, performance engineering ensures that migrated systems withstand sudden surges in usage. According to a Sogeti and Neotys report, 74% of load testing infrastructure is operated in the cloud today. Cloud infrastructure providers ensure reliability, but they may not be aware of the performance metrics that matter to the business and their impact. As organizations move from monolithic systems to distributed architectures provided by an assortment of companies, corporate leaders need to recognize the importance of performance engineering and embrace it to deliver the right solutions the first time.

Our approach to Performance Engineering philosophy

At Trigent, we put the customer experience at the heart of planning the entire testing cycle. Our performance engineering practices align with ‘metrics that matter’ to businesses in the DevOps framework. While testing identifies the gaps in performance, the onus of architecting it right lies on the DevOps engineering team with proactive inputs from QA and Testing.

Performance engineering is also a way of thinking: the ability to plan for performance at design time, right at the beginning. As for quality, besides testing for functionality, anticipating potential bottlenecks helps us assess the system in its entirety from the beginning.

Asking some of these customer-centric questions early on shifts the perspective right at the outset. Ask them early, and you’re on your way to a performance engineering culture.

Parameters that matter

‘Will my application meet the defined response-time requirements of my customers?’

Consider an app that doesn’t respond within the standards the customer expects; the chances of that application making it onto the customer’s phone screen are pretty slim.

‘Will the application handle the expected user load and beyond?’

An application that tested well with 10 users may fail when that number is multiplied by a thousand or two.

We take the viewpoints of multiple stakeholders, consider parameters that matter to the customer, and assess impact early on.

Customer experience matters

Performance Engineering takes into account the overall experience of the end-user and their environment.

Asking pertinent questions such as ‘Will my users experience acceptable response times, even during peak hours?’ or ‘Does the application respond quickly enough for the intended users?’ does well to anticipate potential pitfalls in network usage and latency.

‘Where are the bottlenecks in my multi-user environment?’

Understand the real environment of the user and their challenges to provide a quality user experience.

Early Focus

The non-functional aspects are integrated into the DevOps pipeline, and an early focus on performance gives us insights into architectural issues.

‘How can we optimize the multi-user application before it goes live?’
‘How can we detect errors that only occur under real-load conditions?’

Quick course corrections help optimize performance and make the product market-ready. Besides faster deployment, quality assurance gives our clients an added advantage of reduced performance costs.

Architect it right

‘What system capacity is required to handle the expected load?’
‘Will the application handle the number of transactions required by the business?’

Important questions like these focus on architecting the product for performance. As part of the performance engineering methodology, our teams consistently check and validate the capabilities at the time of developing the product or scaling it up. We take the shift-left and shift-right approach to anticipate, identify, and remove bottlenecks early on. Getting the architecture right enables us to deliver and deploy a high-quality product every time.

Performance engineering done right is sure to improve planning-to-deployment time and deliver high-quality products. It also reduces performance costs arising from unforeseen issues. A step-by-step approach to testing helps organizations move towards performance engineering maturity. Talk to our experts for scalable performance engineering solutions for your business.

Learn more about Trigent software testing services.


Reference:
* The State of Performance Engineering 2020 – A Sogeti and Neotys report
** Meet the next-normal consumer – A McKinsey & Company report

Trigent excels in delivering Digital Transformation Services: GoodFirms

GoodFirms maintains a directory of researched companies and reviews from genuine, verified service buyers across the IT industry. The companies are examined on the crucial parameters of Quality, Reliability, and Ability, and are ranked on the same. This helps customers choose and hire companies by bridging the gap between the two.

They recently evaluated Trigent on the same parameters, after which they found that the firm excels in delivering IT services, mainly:


Keeping Up with Latest Technology Through Cloud computing

Cloud computing technology has simplified the process of meeting the changing demands of clients and customers. Companies that are early adopters of changing technologies always gain a cutting edge in the market. Trigent’s cloud-first strategy is designed to meet clients’ needs by driving acceleration, customer insight, and connected experience to take businesses to the next orbit of cloud transformation. Their team exhibits the highest potential in cloud computing, improving business results across key performance indicators (KPIs). The Trigent team emphasizes productivity, operational efficiency, and growth that increases profitability.

The team possesses years of experience and works attentively on their clients’ cloud adoption journeys. The professionals bring all their knowledge to the table to deliver the best services. This way, clients can seamlessly achieve their goals and secure their place as modern cloud-based enterprises. Their vigorous efforts have placed them among the top cloud companies in Bangalore on the GoodFirms website.

Propelling Business with Software Testing

Continuous effort and innovation are essential for businesses to stay ahead in a competitive market. The Trigent team offers next-gen software testing services to warrant the delivery of superior-quality, release-ready software products. The team uses agile – continuous integration, continuous deployment – and shift-left approaches, utilizing validated, automated tools. The team’s expertise covers functional, security, performance, usability, and accessibility testing, extending across mobile, web, cloud, and microservices deployments.

The company caters to clients of all sizes across different industries, and its clients have sustained substantial growth by harnessing its decade-long experience and domain knowledge. From test advisory & consulting, test automation, and accessibility assurance to security testing, end-to-end functional testing, and performance testing, the company holds expertise across the board, bridging the gap between companies and customers using agile methodology. Thus, the company has been dubbed a top software testing company in Massachusetts at GoodFirms.

Optimizing Work with Artificial Intelligence

Artificial intelligence has been the emerging technology for many industries over the past decade. AI is redefining technology by taking it to a whole new level of automation, where machine learning, natural language processing, and neural networks are used to deliver solutions. At Trigent, the team promises to support clients by utilizing AI to provide faster, more effective outcomes. By serving diverse industries with complete AI operating models – strategy, design, development, and execution – the firm is automating tasks. They are focused on empowering brands by adding machine capabilities to human intelligence and simplifying operations.

The AI development teams at Trigent apply resources appropriately to identify and govern processes that empower and innovate business intelligence. With their help in continuous process enhancement and AI feedback systems, many companies have increased productivity and revenues. By helping clients profit from artificial intelligence, the firm should soon rank among the artificial intelligence programming companies listed at GoodFirms.

About GoodFirms

GoodFirms, a maverick B2B research and reviews company, helps in finding Cloud Computing, Testing Services, and Artificial Intelligence firms rendering the best services to their customers. Their extensive research process ranks the companies, boosts their online reputation, and helps service seekers pick the right technology partner for their business needs.

Can your TMS Application Weather the Competition?

The transportation and logistics industry is growing increasingly dependent on diverse transportation management systems (TMS). This is true not only for big shippers but also for small companies, driven by differing rates, international operations, and a competitive landscape. Gartner’s 2019 Magic Quadrant for Transportation Management Systems summarizes the growing importance of TMS solutions: “Modern supply chains require an easy-to-use, flexible and scalable TMS solution with broad global coverage. In a competitive transportation and logistics environment, TMS solutions help organizations to meet service commitments at the lowest cost.”

For TMS solution providers, the path to developing or modernizing applications is not as simple as cruising calm seas. Their challenges are myriad: they must ensure systems that organize quotes seamlessly (no jumping from phone to website), help customers select the ideal carrier based on temperature, time, and load to maximize benefits, and, very importantly, help customers track shipments while managing multiple carrier options and freight. Customers look for answers, and TMS solutions should be able to offer them the best options in carriers. None of this comes easy, and while developing and executing the solution is half the battle, the more critical half lies in ensuring that the system’s functionality, security, and performance remain uncompromised. When looking for a TMS solution, customers look for providers who can present a clear picture of the total cost of ownership. Unpredictability is a no-no in this business, which essentially means the solution must be implemented and tested for 100 percent performance and functionality.

Testing Makes the Difference

The TMS solution providers who will be able to sustain their competitive edge are the ones who have tested their solution from all angles and are sure of its superiority.

In a recent case study that illustrates the importance of testing, a cloud-based trucking intelligence company that provides solutions to help fleets improve safety and compliance while reducing costs invested in a futuristic onboard telematics product. The product manages several processes and functions to provide accurate, real-time information such as tracking fleet vehicles, controlling unauthorized access to the company’s fleet assets, and mapping real-time vehicle locations. The client’s customers learn more about their trucks on the road using pressure monitoring, fault code monitoring, and a remote diagnostics link. The onboard device records and transmits information such as speed, RPMs, idle time, and distance traveled in real time to a central server over a cellular data network.

The data stored in the central server is accessed using the associated web application via the internet. The web application also provides a driver portal for the drivers to know/edit their hours of service logs. Since the system deals with mission-critical business processes, providing accurate and real-time information is key to its success.

The challenge was to set up a test environment for the onboard device that accurately simulated the environment in the truck and the transmission of data to the central server. Establishing appropriate harnesses to test the hardware and software interface was equally challenging. Other challenges included simulating and generating real-time vehicle-movement data using a GPS simulator.

A test lab was set up with various versions of the hardware and software and integration points with simulators. With use-case methodology and user interviews, test scenarios were chalked out to test the rich functionality and usage of the device. Functional testing and regression testing of new releases for both the onboard equipment and web application were undertaken. For each of the client’s built-in products, end-to-end testing was conducted.

As a result of the testing services, the IoT platform experienced shortened functional release cycles. The comprehensive test coverage ensured better GPS validation, reduced prevention costs through the identification of holistic test cases, and reduced detection costs through pre-emptive tests like integration testing.

Testing Integral to Functional Superiority for TMS 

As seen in the case study above, developing, integrating, operating, and maintaining a TMS is a challenging business. There are several stakeholders and a complex process involving integrated hardware, software, people, and processes performing myriad functions, making the TMS’s performance heavily reliant on how well they all work together. Adding complexity are the input/output of data, command and control, data analysis, and communication. Given this complexity and how critical its functioning is to managing shipping and logistics, testing is an essential aspect of a TMS.

Testing TMS solutions from the functional, performance, design, and implementation aspects will ensure that:

  • Shipping loads are accurate, and there are no unwelcome surprises
  • Mobile status updates eliminate human intervention and provide real-time updates
  • Electronic record management keeps the workflow smooth and accurate
  • Connectivity information eliminates issues with shift changes and visibility
  • API integration communicates seamlessly with customers
  • Risk is managed for both the TMS and the system’s partners/vendors

TMS software providers need to offer new features and capabilities faster to be competitive, win more customers, and retain their business. Whether it relates to seamless dispatch workflows, freight billing or EDI, Trigent can help. Know more about Trigent’s Transportation & Logistics solutions

Getting Started with Load Testing of Web Applications using JMeter

Apache JMeter:

JMeter is one of the most popular open-source testing tools for load and performance testing services. It simulates browser behavior, sending requests to the web or application server under different loads. When volume testing using JMeter on your local machine, you can scale up to approximately 100 virtual users, but you can go beyond 1,000,000 virtual users with CA BlazeMeter, which is essentially JMeter in the cloud.

Downloading and Running the Apache JMeter:

Requirements:

Since JMeter is a pure Java-based application, the system should have Java 8 or a higher version installed.

Check for Java installation: go to the command prompt and type `java -version`; if Java is installed, it will show the Java version as below.


If Java is not installed, download and install Java from the following link: http://bit.ly/2EMmFdt

Downloading JMeter:
  • Download the latest version of JMeter from the Apache JMeter website
  • Click on apache-jmeter-3.3.zip under Binaries.

How to Run the JMeter:

You can start JMeter in 3 ways:

  • GUI Mode
  • Server Mode
  • Command Line

GUI Mode: Extract the downloaded zip file into any of your drives, go to the bin folder (for example D:\apache-jmeter-3.3\bin) and double-click the “jmeter” Windows batch file.

The JMeter GUI will then appear, as shown below:

Before you start recording the test script, configure the browser to use the JMeter Proxy.

How to configure Mozilla Firefox browser to Use the JMeter Proxy:

  • Launch the Mozilla Firefox browser–> click on Tools Menu–> Choose Options
  • In Network Proxy section –> Choose Settings
  • Select Manual Proxy Configuration option
  • Enter the HTTP Proxy value as localhost, or enter your local system’s IP address.
  • Enter the port as 8080, or change the port number if port 8080 is not free
  • Click on OK. Now your browser is configured with the JMeter proxy server.

Record the Test Script of Web Application:

1. Add a Thread Group to the Test Plan: The Test Plan is our JMeter script, and it describes the flow of our load test.

Select the Test plan –> Right click–> Add–> Threads (Users) –> Thread Group

Thread Group:

The thread group describes the user flow and simulates how users will behave on the app.

The thread group has three important properties that influence the load test (a sketch of how they appear in the saved test plan follows the list):

  • Number of threads (users): The number of virtual users that JMeter will attempt to simulate, for example 1, 10, 20, or 50.
  • Ramp Up Period (in seconds): The time allowed for the Thread Group to go from 0 to n (20 here) users, say 5 seconds.
  • Loop count: The number of times to execute the test; 1 means the test executes once.
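
For reference, here is a rough sketch of how these three properties appear in the saved .jmx file; JMeter test plans are XML, and the element and property names below are approximate, based on the JMeter 3.x format:

    <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group">
      <stringProp name="ThreadGroup.num_threads">20</stringProp>  <!-- virtual users -->
      <stringProp name="ThreadGroup.ramp_time">5</stringProp>     <!-- seconds to reach full load -->
      <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
        <stringProp name="LoopController.loops">1</stringProp>    <!-- run the test once -->
      </elementProp>
    </ThreadGroup>
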
2. Add a Recording Controller to the thread group: The Recording Controller will hold all the recorded HTTP request samples.

Select the thread group –> right click–> Add –> Logic Controller –> recording controller

3. Add the HTTP Cookie Manager to the thread group:

The HTTP Cookie Manager handles the cookies used by your web app.

4. Add a View Results Tree to the thread group: The View Results Tree is used to see the status of the HTTP sample requests when the recorded script is executed.

Thread group –> Add–> Listeners–> View Result Tree

5. Add Summary Report: Summary report will show the test results of the script

Thread group –> Add –> Listeners –> Summary Report.

6. Go to the WorkBench and Add the HTTP(S) Test Script Recorder: Here you can start your test script recording.

WorkBench –> Right click –> Add–> Non Test Elements –> HTTP(S) Test Script Recorder.

Check whether port 8080 (this should be the same port number we set in the browser) is available or busy on your system. If it’s busy, change the port number.

7. Finally, click the Start button; when the popup appears, click OK.

8. How to record file uploads from the web app:

If your test script includes a browse/file-upload option, keep the files in JMeter’s bin folder and record the browse step.

Go to the Firefox browser and start your test, for example a login page or any navigation; do not close JMeter while recording the script. The script will be recorded under the Recording Controller as below.

Save the Test Plan with .jmx extension.

Run the Recorded Script: Select the Test Plan, then press Ctrl+R or click the Start button in JMeter.

While the script is executing, a green circle is displayed at the top right corner along with a timer box showing how long the script has been running. Once execution completes, the green circle turns grey.

Test Results: We can view the test results in several ways: View Results Tree, Summary Report, and Aggregate Graph.

View Result Tree

Summary Report:

After executing the test script, go to Summary Report, click on Save Table Data, and save the results in .csv or .xlsx format.

Though we get the test results in the graphical view, summary report, etc., executing test scripts using the JMeter GUI is not a good practice. I will discuss executing JMeter test scripts with the Jenkins integration tool in my next blog.
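
As a preview, a recorded plan can be run in non-GUI mode from the command line; a typical invocation with JMeter 3.x looks like the following, where TestPlan.jmx and the output names are placeholders:

    jmeter -n -t TestPlan.jmx -l results.jtl -e -o report

Here -n runs JMeter without the GUI, -t points to the test plan, -l writes the raw results, and -e/-o generate an HTML dashboard report into the given folder.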

Read other blogs on load testing:

Load/Performance Testing Using JMeter

Load performance testing using JMeter

Software reliability testing, or load performance testing, is a field within software testing & performance testing services that tests a software product’s ability to function under given environmental conditions for a particular period of time. While there are several software testing tools for evaluating performance and reliability, I have focused on JMeter for this blog.

JMeter, or The Apache JMeter™ application as it is popularly known, is open source software. It is a hundred percent pure Java application designed for load testing, functional testing, and performance testing.

Given below are some guidelines on using JMeter to test the reliability of a software product:

Steps to download and install JMeter for load performance testing

JMeter can be downloaded as a simple .zip file from the URL below:

http://jmeter.apache.org/download_jmeter.cgi

The prerequisite is to have Java 1.5 or a higher version already installed on the machine.

Unzip the downloaded file to the required location, go to the bin folder, and double-click the Jmeter.bat file.

If the Jmeter UI opens up, the installation has been successful. If it does not open, Java might not be installed or configured correctly, or the downloaded files may be corrupted.

The next step is to configure Jmeter to listen to the browser. Open the command prompt at the bin location of Jmeter and run the Jmeter batch file. This will open the simple Jmeter UI seen earlier.

If you are accessing the internet via a proxy server, then this will not work when recording scripts. We need to bypass the proxy server, and to do this we start Jmeter with two extra parameters: -H <proxy server name or IP address> and -P <port number>. Adding these will look like the following image:
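
For example, with placeholder values for the proxy host and port:

    jmeter -H myproxy.mycompany.com -P 8000

Here -H names the proxy server and -P its port; substitute your own network’s values.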

The next step is to configure the browser to ‘listen’ to Jmeter while recording the scripts. For this blog, I will be using the Firefox browser.

  1. Open Firefox browser
  2. Go to options->Advanced
  3. Select Network tab and click on Settings
  4. A connection settings tab will open; under it, select ‘Manual proxy configuration’ and set the HTTP proxy to localhost and the port to 8080.

After configuring the proxy settings, we need to install the Jmeter certificate in the browser.

The certificate will be in the Jmeter bin folder, under the file name “ApacheJMeterTemporaryRootCA.crt”. Given below are the steps to install the certificate in the Firefox browser:

  1. Open Firefox browser
  2. Go to options->Advanced
  3. Select certificate tab and click on View certificates
  4. Under Authorities tab click on Import
  5. Go to bin folder of Jmeter and select the certificate under it (ApacheJMeterTemporaryRootCA.crt)
  6. Check the “Trust this CA to identify websites” option and click OK.

Note: If you do not find the certificate in the bin folder, you can generate it by running the HTTP(s) Test Script Recorder. The certificate is generated each time you run the recorder, so there is no need to replace it manually.

Recording of Script for web-based applications:

Steps to record a sample test script:

  1. Open up Jmeter by running Jmeter.bat
  2. Right click on Test Plan->Add->Threads(Users)->Thread Group
  3. Right Click on Thread Group->Add->Logic Controller->Recording Controller
  4. Right Click on WorkBench->Add->Non-Test Elements->HTTP(s) Test Script Recorder
  5. Right Click on HTTP(s) Test Script Recorder->Add->Listener->View Results Tree

After adding the components, Jmeter UI would look like the following image:

  1. Click on HTTP(s) Test Script Recorder and then click Start. The Jmeter certificate will be generated, and a pop-up will inform you of the same. Click the ‘OK’ button on the pop-up.
  2. Open the browser for which the Jmeter proxy is configured, go to the URL of the application under test, and execute the manual test script whose performance you want to determine.
  3. Once you are done with all your manual tests, come back to the Jmeter UI, click on HTTP(s) Test Script Recorder, and click Stop. This will stop the recording.
  4. Click on the Recording Controller to view all your recorded HTTP samples, as follows. You can then rename the Recording Controller.
  5. You can view details of the recorded pages under the View Results Tree listener. Using these values, you can determine the assertions to put in your original run.
  6. Now click on Thread Group and configure the users you want to run. Also right click on Thread Group->Add->Listener->Summary Report, and add a View Results Tree as well.
  7. Make sure to tick the “Errors” checkbox under View Results Tree; otherwise it will take up huge amounts of memory while running your scripts, as it captures almost everything.
  8. Now you can run your recorded scripts! Just click the Run/Play button on the top toolbar of the Jmeter UI.

Analyzing the test run result:

While the scripts are running, results and timings are captured under the Summary Report listener. The summary report will look like the following after your test run has completed. You can save the report by clicking on “Save Table Data”.

Below are details of each keyword:

  • average: The average response time for that particular HTTP request, in milliseconds.
  • aggregate_report_min: The minimum response time taken by the HTTP request, in milliseconds.
  • aggregate_report_max: The maximum response time taken by the HTTP request, in milliseconds.
  • aggregate_report_stddev: The standard deviation, i.e., how much the sample response times deviate from the average.
  • aggregate_report_error%: The percentage of samples that failed during the run.
  • aggregate_report_rate: The number of requests per second the server handles. Larger is better.
  • average_bytes: The average response size in bytes. The lower the number, the better the performance.

Read Other Blogs on Load Testing

Getting Started with Load Testing of Web Applications using JMeter

Performance testing on cloud

Ensuring good application performance is crucial, especially for critical web applications that require fast cycle times and short turnaround times for newer versions. How can such applications be tested optimally without spending a fortune on tools, technology, and people, while still ensuring quality assurance and on-time release? With tightening budgets and time-consuming processes, IT organizations are forced to do more with less.

A judicious combination of tools, the available cloud platforms, and a well-thought-out methodology provides the answer. While it is proven that open source software can reduce software development costs, open source tools saw little use in testing until the cloud computing paradigm became more readily available. Until then, performance testing a large-scale project using test tools on dedicated hardware to model real-world scenarios was an expensive proposition. Cloud testing, however, has changed the game.

Performance testing on the cloud can be broadly classified into two categories:

  1. Cloud Infrastructure for the Test Environment – Performance testing always requires some sophisticated tool infrastructure. The test infrastructure requirements can vary from specific hardware for specific tools to quantities of hardware, licenses, backups, bandwidth, etc. In the past, getting all the required hardware was not only challenging, but in many cases performance was not adequately tested due to missing test tools. With cloud testing, one can focus purely on performance testing and not on the infrastructure. Any tool, be it open source like Grinder or JMeter, or a licensed software product like Silk Performer, can easily be set up to run tests on the AUT (Application Under Test). Some time is required to set up the tool, along with a few test runs to ensure the load injectors (the client machines that generate load) do not become bottlenecks. This environment may be best suited to a typical waterfall model scenario, where the software is evaluated and tuned at the end of the software development cycle.
  2. Cloud as a Test Tool – There are different sets of software testing tools readily available in the cloud as a SaaS model. The test tool is readily available on the cloud, so no setup is required; just subscribe and you are all set to go, saving setup time. Also, their system configuration is optimized to generate the required load without causing bottlenecks. Some of the readily available test tools on the cloud are LoadStorm, CloudTest by SOASTA, BrowserMob, nGrinder, Load-Intelligence (which can use JMeter in the cloud), etc. This environment is better suited to an Agile scenario, where the same tasks need to be performed in smaller iterations from the initial stages of the SDLC. Here you just have the scripts ready, upload them to the cloud, run the test, and once you have the required metrics, you sign off.

Conclusion – A combination of carefully selected testing tools, QA testers, readily available cloud platforms, and a sound performance test strategy can bring the same benefits as conventional methods at a much lower cost.

JMeter Regular Expression Extractor Example

Before you delve into the details of the tool, you can get a bigger picture of the importance of performance engineering, which boosts the quality of digital experiences.

In this example, we will demonstrate the use of the Regular Expression Extractor post processor in Apache JMeter. We will parse and extract a portion of the response data using a regular expression and apply it in a different sampler. Before we look at the usage of the Regular Expression Extractor, let’s look at the concept.

1. Introduction – Apache JMeter

Apache JMeter is an open-source Java-based tool that enables you to perform functional, load, performance, and regression tests on an application. The application may be running on a web server or may be standalone in nature. It supports testing on both client-server and web models containing static and dynamic resources. It supports a wide variety of protocols for conducting tests, including HTTP, HTTPS, JDBC, FTP, JMS, LDAP, SOAP, etc.

A quick look at some of the features:

  • It provides a comprehensive GUI-based workbench to play around with tests. It also allows you to work in a non-GUI mode. JMeter can also be ported to a server, allowing you to perform tests in a distributed environment.
  • It provides the concept of templates, which are pre-defined test plans for various schemes or protocols that can be used directly to create your required test plan.
  • It enables you to build test plans structurally using powerful features like Thread Groups, Controllers, Samplers, Listeners, etc.
  • It provides debugging and error monitoring through effective logging.
  • It supports parameterized testing through the concept of variables.
  • It supports the creation of different flavors of test plans, including Web, Database, FTP, LDAP, Web service, JMS, Monitors, etc.
  • It allows for remote testing by having different JMeter instances running as servers across nodes, accessed from a single client application.
  • It gives you real-time test results covering metrics like latency, throughput, response times, active threads, etc.
  • It enables you to perform testing based on regular expressions, among many other features.

1.1. Regular Expression

A regular expression is a pattern matching language that performs a match on a given value, content, or expression. A regular expression is written as a series of characters that denote a search pattern. The pattern is applied to strings to find and extract the match. Regular expression is often shortened to ‘regex’. Pattern-based searching has become very popular, and it is provided by all the well-known languages like Perl, Java, Ruby, JavaScript, and Python. Regex is commonly used on UNIX operating systems with commands like grep and awk and editors like ed and sed. The regex language uses meta characters like . (matches any single character), [] (matches any one of the enclosed characters), ^ (matches the start position), $ (matches the end position), and many more to devise a search pattern. Using these meta characters, one can write a powerful regex search pattern with combinations of if/else conditions and replace features. A full discussion of regex is beyond the scope of this article; you can find plenty of articles and tutorials on regular expressions on the net.
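
Since JMeter runs on the JVM, a plain Java snippet is a convenient way to see matching and group extraction in action before configuring the extractor. Note that JMeter's extractor uses the Jakarta ORO (Perl5) engine rather than java.util.regex, but the grouping semantics shown here carry over; the pattern and input are illustrative:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class RegexDemo {
        public static void main(String[] args) {
            String response = "Hello World";
            // The parentheses form group 1; group 0 is always the whole match
            Pattern pattern = Pattern.compile("Hello (.+)$");
            Matcher matcher = pattern.matcher(response);
            if (matcher.find()) {
                System.out.println(matcher.group(0)); // prints: Hello World
                System.out.println(matcher.group(1)); // prints: World
            }
        }
    }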

1.2. Regular Expression Extractor

The regular expression (regex) feature in JMeter is provided by the Jakarta ORO framework and is modelled on the Perl5 regex engine. With JMeter, you can use a regex to extract values from the response during test execution and store them in a variable (also called a reference name) for further use. The Regular Expression Extractor is a post processor that can be used to apply a regex to response data. The matched expression derived from applying the regex can then be used dynamically in a different sampler in the test plan execution. The Regular Expression Extractor control panel allows you to configure the following fields:

Apply to: The regex extractor is applied to test results, i.e., the response data from the server. A response from the primary request is considered the main sample, while that of a sub-request is a sub-sample. A typical HTML page (primary resource) may have links to various other resources like images, javascript files, css, etc. These are embedded resources, and requests to them produce sub-samples. The HTML page response itself becomes the primary or main sample. You have the option to apply the regex to the main sample, to sub-samples, or to both.

Field to check: The regex is applied to the response data. Here you choose which type of response field it should match. There are various response indicators or fields available to choose from. You can apply the regex to the plain response body or to a document that is returned as response data. You can also apply the regex to request and response headers, parse the URL, or apply the regex to the response code.

Reference Name: This is the name of the variable that can be referenced later in the test plan using ${}. After applying the regex, the final extracted value is stored in this variable. Behind the scenes, JMeter may generate more than one variable, depending on the matches that occur. If you have defined groups in your regex by providing parentheses (), then it will generate as many variables as there are groups. These variable names are suffixed with the letters _g(n), where n is the group number. When you do not define any grouping in your regex, the returned value is termed the zeroth group, or group 0. Variable values can be checked using the Debug Sampler, which enables you to verify whether your regular expression worked or not.

Regular Expression: This is the regex itself that is applied to the response data. A regex may or may not have a group. A group is a subset of the string that is extracted from the match. For example, if the response data is ‘Hello World’ and my regex is Hello (.+)$, then it matches ‘Hello World’ but extracts the string ‘World’. The parentheses () form the group that is captured or extracted. You may have more than one group in your regex; which one, or how many, to extract is configured through the use of the template. See the point below.

Template: Templates are references or pointers to the groups. A regex may have more than one group. The template allows you to specify which group value to extract by specifying the group number as $1$ or $2$ or $1$$2$ (extract both groups). From the ‘Hello World’ example in the point above, $0$ points to the complete matched expression, ‘Hello World’, and the $1$ group points to the string ‘World’. A regex without parentheses () is matched as $0$ (the default group). Based on the template specified, that group value is stored in the variable (reference name).

Match no.: A regex applied to the response data may produce more than one match. You can specify which match should be returned. For example, a value of 2 indicates that the second match should be returned. A value of 0 indicates that a random match should be returned. A negative value returns all the matches.

Default value: The regex match is assigned to a variable. But what happens when the regex does not match? In such a scenario, the variable is not created or generated. However, if you specify a default value and the regex does not match, the variable is set to that default value. It is recommended to provide a default value so that you know whether your regex worked or not; it is a useful feature for debugging your test.

2. Regular Expression Extractor By Example

We will now demonstrate the use of the Regular Expression Extractor by configuring a regex that extracts the URL of the first article from the JCG (Java Code Geeks) home page. After extracting the URL, we will use it in an HTTP Request sampler to test the same. The extracted URL will be stored in a variable.

Before installing JMeter, make sure you have JDK 1.6 or higher installed. Download the latest release of JMeter using the link here. At the time of writing this article, the current release of JMeter is 2.13. To install, simply unzip the archive into the directory where you want JMeter installed. Set the JAVA_HOME environment variable to point to the JDK root folder. After unzipping the archive, navigate to the /bin folder and run the command jmeter. On Windows, you can run it from the command window. This will open the JMeter GUI window, allowing you to build the test plan.

2.1. Configuring Regular Expression Extractor

Before we configure the regex extractor, we will create a test plan with a ThreadGroup named ‘Single User’ and an HTTP Request sampler named ‘JCG Home’. It will point to the server www.javacodegeeks.com. For more details on creating a ThreadGroup and related elements, you can view the article JMeter Thread Group Example. The image below shows the configured ThreadGroup (Single User) and HTTP Request sampler (JCG Home). Next, we will apply the regex to the response body (main sample). When the test is executed, it will ping the website www.javacodegeeks.com and return the response data, which is an HTML page. This HTML page contains JCG articles, the titles of which are wrapped in <h2> tags. We will write a regular expression that matches the first <h2> tag and extracts the URL of the article. The URL will be part of an anchor <a> tag. Right click on the JCG Home sampler and select Add -> Post Processors -> Regular Expression Extractor.

The name of our extractor is ‘JCG Article URL Extractor’. We will apply the regex to the main sample and directly to the response body (the HTML page). The reference name or variable name provided is ‘article_url’. The regex used is <h2 .+?><a href="http://(.+?)".+?</h2>. We will not go into the details of the regex, as that is a separate discussion altogether. In a nutshell, this regex will match the first <h2> tag and extract the URL from the anchor tag. It will strip the word http:// and extract only the server part of the URL. The extracted portion is placed in parentheses (), forming our first group. The Template field is set to the value $1$, which points to our first group (the URL), and the Match No. field indicates the first match. The Default Value is set to ‘error’. So if our regex fails to match, the variable article_url will hold the value ‘error’. If the regex makes a successful match, the article URL is stored in the article_url variable.

We will use this article_url variable in another HTTP Request sampler named JCG Article. Right click on the Single User ThreadGroup and select Add -> Sampler -> HTTP Request.

As you can see from the above, the server name is ${article_url} which is nothing but the URL that was extracted from the previous sampler using regex. You can verify the results by running the test.

2.2. View Test Results

To view the test results, we will configure the View Results Tree listener. But before we do that, we will add a Debug Sampler to see the variable and its value being generated upon executing the test. This will help you understand whether your regex successfully matched an expression or failed. Right click on Single User ThreadGroup and select Add -> Sampler-> Debug Sampler.

As we want to debug the generated variables, set the JMeter variables field to True. Next, we will view and verify test results using View Results Tree listener. Right click on Single User ThreadGroup and select Add -> Listener -> View Results Tree.

First, let’s look at the output of the Debug Sampler response data. It shows our variable article_url; observe that its value is the URL we extracted. The test has also generated the group variables article_url_g0 and article_url_g1. Group 0 is the entire match, and group 1 is the string extracted from that match; this is also the string stored in our article_url variable. The variable named article_url_g tells you the number of groups in the regex. Our regex contained only one group (note the sole parenthesis pair () in our regex). Now let’s look at the result of our JCG Article sampler:

The JCG Article sampler successfully made the request to the server URL that was extracted using the regex. The server URL was referenced using the ${article_url} expression.

3. Conclusion

The Regular Expression Extractor in JMeter is a significant feature that can parse values from many kinds of response data. These values are stored in variables that can be referenced by other elements of the test plan. The ability to define groups in the regex, capturing portions of the match, makes it even more powerful. Regular expression extraction is best used when you need to parse response text and apply it dynamically to subsequent requests in your test plan. The objective of this article was to highlight the significance of the Regular Expression Extractor and its application in test execution.

JMeter Blog Series: JMeter BeanShell Example

Here’s more about load and performance testing using JMeter.

In this example, we will demonstrate the use of BeanShell components in Apache JMeter. We will go about writing a simple test case using the BeanShell scripting language. These scripts will be part of BeanShell components that we will configure for this example. Before we look at the usage of different BeanShell components, let’s look at the concept.

1. Introduction

Apache JMeter is an open-source Java-based tool that enables you to perform functional, load, performance, and regression tests on an application. The application may be running on a Web server or it could be standalone in nature. It supports testing on both client-server and web models containing static and dynamic resources. It supports a wide variety of protocols for conducting tests, including HTTP, HTTPS, JDBC, FTP, JMS, LDAP, SOAP, etc.

A quick look at some of the features of JMeter:

  • It provides a comprehensive GUI-based workbench to build and experiment with tests. It also allows you to work in a non-GUI mode, and JMeter can be deployed on servers to perform tests in a distributed environment.
  • It provides a concept of templates, which are pre-defined test plans for various schemes or protocols that can be used directly to create your required test plan.
  • It enables you to build a test plan structurally using powerful features like Thread Groups, Controllers, Samplers, Listeners, etc.
  • It provides debugging and error monitoring through effective logging.
  • It supports parameterized testing through the concept of variables.
  • It supports the creation of different flavors of test plan, including Web, Database, FTP, LDAP, Web service, JMS, Monitors, etc.
  • It allows for remote testing by having different JMeter instances running as servers across nodes, accessed from a single client application.
  • It gives you real-time test results covering metrics like latency, throughput, response times, active threads, etc.
  • It enables you to perform testing based on regular expressions, among many other features.

1.1. What is BeanShell?

BeanShell is a scripting language written in Java, and is part of the JSR-274 specification. It is, in a way, an extension of the mainstream Java language that adds scripting capabilities. It is an embedded interpreter that recognizes strongly typed Java syntax as well as scripting features like shell commands, loose types, and method closures (functions as objects). BeanShell aids in the quick development and testing of Java applications. You can use it for rapid prototyping or for quickly testing a small piece of functionality or a process. Scripts can also be embedded in Java code and invoked using the Interpreter API.
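As a quick illustration of the Interpreter API mentioned above, here is a minimal sketch of embedding BeanShell in Java. It assumes the BeanShell jar (for example, bsh-2.0b5.jar) is on the classpath; the variable names are made up for the example.

    import bsh.Interpreter;

    public class BeanShellEmbeddingDemo {
        public static void main(String[] args) throws Exception {
            Interpreter interpreter = new Interpreter();

            // Pass a Java value into the script's context
            interpreter.set("greeting", "Hello");

            // Evaluate a loosely typed BeanShell script inline
            interpreter.eval("message = greeting + \", BeanShell!\";");

            // Read the scripted variable back into Java
            System.out.println(interpreter.get("message")); // prints: Hello, BeanShell!

            // A standalone script file could be loaded the same way:
            // interpreter.source("myscript.bsh");
        }
    }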

BeanShell can also be used as a configuration language, as it supports the creation of Java-based variables like strings, arrays, maps, collections, and objects. It also supports what are called scripting variables, or loosely typed variables. BeanShell scripts can also be written in standalone mode in an external file, which can then be loaded and executed by a Java program. BeanShell also provides UNIX-like shell programming: you can issue BeanShell commands interactively in a GUI shell and see the output instantly.

For more details on BeanShell, you can refer to the official website: http://www.beanshell.org

1.2. JMeter Beanshell Components

JMeter provides the following components that can be used to write BeanShell scripts:

  • BeanShell Sampler
  • BeanShell PreProcessor
  • BeanShell PostProcessor
  • BeanShell Assertion
  • BeanShell Listener
  • BeanShell Timer

Each of these components allows you to write scripts to conduct your test. JMeter executes the scripts based on the lifecycle order of the components: for example, it will first invoke the PreProcessor, then the Sampler, then the PostProcessor, and so on. Data can be passed between these components using thread-local variables, each of which has a certain meaning and context. Every component provides you with pre-defined variables that can be used in the corresponding script.

The following table shows some of the common variables used by the BeanShell components:

Variable name | Description
ctx           | Holds context information about the current thread, including the sampler and its results.
vars          | A thread-local set of variables, stored in a map, shared by the BeanShell components in the same thread.
props         | Variables loaded as properties from an external file (jmeter.properties) on the classpath.
prev          | Holds the last result from the sampler.
data          | Holds the server response data.
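As a minimal sketch of how these variables are typically used across components (the greeting variable and the log line are made up for illustration; remote_hosts is a standard entry in jmeter.properties):

    // In one BeanShell component, e.g. a PreProcessor:
    vars.put("greeting", "hello");                    // thread-local string variable

    // In a later BeanShell component on the same thread, e.g. a PostProcessor:
    String g = vars.get("greeting");                  // reads back "hello"
    String hosts = props.getProperty("remote_hosts"); // property from jmeter.properties
    String body = prev.getResponseDataAsString();     // response of the last sampler
    log.info("greeting=" + g + ", remote_hosts=" + hosts + ", bytes=" + body.length());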

2. BeanShell By Example

We will now demonstrate the use of BeanShell in JMeter. We will take a simple test case of sorting an array. We will define an array of 5 letters (a, b, c, d, e) stored in random order, sort its contents, and convert the result into a string. After conversion, we will remove the unwanted characters and print the final string value. It should give the output ‘abcde’.
We will make use of the following BeanShell components to implement our test case:

  • BeanShell PreProcessor – This component will define or initialize our array.
  • BeanShell Sampler – This component will sort the array and convert it into string.
  • BeanShell PostProcessor – This component will strip the unnecessary characters from the string.
  • BeanShell Assertion – This component will assert our test result (string with sorted content).

The JMeter installation and setup are the same as described in the previous example: install JDK 1.6 or higher, unzip the JMeter 2.13 archive, set JAVA_HOME to the JDK root folder, and run the jmeter command from the <JMeter_Home>/bin folder to open the JMeter GUI.

2.1. Configuring BeanShell Sampler

In this component, we will sort the array. But before we sort the array, it needs to be initialized; you will see the initialization routine in the next section, when we create the pre-processor component. Let’s first create the BeanShell Sampler component; we will write the code to sort the array after the initialization routine. Right click on the Single User ThreadGroup and select Add -> Sampler -> BeanShell Sampler.

We will name our sampler ‘Array Sorter’. The Reset Interpreter field value is retained as ‘False’. This field matters only when you have multiple BeanShell samplers configured or when a sampler runs in a loop. A value of true resets and creates a fresh instance of the BeanShell interpreter for each sampler, while a value of false creates a single BeanShell interpreter that interprets the scripts for all the configured samplers. From a performance perspective, it is recommended to set this field to true if you have long-running scripts with multiple samplers.

The Parameters field allows you to pass parameters to your BeanShell script. It is usually used with an external BeanShell script file, but if you write the script in this component itself, you can use the Parameters or bsh.args variables to fetch the parameters. The Parameters variable holds the parameters as a single string value (spaces retained), while the bsh.args variable holds them as a string array. For this example, we are not passing any parameters to the script. The Script file field is used when you have a BeanShell script defined in an external file; it is important to note that this overrides any script written inline in the component. We will retain the default values for all the above-mentioned fields for all the BeanShell components. The final Script textbox field allows us to write the script inline in the component itself, and it lets you use certain pre-defined variables in your scripts. As you can see, there is no code in this field yet; we will write it after our array has been initialized in the pre-processor component.
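For illustration, if the Parameters field were set to, say, alpha beta (hypothetical values), an inline script could read them in either form:

    // Parameters holds the raw string, spaces retained
    String all = Parameters;      // "alpha beta"

    // bsh.args holds the split values as a String array
    String first = bsh.args[0];   // "alpha"
    String second = bsh.args[1];  // "beta"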

2.2. Configuring BeanShell PreProcessor

The BeanShell PreProcessor is executed before your sampler, which makes it a good candidate for initialization routines. We will initialize the array to be sorted in this component. Right click on the Array Sorter sampler and select Add -> Pre Processors -> BeanShell PreProcessor.

We will name the component ‘Array Initializer’. Let’s look at the code in the Script textbox field. First, we declare and initialize the array named strArray. It is a loosely typed variable, and the values of the array are deliberately out of order. Then we use the vars variable to store the array by calling its putObject() method. The vars variable is available to all the BeanShell components that are part of this thread. We will fetch the array from vars in the ‘Array Sorter’ sampler and perform the sort there; a sketch of the initializer script is shown below. Once it is in place, click on the Array Sorter sampler created in the previous section and write the sorting code, explained in what follows, in its Script textbox field.
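A minimal sketch of the ‘Array Initializer’ script, assuming the variable name strArray used above (the letter order is illustrative):

    // Loosely typed array holding the letters in random order
    strArray = new String[] { "d", "a", "e", "c", "b" };

    // Store the array in the thread-local vars map for the sampler to pick up
    vars.putObject("strArray", strArray);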

First, we get the array using the getObject() method of the vars variable. Then we sort it using the Arrays class of Java: the sort() method of that class takes our array as a parameter and sorts it in place. We then convert the array into a string by calling the Arrays.toString() method. Arrays is a utility class provided by the JDK for performing useful operations on array objects. Finally, we store the sorted string as the response data through the SampleResult variable. Our sorted string will look like the following: [a, b, c, d, e].
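A sketch of this sorting script, under the same assumptions as the initializer:

    // Fetch the array stored by the PreProcessor
    strArray = vars.getObject("strArray");

    // Sort in place using the JDK's Arrays utility class
    java.util.Arrays.sort(strArray);

    // Convert the sorted array into a string: "[a, b, c, d, e]"
    sortedString = java.util.Arrays.toString(strArray);

    // Store the sorted string as this sampler's response data
    SampleResult.setResponseData(sortedString, "UTF-8");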

2.3. Configuring BeanShell PostProcessor

The BeanShell PostProcessor will strip the unnecessary characters like ‘[’, ‘]’, and ‘,’. This component acts more like a filter. Right click on the Array Sorter sampler and select Add -> Post Processors -> BeanShell PostProcessor.

We will name the component ‘Array Filter’. The Script textbox field contains the code that strips the unnecessary characters from our string. If you recall, the string was stored as response data by the Array Sorter sampler. Here we fetch the string using the getResponseDataAsString() method of the prev variable. Next, we use the replace() method of the String class to strip the ‘[]’ and ‘,’ characters from the string, and we store the result back in the vars variable. This string will be used by the BeanShell Assertion component to assert the final result.
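A sketch of the ‘Array Filter’ script; the variable name sortedString under which we store the result is made up for this example:

    // Get the response data written by the Array Sorter sampler, e.g. "[a, b, c, d, e]"
    sorted = prev.getResponseDataAsString();

    // Strip the brackets, commas and spaces: "[a, b, c, d, e]" -> "abcde"
    filtered = sorted.replace("[", "").replace("]", "").replace(", ", "");

    // Store the cleaned-up string for the assertion component
    vars.put("sortedString", filtered);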

2.4. Configuring BeanShell Assertion

Using this component, we will assert that the final result value is ‘abcde’. Right click on the Array Sorter sampler and select Add -> Assertions -> BeanShell Assertion.

Using the vars variable, we will get the final string and store it in the finalString variable. Then we assert: if the final string does not contain the value ‘abcde’, we set the Failure variable to true and provide a failure message through the FailureMessage variable. The output of the test execution can be seen in the command window from which you started the JMeter GUI. Below is the console output after running our tests.
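A sketch of the assertion script, using the same hypothetical sortedString variable:

    // Fetch the filtered string stored by the PostProcessor
    finalString = vars.get("sortedString");

    // Fail the assertion if the expected sorted value is missing
    if (finalString == null || !finalString.contains("abcde")) {
        Failure = true;
        FailureMessage = "Expected 'abcde' but got: " + finalString;
    }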

3. Conclusion

The BeanShell scripting language brings scripting capabilities to Java. In JMeter, you can use the different BeanShell components to write test scripts and execute them. Each component is equipped with useful variables that the scripts can use to control the flow of the test. The scripting feature adds a powerful and useful dimension to the JMeter testing tool. The objective of this article was to show the usage of the common BeanShell components and how one can write test scripts with them to execute tests.


JMeter Blog Series: Random Variable Example

Here’s the beginning of the series on load and performance testing using JMeter.

In this example, we will demonstrate how to configure Random Variable in Apache JMeter. We will go about configuring a random variable and apply it to a simple test plan. Before we look at the usage of Random variables, let’s look at the concept.

1. Introduction

Apache JMeter is an open-source, Java-based tool for functional, load, performance, and regression testing of both standalone and web applications, supporting protocols such as HTTP, HTTPS, JDBC, FTP, JMS, LDAP, and SOAP. Its key features, including the GUI workbench, templates, structural test plan elements, parameterization, remote testing, and real-time results, are summarized in the introduction to the BeanShell example above.

1.1. Random Number Generation

Most programming languages today have an API for generating random numbers. The generator algorithm typically produces a sequence of numbers that is arbitrary and does not follow any order, structure, or format. The algorithm derives its randomness from a value called a seed: the seed drives the sequence generation, and two generators given the same seed will always produce the same sequence. This seed-based approach is termed pseudo-random number generation.
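A quick Java sketch of this seed behavior (the seed value 42 and the range are arbitrary):

    import java.util.Random;

    public class SeededRandomDemo {
        public static void main(String[] args) {
            // Two generators seeded identically produce identical sequences
            Random first = new Random(42L);
            Random second = new Random(42L);

            for (int i = 0; i < 5; i++) {
                int a = first.nextInt(50);          // pseudo-random number in [0, 50)
                int b = second.nextInt(50);
                System.out.println(a + " == " + b); // the pairs are always equal
            }
        }
    }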

1.2. Random Variable in JMeter

JMeter allows you to generate random number values and store them in a variable. You can do so through the Random Variable config element, which allows you to set the following parameters:

  • Variable Name: The name of the variable that can be used in your test plan elements. The random value will be stored in this variable.
  • Output Format: The format of the generated number. It can be prefixed or suffixed with a string. For example, if you want the generator to produce alphanumeric values, you can specify a format like SALES_000 (000 will be replaced with the generated random number).
  • Minimum and Maximum Value: The range within which the numbers are generated. For example, if the minimum is set to 10 and the maximum to 50, the generator will produce numbers within that range.
  • Per Thread (User): Whether the random generator is shared by all the threads (users) or each thread has its own instance, indicated by setting the field to false or true respectively.
  • Random Seed: The seed value for the generator. If the same seed is used for every thread (Per Thread set to true), each thread will produce the same sequence of numbers.

2. Random Variable By Example

We will now configure the Random Variable config element. Finding test cases for random variables is always a tricky affair. You may have a test case that tests the random number itself, such as whether it is in the proper range or whether its format is valid. Another test case could be one where you need to provide a random number as part of a URL, say an order ID (orderId=O122) or a page number for pagination (my-domain.com/category/apparel/page/5). Such URL pages are well suited to load testing. We will use the configured variable in an HTTP Request Sampler as part of the request URL. As part of this example, we will test the Java category pages (1 – 10) of the JCG website (www.javacodegeeks.com):
http://www.javacodegeeks.com/category/java/page/2/
The page number in the URL (2, above) will be supplied by the random variable.

2.1. JMeter installation and setup

The JMeter installation and setup are the same as in the previous examples: install JDK 1.6 or higher, unzip the JMeter 2.13 archive, set JAVA_HOME to the JDK root folder, and run the jmeter command from the <JMeter_Home>/bin folder to open the JMeter GUI.

2.2. Configuring Random Variable

To configure the Random Variable, we make use of the Config Element option. Right click on Test Plan and select Add -> Config Element -> Random Variable.

We will name the element ‘Page Counter Variable’. The Variable Name is ‘page_number’; the page_number variable will be used in our test plan later. Keep the Output Format blank. We will set the Minimum Value and Maximum Value fields to 1 and 10 respectively, which means the generated numbers will fall between 1 and 10 (both inclusive). Keep the Random Seed option blank. Retain the value of the Per Thread (User) field as False, which means that if you configure multiple threads, all of them will share the same random generator instance.
Next, we will create a ThreadGroup named ‘Single User’ with the Loop Count set to ‘10’. We will use only 1 thread (user) for this example; you could experiment with multiple threads to simulate a load test, but since the main objective of this article is to show how to configure and use a random variable, we will keep it to a simple single-user test. A loop count of 10 will repeat the test ten times per user.

For our ThreadGroup, we will create an HTTP Request sampler named ‘JCG Java Category’.

It will point to the server www.javacodegeeks.com. Set the Path value to /category/java/page/${page_number}. Note the use of our variable ${page_number}: as this test is repeated 10 times (the loop count), at runtime the page_number variable will be substituted with random values in the range of 1 to 10.
You can view the result of the test by configuring a View Results Tree listener. Run the test and you will see the following output.

As you can see, each request uses a randomly generated page value in the URL.

3. Conclusion

The random variable feature comes in handy when you want to load test several pages whose URL parameter values can be substituted dynamically at runtime. You could also devise other use cases for random variables. This article provided a brief insight into the Random Variable feature of JMeter.