Quality Assurance Outsourcing in the World of DevOps: Best Practices for Distributed QA Teams

Why Quality Assurance (QA) outsourcing is good for business

The software testing services market is expected to grow by more than USD 55 billion between 2022 and 2026. With outsourced QA now delivered by teams distributed across geographies and locations, many aspects that were hitherto guaranteed by co-located teams have come under pressure. Concerns range from constant communication and collaboration to coverage across a wide range of testing types: unit testing, API testing, and validating experiences across a wide range of channels.

Additionally, DevOps has become the preferred methodology for software development and release, with collaborating teams oriented toward faster delivery cycles augmented by early feedback. QA is a critical binding thread of DevOps practice, ensuring a balanced approach that maintains collaboration and communication between teams while keeping delivery cycles up to speed and the quality of the delivered product in line with customer expectations.

Best practices for ensuring the effectiveness of distributed QA teams

Focus on the right capability: 
While organizations focus to a large extent on bringing together capabilities across development, support, QA, operations, and product management in a scrum team, QA skills are paramount from a quality perspective. The challenge is to find the right skill mix: for example, a good exploratory tester as well as automation skills (not necessarily in the same person). In addition, specialist skills related to performance, security, and accessibility also need to be thought through. The key is to choose an optimum mix of specialists and generalists.

Aim to achieve the right platform/tool mix: 
It is vital to maintain consistency across the tool stacks used for an engagement. According to a 451 Research survey, 39% of respondents juggle 11 to 30 tools to keep an eye on their application infrastructure and cloud environment, and 8% use 21 to 30 tools. Commonly referred to as tool sprawl, this makes it extremely difficult to collaborate in an often decentralized and distributed QA environment. It is imperative to take a balanced approach to the tool mix by influencing the team to adopt a common set of tools instead of making it mandatory.

Ensure a robust CI/CD process and environment:
A weak process may cause the development and operations teams to run into problems while integrating new code. With several geographically distributed teams committing code continually into the CI environment, shared dev/test/deploy environments constantly run into issues if insufficient thought has gone into identifying environment configurations. These issues can ultimately translate into failed tests and, thereby, failed delivery/deployment. A well-defined automated process ensures continuous deployment and monitoring throughout the lifecycle of an application, from integration and testing through release and support.

A good practice is to adopt cloud-based infrastructure, reinforced by mechanisms for managing any escalations on deployment issues effectively and quickly. Issues like build failures or lack of infrastructure support can hamper the productivity of distributed teams. When strengthened by remote alerts, robust reporting capabilities, and resilient communication infrastructure, accelerated development-to-deployment becomes a reality.
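
As a minimal illustration of such an automated gate with remote alerting, consider the sketch below; the webhook URL, chat service, and test command are placeholders and not part of any specific toolchain.

```python
# Hypothetical CI gate: run the test suite and alert the distributed team on failure.
import json
import subprocess
import urllib.request

ALERT_WEBHOOK = "https://chat.example.com/hooks/qa-alerts"  # placeholder URL

def run_tests() -> int:
    """Run the automated suite and return its exit code."""
    return subprocess.run(["pytest", "--maxfail=5", "-q"]).returncode

def alert(message: str) -> None:
    """Post a remote alert so distributed teams see build issues immediately."""
    payload = json.dumps({"text": message}).encode("utf-8")
    req = urllib.request.Request(
        ALERT_WEBHOOK, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    code = run_tests()
    if code != 0:
        alert(f"CI gate failed (exit code {code}); deployment blocked.")
        raise SystemExit(code)  # fail the pipeline stage, blocking deployment
```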

Follow good development practices:
Joint backlog grooming with all stakeholders, regular progress updates, code analysis, effective build and deployment practices, and an established workflow for defect/issue management are paramount to the effectiveness of distributed teams. Equally important is the need to manage risk early with ongoing impact analysis, code quality reviews, risk-based testing, and real-time risk assessments. In short, the adoption of risk and impact assessment mechanisms is vital.

Another critical area of focus is establishing robust metrics that help in the early identification of quality issues and ease integration with the development cycle. Research conducted in 2020 by Gatepoint and Perfecto surveyed executives from over 100 leading digital enterprises in the United States on their testing habits, tools, and challenges. The results showed that 63 percent start testing only after a new build and code are developed; just 40 percent test upon each code change or at the start of new software development.

Devote special attention to automation testing:
Manual (or exploratory) testing allows you to ensure that product features are well tested, while automation of tests (or, as some say, checks) helps you improve coverage for repeatable tasks. Planning for both during your early sprint planning meetings is essential, and test automation services have become an integral component of testing.

As per studies, in 2020, approximately 44 percent of IT companies had automated half of their testing. Businesses are continuously adopting test automation to fulfill the demand for quality at speed. Hence it is no surprise that, according to Data Bridge Market Research, the automation testing market will reach an estimated value of USD 19.9 billion by 2028, growing at a CAGR of 14.89% over the forecast period of 2021 to 2028.

Outsourcing test automation is a reliable way of conducting testing and maintaining product quality. Keeping the rising demand in mind, let us look at a few benefits of outsourcing test automation services.

Early non-functional focus: 
Organizations tend to overlook the importance of regularly validating how the product fares on performance, security vulnerabilities, or even important requirements like accessibility until late in the cycle. As per the 2020 DevSecOps Community Survey, 55 percent of respondents deploy at least once per week, and 11 percent claim multiple daily deployments. But when it comes to security, 44 percent of mature DevOps practices know it is important but don’t have time to devote to it.

Security has a further impact on the CI/CD tool stack itself: in a 451 Research survey, more than 60% of respondents said a lack of automated, integrated security tools is a big challenge in implementing CI/CD tools effectively.

It is essential that any non-functional issue be exposed and dealt with before it moves down the dev pipeline. The degree of non-functional focus to adopt depends to a large extent on the evolution of the product and the risk to the organization.

Benefits of outsourcing your QA

To make distributed QA teams successful, an organization must be able to focus in a balanced and consistent way across the length and breadth of the QA spectrum, from people and processes to technology stacks. The ability to make these practices work, however, hinges on the diligence with which an organization institutionalizes these best practices and platforms, and supplements them with an organizational culture that is open to change.

Trigent’s experienced and versatile Quality Assurance and Testing team is a major contributor to the successful launch, upgrade, and maintenance of quality software used by millions around the globe. Our experienced, continuous testing practices put process before convenience to delight stakeholders with an impressive Defect Escape Ratio (DER) of 0.2.

Ensure increased application availability and infrastructure performance. Talk to us.

Five Metrics to Track the Performance of Your Quality Assurance Team and the Efficiency of Your Quality Assurance Strategy

Why Quality Assurance and Engineering?

A product goes through different stages of a release cycle, from development and testing to deployment, use, and constant evolution. Organizations often seek to shorten long release cycles while maintaining product quality. Ensuring a superior, connected customer experience is also one of the primary objectives for organizations: according to a PwC research report published in 2020, 1 in 3 customers is willing to leave a brand after one bad experience. This is where Quality Engineering comes in.

There is a need to swiftly identify risks, be they bugs, errors, or other problems, that can impact the business or ruin the customer experience. Most of the time, organizations cannot cover the entire scope of their testing needs in-house, and this is where they decide to invest in Quality Assurance outsourcing.

Developing a sound Quality Assurance (QA) strategy

Software products are now developed for a unified CX. To meet ever-evolving customer expectations, applications must deliver a seamless experience on multiple devices across various platforms. Continuous testing across devices and browsers, as well as apt deployment of multi-platform products, is essential. These require domain expertise, complementary infrastructure, and a sound QA strategy. According to a report published in 2020-2021, the budget proportion allocated for QA was approximately 22%.

Digital transformation has a massive impact on time-to-market. Reducing cycle time for releasing multiple application versions by adopting Agile and DevOps principles has become imperative for a competitive edge. This has made automation an irreplaceable element of one’s QA strategy. With automation, a team can run tests for 16 additional hours a day (beyond the average 8 hours of effort by a manual tester), thus reducing the average cost of testing hours. In fact, as per studies, in 2020, approximately 44 percent of IT companies had automated half of their testing.

A thorough strategy provides transparency on delivery timelines and fosters strong interaction between developers and the testing team, comprehensively covering every aspect of the testing pyramid, from robust unit tests and contracts to functional end-to-end tests.

Key performance metrics for QA

Tracking performance metrics has many benefits: QA performance metrics are essential for discarding inefficient strategies, and they enable managers to track the progress of the QA team over time and make data-driven decisions.

Here are five metrics to track the performance of your Quality Assurance team and the efficiency of your Quality Assurance strategy. 

1) Reduced risk build-on-build:

This metric is instrumental in ensuring a build’s stability over time by revealing the valid defects in each build. The goal is to decrease the number of risk-impacting defects from one build to the next over the course of the QA project. This strategy, while keeping risk at the center of any release, aims to achieve the right levels of coverage across new and existing functionality.

If the QA team experiences a constant increase in risk-impacting defects, the underlying causes need to be identified and addressed.

To measure the effectiveness further, one should also track the mean time to detect (MTTD) and the mean time to repair (MTTR) a defect.
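
A minimal sketch of computing MTTD and MTTR from defect records follows; the record fields and timestamps are illustrative, and real values would come from your defect tracker.

```python
from datetime import datetime

# Hypothetical defect records; in practice these come from a defect tracker.
defects = [
    {"introduced": "2023-01-02 09:00", "detected": "2023-01-03 10:00", "fixed": "2023-01-04 11:00"},
    {"introduced": "2023-01-05 08:00", "detected": "2023-01-05 20:00", "fixed": "2023-01-06 08:00"},
]

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

def mean_hours(pairs) -> float:
    """Average elapsed hours between (start, end) timestamp pairs."""
    pairs = list(pairs)
    total = sum((parse(end) - parse(start)).total_seconds() / 3600 for start, end in pairs)
    return total / len(pairs)

mttd = mean_hours((d["introduced"], d["detected"]) for d in defects)
mttr = mean_hours((d["detected"], d["fixed"]) for d in defects)
print(f"MTTD: {mttd:.1f} hours, MTTR: {mttr:.1f} hours")
```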

2) Automated tests

Automation is instrumental in speeding up your release cycle while maintaining quality as it increases the depth, accuracy, and, more importantly, coverage of the test cases. According to a research report published in 2002, the earlier a defect is found, the more economical it is to fix, as it costs approximately five times more to fix a coding defect once the system is released.

With higher test coverage, an organization can find more defects before a release goes into production. Automation also significantly reduces the time to market by expediting the pace of development and testing. In fact, as per a 2020-2021 survey report, approximately 69% of the survey respondents stated reduced test cycle time to be a key benefit of automation. 

To ensure that the QA team maintains productivity and efficiency, it is essential to measure the number of automated test cases and the rate at which new automation scripts are delivered. This metric monitors the speed of test case delivery and identifies the areas needing further testing. We recommend analyzing your automation coverage by monitoring total test cases.

While measuring this metric, we recommend taking into account the following (a tracking sketch follows the list):

  • Requirements coverage vs. automated test coverage
  • Increased test coverage due to automation (for instance, multiple devices/browsers)
  • Total test duration savings
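
As a simple illustration of how these inputs can be tracked together, here is a minimal sketch; all figures are made up for the example.

```python
# Illustrative tracking of the automation metrics listed above.
total_requirements = 120
requirements_with_tests = 102      # covered by any test, manual or automated
requirements_automated = 78        # covered by automated tests
manual_minutes_per_run = 960       # one full manual regression pass
automated_minutes_per_run = 45     # the same scope run by the automation suite

requirements_coverage = requirements_with_tests / total_requirements
automated_coverage = requirements_automated / total_requirements
duration_savings = manual_minutes_per_run - automated_minutes_per_run

print(f"Requirements coverage: {requirements_coverage:.0%}")
print(f"Automated coverage:    {automated_coverage:.0%}")
print(f"Time saved per run:    {duration_savings} minutes")
```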

3) Tracking the escaped bugs and classifying the severity of bugs:

Ideally, no defects should be deployed into production. However, despite best efforts, bugs often make it into production. Tracking this involves the team establishing checks and balances and classifying the severity of defects. The team can measure the overall impact by analyzing the high-severity bugs that made it into production. This is one of the best overall metrics for evaluating the effectiveness of your QA processes. Customer-reported issues/defects may also help identify specific ways to improve testing.
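
A minimal sketch of how escaped defects might be tallied by severity and a defect escape ratio computed; the counts, IDs, and severity labels are illustrative, not from a real tracker.

```python
# Classify escaped (production) defects by severity and compute an escape ratio.
found_in_qa = 94  # defects caught before release (illustrative)
escaped = [
    {"id": "BUG-101", "severity": "critical"},
    {"id": "BUG-117", "severity": "major"},
    {"id": "BUG-123", "severity": "minor"},
]

by_severity: dict[str, int] = {}
for bug in escaped:
    by_severity[bug["severity"]] = by_severity.get(bug["severity"], 0) + 1

escape_ratio = len(escaped) / (len(escaped) + found_in_qa)
print(f"Escaped by severity: {by_severity}")
print(f"Defect escape ratio: {escape_ratio:.2f}")  # lower is better
```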

4) Analyzing the execution time of test cycles:

QA teams should keep track of the time taken to execute a test. The primary aim of this metric is to record and verify the time taken to run a test for the first time compared with subsequent executions. It is also useful for identifying automation candidates, thereby reducing the overall test cycle time. The team should identify tests that can be run concurrently to increase effectiveness.
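
To illustrate the gain from running tests concurrently, here is a small sketch that times the same set of simulated tests serially and in parallel; the test functions and durations are stand-ins for real test cases.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_test(seconds: float) -> None:
    time.sleep(seconds)  # placeholder for real test work

durations = [1.0, 1.5, 0.5, 2.0]  # illustrative per-test execution times

start = time.perf_counter()
for d in durations:
    fake_test(d)
serial = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(fake_test, durations))
concurrent = time.perf_counter() - start

print(f"Serial cycle: {serial:.1f}s, concurrent cycle: {concurrent:.1f}s")
```

In real suites, parallel runners such as pytest-xdist or a Selenium grid provide this kind of speedup.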

5) Summary of active defects

This involves the team capturing information such as defect names and descriptions and keeping a summary of verified, closed, and reopened defects over time. A downward trend in the number of active defects indicates rising product quality.

Be Agile and surge ahead in your business with Trigent’s QE services 

Quality Assurance is essential in every product development effort, and applying the right QA metrics enables you to track your progress over time. Trigent’s quality engineering services empower organizations to increase customer adoption and reduce maintenance costs by delivering a superior-quality product that is release-ready.

Are you looking to build a sound Quality Assurance strategy for your organization? Need Help? Talk to us. 

5 ways QA can help you accelerate and improve your DevOps CI/CD cycle

A practical and thorough testing strategy is essential to keep your evolving application up to date with industry standards.

In today’s digital world, nearly 50% of organizations have automated their software release to production. This is not surprising given that 80% of organizations prioritize their CX and cannot afford longer wait times to add new features to their applications. Reliable high-frequency deployment can be achieved by automating the testing and delivery process, drastically reducing total deployment time.

Over 62% of enterprises use CI/CD (continuous integration/continuous delivery) pipelines to automate their software delivery process. Yet once an organization establishes its main pipelines to orchestrate software testing and promotion, these are often left unreviewed. The software developed through the CI/CD toolchains evolves frequently, while the release processes themselves remain stagnant.

The importance of an optimal QA DevOps strategy

DevOps has many benefits in reducing cost, facilitating scalability, and improving productivity. However, one of its most critical goals is to make continuous code deliveries faster and more testable. This is achieved by improving the deployment frequency with judicious automation both in terms of delivery and testing. 

Most successful companies deploy their software multiple times a day. Netflix leverages automation and open source to help its engineers deploy code thousands of times daily. Within a year of its migration to AWS, Amazon engineers were deploying code every 11.7 seconds, backed by a robust testing automation and deployment suite.

A stringent automated testing suite is essential to ensure system stability and flawless delivery. It helps ensure that nothing is broken every time a new deployment is made. 

The incident at Knight Capital underlines this importance. For years, Knight relied on an internal application named SMARS to manage its buy orders in the stock market. The app had many outdated sections in its codebase that were never removed. While integrating new code, Knight overlooked a bug that inadvertently invoked one of these obsolete features. This resulted in the company making buy orders worth billions in minutes; it lost around $460 million as a result and was pushed to the brink of bankruptcy overnight.

Good QA protects against failed changes and ensures they do not trickle down and affect other components. Implementing test automation in CI/CD ensures that every new feature undergoes unit, integration, and functional tests. With this, we can have a highly reliable continuous integration process with greater deployment frequency, security, reliability, and ease.

An optimal QA strategy to streamline the DevOps cycle includes well-thought-out, judiciously implemented automation for both QA and delivery. This helps ensure a shorter CI/CD cycle, offers application stability, and allows recovery from any test failure without creating outages. Smaller deployment packages ensure easier testing and faster deployment.

5 QA testing strategies to accelerate the CI/CD cycle

Most good DevOps implementations include strong interactions between developers and rigorous, built-in testing that comprehensively covers every level of the testing pyramid. This includes robust unit tests, API contracts, and functional end-to-end tests.

Here are five QA testing strategies you should consider to improve the quality of your software release cycles:

Validate API performance with API testing

APIs are among the most critical components of a software application: they hold together the different systems involved in the application. The entities that rely on APIs, ranging from users and mobile devices to IoT devices and other applications, are also constantly expanding. Hence it is crucial to test APIs and ensure their performance.

Many popular tools, such as SoapUI and Swagger, can easily be plugged into any CI/CD pipeline. These tools help execute API tests directly from your pipeline, allowing you to build and automate test suites that run in parallel and reduce test execution time.
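
A minimal sketch of what such a pipeline-stage API test might look like in Python with pytest and requests; the endpoint, payload fields, and SLA threshold are hypothetical.

```python
import requests

BASE_URL = "https://api.example.com"  # placeholder service

def test_get_order_returns_expected_contract():
    response = requests.get(f"{BASE_URL}/orders/42", timeout=5)
    assert response.status_code == 200
    body = response.json()
    # Validate the essentials of the response payload
    assert body["id"] == 42
    assert "status" in body and "items" in body

def test_api_latency_within_sla():
    response = requests.get(f"{BASE_URL}/orders/42", timeout=5)
    # elapsed measures the time between sending the request and receiving headers
    assert response.elapsed.total_seconds() < 0.5  # illustrative SLA
```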

Ensure flawless user experience with Automated GUI testing

Just like an API, the functionality and stability of the GUI are critical for a successful application rollout. GUI issues after production rollout can be disastrous: users may be unable to access the app or parts of its functionality. Such issues can be challenging to troubleshoot, as they might reside in individual browsers or environments.

A robust, automated GUI test suite covering all supported browsers and mobile platforms can shorten testing cycles and ensure a consistent user experience. Automated GUI testing tools simulate user behavior on the application and compare expected results to actual results. GUI testing tools like Appium and Selenium help testers simulate the user journey, and they can be integrated with any CI/CD pipeline.

Incorporating these tools in your automated release cycle can validate GUI functions across various browsers and platforms.
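
A minimal Selenium sketch of a simulated user journey, runnable as a CI/CD stage; the URL, credentials, and element IDs are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a local driver; use webdriver.Remote() for a grid
try:
    driver.get("https://app.example.com/login")
    driver.find_element(By.ID, "username").send_keys("demo-user")
    driver.find_element(By.ID, "password").send_keys("demo-pass")
    driver.find_element(By.ID, "submit").click()
    # Compare expected vs. actual outcome of the simulated journey
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```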

Handle unscheduled outages with Non-functional testing

You may encounter unexpected outages or failures once an application is in production. These may stem from environmental triggers like a data center network interruption or unusual traffic spikes. Such outlying situations can turn into a crisis if your application cannot handle them gracefully. Here lies the importance of automated non-functional testing.

Non-functional testing covers an application’s behavior under external or often uncontrollable factors, such as stress, load, volume, or unexpected environmental events. It is a broad category with several tools that can be incorporated into the CI/CD cycle. It is advisable to integrate automated non-functional testing gates into your pipeline before the application is released to production.
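
As a toy illustration of a load/spike probe, the sketch below fires concurrent requests and reports error rate and p95 latency; the URL and worker counts are placeholders, and dedicated tools such as JMeter, Gatling, or Locust do this at realistic scale.

```python
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://app.example.com/health"  # placeholder endpoint
N = 100  # number of probe requests

def probe(_):
    start = time.perf_counter()
    try:
        ok = requests.get(URL, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    return ok, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(probe, range(N)))

latencies = sorted(t for _, t in results)
error_rate = sum(1 for ok, _ in results if not ok) / N
p95 = latencies[int(0.95 * N) - 1]
print(f"error rate: {error_rate:.0%}, p95 latency: {p95:.3f}s")
```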

Improve application security with App Sec testing

Many enterprises don’t address security until later in the application release cycle. The introduction of DevSecOps has increased focus on including security checkpoints throughout the application release lifecycle. The earlier a security vulnerability is identified, the cheaper it is to resolve. Today, different automated security scanning tools are available depending on the assets tested.

The more comprehensive your approach to security scanning, the better your organization’s overall security posture will be. Introducing checkpoints early is often a great way to improve the quality of the released software.
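
One way to wire such a checkpoint into a pipeline is to run a scanner and fail the stage on findings. The sketch below uses Bandit, a real static analysis scanner for Python code, as one example; the source path is a placeholder.

```python
import subprocess
import sys

# Bandit exits non-zero when it reports issues, which fails this pipeline step.
result = subprocess.run(["bandit", "-r", "src/", "-q"])
if result.returncode != 0:
    print("Security scan reported findings; failing the pipeline stage.")
    sys.exit(result.returncode)
```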

Secure end-to-end functionality with Regression testing 

Changes to one component may sometimes have downstream effects across the complete system functionality. Since software involves many interconnected parts today, it’s essential to establish a solid regression testing strategy.

Regression testing should verify that existing business functionality performs as expected even when changes are made to the system. Without it, bugs and vulnerabilities may creep into system components. These problems become harder to identify and diagnose once the application is released, and troubleshooting teams may not know where to begin, especially if the release did not modify the failing component.
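
One lightweight way to keep a regression suite runnable on every release candidate is to tag tests with a marker; the sketch below uses pytest with an illustrative test, assuming the "regression" marker is registered in pytest.ini.

```python
import pytest
from decimal import Decimal

@pytest.mark.regression  # register the marker in pytest.ini to avoid warnings
def test_invoice_totals_unchanged_by_new_discount_module():
    # Existing business behavior that must survive unrelated changes
    line_items = [Decimal("19.99"), Decimal("5.01")]
    assert sum(line_items) == Decimal("25.00")
```

The suite can then run as a release gate with `pytest -m regression`.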

Accelerate your application rollout with Trigent’s QA services

The HP LaserJet firmware division improved its software delivery process and reduced its overall development cost by 40%. It achieved this by implementing a delivery process focused on test automation and continuous integration.

Around 88% of organizations that participated in research conducted on CI/CD claim they lack the technical skills and knowledge to adopt testing and deployment automation. The right QA partner can help you devise a robust test automation strategy to reduce deployment time and cost.

New-age applications are complex. While the DevOps CI/CD cycle may quicken rollout, it may fail if not bolstered by a robust QA strategy. QA is integral to the DevOps process; without it, continuous development and delivery are inconceivable.

Does your QA meet all your application needs? Need help? Let’s talk

QE strategy to mitigate inherent risks involved in application migration to the cloud

Cloud migration strategies, be they lift-and-shift, rearchitect, or rebuild, are fraught with inherent risks that need to be mitigated with the right QE approach.

The adoption of cloud environments has been expanding for several years and is presently in an accelerated mode. A multi-cloud strategy is now the de facto approach for many organizations, as per the Flexera 2022 State of the Cloud Report. The move toward cloud-native application architectures, the exponential scaling needs of applications, and the increased frequency and speed of product releases have all contributed to increased cloud adoption.

The success of migrating the application landscape to the cloud hinges on the ability to perform end-to-end quality assurance initiatives specific to the cloud. 

Underestimation of application performance

Availability, scalability, reliability, and high response rates are critical expectations of an application in a cloud environment. Application performance issues can come to light on account of incorrect server sizing or network latency issues that might not have surfaced when the application was tested in isolation. They can also be an outcome of an incorrect understanding of the workloads an application can manage while in a cloud environment.

The right performance engineering strategy involves designing with performance in mind and performing performance validations, including load testing. This ensures that the application under test remains stable under normal and peak conditions, and it defines and sets up application monitoring toolsets and parameters. There needs to be an understanding of which workloads can be moved to the cloud and which need to remain on-premises, and incompatible application architectures need to be identified. For workloads moved to the cloud, load testing should be carried out in parallel to record SLA response times across various loads.
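
As a sketch of recording response times across load levels against an SLA, as suggested above, consider the following; the URL, load steps, and SLA value are placeholders.

```python
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://migrated-app.example.com/api/search"  # placeholder endpoint
SLA_SECONDS = 1.0  # illustrative response-time SLA

def timed_call(_):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

for load in (10, 50, 100):  # concurrent virtual users per step
    with ThreadPoolExecutor(max_workers=load) as pool:
        times = list(pool.map(timed_call, range(load)))
    avg = sum(times) / len(times)
    status = "OK" if avg <= SLA_SECONDS else "SLA BREACH"
    print(f"load={load:4d}  avg={avg:.2f}s  {status}")
```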

Security and compliance

With the increased adoption of data privacy norms like GDPR and CCPA, there is a renewed focus on ensuring the safety of data migrated from applications to the cloud. Incidents like the Marriott data breach, in which sensitive customer information such as credit card and identity data was compromised at massive scale, have brought home the need to test the security of data loaded onto cloud environments.

A must-have element of a sound QA strategy is ensuring that both applications and data are secure and can withstand malicious attacks. With cybersecurity attacks increasing in both volume and tactical innovation, there is a strong need to implement security policies and testing techniques, including but not limited to vulnerability scanning, penetration testing, and threat and risk assessment. These are aimed at the following:

  • Identifying security gaps and weaknesses in the system
  • Preventing DDoS attacks
  • Providing actionable insights on ways to eliminate potential vulnerabilities

Accuracy of Data migration 

Assuring the quality of the data being migrated to the cloud remains a top challenge; without it, the convenience and performance expected from cloud adoption fall flat. This calls for assessing quality before migration, monitoring during migration, and verifying integrity and quality post-migration. The effort is fraught with challenges such as migrating from old data models, managing duplicate records, and resolving data ownership, to name a few.

White-box migration testing is a key component of a robust data migration testing initiative. It starts by logically verifying the migration script to guarantee it is complete and accurate. This is followed by ensuring database compliance with required preconditions, e.g., a detailed script description, source and receiver structures, and data migration mapping. The QA team then analyzes and assures the database structure, data storage formats, migration requirements, field formats, etc. More recently, predictive data quality measures have also been adopted to provide a centralized view and better control over data quality.
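
A minimal post-migration verification sketch comparing row counts and content checksums between source and target; the table and file names are hypothetical, and a real migration would use the appropriate database drivers rather than sqlite3.

```python
import hashlib
import sqlite3

def table_fingerprint(conn, table):
    """Return (row_count, checksum) for a deterministic ordering of a table."""
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY id").fetchall()
    digest = hashlib.sha256(repr(rows).encode("utf-8")).hexdigest()
    return len(rows), digest

source = sqlite3.connect("source.db")  # placeholder connections
target = sqlite3.connect("target.db")

src_count, src_digest = table_fingerprint(source, "customers")
tgt_count, tgt_digest = table_fingerprint(target, "customers")

assert src_count == tgt_count, f"Row count mismatch: {src_count} != {tgt_count}"
assert src_digest == tgt_digest, "Content mismatch: checksums differ"
print(f"customers: {src_count} rows migrated, checksums match")
```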

Application Interoperability

Not all apps slated for migration are compatible with the cloud environment. Some applications perform better in a private or hybrid cloud than in a public cloud. Some require minor tweaking, while others may require extensive reengineering or recoding. Failing to identify cross-application dependencies before planning the migration waves can lead to failure. Equally important is the need to integrate with third-party tools so that applications communicate seamlessly and without glitches.

A robust QA strategy needs to identify the applications that are part of the network, their functionalities, and the dependencies among them, along with each app’s SLA, since dependencies between systems and applications can make integration testing challenging. Integration testing for cloud-based applications brings to the fore the need to consider the following:

  • Resources for the validation of integration testing 
  • Assuring cloud migration by using third-party tools
  • Discovering glitches in coordination within the cloud
  • Application configuration in the cloud environment
  • Seamless integration across multiple surround applications

Ensure successful cloud migration with Trigent’s QE services

Application migration to the cloud can be a painful process without a robust QE strategy. With aspects such as data quality, security, app performance, and seamless connection with a host of surrounding applications being paramount in a cloud environment, the need for testing has become more critical than ever. 

Trigent’s cloud-first strategy enables organizations to leverage a customized, risk-mitigated cloud strategy and deployment model most suitable for the business. Our proven approach, frameworks, architectures, and partner ecosystem have helped businesses realize the potential of the cloud.

We provide a secure, seamless journey from in-house IT to a modern enterprise environment powered by the cloud. Our team of experts has enabled cloud transformation at scale and speed for small, medium, and large organizations across different industries. The transformation helps customers leverage the best architecture, application performance, infrastructure, and security without disrupting business continuity.

Ensure a seamless cloud migration for your application. Contact us now!

Fundamentals of microservices architecture testing

The importance of microservices architecture testing

The increased adoption of digital has pushed the need for speed to the forefront. The ability to conceptualize, develop, and launch new products, and to iterate to make them better well ahead of the competition and gain customer mindshare, has become the critical driver of growth. The adoption of agile principles and the movement toward scrum teams for increased agility are steps in this direction. Disruptive changes have also taken place on the application front, with the 3-tier architecture of the late 90s and the subsequent 2-tier monolithic architecture giving way to one based on microservices.

Having a single codebase made a monolithic architecture less risky but slow to adopt changes, the exact opposite of a services-based architecture. A microservices architecture makes it easier for multiple development teams to make changes to the codebase in parallel. Transforming an application into a distributed set of services that are highly independent, yet interdependent, provides the ability to create new services/functionalities and modify services without impacting the overall application. These changes can be made by teams spread across geographies or locations, and it is easier for them to understand individual functional modules than a humongous application codebase. However, the highly distributed nature of services also gives them a heightened ability to fail.

Breaking it down – Testing strategies in a microservice architecture

At its core, a microservices architecture comprises three layers: a REST layer that allows the service to expose APIs, the database layer, and the service layer. A robust testing strategy needs to cover all these layers and ensure that issues do not leak to production. The further an issue moves across stages, the greater its impact, as more teams are affected. Hence the test plan must cover multiple types of testing, such as service testing, subsystem testing, client acceptance testing, and performance testing. The following paragraphs outline key aspects of service-level testing and integration testing in a microservices-based application landscape.

In service-level testing, each service forming part of the application architecture needs to be validated. Each service has dependencies on other services and transmits information to them as needed. In a monolithic architecture, since connections are established from one class to another within the same Java Virtual Machine (JVM), the chances of failure are far lower. In a services architecture, however, the services are distributed, requiring network calls to access other services, which makes things more complex.

Functional validation: The primary goal in services testing is validating the functionality of a service. Key to this is understanding all the events the service handles through both internal and external APIs. At times this calls for simulating certain events to ensure they are handled properly by the service. Collaboration with the development team is key to understanding the incoming events the service handles as part of its functionality. A key element of functional validation, API contract testing, tests the request and response payloads along with areas like pagination and sorting behaviors, metadata, etc.
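
A minimal sketch of an API contract check using the jsonschema library; the schema, endpoint, and fields are illustrative.

```python
import requests
from jsonschema import validate

# Agreed contract for an order resource (illustrative)
ORDER_SCHEMA = {
    "type": "object",
    "required": ["id", "status", "items"],
    "properties": {
        "id": {"type": "integer"},
        "status": {"type": "string"},
        "items": {"type": "array"},
    },
}

response = requests.get("https://orders.example.com/orders/42", timeout=5)
validate(instance=response.json(), schema=ORDER_SCHEMA)  # raises on contract breach
print("Response payload conforms to the agreed contract")
```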

Compatibility: Another important aspect is recognizing and negating backward compatibility issues, which arise when a changed version of the service breaks existing clients running in production. Changes to API contracts need to be evaluated in detail to understand whether they are mandatory and whether they can break clients in production. Adding a new attribute or parameter may not qualify as a breaking change, but changes to response payloads, behavior, error codes, or datatypes can break clients; a change in a value typically changes the logic behind it as well. Such changes need to be uncovered much earlier in the service testing lifecycle.

Dependencies: Another area of focus is external dependencies, where one would test both incoming and outgoing API calls. Since these depend heavily on the availability of other services, and hence other teams, there is a strong need to remove the dependency through the use of mocks. Having conversations with developers and getting them to insert mocks while creating individual services enables testing dependencies without waiting for the real service to be available. It is imperative to make sure the mocks are easily configurable without needing access to the codebase. The use of mocks also eases automation, giving teams the ability to run independently with no extra configuration.
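
A minimal sketch of mocking an outgoing call so a service’s logic can be tested without its downstream dependency; the service function and URL are hypothetical, and unittest.mock ships with Python.

```python
from unittest.mock import patch
import requests

def get_shipping_quote(order_id: int) -> float:
    """Illustrative service code with an outgoing call to another service."""
    rate = requests.get(f"https://rates.example.com/{order_id}", timeout=5).json()
    return round(rate["amount"] * 1.1, 2)  # add a 10% handling fee

def test_quote_applies_handling_fee():
    # The mock stands in for the rates service, so the test needs no network
    with patch("requests.get") as mock_get:
        mock_get.return_value.json.return_value = {"amount": 100.0}
        assert get_shipping_quote(42) == 110.0
```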

Understanding Microservices Architecture Testing

Once each service is tested for its functionality, the next step is to validate how the various collaborating services work together end to end. Known as subsystem testing or integration testing, this tests the whole functionality exposed together. Understanding the architecture or application blueprint through discussions with the development team is paramount at this stage. Further, there is a strong need to use real services deployed in the integration environment rather than the mocks used for external dependencies.

As part of integration testing, there is a need to validate that the services are wired correctly and talking to each other. The event stream and inter-service API calls need to be configured properly so that inter-service communication channels work as intended. If service-level functionality testing has been done properly, the chances of finding errors at this stage are minimal, since the mocks created in the functionality testing stage would have ensured that the services function properly.

Looking in depth, we find that testing strategies in a microservices architecture are not radically different from those adopted for a monolithic application architecture. The fundamental difference lies in how the interdependencies and communication between the multiple services forming the larger application are tested, to ensure that the application as a whole functions in line with expectations.

Poor application performance can be fatal for your enterprise: avoid app degradation with application performance testing

If you’ve ever wondered ‘what can possibly go wrong?’ after creating a foolproof app, think again. The Democrats’ Iowa Caucus voting app is a case in point: the Iowa caucus post-mortem pointed to a flawed software development process and insufficient testing.

Enterprise software market revenue is expected to grow at a CAGR of 9.1%, reaching a market volume of US$326,285.5 million by 2025. It is important that enterprises aggressively get their application performance testing efforts on track so that all the individual components that go into the making of an app respond well, ensuring a better customer experience.

Banking app outages have also been rampant in recent times, putting the spotlight on the importance of application performance testing. Customers of Barclays, Santander, and HSBC suffered immensely when their mobile apps suddenly went down. It’s not as if banks worldwide are not digitally equipped: they dedicate at least 2-3 percent of their revenue to information technology, along with additional spending on building superior IT infrastructure. What they also need is early and continuous performance testing to address and minimize the occurrence of such issues.

It is important that the application performs well not just when it goes live but later too. We give you a quick lowdown on application performance testing to help you gear up to meet modern-day challenges.

Application performance testing objectives

In general, users today have little or no tolerance for bugs or poor response times. Faulty code can lead to serious bottlenecks that eventually cause slowdowns or downtime. Bottlenecks can arise from CPU utilization, disk usage, operating system limitations, or hardware issues.

Enterprises, therefore, need to conduct performance testing regularly to:

  • Ensure the app performs as expected
  • Identify and eliminate bottlenecks through continuous monitoring
  • Identify & eliminate limitations imposed by certain components
  • Identify and act on the causes of poor performance
  • Minimize implementation risks

Application performance testing parameters

Performance testing is based on various parameters that include load, stress, spike, endurance, volume, and scalability. Resilient apps can withstand increasing workloads, high volumes of data, and sudden or repetitive spikes in users and/or transactions.

As such, performance testing ensures that the app is designed with peak operations in mind and that all components comprising the app function as a cohesive unit to meet consumer requirements.

No matter how complex the app is, performance testing teams are often required to take the following steps:

  • Setting the performance criteria – Performance benchmarks need to be set and criteria should be identified in order to decide the course of the testing.
  • Adopting a user-centric approach – Every user is different, and it is always a good idea to simulate a variety of end-users to imagine diverse scenarios and test for use cases accordingly. You would therefore need to factor in expected usage patterns, peak times, the length of an average session within the application, how many times users use the application in a day, the most commonly used screens of the app, etc.
  • Evaluating the testing environment – It is important to understand the production environment, the tools available for testing, and the hardware, software, and configurations to be used before beginning the testing process. This helps us understand the challenges and plan accordingly.
  • Monitoring for the best user experience – Constant monitoring is an important step in application performance testing. It will give you answers to the ‘what, when, and why,’ helping you fine-tune the performance of the application. How long the app takes to load, how the latest deployment compares to previous ones, and how well the app performs while backend processes run are all things you need to assess. It is important that you leverage your performance scripts well with proper correlations and monitor performance baselines for your database to ensure it can manage fresh data loads without diluting the user experience.
  • Re-engineering and re-testing – The tests can be rerun as required to review and analyze results, and fine-tune again if necessary.

Early Performance Testing

Test early. Why wait for users to complain when you can proactively run tests early in the development lifecycle to check application readiness and performance? In the current (micro)service-oriented architecture approach, as soon as a component or interface is built, performance testing at a smaller scale can uncover issues with respect to concurrency, response time/latency, SLAs, etc. This allows us to identify bottlenecks early and gain confidence in the product as it is being built.
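
A small sketch of such an early, component-level check: a handful of concurrent calls against a service under development, asserting a latency budget. The endpoint, concurrency level, and threshold are placeholders.

```python
import time
from concurrent.futures import ThreadPoolExecutor
import requests

ENDPOINT = "http://localhost:8080/api/component"  # service under development

def one_call(_):
    start = time.perf_counter()
    requests.get(ENDPOINT, timeout=2)
    return time.perf_counter() - start

# 20 concurrent calls: enough to surface obvious concurrency or latency issues early
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(one_call, range(20)))

worst = max(latencies)
assert worst < 0.3, f"Early perf check failed: worst latency {worst:.3f}s"
print(f"Early perf check passed: worst latency {worst:.3f}s")
```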

Performance testing best practices

For the app to perform optimally, you must adopt testing practices that can alleviate performance issues across all stages of the app cycle.

Our top recommendations are as follows:

  • Build a comprehensive performance model – Understand your system’s capacity to be ready for concurrent users, simultaneous requests, response times, system scalability, and user satisfaction. The app load time, for instance, is a critical metric irrespective of the industry you belong to. Mobile app load times can hugely impact consumer choices, as highlighted in a study by Akamai, which suggested that conversion rates reduce by half and bounce rates increase by 6% if mobile site load time goes up from 1 second to 3 seconds. It is therefore important that you factor in the changing needs of customers to build trust and loyalty and offer a smooth user experience.
  • Update your test suite – The pace of technology is such that new development tools will debut all the time. It is therefore important for application performance testing teams to ensure they sharpen their skills often and are equipped with the latest testing tools and methodologies.

An application may boast incredible functionality, but without the right application architecture, it won’t impress much. Some of the best brands have suffered heavily due to poor application performance: Google lost about $2.3 million due to the massive outage that occurred in December 2020, and AWS suffered a major outage after Amazon added a small amount of capacity to its Kinesis servers.

So, the next time you decide to put your application performance testing efforts on the back burner, you might as well ask yourself ‘what would be the cost of failure’?

Tide over application performance challenges with Trigent

With decades of experience and a suite of the finest testing tools, our teams are equipped to help you across the gamut of application performance, from testing to engineering. We test apps for reliability, scalability, and performance while monitoring them continuously with real-time data and analytics.

Allow us to help you lead in the world of apps. Request a demo now.

Improve the quality of digital experiences with Performance Engineering

Quality at the heart of business performance

“In 2020, the key expectation is fast, reliable, and trustworthy software.” *

Even as businesses embrace the Agile/DevOps culture and the emphasis on CI/CD grows, quality assurance is often still treated as a standalone task, limited to validating implemented functionality. When QA and testing are an afterthought in an Agile/DevOps culture, the result is a subpar consumer experience, followed by an adverse impact on the revenue pipeline. Poor customer experience also directly impacts brand credibility and business equity. While UI/UX are the visible elements of the customer experience, product or service performance is a critical element that is often neglected. Performance testing identifies the gaps, which are then addressed through performance engineering.

Small steps, significant gains – the journey towards Performance Engineering

The deeper issue lies in an organization’s approach to quality and testing: it is treated as an independent phase rather than a collaborative, integrated practice. Performance engineering is a set of methodologies that identifies potential risks and bottlenecks early in the development stage of the product and addresses them. While it goes without saying that performance is an essential ingredient of product quality, there is a deeper need for a change in thinking: to think proactively, anticipate issues early in the development cycle, test, and deliver a quality experience to the end consumer. An organization that makes gradual changes on its journey toward performance engineering stands to gain significantly. Leadership, product management, engineering, and DevOps at different levels all need to take a shift-left approach to performance engineering.

Make Performance Engineering your strategic priority today

Despite the obvious advantages, performance testing is typically a reactive measure that is addressed after the initial launch. However, organizations need to embrace performance engineering measures right from the design phase, start small, and take incremental steps towards change.

Covid-19 rapidly changed the way consumers behave globally. Businesses switched to remote working; consumers moved shopping, entertainment, banking, learning, and medical consultations online. Consider the quantum jump in usage triggered by the pandemic.

The dramatic increase in the use of digital services has covered decades in days.**

Companies that adopted scalability- and performance-centric design have moved swiftly to capture the market opportunity.

With multiple user-interfaces across sectors being the norm and the increasing complexity of digital experiences, it is critical for businesses to get it right the first time in order to gain and retain customers’ trust.

As cloud migrations continue, whether rehosting an app on IaaS or rebuilding it with a new approach, performance engineering ensures that migrated systems withstand sudden surges in usage. According to a Sogeti and Neotys report, 74% of load testing infrastructure is operated in the cloud today. Cloud infrastructure providers ensure reliability, but they may not be aware of the performance metrics that matter to the business and their impact. As organizations move from monolithic systems to distributed architectures provided by an assortment of companies, corporate leaders need to recognize the importance of performance engineering and embrace it to deliver the right solutions the first time.

Our approach to Performance Engineering philosophy

At Trigent, we put the customer experience at the heart of planning the entire testing cycle. Our performance engineering practices align with ‘metrics that matter’ to businesses in the DevOps framework. While testing identifies the gaps in performance, the onus of architecting it right lies on the DevOps engineering team with proactive inputs from QA and Testing.

Performance engineering is also a way of thinking: the ability to plan for performance at the time of design, right at the beginning. As for quality, besides testing for functionality, anticipating potential bottlenecks helps us assess the process in its entirety from the beginning.

Asking some of these customer-centric questions early on shifts the perspective right at the outset. Ask them early, and you’re on your way to a performance engineering culture.

Parameters that matter

‘Will my application meet the defined response-time requirements of my customers?’

Consider an app that doesn’t respond within the standards the customer expects; the chances of that application making it to the customer’s phone screen are pretty slim.

‘Will the application handle the expected user load and beyond?’

An application that tested well with 10 users may fail when that number is multiplied by a thousand or two.

We take the viewpoints of multiple stakeholders, consider parameters that matter to the customer, and assess impact early on.

Customer experience matters

Performance Engineering takes into account the overall experience of the end-user and their environment.

Asking pertinent questions such as ‘Will my users experience acceptable response times, even during peak hours?’ or ‘Does the application respond quickly enough for the intended users?’ does well to anticipate potential pitfalls in network usage and latency.

‘Where are the bottlenecks in my multi-user environment?’

Understand the real environment of the user and their challenges to provide a quality user experience.

Early Focus

The non-functional aspects are integrated into DevOps, and an early focus on performance enables us to gain insights into architectural issues.

‘How can we optimize the multi-user application before it goes live?’
‘How can we detect errors that only occur under real-load conditions?’

Quick course corrections help optimize performance and make the product market-ready. Besides faster deployment, quality assurance gives our clients an added advantage of reduced performance costs.

Architect it right

‘What system capacity is required to handle the expected load?’
‘Will the application handle the number of transactions required by the business?’

Important questions like these focus on architecting the product for performance. As part of the performance engineering methodology, our teams consistently check and validate the capabilities at the time of developing the product or scaling it up. We take the shift-left and shift-right approach to anticipate, identify, and remove bottlenecks early on. Getting the architecture right enables us to deliver and deploy a high-quality product every time.

Performance engineering done right is sure to improve planning-to-deployment time and deliver high-quality products. Plus, it reduces performance costs arising from unforeseen issues. A step-by-step approach to testing ensures organizations move steadily toward performance engineering. Talk to our experts for scalable performance engineering solutions for your business.

Learn more about Trigent’s software testing services.


Reference:
* The State of Performance Engineering 2020 – A Sogeti and Neotys report
** Meet the next-normal consumer – A McKinsey & Company report

Trigent excels in delivering Digital Transformation Services: GoodFirms

GoodFirms maintains research on companies along with reviews from genuine, authorized service buyers across the IT industry. The companies are examined on the crucial parameters of Quality, Reliability, and Ability, and ranked on the same basis. This helps customers choose and hire companies by bridging the gap between the two.

They recently evaluated Trigent on these parameters and found that the firm excels in delivering IT services, mainly:


Keeping Up with the Latest Technology Through Cloud Computing

Cloud computing technology has streamlined the process of meeting the changing demands of clients and customers, and companies that are early adopters of changing technologies gain a cutting edge in the market. Trigent’s cloud-first strategy is designed to meet clients’ needs by driving acceleration, customer insight, and connected experience to take businesses to the next orbit of cloud transformation. The team exhibits the highest potential in cloud computing, improving business results across key performance indicators (KPIs) and bringing the productivity, operational efficiency, and growth that increase profitability.

The team possesses years of experience and works attentively on the cloud adoption journeys of its clients, curating its knowledge to bring the best services to the table. This way, clients can seamlessly achieve their goals and secure their place as modern cloud-based enterprises. This vigorous effort has placed Trigent among the top cloud companies in Bangalore on the GoodFirms website.

Propelling Business with Software Testing

Continuous effort and innovation are essential for businesses to stay ahead in a competitive market. The Trigent team offers next-gen software testing services to ensure the delivery of superior-quality software products that are release-ready. The team uses agile practices (continuous integration, continuous deployment) and shift-left approaches, utilizing validated, automated tools. Its expertise covers functional, security, performance, usability, and accessibility testing across mobile, web, cloud, and microservices deployments.

The company caters to clients of all sizes across different industries, and clients have sustained substantial growth by harnessing its decades-long experience and domain knowledge. Bridging the gap between companies and customers, and using agile methodology, the company holds expertise across test advisory and consulting, test automation, accessibility assurance, security testing, end-to-end functional testing, and performance testing. Thus, the company is dubbed a top software testing company in Massachusetts on the GoodFirms website.

Optimizing Work with Artificial Intelligence

Artificial intelligence has been an emerging technology across many industries during the past decade. AI is redefining technology by taking it to a whole new level of automation, where machine learning, natural language processing, and neural networks are used to deliver solutions. At Trigent, the team supports clients by utilizing AI to provide faster, more effective outcomes. By serving diverse industries with complete AI operating models (strategy, design, development, and execution), the firm is automating tasks, empowering brands by adding machine capabilities to human intelligence, and simplifying operations.

The AI development teams at Trigent apply resources appropriately to identify and govern processes that empower and innovate business intelligence. With their help in continuous process enhancement and AI feedback systems, many companies have increased productivity and revenue. By helping clients profit from artificial intelligence, the firm is poised to rank among the artificial intelligence programming companies listed at GoodFirms.

About GoodFirms

GoodFirms, a maverick B2B research and reviews company, helps buyers find Cloud Computing, Testing Services, and Artificial Intelligence firms rendering the best services to their customers. Its extensive research process ranks the companies, boosts their online reputation, and helps service seekers pick the right technology partner for their business needs.

Can your TMS Application Weather the Competition?

The transportation and logistics industry is growing increasingly dependent on diverse transportation management systems (TMS). This is true not only for big shippers but also for small companies, driven by differing rates, international operations, and a competitive landscape. Gartner’s 2019 Magic Quadrant for Transportation Management Systems summarizes the growing importance of TMS solutions: “Modern supply chains require an easy-to-use, flexible and scalable TMS solution with broad global coverage. In a competitive transportation and logistics environment, TMS solutions help organizations to meet service commitments at the lowest cost.”

For TMS solution providers, the path to developing or modernizing applications is not as simple as cruising calm seas. Their challenges are myriad: they must ensure systems that organize quotes seamlessly (no jumping from phone to website), help customers select the ideal carrier based on temperature, time, and load to maximize benefits, and, very importantly, help customers track shipments while managing multiple carrier options and freight. Customers look for answers, and TMS solutions should be able to offer them the best options in carriers. None of this comes easy, and while developing and executing the solution is half the job, the more critical half lies in ensuring that the system’s functionality, security, and performance remain uncompromised. When looking for a TMS solution, customers seek providers who can present a clear picture of the total cost of ownership. Unpredictability is a no-no in this business, which essentially means the solution must be implemented and tested for 100 percent performance and functionality.

Testing Makes the Difference

The TMS solution providers who will be able to sustain their competitive edge are the ones who have tested their solution from all angles and are sure of its superiority.

A recent case study illustrates the importance of testing. A cloud-based trucking intelligence company provides solutions that help fleets improve safety and compliance while reducing costs, built around a futuristic onboard telematics product. The product manages several processes and functions to provide accurate, real-time information such as tracking fleet vehicles, controlling unauthorized access to the company’s fleet assets, and mapping real-time vehicle locations. The client’s customers know more about their trucks on the road through pressure monitoring, fault code monitoring, and a remote diagnostics link. The onboard device records and transmits information such as speed, RPMs, idle time, and distance traveled in real time to a central server over a cellular data network.

The data stored in the central server is accessed using the associated web application via the internet. The web application also provides a driver portal for the drivers to know/edit their hours of service logs. Since the system deals with mission-critical business processes, providing accurate and real-time information is key to its success.

The challenge was to set up a test environment for the onboard device to accurately simulate the environment in the truck and simulate the transmission of data to the central server. Establishing appropriate harnesses to test the hardware and software interface was equally challenging. The other challenges were the simulation and real-time data generation of the vehicle movement using a simulator GPS.

A test lab was set up with various versions of the hardware and software and integration points with simulators. With use-case methodology and user interviews, test scenarios were chalked out to test the rich functionality and usage of the device. Functional testing and regression testing of new releases for both the onboard equipment and web application were undertaken. For each of the client’s built-in products, end-to-end testing was conducted.

As a result of the testing services, the IoT platform saw shortened functional release cycles. Comprehensive test coverage ensured better GPS validation, reduced prevention costs by identifying holistic test cases, and reduced detection costs by performing pre-emptive tests such as integration testing.

Testing Integral to Functional Superiority for TMS 

As the case study above shows, developing, integrating, operating, and maintaining a TMS is a challenging business. There are several stakeholders and a complex process in which integrated hardware, software, people, and workflows perform myriad functions, making the TMS’s performance heavily reliant on each of them functioning correctly. The input/output of data, command and control, data analysis, and communication add further complexity. Given this complexity and the importance of its role in managing shipping and logistics, testing is an essential aspect of a TMS.

Testing TMS solutions from the functional, performance, design, and implementation aspects will ensure that:

  • Shipping loads are accurate, and there are no unwelcome surprises
  • Mobile status updates eliminate human intervention and provide real-time visibility
  • Electronic record management keeps the workflow smooth and accurate
  • Connectivity information eliminates issues with shift changes and visibility
  • API integration communicates seamlessly with customers
  • Risk is managed for both the TMS and the system’s partners/vendors

TMS software providers need to offer new features and capabilities faster to stay competitive, win more customers, and retain their business. Whether it relates to seamless dispatch workflows, freight billing, or EDI, Trigent can help. Know more about Trigent’s Transportation & Logistics solutions.

Getting Started with Load Testing of Web Applications using JMeter

Apache JMeter:

JMeter is one of the most popular open-source tools for load and performance testing. It simulates browser behavior, sending requests to the web or application server under different loads. Running volume tests with JMeter on your local machine, you can scale up to approximately 100 virtual users, but you can go beyond 1,000,000 virtual users with CA BlazeMeter, which is essentially JMeter in the cloud.

Downloading and Running the Apache JMeter:

Requirements:

Since JMeter is a pure Java-based application, the system should have Java 8 or higher installed.

Check for Java installation: open the command prompt and type `java -version`. If Java is installed, the Java version will be shown as below.


If Java is not installed, download and install it from the following link: http://bit.ly/2EMmFdt

Downloading JMeter:
  • Download the latest version of JMeter from the Apache JMeter website
  • Click on apache-jmeter-3.3.zip under Binaries.

How to Run the JMeter:

You can start JMeter in 3 ways:

  • GUI Mode
  • Server Mode
  • Command Line

GUI Mode: Extract the downloaded zip file to any of your drives, go to the bin folder (e.g., D:\apache-jmeter-3.3\bin) and double-click the “jmeter” Windows batch file.

The JMeter GUI will then appear, as shown below:

Before you start recording the test script, configure the browser to use the JMeter Proxy.

How to configure Mozilla Firefox browser to Use the JMeter Proxy:

  • Launch the Mozilla Firefox browser–> click on Tools Menu–> Choose Options
  • In Network Proxy section –> Choose Settings
  • Select Manual Proxy Configuration option
  • Enter the HTTP Proxy value as localhost, or enter your local system’s IP address.
  • Enter the port as 8080, or change the port number if port 8080 is not free.
  • Click on OK. Now your browser is configured with the JMeter proxy server.

Record the Test Script of Web Application:

Add a Thread Group to the Test Plan: The Test Plan is our JMeter script, and it describes the flow of our load test.

Select the Test plan –> Right click–> Add–> Threads (Users) –> Thread Group

Thread Group:

The Thread Group describes the user flow and simulates how users will behave on the app.

The thread group has three important properties, which influence the load test:

  • Number of Threads (users): The number of virtual users that JMeter will attempt to simulate, e.g., 1, 10, 20, or 50.
  • Ramp-Up Period (in seconds): The time you allow the Thread Group to go from 0 to n (20 here) users, say 5 seconds; see the sketch after this list.
  • Loop Count: The number of times to execute the test; 1 means the test executes once.
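To make the ramp-up arithmetic concrete, here is a minimal Java sketch of the scheduling idea, assuming threads are spaced evenly across the ramp-up period; it is an illustration, not JMeter's actual source code.

    // Illustration only: how 20 users ramped up over 5 seconds are spaced.
    public class RampUpSketch {
        public static void main(String[] args) {
            int numThreads = 20;      // Number of Threads (users)
            int rampUpSeconds = 5;    // Ramp-Up Period
            for (int i = 0; i < numThreads; i++) {
                // Each thread's start time is staggered evenly over the ramp-up window.
                double startDelay = i * (double) rampUpSeconds / numThreads;
                System.out.printf("Thread %2d starts at %.2f s%n", i + 1, startDelay);
            }
        }
    }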
2. Add a Recording Controller to the thread group: the Recording Controller will hold all the recorded HTTP Request samples.

Select the thread group –> right click–> Add –> Logic Controller –> recording controller

3. Add the HTTP Cookie Manager to the thread group:

The HTTP Cookie Manager stores and sends cookies for your web app, the way a browser does.

4. Add a View Results Tree to the thread group: the View Results Tree listener shows the status of the HTTP sample requests when the recorded script executes.

Thread group –> Add–> Listeners–> View Result Tree

5. Add Summary Report: Summary report will show the test results of the script

Thread group –> Add –> Listeners –> Summary Report.

6. Go to the WorkBench and Add the HTTP(S) Test Script Recorder: Here you can start your test script recording.

WorkBench –> Right click –> Add–> Non Test Elements –> HTTP(S) Test Script Recorder.

Check whether port 8080 (this should be the same port number we set in the browser) is available or busy on your system. If it is busy, change the port number.

7. Finally, click the Start button; when the popup appears, click OK.

8. How to record browse-file (upload) actions from the web app:

If your test flow includes an option to browse and upload files, keep those files in JMeter’s bin folder and then record the browse-file steps.

Go to the Mozilla browser and start your test, e.g., a login page or any navigation. Do not close JMeter while recording the script. The script will be recorded under the Recording Controller as below.

Save the Test Plan with .jmx extension.

Run the Recorded Script: Select the Test Plan, then press Ctrl+R on the keyboard or click the Start button in JMeter.

While the script is executing, a green circle appears at the top right corner along with a time box showing how long the script has been running. Once execution completes, the green circle turns grey.

Test Results: We can view the test results in several ways: View Results Tree, Summary Report, and Aggregate Graph.

View Result Tree

Summary Report:

After executing the test script, go to the Summary Report, click on Save Table Data, and save the results in .csv or .xlsx format.

Though we get test results in the graphical view, summary report, and so on, executing test scripts using the JMeter GUI is not a good practice. I will discuss executing JMeter test scripts with the Jenkins integration tool in my next blog.

Read Other Blogs on Load Testing:

Load/Performance Testing Using JMeter

Load performance testing using JMeter

Software reliability testing, or load performance testing, is a field within software testing and performance testing services that tests a software product’s ability to function under given environmental conditions for a particular period of time. While there are several software testing tools for evaluating performance and reliability, I have focused on JMeter for this blog.

JMeter, or the Apache JMeter™ application as it is popularly known, is open-source software. It is a hundred percent pure Java application designed for load testing, functional testing, and performance testing.

Given below are some guidelines on using JMeter to test the reliability of a software product:

Steps to download and install JMeter for load performance testing

JMeter can be downloaded as a simple .zip file from the following URL:

http://jmeter.apache.org/download_jmeter.cgi

The prerequisite is to have Java 1.5 or a higher version already installed on the machine.

Unzip the downloaded file to the required location, go to the bin folder, and double-click the jmeter.bat file.

If Jmeter UI opens up then the installation has been successful. If it does not open, then Java might not be installed/configured or the downloaded files may be corrupted.

The next step is to configure JMeter to listen to the browser. Open the command prompt in JMeter’s bin location and run the JMeter batch file. This opens the simple JMeter UI seen earlier.

If you access the internet via a proxy server, recording scripts will not work out of the box. We need to bypass the proxy server, and to do this we start JMeter with two additional parameters: -H (proxy server name/IP address) and -P (port number), e.g. jmeter -H proxy.mycompany.com -P 8000 (the proxy host here is illustrative). Adding these will look like the following image:

The next step is to configure the browser to ‘listen’ to JMeter while recording the scripts. For this blog, I will be using the Firefox browser.

  1. Open Firefox browser
  2. Go to options->Advanced
  3. Select Network tab and click on Settings
  4. A Connection Settings dialog will open; under ‘Manual proxy configuration’, set the HTTP Proxy to localhost and the port to 8080.

After configuring the proxy settings, we need to install the JMeter certificate in the required browser.

The certificate is in the JMeter bin folder, under the file name “ApacheJMeterTemporaryRootCA.crt”. Given below are the steps to install the certificate in the Firefox browser:

  1. Open Firefox browser
  2. Go to options->Advanced
  3. Select certificate tab and click on View certificates
  4. Under Authorities tab click on Import
  5. Go to bin folder of Jmeter and select the certificate under it (ApacheJMeterTemporaryRootCA.crt)
  6. Check the “Trust this CA to identify websites” option and click OK.

Note: If you do not find the certificate in the bin folder, you can generate it by running the ‘HTTP Recorder’. The certificate is generated every time you run the recorder, so there is no need to replace it manually.

Recording of Script for web-based applications:

Steps to record a sample test script:

  1. Open up Jmeter by running Jmeter.bat
  2. Right click on Test Plan->Add->Threads(Users)->Thread Group
  3. Right Click on Thread Group->Add->Logic Controller->Recording Controller
  4. Right Click on WorkBench->Add->Non-Test Elements->HTTP(s) Test Script Recorder
  5. Right Click on HTTP(s) Test Script Recorder->Add->Listener->View Results Tree

After adding the components, Jmeter UI would look like the following image:

  1. Click on HTTP(s) Test Script Recorder and then click on Start. The JMeter certificate will be generated, with a pop-up informing you of the same. Click the ‘Ok’ button on the pop-up.
  2. Open the browser for which the JMeter proxy is configured, go to the URL under test, and execute the manual test script whose performance you want to determine.
  3. Once you are done with all your manual tests, come back to the JMeter UI, click on HTTP(s) Test Script Recorder, and click Stop. This stops the recording.
  4. Click on the recording script controller icon to view all your recorded HTTP samples, as follows. You can then rename the Recording Controller.
  5. You can view details of the recorded pages under the View Results Tree listener. Using these values, you can determine the assertions to put in your original run.
  6. Now click on Thread Group and configure the users you want to run. Also right-click on Thread Group->Add->Listener->Summary Report, and add a View Results Tree as well.
  7. Make sure to tick the “Errors” checkbox under View Results Tree; otherwise it will take up huge amounts of memory while running your scripts, as it captures almost everything.
  8. Now you can run your recorded scripts! Just click the Run/Play button on the top toolbar of the JMeter UI.

Analyzing the test run result:

While the scripts are running, results and timings are captured under the Summary Report listener. The summary report will look like the following after your test run has completed. You can save the report by clicking on “Save Data Table”.

Below are details of each keyword; a small computation sketch follows the list:

  • average: The average response time for that particular HTTP request, in milliseconds.
  • aggregate_report_min: The minimum response time taken by the HTTP request, in milliseconds.
  • aggregate_report_max: The maximum response time taken by the HTTP request, in milliseconds.
  • aggregate_report_stddev: The standard deviation of the response times, i.e., how widely the samples deviate from the average response time.
  • aggregate_report_error%: The percentage of samples that failed during the run.
  • aggregate_report_rate: How many requests per second the server handles. Larger is better.
  • average_bytes: The average response size in bytes. The lower the number, the better the performance.
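To clarify what these figures mean, here is a hedged Java sketch that computes the same statistics from a handful of made-up sample times; all values are hypothetical, chosen only for illustration.

    import java.util.Arrays;

    // Computes Summary Report style statistics from hypothetical sample times (ms).
    public class SummaryStats {
        public static void main(String[] args) {
            long[] samplesMs = {120, 95, 240, 180, 150};  // hypothetical response times
            int errors = 1;                               // hypothetical failed samples
            long testDurationMs = 2000;                   // hypothetical test duration

            double avg = Arrays.stream(samplesMs).average().orElse(0);
            long min = Arrays.stream(samplesMs).min().orElse(0);
            long max = Arrays.stream(samplesMs).max().orElse(0);
            double variance = Arrays.stream(samplesMs)
                    .mapToDouble(t -> (t - avg) * (t - avg)).average().orElse(0);
            double stdDev = Math.sqrt(variance);                   // aggregate_report_stddev
            double errorPct = 100.0 * errors / samplesMs.length;   // aggregate_report_error%
            double rate = samplesMs.length / (testDurationMs / 1000.0); // requests/second

            System.out.printf("avg=%.1f min=%d max=%d stddev=%.1f error%%=%.1f rate=%.1f/s%n",
                    avg, min, max, stdDev, errorPct, rate);
        }
    }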

Read Other Blogs on Load Testing

Getting Started with Load Testing of Web Applications using JMeter

Performance testing on cloud

Ensuring good application performance is crucial, especially for critical web applications that require fast cycle times and short turnaround times for newer versions. How can such applications be optimally tested without spending a fortune on tools, technology, and people, while still ensuring quality and on-time release? With tightening budgets and time-consuming processes, IT organizations are forced to do more with less.

A judicious combination of tools, the available cloud platforms, and a well-thought-out methodology provides the answer. While it is proven that open-source software can reduce software development cost, open-source tools saw little use in testing until the cloud computing paradigm became readily available. Until then, performance testing a large-scale project using test tools on dedicated hardware to model real-world scenarios was an expensive proposition. Cloud testing, however, has changed the game.

Performance testing on the cloud can be broadly classified into two categories:

  1. Cloud Infrastructure for Test Environment – Performance testing always requires sophisticated tool infrastructure. The test infrastructure requirement can vary: specific hardware for specific tools, the amount of hardware, licenses, back-ups, bandwidth, etc. In the past, getting all the required hardware was challenging, and in many cases performance was not adequately tested due to missing test tools. With cloud testing, one can focus on performance testing and not on the infrastructure. Any tool, be it open source like Grinder or JMeter, or a licensed product like Silk Performer, can easily be set up to run tests against the AUT (Application Under Test). Some time is still required for setting up the tool, and a few test runs are needed to ensure the load injectors (the client machines that generate load) do not become bottlenecks. This environment may be best suited to a typical waterfall scenario where the software is evaluated and tuned at the end of the software development cycle.
  2. Cloud as a Test Tool – There are different sets of software testing tools readily available in the cloud as a SaaS model. The test tool is already on the cloud, so no setup is required; just subscribe and you are all set, saving setup time. Their system configuration is also optimized to generate the required load without causing bottlenecks. Some readily available cloud test tools are LoadStorm, CloudTest by SOASTA, BrowserMob, nGrinder, Load-Intelligence (which can use JMeter in the cloud), etc. This environment is more suited to an Agile scenario, where the same tasks need to be performed in smaller iterations from the initial stages of the SDLC. Here you just have the scripts ready, upload them to the cloud, run the test, and once you have the required metrics, you sign off.

Conclusion – A combination of carefully selected testing tools, QA testers, readily available cloud platforms, and a sound performance test strategy can bring the same benefits as conventional methods at a much lower cost.

JMeter Regular Expression Extractor Example

Before you delve into the details of the tool, you can get a bigger picture of how performance engineering boosts the quality of digital experiences.

In this example, we will demonstrate the use of Regular Expression Extractor post processor in Apache JMeter. We will go about parsing and extracting the portion of response data using regular expression and apply it on a different sampler. Before we look at the usage of Regular Expression Extractor, let’s look at the concept.

1. Introduction – Apache JMeter

Apache JMeter is an open-source Java-based tool that enables you to perform functional, load, performance, and regression tests on an application. The application may be running on a web server or may be standalone in nature. It supports testing on both client-server and web models containing static and dynamic resources. It supports a wide variety of protocols for conducting tests, including HTTP, HTTPS, JDBC, FTP, JMS, LDAP, SOAP, etc.

A quick look at some of the features:

  • It provides a comprehensive GUI-based workbench to play around with tests. It also allows you to work in non-GUI mode, and JMeter can be ported to the server, allowing tests in a distributed environment.
  • It provides the concept of templates, which are pre-defined test plans for various schemes or protocols that can be directly used to create your required test plan.
  • It enables you to build test plans structurally using powerful features like Thread Group, Controllers, Samplers, Listeners, etc.
  • It provides debugging and error monitoring through effective logging.
  • It supports parameterized testing through the concept of variables.
  • It supports the creation of different flavors of test plans, including Web, Database, FTP, LDAP, Web service, JMS, Monitors, etc.
  • It allows remote testing by having different JMeter instances running as servers across nodes, accessed from a single client application.
  • It gives you real-time test results covering metrics like latency, throughput, response times, active threads, etc.
  • It enables you to perform testing based on regular expressions, among many other features.

1.1. Regular Expression

A regular expression is a pattern-matching language that performs a match on a given value, content, or expression. A regular expression is written as a series of characters denoting a search pattern, which is applied to strings to find and extract matches. Regular expression is often shortened to regex. Pattern-based searching has become very popular and is provided by all major languages, including Perl, Java, Ruby, JavaScript, and Python. Regex is commonly used on UNIX operating systems with commands like grep, ls, and awk, and editors like ed and sed. The regex language uses meta characters like . (matches any single character), [] (matches any one of the enclosed characters), ^ (matches the start position), and $ (matches the end position), among many others, to devise a search pattern. Using these meta characters, one can write a powerful regex search pattern with combinations of if/else conditions and replace features. A full discussion of regex is beyond the scope of this article; you can find plenty of articles and tutorials on regular expressions on the net.

1.2. Regular Expression Extractor

Regular Expression (regex) support in JMeter is provided by the Jakarta ORO framework, modelled on the Perl5 regex engine. With JMeter, you can use a regex to extract values from the response during test execution and store them in a variable (also called a reference name) for further use. The Regular Expression Extractor is a post processor that applies a regex to response data; the matched expression can then be used dynamically in a different sampler during test plan execution. The Regular Expression Extractor control panel allows you to configure the following fields:

Apply to: The regex extractor is applied to test results, i.e., response data from the server. A response from the primary request is considered the main sample, while responses to sub-requests are sub-samples. A typical HTML page (the primary resource) may link to various other resources like images, JavaScript files, CSS, etc.; these are embedded resources. Requests to these embedded resources produce sub-samples, while the HTML page response itself is the main sample. You have the option to apply the regex to the main sample, to sub-samples, or to both.

Field to check: Here you choose which part of the response the regex should be matched against. Various response fields are available: the plain response body, a document returned as response data, the request and response headers, the URL, or the response code.

Reference Name: This is the name of the variable that can be referenced later in the test plan using ${}. After applying the regex, the final extracted value is stored in this variable. Behind the scenes, JMeter generates more than one variable depending on the match that occurred. If you define groups in your regex by providing parentheses (), it generates as many variables as there are groups, suffixed with _g(n) where n is the group number. When you do not define any grouping, the returned value is termed the zeroth group, or group 0. Variable values can be checked using the Debug Sampler, which lets you verify whether your regular expression worked or not.

Regular Expression: This is the regex itself, applied to the response data. A regex may or may not have a group. A group is the subset of the string extracted from the match. For example, if the response data is ‘Hello World’ and the regex is Hello (.+)$, it matches ‘Hello World’ but extracts the string ‘World’. The parentheses () form the group that is captured or extracted. You may have more than one group in your regex; which ones, and how many, to extract is configured through the template. See the point below.

Template: Templates are references or pointers to the groups. A regex may have more than one group. The template lets you specify which group value to extract by giving the group number as $1$ or $2$, or $1$$2$ (extract both groups). In the ‘Hello World’ example above, $0$ points to the complete matched expression, ‘Hello World’, while $1$ points to the string ‘World’. A regex without parentheses () is matched as $0$ (the default group). Based on the template specified, that group’s value is stored in the variable (reference name).

Match No.: A regex applied to the response data may produce more than one match. You can specify which match should be returned; for example, a value of 2 indicates the second match. A value of 0 returns a random match, and a negative value returns all matches.

Default Value: The regex match is stored in a variable. But what happens when the regex does not match? In that scenario, the variable is not created. If you specify a default value, however, the variable is set to that value when the regex fails to match. It is recommended to provide a default value so that you know whether your regex worked; it is a useful feature for debugging your tests.
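To tie these fields together, the following Java sketch mirrors what the extractor does with the ‘Hello World’ example above: group 0 is the full match, group 1 is the parenthesized capture, and the default value is used when nothing matches. This is a rough analogy in plain Java, not JMeter’s internal code.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Mirrors the extractor fields on the 'Hello World' example.
    public class ExtractorAnalogy {
        public static void main(String[] args) {
            Pattern p = Pattern.compile("Hello (.+)$");

            Matcher m = p.matcher("Hello World");
            if (m.find()) {
                System.out.println(m.group(0)); // Hello World  (~ ${var_g0}, template $0$)
                System.out.println(m.group(1)); // World        (~ ${var_g1}, template $1$)
            }

            // No match: the extractor would fall back to the Default Value.
            Matcher none = p.matcher("Goodbye World");
            String var = none.find() ? none.group(1) : "error";
            System.out.println(var); // error
        }
    }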

2. Regular Expression Extractor By Example

We will now demonstrate the use of the Regular Expression Extractor by configuring a regex that extracts the URL of the first article on the JCG (Java Code Geeks) home page. After extracting the URL, we will use it in an HTTP Request sampler to test it. The extracted URL will be stored in a variable.

Before installing JMeter, make sure you have JDK 1.6 or higher installed. Download the latest release of JMeter using the link here. At the time of writing this article, the current release of JMeter is 2.13. To install, simply unzip the archive into the home directory where you want JMeter installed. Set the JAVA_HOME environment variable to point to the JDK root folder. After unzipping the archive, navigate to the /bin folder and run the command jmeter. On Windows, you can run it from the command window. This opens the JMeter GUI window, which allows you to build the test plan.

2.1. Configuring Regular Expression Extractor

Before we configure the regex extractor, we will create a test plan with a ThreadGroup named ‘Single User’ and an HTTP Request sampler named ‘JCG Home’ pointing to the server www.javacodegeeks.com. For more details on creating a ThreadGroup and related elements, see the article JMeter Thread Group Example. The image below shows the configured ThreadGroup (Single User) and HTTP Request sampler (JCG Home). Next, we will apply the regex to the response body (main sample). When the test executes, it pings www.javacodegeeks.com and returns the response data, which is an HTML page. This page contains JCG articles, each title wrapped in an <h2> tag. We will write a regular expression that matches the first <h2> tag and extracts the URL of the article; the URL is part of an anchor <a> tag. Right click on the JCG Home sampler and select Add -> Post Processors -> Regular Expression Extractor.

The name of our extractor is ‘JCG Article URL Extractor’. We apply the regex to the main sample and directly to the response body (the HTML page). The Reference Name (variable name) provided is ‘article_url’. The regex used is <h2 .+?><a href="http://(.+?)".+?</h2>. We will not go into the details of the regex, as that is a separate discussion altogether; in a nutshell, it matches the first <h2> tag and extracts the URL from the anchor tag. It strips the prefix http:// and extracts only the server part of the URL. The extracted portion is enclosed in parentheses (), forming our first group. The Template field is set to $1$, which points to our first group (the URL), and the Match No. field indicates the first match. The Default Value is set to ‘error’, so if our regex fails to match, the article_url variable will hold the value ‘error’; if the regex makes a successful match, the article URL is stored in article_url.
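Since the configuration screenshots are not reproduced here, this hedged Java sketch applies the same regex to a made-up fragment of the response body; the real JCG markup may differ. The captured group corresponds to what JMeter stores in article_url.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ArticleUrlExtractor {
        public static void main(String[] args) {
            // Hypothetical response fragment, for illustration only.
            String body = "<h2 class=\"post-title\">"
                    + "<a href=\"http://www.javacodegeeks.com/2015/05/some-article.html\">"
                    + "Some Article</a></h2>";

            Matcher m = Pattern
                    .compile("<h2 .+?><a href=\"http://(.+?)\".+?</h2>")
                    .matcher(body);

            // Template $1$, Match No. 1 -> first match's group 1; Default Value on failure.
            String articleUrl = m.find() ? m.group(1) : "error";
            System.out.println(articleUrl);
            // prints: www.javacodegeeks.com/2015/05/some-article.html
        }
    }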

We will use this article_url variable in another HTTP Request sampler named ‘JCG Article’. Right click on the Single User ThreadGroup and select Add -> Sampler -> HTTP Request.

As you can see from the above, the server name is ${article_url} which is nothing but the URL that was extracted from the previous sampler using regex. You can verify the results by running the test.

2.2. View Test Results

To view the test results, we will configure the View Results Tree listener. But before that, we will add a Debug Sampler to see the variables and values generated on executing the test. This helps you understand whether your regex matched successfully or failed. Right click on the Single User ThreadGroup and select Add -> Sampler -> Debug Sampler.

As we want to debug the generated variables, set the JMeter variables field to True. Next, we will view and verify test results using View Results Tree listener. Right click on Single User ThreadGroup and select Add -> Listener -> View Results Tree.

First let’s look at the output of the Debug Sampler’s response data. It shows our variable article_url; observe that its value is the URL we extracted. The test has also generated group variables, viz. article_url_g0 and article_url_g1. Group 0 is the complete match, and group 1 is the string extracted from it; this string is also stored in our article_url variable. The variable named article_url_g tells you the number of groups in the regex; ours contained only one group (note the single pair of parentheses () in the regex). Now let’s look at the result of our JCG Article sampler:

The JCG Article sampler successfully made the request to the server URL that was extracted using regex. The server URL was referenced using ${article_url} expression.

3. Conclusion

The Regular Expression Extractor in JMeter is one of the significant features that helps parse different types of values from different response fields. These values are stored in variables that can be referenced in other threads of the test plan. The ability to define groups in the regex, capturing portions of matches, makes it an even more powerful feature. Regular expressions are best used when you need to parse text and apply it dynamically to subsequent threads in your test plan. The objective of this article was to highlight the significance of the Regular Expression Extractor and its application in test execution.

JMeter Blog Series: JMeter BeanShell Example

Here’s more about load and performance testing using JMeter.

In this example, we will demonstrate the use of BeanShell components in Apache JMeter. We will go about writing a simple test case using the BeanShell scripting language. These scripts will be part of BeanShell components that we will configure for this example. Before we look at the usage of different BeanShell components, let’s look at the concept.

1. Introduction

Apache JMeter is an open-source Java-based tool that enables you to perform functional, load, performance, and regression tests on an application. The application may be running on a web server or may be standalone in nature. It supports testing on both client-server and web models containing static and dynamic resources. It supports a wide variety of protocols for conducting tests, including HTTP, HTTPS, JDBC, FTP, JMS, LDAP, SOAP, etc.

A quick look at some of the features of Jmeter

  • It provides a comprehensive GUI-based workbench to play around with tests. It also allows you to work in non-GUI mode, and JMeter can be ported to the server, allowing tests in a distributed environment.
  • It provides the concept of templates, which are pre-defined test plans for various schemes or protocols that can be directly used to create your required test plan.
  • It enables you to build test plans structurally using powerful features like Thread Group, Controllers, Samplers, Listeners, etc.
  • It provides debugging and error monitoring through effective logging.
  • It supports parameterized testing through the concept of variables.
  • It supports the creation of different flavors of test plans, including Web, Database, FTP, LDAP, Web service, JMS, Monitors, etc.
  • It allows remote testing by having different JMeter instances running as servers across nodes, accessed from a single client application.
  • It gives you real-time test results covering metrics like latency, throughput, response times, active threads, etc.
  • It enables you to perform testing based on regular expressions, among many other features.

1.1. What is BeanShell?

BeanShell is a scripting language written in Java, and is part of the JSR-274 specification. It is in some ways an extension of the mainstream Java language, adding scripting capabilities. It is an embedded interpreter that recognizes strongly typed Java syntax as well as scripting features like shell commands, loose types, and method closures (functions as objects). BeanShell aids quick development and testing of Java applications. One can use it for rapid prototyping or for quickly testing a small functionality or process. The script can also be embedded in Java code and invoked using the Interpreter API.

BeanShell can also be used as a configuration language, as it supports the creation of Java-based variables like strings, arrays, maps, collections, and objects. It also supports what are called scripting variables, or loosely typed variables. BeanShell scripts can also be written in standalone mode in an external file, which can then be loaded and executed by a Java program. BeanShell also provides UNIX-like shell programming: you can give BeanShell commands interactively in a GUI shell and see the output instantly.

For more details on BeanShell, you can refer to the official website http://www.beanshell.org

1.2. JMeter Beanshell Components

JMeter provides the following components that can be used to write BeanShell scripts

  • BeanShell Sampler
  • BeanShell PreProcessor
  • BeanShell PostProcessor
  • BeanShell Assertion
  • BeanShell Listener
  • BeanShell Timer

Each of these components allows you to write scripts to conduct your test. JMeter executes the scripts in the lifecycle order of the components; for example, it first invokes the PreProcessor, then the Sampler, and then the PostProcessor, and so on. Data can be passed between these components using thread-local variables, each of which has a certain meaning and context. Every component provides pre-defined variables that can be used in its script.

The following table shows some of the common variables used by the BeanShell components; a short usage sketch follows:

  • ctx – Holds context information about the current thread, including the sampler and its results.
  • vars – A thread-local set of variables, stored in a map, used by BeanShell components in the same thread.
  • props – Variables loaded as properties from an external file (jmeter.properties) on the classpath.
  • prev – Holds the last result from the sampler.
  • data – Holds the server response data.
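As a quick illustration of how these pre-defined variables are used inside a component script, here is a minimal sketch (the variable name and property key are examples, not requirements):

    // Any BeanShell component: reading and writing the pre-defined variables.
    vars.put("greeting", "hello");                     // store a thread-local variable
    String greeting = vars.get("greeting");           // read it back in a later component
    String hosts = props.getProperty("remote_hosts"); // read a jmeter.properties entry
    log.info("Running in thread: " + ctx.getThread().getThreadName());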

2. BeanShell By Example

We will now demonstrate the use of BeanShell in JMeter with a simple test case: sorting an array. We will define an array of 5 letters (a, b, c, d, e) stored in random order, sort its contents, and convert the result into a string. After conversion, we will remove the unwanted characters and print the final string value. The output should be ‘abcde’.
We will make use of the following BeanShell components to implement our test case:

  • BeanShell PreProcessor – This component will define or initialize our array.
  • BeanShell Sampler – This component will sort the array and convert it into string.
  • BeanShell PostProcessor – This component will strip the unnecessary characters from the string.
  • BeanShell Assertion – This component will assert our test result (string with sorted content).

Before installing JMeter, make sure you have JDK 1.6 or higher installed. Download the latest release of JMeter using the link here. At the time of writing this article, the current release of JMeter is 2.13. To install, simply unzip the archive into the home directory where you want JMeter installed. Set the JAVA_HOME environment variable to point to the JDK root folder. After unzipping the archive, navigate to the <JMeter_Home>/bin folder and run the command jmeter. On Windows, you can run it from the command window. This opens the JMeter GUI window, which allows you to build the test plan.

2.1. Configuring BeanShell Sampler

In this component, we will sort the array. But before we can sort it, the array needs to be initialized. You will see the initialization routine in the next section, when we create the pre-processor component. Let’s first create the BeanShell Sampler component; we will write the sorting code after the initialization routine. Right click on the Single User ThreadGroup and select Add -> Sampler -> BeanShell Sampler.

We name our sampler ‘Array Sorter’. The Reset Interpreter field value is retained as ‘False’. This field matters when you have multiple BeanShell samplers configured or are running a sampler in a loop: a value of true resets and creates a fresh BeanShell interpreter instance for each sampler, while false creates a single interpreter that interprets the scripts for all configured samplers. From a performance perspective, it is recommended to set this field to true if you have long-running scripts with multiple samplers. The Parameters field allows you to pass parameters to your BeanShell scripts. It is usually used with an external BeanShell script file, but if you write the script in this component itself, you can use the Parameters or bsh.args variables to fetch the parameters: Parameters holds them as a single string value (retaining spaces), while bsh.args holds them as a string array. For this example, we pass no parameters to the script. The Script file field is used when you have a BeanShell script defined in an external file; note that it overrides any script written inline in this component. We retain the default values for all the above fields in all the BeanShell components. Finally, the Script textbox field allows us to write scripts inline in this component itself, with access to certain variables. As you can see, this field holds no scripting code yet; we will write the code after our array is initialized in the pre-processor component.

2.2. Configuring BeanShell PreProcessor

The BeanShell PreProcessor is the first component executed, before your sampler, which makes it a good candidate for initialization routines. We will initialize the array to be sorted in this component. Right click on the Array Sorter sampler and select Add -> Pre Processors -> BeanShell PreProcessor.

We name the component ‘Array Initializer’. Let’s look at the code in the Script textbox field (sketched below). First, we declare and initialize the array named strArray, a loosely typed variable whose values are not in order. Then we use the vars variable to store the array by calling the putObject() method. The vars variable is available to all the BeanShell components that are part of this thread; we fetch it in the ‘Array Sorter’ sampler and perform the sort. Having created the ‘Array Sorter’ sampler in the section above, we now write the sorting code in it: click on the Array Sorter sampler and enter the following code in the Script textbox field:
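The scripts themselves appeared as screenshots in the original post; here is a minimal sketch of both, assuming the variable names the text describes (the initial order of the letters is chosen arbitrarily):

    // BeanShell PreProcessor ('Array Initializer'): declare the unsorted array
    strArray = new String[] {"d", "a", "e", "c", "b"};  // loosely typed variable
    vars.putObject("strArray", strArray);               // share it with the sampler

    // BeanShell Sampler ('Array Sorter'): sort the array and expose the result
    String[] strArray = (String[]) vars.getObject("strArray");
    java.util.Arrays.sort(strArray);                     // sorts in place
    String sorted = java.util.Arrays.toString(strArray); // "[a, b, c, d, e]"
    SampleResult.setResponseData(sorted, "UTF-8");       // store as response data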

First, we get the array using the getObject() method of the vars variable. Then we sort it using Java’s Arrays class; its sort() method takes our array as a parameter and performs the sort. We then convert the array into a string by calling the Arrays.toString() method (Arrays is a JDK utility class that provides useful operations on array objects). Finally, we store this sorted string as the response data through the SampleResult variable. Our sorted string will look like the following: [a, b, c, d, e].

2.3. Configuring BeanShell PostProcessor

The BeanShell PostProcessor will strip the unnecessary characters ‘[],’; this component acts more like a filter. Right click on the Array Sorter sampler and select Add -> Post Processors -> BeanShell PostProcessor.

We name the component ‘Array Filter’. The Script textbox field contains the code that strips the unnecessary characters from our string (a sketch follows). Recall that the string was stored as response data by the Array Sorter sampler; here we fetch it using the getResponseDataAsString() function of the prev variable. Next, we use the replace() method of the String class to strip the ‘[’, ‘]’, and ‘,’ characters, and store the result in the vars variable. This string will be used by the BeanShell Assertion component to assert the final result.
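A sketch of the filter script, using the prev variable as the text describes (an approximation of the original screenshot):

    // BeanShell PostProcessor ('Array Filter'): strip '[', ']' and ', '
    String sorted = prev.getResponseDataAsString();   // "[a, b, c, d, e]"
    String finalString = sorted.replace("[", "")
                               .replace("]", "")
                               .replace(", ", "");    // -> "abcde"
    vars.put("finalString", finalString);             // hand off to the assertion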

2.4. Configuring BeanShell Assertion

Using this component, we will assert that the final result value is ‘abcde’. Right click on the Array Sorter sampler and select Add -> Assertions -> BeanShell Assertion.

Using the vars variable, we get the final string and store it in the finalString variable. We then assert: if the final string does not contain the value ‘abcde’, we set the Failure variable to true and provide a failure message using the FailureMessage variable (see the sketch below). The output of the test execution can be seen in the command window from which you started the JMeter GUI; below is the console output after running our tests.
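A sketch of the assertion script, using the pre-defined Failure and FailureMessage variables as the text describes:

    // BeanShell Assertion: fail the sample unless the result contains 'abcde'
    String finalString = vars.get("finalString");
    if (finalString == null || !finalString.contains("abcde")) {
        Failure = true;
        FailureMessage = "Expected 'abcde' but got: " + finalString;
    }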

3. Conclusion

The BeanShell scripting language brings scripting capabilities to the Java language. In JMeter, you can use the different BeanShell components to write test scripts and execute them. Each component is equipped with useful variables that can be used in scripts to control the test flow. The scripting feature adds a powerful and useful dimension to the JMeter testing tool. The objective of this article was to show the usage of the common BeanShell components and how to write test scripts with them.

Know how well your application performs under load. Register for a free primary assessment.

JMeter Blog Series: Random Variable Example

Here’s the beginning of the series: load/performance testing using JMeter.

In this example, we will demonstrate how to configure Random Variable in Apache JMeter. We will go about configuring a random variable and apply it to a simple test plan. Before we look at the usage of Random variables, let’s look at the concept.

1. Introduction

Apache JMeter is an open-source Java-based tool that enables you to perform functional, load, performance, and regression tests on an application. The application may be running on a web server or may be standalone in nature. It supports testing on both client-server and web models containing static and dynamic resources. It supports a wide variety of protocols for conducting tests, including HTTP, HTTPS, JDBC, FTP, JMS, LDAP, SOAP, etc.
A quick look at some of the features

  • It provides a comprehensive GUI-based workbench to play around with tests. It also allows you to work in non-GUI mode, and JMeter can be ported to the server, allowing tests in a distributed environment.
  • It provides the concept of templates, which are pre-defined test plans for various schemes or protocols that can be directly used to create your required test plan.
  • It enables you to build test plans structurally using powerful features like Thread Group, Controllers, Samplers, Listeners, etc.
  • It provides debugging and error monitoring through effective logging.
  • It supports parameterized testing through the concept of variables.
  • It supports the creation of different flavours of test plans, including Web, Database, FTP, LDAP, Web service, JMS, Monitors, etc.
  • It allows remote testing by having different JMeter instances running as servers across nodes, accessed from a single client application.
  • It gives you real-time test results covering metrics like latency, throughput, response times, active threads, etc.
  • It enables you to perform testing based on regular expressions, among many other features.

1.1. Random Number Generation

Most programming languages today have an API for generating random numbers. The generator algorithm typically produces a sequence of numbers that is arbitrary and does not follow any order, structure, or format. The algorithm derives its randomness from a value called a seed: the seed drives the sequence generation, and two identical seeds will always produce the same sequence. This seed-based approach is also termed pseudo-random number generation.
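A quick Java illustration of seeding: two generators created with the same seed emit identical sequences, which is exactly why two identical seeds always produce the same sequence.

    import java.util.Random;

    // Two generators with the same seed produce identical sequences.
    public class SeedDemo {
        public static void main(String[] args) {
            Random a = new Random(42);
            Random b = new Random(42);
            for (int i = 0; i < 5; i++) {
                // Both columns print the same numbers in the range 1..10.
                System.out.println((a.nextInt(10) + 1) + " " + (b.nextInt(10) + 1));
            }
        }
    }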

1.2. Random Variable in JMeter

JMeter allows you to generate random number values and use them in a variable, via the Random Variable config element. It allows you to set the following parameters (a small sketch approximating this behavior follows the list):

  • Variable Name: The name of the variable that can be used in your test plan elements. The random value will be stored in this variable.
  • Format String: The format of the generated number. It can be prefixed or suffixed with a string. For example, if you want the generator to produce alphanumeric values, you can specify a format like SALES_000 (000 is replaced with the generated random number).
  • Minimum and Maximum Value: The range within which the numbers are generated. For example, with a minimum of 10 and a maximum of 50, the generator produces numbers within that range.
  • Per Thread (User): Whether the random generator is shared by all threads (users) or each thread has its own instance, indicated by setting false or true respectively.
  • Random Seed: The seed value for the generator. If the same seed is used for every thread (Per Thread set to true), it produces the same number sequence for each thread.
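Putting the parameters together, this sketch approximates what the config element produces for a format like SALES_000 with a 10–50 range. It is an analogy in plain Java, not JMeter’s implementation.

    import java.text.DecimalFormat;
    import java.util.Random;

    // Approximates JMeter's Random Variable: range 10..50, format SALES_000.
    public class RandomVariableSketch {
        public static void main(String[] args) {
            Random rng = new Random();        // optionally seeded, per Random Seed
            int min = 10, max = 50;
            int value = min + rng.nextInt(max - min + 1);   // inclusive range
            String formatted = "SALES_" + new DecimalFormat("000").format(value);
            System.out.println(formatted);    // e.g. SALES_037
        }
    }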

2. Random Variable By Example

We will now configure the Random Variable config element. Finding test cases for random variables is always a tricky affair. You may have a test case that tests the random number itself, e.g., whether it falls in the proper range or whether its format is valid. Another test case could be one where you provide a random number as part of a URL, say an order ID (orderId=O122) or page numbers for pagination (my-domain.com/category/apparel/page/5); such URL pages are well suited to load testing. We will use the configured variable in an HTTP Request sampler as part of the request URL. In this example, we will test the Java category pages (1 – 10) of the JCG website (www.javacodegeeks.com):
http://www.javacodegeeks.com/category/java/page/2/
The page number 2 in the URL will be fetched using the random variable.

2.1. JMeter installation and setup

Before installing JMeter, make sure you have JDK 1.6 or higher installed. Download the latest release of JMeter using the link here. At the time of writing this article, the current release of JMeter is 2.13. To install, simply unzip the archive into the home directory where you want JMeter installed. Set the JAVA_HOME environment variable to point to the JDK root folder. After unzipping the archive, navigate to the /bin folder and run the command jmeter. On Windows, you can run it from the command window. This opens the JMeter GUI window, which allows you to build the test plan.

2.2. Configuring Random Variable

To configure a Random Variable, we make use of the Config Element option. Right click on the Test Plan and select Add -> Config Element -> Random Variable.

We name the element ‘Page Counter Variable’. The Variable Name is ‘page_number’; this variable will be used in our test plan later. Keep the Output Format blank. We set the Minimum Value and Maximum Value fields to 1 and 10 respectively, meaning the generated numbers will fall between 1 and 10 (both inclusive). Keep the Seed option blank, and retain the value of the Per Thread (User) field as False, which means that if you configure multiple threads, all of them will use this same random generator instance.
Next, we create a ThreadGroup named ‘Single User’ with the Loop Count set to 10. We use only 1 thread (user) for this example; you could experiment with multiple threads to simulate a load test. The main objective of this article is to show how to configure and use a random variable, so we keep it to a simple single-user test. A loop count of 10 repeats the test ten times per user.

For our ThreadGroup we will create HTTP Request sampler named ‘JCG Java Category’.

It will point to the server www.javacodegeeks.com. Set the Path value to /category/java/page/${page_number}. Notice the use of our variable ${page_number}: as this test is repeated 10 times (loop count), at runtime the page_number variable is substituted with random values in the range of 1 to 10.
You can view the result of the test by configuring a View Results Tree listener. Run the test and you will see the following output.

As you can see, every request will generate random page values in the URL.

3. Conclusion

The random variable feature is handy when you want to load test several pages whose URL parameter values can be substituted dynamically at runtime. You could also devise other use cases for random variables. This article provided a brief insight into the Random Variable feature of JMeter.
