The Best Test Data Management Practices in an Increasingly Digital World

A quick scan of the application landscape shows that customers are more empowered, digitally savvy, and eager for superior experiences delivered faster. To achieve and maintain leadership in this landscape, organizations need to update applications constantly and at speed. This is why reliance on agile, DevOps, and CI/CD practices has increased tremendously, translating in turn to an exponential increase in the adoption of test data management initiatives. CI/CD pipelines benefit from the fact that any new code developed is automatically integrated into the main application and tested continuously. Automated tests are critical to success, and agility is lost when test data delivery does not match the velocity of code development and integration.

Why Test Data Management?

Industry data shows that up to 60% of development and testing time is consumed by data-related activities, with a significant portion dedicated to test data management. This amply validates the projection that the global test data management market will grow at a CAGR of 11.5% over the forecast period 2020-2025, according to the ResearchandMarkets TDM report.

Best Practices for Test Data Management

Any organization focusing on making its test data management discipline stronger and capable of supporting the new age digital delivery landscape needs to focus on the following three cornerstones.

Applicability:
The principle of shift left mandates that each phase in the SDLC has a tight feedback loop that ensures defects don’t move down the development/deployment pipeline, making it less costly for errors to be detected and rectified. Its success hinges to a large extent on close mapping of test data to the production environment. Replicating or cloning production data is manually intensive, and as the World Quality Report 2020-21 shows, 79% of respondents create test data manually with each run. Scripts and automation tools, when done well, can take on most of the heavy lifting and bring this figure down considerably. With production-quality data being very close to reality, defect leakage is reduced vastly, ultimately translating to a significant reduction in defect triage cost at later stages of development/deployment.

However, using production-quality data at all times may not be possible, especially for applications that are only prototypes or built from scratch. Additionally, using a complete copy of the production database is time- and effort-intensive – instead, it is worthwhile to identify relevant subsets for testing. A strategy that brings together the right mix of production-quality data and synthetic data closely aligned to production data models is the best bet. While production data maps to narrower testing outcomes in realistic environments, synthetic data is much broader and enables you to simulate environments beyond the ambit of production data. Test data automation platforms that allocate apt dataset combinations for tests can bring further stability to testing.
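As a minimal sketch of the synthetic-data side of this mix, the snippet below generates records against a hypothetical customer schema. The field names, value ranges, and `@example.com` test domain are all illustrative assumptions, not taken from any real production model:

```python
import random
import string
import uuid
from datetime import date, timedelta

def synthetic_customer():
    """Generate one synthetic customer record aligned to a sample schema."""
    name = "".join(random.choices(string.ascii_lowercase, k=6)).title()
    return {
        "customer_id": str(uuid.uuid4()),
        "name": name,
        "email": f"{name.lower()}@example.com",  # reserved test domain, never real PII
        "signup_date": (date(2020, 1, 1)
                        + timedelta(days=random.randint(0, 1500))).isoformat(),
        "lifetime_value": round(random.uniform(0, 5000), 2),
    }

def synthetic_dataset(n):
    """Produce n records, e.g. a subset sized for a single test run."""
    return [synthetic_customer() for _ in range(n)]

rows = synthetic_dataset(100)
```

In a real engagement, the generator would mirror the production data model (types, referential constraints, realistic distributions) rather than uniform random values.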

Tight coupling with production data is also complicated by a host of data privacy laws like GDPR, CCPA, CPPA, etc., that mandate protecting sensitive customer information. Anonymizing or obfuscating data to remove sensitive information is a common approach to circumvent this issue. Non-production environments are usually less secure, so data masking to protect PII becomes paramount.
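One common masking technique is deterministic pseudonymization: the same input always maps to the same token, so referential integrity across masked tables is preserved. The sketch below is illustrative only; the record layout, the list of sensitive fields, and the salt handling are assumptions, and a production setup would manage the salt as a per-environment secret:

```python
import hashlib

# Assumption: these field names come from your data-classification inventory.
SENSITIVE_FIELDS = {"name", "email", "ssn"}

def mask_value(value, salt="per-environment-secret"):
    """Deterministically pseudonymize a value: identical inputs yield
    identical tokens, so joins across masked tables still line up."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:12]
    return f"masked_{digest}"

def mask_record(record):
    """Mask only the sensitive fields; leave non-PII columns usable for tests."""
    return {k: (mask_value(str(v)) if k in SENSITIVE_FIELDS else v)
            for k, v in record.items()}

row = {"customer_id": 42, "name": "Jane Doe",
       "email": "jane@corp.com", "city": "Austin"}
masked = mask_record(row)
```

Because the mapping is one-way (a salted hash), the masked copy can sit in a less secure non-production environment without exposing the original values.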

Accuracy:
Accuracy is critical in today’s digital transformation-led SDLC, where app updates are launched to market faster and need to be as error-free as possible – a nearly impossible feat without accurate test data. The technology landscape is also more complex and integrated than ever before, and that complexity percolates into data model relationships and the environments in which they are used. The need is to maintain a single source of data truth. Many organizations create a gold master for data and then derive data subsets based on the needs of the application. Adopting tools that validate and update data automatically during each test run further ensures the accuracy of the master data.

Accuracy also entails ensuring the relevance of data in the context of the application being tested. Decade-old data formats might be applicable for an insurance application that needs historic policy data formats. However, demographic data or customer purchasing behavior data in a retail context is highly dynamic. A centralized data governance structure addresses this issue, at times sunsetting data that has served its purpose to prevent any unintended usage. This also reduces the maintenance cost of archiving large amounts of test data.

Also important is a proper data governance mechanism that provides the right provisioning capability and ownership driven at a central level, thereby helping teams use a single data truth for testing. Adopting similar provisioning techniques can further remove any cross-team constraints and ensure accurate data is available on demand.

Availability:
The rapid adoption of digital platforms and application movement into cloud environments have been driving exponential growth in user-generated data and cloud data traffic. The pandemic has accelerated this trend by moving the majority of application usage online. The ResearchandMarkets report states that for every terabyte of data growth in production, ten terabytes are used for development, testing, and other non-production use cases, driving up costs. Given this magnitude of test data usage, it is essential to align data availability with the release schedules of the application so that testers don’t need to spend a lot of time tweaking data for every code release.

Another crucial element in ensuring data availability is managing version control of the data, which helps overcome the confusion caused by multiple conflicting local databases/datasets. A centrally managed test data team helps ensure a single data truth and provides subsets of data as applicable to various subsystems or to the application under test. The central data repository also needs to be ever-evolving, since the APIs and interfaces of the application keep changing, driving the need to update test data consistently. After every test, the quality of the data can be evaluated and the central repository updated, making it more accurate. This further drives reusability of data across a plethora of similar test scenarios.
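A lightweight way to reason about dataset versions is content hashing: if a local copy’s hash differs from the central repository’s, the data has drifted. The sketch below is illustrative only and not tied to any particular TDM tool:

```python
import hashlib
import json

def dataset_version(rows):
    """Version a dataset by hashing its canonical JSON form; any change in
    content yields a different version identifier."""
    canonical = json.dumps(rows, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

central = [{"id": 1, "status": "active"}, {"id": 2, "status": "closed"}]
local   = [{"id": 1, "status": "active"}, {"id": 2, "status": "open"}]

# A mismatch tells a team their local copy has diverged from the single source of truth.
drifted = dataset_version(local) != dataset_version(central)
```

Identical content always hashes to the same version, so teams can cheaply check whether they are testing against the current gold master before a run.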

The importance of choosing the right test data management tools

In DevOps and CI/CD environments, accurate test data at high velocity is an additional critical dimension in ensuring continuous integration and deployment. Choosing the right test data management framework and tool suite helps automate various stages in making data test ready through data generation, masking, scripting, provisioning, and cloning. The World Quality Report 2020-21 indicates that the adoption of cloud and tool stacks for TDM has increased, but more maturity is needed to use them effectively.

In summary, for test data management, as with many other disciplines, there is no one-size-fits-all approach. An optimum mix of production-mapped data and synthetic data, created and housed in a centrally managed repository, is an excellent way to go. However, this approach, particularly around synthetic data generation, comes with its own set of challenges, including the need for strong domain and database expertise. Organizations have also been taking TDM to the next level by deploying AI and ML techniques that scan the data sets in the central repository and suggest the most suitable datasets for a particular application under test.

Need help? Partner with experts from Trigent to get a customized test data management solution and be a leader in the new-age digital delivery landscape.

4 Rs for Scaling Outsourced QA. The first steps towards a rewarding engagement

The expanding nature of products, the need for faster releases well ahead of the competition, knee-jerk or ad hoc reactions to newer product revenue streams, and the ever-increasing role of customer experience across newer channels of interaction are all driving the need to scale up development and testing. With the increased adoption of DevOps, the need to scale takes on a different color altogether.

Outsourcing QA has become the norm on account of its ability to address the scalability of testing initiatives and bring in a sharper focus on outcome-based engagements. The World Quality Report 2020 mentions that 34% of respondents felt QA teams lack skills, especially on the AI/ML front. This further reinforces the need to outsource to get the right mix of skill sets and avoid temporary skill gaps.

However, outsourced QA can deliver speed and scale only if the rules of engagement with the partner are clear. Focusing on the 4 R’s outlined below while embarking on the outsourcing journey will help you derive maximum value.

  1. Right Partner
  2. Right Process
  3. Right Communication
  4. Right Outcome

Right Partner

The foremost step is to identify the right partner: one with a stable track record, depth in QA, domain, and technology, and the right mix of skill sets across toolsets and frameworks. Further, given the blurring lines between QA and development, with testing being integrated across the SDLC, there is a strong need for the partner to have strengths across DevOps and CI/CD in order to make a tangible impact on the delivery cycle.

The ability of the partner to bring prebuilt accelerators to the table can go a long way in achieving cost, time, and efficiency benefits. The stability or track record of the partner translates to the ability to bring onboard the right team, one that stays committed throughout the engagement. The team’s staying power assumes special significance in longer engagements, where shifts in critical talent derail efficiency and timelines on account of the challenges involved in onboarding new talent and effecting knowledge transfer.

An often overlooked area is the partner’s integrity. During the evaluation stages, claims of industry depth and technical expertise abound, and partners tend to overpromise. Due care needs to be exercised to verify that their recommendations are grounded in delivery experience. A closer look at the partner’s references and past engagements not only offers insight into their claims but also helps evaluate their ability to deliver in your context.

It’s also worthwhile to explore if the partner is open to differentiated commercial models that are more outcome-driven and based on your needs, rather than being fixated on the traditional T&M model.

Right Process

With the right partner on board, creating a robust process and governing mechanism assumes tremendous significance. Mapping key touchpoints from the partner side, aligning them to your team, and identifying escalation points serve as a good starting point. With agile and DevOps principles having collaboration across teams as the cornerstone, development, QA, and business stakeholder interactions should form a key component of the process. While cross-functional teams with Dev QA competencies start off each sprint with a planning meeting, formulating cadence calls to assess progress and setting up code drop or hand off criteria between Dev and QA can prevent Agile engagements from degrading into mini waterfall models.

Bringing in automated CI/CD pipelines substantially reduces the need for handoffs. Processes then need to track and manage areas such as quality and release readiness, visibility across all stages of the pipeline through reporting of essential KPIs, documentation for version control, resource management, and capacity planning. At times, toolset disparity between stages and multiple teams driving parallel work streams create information silos, leading to fragmented visibility at the product level. The right process should also focus on integration aspects to bridge these gaps. Each team needs awareness of, and visibility into, ownership at each stage of the pipeline.

Further, a sound process also brings in elements of risk mitigation and impact assessment and ensures adequate controls are built into SOP documents to handle unforeseen events. Security is another critical area that needs to be incorporated into the process early on; more often than not, it is an afterthought in the DevOps process. The Puppet 2020 State of DevOps report notes that integrating security fully into the software delivery process enables quick remediation of critical vulnerabilities – 45% of organizations with this capability can remediate vulnerabilities within a day.

Right Communication

Clear and effective communication is an integral component of QA, more so when DevOps, Agile, and similar collaboration-heavy initiatives are pursued to achieve QA at scale. Effective communication at the beginning of the sprint ensures that cross-functional teams are cognizant of the expectations from each of them and have their eye firmly fixed on the end goal of application release. From then on, a robust feedback loop, one that aims at continuous feedback and response, cutting across all stages of the value chain, plays a vital role in maintaining the health of the DevOps pipeline.

While regular stand-up meetings have their own place in DevOps, effective communication needs to go much beyond to focus on tools, insights across each stage, and collaboration. A wide range of messaging apps like Slack, along with email and notification tools, accelerates inter-team communication. Many of these toolkits integrate with RSS feeds, Google Drive, and various CI tools like Jenkins, Travis, and Bamboo, making build pushes and code change notifications fully automated. Developers need notifications when a build fails, testers need them when a build succeeds, and Ops need to be notified at various stages depending on the release workflow.
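The role-based routing described above can be sketched as a simple event-to-audience table. The event names, audiences, and message format here are assumptions for illustration; a real setup would post these messages to Slack or a similar tool via its webhook API:

```python
# Map each pipeline event to the audiences who should hear about it,
# mirroring the workflow above: developers on failure, testers on success,
# Ops at deployment stages.
ROUTES = {
    "build_failed":   ["developers"],
    "build_passed":   ["testers"],
    "deploy_started": ["ops"],
    "deploy_done":    ["ops", "stakeholders"],
}

def route_event(event, detail):
    """Return the formatted notification messages for one pipeline event."""
    return [f"[{audience}] {event}: {detail}"
            for audience in ROUTES.get(event, [])]

msgs = route_event("build_failed", "commit abc123 broke unit tests")
```

Keeping the routing table in one place also makes it easy to prioritize and categorize notifications per class of user, as discussed below.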

The toolkits adopted by the partner also need to extend communication to your team. At times, it makes sense for the partner to have customer service and help desk support as an independent channel to accept your concerns. The Puppet report further mentions that companies at a high level of DevOps maturity use ticketing systems 16% more than companies at the lower end of the maturity scale. Communicating the project’s progress and evolution to all concerned stakeholders is integral, irrespective of the platforms used. Equally important is the need to prioritize communication and target it to the classes of users to whom it is most applicable.

Documentation is an important component of communication and, in our experience, a commonly underplayed one. It is important for sharing work, knowledge transfer, continuous learning, and experimentation. Well-documented code also enables faster audits. In CI/CD-based software release methodologies, code documentation plays a strong role in version control across multiple releases. Experts advocate continuous documentation as a core communication practice.

Right Outcome

Finally, it goes without saying that setting parameters for measuring the outcome, and tracking and monitoring them, determines the success of the partner in scaling your QA initiatives. Metrics like velocity, reliability, reduced application release cycles, and the ability to ramp up/ramp down are commonly used. Further, there is also a set of metrics aimed at the efficiency of the CI/CD pipeline, like environment provisioning time, feature deployment rate, and a series of build, integration, and deployment metrics. However, it is imperative to supplement these with others that are more aligned to customer-centricity – delivering user-ready software faster with minimal errors at scale.

In addition to the metrics used to measure and improve various stages of the CI/CD pipeline, we also need to track several non-negotiable improvement measures. Many of these, like deployment frequency, error rates at increased load, performance and load balancing, automation coverage of the delivery process, and recoverability, help ascertain the efficiency of the QA scale-up.
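Two of these measures, deployment frequency and the failure rate of changes, can be computed directly from a deployment log. The sketch below uses a made-up log purely for illustration:

```python
from datetime import date

# Hypothetical deployment log: each entry records the day of a release and
# whether that release caused a failure in production.
deployments = [
    {"day": date(2021, 3, 1), "failed": False},
    {"day": date(2021, 3, 3), "failed": True},
    {"day": date(2021, 3, 5), "failed": False},
    {"day": date(2021, 3, 8), "failed": False},
]

def deployment_frequency(deps, period_days):
    """Average deployments per day over the observation window."""
    return len(deps) / period_days

def change_failure_rate(deps):
    """Fraction of deployments that caused a failure."""
    return sum(d["failed"] for d in deps) / len(deps)

freq = deployment_frequency(deployments, period_days=7)  # 4 deploys in 7 days
cfr = change_failure_rate(deployments)                   # 1 failure out of 4
```

Tracking these per sprint gives an objective, partner-independent view of whether the QA scale-up is actually improving delivery.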

Closely following on the heels of the earlier point, an outcome-based model that maps financials to your engagement objectives will help track outcomes to a large extent. While the traditional T&M model is governed by transactional metrics, project overruns abound where engagement scope does not align well with outcome expectations. An outcome-based model also pushes the partner to bring in innovation through AI/ML and similar new-age technology drivers – giving you access to such skill sets without having them on your rolls.

If you are new to outsourcing or working with a new partner, it may be good to start with a non-critical aspect of the work (regular testing or automation), establish the process, and then scale the engagement. For players with maturity in adopting outsourced QA functions in some way or the other, the steps outlined earlier form an all-inclusive checklist to maximize engagement traction and effectiveness with the outsourcing partner.

Partner with us

Trigent’s experienced and versatile Quality Assurance and Testing team is a major contributor to the successful launch, upgrade, and maintenance of quality software used by millions around the globe. Our experienced, responsible testing practices put process before convenience to delight stakeholders with an impressive, industry-leading Defect Escape Ratio (DER) of 0.2.

Trigent is an early pioneer in the IT outsourcing and offshore software development business. We enable organizations to adopt digital processes and customer engagement models to achieve outstanding results and end-user experience. We help clients achieve this through enterprise-wide digital transformation, modernization, and optimization of their IT environment. Our decades of experience, deep domain knowledge, and technology expertise deliver transformational solutions to ISVs, enterprises, and SMBs.

Contact us now.

Outsourcing QA in the world of DevOps – Best Practices for Dispersed (Distributed) QA teams

DevOps is the preferred methodology for software development and release, with collaborating teams oriented towards faster delivery cycles augmented by early feedback. QA is a critical binding thread of DevOps practice, with early inclusion at the story definition stage. Adoption of a distributed QA model had earlier been bumpy; however, the pandemic has evened out the rough edges.

The underlying principle that drives DevOps is collaboration. With outsourced QA being expedited through teams distributed across geographies and locations, a plethora of aspects that were hitherto guaranteed by co-located teams have come under a lot of pressure. Concerns range from constant communication and interworking to coverage across a wide range of testing types – unit testing, API testing, as well as validating experiences across a wide range of channels. As with everything in life, DevOps needs a balanced approach, maintaining collaboration and communication between teams while ensuring that delivery cycles are up to speed and the quality of the delivered product meets customer expectations.

Outlined below are some of the best practices for ensuring the effectiveness of distributed QA teams in an efficient DevOps process.

Focus on the right capability: While organizations focus to a large extent on bringing together capabilities across development, support, QA, operations, and product management in a scrum team, QA skills are paramount from a quality perspective. The challenge is to find the right skill mix – for example, a good exploratory tester and good automation skills, not necessarily in the same person. In addition, specialist skills related to performance, security, and accessibility also need to be thought through. The key is to choose an optimum mix of specialists and generalists.

Aim to achieve the right platform/tool mix: It is vital to maintain consistency across the tool stacks used for the engagement. As per a 451 Research survey, 39% of respondents juggle 11 to 30 tools to keep an eye on their application infrastructure and cloud environment, and 8% use even more. Commonly referred to as tool sprawl, this makes it extremely difficult to collaborate in an often decentralized and distributed QA environment. It’s imperative to have a balanced approach towards the tool mix, ideally by influencing the team to adopt a common set of tools instead of making it mandatory.

Ensure a robust CI/CD process and environment: A weak and insipid process may cause the development and operations teams to run into problems while integrating new code. With several geographically distributed teams committing code into the CI environment, shared dev/test/deploy environments constantly run into issues if sufficient thought has not gone into identifying environment configurations. These issues can ultimately translate into failed tests and thereby failed delivery/deployment. A well-defined automated process ensures continuous deployment and monitoring throughout the lifecycle of an application, from the integration and testing phases through to release and support.

A good practice would be to adopt cloud-based infrastructure, reinforced by mechanisms for managing any escalations on deployment issues effectively and quickly. Issues like build failures or lack of infra support can hamper the productivity of distributed teams. When strengthened by remote alerts, robust reporting capabilities, and resilient communication infrastructure, accelerated development to deployment becomes a reality.

Follow good development practices: Joint backlog grooming exercises with all stakeholders, regular updates on progress, code analysis, and effective build & deployment practices, as well as establishing a workflow for defect/issue management, are paramount in ensuring the effectiveness of distributed DevOps. Equally important is the need to manage risk early with ongoing impact analysis, code quality reviews, risk-based testing, and real-time risk assessments. In short, the adoption of risk and impact assessment mechanisms is vital.
Another key area of focus is the need to establish robust metrics that help in the early identification of quality issues and ease integration with the development cycle. Recent research from Gatepoint and Perfecto surveyed executives from over 100 leading digital enterprises in the United States on their testing habits, tools, and challenges. The results show that 63 percent start to test only after a new build is developed; just 40 percent test upon each code change or at the start of new software.

Devote equal attention to both manual and automation testing: Manual (or exploratory) testing allows you to ensure that product features are well tested, while automation of tests (or, as some say, checks!) helps you improve coverage for repeatable tasks. Planning for both during your early sprint planning meetings is important. In most cases, automation is given step-motherly treatment and falls by the wayside due to scope creep and repeated testing due to defects. A 2019 State of Testing report shows that only 25 percent of respondents have more than 50 percent of their functional tests automated. The ideal approach, then, is to separate the two sets of activities and ensure that both get equal attention from their own set of specialists.

Early non-functional focus: Organizations tend to overlook the importance of periodically validating how the product fares on performance, security vulnerabilities, or important regulations like accessibility until late in the day. In the 2020 DevSecOps Community Survey, 55 percent of respondents deploy at least once per week, and 18 percent claim multiple daily deployments. But when it comes to security, 45 percent of the survey’s respondents know it’s important but don’t have time to devote to it. Security further impacts the CI/CD tool stack itself, as indicated by the 451 Research survey, in which more than 60% of respondents said a lack of automated, integrated security tools is a big challenge in implementing CI/CD tools effectively.

It is essential that any issue which is non-functional in nature be exposed and dealt with before it moves down the dev pipeline. Adoption of a non-functional focus depends to a large extent on the evolution of the product and the risk to the organization.

In order to make distributed QA teams successful, an organization must have the capability to focus in a balanced and consistent way across the length and breadth of the QA spectrum, from people and processes to technology stacks. It is heartening to note that the recent pandemic situation has revealed a positive trend in terms of better acceptance of these practices. However, the ability to make these practices work hinges on the diligence with which an organization institutionalizes these best practices and platforms and supplements them with an organizational culture that is open to change.


Test our abilities. Contact us today.

Responsible Testing – Human centricity in Testing

Why responsibility in testing?

Consumers demand quality and expect more from products. The DevOps culture emphasizes the need for speed and scale of releases. As CI/CD crisscrosses with quality, it is vital to engage a human element in testing to foresee potential risks and think on behalf of the customer and the end-user.

Trigent looks at testing from a multiplicity of perspectives. Our test team gets involved at all stages of the DevOps cycle, not just when the product is ready. For us, responsible testing begins early in the cycle.

Introduce the Quality factor in DevOps

A responsible testing approach goes beyond the call of pre-defined duties and facilitates end-to-end stakeholder assurance and business value creation. Processes and strategies like risk assessment, non-functional tests, and customer experiences are baked into testing. Trigent’s philosophy of Responsible Testing characterizes all that we focus on while testing for functionality, security, and performance of an application.

Risk coverage: Assessing failure and impact early on is one of the most critical aspects of testing. We work along with our clients’ product development teams to understand what’s important to stakeholders, and evaluate and anticipate risks early on, giving our testing a sharp focus.

Collaborative Test Design: We consider the viewpoints of multiple stakeholders to get a collaborative test design in place. Asking the right questions to the right people to get their perspectives helps us in testing better.

Customer experience: Responsible Testing philosophy strongly underlines customer experience as a critical element of testing. We test for all promises that are made for each of the customer touchpoints.

Test early, test often: We take the shift-left approach early on in the DevOps cycle. More releases and shorter release cycles mean testing early and testing often, which translates into constantly rolling out new and enhanced requirements.

Early focus on non-functional testing: We plan for non-functional testing needs at the beginning of the application life cycle. Our teams work closely with the DevOps teams to test for security, performance, and accessibility – as early as possible.

Leverage automation: In our Responsible Testing philosophy, we look at automation as a means to make the process work faster and better, and to leverage tools that give better insights into testing and the areas to focus on. The mantra is judicious automation.

Release readiness: We evaluate all possibilities of going to the market – checking if we are operationally ready, planning for the support team’s readiness to take on the product. We also evaluate the readiness of the product, its behavior when it is actually released, and prepare for the subsequent changes expected.

Continuous feedback: Customer reviews and feedback speak volumes about their experience with the application. We see them as an excellent opportunity to address customer concerns in real time and offer a better product. Adopting the shift-right approach, we focus on continuously monitoring product performance and leveraging the results to improve our test focus.

Think as a client. Test as a consumer.

Responsibility in testing is an organizational trait that is nurtured into Trigent’s work culture. We foster a culture where our testers imbibe qualities such as critical thinking on behalf of the client and the customer, the ability to adapt, and the willingness to learn.

Trigent values these qualitative aspects and soft skills in a responsible tester that contribute to the overall quality of testing and the product.
Responsibility: We take responsibility for the quality of testing of the product and also the possible business outcomes.

Communication: In today’s workplace, collaborating with multiple stakeholders and teams within and outside the organization is the reality. We emphasize not just functional skill sets but also the ability to understand people, empathize with different perspectives, and express requirements effectively across levels and functions.

Collaboration: We value good collaboration among BAs, POs, developers, and QA – a trait critical to understanding product features and usage models and to working seamlessly with cross-functional teams.

Critical thinking: As drivers of change in technology, it is critical to develop a mindset of asking the right questions and anticipating future risks for the business. In the process, we focus on gathering relevant information from the right stakeholders to form deep insights about the business and the consumer. Our Responsible Testing approach keeps the customer experience at the heart of testing.

Adaptability & learning: In the constantly changing testing landscape, being able to quickly adapt to new technologies and the willingness to learn helps us offer better products and services.

Trigent’s Responsible Testing approach is a combination of technology and human intervention that elevates the user experience and the business value. To experience our Responsible Testing approach, talk to our experts for QA & Testing solutions.

Learn more about responsible testing in our webinar and about Trigent’s software testing services.

Accelerate CI/CD Pipeline Blog Series – Part 1- Continuous Testing

Given their usefulness in software development, Agile methodologies have come to be embraced across the IT ecosystem to streamline processes, improve feedback, and accelerate innovation.

Organizations now see DevOps as the next wave after Agile, one that enables Continuous Integration and Continuous Delivery (CI/CD). While Agile helped streamline and automate the entire software delivery lifecycle, CI/CD goes further. CI integrates and tests code often – sometimes several times in a single day – creating a stream of smaller, more frequent releases through CD.

As a principal analyst at Forrester Research puts it succinctly: “If Agile was the opening act, continuous delivery is the headliner.” The link that enables CI/CD, however, is Continuous Testing (CT).

What is Continuous Testing?

Continuous Testing is the process of acquiring feedback on the business risks of a software release as rapidly as possible. It is achieved by making test automation an integral part of the software delivery pipeline: tests are seamlessly interwoven throughout the pipeline rather than tagged on at the end, enabling early risk identification and incremental coverage.

Though CI/CD enables speed-to-market, inadequate end-to-end experience testing can turn it into a liability. A key aspect of CT is to leverage test automation to enable both coverage and speed.
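To make the idea concrete, a continuous test is simply a fast, deterministic automated check that runs on every integration and fails the build when a regression appears. A minimal Python sketch (the pricing function and test names here are hypothetical illustrations, not taken from any specific pipeline):

```python
# Hypothetical unit under test: a pricing rule touched by the latest commit.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Fast, deterministic checks like these run on every code integration;
# in a CI pipeline, a failing check exits non-zero and blocks the release.
def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 120)
    except ValueError:
        return
    raise AssertionError("out-of-range percent was accepted")

test_typical_discount()
test_invalid_percent_rejected()
```

In practice such checks would live in a test suite run by a framework like pytest on every commit; the point is that they are quick enough to run continuously rather than in a late, separate test phase.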

Test Automation – Continuous Testing’s Secret Success Factor

Test automation is the key to ensuring that Quality Assurance is continuous, agile, and reliable. CT involves automating tests and running them early and often. It leverages service virtualization to increase test coverage when parts of the business functionality become available at different points in time.
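Service virtualization is usually done with dedicated tools, but the core idea can be sketched in a few lines: a stand-in responds like a dependency that is not yet available, so tests of the surrounding code can run anyway. A minimal Python illustration using the standard library's `unittest.mock` (the `checkout` function and payment service are hypothetical):

```python
from unittest import mock

# Hypothetical client code that depends on a payment service
# that is not yet available in the test environment.
def checkout(payment_client, amount: float) -> str:
    response = payment_client.charge(amount)
    return "confirmed" if response["status"] == "ok" else "failed"

# Service virtualization in miniature: a stub stands in for the real
# service, so the checkout flow can be tested before the dependency exists.
virtual_payment_service = mock.Mock()
virtual_payment_service.charge.return_value = {"status": "ok"}

assert checkout(virtual_payment_service, 49.99) == "confirmed"
virtual_payment_service.charge.assert_called_once_with(49.99)
```

Full service virtualization tools go further, simulating latency, error rates, and stateful behavior over the network, but the testing benefit is the same: coverage is not blocked by an unfinished or unavailable component.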

Automated testing binds together all the other processes that comprise the CD pipeline and makes DevOps work. By validating changing scenarios, smart automation helps deliver software faster.

In Part II of the blog series, we will talk more about why test automation is essential for CI/CD and discuss automation frameworks.

Learn more about Trigent’s software testing services or test automation services.

Outsourcing Testing in a DevOps World

Software products today are developed for a unified experience. Applications are expected to perform and deliver a seamless experience across multiple types of devices and operating platforms.

Additionally, the growing demand for launching products at pace and scale is pushing businesses to be market-ready in shorter time frames. The prevalence of Agile/DevOps practices now requires testing to be carried out in parallel with development. Continuous development, integration, testing, and deployment have become the norm. Testers are now part of the development process, testing features, releases, and updates as they are developed.

Testing and deploying a multi-platform product in a fast-paced environment requires expertise and complementary infrastructure to deliver a unified experience. Add multiple product lines, constant updates for new features, a complex deployment, and a distributed user base into the mix, and your search for an outsourcing partner can become a daunting task.

We share some considerations that can guide your decision-making, drawn from our experience working as an outsourcing partner for our clients and helping them deliver quality products on time.

Criteria our clients applied before selecting us as their outsourcing testing partners

Need for staff augmentation vs. managed services

Choose staff augmentation if the requirement is short-term and the tasks are well defined. For a long-term project, it is best to opt for managed services. Managed services suit projects that require ongoing support and skill sets that are vital to the product but not available in-house, and they fit well for long-term projects with a clear understanding of outputs and outcomes.

Agility of the vendor’s testing practices

Agile/DevOps methodologies now drive a healthy percentage of software development and testing. Can the vendor maintain velocity in an Agile/DevOps environment? Do they have the processes to integrate into cross-functional, distributed teams to ensure continuous integration, development, testing, and deployment?

Relevant experience working for your industry

Relevant industry experience ensures that the testers involved understand your business domain. Industry knowledge not only increases efficiency but also guides testers to prioritize the tests with the highest business impact.

Tools, frameworks, and technologies that the vendor offers

Understand the vendor’s expertise in terms of tools, frameworks, and technologies. What is their approach to automation? Do they use or recommend licensed or open-source tools? These are some considerations that can guide your evaluation.

Offshoring – Onshoring – Bestshoring

Many vendors recommend offshoring processes to reap cost savings. But does offshoring translate into an equally beneficial proposition for you? While you can best ascertain the applicability and benefits of offshoring, it is advisable to go for a mix of the three. In a managed services engagement, right-shoring (a mix of onsite and offshore) ensures that the coordination aspects are handled by the vendor.

Reputation in the market

Ascertaining the vendor’s reputation in the market is another useful evaluation method. Independent reviews of the organization, the longevity of its existing engagements (customer stickiness), references from businesses that have previously engaged the vendor, and the number of years it has been in business are some of the factors that can be applied.

Culture within the organization

The culture of your potential partner must broadly align with your organizational culture. The vendor should identify with your culture, adapt quickly, act ethically, and gel well with your existing team. The culture within the vendor organization is also crucial to ensuring that employees respect their commitments and stay accountable for assigned responsibilities.

Low-cost vs. high-quality output

People with experience in the required domain and technology, working on a customized delivery model within stipulated timelines, come at a premium. But they are far more likely to deliver value than a low-cost, fragmented solution with inexperienced manpower, little or no domain-specific knowledge, and an unsure commitment to timely delivery.

Do you know of other factors that influence decision making in terms of identifying the right outsourcing partner? Please share your thoughts.