
Monday, June 16, 2025

Generative AI: Transforming Software Testing

Generative AI (GenAI) is poised to fundamentally transform the software development lifecycle (SDLC), particularly in the realm of software testing. As applications grow increasingly complex and release cycles accelerate, traditional testing methods are proving inadequate. GenAI, a subset of artificial intelligence, offers a game-changing solution by dynamically generating test cases, identifying potential risks, and optimising testing processes with minimal human input. This shift promises significant benefits, including faster test execution, enhanced test coverage, reduced costs, and improved defect detection. While challenges related to data quality, integration, and skill gaps exist, the future of software testing is undeniably intertwined with the continued advancement and adoption of GenAI, leading towards autonomous and hyper-personalised testing experiences.

Main Themes and Key Ideas

1. The Critical Need for Generative AI in Modern Software Testing

Traditional testing methods are struggling to keep pace with the evolving landscape of software development.

  • Increasing Application Complexity: Modern applications, built with "microservices, containerised deployments, and cloud-native architectures," overwhelm traditional tools. GenAI helps by "predicting failure points based on historical data" and "generating real-time test scenarios for distributed applications."
  • Faster Release Cycles in Agile & DevOps: The demand for rapid updates in CI/CD environments necessitates accelerated testing. "According to the World Quality Report 2023, 63% of enterprises struggle with test automation scalability in Agile and DevOps workflows." GenAI "automates the creation of high-coverage test cases, accelerating testing cycles" and "reduces dependency on manual testing, ensuring faster deployments."
  • Improved Test Coverage & Accuracy: Manual test scripts often miss "edge cases," leading to post-production defects. GenAI "analyzes real-world user behavior, ensuring comprehensive test coverage" and "automatically generates test scenarios for corner cases and security vulnerabilities."
  • Reducing Manual Effort and Costs: "Manual testing and script maintenance are labor-intensive." GenAI "automatically generates test scripts without human intervention" and "adapts existing test cases to application changes, reducing maintenance overhead."

2. Core Capabilities and Benefits of Generative AI in Software Testing

GenAI leverages machine learning and AI to create new content based on existing data, leading to a paradigm shift in testing.

  • Accelerated Test Execution: "Faster test cycles reduce time-to-market."
  • Enhanced Test Coverage: "AI ensures comprehensive testing across all application components."
  • Reduced Script Maintenance: "Self-healing capabilities minimise script updates."
  • Cost Efficiency: "Lower resource allocation reduces testing costs."
  • Better Defect Detection: "Predictive analytics identify defects before they impact users."

3. Key Applications of Generative AI in Software Testing

GenAI’s practical applications are diverse and address many pain points in current testing practices.

  • Automated Test Case Generation: GenAI "analyzes application logic, past test results, and user behavior to create test cases," identifying "missing test scenarios" and ensuring "edge case testing."
  • Self-Healing Test Automation: Addresses the significant pain point of script maintenance. GenAI "uses computer vision and NLP to detect UI changes" and "automatically updates automation scripts, preventing test failures." Examples include Mabl and Testim.
  • Test Data Generation & Management: Essential for complex applications, GenAI "creates synthetic test data that mimics real-world user behavior" and "ensures compliance with data privacy regulations (e.g., GDPR, HIPAA)." Examples include Tonic AI and Datomize.
  • Defect Prediction & Anomaly Detection: GenAI "analyzes past defect data to identify patterns and trends," "predicts high-risk areas," and "detects anomalies in logs and system behavior." Appvance IQ is cited for reducing "post-production defects by up to 40%."
  • Optimising Regression Testing: GenAI "identifies the most relevant test cases for each code change" and "reduces test execution time by eliminating redundant tests." Applitools uses "AI-driven visual validation."
  • Natural Language Processing (NLP) for Test Case Creation: Bridges the gap between manual and automated testing by "converting plain-English test cases into automation scripts," simplifying automation for non-coders.

4. Challenges in Implementing Generative AI

Despite the immense potential, several hurdles need to be addressed for successful adoption.

  • Data Availability & Quality: GenAI requires "large, high-quality datasets," and "poor data quality can lead to biased or inaccurate test cases."
  • Integration with Existing Tools: "Many enterprises rely on legacy systems that lack AI compatibility."
  • Skill Gap & AI Adoption: QA teams require "AI/ML expertise," necessitating "upskilling programs."
  • False Positives & Over-Testing: AI models "may generate excessive test cases or false defect alerts, requiring human oversight."

5. The Future of Generative AI in Software Testing

The article forecasts significant advancements leading to more autonomous and integrated testing.

  • Autonomous Testing: Future frameworks will "not only design test cases but also execute and analyze them without human intervention." This includes "Self-healing test automation," "AI-driven exploratory testing," and "Autonomous defect triaging."
  • AI-Augmented DevOps: The fusion of GenAI with DevOps will create "hyper-automated CI/CD pipelines" capable of "predicting failures and resolving them in real time." This encompasses "AI-powered code quality analysis," "Predictive defect detection," and "Intelligent rollback mechanisms."
  • Hyper-Personalized Testing: GenAI will enable testing "tailored to specific user behaviors, preferences, and environments," including "Dynamic test scenario generation," "AI-driven accessibility testing," and "Continuous UX optimisation."

Conclusion

Generative AI is not merely an enhancement but a "necessity rather than an option" for organisations seeking to maintain software quality in a rapidly evolving digital landscape. By addressing the complexities of modern applications, accelerating release cycles, improving coverage, and reducing costs, GenAI will enable enterprises to deliver "faster, more reliable software." While challenges require strategic planning and investment, the trajectory of GenAI in software testing points towards an increasingly automated, intelligent, and efficient future.

Generative AI in Software Testing



Generative AI (GenAI) is poised to fundamentally transform the software development lifecycle (SDLC)—especially in software testing. As applications grow in complexity and release cycles shorten, traditional testing methods fall short. GenAI offers a game-changing solution: dynamically generating test cases, identifying risks, and optimizing testing with minimal human input.

Key benefits include:

  • Faster test execution

  • Enhanced coverage

  • Cost reduction

  • Improved defect detection

Despite challenges like data quality, integration, and skill gaps, the future of software testing is inseparably linked to GenAI, paving the way toward autonomous and hyper-personalized testing.


🚀 Main Themes & Tools You Can Use


1. The Critical Need for GenAI in Modern Software Testing

Why GenAI? Traditional testing can’t keep pace with:

  • Complex modern architectures (microservices, containers, cloud-native)

    • GenAI predicts failure points using historical data and real-time scenarios.

    • 🛠️ Tool Example: Diffblue Cover — generates unit tests for Java code using AI.

  • Agile & CI/CD Release Pressure

    • According to the World Quality Report 2023, 63% of enterprises face test automation scalability issues.

    • 🛠️ Tool Example: Testim by Tricentis — uses AI to accelerate test creation and maintenance.

  • Missed Edge Cases

    • GenAI ensures coverage by analyzing user behavior and generating test cases automatically.

    • 🛠️ Tool Example: Functionize — AI-powered test creation based on user journeys.

  • High Manual Effort

    • GenAI generates and updates test scripts autonomously.

    • 🛠️ Tool Example: Mabl — self-healing, low-code test automation platform.


2. Core Capabilities and Benefits of GenAI in Testing

Capability                     Impact
Accelerated Test Execution     Speeds up releases
Enhanced Test Coverage         Covers functional, UI, and edge cases
Reduced Script Maintenance     AI auto-updates outdated tests
Cost Efficiency                Fewer resources, less manual work
Improved Defect Detection      Finds bugs early via predictive analytics


🛠️ Tool Reference: Appvance IQ — uses AI to improve defect detection and test coverage.


3. Key Applications of GenAI in Software Testing

✅ Automated Test Case Generation

  • Analyzes code logic, results, and behavior to generate meaningful test cases.

  • 🛠️ Tool: Testsigma — auto-generates and maintains tests using NLP and AI.

🔧 Self-Healing Test Automation

  • Automatically adapts to UI or logic changes.

  • 🛠️ Tools:

    • Mabl — self-healing, low-code test automation

    • Testim — AI-based test creation and maintenance

🧪 Test Data Generation & Management

  • Creates compliant synthetic data simulating real-world conditions (see the sketch after this list).

  • 🛠️ Tools:

    • Tonic.ai — privacy-safe synthetic test data

    • Datomize — dynamic data masking & synthesis
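
To make this concrete, here is a minimal sketch of synthetic data generation in Java using the open-source JavaFaker library (an assumption for illustration; the tools above use their own, far more capable engines):

import com.github.javafaker.Faker;

public class SyntheticUserData {
 public static void main(String[] args) {
  Faker faker = new Faker();
  // Each record is realistic but entirely fictitious, so no production
  // PII ever enters the test environment.
  for (int i = 0; i < 5; i++) {
   System.out.printf("%s | %s | %s%n",
     faker.name().fullName(),
     faker.internet().emailAddress(),
     faker.address().city());
  }
 }
}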

๐Ÿ” Defect Prediction & Anomaly Detection

  • Identifies defect-prone areas before they affect production.

  • 🛠️ Tool: Appvance IQ

๐Ÿ” Optimizing Regression Testing

  • Prioritizes relevant tests for code changes.

  • 🛠️ Tool: Applitools — AI-driven visual testing and regression optimization.

✍️ NLP for Test Case Creation

  • Converts natural language into executable tests (a toy illustration follows this section).

  • 🛠️ Tool: TestRigor — plain English to automated test scripts.
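
Under the hood, the idea is to map natural-language steps onto executable actions. The toy keyword-driven runner below is an illustration only (real tools rely on trained NLP models, not string matching), using Selenium WebDriver:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class PlainEnglishRunner {
 private final WebDriver driver;

 public PlainEnglishRunner(WebDriver driver) {
  this.driver = driver;
 }

 // Executes one plain-English step, e.g. "open https://example.com",
 // "type hello into searchBox", or "click submitButton".
 public void run(String step) {
  if (step.startsWith("open ")) {
   driver.get(step.substring("open ".length()));
  } else if (step.startsWith("type ")) {
   String body = step.substring("type ".length());
   int idx = body.lastIndexOf(" into ");
   String text = body.substring(0, idx);
   String fieldId = body.substring(idx + " into ".length());
   driver.findElement(By.id(fieldId)).sendKeys(text);
  } else if (step.startsWith("click ")) {
   driver.findElement(By.id(step.substring("click ".length()))).click();
  } else {
   throw new IllegalArgumentException("Unrecognised step: " + step);
  }
 }
}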


4. Challenges in Implementing GenAI

Challenge                      Description
Data Availability & Quality    Poor data → inaccurate test generation
Tool Integration               Legacy tools may lack AI support
Skill Gap                      Requires upskilling QA teams in AI/ML
False Positives                Over-testing may need human review


🛠️ Solution Suggestion: Use platforms like Katalon Studio that offer GenAI plugins with low-code/no-code workflows to reduce technical barriers.


5. The Future of GenAI in Software Testing

🤖 Autonomous Testing

  • Test frameworks that design, execute, and analyze tests on their own.

  • 🛠️ Tool: Functionize

🔄 AI-Augmented DevOps

  • Integrated CI/CD with AI-based code quality checks and rollback mechanisms.

  • 🛠️ Tool: Harness Test Intelligence — AI-powered testing orchestration in pipelines.

🎯 Hyper-Personalized Testing

  • Tailors tests to real user behavior and preferences.

  • 🛠️ Tool: Testim Mobile — for AI-driven UX optimization and mobile test personalization.


🧩 Conclusion

Generative AI isn’t just an enhancement — it’s becoming a necessity for QA teams aiming to keep pace in a high-velocity development environment.

By combining automation, intelligence, and adaptability, GenAI can enable faster releases, fewer bugs, and more robust software.

✅ Start exploring tools like Testim, Appvance IQ, Mabl, Functionize, and Applitools today to get a head start on the future of intelligent testing.


💬 Let’s Discuss:

Have you implemented GenAI tools in your QA process? What has been your experience with tools like TestRigor, Tonic.ai, or Mabl?

👇 Drop your thoughts or tool recommendations in the comments.


#GenAI #SoftwareTesting #Automation #AIinQA #TestAutomation #DevOps #SyntheticData #AItools #QualityEngineering

Thursday, September 2, 2021

API Automation Guidelines

As automation engineers, we need to follow a few guidelines.

A few of these guidelines are listed below:

  • No code change in the master branch directly - work on feature branches

  • Build the project locally before raising a PR

  • Run the test(s) locally before raising a PR

  • Every PR must be reviewed by at least one person

    • Post your PR link on the Slack channel, tagging the concerned people; the reviewer will merge the PR and update the Slack thread with a comment

    • The reviewer has to ensure that the newly added tests pass on the pipeline before merging

  • Ensure we add a proper commit message when committing any code

    • Example: “automated customer cancel in order flow” or “modified X to achieve Y”. Write a meaningful commit message instead of just “commit” or “fixed”

  • A test method should be 40-50 lines long at most

    • Break it into private methods if needed

    • Name the test method such that there is NO need to document its behaviour - test method names should start with "verify******"

  • Do NOT let any PR stay open beyond 3-4 days - either get it merged within this period or close the current one (if it is spilling over 3-4 days) and create another after a local rebase

  • Put all assertions in test classes; use return values from helper methods to obtain what needs to be compared (see the sketch after this list)

  • Always add a message with assertions to be logged upon a failure - it gives good context of the issue in the report, upfront

  • Ensure the correct tags are attached to the scenarios/tests before raising a PR (Smoke, Regression, ServiceType)

  • Don’t use “System.out.println” in the code; use the TestNG logger only.

  • Add Allure annotations properly so test reports can be used effectively.

  • Test your code with all negative cases. Avoid null pointer exceptions in your code.

  • Add logging for each API call (request call, request payload, and response JSON at a minimum; see the sketch after this list).

  • Add any other necessary logging for your test case so it is helpful later for debugging

  • Avoid adding redundant code and create a helper method instead.

  • Always add health check verification for the new APIs.
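
To illustrate several of these guidelines at once (assertions kept in the test class with a failure message, a helper method that returns the value under test, and request/response logging), here is a sketch using REST Assured and TestNG; the /orders endpoint and the ORD-123 id are hypothetical:

import static org.testng.Assert.assertEquals;

import io.restassured.RestAssured;
import io.restassured.response.Response;
import org.testng.annotations.Test;

public class OrderCancelTest {

 // Helper performs the call and logs request + response; it returns the
 // value under test instead of asserting, per the guidelines above.
 private String fetchOrderStatus(String orderId) {
  Response response = RestAssured.given()
    .log().all()                 // logs request call + payload
    .get("/orders/" + orderId);  // hypothetical endpoint
  response.then().log().all();   // logs response JSON
  return response.jsonPath().getString("status");
 }

 @Test
 public void verifyCancelledOrderShowsCancelledStatus() {
  String status = fetchOrderStatus("ORD-123"); // hypothetical id
  // Assertion lives in the test class, with a message for the report.
  assertEquals(status, "CANCELLED",
    "Order status mismatch after cancellation for order ORD-123");
 }
}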

NFR Template/Checklist for JIRA


To make the NFRs a predefined template/checklist, we came up with a few critical points to start with; the checklist is auto-populated whenever someone creates a story in the project.

The idea is to push NFR discussions into the initial phases, such as design and development, with QA acting as a cross-check. Apart from the predefined template/checklist, anyone can work on other points too, for which a checklist has already been published in Confluence under Guidelines. Having a predefined checklist in each story ensures we have NFR discussions alongside functional ones for every deliverable to production.


Logging

  • Have we ensured we are not logging access logs?
    Comment: Access logs are the request logs containing the API path, status code, latencies, and other information about the request. We can avoid logging these since the information is already available in the istio-proxy logs.
  • Have we ensured we didn't add any secrets (DB passwords, keys, etc.) to the logs?
  • Have we ensured that the payload gets logged in the event of an error?
  • Have we ensured that the logging level can be dynamically configured?
  • Have we ensured that the entire sequence of events in a particular flow can be identified using an identifier such as an orderId?
    Comment: The logs should be meaningful enough that anyone looking at them, regardless of whether they have context on the code, can understand the flow. For new features, it may be important to log at info level to confirm the feature works as expected in production; once we are confident, these logs can be changed to debug unless still required. Devs can take the call based on the requirement.
  • Have we ensured that we are using logging levels diligently?

Timeouts (a client-side sketch in Java follows this checklist)

  • Have we ensured that we have set a timeout for database calls?
  • Have we ensured that we have set a timeout for API calls?
  • Have we ensured that timeouts are derived from dependent component timeouts?
    Comment: An API might internally depend on a few other components (APIs, DB queries, etc.). The overall API timeout should be set after careful consideration of the dependent component timeouts.
  • Have we ensured that we have set an HTTP timeout?
    Comment: Today, most of our services set timeouts at the client (caller). We should also start setting timeouts for requests on the server (callee); this way we kill the request on the server if it exceeds the timeout, regardless of whether the client closes the connection.

Response Codes

  • Have we ensured that we send 2xx only for successful scenarios?
  • Have we ensured that we send 500 only for unexpected errors (excluding timeouts)?
  • Have we ensured that we send 504 for a timeout error?

Perf

  • Have we ensured that we perf-tested any new API we build, to establish a benchmark so we can set expectations and track against it going forward?
    Comment: As part of the perf test we should identify the parameters below, plus any other additional info as needed:
    - Max number of requests a pod can handle with the allocated resources
    - CPU usage
    - Memory usage
    - Response times
  • Have we ensured that we perf-tested existing APIs when there are changes around them, to make sure we didn't impact the existing benchmark results?

Feature Toggle

  • Have we ensured that new features have a feature toggle, so we can go back to the old state at any point until we are confident in the new changes? We may need toggles that enable the feature only for specific users or cities.

Resiliency

  • Have we ensured that we are resilient to failures of dependent components (databases, services)?

Metrics

  • Have we ensured that we are capturing the right metrics in Prometheus?
    Comment: Some metrics that could be captured, based on need or criticality:
    - Business metrics (example: number of payment gateway failures)
    - Business logic failures (example: number of rider prioritization requests that failed)
    - Any other errors that would help assess the impact in a critical flow.

Security

  • Have we ensured that the right authentication scheme is active at the gateway level?
    Comment: Applicable when adding any endpoint on Kong (gateway). One of the authentication plugins (jwt, key-auth, basic-auth) must be defined at either the route level or the service level. For gateway Kong endpoints, the ACL plugin must be added, and the same group must be present on the consumer definition.
  • Have we ensured that proper rate limiting is applied at the gateway level?
    Comment: Applicable when adding any endpoint on Kong (gateway). Team leads are the code owners, so one of them has to check this when approving the PR. The rate-limiting plugin needs to be enabled at the route/service level in the PR raised against kong-config.
  • Have we ensured that we are retrieving the userId from the JWT?
    Comment: If the request comes from Kong, the userId in the request body should be matched with the headers. For fetching any user-related information, we must read the userId only from the header populated by Kong (x-consumer-username).
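
As an illustration of the client-side timeout points above, here is a minimal sketch using Java's built-in java.net.http client (the URL is hypothetical; actual values should be derived from dependent component timeouts as noted in the checklist):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class TimeoutExample {
 public static void main(String[] args) throws Exception {
  HttpClient client = HttpClient.newBuilder()
    .connectTimeout(Duration.ofSeconds(2)) // fail fast if we cannot connect
    .build();

  HttpRequest request = HttpRequest.newBuilder()
    .uri(URI.create("https://internal-service/api/health")) // hypothetical URL
    .timeout(Duration.ofSeconds(5))        // overall per-request timeout
    .GET()
    .build();

  // Exceeding either timeout throws an exception instead of hanging the caller.
  HttpResponse<String> response =
    client.send(request, HttpResponse.BodyHandlers.ofString());
  System.out.println(response.statusCode());
 }
}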

 


This checklist would be populated in all Jira stories across projects as a predefined NFR template.




Thursday, October 17, 2019

Framework Evaluation & Selection

Framework Evaluation Criteria

  1. Supports automated integration testing for UI (real/headless browsers and devices), API, and performance, across platforms.

    • For integration tests, we don’t test the front end or back end individually; we need to verify data across the integration between the system, front end, and back end. So the framework needs to support all testing types, especially across browsers, devices, and systems.
  2. Supports CI integration with parallel execution.

    • The testing framework needs to integrate with our current CI/CD. Parallel execution helps reduce build time, so we can deploy quickly and have faster turnaround times for bugs and features.
  3. Supports the concept of executable documentation for BDD (behavior-driven development) and DDT (data-driven testing), with modular, maintainable, and understandable test suites.

    • BDD helps keep test cases and test suites easy to maintain and understand. It also keeps communication between business and development sharply focused through a common language.
    • One of the most important concepts for effective test automation is modularization: a sequence of test steps is written once and reused as often as required across test scripts, without rewriting the test every time.

BDD Testing Framework Selection


                               Gauge                 Cucumber
Language                       Markdown              Gherkin
IDE & plugin support           Yes                   Yes
Easy to integrate with CI/CD   Yes                   Yes
Easy to use, quick to learn    No                    No
Reusable, easy to maintain     Yes                   Yes
Parallel execution             Built-in              3rd-party plugin
Customizable reporting         Yes                   Yes
Price                          Open source & free    Open source & free

Winner - Gauge:

  • An open source, lightweight, cross-platform test automation tool with the ability to author test cases in the business language, plus a built-in parallel execution feature.
  • Supports BDD (behavior-driven development), CI (continuous integration), and report customization. A minimal Java step implementation is sketched below.
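
For a flavor of how Gauge binds Markdown specs to code, here is a minimal Java step implementation (assuming the gauge-java plugin; the step text and class are illustrative):

import com.thoughtworks.gauge.Step;

public class LoginSteps {

 // Gauge matches the Markdown step
 //   * Login as user "admin" with password "secret"
 // to this method and passes the quoted values as parameters.
 @Step("Login as user <username> with password <password>")
 public void login(String username, String password) {
  // drive the application under test here
  System.out.printf("Logging in as %s%n", username);
 }
}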

API Testing Framework Selection


                               REST Assured                               Postman
Supports BDD                   Yes                                        3rd party
Supports DDT                   Yes                                        Limited
Easy to integrate with CI/CD   Yes                                        Yes
Easy to use, quick to learn    No                                         Yes
Reusable, easy to maintain     Yes                                        No
Customizable reporting         Works with any custom/open source tool     No
Price                          Open source & free                         $8-21 per user/month for professional collaboration & advanced features

Winner - REST Assured:

  • An open source, Java-based domain-specific language (DSL) that allows writing powerful, readable, and maintainable automated tests for RESTful APIs.
  • Supports testing and validating REST services in BDD/Gherkin format; a small example follows below.
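
A small taste of the BDD-style DSL (the endpoint and expected values are illustrative):

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.testng.annotations.Test;

public class UserApiTest {

 @Test
 public void verifyUserEndpointReturnsExpectedName() {
  given()
    .baseUri("https://api.example.com") // illustrative base URI
  .when()
    .get("/users/42")
  .then()
    .statusCode(200)                    // verify HTTP status
    .body("name", equalTo("Jane"));     // verify a JSON field
 }
}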

API Performance Testing Framework Selection


                               Apache JMeter         LoadRunner
Supports DDT                   Yes                   Yes
Easy to integrate with CI/CD   Yes                   Yes
Easy to use, quick to learn    Yes                   No
Cross-platform                 Yes                   Windows, Linux
Reusable, easy to maintain     Yes                   No
Customizable reporting         Yes                   Yes
Price                          Open source & free    Free for the first 50 virtual users

Winner - Apache JMeter:

  • An open source performance test runner and management framework that can test the performance of both static and dynamic resources.
  • Supports load-testing functional behavior and measuring performance. It can simulate a heavy load on a server, a group of servers, a network, or an object, to test its strength or to analyze overall performance under different load types. A typical CI invocation is shown below.
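
For CI integration, JMeter's non-GUI mode is typically invoked from the pipeline. A representative command (the test plan and output names are illustrative):

jmeter -n -t checkout_load_test.jmx -l results.jtl -e -o report/

Here -n runs headless, -t names the test plan, -l writes the results log, and -e -o generate the HTML dashboard for the CI job to archive.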

Test Runner & Test Suite Management Framework Selection


                                                              TestNG                JUnit
Supports DDT                                                  Yes                   Yes
Easy to integrate with CI/CD                                  Yes                   Yes
Easy to use, quick to learn                                   No                    No
Reusable, easy to maintain                                    Yes                   Yes
Parallel execution                                            Yes                   Yes
Price                                                         Open source & free    Open source & free
Annotation support                                            Yes                   Limited
Suite tests                                                   Yes                   Yes
Ignoring tests                                                Yes                   Yes
Exception tests                                               Yes                   Yes
Timeouts                                                      Yes                   Yes
Parameterized tests                                           Yes                   Yes
Dependency tests                                              Yes                   No
Executes setup/teardown before & after all tests in a suite   Yes                   No
Executes setup/teardown before & after each test run          Yes                   No
Executes setup/teardown before the first & after the last
test method belonging to a group                              Yes                   No

Winner - TestNG:

  • An open source test runner framework that runs your tests in arbitrarily big thread pools with various policies available, along with flexible test configuration.
  • Supports DDT (data-driven testing) and test suite management. The dependency-test feature is sketched below.
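
The dependency-test row above is one of TestNG's differentiators over JUnit; a minimal illustration:

import org.testng.annotations.Test;

public class CheckoutFlowTest {

 @Test
 public void verifyLogin() {
  // ... log in and assert success ...
 }

 // Runs only if verifyLogin passes; it is skipped (not failed) otherwise,
 // which keeps reports clean when a prerequisite breaks.
 @Test(dependsOnMethods = "verifyLogin")
 public void verifyAddToCart() {
  // ... add an item and assert the cart state ...
 }
}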

Build Tool & Dependency Management Framework Selection


                               Apache Maven          Gradle
Easy to integrate with CI/CD   Yes                   Yes
Easy to use, quick to learn    Yes                   No
Build script language          XML                   Groovy
Reusable, easy to maintain     Yes                   Yes
Dependency management          Yes                   Yes
Dependency scopes              Built-in              Custom
IDE & plugin support           Many                  Few
Price                          Open source & free    Open source & free

Winner - Apache Maven:

  • The leading open source dependency management and build tool. It standardizes the software build process by describing a project's structure declaratively.
  • A software project management and comprehension tool built around the concept of a project object model (POM); Maven can manage a project's build, reporting, and documentation from this central piece of information. A minimal POM snippet is shown below.
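
As a flavor of the POM in practice, here is a minimal dependency declaration pulling TestNG into the test scope (the version shown is illustrative):

<dependency>
 <groupId>org.testng</groupId>
 <artifactId>testng</artifactId>
 <version>7.4.0</version> <!-- illustrative version -->
 <scope>test</scope>
</dependency>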

Monday, April 4, 2016

Passing data to DataProvider from Excel sheet

In this example we will see how to pass data to DataProviders by reading it from an Excel sheet. A DataProvider helps send multiple sets of data to a test method, but we need to make sure that the array returned by the DataProvider matches the test method's parameters.
We will write a simple program that validates a login screen with multiple usernames and passwords. The annotated method must return an Object[][] where each Object[] is assigned to the test method's parameters, one element as the username and the other as the password.
Step 1: First create a method to read excel data and return string array.
Step 2: Create before-class and after-class methods, which open the browser before the tests and close it when done.
Step 3: Create a data provider which actually gets the values by reading the excel.
Step 4: Create a Test which takes two parameters username and password.
Step 5: Add dataprovider name for @Test method to receive data from dataprovider.
package com.pack;

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

import org.testng.Assert;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;


import jxl.Sheet;
import jxl.Workbook;
import jxl.read.biff.BiffException;

public class ReadExcelDataProvider {
 public WebDriver driver;
 public WebDriverWait wait;
 String appURL = "https://www.linkedin.com/";
 
 //Locators
 private By byEmail = By.id("session_key-login");
 private By byPassword = By.id("session_password-login");
 private By bySubmit = By.id("signin");
 private By byError = By.id("global-alert-queue");
 
 @BeforeClass
 public void testSetup() {
  driver=new FirefoxDriver();
  driver.manage().window().maximize();
  wait = new WebDriverWait(driver, 5);
 }
 

 @Test(dataProvider="empLogin")
 public void VerifyInvalidLogin(String userName, String password) {
  driver.navigate().to(appURL);
  driver.findElement(byEmail).sendKeys(userName);
  driver.findElement(byPassword).sendKeys(password);
  //wait for element to be visible and perform click
  wait.until(ExpectedConditions.visibilityOfElementLocated(bySubmit));
  driver.findElement(bySubmit).click();
  
  //Check for error message
  wait.until(ExpectedConditions.presenceOfElementLocated(byError));
  String actualErrorDisplayed = driver.findElement(byError).getText();
  String requiredErrorMessage = "Please correct the marked field(s) below.";
  Assert.assertEquals(actualErrorDisplayed, requiredErrorMessage); // TestNG asserts (actual, expected)
  
 }
 
 @DataProvider(name="empLogin")
 public Object[][] loginData() {
  Object[][] arrayObject = getExcelData("D:/sampledoc.xls","Sheet1");
  return arrayObject;
 }

 /**
  * Reads the given sheet of an Excel workbook into a 2D string array,
  * skipping the header row.
  *
  * @param fileName path to the .xls workbook
  * @param sheetName name of the sheet to read
  * @return cell contents as [row][column], excluding the header row
  */
 public String[][] getExcelData(String fileName, String sheetName) {
  String[][] arrayExcelData = null;
  try {
   FileInputStream fs = new FileInputStream(fileName);
   Workbook wb = Workbook.getWorkbook(fs);
   Sheet sh = wb.getSheet(sheetName);

   int totalNoOfCols = sh.getColumns();
   int totalNoOfRows = sh.getRows();
   
   arrayExcelData = new String[totalNoOfRows-1][totalNoOfCols];
   
   // Start at row 1 to skip the header row (row 0 holds column names).
   for (int i = 1; i < totalNoOfRows; i++) {

    for (int j=0; j < totalNoOfCols; j++) {
     arrayExcelData[i-1][j] = sh.getCell(j, i).getContents();
    }

   }
  } catch (FileNotFoundException e) {
   e.printStackTrace();
  } catch (IOException e) {
   e.printStackTrace();
  } catch (BiffException e) {
   e.printStackTrace();
  }
  return arrayExcelData;
 }

 @AfterClass
 public void tearDown() {
  driver.quit();
 }
}
After clicking the login button, we use WebDriverWait to check for the error message and validate it.
The output should look like below:
[TestNG] Running:
  C:\Users\easy\AppData\Local\Temp\testng-eclipse-583753747\testng-customsuite.xml

PASSED: VerifyInvalidLogin("testuser1", "testpassword1")
PASSED: VerifyInvalidLogin("testuser2", "testpassword2")
PASSED: VerifyInvalidLogin("testuser3", "testpassword3")
PASSED: VerifyInvalidLogin("testuser4", "testpassword4")
PASSED: VerifyInvalidLogin("testuser5", "testpassword5")

===============================================
    Default test
    Tests run: 5, Failures: 0, Skips: 0
===============================================


===============================================
Default suite
Total tests run: 5, Failures: 0, Skips: 0
===============================================

DataProvider in TestNG

A DataProvider marks a method as supplying data for a test method. The annotated method must return an Object[][] where each Object[] can be assigned to the parameter list of the test method.
The @Test method that wants to receive data from this DataProvider needs to use a dataProvider name equal to the name of this data provider.
If a name is not supplied, the data provider's name automatically defaults to the name of the method.
In the example below we pass the data from the getData() method to the data provider. We send 3 rows and 2 columns, i.e., three different pairs of usernames and passwords.
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataProviderExample{
 
 //This test method declares that its data should be supplied by the DataProvider below.
 // "getData" is the method supplying the data; with no name attribute on
 // @DataProvider, the provider's name defaults to the method name.
 // The number of columns should match the number of input parameters.
 @Test(dataProvider="getData")
 public void setData(String username, String password)
 {
  System.out.println("you have provided username as::"+username);
  System.out.println("you have provided password as::"+password);
 }

 @DataProvider
 public Object[][] getData()
 {
 //Rows - Number of times your test has to be repeated.
 //Columns - Number of parameters in test data.
 Object[][] data = new Object[3][2];

 // 1st row
 data[0][0] ="sampleuser1";
 data[0][1] = "abcdef";

 // 2nd row
 data[1][0] ="testuser2";
 data[1][1] = "zxcvb";
 
 // 3rd row
 data[2][0] ="guestuser3";
 data[2][1] = "pass123";

 return data;
 }
}
When we execute the above example, we will get the output below.
Each data set passed by the data provider is executed as a separate test invocation. As we passed three sets of data to the data provider, it displays the result as below:
Default suite
Total tests run: 3, Failures: 0, Skips: 0
Output of the Above Program
you have provided username as::sampleuser1
you have provided password as::abcdef
you have provided username as::testuser2
you have provided password as::zxcvb
you have provided username as::guestuser3
you have provided password as::pass123
PASSED: setData("sampleuser1", "abcdef")
PASSED: setData("testuser2", "zxcvb")
PASSED: setData("guestuser3", "pass123")
