Vetted Talent

Aaradhya Jain

Seasoned QA Automation Engineer with 3.9 years of expertise in web and mobile automation. Proficient in Selenium, Appium, and Cucumber, I've consistently delivered tangible results, reducing testing times by over 30% through the creation of efficient automation scripts. A collaborative team player committed to staying abreast of the latest QA automation trends for continuous improvement. Currently serving notice period.

  • Role

    Automation Test Engineer

  • Years of Experience

    3.9 years

Skillsets

  • Selenium - 3.9 Years
  • Selenium Grid
  • Appium - 3.9 Years
  • Java - 3.9 Years
  • Python - 4 Years
  • JavaScript - 2 Years
  • AI - 2 Years
  • TestNG - 3.9 Years
  • Cucumber - 3.7 Years
  • Maven
  • Jenkins
  • Git - 3.9 Years
  • Jira - 4 Years
  • Postman
  • Manual Testing - 4 Years
  • Automation Testing - 4 Years
  • API Testing - 3.7 Years
  • SDET - 4 Years

Vetted For

12 Skills
  • Mobile Automation Tester - AI Screening: 71%
  • Skills assessed: Appium, Selenium, Java, TestNG, Git, mobile automation, Gherkin, Jenkins, Maven, Ant, BrowserStack, Rummy
  • Score: 64/90

Professional Summary

3.9 Years
  • Oct 2022 - Present · 3 yr

    Key Contributor

    Digital Platform
  • Mar 2021 - Present · 4 yr 7 months

    Associate Consultant

    Capgemini
  • QA Automation Tester

    Digital Platform (Web-Based Testing Platform)

Applications & Tools Known

  • Selenium
  • Appium
  • Mobile Automation
  • Java
  • TestNG
  • Git
  • Jira
  • Selenium Grid
  • Manual Testing
  • QA
  • Jenkins
  • Cucumber
  • Maven
  • Python
  • Postman

Work History

3.9 Years

Key Contributor

Digital Platform
Oct 2022 - Present · 3 yr

    As a key contributor to the Digital Platform project, I played a pivotal role in developing a cutting-edge web-based testing platform. This platform empowered developers and testers to conduct seamless cross-browser testing of websites and mobile applications across on-demand browsers and real mobile devices.

    Key contributions:
  • Automation framework development: implemented and configured Selenium Grid and Appium for efficient testing on both mobile devices and browsers, optimizing test execution across diverse environments.
  • Automation scripting: authored highly efficient automation scripts tailored for both mobile and browser-based testing, significantly reducing testing times and enhancing overall test coverage.
  • API testing: used Postman to validate and ensure robust backend functionality and integration.
  • Integration and continuous testing: integrated automated tests into the CI/CD pipeline, ensuring continuous testing and faster feedback loops for quicker detection and resolution of issues.
  • QA testing and feedback: conducted thorough exploratory testing to identify potential issues and provided feedback on the software's usability and user experience, enabling a holistic approach to quality assurance.

Associate Consultant

Capgemini
Mar 2021 - Present · 4 yr 7 months

  • Spearheaded the development and delivery of Digital Platform solutions for a banking client, playing a key role in ensuring project success.
  • Thrived in diverse work environments, including client locations, adapting seamlessly to varying project requirements.
  • Participated actively in Agile sprints, contributing to effective sprint planning and the execution of software releases.
  • Engaged in daily stand-up meetings and sprint planning sessions, fostering efficient communication and collaboration within the team.
  • Collaborated with cross-functional teams to guarantee seamless project delivery, emphasizing effective communication and teamwork.
  • Mentored team members on DevOps methodologies, tools, and best practices, cultivating a culture of continuous improvement and knowledge sharing.

QA Automation Tester

Digital Platform (Web-Based Testing Platform)
  • Automation framework development: implemented and configured Selenium Grid and Appium for efficient testing on both mobile devices and browsers, optimizing test execution across diverse environments.
  • Automation scripting: authored efficient automation scripts for mobile and browser-based testing, reducing testing times and improving overall test coverage.
  • API testing: used Postman to validate and ensure robust backend functionality and integration.
  • Integration and continuous testing: integrated automated tests into the CI/CD pipeline for continuous testing and faster feedback loops.
  • QA testing and feedback: conducted exploratory testing and provided feedback on usability and user experience, enabling a holistic approach to quality assurance.

Achievements

  • Capgemini Star Performer (2022)
  • HSBC Recognition (2023)
  • People Champion Award (2023)

Major Projects

3 Projects

Digital Platform (Web-Based Testing Platform)

Oct 2022 - Present · 3 yr

    As a key contributor to the Digital Platform project, I played a pivotal role in developing a cutting-edge web-based testing platform. This platform empowered developers and testers to conduct seamless cross-browser testing of websites and mobile applications across on-demand browsers and real mobile devices.

    Key contributions:
  • Automation framework development: implemented and configured Selenium Grid and Appium for efficient testing on both mobile devices and browsers, optimizing test execution across diverse environments.
  • Automation scripting: authored highly efficient automation scripts tailored for both mobile and browser-based testing, significantly reducing testing times and enhancing overall test coverage.
  • API testing: used Postman to validate and ensure robust backend functionality and integration.
  • Integration and continuous testing: integrated automated tests into the CI/CD pipeline, ensuring continuous testing and faster feedback loops.
  • QA testing and feedback: conducted exploratory testing and provided feedback on usability and user experience, enabling a holistic approach to quality assurance.

Digital Platform

    As a key contributor to the "Digital Platform" project, I played a pivotal role in developing a cutting-edge web-based testing platform. This platform empowered developers and testers to conduct seamless cross-browser testing of websites and mobile applications across on-demand browsers and real mobile devices.

Home Loan App

Capgemini
May 2021 - Oct 2021 · 5 months

    Home Loan App is a web application where customers can apply for a home loan without visiting a bank branch.

    My role:
  • Wrote clean, efficient Java APIs using Spring Boot.
  • Developed front-end code in Angular.

Education

  • Bachelor of Technology (Computer Science and Engineering)

    Teerthanker Mahaveer University Moradabad (2020)

AI-interview Questions & Answers

I have over 3.9 years of experience working in QA automation, currently as an Associate Consultant at Capgemini. My expertise spans mobile and web automation, primarily using tools like Selenium, Appium, Cucumber, and TestNG. I have worked with Java, Python, and other languages as per project requirements. Apart from that, I have worked on API automation, primarily using Postman. In my current role I have written many automation scripts, created test cases and test plans for different features, and executed them on both automated and manual devices as required. I completed my B.Tech in Computer Science from Teerthanker Mahaveer University, Moradabad, and started my professional journey at Capgemini right after graduation.

As for my project background: I have worked for a banking client on a project named Digital Platform. It is a cloud-based web application that provides manual and automated testing capability for browsers and mobile devices, similar to well-known products such as BrowserStack. Its main advantage is that it sits inside the banking network, so traffic does not have to cross the network boundary to reach an outside service. It is an in-house solution providing mobile automation testing, mobile manual testing, browser automation and manual browser testing, plus related capabilities such as accessibility testing, network throttling, and performance checks. My role on this project is primarily QA automation: I have written many automation scripts for the complete web UI available to users, developed automation scripts for different banking mobile applications, and written scripts to support some features of the platform itself.

In my current project we have adopted a data-driven approach in our recent frameworks. The main advantage is that the complete data for the different fields is kept in a separate document, either an Excel file or a JSON file, and the automation scripts fetch the data from there. You do not need to go through the code to change a particular value; you just provide the key and modify the value in the data file, which lives in a separate folder. This approach is very useful when working with multiple kinds of test cases, test data, and situations, and it is better than a keyword-driven approach, where you are limited to one value at a time or must maintain the whole structure in a single file. It is also helpful for parallel testing or for running a particular test case with different users. A further advantage is the data-provider functionality available in the automation framework: it wraps each test data set or test user in a separate object, whether a map, a dictionary, or a list, so we can access the data as needed and reuse it across different test cases, or run the same test case with multiple test users.
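The data-driven idea described above (test data externalized to a file, looked up by key, and fed to the same test once per data set) can be sketched in Python; the data values and the login function are hypothetical stand-ins, not taken from the speaker's framework:

```python
import json

# Hypothetical externalized test data; in practice this would live in a
# separate JSON file or Excel sheet maintained outside the code.
TEST_DATA = json.loads("""
{
  "valid_user":  {"username": "admin",  "password": "secret1"},
  "locked_user": {"username": "locked", "password": "secret2"}
}
""")

def login(username, password):
    """Placeholder for the real UI login step."""
    return f"logged in as {username}"

def run_data_driven(test_data):
    """Run the same test once per data set, like a TestNG @DataProvider."""
    results = {}
    for key, row in test_data.items():
        results[key] = login(row["username"], row["password"])
    return results

print(run_data_driven(TEST_DATA))
```

Changing a credential then means editing one value in the data file, with no change to the test logic.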

One of the most common techniques for both iOS and Android in Appium is to attach the auto-accept-alerts capability in the automation script, which automatically accepts pop-ups or alerts while the automation is running. For push-notification prompts on the device side, the better strategy is to create generic locators that are not limited to one particular notification: whether the dialog has an Accept button, an OK button, some other positive button we need to accept, or a Decline option, we use a generic locator to click the appropriate button, handling these pop-ups on both Android and iOS. This is helpful in many different situations and is not limited to a particular device or vendor; it works across vendors, so we handle each vendor's notification pop-ups this way.
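As a sketch of the two techniques above: `autoAcceptAlerts` (iOS) and `autoGrantPermissions` (Android) are real Appium capabilities, while the generic XPath and the pop-up handler below are illustrative assumptions, not the speaker's actual framework code:

```python
# Capabilities that pre-approve system alerts so scripts are not
# interrupted by permission pop-ups.
ios_caps = {
    "platformName": "iOS",
    "automationName": "XCUITest",
    "autoAcceptAlerts": True,     # iOS-side auto-accept
}
android_caps = {
    "platformName": "Android",
    "automationName": "UiAutomator2",
    "autoGrantPermissions": True, # Android-side auto-grant
}

# Generic locator matching any positive button on a notification pop-up,
# regardless of the exact dialog wording or device vendor.
GENERIC_POSITIVE_BUTTON = (
    "//*[@text='Accept' or @text='OK' or @text='Allow'"
    " or @label='Accept' or @label='OK' or @label='Allow']"
)

def dismiss_popup(find_elements):
    """Click the first positive button found, if any pop-up is present.
    `find_elements` stands in for the driver's element lookup."""
    buttons = find_elements(GENERIC_POSITIVE_BUTTON)
    if buttons:
        buttons[0]()  # invoke the click callable
        return True
    return False
```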

How do we balance the trade-offs between test execution speed and reliability when writing mobile automation scripts? When executing automation on mobile devices, to make the scripts both more reliable and faster we use explicit waits with different conditions, for example visibility of an element, presence of an element, or the element being clickable. Instead of a static wait, we provide a dynamic, condition-based wait that pauses until a particular condition is fulfilled and then continues executing. We also set a timeout: for example, if we give a condition 20 seconds and it is not fulfilled within that time, the step fails. This keeps the automation scripts reliable. Beyond that, we optimize our test suites by criticality: we tag which tests are required for sanity and which for regression, and run only the relevant set for a given deployment, change, or release. We also use the most reliable and fastest locator available: we try the accessibility ID or resource ID first, fall back to XPath if no accessibility ID is available, and only then consider other locators. Where possible we run parallel executions on different devices through providers such as an Appium grid, which makes execution faster. Another optimization is to install the application once and relaunch it for the different activities instead of reinstalling it for every test. Finally, we manage logging and debugging properly: if something goes wrong, a screenshot is captured at the failing step and handled in the automation suite, which helps keep execution both fast and reliable.
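The explicit-wait behaviour described (poll a condition, succeed as soon as it holds, fail the step after a timeout such as 20 seconds) can be sketched in plain Python; Selenium/Appium's WebDriverWait works along the same lines, but this helper is a stand-alone illustration, not the real API:

```python
import time

class WaitTimeout(Exception):
    """Raised when the condition is not met within the timeout."""

def wait_until(condition, timeout=20.0, poll=0.5,
               clock=time.monotonic, sleep=time.sleep):
    """Explicit wait: poll `condition` until it returns a truthy value,
    or raise WaitTimeout after `timeout` seconds."""
    deadline = clock() + timeout
    while True:
        value = condition()
        if value:
            return value  # condition met: the test step continues
        if clock() >= deadline:
            raise WaitTimeout(f"condition not met within {timeout}s")
        sleep(poll)

# Example: wait until a (simulated) element becomes clickable.
state = {"clickable": False}
def element_clickable():
    return state["clickable"]
```

Compared with a fixed `sleep(20)`, this returns immediately once the condition holds, which is where the speed gain comes from.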

How would you integrate TestNG results into a larger CI/CD pipeline, such as with Jenkins? When using a CI/CD tool like Jenkins, we typically build with a framework such as Maven or Gradle. If we are using Maven with TestNG inside it, we use profiles to start the execution. After the execution completes, we fetch all the reports, either from the TestNG report files or from a separately generated report folder in the project structure. At the end of the Jenkins pipeline we trigger an email with the complete execution report: what passed, failed, or was skipped. We can also attach listeners that collect the report from a particular location, summarize it, and send it to the different mailing lists, along with the complete execution timings, or store it in a particular environment where it is required. With this structure, TestNG results are integrated into the CI/CD pipeline and you get up-to-date reporting and results for the framework.
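A minimal declarative Jenkinsfile stage for this flow might look like the following sketch; the Maven profile name `regression`, the mail recipient, and the report path are illustrative (Maven Surefire conventionally writes TestNG results under `target/surefire-reports/`):

```groovy
pipeline {
  agent any
  stages {
    stage('Run TestNG suite') {
      steps {
        // Maven profile selects which TestNG suite to execute
        sh 'mvn clean test -P regression'
      }
    }
  }
  post {
    always {
      // Keep the raw TestNG/Surefire reports with the build
      archiveArtifacts artifacts: 'target/surefire-reports/**'
      // Mail a summary of the run (pass/fail/skip) to the team
      mail to: 'qa-team@example.com',
           subject: "Automation run: ${currentBuild.currentResult}",
           body: "See ${env.BUILD_URL} for the full TestNG report."
    }
  }
}
```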

You have been provided with a class that is meant to follow the Page Object Model structure, with locators such as By.id for the username, password, and login button defined alongside a login() method. Please explain the likely issue you see with the login method and its implementation. In the class provided, we are writing a login test in the login method and using several locators: username, password, and the login button. The problem is that those locators are declared in the same file that contains all the test cases and test methods. When following a Page Object Model structure, the locators should live in a separate file, for example an interface or a dedicated page class, so the locators for each page are kept separately. For instance, we could create an interface named LoginPageElements to store all the required locators, implement it in the login page class, and use the locators in the different methods. The benefit is that if a locator changes in the future, you do not need to edit the test file or test methods; you only change the locator in the element interface, which is far more flexible with respect to locator changes. It does not affect the overall code, because locators and methods are kept separate and can be updated independently. That is the approach to use when following the Page Object structure.
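A minimal Python sketch of the separation being described: locators collected in one dedicated class (the analogue of the LoginPageElements interface above), actions in a page object, tests touching neither. The class names and the fake driver are illustrative:

```python
# Locators live in one dedicated class. If the UI changes, only this
# class is edited; no test method needs to change.
class LoginPageLocators:
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    LOGIN_BUTTON = ("id", "loginBtn")

class LoginPage:
    """Page object: exposes actions, not raw locators, to the tests."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, pwd):
        self.driver.type(LoginPageLocators.USERNAME, user)
        self.driver.type(LoginPageLocators.PASSWORD, pwd)
        self.driver.click(LoginPageLocators.LOGIN_BUTTON)

# A tiny fake driver so the sketch is runnable without Selenium.
class FakeDriver:
    def __init__(self):
        self.actions = []
    def type(self, locator, text):
        self.actions.append(("type", locator, text))
    def click(self, locator):
        self.actions.append(("click", locator))
```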

Examine this scenario written for a BDD framework; identify the error and explain why it is incorrect given the context of the language. The feature is user login, the scenario is valid user login: "When user enters admin and password123, Then the users would be redirected to the dashboard page." One thing that is clearly wrong is that we are hard-coding data like the password directly in the feature script. That is not correct; we should use a data-driven structure that holds the test data, and let the scenario pick the data from there. Beyond that, the literal values are provided unquoted, which is not a proper format in Gherkin; they should be wrapped in double quotation marks, both for consistency and to avoid parsing errors. The step keywords are also misused: the scenario jumps straight to When/Then, and the line "Then and the users would be redirected to the dashboard page" incorrectly combines "Then" and "And"; instead we should use a proper Given/When/Then structure. The corrected form would be: Given the user is on the login page, When the user enters "admin" and "password123", Then the user should be redirected to the dashboard page.
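A corrected version of the scenario along the lines described, with quoted values, a proper Given/When/Then flow, and the data moved into an Examples table (the exact step wording is illustrative):

```gherkin
Feature: User login

  Scenario Outline: Valid user login
    Given the user is on the login page
    When the user enters "<username>" and "<password>"
    Then the user should be redirected to the dashboard page

    Examples:
      | username | password    |
      | admin    | password123 |
```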

Explain your approach to refactoring a complex method that interacts with both Android and iOS elements using Appium, in an effort to reduce code duplication. In such cases, I first check whether the locators are similar for both operating systems, or whether we can use generic locators that work on both. Beyond that, we can abstract platform-specific behavior: the same method is exposed, but it resolves its behavior according to the driver's platform. We should also follow a Page Object Model structure, with page classes for the different screens, and a platform-agnostic utility layer where actions such as click and send-keys are wrapped so the same call is relevant to both iOS and Android. Some functionality does differ between platforms; for example, swipe or scroll gestures are not the same on Android and iOS. For those we create a single swipe method that implements both structures: when a method that depends on the OS is called, we inspect the session's capabilities to see whether it is an Android or iOS session and apply the appropriate gesture. Common actions can be shared across both platforms: on a login page, for instance, entering the username and password and clicking the login button can live in a single common file, provided the locators resolve on both platforms. If they do not, we can parameterize the automation run: we define some methods for Android and some for iOS, and when running the automation we provide a tag with the platform name; the test cases then pick the methods appropriate to that OS, so the test scripts themselves do not change between operating systems. Using these techniques, we achieve the reduced-code goal across Android and iOS.
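The platform-dispatch idea above (one logical element name and one public gesture method, with per-OS resolution) can be sketched like this in Python; the locator values and gesture strings are stand-ins for the real Appium calls, not actual framework code:

```python
# Per-element locator map: one logical name, one locator per platform.
LOCATORS = {
    "login_button": {
        "Android": ("id", "com.app:id/login"),
        "iOS": ("accessibility id", "loginButton"),
    },
}

def locator_for(element, platform):
    """Resolve a logical element name to the right locator for the OS."""
    return LOCATORS[element][platform]

def swipe(platform_name, direction):
    """Single swipe API; the implementation differs per OS, since swipe
    gestures are not identical on Android and iOS."""
    if platform_name == "Android":
        return f"UiScrollable scroll {direction}"  # stand-in for the Android gesture
    elif platform_name == "iOS":
        return f"mobile: swipe {direction}"        # stand-in for the iOS gesture
    raise ValueError(f"unsupported platform: {platform_name}")
```

Tests call `swipe(...)` and `locator_for(...)` without knowing which OS they run on; only the dispatch layer does.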

What strategies do you deploy to run automated mobile tests on cloud-based platforms such as BrowserStack or Sauce Labs? As I mentioned at the start of this conversation, in our case we use our own in-house platform, Digital Platform, which provides the same kind of capability, so we use solutions like BrowserStack less frequently. The main principle when working on any cloud-based platform is that the capabilities in the automation script should not be entangled with the rest of the scripts. We simply provide the driver capabilities to create a session on the different devices of the cloud platform. For example, to run a session on BrowserStack, only some capability changes are needed for BrowserStack; the rest of the automation script and test cases should not change per platform. If you already have automation scripts ready for different OS or device levels and want to run the same scripts on BrowserStack, you just need the capabilities of those devices, the hub URL, and the username and access key required to run your automation on that platform. You add those to your capabilities, and when creating the session you supply everything related to the device: the automation name, device name, platform name, platform version, and similar entries in your desired capabilities. Then you send a request to create a remote session, whether for an Appium driver, an iOS driver, or an Android driver, whichever applies to your device; once you have a driver, you can execute your automation script on any of these solutions. Only this configuration part changes when working with a cloud-based solution. Beyond that, such platforms offer other functionality we can utilize. We can run parallel executions on multiple devices at the same time: if there are multiple test cases that are not dependent on each other, we run them in parallel. The capabilities can also be loose: we can say we just want some Android devices, or some Samsung device without pinning a specific OS level, and we configure the framework accordingly in the testng.xml or another file to run on a parallel basis. There are also facilities for app upload: we upload our application to a particular API, it returns a token, we add that token to the automation script, and the run starts against that particular application. These are the kinds of strategies we use.
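As described, only the configuration layer changes per cloud provider. A Python sketch of assembling the desired capabilities and the remote hub URL; the credentials are placeholders, and `hub-cloud.browserstack.com/wd/hub` follows BrowserStack's documented hub endpoint shape:

```python
def build_caps(device_name, platform_name, platform_version,
               automation_name="UiAutomator2"):
    """Device-specific capabilities: the only part that changes when
    moving the same scripts onto a cloud grid."""
    return {
        "deviceName": device_name,
        "platformName": platform_name,
        "platformVersion": platform_version,
        "automationName": automation_name,
    }

def hub_url(user, access_key, host="hub-cloud.browserstack.com"):
    """Remote hub URL carrying the cloud account credentials."""
    return f"https://{user}:{access_key}@{host}/wd/hub"

caps = build_caps("Samsung Galaxy S22", "Android", "12.0")
url = hub_url("my_user", "my_key")  # placeholder credentials
# A remote Appium session would then be created with (url, caps);
# the test cases themselves stay unchanged across providers.
```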

How do you set up and use Jenkins for continuous test execution of your automation suite? When working with Jenkins for an automation suite, the main point is not to depend on a local machine for the automation run. For example, to run Appium automation we can use an Appium grid, BrowserStack, or a similar solution. To execute on Jenkins, the project should use a build tool such as Maven or Gradle, and be structured so that a single command determines which files run, which tags are picked up, and which execution starts; these things need to be specified in the project. While configuring the Jenkins pipeline, we provide all the required information: which Git location and branch it should pull the latest code from, which versions of Java or Python it should use, and the commands to execute the test scripts, and it starts executing them. Beyond this, we can schedule pipelines in Jenkins: trigger runs on a daily basis, on specific dates, or at weekends, and use Jenkins email functionality to send a mail if the pipeline fails or something goes wrong. Jenkins offers other facilities too: after execution completes, if you want to share the complete report with other users, or handle tasks that are not part of your framework but should run on the Jenkins side, you can write shell scripts for them; those scripts are triggered automatically once the pipeline completes, whether it passed, failed, or something went wrong, running as a last step regardless of the build result, and we use that for different purposes.
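The scheduling and failure-mail behaviour described might be expressed in a declarative Jenkinsfile like this sketch; the cron spec, repository URL, recipient, and script path are illustrative:

```groovy
pipeline {
  agent any
  triggers {
    // Scheduled pipeline: run the suite every night at 02:00
    cron('0 2 * * *')
  }
  stages {
    stage('Checkout and run') {
      steps {
        git branch: 'main', url: 'https://example.com/qa/automation.git'
        sh 'mvn clean test'
      }
    }
  }
  post {
    failure {
      // Alert the team only when the pipeline fails
      mail to: 'qa-team@example.com',
           subject: "Nightly automation FAILED: ${env.JOB_NAME}",
           body: "Check ${env.BUILD_URL}"
    }
    always {
      // Post-build shell step runs regardless of the build result,
      // e.g. to publish the report outside the framework
      sh './scripts/publish_report.sh || true'
    }
  }
}
```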