Author: MAURICIO ARROYAVE PANESSO, QA LEAD

A lot has been said about how Artificial Intelligence tools like ChatGPT can help with code development, but what about testing? QA has been one of Yuxi Global’s core areas of expertise for the past 15 years, and here are some tips on how to best use these chatbots (we used ChatGPT) to increase your manual testing efficiency.

First, keep in mind that “searching” with a chatbot is different from searching with a regular search engine. Here it is better to establish a conversation, because every piece of information you type into the tool helps it return better results. So don’t run with the first answer you get; start an exchange and go deeper into the subject to find the most valuable information.

A good way to start is by defining a scope of work. Ask for test cases, test plans, scenarios, or test strategies based on a requirement or user story. You can paste the user story text into the prompt and ask something like this:

I need test cases for this user story: As a customer, I want to know about upcoming events so I can plan my trip.
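
To give a sense of what to expect, the first answer to that prompt tends to be a short list along these lines (a hand-written illustration, not actual tool output):

  • Verify that a list of upcoming events is displayed to the customer.
  • Verify that each upcoming event shows its date, time, and location.
  • Verify the behavior when there are no upcoming events scheduled.
  • Verify that past events do not appear in the upcoming events list.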

This question will return a first set of test cases for the functionality you want to test, but remember, we must dig deeper after that first interaction. So it is important to ask a follow-up question like this one:

Generate more test cases based on the previous scenario or provide comprehensive test cases for the user story.

This keeps the answers coming, sometimes repeating information from the earlier ones, but there will always be a couple of new test cases that were not part of the first interaction.

After that, we can ask about testing scope or testing types: I would like a testing scope for the previous user story. This gives results such as functional, performance, or usability testing. Asking for more examples related to each type of test can also deliver great results.

Usually, the chatbot returns the basic-course test cases, but another way to enhance our testing is to ask about alternative-course scenarios: things like field validations, character limits, and so on.
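
As a rough illustration, if the upcoming events page included a search box or a date filter (an assumption on my part, not something stated in the original user story), alternative-course cases might cover:

  • Verify that an invalid date range in the filter shows a clear validation message.
  • Verify that the search field enforces its maximum character limit.
  • Verify the behavior when a search or filter returns no matching events.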

Getting the scope for the scenarios is the first part, but what about steps or a test case template? Well, the AI can also help with that. Ask something along the lines of: I need the steps for the basic-course scenarios and a template to run them. Usually, the result is a table with steps and a general expected result for each scenario. It is key to go over these answers and improve the expected results to cover more validations.
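
For reference, that table usually looks something like this for the upcoming events story (again, a hand-written sketch rather than actual output):

  Step 1: Navigate to the upcoming events page as a customer. Expected result: the page loads and an upcoming events section is visible.
  Step 2: Review the list of events. Expected result: only future events are listed, each with a date, time, and location.
  Step 3: Select an event from the list. Expected result: the event details are displayed so the customer can plan a trip.

Notice how generic the expected results are; this is exactly where the QA analyst adds value by tightening them into concrete validations.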

Now, if you need something like BDD, the tool is also capable of generating Gherkin scenarios based on user stories, scope, and defined acceptance criteria. For a request like I would like Gherkin scenarios for the upcoming events user story, you get something like this:
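
(The following is a hand-written sketch of the kind of Gherkin the tool returns, not its exact output.)

  Feature: Upcoming events
    As a customer, I want to know about upcoming events so I can plan my trip.

    Scenario: Customer views the list of upcoming events
      Given the customer is on the events page
      When the customer opens the upcoming events section
      Then a list of upcoming events is displayed
      And each event shows its date, time, and location

    Scenario: No upcoming events are scheduled
      Given there are no upcoming events
      When the customer opens the upcoming events section
      Then a message indicating there are no upcoming events is displayed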

As you can see, AI can be very useful for speeding up testing activities. But there are a few things to keep in mind:

  • Mimic the scenarios instead of pasting the real user stories or requirements from your project: you don’t know how your data is handled by the tool.
  • Don’t just copy and paste the results: most of the time the scenarios need a few adjustments, and there are often more test cases or validations you can add or ask for.
  • Understand the results you get: basic testing knowledge is important for picking out the useful information because, as mentioned, it needs adjustments most of the time.

In the end, AI-powered tools like ChatGPT have the potential to speed up and improve our manual QA work by providing new information and testing scenarios. From a testing perspective, it is a great companion for starting analysis stages, getting ideas, expanding testing scenarios, and even delegating repetitive tasks so that we, as QA Analysts, can focus on more complex activities.