Craig Risi

Software Testing in an AI-driven world – Part 3 – Using AI to improve quality



AI systems themselves have many different applications that all need testing, but that doesn’t mean we should ignore using some of those same techniques to improve the quality of our own software. Thankfully, the wealth of data we tend to collect around our software and development process allows us to use artificial intelligence tools to improve the quality of our software and make our testing even more efficient.


But what about existing RPA tools?

Now the most obvious place to start is what is referred to as Robotic Process Automation. Robotic Process Automation is one of those buzzwords that gets thrown around constantly, especially by tools that claim to offer RPA as part of their services. The truth is, RPA is nothing new; we’ve been doing it for years through traditional automation scripts. The only difference is that what we currently automate covers things we already know, essentially automated checks, whereas RPA can potentially help us analyse things we don’t know.


There are a lot of tools out there claiming that they can use AI to create automated tests for your application. And while these tools offer plenty of potential when working with third-party systems or driving some form of regular high-level automation at a production level, they are not yet ready to be adopted into mainstream software development. These tools analyse the way a UI looks and works, or the functions contained in a piece of code, and then try to identify the right test cases. The biggest problem with RPA tools is that most of them only kick in once code has been designed and written, and not before, which is actually when you want to be finding your defects.


You want to rather build your software with testing in mind and find defects early in the design, and to do this you need careful up-front analysis, something RPA tools are not going to provide. It is also one of the reasons why I don’t believe AI is going to replace software testing or quality design just yet, as so much of software testing is focused on proper software design. Teams should be focusing on these aspects of software development and designing their software to meet the real needs of its customers, and this is not something that should be shipped off to AI systems, at least not yet.


That doesn’t mean, though, that we can’t use AI in other ways to help us with our software quality. The use cases below are ones that many companies should be looking into:


Understanding User Behaviour

By making use of ML systems, we can better understand how people behave on our software systems and use that not just to design better features based on this behaviour, but also to form the basis of some of our testing efforts and prioritise our testing better.
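As a rough illustration, here is a minimal sketch in Python, assuming a hypothetical analytics export (sessions.csv) with one row per user session and numeric columns counting feature usage, of how clustering could surface the most common usage patterns so testing effort can be weighted towards them:

```python
# Minimal sketch: cluster user sessions to find dominant usage patterns.
# Assumes a hypothetical analytics export with one row per session and
# numeric "feature_*" columns counting how often each feature was used.
import pandas as pd
from sklearn.cluster import KMeans

sessions = pd.read_csv("sessions.csv")  # hypothetical export
feature_cols = [c for c in sessions.columns if c.startswith("feature_")]

# Group sessions into a handful of behavioural clusters.
model = KMeans(n_clusters=5, n_init=10, random_state=42)
sessions["cluster"] = model.fit_predict(sessions[feature_cols])

# Larger clusters represent more common user journeys, so the scenarios
# they describe are good candidates for heavier (or earlier) test coverage.
for cluster_id, group in sessions.groupby("cluster"):
    share = len(group) / len(sessions)
    top_features = group[feature_cols].mean().nlargest(3)
    print(f"Cluster {cluster_id}: {share:.0%} of sessions, "
          f"dominant features: {list(top_features.index)}")
```

The output is simply a ranked view of behaviour patterns; the point is that test prioritisation can follow the data rather than guesswork.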


Automated Test Generation

By presenting a system with the relevant boundaries of an API or module of code, we can train it to identify the specific use cases and create the necessary tests for each endpoint, along with the different data needed for them. This is especially feasible for unit and API testing, where requirements are quite defined and finite, so AI systems can easily be used here. Also, as we change the underlying code or modify an API, the tests can be modified to cater for the changes (a small sketch of boundary-driven test generation follows the list below). Related approaches include:


  • Differential testing — comparing application versions overbuilds, classifying the differences, and learning from feedback on the classification.

  • Visual testing — leveraging image-based learning and screen comparisons to test the look and feel of an application.

  • Declarative testing — specifying the intent of a test in a natural or domain-specific language, and having the system figure out how to carry it out.
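As a minimal sketch of the boundary-driven idea, property-based tools such as Hypothesis can already generate test inputs automatically once an endpoint's boundaries are described. The endpoint URL, payload fields and expected status codes below are hypothetical; the tests would be run under pytest:

```python
# Minimal sketch: generate API test cases from declared input boundaries.
# The endpoint, payload fields and expected status codes are hypothetical.
import requests
from hypothesis import given, strategies as st

BASE_URL = "https://api.example.com"  # hypothetical service under test

@given(
    quantity=st.integers(min_value=1, max_value=100),   # declared boundary
    discount=st.floats(min_value=0.0, max_value=0.5),   # declared boundary
)
def test_create_order_within_boundaries(quantity, discount):
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"quantity": quantity, "discount": discount},
    )
    # Inputs inside the declared boundaries should always be accepted.
    assert response.status_code == 201

@given(quantity=st.integers(max_value=0))  # outside the lower boundary
def test_create_order_rejects_invalid_quantity(quantity):
    response = requests.post(f"{BASE_URL}/orders", json={"quantity": quantity})
    assert response.status_code == 400
```

The value here is that the tool explores the input space for you; as the boundaries change, regenerating the inputs is cheap.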


Self-Healing Automation

We might want to prioritise writing our tests before we write our code, and while I would recommend using a human test expert for this first part, there are definitely opportunities to use AI to help us with the future maintenance of our automated tests, especially at a unit test level, where it is easy to map the tests to parts of the code.


Auto-correcting element selection in tests when the code or UI changes allows for more consistent test execution, and failures are then more likely to be the result of genuine errors rather than poor test maintenance.
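A very simplified sketch of the principle, using Selenium in Python: the page and locators below are hypothetical, and a real self-healing tool would learn its fallback locators from previous successful runs rather than hard-coding them:

```python
# Minimal sketch: "self-healing" element lookup with fallback locators.
# A real tool would learn alternative locators from past runs; here the
# fallbacks are listed by hand purely for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, locators):
    """Try each (By, value) locator in turn, reporting when a fallback is used."""
    primary = locators[0]
    for by, value in locators:
        try:
            element = driver.find_element(by, value)
            if (by, value) != primary:
                print(f"Healed: {primary} -> {(by, value)}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page under test

# If the id changes in a new build, the test falls back to the name or CSS path
# instead of failing for maintenance reasons.
submit = find_with_healing(driver, [
    (By.ID, "submit-btn"),
    (By.NAME, "submit"),
    (By.CSS_SELECTOR, "form button[type='submit']"),
])
submit.click()
driver.quit()
```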


Static Analysis

We can easily utilise AI to test all the decision paths in our code, ensure code is designed for testability, and scan for things like security issues. All of this is vital to driving better code quality from the start and allowing testers to focus on the bigger problems, like improved design and exploratory testing.
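As a small, conventional sketch of the kind of signal this tooling builds on, Python's ast module can count the decision points in each function and flag the ones that will need the most test paths; the file name and threshold below are illustrative:

```python
# Minimal sketch: count decision points per function as a rough proxy for
# how many test paths it needs. File name and threshold are illustrative.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.Try,
                  ast.BoolOp, ast.IfExp, ast.comprehension)

def decision_count(func_node):
    return sum(isinstance(node, DECISION_NODES) for node in ast.walk(func_node))

with open("orders.py") as handle:  # hypothetical module under review
    tree = ast.parse(handle.read())

for node in ast.walk(tree):
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
        score = decision_count(node)
        if score > 5:  # illustrative testability threshold
            print(f"{node.name}: {score} decision points - consider splitting "
                  f"or adding targeted path tests")
```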


Defect Analytics

One of the most important tasks for any software development team is understanding why our software sometimes goes wrong and how we can prevent it in future. Through increased monitoring and access to data, we can build AI systems that help us identify these issues, leading to faster resolution times and better mitigation processes.
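As a minimal sketch of the starting point for this kind of analysis (the defect export and its column names are hypothetical), even simple aggregation of defect data shows where resolution effort and preventative testing should be focused before any ML is applied:

```python
# Minimal sketch: aggregate a defect export to see where failures cluster.
# The file and column names (component, root_cause, hours_to_resolve) are hypothetical.
import pandas as pd

defects = pd.read_csv("defects.csv")

summary = (defects
           .groupby(["component", "root_cause"])
           .agg(count=("root_cause", "size"),
                mean_resolution_hours=("hours_to_resolve", "mean"))
           .sort_values("count", ascending=False))

# The top rows show the components and root causes that hurt the most,
# which is where monitoring and preventative testing should go first.
print(summary.head(10))
```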


Predictability

All of this helps to remove the margin of error and therefore allows for increased predictability in our development process. We can also use data to identify all of our different tasks and provide more accurate estimates based on people’s actual performance on similar tasks, rather than the more variable estimation approaches we otherwise apply.
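A tiny sketch of the idea, assuming a hypothetical history of how long similar tasks actually took (in days): instead of a single gut-feel number, the estimate is drawn from the recorded distribution:

```python
# Minimal sketch: derive estimates from recorded durations of similar tasks
# instead of a single gut-feel number. The historical figures are hypothetical.
import statistics

similar_task_durations_days = [3, 5, 4, 6, 5, 8, 4, 5]  # hypothetical history

p50 = statistics.median(similar_task_durations_days)
p80 = statistics.quantiles(similar_task_durations_days, n=10)[7]  # ~80th percentile

print(f"Typical (P50) estimate: {p50} days")
print(f"Conservative (P80) estimate: {p80} days")
```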


AI and machine learning are our friends when it comes to software development. Utilising these tools to help understand and maintain our software systems is a massive gain we can get from the technology if used correctly, and something all companies should be looking at to further enhance their testing.
