Craig Risi

Good test design hasn't gone away


There is a big focus in the software development and testing world on automation and the growth of technical skills. After all, testing has evolved considerably, and it is incredibly important for testers to understand the technical aspects of the software they are working on and to be able to write code to test it quickly. However, that does not remove the need for developers and testers to ensure that the right things are tested and that correct test design is adhered to along the way. Automation is simply the automatic replication of a test, and automating a poorly designed test simply replicates its design flaw, leaving you with not just the same inherent flaw, but arguably even more frustration as more effort is spent on writing and maintaining it.


The following principles of good test design should be adhered to if you want your tests and automation to reap the best returns for your testing effort.


Specific

Know what you’re testing. Tests should focus on testing one specific thing, and it should be easily identifiable what causes a failure and what doesn’t. No one wants to navigate log messages to uncover why a test failed. If a test fails, it should be clear what went wrong, and if it fails for any other reason, that should be logged as an execution failure and not a test failure.


With the push to automate, we can often try to get our tests to do too much for the sake of streamlining, only to find that debugging and maintaining these tests long term completely erodes any efficiency gains from faster execution. Whether it be unit, component or integration testing, keeping things simple is always the best option.
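To illustrate (a minimal sketch in Python with pytest; apply_discount is a hypothetical function invented for the example), each test asserts one behaviour, so a failure points to exactly one cause:

```python
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    if percent < 0:
        raise ValueError("discount percentage cannot be negative")
    return price * (1 - percent / 100)


def test_apply_discount_reduces_price_by_percentage():
    # One behaviour, one assertion: a failure here has one possible cause.
    assert apply_discount(price=100.00, percent=10) == pytest.approx(90.00)


def test_apply_discount_rejects_negative_percentage():
    # A separate test for a separate behaviour, rather than bundling
    # several checks into one test whose failure would be ambiguous.
    with pytest.raises(ValueError):
        apply_discount(price=100.00, percent=-5)
```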


Clear

Tests should be well commented and easy to follow. No one should need to read through the details or code of a test to figure out what is going on. Having a clear name which explains what the test is doing, along with the relevant comments, goes a long way to improving the maintainability of your tests. This is especially useful when tests cover parts of functionality that rarely change and are therefore unfamiliar to the team when they do need to be updated. No one has the time to reverse-engineer what a test is trying to do; it should be clear how it works, what it is trying to achieve and, importantly for a developer, exactly what criteria they need to code to.
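As a sketch of what this looks like in practice (Python with pytest again; is_voucher_expired is a hypothetical function), the test name and docstring alone should tell you what is being verified:

```python
def is_voucher_expired(expiry: str, today: str) -> bool:
    """Hypothetical function under test."""
    return expiry < today  # ISO dates compare correctly as strings


def test_expired_voucher_is_flagged_as_expired():
    """A voucher whose expiry date is before today must be reported as
    expired so checkout can reject it. The name and this docstring mean
    no one has to read the implementation to know what is verified."""
    assert is_voucher_expired(expiry="2020-01-01", today="2021-06-15")
```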


Traceable

Know why the test exists in the first place. Having tests trace easily back to a requirement, defect or piece of code all forms part of making it that much easier to understand a failure and what needs to be done to fix the test or code. I see too many teams try to shortcut this step because it feels like extra admin effort, yet honestly, the extra few minutes you spend ensuring clear traceability makes the reduced maintenance effort completely worth it.
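One lightweight way to achieve this, sketched here with pytest's custom markers (the requirement and defect IDs are invented for illustration), is to tag each test with the item it traces back to:

```python
import pytest

# Registering the markers in pytest.ini / pyproject.toml avoids warnings:
#   markers = ["requirement(id): link a test to a requirement",
#              "defect(id): link a test to a defect"]


def shipping_cost(order_total: float) -> float:
    """Hypothetical function under test."""
    return 0.00 if order_total >= 50.00 else 4.99


@pytest.mark.requirement("REQ-142")
def test_free_shipping_applies_over_threshold():
    assert shipping_cost(order_total=75.00) == 0.00


@pytest.mark.defect("BUG-981")
def test_shipping_charged_just_below_threshold():
    # Regression test for BUG-981: the boundary value shipped free.
    assert shipping_cost(order_total=49.99) == 4.99
```

When one of these fails, the marker tells you immediately which requirement or defect to look at, rather than leaving the team to hunt for context.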


Self-Contained

Remove dependencies on other tests. As with the first point on specificity, not only should tests assert one clear thing, they shouldn’t rely on the successful completion of other tests. While there will no doubt be dependencies and other aspects that can cause tests to fail, these should be isolated as much as possible and should never confuse the test execution or results.
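A sketch of the idea using a pytest fixture (FakeUserStore is a hypothetical stand-in for shared state such as a database): each test builds its own state rather than assuming an earlier test already created it:

```python
import pytest


class FakeUserStore:
    """Hypothetical in-memory stand-in for a shared database."""
    def __init__(self):
        self.users = {}

    def add(self, name):
        self.users[name] = {"active": True}


@pytest.fixture
def store():
    # A fresh store per test: no test depends on another having run first.
    return FakeUserStore()


def test_new_user_is_active(store):
    store.add("alice")
    assert store.users["alice"]["active"]


def test_store_starts_empty(store):
    # Passes regardless of execution order, because the fixture
    # rebuilds the state for every test.
    assert store.users == {}
```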


Segregated

Test and config are separated. This is a commonly known concept of object-oriented design that most should be familiar with, but it remains an important tenet of test design to point out regardless. The test and the data required to execute it should be kept separate for maintenance reasons. While the code that drives a test seldom changes, the underlying data behind it needs to change to stay relevant, so it should be easy to amend the data without touching the code of the test itself, which only adds risk to the quality of the test and the project.
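For example (a hypothetical sketch; the VAT rates and gross_price function are illustrative only), the data can sit in its own table, or in practice an external JSON or CSV file, while the test logic stays untouched:

```python
import pytest

# The data lives apart from the test logic; in practice this table could
# sit in a JSON or CSV file so it can be amended without touching code.
VAT_CASES = [
    # (net_price, country_code, expected_gross) -- illustrative figures
    (100.00, "DE", 119.00),
    (100.00, "GB", 120.00),
]


def gross_price(net: float, country: str) -> float:
    """Hypothetical function under test."""
    rates = {"DE": 0.19, "GB": 0.20}
    return round(net * (1 + rates[country]), 2)


@pytest.mark.parametrize("net,country,expected", VAT_CASES)
def test_gross_price_applies_country_vat(net, country, expected):
    assert gross_price(net, country) == expected
```

Adding a new country then means adding one row of data, not writing or modifying test code.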


Prioritized

Knowing importance helps you to scale. Not all tests are equal, and as such it should be easy for teams to know which failures they can fix later and which require an immediate response. This doesn’t just make support and planning code fixes a whole lot easier; it is especially useful as your test suites grow in scale. Having thousands of tests execute on every commit simultaneously is a waste of time, and it makes more sense for certain critical tests to always execute first so that you can keep your pipelines fast and responsive, while ensuring everything else still gets executed during a sprint.


No one wants to identify a showstopper defect on the last day of a sprint; it should be uncovered the moment the code is committed, by ensuring all the high-priority tests that could reveal a showstopper are written and prioritised to execute first. Priorities can also change quite frequently based on the code being tested, so pipelines should be designed in a way where priority can be easily managed and updated, ensuring the right tests are prioritised for the features under test.
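A simple way to sketch this with pytest (the marker name and the functions under test are assumptions for illustration) is to mark critical tests and select only those in the fast per-commit pipeline stage:

```python
import pytest

# Register in pytest.ini / pyproject.toml:
#   markers = ["critical: must pass on every commit"]


def login(user, password):
    """Hypothetical functions under test."""
    return password == "correct-password"


def render_profile(user):
    return f"{user}: last login 2021-06-15"


@pytest.mark.critical
def test_user_can_log_in():
    # A showstopper if it fails: runs in the fast per-commit stage via
    #   pytest -m critical
    assert login("alice", "correct-password") is True


def test_profile_page_shows_last_login_date():
    # Lower priority: executed in the fuller nightly or sprint run via
    #   pytest -m "not critical"
    assert "last login" in render_profile("alice")
```

Because priority is just a marker, re-prioritising a test for the features under development is a one-line change rather than a pipeline rewrite.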


Predictable

If your test keeps on failing, what hope does your code have? When a test fails, it should never be up for debate whether the problem is the environment, the code or the test itself. Tests should be built to provide consistent results and not rely on environmental factors that affect their outcome. If a test does fail for any reason other than a direct failure, it should be described as such and not marked as a test failure. This does require putting measures in place within your tests and the different runners to distinguish between these cases, but it is effort that is well worth it and will save an immense amount of time in debugging.
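As a sketch of one such measure (assuming pytest and a hypothetical network-dependent service on localhost:8080), raising from a fixture when the environment is unavailable makes pytest report a setup ERROR rather than a test FAILURE, keeping "the environment is broken" distinct from "the code is broken":

```python
import socket

import pytest

SERVICE = ("localhost", 8080)  # hypothetical dependency of the tests


@pytest.fixture
def service_connection():
    # If the environment is down, raising here makes pytest report an
    # ERROR (a setup problem), not a FAILED test.
    try:
        conn = socket.create_connection(SERVICE, timeout=1)
    except OSError as exc:
        raise RuntimeError(f"environment unavailable: {exc}") from exc
    yield conn
    conn.close()


def test_service_echoes_payload(service_connection):
    # Only a genuine code defect can turn this test red.
    service_connection.sendall(b"ping")
    assert service_connection.recv(4) == b"ping"
```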


So, while teams should understandably push for more automation and faster testing closer to the code, they shouldn’t take shortcuts in test design that undermine the usefulness of their tests long-term.
