There are lots of types of testing, carried out at every stage of a project. It’s a truism of software development that the earlier you catch a problem, the less money and time it costs to fix it! A key mantra of the Agile methodology is that everybody involved tests:
- The Business Analysts carry out Static Testing by reviewing requirements and specification documents.
- The Developers carry out Unit Testing when they check a piece of code works before merging it in.
- The Test Analysts perform Smoke Testing when a build is deployed, carry out Acceptance Testing and Integration Testing in the system test environments, and perform Regression Testing on major releases.
- The client/product owner carries out User Acceptance Testing (UAT) to verify the product meets their business needs.
Static testing means examining the system without any code running; this can be as simple as taking a high-level look at the business processes to check the proposed solution will work, or as detailed as minutely reviewing a requirements document to ensure data definitions are clear and correct.
It can also refer to reviewing code while it is ‘static’, i.e. not running! This could be a developer running through their own code to ensure it makes sense, or reviewing another developer’s work as part of a code review or merge request.
Unit testing involves a developer ensuring the code they’ve created works locally on their machine before committing it to be merged for deployment. Showing it Works on My Machine™ (WOMM) doesn’t necessarily mean it’ll work in the big wide world of the system as a whole, but if it doesn’t WOMM, it probably won’t work elsewhere, and we can save the time of merging, deploying and system testing it.
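As a sketch of what that looks like in practice, here’s the kind of unit test a developer might run locally before committing, using Python’s built-in unittest module. The function being tested is purely illustrative (a hypothetical discount calculator, not anything from a real project):

```python
import unittest

# Illustrative function under test; the name and logic are hypothetical.
def calculate_discount(price, percent):
    """Apply a percentage discount to a price, rounded to 2 decimal places."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestCalculateDiscount(unittest.TestCase):
    # The developer runs these locally: if they fail here,
    # there's no point merging and deploying the change.
    def test_applies_discount(self):
        self.assertEqual(calculate_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            calculate_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

If either test fails, the developer knows before anyone else has spent time on the change.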
Smoke testing is usually a quick series of tests to check that the basic, critical functionality of a newly deployed build is working. The tests are planned to represent some of the most common paths a user might take through the system, sometimes called ‘red routes’ or ‘the happy path’.
The aim of a smoke test is to spot any glaring problems with a deployment before moving on to system testing, UAT or a live release.
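A smoke test can often be automated as a short list of named checks run in one go. This is a minimal sketch of such a harness; the checks themselves are placeholders, where in a real system each would exercise a happy path against the deployed build (load the home page, log in, place an order):

```python
# Minimal smoke-test harness: run a handful of critical "happy path"
# checks and report which, if any, failed.
def smoke_test(checks):
    """checks: list of (name, check_fn) pairs. Returns (ok, failed_names)."""
    failures = [name for name, check in checks if not check()]
    return (len(failures) == 0, failures)

# Placeholder checks for a hypothetical system; each lambda would
# really call out to the newly deployed build.
checks = [
    ("home page loads", lambda: True),
    ("user can log in", lambda: True),
    ("order can be submitted", lambda: True),
]

ok, failures = smoke_test(checks)
print("deployment looks OK" if ok else f"smoke test failed: {failures}")
```

If any check fails, the build goes back to the developers before a full test cycle is wasted on it.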
Acceptance and Integration Testing
Acceptance and Integration testing are the bread and butter of what Test Analysts do (probably what you think of when you think about testing) and are effectively carried out as one process in the System Testing environments. Acceptance testing involves verifying that the implemented functionality meets the needs of the client, normally by comparing it to the acceptance criteria on the ticket. Development work on each ticket is done in isolation, but the code needs to work in conjunction with all other elements of the system. Integration testing checks the communication, interface and data flow between different modules of a system. Effectively, if the tester has planned properly, these two testing types happen simultaneously.
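To make the distinction concrete, here is a tiny integration-style check. The two “modules” (a price calculator and an invoice formatter) are entirely hypothetical; the point is that each may pass its own unit tests while the test below verifies that data flows correctly between them:

```python
# Two illustrative "modules" developed in isolation.
def price_with_tax(net, rate=0.2):
    """Calculate a gross price from a net price (module A)."""
    return round(net * (1 + rate), 2)

def format_invoice_line(description, gross):
    """Format a line on an invoice (module B)."""
    return f"{description}: £{gross:.2f}"

def test_modules_integrate():
    # Integration check: the output of module A feeds module B correctly.
    gross = price_with_tax(10.00)
    line = format_invoice_line("Widget", gross)
    assert line == "Widget: £12.00"

test_modules_integrate()
```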
A piece of software is (to me at least) an impossibly complex assembly of moving parts, and adding anything new to it can have unforeseen consequences… it’s not at all uncommon for something completely unrelated to break when new code is merged. These are known as Regression Bugs, and we carry out Regression Testing to try to find them! This involves thorough testing of the software, carrying out as many tests as possible across the whole system in the time available.
As mentioned, it’s impossible to test every possible process and combination of inputs, so this is where the skill of the tester comes in: planning a regression test to maximise coverage and build as much confidence in the system as possible.
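One simple way to think about that planning is as a time-boxed prioritisation problem: given more test cases than time, run the highest-risk ones first. This is only a sketch (the case names, risk scores and durations are invented), but it captures the trade-off:

```python
# Time-boxed regression planning: greedily pick the highest-risk test
# cases that fit within the available time budget.
def plan_regression(cases, time_budget_minutes):
    """cases: list of (name, risk_score, minutes). Returns names to run."""
    plan, used = [], 0
    for name, risk, minutes in sorted(cases, key=lambda c: -c[1]):
        if used + minutes <= time_budget_minutes:
            plan.append(name)
            used += minutes
    return plan

# Hypothetical regression pack for an online shop.
cases = [
    ("checkout", 10, 45),
    ("login", 9, 30),
    ("search", 6, 25),
    ("profile edit", 3, 20),
]
print(plan_regression(cases, 90))
```

In reality a tester weighs far more than a single risk score, but the goal is the same: the most confidence the schedule allows.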
There are three main reasons why a ticket might not pass testing:
- It doesn’t do what it’s supposed to; it doesn’t meet the acceptance criteria. The ticket is failed and sent back to the developers.
- The system does something it’s not supposed to; there’s a bug! A new ticket is created for the bug. The bug may be a blocker and the ticket cannot pass the system test until it’s resolved (the submit button’s missing), or just unwanted behaviour (clicking the submit button changes the font).
- The ticket is dependent on other functionality not yet implemented or requires test data that isn’t ready. This is known as failing the test entry criteria; we cannot begin testing yet.
When carrying out testing, you might come across ‘undesired behaviour’, or as we usually call it: a bug!
Raising a bug involves creating a ticket containing all the information someone would need to replicate it, including, among other things:
- Which part of the system we’re in
- What version we’re using
- What we did
- What we expected to happen
- What actually happened
This enables a PM or client to understand what’s happening and decide whether a bug is critical and needs fixing immediately, whether it can wait, or whether it’s out of the scope of the project. It also allows a developer to see at exactly what point something went wrong, giving them the best chance of figuring out how to fix it!
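Those fields map naturally onto a structured ticket. The sketch below uses a Python dataclass with illustrative field names and example data (nothing here comes from a real project), just to show how each item in the list above becomes a field a PM, client or developer can rely on:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    """Illustrative bug ticket; the fields mirror the list above."""
    system_area: str          # which part of the system we're in
    version: str              # what version we're using
    steps_to_reproduce: list  # what we did
    expected_result: str      # what we expected to happen
    actual_result: str        # what actually happened

# Example (hypothetical) bug report.
bug = BugReport(
    system_area="Checkout",
    version="2.4.1",
    steps_to_reproduce=["Add an item to the basket", "Click Submit"],
    expected_result="Order confirmation page is shown",
    actual_result="The page font changes and no order is placed",
)
```

The gap between `expected_result` and `actual_result` is what tells everyone, at a glance, exactly what the bug is.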
Find out more about a day in the life of a Test Analyst at Software Solved, plus a previous blog covering the importance of software testing.
Are you considering improving your current system, or do you have a new piece of software you are looking to have developed? Get in contact with us today.