05 Jun 2022 at 21:23
An interesting problem comes up when exploratory testing finds a defect in an application. If the defect cannot be fixed immediately, the fix is assigned to the backlog and eventually scheduled to be worked on by the development team. The challenge now is what to put into the automated test: should the test fail or should it pass?
- A failing test is the obvious choice, since the test should specify correct behavior, and when the developers get around to making the necessary changes, the test will just pass without any extra work required by the QA team.
- Under PyTest there is the option to mark a test as xfail (see the sketch after this list), which reports the count of tests that failed as expected (XFAIL) and the count of tests that were expected to fail but passed (XPASS).
- A passing test that encodes the current behavior of the code as correct is another option. The passing test is a Change Detector: it will fail when the defect is fixed.
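As a minimal sketch of the xfail option in pytest, where `parse_date` and the DEF-123 defect ID are hypothetical stand-ins for the code under test and the backlog ticket:

```python
import pytest
from datetime import date

def parse_date(text: str) -> date:
    # Stand-in for the code under test: it deliberately carries the
    # defect being documented (two-digit years are not expanded).
    day, month, year = (int(part) for part in text.split("/"))
    return date(year, month, day)  # bug: 22 should become 2022

# DEF-123 is a placeholder defect ID for illustration.
@pytest.mark.xfail(reason="DEF-123: parse_date mishandles two-digit years")
def test_parse_date_expands_two_digit_year():
    assert parse_date("05/06/22").year == 2022
```

While the defect is open, pytest reports this test as XFAIL; once the defect is fixed, it shows up as XPASS.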
Of the three options, a failing test means that every time the test suite runs, one or more tests fail, and there is some overhead in deciding which of the failures are expected and which are not. The xfail test has a similar problem, but without the benefit of a stack trace: if the actual behavior changes but the defect is not fully fixed, the test may still report as an XFAIL, so there is still some overhead in checking the test suite result.
A passing test is cheap to evaluate. If all tests pass, then nothing has changed and there is nothing to investigate. If a test does fail, the normal process of investigating the error starts, and if the passing test carried a good error message referring to the defect, the investigation should be short. It is then a small change to amend the test case to reflect the new correct behavior, after which the entire test suite should pass again. Some exploratory testing is still necessary to make sure that the fix did not introduce any other weird behavior, but the process of getting the test suite passing again should be trivial.
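One way that passing Change Detector could look, reusing the hypothetical `parse_date` and DEF-123 from the earlier sketch:

```python
def test_parse_date_two_digit_year_current_behavior():
    # Change Detector: encodes today's (wrong) behavior as passing.
    # When DEF-123 is fixed, this assertion fails and the message
    # points the investigator straight at the defect.
    result = parse_date("05/06/22")
    assert result.year == 22, (
        "Behavior changed: DEF-123 (two-digit year handling) may have "
        "been fixed; update this test to assert year == 2022."
    )
```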
04 Jun 2022 at 23:06
Rails and Phoenix made testing a first-class part of web development; see the Rails Guide and the Hexdocs Testing Guide. Their main focus is on Unit Testing, Testing Views, and Testing Controllers, while saying little about System testing. Both make a nod towards Integration testing, but the emphasis is on what developers should know about testing rather than on a full system test.
With Unit Tests, the test case setup and teardown is relatively minimal, so often a unit test will have a single assertion: a unit test is testing just a single thing. This is appropriate for a Unit Test, since the execution time for the test suite is relatively insensitive to the number of tests when the testing framework can execute a suite of 1,000 unit tests in a few seconds.
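A hypothetical single-assertion unit test, with a trivial `slugify` function defined inline so the example is self-contained:

```python
def slugify(title: str) -> str:
    # Trivial function under test, defined here for illustration.
    return title.lower().replace(" ", "-")

def test_slugify_replaces_spaces_with_hyphens():
    # One behavior, one assertion: no setup, near-zero runtime.
    assert slugify("Hello World") == "hello-world"
```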
For Controller and View tests, both Rails and Phoenix have a nice way of mocking out the webserver interaction so that the behavior of the Controller responding to GET/POST/PUT/DELETE requests from the browser can be tested without needing the full stack. Both include the idea of a Test Database for the test suite, populated by fixtures that work in conjunction with the test cases to provide appropriate data. The data is set up before each test case and cleaned up afterwards, so the failure of one test case does not impact the others. These tests run slower than Unit tests, but a reasonable suite of several hundred tests can run in less than 10 seconds.
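The per-test setup and teardown pattern translates naturally to pytest fixtures; in this sketch an in-memory SQLite database stands in for the frameworks' test databases:

```python
import sqlite3
import pytest

@pytest.fixture
def db():
    # Setup: a fresh in-memory database seeded before each test case,
    # mirroring the Rails/Phoenix fixture-plus-test-database pattern.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    conn.commit()
    yield conn
    # Teardown: runs even if the test fails, so one failing test
    # cannot leak state into the next one.
    conn.close()

def test_seeded_user_is_present(db):
    rows = db.execute("SELECT name FROM users").fetchall()
    assert rows == [("alice",)]
```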
Both Unit and Controller/View tests tend to be relatively simple, with few assertions. System Testing is different. For a start, the setup and teardown time for these tests can be significant, especially with a microservice architecture where the test case covers the interactions between multiple services. So system tests have to pay back the larger setup and teardown cost by doing more work inside each system test case (a sketch follows the list below), which means:
- the scenario for a system test case has to be more of a Soap Opera
- there should be many more assertions about the steps along the scenario so that if there is an error, the source of the error can be found quickly
- the scenarios should follow multiple alternate paths through the system
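To make that concrete, here is a hypothetical soap-opera style scenario in pytest; the in-memory `ShopClient` is an invented stand-in for a real multi-service system, kept minimal so the test is runnable:

```python
from dataclasses import dataclass

@dataclass
class Order:
    # Minimal in-memory stand-in; a real system test would drive
    # live services over HTTP instead.
    items: list
    state: str = "pending"

    @property
    def total(self) -> int:
        return 10 * len(self.items)

class ShopClient:
    def place_order(self, items):
        return Order(items=list(items))

    def cancel_item(self, order, item):
        order.items.remove(item)

    def pay(self, order):
        order.state = "paid"
        return True

    def ship(self, order):
        order.state = "shipped"
        return "TRACK-1"

def test_order_lifecycle_soap_opera():
    shop = ShopClient()

    # Step 1: place an order and assert the starting state.
    order = shop.place_order(["widget", "gadget"])
    assert order.state == "pending"
    assert order.total == 20

    # Step 2: alternate path, cancel one item mid-scenario.
    shop.cancel_item(order, "gadget")
    assert order.items == ["widget"]
    assert order.total == 10

    # Step 3: pay, asserting the state transition.
    assert shop.pay(order)
    assert order.state == "paid"

    # Step 4: ship, asserting the final state and tracking info.
    tracking = shop.ship(order)
    assert tracking == "TRACK-1"
    assert order.state == "shipped"
```

Asserting at every step means a failure report points at the exact step in the scenario that went wrong, which is what pays back the larger setup cost.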