27 Feb 2022 at 22:13
Although Brian Marick was not the originator of the concept, I first heard about Soap Opera Tests from him. Rather than covering a single, simple scenario, a Soap Opera Test exaggerates and complicates the scenario to push the system and reveal where failures can occur. This gets around a problem often seen in Agile projects, where the team tries to simplify the problem domain by ignoring what could be considered edge cases and addressing only the simple scenarios.
The lens of a Soap Opera can be useful when reviewing the test suite for an application, going beyond the simplistic code coverage that is often reported from unit tests and component tests within a deployment pipeline:
- How many tests (outside of unit tests) have a trivial sequence of setup, do action, check result, teardown (or, to use the Agile terms, how many tests are of the form Given, When, Then) rather than a connected sequence of transactions that represents a complex scenario?
- For parameterized tests, how many are truly distinct tests rather than just equivalent values that exercise the exact same code path?
- Are the System tests already covered by the Component-level tests implemented by the developers? (Typically the developer-written tests consider some possible failures but miss others.)
- Do the System tests touch multiple parts of the architecture as part of a test scenario? (This is where a Soap Opera mindset helps, making sure that the test addresses what happens at team and component boundaries.)
- Do the System tests address the full scope of the system and cover all interacting systems? (A common failing is that of not testing that the data replicated to the associated data lake/swamp/warehouse accurately represents the system data.)
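As an illustration of the first question (the domain, class, and test names here are hypothetical, invented for this sketch rather than taken from Marick's writing), compare a trivial Given/When/Then test with a soap-opera-style test that chains several transactions and deliberately probes failure modes in one storyline:

```python
class Account:
    """Minimal in-memory account, just enough to illustrate the test styles."""

    def __init__(self, balance=0):
        self.balance = balance
        self.frozen = False

    def deposit(self, amount):
        if self.frozen:
            raise RuntimeError("account frozen")
        self.balance += amount

    def withdraw(self, amount):
        if self.frozen:
            raise RuntimeError("account frozen")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def test_simple_withdrawal():
    # Trivial Given/When/Then: one setup, one action, one check
    account = Account(balance=100)
    account.withdraw(30)
    assert account.balance == 70


def test_soap_opera_account_lifecycle():
    # Exaggerated, connected scenario: deposits, an overdraft attempt,
    # a freeze mid-sequence, then recovery - probing several failure
    # modes in one storyline rather than one isolated behaviour
    account = Account()
    account.deposit(50)
    try:
        account.withdraw(80)          # overdraft must be rejected
        assert False, "expected ValueError"
    except ValueError:
        pass
    account.frozen = True             # compliance freezes the account
    try:
        account.deposit(10)           # deposits blocked while frozen
        assert False, "expected RuntimeError"
    except RuntimeError:
        pass
    account.frozen = False            # freeze lifted, life goes on
    account.withdraw(50)
    assert account.balance == 0
```

The second test is longer, but each step depends on the state left by the previous one, which is exactly the kind of connected sequence a trivial test never exercises.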
Overall, whenever evaluating a test it is useful to know what risk it is addressing. Ideally, the descriptive text included in an automated test case should record the motivation for the test, why it is important, and the consequences of skipping it. My take is that System tests should not just repeat what can already be done by unit and component-level tests (e.g. view and controller tests in Phoenix testing terminology); they have to go beyond those simple scenarios and probe the interfaces between the various components.
Ultimately, every test has to answer the economic question: what is the value of this test case?
17 Feb 2022 at 00:11
CUPID is Dan North’s response to the SOLID principles and their back story. Rather than another set of principles, Dan chose to focus on the properties of the software.
- Composable – code that works well with others and does not have so many external dependencies that it becomes harder to use in another context, ideally with intention-revealing terminology
- Unix philosophy – related to the composability property: does one thing well and works well with others to build a larger solution
- Predictable – or, as the saying goes, “does what it says on the tin.” Dan calls this a generalization of Testability: it should behave as expected, and be deterministic and observable
- Idiomatic – naturally fits in with the way code is written in the implementation language. For example in Python, rather than explicitly opening, writing to, and then closing a text file, the natural way to write this is as below, where Python automatically handles the closing of the file
with open("file.txt", "w") as textfile:
    textfile.write("some text")
- Domain based – uses words and language in a way that would be familiar to practitioners in that domain
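As a small sketch of how several of these properties show up together (the function and its domain names are hypothetical, invented for this example rather than taken from Dan's article), consider a function that does one thing, reveals its intention, and has no hidden dependencies:

```python
from datetime import date


def days_until_renewal(renewal_date: date, today: date) -> int:
    """Number of whole days before a subscription renews."""
    # No hidden globals, clocks, or I/O: both inputs are passed in
    # explicitly, so the function is predictable (deterministic),
    # trivially testable, and composable with other code.
    return (renewal_date - today).days


# Deterministic: the same inputs always give the same answer
remaining = days_until_renewal(date(2022, 3, 1), date(2022, 2, 17))
print(remaining)  # 12
```

Note how the domain-based name means a practitioner can read the call site without ever opening the function body.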
06 Feb 2022 at 04:48
When working with interpreted languages like Ruby, Elixir, and Python it is great to use the REPL to discover the capabilities of the various variables that you are dealing with. Ruby uses irb, Elixir uses iex, and to be different Python jumps directly into the interactive prompt using python. In each of these you have the full power of the language and whatever libraries you have installed, just by typing code at the relevant prompt. So at the python prompt you could do the following to see how Playwright interacts with the browser - using code borrowed from an earlier post.
from playwright.sync_api import sync_playwright
playwright = sync_playwright().start()
browser = playwright.chromium.launch()
page = browser.new_page()
title = page.title()
The nice thing about each of these REPLs is that they let you see the type of an object and its associated attributes and methods, and hence get a better understanding of a library by trying things out and getting immediate success or failure - with an associated error message and stack trace, immediately followed by the REPL prompt for you to try again. Amusingly, this even works for overly complex APIs like Amazon's Boto3 Python library, which you need in order to interact with the AWS services.
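The same introspection works on any object, not just Playwright's. For example, using only the standard library (so this runs at any python prompt), the built-in type, dir, and help functions expose exactly the information described above:

```python
from pathlib import Path

p = Path("notes.txt")

print(type(p))  # the concrete class behind the variable

# dir() lists everything the object offers; filter out the dunder noise
public = [name for name in dir(p) if not name.startswith("_")]
print(public[:5])  # a sample of the public attributes and methods

help(Path.suffix)  # built-in documentation, straight from the REPL
```

Typing these three calls against an unfamiliar object is usually faster than hunting through the library's online documentation.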
04 Feb 2022 at 23:22
Normally I avoid any hint of political comment, but this just hit the sweet spot of asking who is influencing our wetware: how do we decide what to care about and what to argue about?
03 Feb 2022 at 00:25
The playwright codegen utility provides a nice preview of the available selector when hovering the mouse over any part of the web page. To try it with the Phoenix LiveView default application, it can be started with the command
> playwright codegen http://localhost:4000/
and after navigating to the LiveDashboard, the selector for the refresh speed shows up in the Chromium browser.
It also does a good job of generating sample code that can then be copied into a pytest test case for future reuse:
# Click text=Ports
# with page.expect_navigation(url="http://localhost:4000/dashboard/ports"):
# Select 2
Note that it will delay the script with expect_navigation until the Ports page is displayed - although, unlike the commented-out part of the code, it is not waiting for a specific URL.