At least write one test.
A key philosophy of mine is 'at least write one test'.
This is intended to be a pragmatic rule of thumb that allows developers to prioritise some amount of expedience, while also writing code that is maintainable.
The idea is: if we write a single test now, while we're writing the feature, then later - when something goes wrong, or we need to extend the feature - it's going to be a lot easier to write a second test, because we can simply follow the existing pattern.
Whereas if no such test exists, when another developer (or you, in six months' time) comes across the feature and needs to extend it, there's a lot more friction in writing a test, and the chances are higher that they'll continue doing what they've seen already and not write a test either.
This philosophy does not prescribe writing comprehensive tests, or strict coverage standards. Whether comprehensive tests are required or a good idea is a different story, and will be organisation- and context-specific. The 'at least write one test' philosophy allows those advocating a more expedient approach to take a test-light approach, while also putting themselves in a position to increase their test coverage later.
Tests as a test of your code's consumability
There's a second philosophy here, and this is really the core of the argument: the value of a test isn't just in checking the code's correctness - a test also tests how usable some code is.
Often some code will rely on some amount of context or external behaviour existing. For example, the code might make an external API call, or need to access some kind of instantiated class instance.
Just writing one test isn't necessarily a trivial task - and that's the point - writing the one test lets you know what assumptions you're making about the context that is required for this code to work.
Let's take some example code.
Example Code 1 - Naive fetch
If I write a function like this:
type Foo = {
  id: string;
  values: Array<number>;
};

export async function getFooValue(fooId: string): Promise<number> {
  const res = await fetch(`https://example.com/foo/${fooId}`);
  const json = await res.json() as Foo;

  return json.values.reduce((acc, cur) => acc + cur, 0);
}
How would we write a test for some code like this?
Well, we're going to have to do some kind of fetch mocking - maybe using a tool like MSW, or, as below, by stubbing the global fetch directly.
import { describe, expect, it, jest } from "@jest/globals";
import { getFooValue } from "./getFooValue";

describe(getFooValue, () => {
  it("sums the values", async () => {
    // Stub the global fetch so GET /foo/1 resolves to a known Foo
    jest.spyOn(globalThis, "fetch").mockResolvedValue({
      json: async () => ({ id: "1", values: [1, 2, 3] }),
    } as Response);

    const result = await getFooValue("1");

    expect(result).toBe(6);
  });
});
Now this might seem like a fair enough, reasonable test.
But understand what we're establishing here - anytime we want to use the getFooValue function, we need to be in a context where the fetch behaviour for GET /foo/{id} is defined.
If we're writing a test for some code that uses getFooValue, we will also need to define fetch behaviour there too (or we would mock getFooValue away). And probably, the way we would do that would be by looking at how we did the tests for getFooValue and copying that.
So how else could we write this test?
Example Code 2 - Injecting fetch functionality as a function parameter
What if we receive the fetching function as a parameter?
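A minimal sketch of what that might look like (the exact signature here is my assumption, chosen to match the test below):

type Foo = {
  id: string;
  values: Array<number>;
};

// The fetching behaviour is injected, so a test can pass in a stub directly.
export async function getFooValue(
  fooId: string,
  fetchFoo: (fooId: string) => Promise<Foo>,
): Promise<number> {
  const foo = await fetchFoo(fooId);

  return foo.values.reduce((acc, cur) => acc + cur, 0);
}

The test then needs no fetch mocking at all: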
import { describe, expect, it } from "@jest/globals";
import { getFooValue } from "./getFooValue";

describe(getFooValue, () => {
  it("sums the values", async () => {
    const result = await getFooValue("1", async () => {
      return {
        id: "1",
        values: [1, 2, 3],
      };
    });

    expect(result).toBe(6);
  });
});
Of course, this does just shift the responsibility for doing the actual fetching up to the calling function, and if we want to test that, we'd still need to set up fetch mocking behaviour for that test.
And this does seem like a somewhat redundant function - why not just pass the already fetched Foo object in?
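For what it's worth, that variant would just be a pure calculation (sumFooValues is a hypothetical name of my own):

type Foo = { id: string; values: Array<number> };

// The caller does the fetching; this function needs no mocking at all to test.
export function sumFooValues(foo: Foo): number {
  return foo.values.reduce((acc, cur) => acc + cur, 0);
}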
Example Code 3 - Using a service singleton
We could take a more object-oriented approach, and write some code like this:
type Foo = {
  id: string;
  values: Array<number>;
};

class FooService {
  // Default behaviour: throw until a real implementation is set
  public getFooValue(fooId: string): Promise<Foo> {
    throw new Error("not implemented");
  }

  public _setFooValueFn(fn: (fooId: string) => Promise<Foo>) {
    this.getFooValue = fn;
  }
}

export const fooService = new FooService();

export async function getFooValue(fooId: string): Promise<number> {
  const foo = await fooService.getFooValue(fooId);

  return foo.values.reduce((acc, cur) => acc + cur, 0);
}
Now we can write our test by defining the behaviour of the FooService.
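A sketch of what that could look like (assuming the service lives in the same ./getFooValue module as the earlier examples):

import { describe, expect, it } from "@jest/globals";
import { fooService, getFooValue } from "./getFooValue";

describe(getFooValue, () => {
  it("sums the values", async () => {
    // Define the service behaviour for this test
    fooService._setFooValueFn(async (fooId) => ({
      id: fooId,
      values: [1, 2, 3],
    }));

    const result = await getFooValue("1");

    expect(result).toBe(6);
  });
});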
Something I quite like about this approach is that it allows us to define a default behaviour - which is to have the service throw 'not implemented' errors.
5 it("errors if the service function is not instantiated", () => {
6 expect(async () => {
7 await getFooValue("1");
8 }).rejects.toThrow("not implemented");
9 });
10
This is much nicer than, say, having your tests make real API requests, which may or may not error, and having to dig into which request is failing and why. 'Not implemented' errors give a clear, immediate nudge as to what the problem is.
Conclusions
Just writing one test is a good sense check on how easy your code is to work with.
If just writing one test is a trivial task, then great! It's not going to be a problem to do it.
If just writing one test is actually an ordeal, then in my opinion that suggests a code smell - you've just become aware of a problem.
So what if that is the case? Do we put tools down until we can write that one test?
Well, I guess that's a topic for another day - but in the meantime you can read How to get started on a codebase that has no tests.
Questions? Comments? Criticisms? Get in the comments! 👇