Understanding Arrange‑Act‑Assert

If there’s one piece of advice that will help you write better unit tests, it’s this: always, always write your tests in Arrange-Act-Assert order.

it("makes a post request when submitting the form", () => {
  // Arrange
  jest.spyOn(api, "post").mockReturnValue(fetchOkResponse({ status: "success" }));
  const data = { ... };
  mount(<FormComponent data={data} />);

  // Act
  click(submitButton());

  // Assert
  expect(api.post).toHaveBeenCalledWith(data);
});

The arrange step is when we set up dependencies and any input values, such as the api.post function and the data object in the example above.

The act step is the code you’re exercising in the test. Sometimes this is a function, sometimes it’s a method on a class. Almost always though, it’s a single method call.

The assert step is where all your assertions and expectations happen.

You don’t always need an arrange step, by the way. Most of the time you will, but for simple tests you may not.
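For example (just a sketch, with a hypothetical GreetingComponent and message text), a test whose act step is the mount itself needs no arrange at all:

it("renders a welcome message", () => {
  // Act - mounting is the whole action; there's nothing to arrange
  mount(<GreetingComponent />);

  // Assert
  expect(container.textContent).toContain("Welcome");
});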

Simple, right? Perhaps, but there’s one unanswered question: Why do we structure our tests this way?

I’ll explain with a counter example. Imagine you have a test that reads like the one below. It checks that a re-mount of your component causes the counter to increase in value.

it("increments the counter on page when re-rendering", () => {
  mount(<CounterComponent />);
  expect(container.textContent).toContain("Count: 1");

  mount(<CounterComponent />);
  expect(container.textContent).toContain("Count: 2");
});

Aside from breaking Arrange-Act-Assert (this is Arrange-Act-Assert-Act-Assert), this test is in good form.

But what happens if a code change causes this test to fail?

Where did the break happen? Was it the first mount… or the second mount? Well, you won’t know until you go digging. Which expectation failed? What line of the stack trace matters? And so on.

A well-written unit test will very quickly pinpoint errors when it fails.

Surface area

There is a problem with the surface area of this code. There are two separate invocations of your code here, which do slightly different things. Perhaps “separate” is the wrong word, because some of the code executed by the second mount call will overlap with the code run by the first. So really you have two problems: the surface area is extended, and some of the executed code is called twice.

A failure could be in any part of that code.

There’s more. What happens if the first expectation fails? Does the second get run? Should we even bother continuing to run the rest of the test in that case?

Even if your second test case runs, it’s in an invalid state anyway: any pass will need to be re-tested once you fix the first failure, and any failure is just noise while you try to figure out the first.
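In Jest and Jasmine, a failed expect throws immediately, so in practice nothing later in the same test runs. Here’s the combined test again with a deliberately wrong first expectation (“Count: 99” is made up) to show what you’d actually see:

it("increments the counter on page when re-rendering", () => {
  mount(<CounterComponent />);
  expect(container.textContent).toContain("Count: 99"); // fails and throws here

  // Never reached: the second mount and its expectation are skipped,
  // so this run tells you nothing about the re-render behaviour.
  mount(<CounterComponent />);
  expect(container.textContent).toContain("Count: 2");
});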

The solution is straightforward: split out the test.

it("initially starts the counter at 1", () => {
  mount(<CounterComponent />);
  expect(container.textContent).toContain("Count: 1");
});

it("increments the counter on page when re-rendering", () => {
  // Arrange
  mount(<CounterComponent />);

  // Act - the previous call will have been tested in the test above!
  mount(<CounterComponent />);

  // Assert
  expect(container.textContent).toContain("Count: 2");
});

See how the act phase of the first is repeated as the arrange phase of the second? That means there’s a dependency between these tests, which we’ll come back to in just a second. First, let’s think about what happens if either of these unit tests fails.

If the first test breaks, you know immediately that the problem is with the initial mount code.

If the second test breaks, you know immediately that the problem is with updating state.

But what happens if both tests fail? This is a possibility because the second test has a dependency on the success of the first. You can see that because the arrange phase of the second test is the act phase of the first.

In this case, you start from the first test and fix that, and then re-run your tests. Most test frameworks don’t have a way of explicitly defining dependencies between tests, but they do have an implicit way of doing so: test ordering. Put dependent tests below their dependencies. That way, when your test run report shows a bunch of test failures, you can start at the top and work down.
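As a sketch of that in practice: Jest runs the tests in a file in declaration order by default, so wrapping the two counter tests in a describe block with the dependency first makes the reading order match the fixing order:

describe("CounterComponent", () => {
  // The dependency comes first: if this fails, fix it before
  // reading anything into the failure below.
  it("initially starts the counter at 1", () => {
    mount(<CounterComponent />);
    expect(container.textContent).toContain("Count: 1");
  });

  // The dependent test comes second: its arrange phase is the
  // act phase of the test above.
  it("increments the counter on page when re-rendering", () => {
    mount(<CounterComponent />);
    mount(<CounterComponent />);
    expect(container.textContent).toContain("Count: 2");
  });
});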

The Arrange-Act-Assert pattern is one of the most fundamental tools we have for writing “good” tests. If this is new to you, try this: next time a test breaks and it’s not in AAA form, write a new test with just the first act and assert within it. See if that fails. If not, repeat with a second test, and so on. Finally, get rid of the original test.

Unfortunately, you’ll now need to think of separate names for each of those tests, so be prepared to engage your brain for that one…

— Written by Daniel Irvine on January 9, 2020.