
Testing React Apps with Jest

Testing is straight up fun for me.

Not only do I gain a deeper connection to our codebase, but I also get to geek out with sugary syntax and nifty command-line test runners.

Go, nyancat Go!

But the best thing about it is the sense of chipping away at vulnerabilities and bugs, one test at a time. Finding a bug in production code with a test you’ve coded is a validation of the whole process and a big win.

I just finished a week of testing with Jest. We used a library called Chakram to handle API calls.

Previously we used different variations of Mocha/Chai/Enzyme and Blue Tape/Sinon. It wasn’t perfect, but it worked reasonably well. Yet recent improvements in Jest mocks inspired my boss to refactor our entire testing suite. For us, the timing was right. Besides, Jest is the native testing environment for React.

My boss gave me a wonderful set of command-line utilities within Yarn for running tests. He set up the iterative process with custom Yarn scripts: test (run once), testWatch (continuous, no coverage), and testWatchCover (continuous with coverage). Within watch mode, I had options to run only failed tests or only changed tests. What a joy to use!
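The scripts described above might look something like this in package.json — the script names come from the post, but the exact Jest flags behind them are my assumption:

```json
{
  "scripts": {
    "test": "jest",
    "testWatch": "jest --watch",
    "testWatchCover": "jest --watch --coverage"
  }
}
```

In watch mode, Jest's interactive prompt then offers single-key filters such as f (run only failed tests) and o (run only tests related to changed files).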

He talked me through running http-server from the command prompt to serve a web page showing red/green bar graphs and detailed tables of test coverage. He also showed me how to use atom-lcov, a package for my editor, Atom, which displays red, green, and yellow dots alongside the code to indicate coverage.

My awesome boss also provided me with cheatsheets in Markdown, samples of tests against the thornier parts of the codebase, as well as some iconic model and controller tests with successful database connectivity. He also set up global helpers when configuring the testing framework, which allowed me to create mocks with simple calls such as create.location() or create.account({'name': 'TestUser'}).
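A hypothetical sketch of what such factory helpers might look like — the create.account/create.location names come from the post, but the default values and the merge-with-overrides design are my assumptions (a real setup would likely insert rows through the ORM):

```javascript
// Invented default fixtures; each factory merges caller overrides on top.
const defaults = {
  account: { name: 'DefaultUser', role: 'member' },
  location: { city: 'Springfield', zip: '00000' },
};

const create = {
  account: (overrides = {}) => ({ ...defaults.account, ...overrides }),
  location: (overrides = {}) => ({ ...defaults.location, ...overrides }),
};

// In a Jest setup file (wired in via the setupFilesAfterEnv config option),
// these could be attached as globals so every test can call create.account():
// global.create = create;

console.log(create.account({ name: 'TestUser' }).name); // → TestUser
```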


While I’m talking about it, I should mention that my boss also set up a database rollback system, which allows us to pull a copy of the test database into memory, beat the heck out of it with tests, and then roll back the transactions once the tests are done so they are never committed to the database. He had to create his own globals to rewrite the Jest functions to work with it. Amazing! Don’t try it at home, kids.
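A toy illustration of the rollback idea, using an in-memory stand-in for the database — a real setup would open a genuine database transaction in beforeEach and roll it back in afterEach, which is far trickier than this sketch suggests:

```javascript
// FakeDb stands in for a real database connection; begin() snapshots state,
// rollback() restores it, so nothing a test writes ever sticks.
class FakeDb {
  constructor() {
    this.rows = [{ id: 1, name: 'seed' }]; // pretend this came from the test db
    this.snapshot = null;
  }
  begin() { this.snapshot = JSON.parse(JSON.stringify(this.rows)); }
  insert(row) { this.rows.push(row); }
  rollback() { this.rows = this.snapshot; this.snapshot = null; }
}

const db = new FakeDb();

// In a Jest setup file: beforeEach(() => db.begin());
db.begin();
db.insert({ id: 2, name: 'temp' }); // the test beats up the data...

// ...and afterEach(() => db.rollback()); none of it is committed.
db.rollback();

console.log(db.rows.length); // → 1
```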

It’s no surprise, with all that help, that in just a week I was able to bring coverage across all files of the app to 64% of statements, 57% of branches, 75% of functions, and 65% of lines. It’s not a huge app: only 259 functions with 1054 statements.

Soon I will get our statement coverage closer to 75% or even 80%. But testing has diminishing returns the longer you spend on it. The trick is to grab as much low-hanging fruit as you can to cover the major branches of your codebase. The models and controllers are key: make sure your API is working and the database is returning the expected values to the expected places. We have done that. Most of what remains are a few of the helpers and some lesser-used functions. These will take much longer and yield smaller gains in coverage.

Some quick advice: read the Jest docs. Maintain granularity and strict consistency in style from test to test. Do not comment out tests; instead use test.skip, which tallies up in the results and serves as a to-do list. Sometimes digging deeper into a skipped test turned up a bug in my test, but more than once it uncovered a bug in production code.
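The test.skip pattern looks like this. The tiny test/expect shims below exist only so the snippet runs outside the Jest runtime — in a real test file they are Jest globals — and createAccount is a made-up helper:

```javascript
// Shims standing in for Jest globals (illustration only).
const skipped = [];
const test = (name, fn) => fn();
test.skip = (name) => skipped.push(name); // Jest reports these as "skipped"
const expect = (actual) => ({
  toBe(expected) {
    if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
  },
});

// Hypothetical helper under test.
const createAccount = (name) => ({ name });

test('creates an account with the given name', () => {
  expect(createAccount('TestUser').name).toBe('TestUser');
});

// Skipped, not commented out: it still tallies in the run summary as a to-do.
test.skip('rejects duplicate account names');
```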

Also, it is rather easy to mistake a passing test for a good one. For example, I once forgot to change a directory name when cutting and pasting, so a few tests were passing, but for the wrong API call. If you forget to add an assertion, the test will pass (Jest's expect.assertions(n) can guard against this). Or you can force a test to pass with results that do not actually reflect the intended business logic. You will also want to test some failure conditions, like "fails to create an account without the required permissions"; be careful, though, because the test may be failing for unrelated reasons, e.g., database query failures or connectivity issues.

When testing create and edit APIs, simply checking body.success is weaker than asserting on the actual values in the response.

Next up: manual testing, which means logging in account by account for testing with cherry-picked data culled from copies of recent live data.

Because testing makes us sleep better.

