When writing tests, mock out a subsystem if and only if it’s prohibitive to test against the real thing.
Our product uses Redis. It’s an awesome technology.
We’ve avoided needing Redis in our unit tests. But when I added a product feature that made deep use of Redis, I wrote its unit tests against a real Redis instance, and changed our development fabfile to spin up a test Redis server when running the unit tests locally.
(A QA purist might argue that unit tests should never touch major system components outside of the unit under test. I prefer to do as much testing as possible in unit tests, provided they don’t take too long to run, and setup and teardown aren’t too much of a PITA.)
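The fabfile change amounts to starting a disposable Redis before the test run and tearing it down afterwards. Here’s a minimal plain-Python sketch of that idea — the function name, port, and commands are my illustration here, not our actual fabfile:

```python
import subprocess
import time

def run_with_test_redis(server_cmd, test_cmd, startup_wait=0.5):
    """Start a throwaway server process, run the test command against it,
    then tear the server down. Returns the test command's exit code."""
    server = subprocess.Popen(server_cmd, stdout=subprocess.DEVNULL)
    try:
        time.sleep(startup_wait)      # crude wait for the server to bind its port
        return subprocess.call(test_cmd)
    finally:
        server.terminate()
        server.wait()                 # reap the process; don't leave a zombie

# Wrapped in a Fabric task, the call might look something like:
#   run_with_test_redis(
#       ["redis-server", "--port", "6390", "--save", ""],   # no persistence
#       ["python", "-m", "unittest", "discover"])
```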
This contributed to our builds failing on our Hudson CI server: Redis wasn’t installed there!
Why didn’t I immediately install Redis on our CI server?
- Our CI server had other problems
- I intended to nuke it and re-create it with the latest version of Jenkins. I just needed to clear some things off my plate first
- Our dev team had shrunk down to just two people
- We were both strict about running unit tests before checking code into the pool
- We were up to our necks in other alligators
From a test-quality perspective, if code uses X in production, it’s better for tests to run with X than with a simulation of X.
One of the many joys of working with Ryan is that he challenges my assumptions and makes me consider alternatives. Because of a perceived lack of elegance in needing Redis on our CI server, and because his work had been temporarily blocked by my code changes, he challenged me to replace my unit tests’ use of Redis with a mock.
I walked into work yesterday and it was quiet. All our critical bugs blocking Saturday’s release were closed. I thought, why not? I’ll give it a go. Today’s a good day to see what’s involved with replacing Redis with a mock!
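For what it’s worth, the simplest shape such a mock can take is a tiny in-memory fake that implements only the redis-py calls the unit under test actually makes. This is a hypothetical sketch — the get/set/delete method set is an assumption about what the code under test needs, not what we actually shipped:

```python
class FakeRedis:
    """Minimal in-memory stand-in for a redis-py client, covering only
    the calls our hypothetical unit under test makes (get/set/delete)."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        # redis-py stores and returns bytes, so the fake does too
        if not isinstance(value, bytes):
            value = str(value).encode()
        self._data[key] = value
        return True

    def get(self, key):
        return self._data.get(key)    # None when missing, like redis-py

    def delete(self, *keys):
        removed = 0
        for k in keys:
            if k in self._data:
                del self._data[k]
                removed += 1
        return removed                # redis-py returns the delete count
```

Tests would then inject `FakeRedis()` wherever the production code would construct a real redis-py client — which is exactly the coupling a change like this forces you to make explicit.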