Sometimes you don’t write unit tests. Your reason for not doing so always falls into one of two categories.
The code you just wrote would be so much easier to test using system-level testing. For example…
- The setup and teardown would be 10x the test code.
- There’s too much interaction with multiple data stores or third-party vendors.
- Your dev boxes or CI server don’t all have the necessary technology installed.
These are rational reasons to not write unit tests for new code. You’re fine.
But sometimes you don’t write unit tests because the code you just wrote is so darn obvious.
It’s really simple. It’s straightforward. It’s nearly trivial. Why bother writing unit tests for it?
Well, I’ll tell you why you should test it. In fact I’ll give you three reasons.
It might not be quite so obvious. We’re very good at fooling ourselves in an infinite number of ways. “When this code executes, we know XYZ has already been done.” “If this function call returns without an exception, we know the other server is up.” “If the key exists, it will have a value.”
But maybe one of your assumptions is wrong!
- You didn’t consider a corner case.
- You didn’t stop to think what would happen if your poorly constructed cloud network has a transient failure between point A and point B.
- You didn’t realize that when the XYZ wrapper initializes, there’s a brief period when the key exists but the value is empty.
At least one of your assumptions may not be true. Wouldn’t it be better to find out in a unit test?
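One of the assumptions above — “if the key exists, it will have a value” — is exactly the kind of thing a tiny unit test can check. Here’s a minimal sketch with invented names (`lookup`, `assumption_holds`, and the toy stores are all hypothetical, modeling the initialization window described above):

```python
# Hypothetical minimal model of the "key exists but value is empty"
# window: during initialization, a wrapper may pre-create keys before
# their values are filled in.

def lookup(store, key):
    """Return the value for key. The caller's assumption is that a
    present key always carries a non-empty value."""
    return store[key]

def assumption_holds(store, key):
    """Unit-test-style check of that assumption: the key must exist
    AND its value must be non-empty."""
    return key in store and bool(lookup(store, key))

# A snapshot taken mid-initialization violates the assumption:
mid_init = {"endpoint": ""}                  # key exists, value empty
ready    = {"endpoint": "ftp.example.com"}   # fully initialized
```

A test asserting `assumption_holds(mid_init, "endpoint")` fails immediately — which is the point: the unit test is where you want to discover that the assumption doesn’t hold during initialization.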
If the code is so easy, writing the unit tests should be easy too. If your new chunk of code really is trivial, then writing its unit tests should be quick and painless.
Oh, it’s not easy? Because constructing the arguments isn’t so easy, or concocting the expected results is a little tricky? That should tell you something, pardner.
Clunky code unit tests can expose bugs in unexpected ways. When I test “obvious” code, I use a technique I call “clunky code.” (I’m not good at naming things because I’m not in marketing.)
It’s simple: If creating the expected results requires mimicking the code under test, I deliberately create the expected results in the most awkward, different, and simple possible way. So the unit test is the negative inverse reciprocal of a cut-and-paste of the code under test.
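Here’s a toy sketch of the idea, with invented names (`sum_of_squares` standing in for the code under test): the test builds its expected results the dumb way, sharing no expression with the implementation.

```python
def sum_of_squares(n):
    """Code under test: sum of i*i for i in 0..n-1."""
    return sum(i * i for i in range(n))

def clunky_expected(n):
    """Deliberately awkward and different: no generators, no reuse of
    the formula from the code under test. Squares are built by
    repeated addition."""
    total = 0
    i = 0
    while i < n:
        square = 0
        for _ in range(i):   # compute i*i by adding i, i times
            square += i
        total += square
        i += 1
    return total
```

Because the two paths share nothing, a bug in either one shows up as a mismatch instead of being faithfully copied into the expectation.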
Here’s an example. I had a function that established an FTP connection, and returned a list of (filename, creation date) tuples from a directory. The low-level FTP code had been used for months, but the results creation was new. It had regex and slice extraction in a loop, and converted “mmm dd hh:mm” into “mmm dd yyyy”. No big deal, right?
I coded the test to concoct the expected results in a butt-simple loop with different slice code. And guess what, I found a couple of bugs. One slice was brain-dead wrong, and another slice’s start and end indexes were both off by one. I found those bugs only because I computed the expected results differently.
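The original code isn’t shown, so here’s a hypothetical reconstruction of the shape of that test: the parser extracts (filename, date) tuples from FTP LIST-style lines with a regex, while the test builds its expected results with a butt-simple whitespace split — two independent extraction paths, as described above. All names and the line format are assumptions.

```python
import re

# Hypothetical parser: turn FTP LIST lines like
#   "-rw-r--r-- 1 user group 1234 Mar 15 10:30 report.txt"
# into (filename, "Mar 15 2024") tuples, swapping "hh:mm" for a year.
LIST_RE = re.compile(
    r"^\S+\s+\d+\s+\S+\s+\S+\s+\d+\s+"          # perms, links, user, group, size
    r"(?P<date>\w{3}\s\d{1,2})\s+(?P<time>\d{2}:\d{2})\s+(?P<name>.+)$"
)

def listing_tuples(lines, year="2024"):
    """Code under test: regex extraction in a loop."""
    results = []
    for line in lines:
        m = LIST_RE.match(line)
        if m:
            results.append((m.group("name"), f"{m.group('date')} {year}"))
    return results

def clunky_expected(lines, year="2024"):
    """Expected results built a deliberately different way:
    whitespace split into at most 9 fields, no regex at all."""
    out = []
    for line in lines:
        parts = line.split(None, 8)
        if len(parts) == 9 and ":" in parts[7]:
            out.append((parts[8], f"{parts[5]} {parts[6]} {year}"))
    return out
```

If either extraction path has an off-by-one or a bad field index, the two results disagree and the test fails — which is exactly how the slice bugs above were caught.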
I’ve had other instances where my clunky unit test code exposed bugs in functions that weren’t “official” test targets. I was pleasantly surprised to find the other bugs and squash them dead.
So, test your obvious code!
One thought on “Unit test your obvious code”
Non-atomic functions should always be part of a unit test. Functions that span interfaces should always be part of stress-tests. Functions that are data-sensitive should always be part of regression tests.
Unfortunately, the same time and resource pressures that gave rise to UML and Agile methodologies have tended to favor test cases being written by separate individuals who are not intimately familiar with the code’s construction, at the same time that new programming paradigms have started to “abstract away” topology complexity and these basic rules.
In addition, modern automated testing has often let SQA lean heavily on automated regression testing “bolted” directly onto unit testing — a (very) poor substitute for best practices.
Sometimes system integration testing is eliminated entirely as a discipline and replaced with “end-to-end” system testing, which is not precisely the same thing: each unit tester should be performing unit integration testing with the components that provide functional interfaces to their “unit” before any “end-to-end” system test.
I’ve seen massive projects considered sufficiently tested because they passed a “majority” of their (non-descript) “tests” — by clueless managers who are always looking for a way to shorten the cycle so they can jam in some crowd-pleasing feature, establish their dominance over the process, and collect their year-end bonuses.
Before I establish my own unit testing regimen on a “contract gig,” I judge the sophistication and maturity of the overall SDLC before I set the “rigor” of my approach. If unit-integration, stress, regression, or other important aspects are missing, I “beef up” my unit testing regimen — sometimes including my own test beds and tools — to ensure that my own code is sufficiently bullet-proof that I don’t spend my time debugging the SDLC instead of my code.
As far as your “clunky code” technique goes, it’s a sound one — as is developing portions of conditionalized code that are deliberately “fragile,” such that they fail “cataclysmically” in the event that some (very) basic assumption is violated during unit or system integration. As you have clearly stated, it’s better to catch problems with your code, your neighbor’s, or the entire process at the unit level — before all of the “shirts” are standing around, glad-handing each other on their mutual success.