Catching Assertionless Tests
At Shopify we have more than 300,000 tests in our core monolith. These tests cover most of our application code and give us a great deal of confidence when making changes to our app. But do all of these tests still perform the duties they were intended to, even though some were added more than 10 years ago? In this article we will explore how Shopify revised its tests to reveal and fix ones that were testing nothing while still being executed.
Ordinary Minitest test
To start, let’s create and run an ordinary Minitest test. This test is going to verify the status transition of an Order object:
def test_order_archive_moves_status_to_archived
  order = Order.new
  assert_equal(:open, order.status)
  order.archive!
  assert_equal(:archived, order.status)
end
Now, let’s run the test:
# Running:
1 runs, 2 assertions, 0 failures, 0 errors, 0 skips
The output looks good, all as we expected: 2 assertions have been reported as a result of the two assert_equal calls.
Now, have a look at another test. This time the test verifies that MyCollection serialization happens by calling the encode method of an injected serializer. This test doesn’t care about serialization details, so we will create a mock to serve as a test serializer:
def test_collection_is_using_injected_serializer_to_encode_data
  serializer_mock = Minitest::Mock.new
  my_collection = MyCollection.new(serializer: serializer_mock)
  my_collection.push("data")
  expected_serialized_data = "[my_serialized_collection]"
  serializer_mock.expect(:encode, expected_serialized_data) do |arg|
    assert_equal(["data"], arg)
    assert_equal(expected_serialized_data, my_collection.serialize)
  end
end
Running it:
1 runs, 0 assertions, 0 failures, 0 errors, 0 skips
That’s unexpected! No assertions, even though we know we are calling assert_equal at least twice.
What a great opportunity to utilize the puts debugging technique and provide a little more insight into the execution flow of the test:
def test_collection_is_using_injected_serializer_to_encode_data
  puts "test starts"
  serializer_mock = Minitest::Mock.new
  my_collection = MyCollection.new(serializer: serializer_mock)
  my_collection.push("data")
  expected_serialized_data = "[my_serialized_collection]"
  puts "before stubbing the :encode call"
  serializer_mock.expect(:encode, expected_serialized_data) do |arg|
    puts "before assertions"
    assert_equal(["data"], arg)
    assert_equal(expected_serialized_data, my_collection.serialize)
    puts "after assertions"
  end
  puts "test ends"
end
And run the test:
# Running:
test starts
before stubbing the :encode call
test ends
It looks like the test doesn’t execute the block passed to the expect method and thus never performs the assertions. Let’s have a look at the Minitest::Mock#expect docs.
Ah! We have confused the Minitest::Mock#expect method with Object#stub, provided by Minitest, which stubs a method call for the duration of a block. Unlike Object#stub, Minitest::Mock#expect only accepts a block when we want to perform additional assertions on the arguments passed to the method, which is not needed in our case. Let’s fix the #expect method call:
def test_collection_is_using_injected_serializer_to_encode_data
  serializer_mock = Minitest::Mock.new
  my_collection = MyCollection.new(serializer: serializer_mock)
  my_collection.push("data")
  expected_serialized_data = "[my_serialized_collection]"
  serializer_mock.expect(:encode, expected_serialized_data, [["data"]])
  assert_equal(expected_serialized_data, my_collection.serialize)
end
And run the test once again:
1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
Awesome! Now it actually performs the assertion.
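For contrast, here is how Object#stub behaves: it replaces the method only for the duration of the block and then restores the original. A minimal, self-contained sketch (the serializer object and its original encode implementation here are made up for illustration, not taken from the MyCollection example above):

```ruby
require "minitest/mock"

# An illustrative serializer with a real encode method.
serializer = Object.new
def serializer.encode(items)
  items.join(",")
end

inside = nil
# Object#stub replaces :encode with the given value only while the
# block runs, then restores the original implementation afterwards.
serializer.stub(:encode, "[my_serialized_collection]") do
  inside = serializer.encode(["data"])
end
outside = serializer.encode(["data"])

puts inside   # the stubbed value
puts outside  # the original implementation again
```

This is the behavior we mistakenly expected from Minitest::Mock#expect: code placed inside a stub block runs with the stub active, which is why the confusion between the two APIs is so easy to fall into.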
Could this have been prevented?
What an easy mistake to make, but a hard one to spot! This could have easily been merged if not for our attention to the assertion count. Was there a way to prevent this from the beginning?
To prevent similar situations from happening again, we will patch the Minitest::Test::LifecycleHooks#after_teardown hook with logic that fails a test if it performs no assertions:
module CheckAssertions
  def after_teardown
    super
    return if skipped? || error?

    raise(Minitest::Assertion, "Test is missing assertions") if assertions.zero?
  end
end

Minitest::Test.prepend(CheckAssertions)
We are not interested in skipped or already-errored tests, so we add an early return. Otherwise, we fail the test if it performed 0 assertions.
Running our assertionless test from earlier now results in:
1) Failure:
MyTest#test_collection_is_using_injected_serializer_to_encode_data [minitest_example_test.rb:39]:
Test is missing assertions
1 runs, 0 assertions, 1 failures, 0 errors, 0 skips
That serves as a much more effective measure against this test being merged into the codebase while doing nothing, without us noticing.
Raising an exception is a clean solution for a newly created or small application, but for a large existing codebase you may benefit from a solution like deprecation_toolkit: record the existing violations first so you can address them gradually, while immediately preventing new ones.
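As a rough illustration of that record-then-enforce approach (an in-memory allowlist sketch, not deprecation_toolkit’s actual API; in a real setup the list would be recorded to and read from a file):

```ruby
require "minitest/autorun"

# Hypothetical allowlist of tests already known to violate the rule.
KNOWN_ASSERTIONLESS = ["LegacyTest#test_old_behavior"].freeze

module CheckAssertions
  def after_teardown
    super
    return if skipped? || error?
    return unless assertions.zero?

    id = "#{self.class.name}##{name}"
    # Existing violations keep passing for now; new ones fail immediately.
    return if KNOWN_ASSERTIONLESS.include?(id)

    raise(Minitest::Assertion, "Test is missing assertions: #{id}")
  end
end

Minitest::Test.prepend(CheckAssertions)
```

The allowlist then becomes a burn-down list: as each legacy test is fixed, its entry is removed, and the check ratchets toward zero tolerated violations.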
Two types of assertionless tests
After learning about assertionless tests and how to catch them, let’s get familiar with the two categories they fall into: truly broken tests, and tests that are valid but still report no assertions.
Truly broken tests
The first category includes tests that are truly broken and do not perform their intended assertions. We just explored an example of such a test, but let’s have a look at one more:
def test_all_published_posts_should_have_a_reviewer
  published_posts = Post.published.to_a
  published_posts.each do |published_post|
    assert_predicate published_post.reviewer, :present?
  end
end
The test above iterates over an array of published posts and verifies that each one has a reviewer. Is this an assertionless test? It depends. It may suddenly become one if the published scope changes in a way that returns no records in your test environment: the test would iterate over an empty array and never call assert_predicate. Luckily, the check we added will prevent this test from ever passing while issuing no assertions.
In addition to the assertionless test check, I prefer verifying the setup of the test before even attempting any assertions, for example:
def test_all_published_posts_should_have_a_reviewer
  published_posts = Post.published.to_a
  flunk("test requires non-empty published posts collection") if published_posts.empty?

  published_posts.each do |published_post|
    assert_predicate published_post.reviewer, :present?
  end
end
The flunk line fails the test immediately, with a clear message, if published_posts is ever empty. It doesn’t have to be flunk: it can also be a raised exception, or an assertion similar to the assert_equal(:open, order.status) we added in our first test to make sure the objects under test satisfy the initial requirement. I prefer flunk because it distinguishes setup-type checks from the assertions that verify the behavior of the code.
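If you prefer a plain assertion over flunk for the setup check, refute_empty reads naturally here. A self-contained sketch, with a minimal stand-in Post model invented for illustration (a real test would use the application’s Post model and its published scope, and likely assert_predicate with :present? from ActiveSupport):

```ruby
require "minitest/autorun"

# Minimal stand-in so the sketch runs on its own.
Post = Struct.new(:reviewer) do
  def self.published
    [new("alice"), new("bob")]
  end
end

class PublishedPostsTest < Minitest::Test
  def test_all_published_posts_should_have_a_reviewer
    published_posts = Post.published.to_a

    # Setup check as an assertion instead of flunk: it guards the
    # precondition and also counts toward the assertions total.
    refute_empty(published_posts, "test requires non-empty published posts collection")

    published_posts.each do |published_post|
      refute_nil(published_post.reviewer)
    end
  end
end
```

One trade-off: unlike flunk, refute_empty always increments the assertions counter, so this variant would satisfy the assertionless check even if the scope later returned no records, whereas the flunk variant would be caught by it.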
Valid but assertionless tests
The second category consists of tests that will trigger the assertionless exception we just added while being perfectly valid. The most common example is a test that is expected to fail when an exception is raised, and to be considered successful otherwise. For example:
def test_passed_if_nothing_raised
  MyCode.doesnt_raise
end
A test like the one above will now fail because of the assertionless check we added. The most common solution is to wrap the code in an assert_nothing_raised call, which increments the assertions count behind the scenes. Some people prefer an explicit assertion on the return value of the code instead of assert_nothing_raised, which is also a valid approach.
def test_passed_if_nothing_raised
  assert_nothing_raised { MyCode.doesnt_raise }
end
In rare cases, when there is no suitable assertion or we are using a manually built assertion helper, explicitly incrementing the assertions counter is a fair option. For example:
def test_passed_if_nothing_raised
  MyCode.doesnt_raise
  # The test passed if the line below has been executed
  self.assertions += 1
end
This is the downside of adding an assertionless check to a test suite: some tests have to become more verbose in order to satisfy it. However, some may argue that this is actually a benefit, since those tests become more readable and their intentions clearer.
Opportunity to contribute to open source projects
Improving tests is a great way to make your first contribution to open source while learning about software internals. Changing tests carries no risk of breaking the main code, and test improvements, especially ones that fix completely broken tests, are always welcomed by maintainers. Give it a try in your favorite codebases and see how many assertionless tests the check reveals. You may use one of the PRs to rails/rails as an example: https://github.com/rails/rails/pull/48065
RSpec
Unfortunately, at this moment RSpec does not have an assertions/expectations counter, so it is not feasible to set up similar logic to catch assertionless tests in RSpec. However, there is an open issue in the rspec-core repository that is waiting to be picked up.
Conclusion
Testing the tests themselves is highly beneficial, enhancing overall confidence in the test suite. It also presents a valuable opportunity for open source contributions, allowing individuals to optimize slow tests and ensure their intended behavior. By actively engaging in these contributions, we can collectively strengthen the reliability and effectiveness of open source projects.