Mocks and Mocking 2: Using Mocks for Test Isolation

Warning, Recently Updated

🚧 Warning, chapter in progress! 🚧

I’ve recently updated this chapter, and there are some new discussions of the pros and cons of mocking, which I’d love your feedback on!

In this chapter we’ll finish up our login system. While doing so, we’ll explore an alternative use of mocks: to isolate parts of the system from each other, enabling more targeted testing, fighting combinatorial explosion, and reducing duplication between tests.

Note
In doing so, we start to drift towards what’s called "London-school TDD", a variant of TDD that contrasts with the "Classical" or "Detroit" style I mostly show in the book. We won’t get into the details here, but London-school TDD places more emphasis on mocking and isolating parts of the system. As always there are pros and cons! Check out [appendix_purist_unit_tests] for a longer exploration of the London-style approach.

Along the way, we’ll learn a few more useful features of unittest.mock, and we’ll also have a discussion about how many tests is "enough".

Using Our Auth Backend in the Login View

We got our auth backend ready in the last chapter; now we need to use it in our login view. First we add it to settings.py:

Example 1. src/superlists/settings.py (ch20l001)
AUTH_USER_MODEL = "accounts.User"
AUTHENTICATION_BACKENDS = [
    "accounts.authentication.PasswordlessAuthenticationBackend",
]

[...]

Next let’s write some tests for what should happen in our view. Looking back at the spike again:

Example 2. src/accounts/views.py
def login(request):
    print("login view", file=sys.stderr)
    uid = request.GET.get("uid")
    user = auth.authenticate(uid=uid)
    if user is not None:
        auth.login(request, user)
    return redirect("/")

We call django.contrib.auth.authenticate, and then, if it returns a user, we call django.contrib.auth.login.

Tip
This is a good time to check out the Django docs on authentication for a little more context.

Straightforward Non-Mocky Test for our View

Here’s the most obvious test we might want to write: we think in terms of the behaviour we want:

  1. If someone has a valid Token, they should get logged in

  2. If someone tries to use an invalid Token (or none), it should not log them in.

Here’s how we might add the happy-path test for (1):

Example 3. src/accounts/tests/test_views.py (ch20l002)
from django.contrib import auth
[...]

class LoginViewTest(TestCase):
    def test_redirects_to_home_page(self):
        [...]

    def test_logs_in_if_given_valid_token(self):
        anon_user = auth.get_user(self.client)  # (1)
        self.assertEqual(anon_user.is_authenticated, False)  # (2)

        token = Token.objects.create(email="[email protected]")
        self.client.get(f"/accounts/login?token={token.uid}")

        user = auth.get_user(self.client)
        self.assertEqual(user.is_authenticated, True)  # (3)
        self.assertEqual(user.email, "[email protected]")  # (3)
  1. We use Django’s auth.get_user() to extract the current user from the Test Client.

  2. We verify we’re not logged in before we start (this isn’t strictly necessary, but it’s always nice to know you’re on firm ground).

  3. And here’s where we check that we’ve been logged in, with a user with the right email address.

And that will fail as expected:

    self.assertEqual(user.is_authenticated, True)
AssertionError: False != True

We can get it to pass by "cheating", like this:

Example 4. src/accounts/views.py (ch20l003)
from django.contrib import auth, messages
[...]


def login(request):
    User = auth.get_user_model()
    user = User.objects.create(email="[email protected]")
    auth.login(request, user)
    return redirect("/")

Which forces us to write another test:

Example 5. src/accounts/tests/test_views.py (ch20l004)
def test_shows_login_error_if_token_invalid(self):
    response = self.client.get("/accounts/login?token=invalid-token", follow=True)
    user = auth.get_user(self.client)
    self.assertEqual(user.is_authenticated, False)
    message = list(response.context["messages"])[0]
    self.assertEqual(
        message.message,
        "Invalid login link, please request a new one",
    )
    self.assertEqual(message.tags, "error")

And now we get that passing using the most straightforward implementation…​

Example 6. src/accounts/views.py (ch20l005)
def login(request):
    if Token.objects.filter(uid=request.GET["token"]).exists():  # (1) (2)
        User = auth.get_user_model()
        user = User.objects.create(email="[email protected]")  # (3)
        auth.login(request, user)
    else:
        messages.error(request, "Invalid login link, please request a new one")  # (4)
    return redirect("/")
  1. Oh wait, we forgot about our authentication backend and just did the query directly from the Token model? Well that’s arguably more straightforward, but how do we force ourselves to write the code the way we want it, ie using Django’s auth API?

  2. Oh dear, and the email address is still hardcoded. We might have to think about writing an extra test to force ourselves to fix that.

  3. Oh—​also, we’re hardcoding the creation of a user every time, but actually, we want the get-or-create logic that we implemented in our backend.

  4. This bit is OK at least! 😅

Is this starting to feel a bit familiar? We’ve already written all the tests for the various permutations of our authentication logic, and we’re considering writing equivalent tests at the views layer.

Combinatorial Explosion

Let’s recap the tests we might want to write at each layer in our application:

Table 1. What We Want to Test in Each Layer
Views Layer:

  • Valid Token means user is logged in

  • Invalid Token means user is not logged in

Authentication Backend:

  • Returns correct existing user for a valid token

  • Creates a new user for a new email address

  • Returns None for an invalid token

Models Layer:

  • Token associates email and uid

  • User can be retrieved from token UID

We already have 3 tests in the models layer, and 5 in the authentication layer. We started off writing the tests in the views layer, where, conceptually, we only really want two test cases, and we’re finding ourselves wondering if we need to write a whole bunch of tests that essentially duplicate the authentication layer tests.

This is an example of the combinatorial explosion problem.

The Car Factory Example

Imagine we’re testing a car factory, where:

  • First we choose the car type: normal, station-wagon, or convertible

  • Then we choose the engine type: petrol, diesel, or electric

  • And then we choose the colour: red, white, or hot pink.

How many tests do we need? Well, the upper bound to test every possible combination is 3 x 3 x 3 = 27 tests. That’s a lot! Let’s say the factory’s code looks something like this:

def build_car(car_type, engine_type, colour):
    engine = _create_engine(engine_type)
    naked_car = _assemble_car(engine, car_type)
    finished_car = _paint_car(naked_car, colour)
    return finished_car

How many tests do we actually need to write? Well, it depends on how we’re testing, how the different parts of the factory are integrated, and what we know about the system.

Do we need to test every single colour? Maybe! Or, maybe, if we’re happy that we can do 2 different colours, then we’re happy we can do any number, whether it’s 2, 3, or hundreds. Perhaps we need 2 tests, perhaps 3.

OK, but do we need to test that painting works for all the different engine types? Well, the painting process is probably independent of engine type: if we can paint a diesel in red, we can paint it in pink or white too.

But, perhaps it is affected by the car type: painting a convertible with a fabric roof might be a very different technological process to painting a hard-bodied car.

So we’d probably want to test that painting in general works for each car type (3 tests) but we don’t need to test that painting works for every engine type.

What we’re analysing here is the level of "coupling" between the different parts of the system. Painting is tightly coupled to car type, but not to engine type. Painting "needs to know" about car types, but it does not "need to know" about engine types.

Tip
The more tightly coupled two parts of the system are, the more tests you’ll need to write to cover all the combinations of their behaviour.

Another way of thinking about it is, what level are we writing tests at? You can choose to write low-level tests that cover only one part of the assembly process, or higher-level ones that test several steps together, or perhaps all of them end-to-end. See Analysing how many tests are needed at different levels.

An illustration of the car factory, with boxes for each step in the process (build engine, assemble, paint), and descriptions of testing each step separately vs testing them in combination.
Figure 1. Analysing how many tests are needed at different levels

Analysing things in these terms, we think about the inputs and outputs that apply to each type of test, as well as which attributes of the inputs matter, and which don’t.

Testing the first stage of the process, building the engine, is straightforward. The "engine type" input has three possible values, so we need three tests of the output, which is the engine. If we’re testing at the end-to-end level, no matter how many tests we have in total, we know we’ll need at least 3 of them to check that we can produce a car with a working engine of each type.

Testing the painting needs a bit more thought. If we test at the low level, the inputs are a naked car and a paint colour. There are theoretically 9 types of naked car; do we need to test all of them? No: the engine type doesn’t matter, so we only need to test 1 of each body type. Does that mean 3 x 3 = 9 tests? No. The colour and body type are independent. We can just test that all 3 colours work, and that all 3 body types work, so that’s 6 tests.
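
To make that a bit more concrete, here’s a rough sketch of what those 6 low-level paint tests might look like, using unittest’s subTest. The _create_engine, _assemble_car and _paint_car helpers (and the .colour attribute on the painted car) are hypothetical, as in the build_car sketch above:

import unittest

# Hypothetical helpers from the imaginary car factory module above.
from factory import _assemble_car, _create_engine, _paint_car


class PaintTests(unittest.TestCase):
    def test_can_paint_in_any_colour(self):
        # Colour is independent of body and engine type,
        # so one naked car is enough: 3 checks.
        naked_car = _assemble_car(_create_engine("petrol"), "normal")
        for colour in ["red", "white", "hot pink"]:
            with self.subTest(colour=colour):
                painted = _paint_car(naked_car, colour)
                self.assertEqual(painted.colour, colour)

    def test_can_paint_every_body_type(self):
        # Body type does affect painting (fabric roofs!),
        # but engine type doesn't, so we fix the engine: 3 more checks.
        for car_type in ["normal", "station-wagon", "convertible"]:
            with self.subTest(car_type=car_type):
                naked_car = _assemble_car(_create_engine("petrol"), car_type)
                painted = _paint_car(naked_car, "red")
                self.assertEqual(painted.colour, "red")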

What about at the end-to-end level? It depends on whether we’re being rigorous about "black box" testing, where we’re not supposed to know anything about how the production process works. In that case maybe we do need 27 tests. But if we allow that we know about the internals, then we can apply similar reasoning to what we used at the lower level. However many tests we end up with, we need 3 of them to check each colour, and 3 to check that each body type can be painted.

Using Mocks to Test Parts of Our System in Isolation

To recap, so far we have some minimal tests at the models layer, and we have comprehensive tests of our authentication backend, and we’re now wondering how many tests we need at the views layer.

Here’s the current state of our view:

Example 7. src/accounts/views.py
def login(request):
    if Token.objects.filter(uid=request.GET["token"]).exists():
        User = auth.get_user_model()
        user = User.objects.create(email="[email protected]")
        auth.login(request, user)
    else:
        messages.error(request, "Invalid login link, please request a new one")
    return redirect("/")

We know we want to transform it to something like this:

Example 8. src/accounts/views.py
def login(request):
    if user := auth.authenticate(uid=request.GET.get("token")):  # (1)
        auth.login(request, user)  # (2)
    else:
        messages.error(request, "Invalid login link, please request a new one")  # (3)

    return redirect("/")
  1. We want to refactor our logic to use the authenticate() function from our backend

  2. We have the "happy path" branch where the user gets logged in

  3. We have the "unhappy" path where the user gets an error message instead.

But currently our tests are letting us "get away" with the cheating/wrong implementation.

Here are three possible options for getting ourselves to the right state:

  1. Add more tests for all possible combinations at the views level (token exists but no user, token exists for existing user, invalid token, etc) until we end up duplicating all the logic in the auth backend in our view, and then feel justified in refactoring across to just calling the auth backend.

  2. Stick with our current two tests, and decide it’s OK to refactor already.

  3. Test the view in isolation, using mocks to verify that we call the auth backend.

Each option has pros and cons! If I was going for option (1), essentially going all in on test coverage at the views layer, I’d probably think about deleting all the tests at the auth layer afterwards.

If you were to ask me what my personal preference or instinctive choice would be, I’d say at this point it would be to go with (2): with one happy path and one unhappy path test, we’re OK to refactor and switch across already.

But since this chapter is about mocks, let’s investigate option (3) instead. Besides, it’ll be an excuse to do fun things with them, like playing with .return_value.

So far we’ve used mocks to test external dependencies, like Django’s mail-sending function. The main reason to use a mock was to isolate ourselves from external side effects, in this case, to avoid sending out actual emails during our tests.

In this section we’ll look at a different possible use case for mocks: testing parts of our own code in isolation from each other, as a way of reducing duplication and avoiding combinatorial explosion in our tests.

Mocks Can Also Let You Test the Implementation, When It Matters

On top of that, the fact that we’re using the Django auth.authenticate function rather than calling our own code directly is relevant. Django has already introduced an abstraction to decouple the specifics of authentication backends from the views that use them. This makes it easier for us to add further backends in future.

So in this case (in contrast to the example in [mocks-tightly-coupled-sidebar]) the implementation does matter, because we’ve decided to use a particular, specific interface to implement our authentication system, which is something we might want to document and verify in our tests, and mocks are one way to enable that.

Starting Again, Test-Driving our Implementation With Mocks

Let’s see how things would look if we had made this decision in the first place. We’ll start by reverting all the authentication stuff, both from our test and from our view.

Let’s disable the tests first (we can re-enable them later to sense-check things):

Example 9. src/accounts/tests/test_views.py (ch20l006)
class LoginViewTest(TestCase):
    def test_redirects_to_home_page(self):  (1)
        [...]
    def DONT_test_logs_in_if_given_valid_token(self):  (2)
        [...]
    def DONT_test_shows_login_error_if_token_invalid(self):  (2)
        [...]
  1. We can leave the test for the redirect, since that doesn’t involve the auth framework.

  2. I call this "dontifying" tests :)

Now let’s revert the view, and replace our hacky code with some TODOs:

Example 10. src/accounts/views.py (ch20l007)
# from django.contrib import auth, messages  # (1)
from django.contrib import messages
[...]


def login(request):
    # TODO: call authenticate(),  # (2)
    # then auth.login() with the user if we get one,
    # or messages.error() if we get None.
    return redirect("/")
  1. In order to demonstrate a common error message shortly, I’m also reverting our import of the contrib.auth module.

  2. And here’s where we delete our first implementation and replace it with some TODOs.

Let’s check all our tests pass:

$ python src/manage.py test accounts
[...]
Ran 14 tests in 0.021s

OK

Now let’s start again with mock-based tests. First we can write a test that checks we call authenticate() correctly:

Example 11. src/accounts/tests/test_views.py (ch20l008)
class LoginViewTest(TestCase):
    [...]

    @mock.patch("accounts.views.auth")  # (1)
    def test_calls_authenticate_with_uid_from_get_request(self, mock_auth):  # (2)
        self.client.get("/accounts/login?token=abcd123")
        self.assertEqual(
            mock_auth.authenticate.call_args,  # (3)
            mock.call(uid="abcd123"),  # (4)
        )
  1. We expect to be using the django.contrib.auth module in views.py, and we mock it out here. Note that this time, we’re not mocking out a function, we’re mocking out a whole module, and thus implicitly mocking out all the functions (and any other objects) that module contains.

  2. As usual, the mocked object is injected into our test method.

  3. This time, we’ve mocked out a module rather than a function. So we examine the call_args not of the mock_auth module, but of the mock_auth.authenticate function. Because all the attributes of a mock are more mocks, that’s a mock too. You can start to see why Mock objects are so convenient, compared to trying to build your own.

  4. Now, instead of "unpacking" the call args, we use the call function for a neater way of saying what it should have been called with—​that is, the token from the GET request. (See On Mock call_args.)

On Mock call_args

The .call_args property on a mock represents the positional and keyword arguments that the mock was called with. It’s a special "call" object type, which is essentially a tuple of (positional_args, keyword_args). positional_args is itself a tuple of the positional arguments, and keyword_args is a dictionary.

>>> from unittest.mock import Mock, call
>>> m = Mock()
>>> m(42, 43, 'positional arg 3', key='val', thing=666)
<Mock name='mock()' id='139909729163528'>

>>> m.call_args
call(42, 43, 'positional arg 3', key='val', thing=666)

>>> m.call_args == ((42, 43, 'positional arg 3'), {'key': 'val', 'thing': 666})
True
>>> m.call_args == call(42, 43, 'positional arg 3', key='val', thing=666)
True

So in our test, we could have done this instead:

Example 12. src/accounts/tests/test_views.py
    self.assertEqual(
        mock_auth.authenticate.call_args,
        ((), {'uid': 'abcd123'})
    )
    # or this
    args, kwargs = mock_auth.authenticate.call_args
    self.assertEqual(args, ())
    self.assertEqual(kwargs, {'uid': 'abcd123'})

But you can see how using the call helper is nicer.
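
As a side note, on Python 3.8+ the call object also has .args and .kwargs attributes, which can read a little more clearly than tuple unpacking (we won’t use them in the book’s listings, but they’re handy to know about). Continuing the console session above:

>>> m.call_args.args
(42, 43, 'positional arg 3')
>>> m.call_args.kwargs
{'key': 'val', 'thing': 666}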

What happens when we run the test? The first error is this:

$ python src/manage.py test accounts
[...]
AttributeError: <module 'accounts.views' from
'...goat-book/src/accounts/views.py'> does not have the attribute 'auth'
Tip
module foo does not have the attribute bar is a common first failure in a test that uses mocks. It’s telling you that you’re trying to mock out something that doesn’t yet exist (or isn’t yet imported) in the target module.

Once we re-import django.contrib.auth, the error changes:

Example 13. src/accounts/views.py (ch20l009)
from django.contrib import auth, messages
[...]

Now we get:

FAIL: test_calls_authenticate_with_uid_from_get_request [...]
[...]
AssertionError: None != call(uid='abcd123')

It’s telling us that the view doesn’t call the auth.authenticate function at all. Let’s fix that, but get it deliberately wrong, just to see:

Example 14. src/accounts/views.py (ch20l010)
def login(request):
    # TODO: call authenticate(),
    auth.authenticate("bang!")
    # then auth.login() with the user if we get one,
    # or messages.error() if we get None.
    return redirect("/")

Bang indeed!

$ python src/manage.py test accounts
[...]
AssertionError: call('bang!') != call(uid='abcd123')
[...]
FAILED (failures=1)

Let’s give authenticate the arguments it expects then:

Example 15. src/accounts/views.py (ch20l011)
def login(request):
    # TODO: call authenticate(),
    auth.authenticate(uid=request.GET["token"])
    # then auth.login() with the user if we get one,
    # or messages.error() if we get None.
    return redirect("/")

That gets us to passing tests:

$ python src/manage.py test accounts
Ran 15 tests in 0.023s

OK

Using mock.return_value

Next we want to check that if the authenticate function returns a user, we pass that into auth.login. Let’s see how that test looks:

Example 16. src/accounts/tests/test_views.py (ch20l012)
@mock.patch("accounts.views.auth")  # (1)
def test_calls_auth_login_with_user_if_there_is_one(self, mock_auth):
    response = self.client.get("/accounts/login?token=abcd123")
    self.assertEqual(
        mock_auth.login.call_args,  # (2)
        mock.call(
            response.wsgi_request,  # (3)
            mock_auth.authenticate.return_value,  # (4)
        ),
    )
  1. We mock the contrib.auth module again.

  2. This time we examine the call args for the auth.login function.

  3. We check that it’s called with the request object that the view sees,

  4. and the "user" object that the authenticate() function returns. Because authenticate() is also mocked out, we can use its special .return_value attribute.

When you call a mock, you get another mock. But you can also get a reference to that returned mock from the original mock that you called, via its .return_value attribute. Boy, it sure is hard to explain this stuff without saying "mock" a lot! Another little console illustration might help here:

>>> m = Mock()
>>> thing = m()
>>> thing
<Mock name='mock()' id='140652722034952'>
>>> m.return_value
<Mock name='mock()' id='140652722034952'>
>>> thing == m.return_value
True
Avoid Mock’s Magic assert_called…​ Methods?

If you’ve used unittest.mock before, you may have come across its special assert_called…​ methods, and you may be wondering why I didn’t use them. For example, instead of doing:

self.assertEqual(a_mock.call_args, call(foo, bar))

You can just do:

a_mock.assert_called_with(foo, bar)

And the mock library will raise an AssertionError for you if there is a mismatch.

Why not use that? For me, the problem with these magic methods is that it’s too easy to make a silly typo and end up with a test that always passes:

a_mock.asssert_called_with(foo, bar)  # will always pass

Unless you get the magic method name exactly right, you will just get a "normal" mock method, which silently returns another mock, and you may not realise that you’ve written a test that tests nothing at all.

That’s why I prefer to always have an explicit unittest method in there.
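
Here’s a quick console illustration of the trap (output abbreviated; note that newer versions of mock do catch a handful of common misspellings beginning with "assert", but not this one):

>>> from unittest.mock import Mock
>>> m = Mock()
>>> m(1, 2)
<Mock name='mock()' id='...'>

>>> m.assert_called_with(3, 4)  # the real method spots the mismatch
Traceback (most recent call last):
[...]
AssertionError: expected call not found.
[...]

>>> m.asssert_called_with(3, 4)  # typo: no error, just another silent mock
<Mock name='mock.asssert_called_with()' id='...'>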

In any case, what do we get from running the test?

$ python src/manage.py test accounts
[...]
AssertionError: None != call(<WSGIRequest: GET '/accounts/login?t[...]

Sure enough, it’s telling us that we’re not calling auth.login() at all yet. Let’s try doing that. Deliberately wrong as usual first!

Example 17. src/accounts/views.py (ch20l013)
def login(request):
    # TODO: call authenticate(),
    auth.authenticate(uid=request.GET["token"])
    # then auth.login() with the user if we get one,
    auth.login("ack!")
    # or messages.error() if we get None.
    return redirect("/")

Ack indeed!

$ python src/manage.py test accounts
[...]

ERROR: test_redirects_to_home_page
[...]
TypeError: login() missing 1 required positional argument: 'user'

FAIL: test_calls_auth_login_with_user_if_there_is_one [...]
[...]
AssertionError: call('ack!') != call(<WSGIRequest: GET
'/accounts/login?token=[...]
[...]

Ran 16 tests in 0.026s

FAILED (failures=1, errors=1)

That’s one expected failure from our mocky test, and one (more) unexpected failure from the non-mocky one.

Let’s see if we can fix them:

Example 18. src/accounts/views.py (ch20l014)
def login(request):
    # TODO: call authenticate(),
    user = auth.authenticate(uid=request.GET["token"])
    # then auth.login() with the user if we get one,
    auth.login(request, user)
    # or messages.error() if we get None.
    return redirect("/")

Well, that does fix our mocky test, but not the other one; it now has a slightly different complaint:

ERROR: test_redirects_to_home_page
(accounts.tests.test_views.LoginViewTest.test_redirects_to_home_page)
[...]
  File "...goat-book/src/accounts/views.py", line 33, in login
    auth.login(request, user)
[...]
AttributeError: 'AnonymousUser' object has no attribute '_meta'

It’s because we’re still calling auth.login indiscriminately on any kind of user, and that’s causing problems back in our original test for the redirect, which isn’t currently mocking out auth.login.

We can get back to passing like this:

Example 19. src/accounts/views.py (ch20l015)
def login(request):
    # TODO: call authenticate(),
    if user := auth.authenticate(uid=request.GET["token"]):  # (1)
        # then auth.login() with the user if we get one,
        auth.login(request, user)
    # or messages.error() if we get None.
    return redirect("/")
  1. If you haven’t seen this before, the := is known as the "walrus operator" (more formally, it’s the operator for an "assignment expression"), which was a controversial new feature in Python 3.8 (Guido pretty much burned out over it). It’s not often useful, but it is quite neat for cases like this, where you have a variable and want to do a conditional on it straight away; there’s a non-walrus version below for comparison. See this article for more explanation.
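
For comparison, here’s the same view written without the walrus operator, assuming the same imports as in views.py (this is just to show the equivalence, not something we’ll keep):

def login(request):
    # Equivalent to the walrus version: assign first, then test.
    user = auth.authenticate(uid=request.GET["token"])
    if user:
        auth.login(request, user)
    return redirect("/")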

This gets our unit test passing:

$ python src/manage.py test accounts
[...]

OK

Using .return_value during test setup

I’m a little nervous that we’ve introduced an if without an explicit test for it. Testing the unhappy path will reassure me. We can use our existing test for the error case to crib from.

We want to be able to set up our mocks to say: auth.authenticate() should return None. We can do that by setting the .return_value on the mock:

Example 20. src/accounts/tests/test_views.py (ch20l016)
    @mock.patch("accounts.views.auth")
    def test_adds_error_message_if_auth_user_is_None(self, mock_auth):
        mock_auth.authenticate.return_value = None  # (1)

        response = self.client.get("/accounts/login?token=abcd123", follow=True)

        message = list(response.context["messages"])[0]
        self.assertEqual(  # (2)
            message.message,
            "Invalid login link, please request a new one",
        )
        self.assertEqual(message.tags, "error")
  1. We use .return_value on our mock once again, but this time we assign to it before it’s used (in the setup part of the test, aka the "arrange" or "given" phase), rather than reading from it (in the assert/then part) as we did earlier.

  2. Our asserts are copied across from DONT_test_shows_login_error_if_token_invalid()

That gives us this somewhat cryptic, but expected failure:

ERROR: test_adds_error_message_if_auth_user_is_None [...]
[...]
    message = list(response.context["messages"])[0]
              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range

Essentially that’s saying there are no messages in our response.

We can get it passing like this, starting with a deliberate mistake as always:

Example 21. src/accounts/views.py (ch20l017)
def login(request):
    # TODO: call authenticate(),
    if user := auth.authenticate(uid=request.GET["token"]):
        # then auth.login() with the user if we get one,
        auth.login(request, user)
    else:
        # or messages.error() if we get None.
        messages.error(request, "boo")
    return redirect("/")

Which gives us:

AssertionError: 'boo' != 'Invalid login link, please request a new one'

And so:

Example 22. src/accounts/views.py (ch20l018)
def login(request):
    # TODO: call authenticate(),
    if user := auth.authenticate(uid=request.GET["token"]):
        # then auth.login() with the user if we get one,
        auth.login(request, user)
    else:
        # or messages.error() if we get None.
        messages.error(request, "Invalid login link, please request a new one")
    return redirect("/")

Now our tests pass:

$ python src/manage.py test accounts
[...]

Ran 17 tests in 0.025s

OK

And we can do a final refactor to remove those comments:

Example 23. src/accounts/views.py (ch20l019)
def login(request):
    if user := auth.authenticate(uid=request.GET["token"]):
        auth.login(request, user)
    else:
        messages.error(request, "Invalid login link, please request a new one")
    return redirect("/")

Lovely! What’s next?

UnDONTifying

Remember we still have the DONTified, non-mocky tests? Let’s re-enable them now to sense-check that our mocky tests have driven us to the right place:

Example 24. src/accounts/tests/test_views.py (ch20l020)
@@ -63,7 +63,7 @@ class LoginViewTest(TestCase):
         response = self.client.get("/accounts/login?token=abcd123")
         self.assertRedirects(response, "/")

-    def DONT_test_logs_in_if_given_valid_token(self):
+    def test_logs_in_if_given_valid_token(self):
         anon_user = auth.get_user(self.client)
         self.assertEqual(anon_user.is_authenticated, False)

@@ -74,7 +74,7 @@ class LoginViewTest(TestCase):
         self.assertEqual(user.is_authenticated, True)
         self.assertEqual(user.email, "[email protected]")

-    def DONT_test_shows_login_error_if_token_invalid(self):
+    def test_shows_login_error_if_token_invalid(self):
         response = self.client.get("/accounts/login?token=invalid-token", follow=True)

Sure enough they both pass:

$ python src/manage.py test accounts
[...]
Ran 19 tests in 0.025s

OK

Deciding Which Tests To Keep

We now definitely have duplicate tests:

Example 25. src/accounts/tests/test_views.py
class LoginViewTest(TestCase):
    def test_redirects_to_home_page(self):
        [...]

    def test_logs_in_if_given_valid_token(self):
        [...]

    def test_shows_login_error_if_token_invalid(self):
        [...]

    @mock.patch("accounts.views.auth")
    def test_calls_authenticate_with_uid_from_get_request(self, mock_auth):
        [...]

    @mock.patch("accounts.views.auth")
    def test_calls_auth_login_with_user_if_there_is_one(self, mock_auth):
        [...]

    @mock.patch("accounts.views.auth")
    def test_adds_error_message_if_auth_user_is_None(self, mock_auth):
        [...]

The redirect test could stay the same whether we’re using mocks or not. We then have two non-mocky tests for the happy and unhappy paths, and three mocky tests:

  • One checks that we are integrated with our auth backend correctly

  • One checks that we call the built-in auth.login function correctly, which tests the happy path.

  • And one that checks we set an error message in the unhappy path.

I think there are lots of ways to justify different choices here, but my instinct tends to be to avoid using mocks where possible. So, I propose we delete the two mocky tests for the happy and unhappy paths, since they are reasonably covered by the non-mocky ones. But I think we can justify keeping the first mocky test, because it adds value by checking that we’re doing our authentication the "right" way, ie by calling into Django’s auth.authenticate() function (instead of, eg, instantiating and calling our auth backend ourselves, or even just implementing authentication inline in the view).

Tip
"Test behaviour, not implementation" is a GREAT rule of thumb for tests. But sometimes, the fact that you’re using one implementation rather than another really is important. In these cases, a mocky test can be useful.

So let’s delete our last two mocky tests. I’m also going to rename the remaining one to make our intention clear: we want to check that we are using the Django auth library:

Example 26. src/accounts/tests/test_views.py (ch20l021)
    @mock.patch("accounts.views.auth")
    def test_calls_django_auth_authenticate(self, mock_auth):
        [...]

And we’re down to 17 tests:

$ python src/manage.py test accounts
[...]
Ran 17 tests in 0.015s

OK

The Moment of Truth: Will the FT Pass?

I think we’re just about ready to try our functional test!

Let’s just make sure our base template shows a different nav bar for logged-in and non–logged-in users (which our FT relies on):

Example 27. src/lists/templates/base.html (ch20l022)
<nav class="navbar">
  <div class="container-fluid">
    <a class="navbar-brand" href="/">Superlists</a>
    {% if user.email %}
      <span class="navbar-text">Logged in as {{ user.email }}</span>
      <form method="POST" action="TODO">
        {% csrf_token %}
        <button id="id_logout" class="btn btn-outline-secondary" type="submit">Log out</button>
      </form>
    {% else %}
      <form method="POST" action="{% url 'send_login_email' %}">
        <div class="input-group">
          <label class="navbar-text me-2" for="id_email_input">
            Enter your email to log in
          </label>
          <input
            id="id_email_input"
            name="email"
            class="form-control"
            placeholder="[email protected]"
          />
          {% csrf_token %}
        </div>
      </form>
    {% endif %}
  </div>
</nav>

OK, there’s a TODO in there for the logout button; we’ll get to that, but how does our FT look now?

$ python src/manage.py test functional_tests.test_login
[...]
.
 ---------------------------------------------------------------------
Ran 1 test in 3.282s

OK

It Works in Theory! Does It Work in Practice?

Wow! Can you believe it? I scarcely can! Time for a manual look around with runserver:

$ python src/manage.py runserver
[...]
Internal Server Error: /accounts/send_login_email
Traceback (most recent call last):
  File "...goat-book/accounts/views.py", line 20, in send_login_email

ConnectionRefusedError: [Errno 111] Connection refused

Using Our New Environment Variable, and Saving It to .env

You’ll probably get an error, like I did, when you try to run things manually. It’s because of two things:

  • Firstly, we need to re-add the email configuration to settings.py.

Example 28. src/superlists/settings.py (ch20l023)
EMAIL_HOST = "smtp.gmail.com"
EMAIL_HOST_USER = "[email protected]"
EMAIL_HOST_PASSWORD = os.environ.get("EMAIL_PASSWORD")
EMAIL_PORT = 587
EMAIL_USE_TLS = True
  • Secondly, we (probably) need to re-set the EMAIL_PASSWORD in our shell.

$ export EMAIL_PASSWORD="yoursekritpasswordhere"
Using a Local .env File for Development

Until now we’ve only used a .env file on the server (where we called it superlists/.env). That’s because we’ve made sure all the other settings have sensible defaults for dev, but there’s just no way to get a working login system without this one!

Just as we do on the server, you can also use a .env file to save project-specific environment variables. We’ll call this one literally just .env; that’s the convention, and the leading dot makes it a hidden file, on Unix-like systems at least:

$ echo .env >> .gitignore  # we don't want to commit our secrets into git!
$ echo EMAIL_PASSWORD="yoursekritpasswordhere" >> .env
$ set -a; source .env; set +a;

It does mean you have to remember to do that weird set -a; source…​ dance, every time you start working on the project, as well as remembering to activate your virtualenv.

If you search or ask around, you’ll find there are some tools and shell plugins that load virtualenvs and .env files automatically, and/or Django plugins that do this stuff too.
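
For example, the third-party python-dotenv package can do this from within Python. We don’t use it anywhere else in this book, so treat this as an optional sketch rather than a recommendation:

# Near the top of settings.py, before any os.environ.get() calls.
# Requires `pip install python-dotenv`.
from dotenv import load_dotenv

# Reads key=value pairs from a .env file into os.environ;
# by default it won't override variables that are already set,
# so an `export EMAIL_PASSWORD=...` in your shell still wins.
load_dotenv()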

And now…​

$ python src/manage.py runserver

…​you should see something like Check your email…​.

de-spiked site with success message
Figure 2. Check your email…​.

Woohoo!

I’ve been waiting to do a commit up until this moment, just to make sure everything works. At this point, you could make a series of separate commits—​one for the login view, one for the auth backend, one for the user model, one for wiring up the template. Or you could decide that, since they’re all interrelated, and none will work without the others, you may as well just have one big commit:

$ git status
$ git add .
$ git diff --staged
$ git commit -m "Custom passwordless auth backend + custom user model"

Finishing Off Our FT, Testing Logout

The last thing we need to do before we call it a day is to test the logout button. We extend the FT with a couple more steps:

Example 29. src/functional_tests/test_login.py (ch20l024)
        [...]
        # she is logged in!
        self.wait_for(
            lambda: self.browser.find_element(By.CSS_SELECTOR, "#id_logout"),
        )
        navbar = self.browser.find_element(By.CSS_SELECTOR, ".navbar")
        self.assertIn(TEST_EMAIL, navbar.text)

        # Now she logs out
        self.browser.find_element(By.CSS_SELECTOR, "#id_logout").click()

        # She is logged out
        self.wait_for(
            lambda: self.browser.find_element(By.CSS_SELECTOR, "input[name=email]")
        )
        navbar = self.browser.find_element(By.CSS_SELECTOR, ".navbar")
        self.assertNotIn(TEST_EMAIL, navbar.text)

With that, we can see that the test is failing because the logout button doesn’t have a valid URL to submit to:

$ python src/manage.py test functional_tests.test_login
[...]
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate
element: input[name=email]; [...]

So let’s tell the base template that we want a new URL named "logout":

Example 30. src/lists/templates/base.html (ch20l025)
          {% if user.email %}
            <span class="navbar-text">Logged in as {{ user.email }}</span>
            <form method="POST" action="{% url 'logout' %}">
              {% csrf_token %}
              <button id="id_logout" class="btn btn-outline-secondary" type="submit">Log out</button>
            </form>
          {% else %}

If you try the FTs at this point, you’ll see an error saying that URL doesn’t exist yet:

$ python src/manage.py test functional_tests.test_login
Internal Server Error: /
[...]
django.urls.exceptions.NoReverseMatch: Reverse for 'logout' not found. 'logout'
is not a valid view function or pattern name.

======================================================================
ERROR: test_login_using_magic_link
(functional_tests.test_login.LoginTest.test_login_using_magic_link)
[...]

selenium.common.exceptions.NoSuchElementException: Message: Unable to locate
element: #id_logout; [...]

Implementing a logout URL is actually very simple: we can use Django’s built-in logout view, which clears down the user’s session and redirects them to a page of our choice:

Example 31. src/accounts/urls.py (ch20l026)
from django.contrib.auth import views as auth_views
from django.urls import path

from . import views

urlpatterns = [
    path("send_login_email", views.send_login_email, name="send_login_email"),
    path("login", views.login, name="login"),
    path("logout", auth_views.LogoutView.as_view(next_page="/"), name="logout"),
]

And that gets us a fully passing FT—​indeed, a fully passing test suite:

$ python src/manage.py test functional_tests.test_login
[...]
OK
$ cd src && python manage.py test
[...]
Ran 57 tests in 78.124s

OK
Warning
We’re nowhere near a truly secure or acceptable login system here. Since this is just an example app for a book, we’ll leave it at that, but in "real life" you’d want to explore a lot more security and usability issues before calling the job done. We’re dangerously close to "rolling our own crypto" here, and relying on a more established login system would be much safer.

In the next chapter, we’ll start trying to put our login system to good use. In the meantime, do a commit and enjoy this recap:

On Mocking in Python
Using mock.return_value

The .return_value attribute on a mock can be used to access the return value of a mocked-out function, and thus check on how it gets used later in your code; this usually happens in the "Assert" or "Then" part of your test. It can also be assigned to in the "Arrange" or "Given" part of your test, as a way to say "we want this mocked-out function to return a particular value".

Mocks can ensure test isolation and reduce duplication

You can use mocks to isolate different parts of your code from each other, and thus test them independently. This can help you to avoid duplication, because you’re only testing a single layer at a time, rather than having to think about combinations of interactions of different layers. Used extensively, this approach leads to "London-style" TDD, but that’s quite different from the style I mostly follow and show in this book.

Mocks can allow you to verify implementation details

Most tests should test behaviour, not implementation. At some point though, we decided that the fact that we used a particular implementation was important, and so we used a mock as a way to verify that, and document it for our future selves.

There are alternatives to mocks, but they require rethinking how your code is structured

In a way, mocks make it "too easy". In other programming languages that lack Python’s dynamic ability to monkeypatch things at runtime, developers have had to work out alternative ways to test code with dependencies. While these techniques can be more complex, they do force you to think about how your code is structured, to cleanly identify your dependencies, and to build clean abstractions and interfaces around them. Further discussion is beyond the scope of this book, but check out Cosmic Python.

There’s a longer worked example of mocks and using them to improve the structure of code in [appendix_purist_unit_tests].