Handling Expected Failures, Conditional Skipping, Continuous Integration, and Code Coverage



Handling Expected Failures and Conditional Skipping in Python Testing


Imagine a row of traffic lights along a long highway, each light reflecting the status of a particular stretch of road. Green means everything is smooth, but when a light turns red, there's a problem somewhere along that stretch. This is similar to a test suite in Python, which turns green when all tests pass and red when any test fails.


1. Introduction to the state of a test suite:


A Python test suite can be thought of as a control panel for your code. Each test represents a different piece of functionality. When every functionality behaves as expected, all tests pass and the suite is green. Conversely, even a single failing test turns the suite red.


2. Handling "False Alarms":


Sometimes a test fails, but the failure is expected. You can think of these as "false alarms". For instance, you might have a piece of code that is not yet fully implemented and tests written for it in advance. Those tests fail for now but will pass once the code is complete.

In this case, the failing test does not indicate an error but rather work in progress. Writing tests before the implementation like this is an integral part of Test-Driven Development (TDD).


3. Example of Test-Driven Development:


Let's consider an example of creating a function called train_model() using TDD. Before we implement train_model(), we write a test for it. This test will fail because train_model() does not exist yet.

def test_train_model():
    model = train_model()
    assert model is not None

If you run this test now, it will fail and turn the test suite red. This is an expected failure or a "false alarm".
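
Eventually, the TDD cycle completes when you write just enough code to make this test pass. Here is a minimal sketch of that final step; the dictionary is only a placeholder standing in for a real model object:

def train_model():
    # Just enough implementation to make the failing test pass.
    # In a real project this would fit and return a trained model object.
    return {"trained": True}  # placeholder standing in for a real model

Until that implementation exists, however, the failing test keeps the suite red, which is the situation the next section addresses.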


4. Using the "xfail" decorator in pytest:


Pytest offers a handy decorator for such scenarios - the xfail decorator. We mark tests that we expect to fail with @pytest.mark.xfail.

import pytest

@pytest.mark.xfail
def test_train_model():
    model = train_model()
    assert model is not None

Now, when we run the tests, pytest knows that this test is expected to fail: it reports the test as xfailed instead of failed, and the suite stays green.
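
The xfail marker also accepts optional arguments: reason documents why the failure is expected, raises restricts the expected failure to a specific exception, and strict=True makes an unexpected pass count as a real failure. A small sketch, still assuming train_model() has not been written yet:

import pytest

@pytest.mark.xfail(reason="train_model() is not implemented yet",
                   raises=NameError, strict=True)
def test_train_model():
    model = train_model()
    assert model is not None

With strict=True, the suite turns red as soon as this test unexpectedly starts passing, which is a useful reminder to remove the marker once train_model() is implemented.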


5. Discussing conditional test failures:


There are situations when a test fails only under specific conditions. For instance, some functions might not work properly under a specific Python version. In such cases, we want to skip the test conditionally.


6. Using the "skipif" decorator:


To achieve this, pytest offers another decorator - skipif. We specify the condition under which the test should be skipped.

import sys
import pytest

@pytest.mark.skipif(sys.version_info < (3,0), reason="Does not work on Python 2.7")
def test_train_model():
    model = train_model()
    assert model is not None

In the above example, the test is skipped when the Python version is less than 3.0.


7. Providing reasons for skipping tests:


You might have noticed the reason argument in the skipif decorator. This string explains why the test was skipped and appears in the test report, so anyone reading the results understands why the test did not run.

@pytest.mark.skipif(sys.version_info < (3,0), reason="The function is not compatible with Python 2.7")
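
Version checks are not the only useful condition. Another common pattern is skipping a test that relies on an optional dependency when that dependency is not installed. The sketch below assumes, purely for illustration, that one of your tests needs pandas; pytest.importorskip imports the module if it is available and skips the test otherwise:

import pytest

def test_train_model_with_dataframe():
    # Skips this test (with an explanatory message) if pandas is not installed.
    pd = pytest.importorskip("pandas")
    data = pd.DataFrame({"feature": [1, 2, 3], "target": [0, 1, 0]})
    model = train_model(data)
    assert model is not None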


8. Applying pytest decorators to entire test classes:


The pytest decorators can also be applied to entire test classes. Every test within the class then inherits the marker.

import sys
import pytest

@pytest.mark.skipif(sys.version_info < (3,0), reason="Does not work on Python 2.7")
class TestModelTraining:
    def test_train_model(self):
        model = train_model()
        assert model is not None

    def test_train_model_with_data(self):
        data = [[0, 1], [1, 0]]  # placeholder training data for the example
        model = train_model(data)
        assert model is not None

In the above example, both test_train_model() and test_train_model_with_data() will be skipped if the Python version is less than 3.0.

If you run these tests, pytest will clearly indicate which tests were expected to fail or were skipped, along with the reasons.
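
Beyond classes, pytest also lets you mark every test in a module at once by assigning a module-level pytestmark variable. A short sketch using the same condition:

import sys
import pytest

# Applies to every test function and class in this module.
pytestmark = pytest.mark.skipif(sys.version_info < (3,0), reason="Does not work on Python 2.7")

def test_train_model():
    model = train_model()
    assert model is not None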


Implementing Continuous Integration and Code Coverage


Consider the busy highway from before. It would be quite impractical for you to manually check each traffic light, wouldn't it? This is where Continuous Integration (CI) comes into play. It's like having a robot that automatically checks the traffic lights for you. Moreover, this robot doesn't just stop at the traffic light status; it goes ahead and checks what percentage of the road is covered by these lights - that's what we refer to as Code Coverage.


1. Introduction to Code Coverage and Build Status:


Code coverage measures what percentage of your code's lines (or branches) is executed while the automated tests run. A high coverage percentage indicates that most of your code base is exercised by tests.
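
To make this concrete, here is a hypothetical function with two branches. If the only test always passes a non-empty list, the empty-list branch never runs, and a coverage tool will flag those lines as uncovered:

def normalize(values):
    # Covered branch: exercised by the test below.
    if values:
        total = sum(values)
        return [v / total for v in values]
    # Uncovered branch: never executed by the test below,
    # so this function's coverage stays below 100%.
    return []

def test_normalize():
    assert normalize([1, 1, 2]) == [0.25, 0.25, 0.5]

Adding a second test that calls normalize([]) would bring this function to full coverage.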

Build status, on the other hand, gives you a quick glimpse of the current state of your project. A passing build means the latest commit installed cleanly and all of its tests passed.


2. Build Status Badges:


Build status badges serve as an indicator of the stability of a project. They are small icons usually placed at the top of a GitHub README file, showing if the build is passing or failing.


3. Setting up Travis CI for a GitHub project:


Travis CI is a hosted, distributed continuous integration service used to build and test software projects hosted on GitHub.


To use Travis CI, you will need to create a .travis.yml configuration file at the root of your GitHub project and push it to GitHub. Here's an example of a basic .travis.yml for a Python project:

language: python
python:
  - "3.8"
install:
  - pip install -r requirements.txt
script:
  - pytest


4. Installing Travis CI from GitHub Marketplace:


Once you have your .travis.yml file set up, you will need to add Travis CI to your GitHub repository. This can be done from the GitHub Marketplace.


5. Triggering builds in Travis CI:


After installing Travis CI, every push to your GitHub repository will trigger a build in Travis CI, and it will run the tests as defined in .travis.yml.


6. Displaying the build status badge:


After setting up Travis CI, you can get a build status badge from Travis CI and add it to your README file in your GitHub repository. The code for adding the badge will look something like this:

[![Build Status](https://travis-ci.com/username/repository.svg?branch=master)](https://travis-ci.com/username/repository)

Remember to replace username and repository with your GitHub username and repository name.


7. The Importance of Code Coverage:


While a passing build status indicates a stable code base, it does not necessarily mean that your code is well-tested. This is why we also look at code coverage. A high code coverage percentage indicates that your tests cover a large part of your code, increasing confidence in its reliability.


8. Introduction to Codecov:


Codecov is a tool that helps you monitor your code coverage. It integrates well with GitHub and can also provide a badge showing your coverage percentage.


9. Modifying .travis.yml for Codecov:


To generate coverage reports, we modify the .travis.yml file to install pytest-cov, a pytest plugin that produces coverage reports, and to run the tests with coverage enabled.

language: python
python:
  - "3.8"
install:
  - pip install -r requirements.txt
  - pip install pytest-cov
script:
  - pytest --cov=./
after_success:
  - bash <(curl -s https://codecov.io/bash)

Here, pytest --cov=./ generates a coverage report for the entire project, and the bash <(curl -s https://codecov.io/bash) command sends this report to Codecov.


10. Installing Codecov from GitHub Marketplace:


Once you have updated .travis.yml, you will need to install Codecov from the GitHub Marketplace and enable it for your repository.


11. Generating Codecov reports:


After setting up Codecov, each commit or pull request will trigger a build that generates a coverage report. You can view this report on the Codecov website.


12. Displaying the Codecov badge:


Codecov provides a badge that displays your coverage percentage. You can add this badge to your README file, similar to the Travis CI badge.

[![codecov](https://codecov.io/gh/username/repository/branch/master/graph/badge.svg?token=YOURTOKEN)](https://codecov.io/gh/username/repository)

Remember to replace username, repository, and YOURTOKEN with your GitHub username, repository name, and your Codecov token, respectively.


Conclusion:


Practicing these concepts is crucial for any developer aiming to maintain a stable code base. By handling expected failures and conditional skipping, you can accurately assess the status of your tests. By implementing Continuous Integration and code coverage, you can ensure your code is both stable and well-tested. It's like having a dedicated robot that not only monitors the traffic lights along the highway, but also keeps track of how much of the highway is well-lit. These practices save manual effort, improve your code quality, and give anyone who interacts with your code (including future you) the confidence that it works as expected.

Happy coding!
