URST Test Infrastructure

This directory contains the test infrastructure for the URST (MicroPython Reliable Serial Transport) project.

Overview

The test infrastructure provides:

  • Base test class with comprehensive assertion methods
  • Test runner that executes all or individual test files and reports results
  • Coverage tracking capabilities to measure test effectiveness
  • Mock helpers for test data generation and error simulation

Files

Core Infrastructure

  • _base_test.py - Base test class with common assertion methods and utilities
  • testrunner.py - Test runner with reporting; runs all or specific tests
  • _coverage_tracker.py - Coverage tracking utility for test effectiveness measurement
  • _mocks.py - Mock infrastructure for MicroPython compatibility
  • _colors.py - ANSI color constants used for consistent runner output

Test Files

  • test_base_infrastructure.py - Tests for the base test infrastructure itself
  • test_codec.py - Tests for urst/codec.py
  • test_handler.py - Tests for urst/handler.py
  • test_protocol.py - Tests for urst/protocol.py
  • test_transport.py - Tests for urst/transport.py

Usage

Running All Tests

```shell
# From device/tests
python testrunner.py
```

Options

```shell
# Quiet output
python testrunner.py --quiet

# List available test files
python testrunner.py --list

# Verbose diagnostics (if supported)
python testrunner.py --verbose
```

Running Specific Tests

```shell
# Run a specific test file
python testrunner.py test_protocol.py

# Run multiple files
python testrunner.py test_codec.py test_transport.py

# Short names are supported (auto-adds test_ prefix and .py)
python testrunner.py protocol transport
```

Running Tests Directly

```shell
# Run individual test files directly
python test_base_infrastructure.py
python test_codec.py
python test_handler.py
python test_protocol.py
python test_transport.py
```

BaseTest Class

The BaseTest class provides a comprehensive set of assertion methods:

Basic Assertions

  • assert_equal(actual, expected, message) - Assert equality
  • assert_not_equal(actual, expected, message) - Assert inequality
  • assert_true(condition, message) - Assert condition is true
  • assert_false(condition, message) - Assert condition is false
  • assert_none(value, message) - Assert value is None
  • assert_not_none(value, message) - Assert value is not None

Container Assertions

  • assert_in(item, container, message) - Assert item is in container
  • assert_not_in(item, container, message) - Assert item is not in container

Specialized Assertions

  • assert_bytes_equal(actual, expected, message) - Assert bytes equality with hex output
  • assert_raises(exception_type, callable, *args, **kwargs) - Assert exception is raised

Test Lifecycle

  • setup() - Override for test setup (called before tests)
  • teardown() - Override for test cleanup (called after tests)
  • run_all_tests() - Execute all test methods and return results
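To illustrate how the assertion and lifecycle methods fit together, here is a minimal, self-contained sketch of a BaseTest-style class. It is a hypothetical stand-in for illustration only, not the project's actual implementation:

```python
# Simplified sketch of a BaseTest-style class (hypothetical stand-in,
# not the project's actual base test module).

class SketchBaseTest:
    def __init__(self, name):
        self.name = name
        self.passed = 0
        self.failed = 0

    def setup(self):
        pass  # override for test setup

    def teardown(self):
        pass  # override for test cleanup

    def assert_equal(self, actual, expected, message):
        if actual == expected:
            self.passed += 1
            print("PASS:", message)
        else:
            self.failed += 1
            print("FAIL:", message)
            print("  Expected:", expected)
            print("  Actual:  ", actual)

    def run_all_tests(self):
        # Discover and run every method whose name starts with "test_",
        # wrapped by setup() and teardown().
        self.setup()
        for attr in sorted(dir(self)):
            if attr.startswith("test_"):
                getattr(self, attr)()
        self.teardown()
        return {"passed": self.passed, "failed": self.failed}


class TestDemo(SketchBaseTest):
    def test_addition(self):
        self.assert_equal(1 + 1, 2, "addition works")

results = TestDemo("Demo").run_all_tests()
# results == {"passed": 1, "failed": 0}
```

The real class adds many more assertion methods, but the discovery loop in `run_all_tests()` captures the core pattern: any `test_`-prefixed method is found and executed automatically.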

Creating New Test Files

  1. Import the base class:

     ```python
     from _base_test import BaseTest, MockTestHelper
     ```

  2. Create the test class:

     ```python
     class TestMyComponent(BaseTest):
         def __init__(self):
             super().__init__("MyComponent")
     ```

  3. Add test methods:

     ```python
     def test_basic_functionality(self):
         """Test basic functionality."""
         result = my_function()
         self.assert_equal(result, expected_value, "Function returns expected value")
     ```

  4. Add the main execution block (note that it needs `import sys`):

     ```python
     import sys

     if __name__ == "__main__":
         test_suite = TestMyComponent()
         results = test_suite.run_all_tests()
         sys.exit(0 if results['failed'] == 0 else 1)
     ```

Mock Helpers

The MockTestHelper class provides utilities for test data generation:

  • create_test_data(size, pattern=0x55) - Create test data with pattern
  • create_random_data(size, seed=42) - Create pseudo-random test data
  • corrupt_data(data, position, new_value=0xFF) - Corrupt data for error testing
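The listed signatures suggest straightforward implementations. A hedged sketch of what such helpers might look like (hypothetical code; the project's actual helpers may differ):

```python
import random

# Hypothetical sketches of MockTestHelper-style utilities
# (illustrative only; not the project's actual implementation).

def create_test_data(size, pattern=0x55):
    """Return `size` bytes repeating a single byte pattern."""
    return bytes([pattern] * size)

def create_random_data(size, seed=42):
    """Return `size` pseudo-random bytes; a fixed seed keeps runs reproducible."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(size))

def corrupt_data(data, position, new_value=0xFF):
    """Return a copy of `data` with the byte at `position` overwritten."""
    mutable = bytearray(data)
    mutable[position] = new_value
    return bytes(mutable)

frame = create_test_data(4)   # b'\x55\x55\x55\x55'
bad = corrupt_data(frame, 2)  # b'\x55\x55\xff\x55'
```

Seeding the random generator is what makes `create_random_data` useful in tests: the "random" payload is identical on every run, so a failure is reproducible.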

Coverage Tracking

The coverage tracker provides simple coverage analysis:

```python
from _coverage_tracker import create_coverage_tracker

# Create tracker
tracker = create_coverage_tracker("../urst")
tracker.analyze_source_files()
tracker.start_tracking()

# Run tests...

tracker.stop_tracking()
tracker.print_coverage_report(detailed=True)
```

Test Output Format

The test runner provides detailed output:

```
================================================================================
Running 2 test files...
================================================================================
--- Running test: test_example.py ---
PASS: Basic functionality works
FAIL: Edge case handling
  Expected: True
  Actual:   False
--- Finished test: test_example.py ---

================================================================================
TEST SUMMARY
================================================================================
Files:  1/2 passed
Tests:  15/16 passed
Duration: 0.123s
Overall Success Rate: 93.8%
================================================================================
```

Best Practices

  1. Use descriptive test method names starting with test_
  2. Include meaningful assertion messages to help with debugging
  3. Test normal cases, edge cases, and error conditions
  4. Use setup/teardown for resource management
  5. Keep tests independent - each test should work in isolation
  6. Use mock helpers for consistent test data generation
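As a small illustration of guidelines 4 and 5, the sketch below (hypothetical names, not tied to the project's BaseTest class) rebuilds its state in `setup()` so each test can run in isolation:

```python
# Hypothetical illustration of setup/teardown and test independence.

class FakeResource:
    """Stand-in for a resource that tests must not share."""
    def __init__(self):
        self.items = []

class TestResourceIsolation:
    def setup(self):
        self.resource = FakeResource()  # fresh state for every test

    def teardown(self):
        self.resource = None            # release the resource

    def test_starts_empty(self):
        self.setup()
        assert self.resource.items == []
        self.teardown()

    def test_append_is_not_visible_elsewhere(self):
        self.setup()
        self.resource.items.append(b"\x55")
        assert self.resource.items == [b"\x55"]
        self.teardown()

# Either order works because no test depends on another's state.
suite = TestResourceIsolation()
suite.test_append_is_not_visible_elsewhere()
suite.test_starts_empty()
```

Because every test builds and tears down its own resource, the two tests pass in any order, which is exactly what "keep tests independent" buys you.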

Compatibility

The test infrastructure is designed to work with both:

  • CPython (development environment)
  • MicroPython (target environment)

Mock objects are used to simulate MicroPython-specific modules when running on CPython.
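One common way to provide such mocks is to register stand-in modules in `sys.modules` before the code under test imports them. A minimal sketch, assuming a MicroPython-style `utime` module (the project's actual `_mocks.py` may work differently):

```python
import sys
import time
import types

# Register a stand-in "utime" module so MicroPython-style imports
# succeed on CPython. Only two functions are stubbed here.
if "utime" not in sys.modules:
    utime_stub = types.ModuleType("utime")
    utime_stub.ticks_ms = lambda: int(time.monotonic() * 1000)
    utime_stub.sleep_ms = lambda ms: time.sleep(ms / 1000)
    sys.modules["utime"] = utime_stub

import utime  # resolves to the stub when running on CPython
start = utime.ticks_ms()
utime.sleep_ms(1)
elapsed = utime.ticks_ms() - start
```

Registering the stub before any `import utime` in the code under test means the tests themselves need no changes to run on CPython.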