This directory contains comprehensive unit tests for the core components of the ride_rails Rails analysis agent.
- **test_agent_config.py** - Tests for `agent.config.AgentConfig`
  - Configuration loading and validation
  - Environment variable handling
  - Default and custom configurations
- **test_tool_registry.py** - Tests for `agent.tool_registry.ToolRegistry`
  - Tool initialization and management
  - Error handling for failed tools
  - Tool schema generation for LLMs
- **test_base_tool.py** - Tests for `tools.base_tool.BaseTool`
  - Abstract base class functionality
  - Debug logging and validation
  - Tool execution patterns
- **test_ripgrep_tool.py** - Tests for `tools.ripgrep_tool.RipgrepTool`
  - Ripgrep command construction
  - Output parsing and error handling
  - File type filtering and search options
- **test_state_machine.py** - Tests for `agent.state_machine.ReActStateMachine`
  - Step tracking and tool usage statistics
  - Search attempt recording
  - State management and reset functionality
- **test_response_analyzer.py** - Tests for `agent.response_analyzer.ResponseAnalyzer`
  - Response analysis and finalization decisions
  - Reasoning extraction and action detection
  - Confidence scoring and step progression
- **conftest.py** - Shared pytest fixtures and test utilities
  - Temporary Rails project structure
  - Mock objects for testing
  - Common test data and helpers
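The shared `temp_project_root` fixture might look like the following sketch; the directory layout and the `make_rails_tree` helper are assumptions for illustration, not the actual contents of `conftest.py`:

```python
# tests/conftest.py (sketch)
import pytest

def make_rails_tree(root):
    """Build a minimal Rails-like layout; the real fixture may create a richer tree."""
    (root / "app" / "models").mkdir(parents=True)
    (root / "app" / "controllers").mkdir(parents=True)
    (root / "config").mkdir()
    (root / "app" / "models" / "user.rb").write_text(
        "class User < ApplicationRecord\nend\n"
    )
    return root

@pytest.fixture
def temp_project_root(tmp_path):
    """Temporary Rails project structure, cleaned up automatically by pytest."""
    return make_rails_tree(tmp_path)
```

Building the tree in a plain helper keeps it reusable outside fixtures, e.g. in parametrized setup code.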
Run the tests with the bundled runner:

```bash
# Run all tests
python tests/run_tests.py

# Run with verbose output
python tests/run_tests.py -v

# Run specific test file
python tests/run_tests.py test_agent_config.py

# Run with coverage
python tests/run_tests.py -c
```

Alternatively, invoke pytest directly:

```bash
# Run all tests
pytest tests/

# Run specific test file
pytest tests/test_agent_config.py

# Run with coverage
pytest tests/ --cov=agent --cov=tools --cov-report=term-missing

# Run specific test method
pytest tests/test_agent_config.py::TestAgentConfig::test_default_configuration

# Run only configuration tests
pytest tests/test_agent_config.py tests/test_tool_registry.py

# Run only tool tests
pytest tests/test_base_tool.py tests/test_ripgrep_tool.py

# Run with specific markers (if added)
pytest -m "unit" tests/
```

The tests require these packages (typically available in the project environment):

- `pytest` - Test framework
- `pytest-cov` - Coverage reporting (optional)
- `unittest.mock` - Mocking (standard library)
The test suite covers:
- ✅ Configuration Management - Environment variables, validation, defaults
- ✅ Tool System - Registration, initialization, execution patterns
- ✅ Search Tools - Ripgrep integration, output parsing, error handling
- ✅ State Management - Step tracking, tool usage statistics
- ✅ Response Analysis - Finalization logic, reasoning extraction
- ✅ Error Handling - Graceful failure handling across components
- ✅ Debug Features - Debug logging and diagnostic capabilities
- Test files: `test_<component_name>.py`
- Test classes: `Test<ComponentName>`
- Test methods: `test_<specific_behavior>`
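Applied to a hypothetical component, the conventions look like this (the component is defined inline purely for illustration; in the real suite it would be imported from `agent/` or `tools/`):

```python
# tests/test_example_component.py  (file: test_<component_name>.py)

# Inline stand-in for the component under test.
class ExampleComponent:
    def __init__(self, value="default"):
        self.value = value

class TestExampleComponent:  # class: Test<ComponentName>
    def test_default_value(self):  # method: test_<specific_behavior>
        assert ExampleComponent().value == "default"

    def test_custom_value(self):
        assert ExampleComponent("custom").value == "custom"
```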
Tests that need a project tree use the shared fixture:

```python
def test_with_project_root(temp_project_root):
    """Test using temporary Rails project structure."""
    tool = SomeTool(temp_project_root)
    # Test with real file structure
```

External commands are mocked rather than executed:

```python
from unittest.mock import Mock, patch

@patch('subprocess.run')
def test_external_command(mock_run):
    """Test tool that uses external commands."""
    mock_run.return_value = Mock(returncode=0, stdout="output")
    # Test without actually running external commands
```

Error conditions are asserted explicitly:

```python
import pytest

def test_error_handling():
    """Test proper error handling."""
    tool = SomeTool()
    with pytest.raises(ValueError, match="Expected error message"):
        tool.execute(invalid_params)
```

These tests are designed to run in CI environments:
- No external dependencies (Rails, ripgrep mocked)
- Fast execution (under 30 seconds for full suite)
- Deterministic results (no flaky tests)
- Clear failure messages
When running with coverage (`-c` flag), reports are generated as:

- Terminal output with missing lines
- An HTML report in `tests/coverage_html/` (if pytest-cov is installed)
```bash
# Run one specific test
pytest tests/test_agent_config.py::TestAgentConfig::test_default_configuration -v
```

Debug output can be captured with pytest's `capfd` fixture:

```python
def test_with_debug(capfd):
    """Test with captured output."""
    tool = SomeTool(debug=True)
    result = tool.execute(params)
    captured = capfd.readouterr()
    assert "debug message" in captured.out
```

Each test runs in isolation:
- Fresh instances of components
- Temporary directories cleaned up automatically
- Mocks reset between tests
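The isolation points above can be sketched with pytest's built-in `tmp_path` fixture, which hands every test its own fresh directory and handles cleanup; the config file written here is purely illustrative:

```python
def test_writes_config(tmp_path):
    # tmp_path is a pathlib.Path unique to this test invocation;
    # pytest creates it fresh and removes old ones automatically.
    config_file = tmp_path / "settings.yml"
    config_file.write_text("debug: true\n")
    assert config_file.read_text() == "debug: true\n"
```

Because no state leaks through module globals or shared directories, tests like this can run in any order.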
- Test Independence - Each test should run independently
- Clear Names - Test names should describe what they test
- Single Responsibility - One test per behavior/scenario
- Mock External Dependencies - Don't rely on external tools
- Use Fixtures - Reuse common setup via fixtures
- Test Edge Cases - Include error conditions and edge cases
- Fast Execution - Keep tests fast for developer productivity
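Several of these practices combine naturally with `pytest.mark.parametrize`, which keeps each edge case an independent, clearly named test; the `parse_port` validator below is a hypothetical example, not part of the project:

```python
import pytest

def parse_port(value):
    """Hypothetical helper: parse a TCP port string, rejecting out-of-range values."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# One test per behavior; each parametrized case runs (and fails) independently.
@pytest.mark.parametrize("raw,expected", [("80", 80), ("65535", 65535)])
def test_parse_port_accepts_valid_values(raw, expected):
    assert parse_port(raw) == expected

@pytest.mark.parametrize("raw", ["0", "70000", "-1"])
def test_parse_port_rejects_out_of_range(raw):
    with pytest.raises(ValueError):
        parse_port(raw)
```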