Testing CoquiTitle Lambdas
This guide covers the testing infrastructure for CoquiTitle backend lambdas, including how to run tests, the mock module pattern, and how to add tests to new lambdas.
Running Tests
All test commands are run from the lambdas directory:
```bash
cd backend/coquititle/lambdas
```
Available Make Targets
| Command | Description |
|---|---|
| `make test-all` | Run all tests sequentially |
| `make test-rights` | Run shared/title_state tests |
| `make test-report` | Run report-generator tests |
| `make test-evidence` | Run evidence-resolver tests |
| `make test-pending` | Run pending-docs-processor tests |
| `make test-title-state-builder` | Run title-state-builder tests |
| `make coverage` | Run title_state tests with coverage |
| `make coverage-all` | Run all tests with coverage |
PYTHONPATH Requirements
Each lambda requires specific PYTHONPATH entries to resolve imports correctly. The Makefile handles this automatically, but when running pytest directly:
```bash
# For shared/title_state tests
PYTHONPATH=$(pwd) pytest shared/title_state/tests -v

# For lambda tests (include both root and lambda's src/)
PYTHONPATH=$(pwd):$(pwd)/evidence-resolver/src pytest evidence-resolver/tests -v
```
Running Individual Tests
```bash
# Run a specific test file
PYTHONPATH=$(pwd):$(pwd)/evidence-resolver/src pytest evidence-resolver/tests/test_utils.py -v

# Run a specific test function
PYTHONPATH=$(pwd):$(pwd)/evidence-resolver/src pytest evidence-resolver/tests/test_utils.py::test_parse_evidence -v
```
MockModule Pattern
Why We Mock Heavy Dependencies
CoquiTitle lambdas depend on heavy external libraries (Google Genai, Langfuse, boto3, etc.) that:
- Require credentials to import
- Have complex transitive dependencies
- Slow down test collection
We mock these at the module level before test collection so tests can run without all production dependencies installed.
How testing_utils.py Works
The shared testing utilities provide a module mocking system:
```python
# testing_utils.py
from types import ModuleType
from unittest.mock import MagicMock


class MockModule(ModuleType):
    """A mock module that returns MagicMock for any attribute access."""

    def __getattr__(self, name):
        if name.startswith('_'):
            raise AttributeError(name)
        mock = MagicMock()
        self.__dict__[name] = mock  # Cache for consistent access
        return mock
```
Key functions:
- `MockModule` - A module type that returns `MagicMock` for any attribute lookup
- `mock_module(name)` - Installs a mock module and all parent modules into `sys.modules`
- `install_mock_modules(modules)` - Batch installs multiple mock modules
- `COMMON_MOCK_MODULES` - Pre-configured list of heavy dependencies to mock
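The installer functions are small. Below is a minimal sketch of what `mock_module` and `install_mock_modules` do, assuming the `MockModule` class shown above; see testing_utils.py for the actual implementation.

```python
# Minimal sketch of the installer functions (assumes the MockModule class above);
# the real code lives in testing_utils.py.
import sys


def mock_module(name: str) -> None:
    """Install a MockModule under `name`, plus every parent package."""
    parts = name.split('.')
    for i in range(1, len(parts) + 1):
        prefix = '.'.join(parts[:i])
        if prefix not in sys.modules:
            sys.modules[prefix] = MockModule(prefix)


def install_mock_modules(modules: list[str]) -> None:
    """Batch-install mock modules before pytest collects any tests."""
    for name in modules:
        mock_module(name)
```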
Common Mock Modules
The following are mocked by default via `COMMON_MOCK_MODULES`:

```python
COMMON_MOCK_MODULES = [
    'langfuse',
    'langfuse.decorators',
    'google',
    'google.genai',
    'google.genai.types',
    'google.cloud',
    'google.cloud.documentai',
    'google.auth',
    'google.oauth2',
    'google.api_core',
    'psycopg2',
    'boto3',
    'botocore',
    'tenacity',
]
```
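Once a lambda's conftest.py installs these, test code can import the mocked libraries even when the real packages are not installed locally; every attribute access and call simply returns a `MagicMock`. For illustration (`boto3` here stands in for any mocked dependency):

```python
# Illustration only: with COMMON_MOCK_MODULES installed via conftest.py,
# importing a mocked dependency works without the real package.
import boto3  # resolves to a MockModule, not the real library

s3 = boto3.client('s3')            # attribute access returns a MagicMock
s3.get_object(Bucket='x', Key='y')  # any call succeeds and returns another MagicMock
```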
Adding New Mocks
To add a new mock, extend `COMMON_MOCK_MODULES` in your lambda's conftest.py:

```python
from testing_utils import install_mock_modules, COMMON_MOCK_MODULES

LAMBDA_MOCKS = COMMON_MOCK_MODULES + [
    'new_heavy_dependency',
    'new_heavy_dependency.submodule',
]

install_mock_modules(LAMBDA_MOCKS)
```
Why Tests Run Sequentially
Tests run sequentially (not in parallel) due to `sys.modules` pollution:
- Different mock sets - Each lambda's `conftest.py` installs different mock modules into `sys.modules`
- Pytest discovery - When pytest discovers tests, it processes `conftest.py` files which modify the global module state
- ImportPathMismatchError - Running in parallel causes pytest to discover the same `conftest.py` through different import paths
From the Makefile:
```makefile
# Run all tests sequentially to avoid conftest collisions.
# Each lambda's conftest.py installs different mock modules into sys.modules.
test-all:
	$(MAKE) test-rights && \
	$(MAKE) test-report && \
	$(MAKE) test-evidence && \
	$(MAKE) test-pending && \
	$(MAKE) test-title-state-builder
```
Future: Parallel Execution
Parallel execution may be revisited using `pytest-xdist` with `--forked` mode, which runs each test in a separate process with its own `sys.modules`.
Adding Tests to New Lambdas
1. Create Test Directory Structure
```text
your-lambda/
├── src/
│   └── handler.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py
│   └── test_handler.py
└── requirements.txt
```
2. Create conftest.py
Use the shared testing utilities to mock dependencies:
"""
Pytest configuration for your-lambda tests.
Uses shared testing utilities for module mocking.
"""
from testing_utils import install_mock_modules, COMMON_MOCK_MODULES
# Additional mocks specific to your lambda
YOUR_LAMBDA_MOCKS = COMMON_MOCK_MODULES + [
'shared',
'shared.db',
'shared.vertex_ai_utils',
'shared.langfuse_client',
# Add any other dependencies your lambda imports
]
install_mock_modules(YOUR_LAMBDA_MOCKS)
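A good first test is a smoke test that proves the handler imports cleanly under the mocks. This is a hypothetical example; it assumes `src/handler.py` exposes a `handler` function:

```python
# tests/test_handler.py - hypothetical smoke test; assumes src/handler.py defines handler()
def test_handler_is_importable():
    # With the mock modules installed by conftest.py, this import succeeds
    # even though boto3, google.genai, etc. are not installed locally.
    from handler import handler

    assert callable(handler)
```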
3. Add Makefile Target
Add a new target to the Makefile with the correct PYTHONPATH:
```makefile
test-your-lambda:
	PYTHONPATH=$(PWD):$(PWD)/your-lambda/src pytest your-lambda/tests -v
```
Update test-all to include the new target:
```makefile
test-all:
	$(MAKE) test-rights && \
	$(MAKE) test-report && \
	$(MAKE) test-evidence && \
	$(MAKE) test-pending && \
	$(MAKE) test-title-state-builder && \
	$(MAKE) test-your-lambda
```
4. Write Testable Code
Follow the pure function extraction pattern for testability:
```python
# Instead of testing the handler directly (which has AWS dependencies),
# extract pure functions that can be tested in isolation.

# handler.py
def handler(event, context):
    data = parse_event(event)
    result = process_data(data)  # Pure function - easy to test
    return format_response(result)


# utils.py - Pure functions
def process_data(data: dict) -> dict:
    """Pure function with no external dependencies."""
    return {"processed": data["input"]}
```
See `evidence-resolver/src/utils.py` for a real example of this pattern.
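A matching unit test for the pure function is then straightforward. The following is a sketch against the hypothetical `process_data` shown above:

```python
# tests/test_utils.py - sketch of a unit test for the hypothetical process_data above
from utils import process_data


def test_process_data_wraps_input():
    result = process_data({"input": "deed-123"})
    assert result == {"processed": "deed-123"}
```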
CI Workflows
Each lambda has its own GitHub Actions workflow with path-based triggers.
Workflow Structure
```yaml
name: Test Your Lambda

on:
  push:
    branches: [main]
    paths:
      - 'backend/coquititle/lambdas/your-lambda/**'
      - 'backend/coquititle/lambdas/shared/**'
      - 'backend/coquititle/lambdas/testing_utils.py'
  pull_request:
    paths:
      - 'backend/coquititle/lambdas/your-lambda/**'
      - 'backend/coquititle/lambdas/shared/**'

jobs:
  test:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: backend/coquititle/lambdas
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
          cache: 'pip'

      - name: Install dependencies
        run: pip install -r requirements-dev.txt

      - name: Run tests with coverage
        run: |
          PYTHONPATH=$(pwd):$(pwd)/your-lambda/src pytest your-lambda/tests -v \
            --cov=your-lambda/src \
            --cov-report=xml \
            --cov-report=term-missing
        timeout-minutes: 2

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        with:
          files: backend/coquititle/lambdas/coverage.xml
          flags: your-lambda
          name: your-lambda
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
```
Key Workflow Features
- Path triggers - Only runs when relevant files change
- Python 3.12 - Matches production runtime
- 2-minute timeout - Tests should be fast
- Codecov integration - Coverage reports per lambda
Example Workflows
See existing workflows for reference:
- `.github/workflows/test-evidence-resolver.yml`
- `.github/workflows/test-report-generator.yml`
- `.github/workflows/test-pending-docs-processor.yml`
- `.github/workflows/test-title-state-builder.yml`
- `.github/workflows/test-title-state.yml`
Related
- Local Development Guide - Setting up your development environment
- CoquiTitle Architecture - Understanding the extraction pipeline