Code Quality & Security
Testing, security best practices, and code standards
This guide covers Trinity's approach to code quality, testing, and security.
Testing Strategy
Test Suite Organization
tests/
├── test_engine.py # Core engine tests
├── test_async_llm_client.py # Async client tests
├── test_async_performance.py # Performance benchmarks
├── test_cache.py # Cache layer tests
├── test_neural_healer.py # Neural healer tests
├── test_integration_v0.5.py # Integration tests
└── test_e2e_neural.py # End-to-end tests
Running Tests
Quick test:
make test
With coverage:
make test-cov
# View HTML report
open htmlcov/index.html
Specific test file:
pytest tests/test_engine.py -v
Specific test function:
pytest tests/test_engine.py::test_content_generation -v
Async tests only:
make test-async
Performance benchmarks:
make test-perf
Writing Tests
Basic test structure:
import pytest
from trinity.components.brain import ContentEngine
def test_content_generation():
"""Test basic content generation"""
engine = ContentEngine()
result = engine.generate_content("test.txt", "brutalist")
assert result is not None
assert "title" in result
assert len(result["title"]) > 0
Async tests:
import pytest
from trinity.components.async_brain import AsyncContentEngine
@pytest.mark.asyncio
async def test_async_generation():
"""Test async content generation"""
async with AsyncContentEngine() as engine:
result = await engine.generate_content_async(
"test.txt",
"brutalist"
)
assert result is not None
assert "title" in resultParameterized tests:
@pytest.mark.parametrize("theme,expected_color", [
("brutalist", "gray"),
("enterprise", "blue"),
("hacker", "green"),
])
def test_theme_colors(theme, expected_color):
"""Test theme color palettes"""
config = load_theme(theme)
assert expected_color in config["color_palette"]["primary"]
Fixtures:
@pytest.fixture
def sample_content():
"""Provide sample content for tests"""
return {
"title": "Test Portfolio",
"description": "A test portfolio",
"projects": []
}
def test_with_fixture(sample_content):
"""Test using fixture"""
assert sample_content["title"] == "Test Portfolio"
Test Coverage Goals
| Component | Target |
|---|---|
| Core Engine | 90% |
| LLM Client | 85% |
| Cache Manager | 90% |
| Neural Healer | 80% |
| Predictor | 85% |
| Overall | 85% |
Run make test-cov to see current coverage.
Security Best Practices
Anti-Vibecoding Rules
Trinity follows strict engineering principles documented as "Anti-Vibecoding Rules":
Critical Security Rules:
Rule #6: Security-First Design
- Never load untrusted pickle files
- Validate all external inputs
- Use allowlists, not denylists
- Fail securely (deny by default)
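Rule #6's allowlist principle is easiest to see in code. A minimal sketch, using theme names that appear elsewhere in this guide (the function itself is illustrative, not Trinity's actual API):

```python
ALLOWED_THEMES = {"brutalist", "enterprise", "hacker"}  # allowlist, not denylist

def validate_theme(theme: str) -> str:
    """Deny by default: anything not explicitly allowed is rejected."""
    if theme not in ALLOWED_THEMES:
        raise ValueError(f"unknown theme: {theme!r}")
    return theme
```

An allowlist stays safe as new inputs appear; a denylist silently admits anything nobody thought to block.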
Rule #7: Explicit Error Handling
- No silent failures
- Log all errors with context
- Graceful degradation
- User-friendly error messages
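Rule #7 in miniature: log the failure with context, then degrade to a named default rather than failing silently. The function and fallback below are illustrative, not Trinity's real implementation:

```python
import logging

logger = logging.getLogger(__name__)

DEFAULT_TITLE = "Untitled Portfolio"  # named constant, not a magic value

def generate_title(raw_text: str) -> str:
    """Derive a title from input text, degrading gracefully on bad input."""
    try:
        title = raw_text.strip().splitlines()[0]
        if not title:
            raise ValueError("input has an empty first line")
        return title
    except (IndexError, ValueError) as exc:
        # Explicit handling: log with context, then fall back safely.
        logger.warning("title generation failed: %s; using default", exc)
        return DEFAULT_TITLE
```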
Rule #8: No Magic Numbers
- All constants named and documented
- Configuration-driven values
- Type hints for clarity
Rule #13: Don't Hack sys.path
- Use proper package structure
- Rely on package managers (Poetry)
- No runtime path manipulation
Rule #18: Proper Package Imports
- Use from trinity.x import y style imports
- No relative imports beyond the local module
- Clear import hierarchy
Rule #28: Structured Logging
- JSON-compatible metadata
- Correlation IDs for tracing
- Performance metrics
- Security event logging
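A hedged sketch of what Rule #28 implies, using only the standard library; the field names (correlation_id, ts) are illustrative, not Trinity's actual schema:

```python
import json
import logging
import time
import uuid

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line with structured metadata."""
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "ts": time.time(),
            "level": record.levelname,
            "msg": record.getMessage(),
            # Correlation ID attached via the `extra` kwarg, if present.
            "correlation_id": getattr(record, "correlation_id", None),
        }
        return json.dumps(payload)

logger = logging.getLogger("trinity")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.warning("cache miss", extra={"correlation_id": str(uuid.uuid4())})
```

Machine-parseable lines make correlation-ID tracing and security-event queries a grep or log-pipeline filter, not a regex archaeology exercise.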
Rule #30: Testable Design
- Small, pure functions
- Dependency injection
- Mocked external services
- Isolated test cases
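Rule #30 in practice: inject the LLM client so tests can substitute a stub. The class and method names below are illustrative, not Trinity's actual interfaces:

```python
import json

class FakeLLMClient:
    """Stub standing in for a real LLM client in tests."""
    def complete(self, prompt: str) -> str:
        return '{"title": "Stub Portfolio"}'

class TitleGenerator:
    """Illustrative engine; real Trinity interfaces may differ."""
    def __init__(self, llm_client) -> None:
        # Dependency injection: the collaborator is passed in, never built here.
        self.llm = llm_client

    def generate_title(self, prompt: str) -> str:
        return json.loads(self.llm.complete(prompt))["title"]

def test_title_uses_injected_client():
    assert TitleGenerator(FakeLLMClient()).generate_title("anything") == "Stub Portfolio"
```

Because the dependency crosses the constructor boundary, the test runs with no network, no LLM, and no shared state.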
Pickle Model Security
WARNING: Critical Security Consideration
Trinity uses joblib (pickle-based) for ML model serialization. Pickle can execute arbitrary code during deserialization.
Safe Usage:
# SAFE: Load your own model
from trinity.components.predictor import LayoutRiskPredictor
predictor = LayoutRiskPredictor.load("models/my_model.pkl")
Unsafe Usage:
# UNSAFE: Never do this
import joblib
# Downloaded from internet
model = joblib.load("untrusted_model.pkl")  # CAN EXECUTE MALICIOUS CODE
Mitigation Strategies:
1. Only load models you trained yourself:
make train # Generate your own model
2. Verify the model source:
# Check metadata before loading
metadata_path = model_path.replace(".pkl", "_metadata.json")
with open(metadata_path) as f:
    metadata = json.load(f)
assert metadata["created_by"] == "trinity-core"
3. Use the ONNX format (future):
- Migration planned for v0.8.0
- Safer serialization format
- Better portability
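A checksum pinned at training time can also gate loading. This is a sketch of the idea, not Trinity's built-in behavior:

```python
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Hash the model file so an expected digest can be pinned and compared."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def load_if_trusted(path: str, expected_sha256: str):
    # Deny by default: refuse to deserialize if the digest does not match.
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise ValueError(f"model checksum mismatch: {actual}")
    import joblib
    return joblib.load(path)
```

A mismatch stops the load before any pickle bytes are deserialized, which is where the arbitrary-code risk lives.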
Runtime Warnings:
Trinity logs warnings when loading pickle models:
WARNING: Loading pickle-serialized model. Only load from trusted sources.
Model: models/layout_risk_predictor.pkl
LLM Endpoint Security
Local LLM (Ollama, LM Studio):
# config/settings.yaml
llm:
provider: ollama
api_url: http://127.0.0.1:11434 # Localhost only
model: llama3.2:3b
Security Checklist:
- Use 127.0.0.1 (localhost), not 0.0.0.0
- Firewall blocks external access
- No authentication credentials in code
- Use environment variables for API keys
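The first checklist item can be enforced at startup. A minimal sketch, assuming a loopback-only policy (the helper name is an assumption, not part of Trinity):

```python
from urllib.parse import urlparse

LOCAL_HOSTS = {"127.0.0.1", "localhost", "::1"}

def assert_local_endpoint(api_url: str) -> None:
    """Fail fast if the configured LLM endpoint is not loopback-only."""
    host = urlparse(api_url).hostname
    if host not in LOCAL_HOSTS:
        raise ValueError(f"LLM endpoint {api_url!r} is not localhost-only")
```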
Cloud LLM (OpenAI, Claude):
# Environment variable (not in code)
export OPENAI_API_KEY=sk-...
# Verify TLS certificates
export LLM_VERIFY_SSL=true
Security Checklist:
- API keys in environment variables
- Rotate keys regularly
- Monitor API usage for anomalies
- Use read-only keys when possible
- Enable rate limiting
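Reading the key from the environment keeps it out of source control and Git history. A sketch of the first checklist item (the helper name is illustrative):

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Fetch an API key from the environment, failing loudly if unset."""
    key = os.environ.get(var)
    if not key:
        # Fail securely rather than falling back to a hardcoded credential.
        raise RuntimeError(f"{var} is not set; export it before running")
    return key
```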
Docker Container Security
Non-root user:
# Create non-root user
RUN useradd -m -u 1000 trinity
USER trinity
Read-only volumes:
# docker-compose.yml
volumes:
- ./data:/app/data:ro # Read-only
# Read-writeNetwork
- ./output:/app/output # Read-write
Network isolation:
networks:
trinity-network:
driver: bridge
# No external accessSecurity
internal: true # No external access
Security scanning:
# Scan Docker image for vulnerabilities
docker scan trinity-core
Dependency Security
Automated scanning:
- GitHub Dependabot enabled
- Weekly security advisory reviews
- Critical updates applied within 48 hours
Manual audit:
# Check for known vulnerabilities
pip install pip-audit
pip-audit
# Or with make
make security
Update dependencies:
# Update all dependencies
pip install --upgrade -r requirements.txt
# Update specific package
pip install --upgrade httpx
Input Validation
Content validation:
import html

from pydantic import BaseModel, validator
class PortfolioContent(BaseModel):
title: str
description: str
@validator('title')
def title_length(cls, v):
if len(v) > 200:
raise ValueError('Title too long')
return v
@validator('description')
def safe_description(cls, v):
# Remove potential XSS
return html.escape(v)
File path validation:
from pathlib import Path
def safe_file_path(user_input: str) -> Path:
"""Validate file path to prevent directory traversal"""
path = Path(user_input).resolve()
# Ensure path is within allowed directory
allowed_dir = Path("output").resolve()
if not path.is_relative_to(allowed_dir):
raise ValueError("Path outside allowed directory")
return path
GDPR & Privacy Compliance
Data Collection:
- No user data collected or transmitted
- Training datasets generated locally
- LLM requests stay on local network (by default)
- Generated HTML contains no tracking scripts
Data Storage:
- Training data: data/training_dataset.csv (local only)
- ML models: models/*.pkl (local only)
- Output HTML: output/*.html (static files)
- No remote databases or cloud storage
Data Deletion:
# Delete all training data
rm -rf data/
# Delete all models
rm -rf models/
# Clear cache
make cache-clear
Code Style Standards
Python Code Style
Formatter: Black
# Format all code
make format
# Check formatting
make format-check
Configuration:
# pyproject.toml
[tool.black]
line-length = 100
target-version = ['py310']
Linter: Ruff
# Lint code
make lint
# Auto-fix issues
make lint-fix
Configuration:
# pyproject.toml
[tool.ruff]
line-length = 100
select = ["E", "F", "I", "N", "W"]
ignore = ["E501"] # Line too long (handled by black)
Type Checking
Type checker: mypy
# Type check
make type-check
Configuration:
# pyproject.toml
[tool.mypy]
python_version = "3.10"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = true
Type hints required:
# Good
def predict_risk(content_len: int, theme: str) -> float:
"""Predict layout risk"""
return 0.5
# Bad (missing type hints)
def predict_risk(content_len, theme):
return 0.5
Documentation Standards
Docstring format: Google style
def generate_content(
self,
input_file: str,
theme: str,
use_cache: bool = True
) -> Dict[str, Any]:
"""
Generate content using LLM.
This function loads input content, generates structured output
using the configured LLM provider, and validates the result
against the expected schema.
Args:
input_file: Path to input content file
theme: Theme name (e.g., "brutalist", "enterprise")
use_cache: Whether to use cached responses
Returns:
Dictionary containing generated content with keys:
- title: Portfolio title
- description: Portfolio description
- projects: List of project dictionaries
Raises:
FileNotFoundError: If input file doesn't exist
ValidationError: If LLM response doesn't match schema
LLMError: If LLM request fails after retries
Example:
>>> engine = ContentEngine()
>>> content = engine.generate_content(
... "data/portfolio.txt",
... "brutalist"
... )
>>> print(content["title"])
"My Portfolio"
"""Required documentation:
- Module-level docstrings
- Class docstrings
- Public function/method docstrings
- Complex algorithm explanations
- Security warnings (where applicable)
Forbidden:
- TODO comments (use GitHub Issues)
- Commented-out code (delete it, Git remembers)
- Vague comments ("fix later", "hack")
Continuous Integration
GitHub Actions
Workflow: tests.yml
name: Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Install dependencies
run: |
pip install -r requirements.txt
pip install pytest pytest-cov
- name: Run tests
run: pytest --cov=src --cov-report=xml
- name: Upload coverage
uses: codecov/codecov-action@v3
Workflow: quality.yml
name: Code Quality
on: [push, pull_request]
jobs:
quality:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Install tools
run: pip install black ruff mypy
- name: Check formatting
run: black --check src/
- name: Lint
run: ruff check src/
- name: Type check
run: mypy src/
Pre-commit Hooks
Install pre-commit:
pip install pre-commit
pre-commit install
Configuration (.pre-commit-config.yaml):
repos:
- repo: https://github.com/psf/black
rev: 23.11.0
hooks:
- id: black
language_version: python3.10
args: [--line-length=100]
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: v0.1.6
hooks:
- id: ruff
args: [--fix]
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.7.1
hooks:
- id: mypy
additional_dependencies: [types-all]
Performance Profiling
CPU Profiling
# Profile with cProfile
python -m cProfile -o profile.stats main.py --theme brutalist
# View results
python -m pstats profile.stats
>>> sort cumtime
>>> stats 20
Memory Profiling
# Install memory profiler
pip install memory-profiler
# Profile specific function
python -m memory_profiler main.py
Performance Benchmarks
# Run performance tests
make test-perf
# Custom benchmark
pytest tests/test_async_performance.py -v --benchmark-only
Next Steps
- Retry Logic with Heuristics - Understanding the pipeline
- Setup Guide - Installation and configuration
- Self-Healing Features - Predictor and healer details