# Setup Guide
Installation, configuration, and development workflow
This guide covers everything you need to start using Trinity:
- Quick start (3 commands)
- Development setup (Python, dependencies, LLM providers)
- Makefile reference (50+ commands)
- Docker deployment
- Contributing guidelines
## Quick Start
Get running in 3 commands:
```bash
# 1. Clone repository
git clone https://github.com/fabriziosalmi/trinity.git
cd trinity

# 2. Install dependencies
make setup

# 3. Build your first portfolio
make build
```

Output: `output/index.html` (brutalist theme)

View locally: `make serve` → http://localhost:8000
## Prerequisites

### Required

**Python 3.10+** (3.12 recommended)

```bash
python --version  # Should be >= 3.10
```

**pip** (Python package manager)

```bash
pip --version
```

### Optional

**Redis** (for distributed caching)

```bash
# macOS
brew install redis
brew services start redis

# Ubuntu/Debian
sudo apt-get install redis-server
sudo systemctl start redis

# Docker
docker run -d -p 6379:6379 redis:7-alpine
```

**Docker** (for containerized deployment)

```bash
docker --version  # Should be >= 24.0
```

**Node.js 20+** (for VitePress documentation)

```bash
node --version
npm --version
```
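The Python floor above can also be checked programmatically; a minimal pre-flight sketch (illustrative only — `make setup` performs its own verification, and `python_ok` is a hypothetical helper, not part of Trinity):

```python
import sys

def python_ok(min_version=(3, 10)) -> bool:
    """Return True when the running interpreter meets the minimum version."""
    # Tuple comparison: (3, 12) >= (3, 10) is True, (3, 9) >= (3, 10) is False.
    return sys.version_info[:2] >= min_version

print(python_ok())
```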
## Installation

### Method 1: Makefile (Recommended)
Simplest approach for development:
```bash
# Full setup (venv + dependencies + themes)
make setup

# Verify installation
python --version
pip list | grep trinity

# Run tests to confirm
make test
```

What `make setup` does:

- Creates virtual environment (`.venv/`)
- Installs Python dependencies
- Configures default settings
- Verifies LLM connectivity (if configured)
### Method 2: Manual Installation
For custom setups or CI/CD:
```bash
# 1. Create virtual environment
python3 -m venv .venv
source .venv/bin/activate   # macOS/Linux
# .venv\Scripts\activate    # Windows

# 2. Upgrade pip
pip install --upgrade pip

# 3. Install dependencies
pip install -r requirements.txt

# 4. Install development dependencies (optional)
pip install pytest pytest-asyncio pytest-cov black ruff mypy

# 5. Verify installation
python -c "from trinity.components.brain import ContentEngine; print('OK')"
```

### Method 3: Poetry (Alternative)
For reproducible builds:
```bash
# Install Poetry
curl -sSL https://install.python-poetry.org | python3 -

# Install dependencies
poetry install

# Activate environment
poetry shell

# Verify
poetry run python main.py --help
```

## Configuration
### LLM Providers
Trinity supports multiple LLM providers. Choose one:
#### Option 1: Ollama (Local, Free)
Best for: Development, privacy, cost control
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull model
ollama pull llama3.2:3b

# Start server (runs on http://localhost:11434)
ollama serve

# Test connection
curl http://localhost:11434/api/tags
```

Configure Trinity:

```bash
export LLM_PROVIDER=ollama
export LLM_API_URL=http://localhost:11434
export LLM_MODEL=llama3.2:3b
```

Or edit `config/settings.yaml`:

```yaml
llm:
  provider: ollama
  api_url: http://localhost:11434
  model: llama3.2:3b
```

#### Option 2: OpenAI (Cloud, Paid)
Best for: Production quality, long context
```bash
# Get API key from https://platform.openai.com/api-keys
export OPENAI_API_KEY=sk-...

# Configure Trinity
export LLM_PROVIDER=openai
export LLM_MODEL=gpt-4-turbo-preview
```

`config/settings.yaml`:

```yaml
llm:
  provider: openai
  model: gpt-4-turbo-preview
  api_key: ${OPENAI_API_KEY}  # Read from environment
```

#### Option 3: LM Studio (Local GUI)
Best for: Experimenting with different models
```bash
# 1. Download LM Studio: https://lmstudio.ai
# 2. Load a model (e.g., Qwen 2.5 Coder 7B)
# 3. Start local server (default: http://localhost:1234)

# Configure Trinity
export LLM_PROVIDER=lm_studio
export LLM_API_URL=http://localhost:1234/v1
export LLM_MODEL=qwen2.5-coder-7b-instruct
```

#### Provider Comparison
| Provider | Cost | Speed | Privacy | Best For |
|---|---|---|---|---|
| Ollama | Free | Fast | 100% Local | Development |
| LM Studio | Free | Fast | 100% Local | Experimentation |
| OpenAI GPT-4 | $$$$ | Medium | Cloud | Production |
| Claude | $$$ | Medium | Cloud | Long context |
| Gemini | $ | Fast | Cloud | Fast iteration |
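The `LLM_PROVIDER` / `LLM_API_URL` / `LLM_MODEL` variables follow one pattern across all providers. A hedged sketch of how a client might resolve them — the defaults and the `resolve_llm_config` helper are hypothetical, not Trinity's actual settings code:

```python
# Hypothetical per-provider defaults; Trinity's real resolution may differ.
DEFAULTS = {
    "ollama": {"api_url": "http://localhost:11434", "model": "llama3.2:3b"},
    "lm_studio": {"api_url": "http://localhost:1234/v1", "model": "qwen2.5-coder-7b-instruct"},
    "openai": {"api_url": "https://api.openai.com/v1", "model": "gpt-4-turbo-preview"},
}

def resolve_llm_config(env: dict) -> dict:
    """Merge LLM_* environment variables over per-provider defaults."""
    provider = env.get("LLM_PROVIDER", "ollama")
    cfg = dict(DEFAULTS.get(provider, DEFAULTS["ollama"]))
    cfg["provider"] = provider
    # Explicit env vars always win over defaults.
    if "LLM_API_URL" in env:
        cfg["api_url"] = env["LLM_API_URL"]
    if "LLM_MODEL" in env:
        cfg["model"] = env["LLM_MODEL"]
    return cfg
```

With no variables set, the sketch falls back to the local Ollama defaults, which matches the "local-first" ordering of the table above.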
### Cache Configuration

`config/settings.yaml`:

```yaml
cache:
  enabled: true

  # Cache tiers (in priority order)
  tiers:
    - memory      # In-process LRU (100 entries, <1ms)
    - redis       # Distributed (optional, 5-10ms)
    - filesystem  # Persistent (.cache/, 20-50ms)

  # Redis configuration (if enabled)
  redis:
    host: localhost
    port: 6379
    db: 0
    password: null  # Optional

  # Cache TTL (time-to-live)
  ttl: 3600  # 1 hour in seconds

  # Filesystem cache
  filesystem:
    directory: .cache
    max_size_mb: 100
```

Disable caching (for testing):
```bash
export CACHE_ENABLED=false
python main.py --theme brutalist
```
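The tier order in the config means reads try the fastest tier first and fall back on misses, backfilling faster tiers on a hit. A minimal read-through sketch — hypothetical classes, not Trinity's actual cache implementation:

```python
from collections import OrderedDict

class MemoryTier:
    """Hypothetical in-process tier: a bounded LRU, like the `memory` entry above."""

    def __init__(self, max_entries=100):
        self._data = OrderedDict()
        self._max = max_entries

    def get(self, key):
        if key in self._data:
            self._data.move_to_end(key)  # mark as recently used
            return self._data[key]
        return None

    def set(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self._max:
            self._data.popitem(last=False)  # evict least recently used

def read_through(key, tiers):
    """Try each tier in priority order; backfill faster tiers on a hit."""
    for i, tier in enumerate(tiers):
        value = tier.get(key)
        if value is not None:
            for faster in tiers[:i]:
                faster.set(key, value)
            return value
    return None
```

A Redis or filesystem tier would expose the same `get`/`set` shape, which is what lets the tiers be listed in priority order in the YAML.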
### Logging Configuration

`config/logging.yaml`:

```yaml
default_profile: development  # or production, testing

profiles:
  development:
    level: DEBUG
    format: human  # Colored, human-readable
  production:
    level: INFO
    format: json   # Structured for log aggregation
  testing:
    level: WARNING
    format: json
```

Switch profiles:
```bash
# Development (verbose, colored)
LOG_PROFILE=development python main.py

# Production (JSON logs)
LOG_PROFILE=production python main.py

# Testing (minimal output)
LOG_PROFILE=testing pytest
```

## Makefile Reference
### Why Use Makefile?
```bash
# Without Makefile:
source .venv/bin/activate && PYTHONPATH=. python main.py --theme brutalist --output output/

# With Makefile:
make build
```

Benefits:

- Consistent environment (PYTHONPATH, venv activation)
- Self-documenting (`make help` shows all commands)
- Portable (works on macOS, Linux, WSL)
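The self-documenting `make help` pattern is usually implemented by scanning the Makefile itself for annotated targets. A common sketch — Trinity's actual Makefile may implement this differently:

```makefile
# Hypothetical help target: prints every target annotated with "## <description>".
.PHONY: help
help:
	@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {printf "  %-16s %s\n", $$1, $$2}' $(MAKEFILE_LIST)

build: ## Build sample portfolio
	@echo "building..."
```

Running `make help` then lists each annotated target alongside its description, which keeps the documentation next to the recipe it describes.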
### Most Used Commands
```bash
# Setup
make setup        # First-time setup (venv + deps)
make install      # Install dependencies only

# Testing
make test         # Run all tests
make test-cov     # Tests with coverage report
make test-async   # Only async tests
make test-fast    # Skip slow benchmarks

# Code Quality
make format       # Auto-format code (black)
make lint         # Lint code (ruff)
make type-check   # Type check (mypy)
make check        # All quality checks

# Build
make build        # Build sample portfolio
make build-all    # Build all themes
make serve        # Build and serve on http://localhost:8000

# Cache
make cache-clear  # Clear all caches
make cache-stats  # Show cache statistics

# Logs
make logs         # View all logs (human-readable)
make logs-json    # View JSON logs
make logs-errors  # View only errors
make logs-analyze # Analyze with jq

# Docker
make docker-build # Build image
make docker-run   # Run container
make docker-logs  # View container logs

# Maintenance
make clean        # Clean build artifacts
make clean-all    # Full cleanup (artifacts + venv)
make reset        # Complete reset + setup
```

### Quick Aliases
```bash
make t   # → make test
make tc  # → make test-cov
make f   # → make format
make l   # → make lint
make b   # → make build
make c   # → make clean
make s   # → make serve
```

### All Commands (50+ Total)
#### Category: Setup (4 commands)

- `make setup` - Full setup (venv + dependencies + config)
- `make venv` - Create virtual environment only
- `make install` - Install Python dependencies
- `make install-dev` - Install dev dependencies (pytest, black, etc.)

#### Category: Testing (7 commands)

- `make test` - Run all tests
- `make test-async` - Async tests only
- `make test-cov` - Coverage report (HTML + terminal)
- `make test-perf` - Performance benchmarks
- `make test-cache` - Cache-specific tests
- `make test-fast` - Skip slow benchmarks
- `make test-watch` - Watch mode for TDD

#### Category: Code Quality (6 commands)

- `make format` - Auto-format with black (line-length 100)
- `make format-check` - Check formatting without changes
- `make lint` - Lint with ruff
- `make lint-fix` - Auto-fix linting issues
- `make type-check` - Type check with mypy
- `make check` - All checks (format + lint + type)

#### Category: Build (4 commands)

- `make build` - Build sample portfolio (brutalist theme)
- `make build-all` - Build all theme variants
- `make serve` - Serve at http://localhost:8000
- `make dev` - Development watch mode

#### Category: Cache (3 commands)

- `make cache-stats` - Show cache hit/miss statistics
- `make cache-clear` - Clear all cache tiers
- `make cache-size` - Show cache directory size

#### Category: Logging (8 commands)

- `make logs` - View all logs (human-readable)
- `make logs-json` - View JSON logs
- `make logs-errors` - View only ERROR level
- `make logs-performance` - View performance metrics
- `make logs-analyze` - Analyze with jq
- `make logs-clear` - Clear all log files
- `make logs-test` - Test logging configuration
- `make logs-follow` - Tail logs in real-time

#### Category: Docker (3 commands)

- `make docker-build` - Build Docker image
- `make docker-run` - Run container
- `make docker-dev` - Run Docker in development mode

#### Category: Maintenance (3 commands)

- `make clean` - Clean build artifacts and cache
- `make clean-all` - Full cleanup (artifacts + cache + venv)
- `make reset` - Complete reset and setup

#### Category: Documentation (2 commands)

- `make docs-serve` - Serve documentation at http://localhost:8001
- `make docs-check` - Check documentation links

#### Category: Git (3 commands)

- `make git-status` - Git status with statistics
- `make git-stats` - Show contribution statistics
- `make tag-release` - Tag a new release

#### Category: Utilities (9 commands)

- `make help` - Show all commands with descriptions
- `make version` - Show Trinity version
- `make deps` - Show dependency tree
- `make deps-update` - Update all dependencies
- `make benchmark` - Run async vs sync benchmark
- `make lines` - Count lines of code
- `make migrate-themes` - Migrate themes.json to themes.yaml
- `make demo` - Run demo script
- `make info` - Show project information
Total: 50+ commands across 10 categories
### Adding Custom Commands
Edit Makefile:
```makefile
# Your custom target
.PHONY: my-command
my-command:
	@echo "Running my custom command"
	. .venv/bin/activate && python my_script.py
```

Note that recipe lines must be indented with a tab, and `make` runs them in `/bin/sh`, so the POSIX `.` is used instead of the bash-only `source`.

Usage:

```bash
make my-command
```

## Docker Deployment
### Quick Start
```bash
# Build and run with docker-compose
docker-compose up -d

# View logs
docker-compose logs -f trinity

# Stop services
docker-compose down
```

### docker-compose.yml
Production-ready stack with Redis:
```yaml
version: '3.8'

services:
  trinity:
    build:
      context: .
      dockerfile: Dockerfile.dev
    container_name: trinity-core
    environment:
      - LOG_PROFILE=production
      - CACHE_REDIS_HOST=redis
      - LLM_PROVIDER=ollama
      - LLM_API_URL=http://host.docker.internal:11434
    volumes:
      - ./logs:/app/logs
      - ./output:/app/output
      - ./.cache:/app/.cache
    depends_on:
      - redis
    networks:
      - trinity-network

  redis:
    image: redis:7-alpine
    container_name: trinity-redis
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    command: redis-server --appendonly yes
    networks:
      - trinity-network

volumes:
  redis-data:

networks:
  trinity-network:
    driver: bridge
```

### Dockerfile
Multi-stage production build:
```dockerfile
# Stage 1: Build
FROM python:3.10-slim AS builder

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir --user -r requirements.txt

# Stage 2: Runtime
FROM python:3.10-slim

WORKDIR /app

# Copy installed dependencies into the non-root user's home
# (packages under /root/.local would be unreadable after USER trinity)
COPY --from=builder /root/.local /home/trinity/.local

# Copy application code
COPY . .

# Create non-root user
RUN useradd -m -u 1000 trinity && \
    chown -R trinity:trinity /app /home/trinity/.local
USER trinity

# Environment variables
ENV PATH=/home/trinity/.local/bin:$PATH \
    PYTHONPATH=/app \
    LOG_PROFILE=production

# Volumes for output and logs
VOLUME /app/logs /app/output

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD python -c "import sys; sys.exit(0)"

# Default command
CMD ["python", "main.py", "--theme", "brutalist"]
```

### Running in Docker
Single container:
```bash
# Build image
docker build -t trinity-core .

# Run with volume mounts
docker run -v $(pwd)/output:/app/output trinity-core

# Run with custom theme
docker run -e THEME=hacker trinity-core
```

With docker-compose:
```bash
# Start all services (detached)
docker-compose up -d

# View logs
docker-compose logs -f trinity

# Restart after code changes
docker-compose restart trinity

# Rebuild after dependency changes
docker-compose up --build

# Stop all services
docker-compose down

# Stop and remove volumes
docker-compose down -v
```

### Environment Variables
```bash
# Log configuration
LOG_LEVEL=INFO
LOG_FORMAT=json
LOG_PROFILE=production
TRINITY_ENV=Production  # Enable JSON telemetry to stdout

# Cache configuration
CACHE_ENABLED=true
CACHE_REDIS_HOST=redis
CACHE_REDIS_PORT=6379

# LLM configuration
LLM_PROVIDER=ollama
LLM_API_URL=http://host.docker.internal:11434
LLM_MODEL=llama3.2:3b

# Build configuration
THEME=brutalist
OUTPUT_DIR=/app/output
```
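Environment variables arrive as strings, so booleans like `CACHE_ENABLED` and ports like `CACHE_REDIS_PORT` need explicit parsing. A hedged sketch of typical helpers — hypothetical names, not Trinity's actual settings loader:

```python
import os

def env_bool(env, key, default=False):
    """Parse common truthy strings ('true', '1', 'yes', 'on') case-insensitively."""
    raw = env.get(key)
    if raw is None:
        return default
    return raw.strip().lower() in {"true", "1", "yes", "on"}

def env_int(env, key, default):
    """Parse an integer variable, falling back to the default on bad input."""
    try:
        return int(env.get(key, default))
    except (TypeError, ValueError):
        return default

cfg = {
    "cache_enabled": env_bool(os.environ, "CACHE_ENABLED", True),
    "redis_port": env_int(os.environ, "CACHE_REDIS_PORT", 6379),
}
```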
## Development Workflow

### Daily Workflow
```bash
# 1. Pull latest changes
git pull origin main

# 2. Update dependencies
make install

# 3. Run tests
make test

# 4. Make your changes
vim src/trinity/components/brain.py

# 5. Format and lint
make format
make lint

# 6. Test your changes
make test

# 7. Build and verify
make build
make serve  # http://localhost:8000

# 8. Commit changes
git add .
git commit -m "feat: add new feature"
git push origin feature/your-feature
```

### Code Quality Standards
Anti-Vibecoding Rules - strict engineering discipline:
Key Rules:

- Rule #6: Security-first (never load untrusted pickle files)
- Rule #7: Explicit error handling (no silent failures)
- Rule #8: No magic numbers (all constants named)
- Rule #13: Don't hack `sys.path` (proper package structure)
- Rule #15: CLI over manual typing (hence Makefile)
- Rule #18: Use proper imports (`from trinity.x import y`)
- Rule #28: Structured logging (JSON metadata)
- Rule #30: Testable design (small, pure functions)
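Rules #7 and #8 in practice — a hedged illustration with hypothetical names, not actual Trinity code:

```python
# Hypothetical example of Rules #7 and #8; not actual Trinity code.

MAX_TITLE_LENGTH = 80  # Rule #8: named constant instead of a bare 80


class TitleTooLongError(ValueError):
    """Raised when generated content exceeds the layout's title budget."""


def validate_title(title: str) -> str:
    # Rule #7: fail loudly instead of silently truncating
    if len(title) > MAX_TITLE_LENGTH:
        raise TitleTooLongError(
            f"title is {len(title)} chars, limit is {MAX_TITLE_LENGTH}"
        )
    return title
```

The named constant documents the limit, and the explicit exception surfaces the failure instead of hiding it behind a silent truncation.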
Code Style:

```bash
# Format with black (line-length 100)
make format

# Lint with ruff
make lint

# Type check (optional but recommended)
make type-check

# Run all checks
make check
```

### Documentation Standards
Docstring format (Google style):
```python
def predict_layout_risk(
    content_len: int,
    theme: str,
    active_strategy: str
) -> float:
    """
    Predict probability of layout breakage before rendering.

    Uses trained Random Forest classifier to estimate failure risk
    based on content metrics and CSS configuration.

    Args:
        content_len: Character count of input content
        theme: Tailwind theme name (e.g., "brutalist")
        active_strategy: Current healing strategy being applied

    Returns:
        Probability of layout failure (0.0-1.0)

    Raises:
        ModelNotFoundError: If no trained model exists

    Example:
        >>> risk = predict_layout_risk(500, "brutalist", "NONE")
        >>> if risk > 0.7:
        ...     apply_preventive_healing()
    """
```

Required:

- Type hints for all function signatures
- Docstrings for all public functions/classes
- Explanatory comments for complex logic

Forbidden:

- TODO comments (use GitHub Issues instead)
- Commented-out code (delete it, Git remembers)
- Magic numbers without explanation
### Testing
Run tests:
```bash
# All tests
make test

# With coverage
make test-cov

# Async tests only
make test-async

# Fast tests (skip benchmarks)
make test-fast

# Watch mode for TDD
make test-watch
```

Write tests:
```python
# tests/test_my_feature.py
import pytest

from trinity.components.brain import ContentEngine


def test_content_generation():
    """Test basic content generation"""
    engine = ContentEngine()
    result = engine.generate_content("test.txt", "brutalist")
    assert result is not None
    assert "title" in result
    assert len(result["title"]) > 0


@pytest.mark.asyncio
async def test_async_generation():
    """Test async content generation"""
    from trinity.components.async_brain import AsyncContentEngine

    async with AsyncContentEngine() as engine:
        result = await engine.generate_content_async("test.txt", "brutalist")
        assert result is not None
        assert "title" in result
```

## Contribution Workflow
### 1. Create Feature Branch

```bash
git checkout -b feature/your-feature-name
```

### 2. Make Changes
```bash
# Edit files
vim src/trinity/components/brain.py

# Format and lint
make format lint

# Test
make test
```

### 3. Commit Changes
```bash
git add .
git commit -m "feat: add new feature"

# Commit message format:
#   feat:     new feature
#   fix:      bug fix
#   docs:     documentation
#   test:     add tests
#   refactor: code refactoring
#   perf:     performance improvement
```

### 4. Push and Create PR
```bash
git push origin feature/your-feature-name

# Open PR on GitHub
# - Describe changes
# - Link related issues
# - Add screenshots if UI changes
```

### 5. Address Review Comments
```bash
# Make requested changes
git add .
git commit -m "fix: address review comments"
git push origin feature/your-feature-name
```

### 6. Merge
```bash
# After PR approval, merge via GitHub UI
# - Squash and merge (preferred)
# - Delete branch after merge
```

## Troubleshooting
### Common Issues
**Issue: `ModuleNotFoundError: No module named 'trinity'`**

```bash
# Solution: Ensure PYTHONPATH is set
export PYTHONPATH=.
python main.py

# Or use Makefile (sets PYTHONPATH automatically)
make build
```

**Issue: LLM connection refused**
```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama if needed
ollama serve

# Or use OpenAI instead
export LLM_PROVIDER=openai
export OPENAI_API_KEY=sk-...
```

**Issue: Redis connection failed**
```bash
# Check if Redis is running
redis-cli ping  # Should return "PONG"

# Start Redis
brew services start redis   # macOS
sudo systemctl start redis  # Linux

# Or disable Redis caching
vim config/settings.yaml    # Remove "redis" from cache.tiers
```

**Issue: Tests failing**
```bash
# Update dependencies
make install

# Clear cache
make cache-clear

# Run with verbose output
pytest -v -s

# Run specific test
pytest tests/test_engine.py::test_content_generation -v
```

### Getting Help
- Documentation: https://fabriziosalmi.github.io/trinity
- Issues: https://github.com/fabriziosalmi/trinity/issues
- Discussions: https://github.com/fabriziosalmi/trinity/discussions
## Next Steps
- Retry Logic with Heuristics - Understand the 5-layer pipeline
- Async & MLOps - Deep dive into Phase 6 features
- Code Quality - Testing and security best practices
- Self-Healing Features - Predictor and Healer guides