
Setup Guide

Installation, configuration, and development workflow

This guide covers everything you need to start using Trinity:

  • Quick start (3 commands)
  • Development setup (Python, dependencies, LLM providers)
  • Makefile reference (50+ commands)
  • Docker deployment
  • Contributing guidelines

Quick Start

Get running in 3 commands:

bash
# 1. Clone repository
git clone https://github.com/fabriziosalmi/trinity.git
cd trinity

# 2. Install dependencies
make setup

# 3. Build your first portfolio
make build

Output: output/index.html (brutalist theme)

View locally: make serve → http://localhost:8000


Prerequisites

Required

  • Python 3.10+ (3.12 recommended)

    bash
    python --version  # Should be >= 3.10
  • pip (Python package manager)

    bash
    pip --version

Optional

  • Redis (for distributed caching)

    bash
    # macOS
    brew install redis
    brew services start redis
    
    # Ubuntu/Debian
    sudo apt-get install redis-server
    sudo systemctl start redis
    
    # Docker
    docker run -d -p 6379:6379 redis:7-alpine
  • Docker (for containerized deployment)

    bash
    docker --version  # Should be >= 24.0
  • Node.js 20+ (for VitePress documentation)

    bash
    node --version
    npm --version

Installation

Method 1: Makefile (Recommended)

Simplest approach for development:

bash
# Full setup (venv + dependencies + themes)
make setup

# Verify installation
python --version
pip list | grep trinity

# Run tests to confirm
make test

What make setup does:

  1. Creates virtual environment (.venv/)
  2. Installs Python dependencies
  3. Configures default settings
  4. Verifies LLM connectivity (if configured)

Method 2: Manual Installation

For custom setups or CI/CD:

bash
# 1. Create virtual environment
python3 -m venv .venv
source .venv/bin/activate  # macOS/Linux
# .venv\Scripts\activate   # Windows

# 2. Upgrade pip
pip install --upgrade pip

# 3. Install dependencies
pip install -r requirements.txt

# 4. Install development dependencies (optional)
pip install pytest pytest-asyncio pytest-cov black ruff mypy

# 5. Verify installation
python -c "from trinity.components.brain import ContentEngine; print('OK')"

Method 3: Poetry (Alternative)

For reproducible builds:

bash
# Install Poetry
curl -sSL https://install.python-poetry.org | python3 -

# Install dependencies
poetry install

# Activate environment
poetry shell

# Verify
poetry run python main.py --help

Configuration

LLM Providers

Trinity supports multiple LLM providers. Choose one:

Option 1: Ollama (Local, Free)

Best for: Development, privacy, cost control

bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull model
ollama pull llama3.2:3b

# Start server (runs on http://localhost:11434)
ollama serve

# Test connection
curl http://localhost:11434/api/tags

Configure Trinity:

bash
export LLM_PROVIDER=ollama
export LLM_API_URL=http://localhost:11434
export LLM_MODEL=llama3.2:3b

Or edit config/settings.yaml:

yaml
llm:
  provider: ollama
  api_url: http://localhost:11434
  model: llama3.2:3b

Option 2: OpenAI (Cloud, Paid)

Best for: Production quality, long context

bash
# Get API key from https://platform.openai.com/api-keys
export OPENAI_API_KEY=sk-...

# Configure Trinity
export LLM_PROVIDER=openai
export LLM_MODEL=gpt-4-turbo-preview

config/settings.yaml:

yaml
llm:
  provider: openai
  model: gpt-4-turbo-preview
  api_key: ${OPENAI_API_KEY}  # Read from environment
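The `${OPENAI_API_KEY}` placeholder implies environment-variable substitution at load time; YAML itself does not expand variables, so the settings loader has to do it. A minimal sketch of that expansion step (illustrative helper, not Trinity's actual loader):

```python
import os
import re

_ENV_PATTERN = re.compile(r"\$\{(\w+)\}")

def expand_env(text: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Unset variables expand to an empty string; a real loader might
    instead raise an error to fail fast on missing secrets.
    """
    return _ENV_PATTERN.sub(lambda m: os.environ.get(m.group(1), ""), text)

os.environ["OPENAI_API_KEY"] = "sk-test-123"
raw = "llm:\n  provider: openai\n  api_key: ${OPENAI_API_KEY}\n"
print(expand_env(raw))  # api_key line now contains sk-test-123
```

Expanding before parsing (rather than after) keeps the substitution logic independent of the YAML structure.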

Option 3: LM Studio (Local GUI)

Best for: Experimenting with different models

bash
# 1. Download LM Studio: https://lmstudio.ai
# 2. Load a model (e.g., Qwen 2.5 Coder 7B)
# 3. Start local server (default: http://localhost:1234)

# Configure Trinity
export LLM_PROVIDER=lm_studio
export LLM_API_URL=http://localhost:1234/v1
export LLM_MODEL=qwen2.5-coder-7b-instruct
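LM Studio (and recent Ollama builds) serve an OpenAI-compatible `/v1` API, which is why switching providers mostly amounts to changing the URL and model name. A hedged sketch of what a client request against such an endpoint might look like (hypothetical `build_chat_request` helper, not Trinity's actual client):

```python
import json
import urllib.request

def build_chat_request(api_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for an OpenAI-compatible /chat/completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{api_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:1234/v1", "qwen2.5-coder-7b-instruct", "Hello")
print(req.full_url)  # http://localhost:1234/v1/chat/completions
```

Sending the request is just `urllib.request.urlopen(req)` once the local server is running.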

Provider Comparison

| Provider      | Cost | Speed  | Privacy    | Best For        |
|---------------|------|--------|------------|-----------------|
| Ollama        | Free | Fast   | 100% Local | Development     |
| LM Studio     | Free | Fast   | 100% Local | Experimentation |
| OpenAI GPT-4  | $$$$ | Medium | Cloud      | Production      |
| Claude        | $$$  | Medium | Cloud      | Long context    |
| Gemini        | $    | Fast   | Cloud      | Fast iteration  |

Cache Configuration

config/settings.yaml:

yaml
cache:
  enabled: true
  
  # Cache tiers (in priority order)
  tiers:
    - memory      # In-process LRU (100 entries, <1ms)
    - redis       # Distributed (optional, 5-10ms)
    - filesystem  # Persistent (.cache/, 20-50ms)
  
  # Redis configuration (if enabled)
  redis:
    host: localhost
    port: 6379
    db: 0
    password: null  # Optional
  
  # Cache TTL (time-to-live)
  ttl: 3600  # 1 hour in seconds
  
  # Filesystem cache
  filesystem:
    directory: .cache
    max_size_mb: 100
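The tier order matters: a lookup checks memory first, then Redis, then the filesystem, and a hit in a slower tier is promoted back into the faster ones so the next lookup is cheap. A minimal sketch of that priority logic (illustrative only, not Trinity's implementation; plain dicts stand in for the real backends):

```python
from collections import OrderedDict

class TieredCache:
    """Check tiers in priority order; promote hits into faster tiers."""

    def __init__(self, tier_names):
        # Each tier is a plain dict here; real backends would be an
        # in-process LRU, a Redis client, and a directory of files.
        self.tiers = OrderedDict((name, {}) for name in tier_names)

    def get(self, key):
        for i, store in enumerate(self.tiers.values()):
            if key in store:
                value = store[key]
                # Promote into every faster tier.
                for faster in list(self.tiers.values())[:i]:
                    faster[key] = value
                return value
        return None

    def set(self, key, value):
        # Write-through: populate every tier.
        for store in self.tiers.values():
            store[key] = value

cache = TieredCache(["memory", "redis", "filesystem"])
cache.tiers["filesystem"]["page"] = "<html>"   # simulate a cold hit on disk
print(cache.get("page"))                        # <html>
print("page" in cache.tiers["memory"])          # True (promoted)
```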

Disable caching (for testing):

bash
export CACHE_ENABLED=false
python main.py --theme brutalist

Logging Configuration

config/logging.yaml:

yaml
default_profile: development  # or production, testing

profiles:
  development:
    level: DEBUG
    format: human  # Colored, human-readable
  
  production:
    level: INFO
    format: json  # Structured for log aggregation
  
  testing:
    level: WARNING
    format: json
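A sketch of how a profile selector like this might be applied at startup (hypothetical helper mirroring the YAML above; Trinity's actual loader may differ):

```python
import logging
import os

# Mirrors the profiles in config/logging.yaml (assumed structure).
PROFILES = {
    "development": {"level": "DEBUG", "format": "human"},
    "production": {"level": "INFO", "format": "json"},
    "testing": {"level": "WARNING", "format": "json"},
}

def select_profile(default: str = "development") -> dict:
    """Pick a profile from LOG_PROFILE, falling back to the default."""
    name = os.environ.get("LOG_PROFILE", default)
    return PROFILES.get(name, PROFILES[default])

os.environ["LOG_PROFILE"] = "production"
profile = select_profile()
logging.basicConfig(level=getattr(logging, profile["level"]))
print(profile)  # {'level': 'INFO', 'format': 'json'}
```

Unknown profile names fall back to the default rather than crashing, which keeps a typo in `LOG_PROFILE` from taking the build down.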

Switch profiles:

bash
# Development (verbose, colored)
LOG_PROFILE=development python main.py

# Production (JSON logs)
LOG_PROFILE=production python main.py

# Testing (minimal output)
LOG_PROFILE=testing pytest

Makefile Reference

Why Use Makefile?

bash
# Without Makefile:
source .venv/bin/activate && PYTHONPATH=. python main.py --theme brutalist --output output/

# With Makefile:
make build

Benefits:

  • Consistent environment (PYTHONPATH, venv activation)
  • Self-documenting (make help shows all commands)
  • Portable (works on macOS, Linux, WSL)

Most Used Commands

bash
# Setup
make setup          # First-time setup (venv + deps)
make install        # Install dependencies only

# Testing
make test           # Run all tests
make test-cov       # Tests with coverage report
make test-async     # Only async tests
make test-fast      # Skip slow benchmarks

# Code Quality
make format         # Auto-format code (black)
make lint           # Lint code (ruff)
make type-check     # Type check (mypy)
make check          # All quality checks

# Build
make build          # Build sample portfolio
make build-all      # Build all themes
make serve          # Build and serve on http://localhost:8000

# Cache
make cache-clear    # Clear all caches
make cache-stats    # Show cache statistics

# Logs
make logs           # View all logs (human-readable)
make logs-json      # View JSON logs
make logs-errors    # View only errors
make logs-analyze   # Analyze with jq

# Docker
make docker-build   # Build image
make docker-run     # Run container
make docker-logs    # View container logs

# Maintenance
make clean          # Clean build artifacts
make clean-all      # Full cleanup (artifacts + venv)
make reset          # Complete reset + setup

Quick Aliases

bash
make t              # → make test
make tc             # → make test-cov
make f              # → make format
make l              # → make lint
make b              # → make build
make c              # → make clean
make s              # → make serve

All Commands (50+ Total)

Category: Setup (4 commands)

  • make setup - Full setup (venv + dependencies + config)
  • make venv - Create virtual environment only
  • make install - Install Python dependencies
  • make install-dev - Install dev dependencies (pytest, black, etc.)

Category: Testing (7 commands)

  • make test - Run all tests
  • make test-async - Async tests only
  • make test-cov - Coverage report (HTML + terminal)
  • make test-perf - Performance benchmarks
  • make test-cache - Cache-specific tests
  • make test-fast - Skip slow benchmarks
  • make test-watch - Watch mode for TDD

Category: Code Quality (6 commands)

  • make format - Auto-format with black (line-length 100)
  • make format-check - Check formatting without changes
  • make lint - Lint with ruff
  • make lint-fix - Auto-fix linting issues
  • make type-check - Type check with mypy
  • make check - All checks (format + lint + type)

Category: Build (4 commands)

  • make build - Build sample portfolio (brutalist theme)
  • make build-all - Build all theme variants
  • make serve - Serve at http://localhost:8000
  • make dev - Development watch mode

Category: Cache (3 commands)

  • make cache-stats - Show cache hit/miss statistics
  • make cache-clear - Clear all cache tiers
  • make cache-size - Show cache directory size

Category: Logging (8 commands)

  • make logs - View all logs (human-readable)
  • make logs-json - View JSON logs
  • make logs-errors - View only ERROR level
  • make logs-performance - View performance metrics
  • make logs-analyze - Analyze with jq
  • make logs-clear - Clear all log files
  • make logs-test - Test logging configuration
  • make logs-follow - Tail logs in real-time

Category: Docker (3 commands)

  • make docker-build - Build Docker image
  • make docker-run - Run container
  • make docker-dev - Run Docker in development mode

Category: Maintenance (3 commands)

  • make clean - Clean build artifacts and cache
  • make clean-all - Full cleanup (artifacts + cache + venv)
  • make reset - Complete reset and setup

Category: Documentation (2 commands)

  • make docs-serve - Serve documentation at http://localhost:8001
  • make docs-check - Check documentation links

Category: Git (3 commands)

  • make git-status - Git status with statistics
  • make git-stats - Show contribution statistics
  • make tag-release - Tag a new release

Category: Utilities (9 commands)

  • make help - Show all commands with descriptions
  • make version - Show Trinity version
  • make deps - Show dependency tree
  • make deps-update - Update all dependencies
  • make benchmark - Run async vs sync benchmark
  • make lines - Count lines of code
  • make migrate-themes - Migrate themes.json to themes.yaml
  • make demo - Run demo script
  • make info - Show project information

Total: 50+ commands across 10 categories

Adding Custom Commands

Edit Makefile:

makefile
# Your custom target
# Note: make runs each recipe line with /bin/sh, where the bash-only
# "source" builtin does not exist -- use the POSIX "." instead
.PHONY: my-command
my-command:
	@echo "Running my custom command"
	. .venv/bin/activate && python my_script.py

Usage:

bash
make my-command

Docker Deployment

Quick Start

bash
# Build and run with docker-compose
docker-compose up -d

# View logs
docker-compose logs -f trinity

# Stop services
docker-compose down

docker-compose.yml

Production-ready stack with Redis:

yaml
version: '3.8'

services:
  trinity:
    build:
      context: .
      dockerfile: Dockerfile.dev
    container_name: trinity-core
    environment:
      - LOG_PROFILE=production
      - CACHE_REDIS_HOST=redis
      - LLM_PROVIDER=ollama
      - LLM_API_URL=http://host.docker.internal:11434
    volumes:
      - ./logs:/app/logs
      - ./output:/app/output
      - ./.cache:/app/.cache
    depends_on:
      - redis
    networks:
      - trinity-network

  redis:
    image: redis:7-alpine
    container_name: trinity-redis
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    command: redis-server --appendonly yes
    networks:
      - trinity-network

volumes:
  redis-data:

networks:
  trinity-network:
    driver: bridge

Dockerfile

Multi-stage production build:

dockerfile
# Stage 1: Build
FROM python:3.10-slim AS builder

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir --user -r requirements.txt

# Stage 2: Runtime
FROM python:3.10-slim

WORKDIR /app

# Create non-root user first, so dependencies can be copied into its home
RUN useradd -m -u 1000 trinity

# Copy installed dependencies into the non-root user's home
# (/root/.local is unreadable by the trinity user, so copying it there
# and pointing PATH at /root/.local/bin would break at runtime)
COPY --from=builder --chown=trinity:trinity /root/.local /home/trinity/.local

# Copy application code
COPY --chown=trinity:trinity . .

USER trinity

# Environment variables
ENV PATH=/home/trinity/.local/bin:$PATH \
    PYTHONPATH=/app \
    LOG_PROFILE=production

# Volumes for output and logs
VOLUME /app/logs /app/output

# Health check (verifies the package actually imports, rather than
# running a no-op that always succeeds)
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD python -c "import trinity" || exit 1

# Default command
CMD ["python", "main.py", "--theme", "brutalist"]

Running in Docker

Single container:

bash
# Build image
docker build -t trinity-core .

# Run with volume mounts
docker run -v $(pwd)/output:/app/output trinity-core

# Run with custom theme
docker run -e THEME=hacker trinity-core

With docker-compose:

bash
# Start all services (detached)
docker-compose up -d

# View logs
docker-compose logs -f trinity

# Restart after code changes
docker-compose restart trinity

# Rebuild after dependency changes
docker-compose up --build

# Stop all services
docker-compose down

# Stop and remove volumes
docker-compose down -v

Environment Variables

bash
# Log configuration
LOG_LEVEL=INFO
LOG_FORMAT=json
LOG_PROFILE=production
TRINITY_ENV=Production  # Enable JSON telemetry to stdout

# Cache configuration
CACHE_ENABLED=true
CACHE_REDIS_HOST=redis
CACHE_REDIS_PORT=6379

# LLM configuration
LLM_PROVIDER=ollama
LLM_API_URL=http://host.docker.internal:11434
LLM_MODEL=llama3.2:3b

# Build configuration
THEME=brutalist
OUTPUT_DIR=/app/output
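Inside the container, configuration like this is typically read with sensible defaults so the image still runs when a variable is unset. A hedged sketch of that pattern (hypothetical `env_config` helper, not Trinity's actual config code):

```python
import os

def env_config() -> dict:
    """Read build configuration from the environment, with defaults."""
    return {
        "theme": os.environ.get("THEME", "brutalist"),
        "output_dir": os.environ.get("OUTPUT_DIR", "/app/output"),
        # Boolean env vars arrive as strings; compare explicitly.
        "cache_enabled": os.environ.get("CACHE_ENABLED", "true").lower() == "true",
        "llm_provider": os.environ.get("LLM_PROVIDER", "ollama"),
    }

os.environ["THEME"] = "hacker"
os.environ["CACHE_ENABLED"] = "false"
print(env_config())
```

Centralizing the reads in one function also gives a single place to validate values and document defaults.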

Development Workflow

Daily Workflow

bash
# 1. Pull latest changes
git pull origin main

# 2. Update dependencies
make install

# 3. Run tests
make test

# 4. Make your changes
vim src/trinity/components/brain.py

# 5. Format and lint
make format
make lint

# 6. Test your changes
make test

# 7. Build and verify
make build
make serve  # http://localhost:8000

# 8. Commit changes
git add .
git commit -m "feat: add new feature"
git push origin feature/your-feature

Code Quality Standards

Anti-Vibecoding Rules - strict engineering discipline:

Key Rules:

  • Rule #6: Security-first (never load untrusted pickle files)
  • Rule #7: Explicit error handling (no silent failures)
  • Rule #8: No magic numbers (all constants named)
  • Rule #13: Don't hack sys.path (proper package structure)
  • Rule #15: CLI over manual typing (hence Makefile)
  • Rule #18: Use proper imports (from trinity.x import y)
  • Rule #28: Structured logging (JSON metadata)
  • Rule #30: Testable design (small, pure functions)
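Rules #8 and #30 in practice: replace a magic number with a named constant, and keep the decision in a small pure function that can be asserted directly. A minimal illustration (hypothetical example, not Trinity code):

```python
# Rule #8: name the threshold instead of scattering 0.7 through the code.
LAYOUT_RISK_THRESHOLD = 0.7

# Rule #30: a small pure function -- no I/O, no mutated globals -- is
# trivially testable with plain assertions.
def needs_preventive_healing(risk: float) -> bool:
    """Return True when predicted layout risk exceeds the named threshold."""
    if not 0.0 <= risk <= 1.0:
        raise ValueError(f"risk must be in [0.0, 1.0], got {risk}")
    return risk > LAYOUT_RISK_THRESHOLD

print(needs_preventive_healing(0.85))  # True
print(needs_preventive_healing(0.30))  # False
```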

Code Style:

bash
# Format with black (line-length 100)
make format

# Lint with ruff
make lint

# Type check (optional but recommended)
make type-check

# Run all checks
make check

Documentation Standards

Docstring format (Google style):

python
def predict_layout_risk(
    content_len: int,
    theme: str,
    active_strategy: str
) -> float:
    """
    Predict probability of layout breakage before rendering.
    
    Uses trained Random Forest classifier to estimate failure risk
    based on content metrics and CSS configuration.
    
    Args:
        content_len: Character count of input content
        theme: Tailwind theme name (e.g., "brutalist")
        active_strategy: Current healing strategy being applied
        
    Returns:
        Probability of layout failure (0.0-1.0)
        
    Raises:
        ModelNotFoundError: If no trained model exists
        
    Example:
        >>> risk = predict_layout_risk(500, "brutalist", "NONE")
        >>> if risk > 0.7:
        ...     apply_preventive_healing()
    """

Required:

  • Type hints for all function signatures
  • Docstrings for all public functions/classes
  • Explanatory comments for complex logic

Forbidden:

  • TODO comments (use GitHub Issues instead)
  • Commented-out code (delete it, Git remembers)
  • Magic numbers without explanation

Testing

Run tests:

bash
# All tests
make test

# With coverage
make test-cov

# Async tests only
make test-async

# Fast tests (skip benchmarks)
make test-fast

# Watch mode for TDD
make test-watch

Write tests:

python
# tests/test_my_feature.py
import pytest
from trinity.components.brain import ContentEngine

def test_content_generation():
    """Test basic content generation"""
    engine = ContentEngine()
    result = engine.generate_content("test.txt", "brutalist")
    
    assert result is not None
    assert "title" in result
    assert len(result["title"]) > 0

@pytest.mark.asyncio
async def test_async_generation():
    """Test async content generation"""
    from trinity.components.async_brain import AsyncContentEngine
    
    async with AsyncContentEngine() as engine:
        result = await engine.generate_content_async("test.txt", "brutalist")
        
        assert result is not None
        assert "title" in result

Contribution Workflow

1. Create Feature Branch

bash
git checkout -b feature/your-feature-name

2. Make Changes

bash
# Edit files
vim src/trinity/components/brain.py

# Format and lint
make format lint

# Test
make test

3. Commit Changes

bash
git add .
git commit -m "feat: add new feature"

# Commit message format:
# feat: new feature
# fix: bug fix
# docs: documentation
# test: add tests
# refactor: code refactoring
# perf: performance improvement

4. Push and Create PR

bash
git push origin feature/your-feature-name

# Open PR on GitHub
# - Describe changes
# - Link related issues
# - Add screenshots if UI changes

5. Address Review Comments

bash
# Make requested changes
git add .
git commit -m "fix: address review comments"
git push origin feature/your-feature-name

6. Merge

bash
# After PR approval, merge via GitHub UI
# - Squash and merge (preferred)
# - Delete branch after merge

Troubleshooting

Common Issues

Issue: ModuleNotFoundError: No module named 'trinity'

bash
# Solution: Ensure PYTHONPATH is set
export PYTHONPATH=.
python main.py

# Or use Makefile (sets PYTHONPATH automatically)
make build

Issue: LLM connection refused

bash
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama if needed
ollama serve

# Or use OpenAI instead
export LLM_PROVIDER=openai
export OPENAI_API_KEY=sk-...

Issue: Redis connection failed

bash
# Check if Redis is running
redis-cli ping  # Should return "PONG"

# Start Redis
brew services start redis  # macOS
sudo systemctl start redis  # Linux

# Or disable Redis caching
vim config/settings.yaml  # Remove "redis" from cache.tiers

Issue: Tests failing

bash
# Update dependencies
make install

# Clear cache
make cache-clear

# Run with verbose output
pytest -v -s

# Run specific test
pytest tests/test_engine.py::test_content_generation -v


Released under the MIT License.