# Trinity

Trinity is a Python CLI tool that generates pages from structured data, applies themes, and auto-fixes layout issues.

## Features

- **LLM Content Generation**: Read structured JSON input or generate content with local LLMs (Ollama, LM Studio) or cloud providers (OpenAI). Responses are cached to avoid redundant API calls.
- **Guardian and the ML predictor** are disabled by default; they require explicit flags and setup.
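The response cache can be pictured as a keyed lookup in front of the LLM call. This is a minimal sketch, not Trinity's actual implementation; the cache location, key derivation, and `generate` callback are all assumptions for illustration:

```python
import hashlib
import json
from pathlib import Path

# Hypothetical cache location; Trinity's real storage layout may differ.
CACHE_DIR = Path(".trinity_cache")

def cache_key(model: str, prompt: str) -> str:
    """Derive a stable cache key from the model name and prompt text."""
    return hashlib.sha256(f"{model}\n{prompt}".encode()).hexdigest()

def cached_generate(model: str, prompt: str, generate) -> str:
    """Return a cached response if present, otherwise call the LLM once."""
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / f"{cache_key(model, prompt)}.json"
    if path.exists():
        return json.loads(path.read_text())["response"]
    response = generate(model, prompt)  # the real API/endpoint call
    path.write_text(json.dumps({"response": response}))
    return response
```

Because the key covers both model and prompt, switching providers or editing the input text naturally bypasses stale entries.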
## Quick Start

```bash
# Install dependencies
pip install -r requirements.txt

# Build with static JSON content
trinity build --input data/input_content.json --theme brutalist

# Build with LLM content generation (requires a running LLM endpoint)
trinity build --input data/raw_portfolio.txt --llm --theme enterprise

# Enable Guardian layout validation
trinity build --input data/input_content.json --guardian --theme brutalist
```

## Architecture

Trinity uses a layered pipeline:
```
Input → Brain (LLM) → Skeleton (Theme) → Healer (CSS Fixes) → Output
           ↓                                    ↑
        Caching                    Predictor (ML, optional)
           ↓                                    ↑
  Structured Logging        Guardian (DOM Validation, optional)
```
## Testing

```bash
make test                                    # full suite
make test-cov                                # with coverage
pytest tests/test_e2e_complete.py -v
pytest tests/test_multiclass_pipeline.py -v
```

## Notes

- The ML predictor must be trained with `trinity mine-generate` before it has a model to load.
- Only themes defined in `config/themes.yaml` are available for rendering; the default `available_themes` config lists only `enterprise`, `brutalist`, and `editorial`.

## License

MIT License - see LICENSE for details.