Threadon is a composable stack of open-core modules for AI experiment governance, temporal memory learning, and adaptive reasoning — CPU-friendly, self-hosted, and auditable.
```shell
# Install & run
$ pip install fastapi uvicorn pyyaml
$ git clone https://github.com/threadon/spo
$ cd spo
$ python run_server.py
INFO: spo-lite API created · profile=lite port=8765

# Run an experiment through the gate pipeline
$ curl -X POST http://127.0.0.1:8765/experiment/run \
  -H "Content-Type: application/json" \
  -d '{"experiment_id":"my_exp","benchmark":"generic"}'
{
  "overall_required_status": "PASS",
  "gates_passed": 5,
  "gates_failed": 0,
  "duration_sec": 4.2
}
```
SPO acts as the governance layer between your AI components: reasoning agents query it before any promotion, and learning modules emit capability gates into it. Bring your own AI; SPO validates the checkpoints.
Self-improving decision loops that query SPO before promoting any model or strategy to the next stage. Gate outcomes drive curriculum and recovery logic.
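A minimal sketch of that loop's decision point, using only the response fields shown in the quickstart above. The `run_gates` and `should_promote` helpers are illustrative names, not part of SPO's shipped client:

```python
# Sketch: gate a model promotion on SPO's /experiment/run report.
# Helper names are hypothetical; the response shape comes from the quickstart.
import json
import urllib.request


def run_gates(base_url: str, experiment_id: str, benchmark: str = "generic") -> dict:
    """POST an experiment to SPO and return the gate report."""
    req = urllib.request.Request(
        f"{base_url}/experiment/run",
        data=json.dumps({"experiment_id": experiment_id,
                         "benchmark": benchmark}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def should_promote(report: dict) -> bool:
    """Promote only on an unambiguous PASS with zero failed gates."""
    return (report.get("overall_required_status") == "PASS"
            and report.get("gates_failed", 0) == 0)
```

A curriculum or recovery branch can then key off the same report: promote on `PASS`, re-queue or roll back on anything else.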
Gate-based PASS/FAIL pipeline for AI experiments. REST API, web dashboard, and extensible gate catalog for any cognitive AI system.
Temporal or causal learning systems that emit structured capability gates — convergence, memory, stability, causality — directly into SPO's evaluation pipeline.
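The record a learning system emits might look like the sketch below. The field names and helper are assumptions for illustration, not SPO's documented payload schema:

```python
# Hypothetical shape of an emitted capability-gate record; field names
# are illustrative, not SPO's actual API contract.
def capability_gate(name: str, metric: float, threshold: float) -> dict:
    return {
        "gate": name,          # e.g. convergence, memory, stability, causality
        "metric": metric,
        "threshold": threshold,
        "status": "PASS" if metric >= threshold else "FAIL",
    }


# A learning module would POST a batch of these into SPO's pipeline.
gates = [
    capability_gate("convergence", 0.97, 0.95),
    capability_gate("stability", 0.88, 0.90),
]
```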
14 built-in gates. Extend with your own in YAML — hot-reload without restarting the server.
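A custom gate entry might look like the fragment below. The field names are illustrative, not SPO's documented catalog schema:

```yaml
# gates/p95_latency.yaml — hypothetical custom gate definition
gate: p95_latency
description: 95th-percentile inference latency stays under budget
objective: minimize
threshold: 250        # milliseconds
required: true        # a FAIL here fails the whole experiment
```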
LLM API key travels in request header only. Middleware clears it after response — no logs, no disk.
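The scrubbing idea, reduced to plain Python (the header name and handler are stand-ins, not SPO's actual middleware): pop the key out of the request headers, use it for the duration of the call, and drop every reference before returning.

```python
# Sketch of request-scoped key handling; "x-llm-api-key" is a
# hypothetical header name, not SPO's documented one.
def handle_request(headers: dict) -> dict:
    api_key = headers.pop("x-llm-api-key", None)
    try:
        # ... call the LLM provider with api_key here ...
        result = {"status": "ok", "used_key": api_key is not None}
    finally:
        api_key = None                       # drop the reference
        headers.pop("x-llm-api-key", None)   # nothing left to log or persist
    return result
```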
Cloud-only routes return 404 server-side. Gate profile is immutable — not overridable per request.
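A server-side guard along these lines is enough; the route paths and profile constant below are assumptions for illustration:

```python
# Sketch of a profile guard: the profile is fixed at startup and no
# per-request value is consulted. Route names are hypothetical.
PROFILE = "lite"

CLOUD_ONLY_ROUTES = {"/orchestrate", "/two_pass_review"}


def route_status(path: str) -> int:
    """Cloud-only routes simply don't exist under the lite profile."""
    if PROFILE == "lite" and path in CLOUD_ONLY_ROUTES:
        return 404
    return 200
```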
Dark-mode React SPA. Gate accordion with description, objective, threshold, and raw metrics per gate.
Sync benchmark runners execute in thread pools — uvicorn event loop stays free during 90s+ jobs.
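The pattern is the standard asyncio one: hand the blocking call to a worker thread and `await` the result, so the loop keeps serving other requests. `run_benchmark` below is a stand-in for a real sync runner, not SPO's code:

```python
# Sketch: run a blocking benchmark off the event loop via a thread pool.
import asyncio
import time


def run_benchmark(experiment_id: str) -> dict:
    time.sleep(0.1)   # stands in for a 90s+ CPU-bound job
    return {"experiment_id": experiment_id, "status": "PASS"}


async def handle(experiment_id: str) -> dict:
    # asyncio.to_thread dispatches the sync call to the default thread
    # pool; the event loop is free to await other coroutines meanwhile.
    return await asyncio.to_thread(run_benchmark, experiment_id)


result = asyncio.run(handle("my_exp"))
```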
Gate catalog → YAML today. Postgres tomorrow. Replace _load_catalog(). API unchanged.
Every module ships a spec — decision framework, exit gate, file list — before a single line is written. Audit happens before implementation, not after.
A system either passes a gate or it doesn't. No "mostly passing." Binary outcomes force clear threshold thinking up front, eliminating judgment calls at ship time.
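The rule above, as code: a gate compares a metric to a fixed threshold and yields PASS or FAIL, and the overall result is PASS only if every required gate passes. Function names here are illustrative:

```python
# Sketch of binary gate evaluation and all-required aggregation.
def evaluate_gate(metric: float, threshold: float) -> str:
    # 0.949 against a 0.95 threshold is FAIL; "mostly passing" doesn't exist.
    return "PASS" if metric >= threshold else "FAIL"


def overall_status(gates: list[dict]) -> str:
    required = [g for g in gates if g.get("required", True)]
    return "PASS" if all(g["status"] == "PASS" for g in required) else "FAIL"
```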
Every module must evaluate on commodity hardware without GPU. Cloud accelerators are a deployment option, not a requirement for running governance checks.
Gate engine, memory layer, reasoning loop — open. Policy enforcement, two-pass review, cloud orchestration — the commercial layer built on top.
Every layer reads from files first. Postgres, object storage, or vector DB can replace the file layer without touching upstream API contracts.
Gate-based governance for your AI experiments. Self-hosted in under 5 minutes.