Architecture¶
Salient is a self-hosted platform built on a modern, lightweight stack designed to run on minimal infrastructure (a single $6/mo VPS).
Stack Overview¶
```
Browser ──► Caddy (TLS, IP allowlist) ──► FastAPI Backend ──► SQLite
               │                               │
               └── Static React Build          ├── AI Provider (Anthropic, OpenAI, etc.)
                                               │
                                               └── fastembed (local ONNX embeddings)
```
| Layer | Technology | Notes |
|---|---|---|
| Frontend | React 18 + Tailwind CSS v3 | Dark theme (slate-950 base, cyan accents) |
| Backend | Python FastAPI + SQLModel | Async, type-safe, auto-generated OpenAPI |
| Database | SQLite | Single-file, zero-ops, auto-migration on startup |
| Reverse Proxy | Caddy 2 | Auto-TLS via Let's Encrypt, IP allowlist, security headers |
| Deployment | Docker Compose | Dev and production configs, 256MB memory limit |
| CI/CD | GitHub Actions | Build images in CI, push to GHCR, droplet pulls |
Three Pillars¶
Salient is organized around three core systems that feed each other:
1. Digital Twin¶
The organization's living security profile. Built from exercises, connector data, uploaded artifacts, and mined facts. Stores everything Salient knows about your people, tools, processes, gaps, and decision patterns.
2. TTX Engine¶
Adaptive incident simulation. Scenarios contain injects with questions scored against NIST CSF, MITRE ATT&CK, and CIS Controls v8. The engine supports three depth levels (Foundational, Intermediate, Advanced) and adaptive branching based on team decisions.
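The scenario/inject relationship described above can be sketched as a small data model. This is an illustrative sketch only — the class names, fields, and framework-tag format are assumptions, not Salient's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Depth(Enum):
    """The three supported exercise depth levels."""
    FOUNDATIONAL = "foundational"
    INTERMEDIATE = "intermediate"
    ADVANCED = "advanced"

@dataclass
class Inject:
    prompt: str
    frameworks: list[str]  # hypothetical tags, e.g. "NIST CSF: DE.AE-2"
    # Adaptive branching: map a team decision to the next inject's id.
    branches: dict[str, str] = field(default_factory=dict)

@dataclass
class Scenario:
    title: str
    depth: Depth
    injects: dict[str, Inject]

scenario = Scenario(
    title="Ransomware in Finance",
    depth=Depth.INTERMEDIATE,
    injects={
        "i1": Inject(
            prompt="Helpdesk reports locked files. What is your first action?",
            frameworks=["NIST CSF: DE.AE-2"],
            branches={"isolate": "i2", "investigate": "i3"},
        ),
    },
)
```

The branching map is what makes the simulation adaptive: the engine follows a different inject chain depending on which decision the team makes.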
3. Output Engine¶
Transforms exercise results and twin intelligence into actionable outputs: maturity scores, gap analysis, incident response playbooks, posture assessments, and compliance evidence across five frameworks.
AI Layer¶
Salient is provider-agnostic. The `AI_PROVIDER` environment variable selects the backend:
| Provider | Status | Use Case |
|---|---|---|
| `anthropic` | Primary | Claude for scoring, playbooks, triage, fact mining |
| `openai` | Supported | GPT-4 as an alternative |
| `google` | Supported | Gemini as an alternative |
| `ollama` | Supported | Fully local, air-gapped deployments |
**AI is optional**
The platform runs without any AI API key. Scoring falls back to keyword matching, and playbook generation uses templates. AI enhances but is not required.
Local Embeddings¶
Semantic search uses `fastembed` with the `all-MiniLM-L6-v2` model via ONNX Runtime. Embeddings are computed locally — no data leaves your infrastructure.
This powers:
- Natural language twin queries
- Cross-exercise pattern detection
- Event-to-exercise bridging
- Scenario recommendations
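Once the model has turned queries and twin facts into vectors (384 dimensions for all-MiniLM-L6-v2), each of these features reduces to nearest-neighbor retrieval by cosine similarity. A minimal sketch of that retrieval step, using toy 2-D vectors in place of real embeddings (the function names are illustrative, not Salient's API):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec: list[float],
          doc_vecs: dict[str, list[float]], k: int = 3) -> list[str]:
    """Return the ids of the k documents most similar to the query."""
    ranked = sorted(doc_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy example: the query vector points the same way as document "a".
results = top_k([1.0, 0.0], {"a": [1.0, 0.0], "b": [0.0, 1.0]}, k=1)
```

In production this would typically run over the stored embedding column rather than an in-memory dict, but the ranking logic is the same.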
Authentication¶
GitHub and Google OAuth provide login. JWTs (7-day expiry) are stored as httpOnly cookies. RBAC enforces three tiers: owner, admin, and member.
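The issue/verify cycle for a 7-day HMAC-signed token can be sketched with the standard library alone. This is a simplified illustration of the JWT mechanics (HS256), not Salient's implementation — a real deployment would use a maintained JWT library and load the secret from configuration:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"change-me"  # hypothetical; load from config in practice

def _b64(data: bytes) -> str:
    """URL-safe base64 without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(sub: str, role: str, days: int = 7) -> str:
    """Sign a header.payload.signature token with a 7-day expiry."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps(
        {"sub": sub, "role": role,
         "exp": int(time.time()) + days * 86400}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify(token: str) -> dict:
    """Check the signature and expiry; return the claims if valid."""
    header, payload, sig = token.split(".")
    expected = _b64(hmac.new(
        SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(
        base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

The server sets the token in an httpOnly cookie so client-side JavaScript can never read it; the RBAC layer then checks the `role` claim (owner, admin, or member) on each request.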