AI Model Requirements

Salient is designed to work with the best AI models available. While the platform functions at every tier, the quality of intelligence you get scales directly with model capability.

Not all models are equal

Salient's most powerful features — AI scoring, fact mining, playbook generation, posture assessment, and exercise facilitation — depend on strong reasoning capability. Using a weaker model doesn't break the platform, but it significantly reduces the depth and accuracy of the intelligence you get back.

| Platform | Model | Why |
| --- | --- | --- |
| Claude Code | Claude Opus 4 / Sonnet 4 | Full reasoning, 200K+ context, native MCP, /ttx skill support |
| Claude Desktop | Claude Opus 4 / Sonnet 4 | Same capabilities via MCP tools |
| ChatGPT | GPT-4.5 / GPT-4o | Strong reasoning, MCP via Developer Mode |
| Gemini | Gemini Ultra / Pro | Good reasoning, MCP via CLI or Enterprise |
| Bedrock | Claude Sonnet 4 via Bedrock | Enterprise deployment with full capability |

Our recommendation: Claude Opus 4 or Sonnet 4 with Claude Code.

This isn't just vendor loyalty — Salient's /ttx skill, evaluator framework, and facilitation prompts are architected for Claude's reasoning style. The compiled twin (SIF) and MCP tools work with any model, but the facilitation quality peaks with Claude.

What Each Model Tier Gets You

Claude Opus 4, Sonnet 4, GPT-4.5, Gemini Ultra

Full capability across every feature:

  • Exercise facilitation: Deep, probing follow-ups. Catches vague answers. Adapts questions to your org's actual tools and team structure.
  • AI scoring: Contextual evaluation against your twin. Not just rubric matching — understands nuance, catches contradictions with prior exercises.
  • Fact mining: Extracts 15-25 facts per exercise. Catches tools, processes, gaps, decision patterns, contradictions, and absences.
  • Playbook generation: Org-specific, gap-driven, actionable runbooks. References your actual tools and team roles.
  • Posture assessment: Cross-source synthesis with board-ready narrative. Weighs source confidence, flags contradictions.
  • Compiled twin consumption: Can process Tier 2-3 SIF (~800-3000 tokens) and reason deeply about organizational context.

Context window matters

Models with 200K+ context windows can consume the full compiled twin AND facilitate complex exercises without losing context. This is where SIF's compression pays off — even Tier 3 full twin fits easily.
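The arithmetic is worth making explicit. A quick back-of-envelope sketch using the token figures quoted on this page (exact counts vary by tokenizer, so treat these as illustrative):

```python
# Context budgeting with the figures from the text above.
CONTEXT_WINDOW = 200_000   # frontier-model context window
TIER3_TWIN = 3_000         # upper bound for a Tier 3 compiled twin (SIF)

share = TIER3_TWIN / CONTEXT_WINDOW
print(f"Tier 3 twin uses {share:.1%} of a 200K window")   # 1.5%
print(f"{CONTEXT_WINDOW - TIER3_TWIN:,} tokens remain for the exercise")
```

Even the largest compiled twin consumes well under 2% of a frontier window, which is why facilitation can keep the whole twin in context for the entire exercise.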

Claude Haiku 4.5, GPT-4o-mini, Gemini Flash, Sonnet 3.5

Adequate for basics, but noticeably shallower:

  • Exercise facilitation: Follows the script but probes less. May accept vague answers without pushing deeper.
  • AI scoring: Matches rubric keywords but misses contextual nuance. Scores tend to be inflated.
  • Fact mining: Extracts 5-10 facts per exercise. Catches obvious tools and gaps but misses decision patterns and contradictions.
  • Playbook generation: Template-quality rather than org-specific. Generic recommendations instead of tool-specific actions.
  • Posture assessment: Produces a summary but without the cross-source insights that make it valuable.
  • Compiled twin consumption: Can process Tier 1 executive SIF well. May lose detail from Tier 2-3.

Still valuable

Mid-tier models still run exercises, save answers, trigger the intelligence loop, and build the twin. The keyword scoring fallback works regardless of model. You're getting less from the AI features, but the platform mechanics still function.
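For intuition, a keyword fallback of this kind can be sketched in a few lines. Everything below (the rubric categories, the keywords, the function name) is illustrative, not Salient's actual scoring code:

```python
# Hypothetical keyword-fallback scorer: counts rubric keyword hits
# in a free-text exercise answer. Rubric contents are invented for
# illustration only.
RUBRIC_KEYWORDS = {
    "tools": {"edr", "siem", "firewall", "mfa"},
    "process": {"escalate", "isolate", "notify", "runbook"},
}

def keyword_score(answer: str) -> float:
    """Fraction of rubric keywords mentioned in the answer."""
    words = set(answer.lower().replace(",", " ").split())
    total = sum(len(ks) for ks in RUBRIC_KEYWORDS.values())
    hits = sum(len(ks & words) for ks in RUBRIC_KEYWORDS.values())
    return round(hits / total, 2)

print(keyword_score("We isolate the host with EDR and notify the SIEM team"))
# 0.5  (edr, siem, isolate, notify = 4 of 8 keywords)
```

This is exactly why keyword scoring identifies mentioned tools reliably but cannot judge whether an answer is actually good: a vague answer that name-drops the right keywords scores the same as a strong one.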

GPT-3.5, Gemini Nano, local Llama/Mistral (7B-13B), free tier APIs

Basic functionality only:

  • Exercise facilitation: Can present questions and record answers, but minimal adaptation or probing.
  • AI scoring: Falls back to keyword scoring (built-in, no AI needed). Accurate for identifying mentioned tools but can't evaluate response quality.
  • Fact mining: Keyword fallback extracts tool mentions and basic gap phrases. ~3-5 facts per exercise.
  • Playbook generation: Template-based fallback (built-in). Functional but generic.
  • Posture assessment: May fail or produce superficial output. The cross-source synthesis requires stronger reasoning.
  • Compiled twin consumption: Tier 1 executive SIF (~150 tokens) is the right level. Higher tiers may overwhelm the model.

The platform still works

Salient is designed with graceful degradation. Every AI-powered feature has a keyword or template fallback. You can run exercises, build your twin, and track maturity scores with any model — or no model at all. The AI features are the multiplier, not the foundation.

How SIF Helps Weaker Models

This is one of the strategic reasons SIF exists. A 150-token Tier 1 executive twin gives even a small model the essential context:

@ORG AcmeCorp ind:MFG emp:250 it:3/MSP risk:moderate
@CTRL ID:65↑ PR:48→ DE:35↑ RS:55↑ RC:42→ | Σ:52↑8/mo
@GAPS.H no-escalation-afterhours(3x,RS.CO) mfa-vpn(2x,PR.AC)

A weaker model can parse this and understand: "Manufacturing company, 250 people, posture score 52 improving, biggest gaps are after-hours escalation and VPN MFA." That's enough to facilitate a meaningful exercise — without the model needing to make 5 tool calls to discover the same information.

SIF is a force multiplier for model capability. The more compressed the context, the less reasoning the model needs to spend on discovery, and the more it can spend on actual analysis.
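To make that concrete, here is a minimal parser for the three Tier 1 lines above. The field names (`org`, `controls`, `gaps`) and the regexes are assumptions inferred from this single example, not a published SIF grammar:

```python
import re

SIF = """@ORG AcmeCorp ind:MFG emp:250 it:3/MSP risk:moderate
@CTRL ID:65↑ PR:48→ DE:35↑ RS:55↑ RC:42→ | Σ:52↑8/mo
@GAPS.H no-escalation-afterhours(3x,RS.CO) mfa-vpn(2x,PR.AC)"""

def parse_tier1(sif: str) -> dict:
    """Recover structured fields from a Tier 1 executive twin (illustrative)."""
    out = {}
    for line in sif.splitlines():
        tag, _, body = line.partition(" ")
        if tag == "@ORG":
            fields = body.split()
            out["org"] = fields[0]
            out.update(dict(f.split(":", 1) for f in fields[1:]))
        elif tag == "@CTRL":
            # key:value pairs like ID:65; trend arrows are ignored
            out["controls"] = dict(re.findall(r"(\w+):(\d+)", body))
        elif tag.startswith("@GAPS"):
            # gap names precede their "(count,control)" annotation
            out["gaps"] = re.findall(r"([\w-]+)\(", body)
    return out

twin = parse_tier1(SIF)
print(twin["org"], twin["controls"]["ID"], twin["gaps"])
# AcmeCorp 65 ['no-escalation-afterhours', 'mfa-vpn']
```

A small model doesn't need to parse this programmatically, of course; the point is that the format is regular enough that even shallow pattern matching recovers the essential context in one pass.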

Backend AI Provider

Salient's backend AI features (scoring, mining, playbook generation) use their own API connection, independent of which AI platform you use for facilitation. Configure in .env:

# Backend AI provider — powers scoring, mining, and generation
AI_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...

Backend model ≠ facilitation model

The backend AI provider scores exercises, mines facts, and generates playbooks server-side. This is separate from whatever model you use to facilitate exercises (Claude Code, ChatGPT, etc.). For best results, use a frontier model for BOTH:

  • Backend: AI_PROVIDER=anthropic with a Claude Sonnet/Opus API key
  • Facilitation: Claude Code with Opus/Sonnet, or equivalent frontier model

Minimum Requirements

| Component | Minimum | Recommended |
| --- | --- | --- |
| Facilitation model | Any with MCP support | Claude Opus 4 / Sonnet 4 |
| Backend AI | None (keyword fallback) | Anthropic API (Sonnet 4) |
| Context window | 8K+ tokens | 200K+ tokens |
| MCP transport | stdio or HTTP | stdio (local) or HTTP (remote) |

The Bottom Line

Salient is built for frontier models. That's not gatekeeping — it's physics. The intelligence you get out is proportional to the reasoning capability you put in. SIF compression helps bridge the gap, and every feature degrades gracefully, but if you want the full value — the probing facilitation, the contradiction detection, the cross-source posture assessment — use the best model available to you.

The platform is free. The model is the investment.