Twin Intelligence¶
Once your twin has data, it becomes queryable, searchable, and capable of cross-referencing insights across every source. All intelligence features run locally — no data leaves your infrastructure.
Semantic Search¶
Twin intelligence is powered by local embeddings using fastembed with the all-MiniLM-L6-v2 model (ONNX Runtime). The twin chunks and indexes:
- Organization profile sections
- Exercise answers and scores
- Mined facts and their evidence
- Security events and triage results
- Connector sync data
- Uploaded artifact contents
Queries use cosine similarity to find relevant chunks, then synthesize answers using RAG (retrieval-augmented generation).
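The retrieval step can be sketched in a few lines. This is a minimal illustration, not the twin's actual implementation: it assumes chunk embeddings have already been produced (in the real pipeline, by fastembed's all-MiniLM-L6-v2 model) and shows only the cosine-similarity ranking that selects chunks for the RAG step.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_chunks(query_vec, index, k=3):
    # index: list of (chunk_text, embedding) pairs; return the k most
    # similar chunks, which the twin would then pass to answer synthesis.
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

The selected chunks become the context for the synthesized, citation-backed answer.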
Natural Language Queries¶
Ask anything about your organization:
```python
query_twin("What EDR tools do we have?")
query_twin("Are we prepared for a ransomware attack?")
query_twin("What are our weakest NIST CSF functions?")
query_twin("Who handles incident escalation after hours?")
```
The twin searches across all indexed data, ranks results by relevance, and produces a synthesized answer with source citations.
Event Bridging¶
When a real security event occurs, the twin connects it back to training. Bridging an event returns:
- Related gaps from past exercises that match this event type
- Exercises where the team practiced this scenario
- Playbooks that cover this incident category
- Operations (purple team activities) relevant to this attack vector
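The bridging step above can be sketched as a similarity match from the event to the twin's indexed training artifacts. Everything here is illustrative: `bridge_event`, the artifact shape, and the injected `similarity` callable are assumptions, not the real API (which would score against the embedding index rather than raw text).

```python
def bridge_event(event_text, artifacts, similarity, threshold=0.6):
    # artifacts: dicts like {"kind": "gap", "text": "..."}; hypothetical shape.
    # similarity: callable scoring event_text against an artifact's text.
    buckets = {"gaps": [], "exercises": [], "playbooks": [], "operations": []}
    kind_to_bucket = {"gap": "gaps", "exercise": "exercises",
                      "playbook": "playbooks", "operation": "operations"}
    for art in artifacts:
        if similarity(event_text, art["text"]) >= threshold:
            buckets[kind_to_bucket[art["kind"]]].append(art["text"])
    return buckets
```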
Training meets reality
Event bridging is how Salient closes the loop between exercises and real incidents. When your SOC triages an alert, the twin instantly shows what the team has practiced and where preparation gaps exist.
Pattern Detection¶
The detect_patterns tool groups semantically similar gaps across exercises, even when they are described differently each time. Example output:
- "After-hours escalation" — appeared in 3 exercises over 2 months, phrased differently each time
- "MFA gaps on VPN" — identified in exercise scoring and confirmed by Okta connector data
- "EDR alert triage latency" — team consistently takes >4 hours to investigate in exercises
Patterns reveal systemic weaknesses that individual exercise reviews might miss.
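One way to group recurring gaps is a greedy single-pass clustering over pairwise similarity. This is a sketch of the idea only; the real tool compares embedding vectors, and the `similarity` callable here stands in for that.

```python
def cluster_gaps(gaps, similarity, threshold=0.5):
    # Each gap joins the first cluster whose representative it resembles,
    # otherwise it starts a new cluster of its own.
    clusters = []
    for gap in gaps:
        for cluster in clusters:
            if similarity(gap, cluster[0]) >= threshold:
                cluster.append(gap)
                break
        else:
            clusters.append([gap])
    # A pattern is any gap that recurs: clusters with more than one member.
    return [c for c in clusters if len(c) > 1]
```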
Scenario Recommendations¶
Based on detected patterns and maturity scores, the twin recommends what to exercise next, prioritizing:
- Weakest NIST CSF functions
- Recurring gap patterns that haven't been addressed
- Areas with no exercise coverage
- Threat intel matches for your industry and tech stack
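The first three criteria above can be folded into a simple additive score. This is a hedged sketch: the function name, candidate shape, and weights are assumptions for illustration, and the threat-intel criterion is omitted since its data shape isn't described here.

```python
def recommend_scenarios(candidates, maturity, patterns, covered, top_n=2):
    # candidates: dicts like {"name": ..., "csf_function": ..., "gap_pattern": ...}
    # maturity: NIST CSF function -> score in [0, 1]; lower means weaker.
    def score(c):
        s = 1.0 - maturity.get(c["csf_function"], 0.0)  # weakest functions first
        if c.get("gap_pattern") in patterns:            # recurring, unaddressed gaps
            s += 0.5
        if c["name"] not in covered:                    # no prior exercise coverage
            s += 0.25
        return s
    return sorted(candidates, key=score, reverse=True)[:top_n]
```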
Answer Mining¶
After every exercise, the AI extracts organizational facts from team answers:
| Category | Examples |
|---|---|
| tool | "We use CrowdStrike for EDR" |
| gap | "We don't have an after-hours escalation path" |
| process | "IR lead triages all P1 alerts" |
| person | "Sarah handles vendor communications" |
| decision_pattern | "Team consistently prioritizes containment over forensics" |
| contradiction | "Said IR plan is quarterly but last update was 18 months ago" |
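A mined fact can be pictured as a small record tying a category to its statement and evidence. The field names below are assumptions for illustration, not the twin's real schema.

```python
from dataclasses import dataclass, field

@dataclass
class MinedFact:
    # Illustrative shape only; the real schema is internal to the twin.
    category: str        # tool | gap | process | person | decision_pattern | contradiction
    statement: str       # e.g. "We use CrowdStrike for EDR"
    evidence: list = field(default_factory=list)  # answer snippets backing the fact
    confidence: str = "declared"                  # see the confidence model below

fact = MinedFact(category="tool",
                 statement="We use CrowdStrike for EDR",
                 evidence=["exercise answer snippet"])
```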
Confidence Model¶
Every fact in the twin carries a confidence level:
| Level | Meaning | Source |
|---|---|---|
| Declared | Organization stated this | Profile, exercise answers |
| Observed | Seen in multiple exercises | Cross-exercise pattern detection |
| Verified | Confirmed by external data | Connector sync, artifact upload |
| Uncertain | Mentioned once, not confirmed | Single exercise mention |
| Contradicted | Conflicting evidence exists | Declared vs. observed mismatch |
Confidence flows upward: a declared fact becomes observed when seen again, and verified when a connector confirms it. Contradictions are surfaced explicitly — they are often the most valuable insights.
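The upward flow can be sketched as a small transition table. The transition names are assumptions about the internal model; the behavior shown is only what this section describes: facts promote on repeat sightings or connector confirmation, and conflicting evidence always surfaces as contradicted.

```python
# Assumed transition names; only the promotion behavior is from the doc.
PROMOTIONS = {
    ("declared", "seen_again"): "observed",
    ("declared", "connector_confirms"): "verified",
    ("observed", "connector_confirms"): "verified",
    ("uncertain", "seen_again"): "observed",
}

def update_confidence(level, evidence):
    if evidence == "conflict":
        # Contradictions surface explicitly, whatever the prior level.
        return "contradicted"
    return PROMOTIONS.get((level, evidence), level)
```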