Local-First Privacy & Trust
Clinical Corvus enforces a strict 'Local-First' Trust Boundary. By default, all reasoning and ingestion occur within the secure backend, minimizing the exposure of Protected Health Information (PHI).
Egress-Filtered Information Retrieval
When external evidence is needed, Clinical Corvus uses an egress-filtered approach: outbound calls carry only clinical keywords, never patient-identifying context.
When the Clinical Research Agent (CRA) requires external evidence (e.g., searching PubMed or the Open Web), it employs a Local Sanitization Layer:
- Sanitization: A rule-based/regex layer strips PHI (names, MRNs, dates) in-process before any network request is formed (see the sketch after this list).
- Anonymized Queries: External search providers receive only anonymized clinical keywords (e.g., "septic shock protocols", "vancomycin dosing"), never patient context.
- Prevention: The architecture is built so that no PHI leaves the host by default, rather than relying solely on post-hoc filtering.
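As an illustration of the sanitization step, the sketch below shows a minimal rule-based/regex pass. The pattern names and coverage are illustrative only and do not reflect Clinical Corvus's actual rule set.

```python
import re

# Illustrative PHI patterns; a production rule set would be broader and validated.
PHI_PATTERNS = {
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "name_field": re.compile(
        r"\b(?:Pt|Patient)\s*(?:name)?[:\s]+[A-Z][a-z]+(?:\s[A-Z][a-z]+)?",
        re.IGNORECASE,
    ),
}

def sanitize_query(text: str) -> str:
    """Strip PHI in-process so only anonymized clinical keywords leave the host."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}-REDACTED]", text)
    return text

# sanitize_query("Pt name: Jane Doe, MRN: 00451234, 03/12/2024, septic shock protocols")
# -> "[NAME_FIELD-REDACTED], [MRN-REDACTED], [DATE-REDACTED], septic shock protocols"
```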
If an institution prefers to use its own model endpoints (e.g., a private OpenAI, Azure, or other account), Clinical Corvus can be pointed at those credentials so data stays within that institution's governance boundary.
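A minimal configuration sketch, assuming an OpenAI-compatible endpoint reached through the openai Python SDK; the environment-variable names are illustrative, not Clinical Corvus's actual configuration keys.

```python
import os
from openai import OpenAI  # assumes the openai SDK; adjust for Azure or other providers

# Illustrative variable names; the institution holds both the gateway URL and the key.
client = OpenAI(
    base_url=os.environ["INSTITUTION_LLM_BASE_URL"],  # e.g., a private OpenAI-compatible gateway
    api_key=os.environ["INSTITUTION_LLM_API_KEY"],
)
```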
Ephemeral Memory Architecture
To align with data sovereignty requirements (GDPR/LGPD), the system avoids long-term retention of raw patient data on Corvus servers.
- CaseState: Persists structured, episode-scoped reasoning state (snapshots + patches) on institutional infrastructure (local storage).
- AgentMemoryService: Stores short-term event timelines (Redis) for conversation continuity.
- Data Lifecycle Management: Automated TTL (Time-To-Live) sweeps sanitize ephemeral logs and age out long-term entries, ensuring compliance without manual intervention.
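A minimal sketch of the short-term event timeline with automatic age-out, assuming Redis via redis-py; the key names and TTL value are illustrative, not the service's actual settings.

```python
import json
import time
import redis  # assumes redis-py

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

EPISODE_TTL_SECONDS = 60 * 60 * 24  # hypothetical 24-hour retention for an episode timeline

def append_event(episode_id: str, event: dict) -> None:
    """Append a short-term timeline event and refresh the episode's TTL."""
    key = f"agent_memory:{episode_id}"
    r.rpush(key, json.dumps({"ts": time.time(), **event}))
    r.expire(key, EPISODE_TTL_SECONDS)  # Redis ages the timeline out automatically
```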
Agentic Safety Layers
Trust is not just about privacy; it is also about the reliability of the clinical advice. Corvus implements active safety patterns:
1. Goal Verification
Before returning any response, a dedicated verification loop (VerifyGoalCompletion) assesses whether the drafted answer actually addresses the user's core intent.
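A hypothetical sketch of this gate; the actual VerifyGoalCompletion interface is not shown here, so the produce and judge callables and the prompt wording are assumptions.

```python
def verify_goal_completion(user_intent: str, draft_answer: str, judge) -> bool:
    """Ask a judge model whether the draft actually answers the user's core intent."""
    verdict = judge(
        f"User intent: {user_intent}\n"
        f"Draft answer: {draft_answer}\n"
        "Does the draft fully address the intent? Reply YES or NO."
    )
    return verdict.strip().upper().startswith("YES")

def respond(user_intent: str, produce, judge, max_attempts: int = 2) -> str:
    draft = produce(user_intent)
    for _ in range(max_attempts):
        if verify_goal_completion(user_intent, draft, judge):
            return draft
        draft = produce(user_intent)  # regenerate if the goal check fails
    return draft  # fall through; downstream safety layers still apply
```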
2. The Critic Agent
A "Producer-Critic" pattern employs a separate adversarial model to critique responses for accuracy, completeness, and clarity. It flags responses with INFO, WARNING, or ERROR severity before they reach the clinician.
3. Confidence-Based Escalation
A formalized trust model (Low, General, High Clinical, Critical Life Safety) automatically triggers a "Human-in-the-Loop" pause (HitlPauseTool) if confidence dips below the threshold required for the specific care setting (e.g., < 0.98 for Critical Life Safety).
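A sketch of the escalation check. Only the 0.98 Critical Life Safety threshold comes from the description above; the other thresholds and the hitl_pause callable are placeholders standing in for HitlPauseTool.

```python
from enum import Enum

class TrustLevel(Enum):
    LOW = "Low"
    GENERAL = "General"
    HIGH_CLINICAL = "High Clinical"
    CRITICAL_LIFE_SAFETY = "Critical Life Safety"

# Only the 0.98 value is documented; the remaining thresholds are illustrative.
CONFIDENCE_THRESHOLDS = {
    TrustLevel.LOW: 0.50,
    TrustLevel.GENERAL: 0.70,
    TrustLevel.HIGH_CLINICAL: 0.90,
    TrustLevel.CRITICAL_LIFE_SAFETY: 0.98,
}

def maybe_pause_for_human(confidence: float, level: TrustLevel, hitl_pause) -> bool:
    """Trigger a Human-in-the-Loop pause when confidence falls below the care setting's threshold."""
    if confidence < CONFIDENCE_THRESHOLDS[level]:
        hitl_pause(reason=f"confidence {confidence:.2f} below {level.value} threshold")
        return True
    return False
```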
Security Posture
- Content-Security-Policy (CSP): Strict headers to prevent XSS.
- Audit Logging: Deterministic logging for all PHI-touching endpoints.
- Rate Limiting: Centralized rate limiting to prevent abuse.
- Opt-In Compliance: For deployments using hosted models (e.g., GPT-4), third-party endpoints are reachable only if explicit Zero-Data Retention (ZDR) and BAA/DPA agreements are verified.
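As a sketch of that opt-in gate, the snippet below refuses to construct a client for a third-party endpoint unless both agreements are recorded; the type and field names are illustrative, not Clinical Corvus's actual configuration schema.

```python
from dataclasses import dataclass

@dataclass
class HostedModelConfig:
    # Field names are illustrative; they mirror the opt-in requirements described above.
    provider: str
    zdr_verified: bool       # Zero-Data Retention confirmed with the provider
    baa_or_dpa_signed: bool  # BAA/DPA executed with the provider

def assert_third_party_allowed(cfg: HostedModelConfig) -> None:
    """Refuse to reach a hosted-model endpoint unless compliance prerequisites are verified."""
    if not (cfg.zdr_verified and cfg.baa_or_dpa_signed):
        raise PermissionError(
            f"Third-party endpoint '{cfg.provider}' blocked: ZDR and BAA/DPA must be verified first."
        )
```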