Technical Overview

For engineers integrating the API or SDK

How Clariva checks an AI request before execution.

Clariva runs as an API/SDK control layer in the customer's approved environment, checking policy, proof, replay state, provider eligibility, and audit requirements before a provider call proceeds.

Lifecycle

The request path is explicit.

Each step has a clear purpose so security, platform, and engineering teams can reason about what happened.

01

Source Workflow

A user action creates an AI-bound request.

02

Payload Handling

Sensitive content can be transformed by the customer application or by Clariva's policy-driven sanitization step inside the deployed control layer.

03

Proof Evidence

The request carries evidence about the control path.

04

Replay Check

Challenge and nonce details are checked to reject reused requests.

05

Policy Decision

Clariva checks whether the request is allowed for that workflow.

06

Provider Route

Approved requests move to an eligible provider route.

07

Rejection Path

Failed requests return a clear reason and do not continue.

08

Audit Record

The decision path is preserved for review.
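The eight steps above can be sketched as a single guard function. This is an illustrative sketch only: every field name, helper, and reason code here is an assumption for explanation, not the actual Clariva SDK surface.

```javascript
// Illustrative sketch of the lifecycle (steps 03-08): proof evidence,
// replay check, policy decision, provider route, rejection path, audit.
// All names and shapes are hypothetical.
function evaluateRequest(request, state) {
  const audit = []; // 08: every step is recorded for review

  // 03: the request must carry proof evidence about the control path
  if (!request.proofArtifact) return reject("proof_verification_failed", audit);

  // 04: reject reused challenge/nonce values
  if (state.seenNonces.has(request.nonce)) return reject("replay_detected", audit);
  state.seenNonces.add(request.nonce);
  audit.push("replay_check_passed");

  // 05: is this workflow allowed at all?
  if (!state.allowedWorkflows.has(request.workflow)) {
    return reject("policy_denied", audit);
  }
  audit.push("policy_allowed");

  // 06: resolve an eligible provider route
  const route = state.routes[request.requestedProviderRoute];
  if (!route) return reject("route_unavailable", audit);
  audit.push("routed:" + route);

  return { decision: "ALLOW", route, audit };
}

function reject(reasonCode, audit) {
  // 07: failed requests return a clear reason and do not continue
  audit.push("rejected:" + reasonCode);
  return { decision: "REJECTED", reasonCodes: [reasonCode], audit };
}
```

Note that the rejection path returns a structured reason rather than throwing, so the caller can surface it to the application and the audit trail records every branch taken.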

API / SDK Shape

A request contract that carries context and proof.

The request includes enough structure for Clariva to decide whether the workflow is allowed to reach a provider. Full implementation details can be handled in developer documentation or SDK reference material.

A typical API evaluation starts by routing one AI-bound workflow through the deployed Clariva control layer in the customer's approved environment instead of sending it directly to the model provider.

  • Source application and workflow context.
  • Challenge binding to prevent old requests from being reused.
  • Payload commitment and proof evidence.
  • Policy binding and requested provider route.
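One plausible shape for a request body carrying those four groups is sketched below. All field names are hypothetical; the production schema, endpoint, and authentication are confirmed during evaluation.

```javascript
// Hypothetical request contract; field names are illustrative only.
const request = {
  source: { application: "support-portal", workflow: "support_ticket_summary" },
  challenge: { nonce: "nonce-7f3a", issuedAt: "2026-04-29T04:45:00.000Z" },
  payload: {
    commitmentHash: "sha256:<payload-hash>", // commitment to the sanitized payload
    proofArtifact: "proof-ref-001"           // evidence about the control path
  },
  policy: { templateId: "api_sdk_default" },
  requestedProviderRoute: "provider.route.support_default"
};

// A client-side sanity check before sending: every group must be present.
const requiredFields = ["source", "challenge", "payload", "policy", "requestedProviderRoute"];
const missing = requiredFields.filter((field) => !(field in request));
```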
Evidence source

Deterministic synthetic harness.

The request contract is evaluated through controlled synthetic/test-tenant evidence artifacts before broader integration review.


Admit / Reject Decision Artifact

Rejected decision example
{
  "scenarioId": "tier1_rejected_decision",
  "requestId": "req_tier1_rejected_decision",
  "decision": "REVIEW_REQUIRED",
  "reasonCodes": [
    "proof_verification_failed",
    "policy_denied"
  ],
  "providerExecutionStatus": "not_executed",
  "routeStatus": "rejected",
  "proofReplayStatus": "REJECTED",
  "auditReference": "audit:tier1:rejected"
}
Starter API Contracts

Each starter profile maps to a bounded intake contract.

These starter contract examples show the evaluation inputs Clariva reviews for a bounded workflow: source, requested provider route, policy context, and evidence needs. They are synthetic examples and do not include raw customer data, provider output, tokens, or secrets.

These are website-safe illustrative examples. The final endpoint, deployment base URL, payload shape, authentication, production schema, and deployment responsibilities are confirmed during evaluation and contract review. Recommendation-path fields are website-safe routing hints from starter intake, not complete, executable public endpoint documentation.

API / SDK Starter

Direct API contract
{
  "method": "POST",
  "path": "/v1/starter-intake",
  "body": {
    "orgId": "org.default",
    "workspaceId": "workspace.one",
    "starterProfileId": "api_sdk_starter",
    "sourceType": "api",
    "integrationSurface": "direct_api",
    "policyTemplateId": "api_sdk_default",
    "providerRouteId": "provider.route.direct_api",
    "dataClasses": ["api_key", "email", "person_name"],
    "evidenceLevel": "standard",
    "environment": "sandbox",
    "signedIngressRequired": false,
    "regulatedReviewRequired": false,
    "providedProofSurfaces": ["payloadCommitmentHash", "proofArtifact"],
    "completedReadinessItems": [
      "direct_api_client_configured",
      "proof_artifact_ready",
      "tenant_scope_confirmed"
    ],
    "rawContentOnly": false,
    "backendProofSubstitutionRequested": false
  },
  "expected": {
    "recommendedPolicyTemplate": "api_sdk_default",
    "apiSdkPath": "/v1/requests",
    "manualReviewRequired": false
  }
}
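The `expected` block above suggests that intake hints can be derived from the submitted body. One plausible derivation is sketched below; the actual evaluation logic is not public, so the rules here (for example, that `regulatedReviewRequired` or `rawContentOnly` triggers manual review) are assumptions for illustration only.

```javascript
// Hypothetical derivation of starter-intake hints from the request body.
// The trigger conditions for manual review are assumptions, not the
// documented evaluation rules.
function deriveIntakeHints(body) {
  const manualReviewRequired =
    body.regulatedReviewRequired ||
    body.rawContentOnly ||
    !body.providedProofSurfaces.includes("proofArtifact");
  return {
    recommendedPolicyTemplate: body.policyTemplateId,
    apiSdkPath: "/v1/requests",
    manualReviewRequired
  };
}
```

Applied to the starter body shown above, this reproduces the `expected` values in the sample contract.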
Sample Request

Inspect controlled evidence before booking a call.

The examples below are synthetic/test-tenant artifacts intended to show the shape of the review flow. Customer-specific evidence and deployment terms are scoped during evaluation.

Evidence harness

Tier 1 Buyer Trust Evidence Capture Harness

  • Final status: CLOSED_REVALIDATED
  • Result: PASS
  • Decision artifacts: 2
  • Evaluation evidence sections: 3
Harness validation

Validation commands marked PASS.

  • npm run evidence:tier1 → PASS
  • npm run validate:static → PASS
  • npm run validate:build → PASS
  • npm run typecheck → PASS
  • npm run test → PASS
Synthetic evidence checks

Checks demonstrated in the current website-safe synthetic evidence pack.

  • Admitted and rejected synthetic decision artifacts exist.
  • Artifacts include scenario, request, decision, policy, provider, and audit fields.
  • Artifacts are synthetic/test-tenant and deterministic.
Provenance

Approved evidence source.

Website-safe evidence source for the selected workflow.

Source: generated website-safe synthetic/test-tenant evidence artifacts.

Admit / reject decision artifacts

Decision outcome
[
  {
    "requestId": "req_tier1_admitted_decision",
    "policyDecision": "ALLOW",
    "status": "SUPPORTED",
    "reasonCodes": [],
    "evidenceReferences": [
      {
        "evidenceHash": "9ec4a3ac3655f93f42edcde106273f7ab07f508698bfab72527b14a56783094a",
        "generatedAt": "2026-04-29T04:45:00.000Z"
      }
    ]
  },
  {
    "requestId": "req_tier1_rejected_decision",
    "policyDecision": "REVIEW_REQUIRED",
    "status": "REJECTED",
    "reasonCodes": [
      "proof_verification_failed"
    ],
    "evidenceReferences": [
      {
        "evidenceHash": "d73cb44ef6887aca59c8c058e2048275d64a25c08ad86cd4abddf1aa2a7fd50e",
        "generatedAt": "2026-04-29T04:45:00.000Z"
      }
    ]
  }
]
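A reviewer inspecting this artifact array might partition it into supported and rejected requests and collect reason codes for follow-up. The sketch below assumes only the artifact shape shown above (`requestId`, `status`, `reasonCodes`); the helper name is illustrative.

```javascript
// Summarize decision artifacts of the shape shown above: supported
// request IDs on one side, rejected requests with reason codes on the other.
function summarizeDecisions(artifacts) {
  const summary = { supported: [], rejected: [] };
  for (const artifact of artifacts) {
    if (artifact.status === "SUPPORTED") {
      summary.supported.push(artifact.requestId);
    } else {
      summary.rejected.push({
        requestId: artifact.requestId,
        reasonCodes: artifact.reasonCodes
      });
    }
  }
  return summary;
}
```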
Day 1 Evaluation

Inspect the deployed request path.

Use this section to inspect how an application would route a narrow API evaluation through the deployed Clariva control layer before broader integration review.

Evaluation snippet

Request example
const response = await fetch("https://<customer-deployment-base-url>/v1/requests", {
  method: "POST",
  headers: {
    "Authorization": "Bearer CLARIVA_API_KEY",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    workflow: "support_ticket_summary",
    requestedProviderRoute: "provider.route.support_default",
    payload: {
      sanitizedText: "Customer cannot access [ACCOUNT_REFERENCE]."
    }
  })
});
Decision Responses

Approved and rejected paths are both clear.

Clariva is designed so a failed request does not quietly fall through to a provider. Applications can handle rejection reasons directly.

Decision Outcomes
  • Approved: policy matched, proof verified, replay check passed.
  • Route: allowed provider selected.
  • Record: audit evidence created.
  • Rejected: missing proof, stale challenge, disallowed policy, or unavailable route.
  • No route: provider execution is blocked.
  • Reason: the application receives a structured error.
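Application-side handling of both outcomes can be sketched as follows. The decision values and reason codes mirror the synthetic artifacts shown earlier; the production response schema is confirmed during evaluation, so treat the field names as assumptions.

```javascript
// Sketch of handling a Clariva decision response so a failed request
// never falls through to a provider. Field names are illustrative.
function handleDecision(decision) {
  if (decision.policyDecision === "ALLOW") {
    // Approved: proceed along the selected provider route.
    return { proceed: true, route: decision.route ?? null };
  }
  // Rejected or review-required: stop and surface structured reasons.
  return { proceed: false, reasonCodes: decision.reasonCodes ?? [] };
}
```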
Read the fail-closed technical brief →
Latency

How much latency does Clariva add?

Clariva sits in the request path before provider execution, so latency is measured during evaluation for the specific workflow, policy depth, provider route, and audit requirements. Evaluation focuses on whether the control step is acceptable for the customer’s production use case before broader rollout.
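During such an evaluation, the added latency of the control step can be measured directly around the call. The harness below is a generic sketch (not a Clariva utility): pass in the real request function during a pilot.

```javascript
// Generic latency harness: time an async call over several samples and
// report the median and worst observation in milliseconds.
async function measureLatencyMs(call, samples = 5) {
  const timings = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now(); // global in Node 16+ and browsers
    await call();
    timings.push(performance.now() - start);
  }
  timings.sort((a, b) => a - b);
  return { p50: timings[Math.floor(samples / 2)], max: timings[samples - 1] };
}
```

In a pilot, `call` would wrap the `fetch` to the deployed control layer shown earlier, giving a per-workflow number to compare against the production latency budget.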

Technical Review

Review the contract shape.

Use the starter contract and sample decisions to inspect how the deployed control layer handles an API-bound workflow.