FAQ

Answers for technical, security, and deployment questions.

Use these answers to understand Clariva’s request path, verification posture, provider routing model, and deployment scope.

Common Questions

How Clariva fits before deployment.

Grouped answers for the teams evaluating request-path control, provider boundaries, deployment posture, and review evidence.

Request Path

Where does Clariva sit?

Clariva runs as an API/SDK control layer in the customer's approved environment. Application workflows route AI-bound requests through the deployed Clariva control layer, which evaluates proof, policy, replay state, provider eligibility, and audit requirements before provider execution.

What happens if verification fails?

The request is rejected with structured status and reason information instead of continuing to provider execution.
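As a rough illustration, a structured rejection might look like the following sketch. The field names, status values, and reason codes here are assumptions for discussion, not Clariva's actual API contract.

```python
# Illustrative sketch only: field and status names are assumptions,
# not Clariva's actual API contract.
from dataclasses import dataclass


@dataclass
class VerificationRejection:
    """Structured record returned instead of forwarding to a provider."""
    status: str       # e.g. "rejected"
    reason_code: str  # machine-readable reason, e.g. "PROOF_STALE"
    detail: str       # human-readable explanation for reviewers
    request_id: str   # correlates the rejection with audit records


def reject(reason_code: str, detail: str, request_id: str) -> VerificationRejection:
    # The request never reaches provider execution; the caller receives
    # a structured record it can log, retry, or escalate.
    return VerificationRejection("rejected", reason_code, detail, request_id)
```

A structured record like this lets calling applications branch on the reason code rather than parsing error strings.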

How does starter intake fail closed?

Starter intake rejects drift. The API rejects unknown profiles, mismatched policy templates, mismatched provider routes, unsupported integration surfaces, incomplete proof surfaces, incomplete readiness items, raw-content-only submissions, and backend proof substitution attempts.
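A fail-closed intake check can be sketched as a validator that admits a submission only when every required surface matches exactly. The profile, template, and field names below are invented for illustration and are not Clariva's real configuration.

```python
# Hypothetical fail-closed intake check; profile, template, and route
# names are invented for illustration, not Clariva's real configuration.
KNOWN_PROFILES = {"starter-default"}
KNOWN_POLICY_TEMPLATES = {"pii-basic"}
KNOWN_PROVIDER_ROUTES = {"provider-a"}


def admit(submission: dict) -> bool:
    """Return True only when every required surface matches exactly."""
    checks = [
        submission.get("profile") in KNOWN_PROFILES,
        submission.get("policy_template") in KNOWN_POLICY_TEMPLATES,
        submission.get("provider_route") in KNOWN_PROVIDER_ROUTES,
        bool(submission.get("proof")),                 # raw-content-only submissions fail
        submission.get("proof_source") != "backend",   # no backend proof substitution
    ]
    return all(checks)  # any drift fails closed
```

The key property is that an unknown or missing value is never admitted by default: anything not explicitly recognized falls through to rejection.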

Does Clariva replace model providers?

No. Clariva controls the path to eligible provider execution. It preserves provider flexibility while reducing provider-specific governance sprawl.

Can I see a sample request before booking a call?

Yes. Clariva provides illustrative sample request, decision, rejection, and audit records so technical reviewers can inspect the control model before requesting an evaluation.

Data Boundaries

Where does sensitive text get transformed?

Clariva supports two workflow patterns. A customer application can transform sensitive text before sending the request to the deployed Clariva control layer, or the deployed Clariva control layer can apply policy-driven sanitization, redaction, masking, removal, replacement, or hashing before provider execution. The selected pattern, retained record, deployment model, and evidence scope are finalized during evaluation and contract review.
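The control-layer transformation pattern can be sketched as a policy action applied before provider execution. The regex, policy action names, and digest truncation below are illustrative assumptions, not Clariva's actual sanitization rules.

```python
# Sketch of policy-driven transformation before provider execution.
# The pattern, action names, and digest length are illustrative assumptions.
import hashlib
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def transform(text: str, action: str) -> str:
    if action == "redact":
        # Remove the sensitive value entirely from the outbound text.
        return EMAIL.sub("[REDACTED]", text)
    if action == "hash":
        # Replace each match with a stable digest so repeated values stay
        # correlatable across requests without exposing the raw value.
        return EMAIL.sub(
            lambda m: hashlib.sha256(m.group().encode()).hexdigest()[:12], text
        )
    raise ValueError(f"unknown policy action: {action}")
```

Hashing rather than redacting is useful when downstream analysis needs to know that two requests referenced the same entity without learning what that entity was.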

Does provider execution receive raw workflow content?

The provider execution contract is sanitized-only at the request surface. Provider-bound requests receive transformed content and control metadata rather than raw conversation fields.
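A sanitized-only provider request might be shaped roughly like the sketch below: transformed content plus control metadata, with no raw conversation fields present. The key names are assumptions for illustration.

```python
# Illustrative shape of a provider-bound request under a sanitized-only
# contract. Key names are assumptions, not Clariva's actual schema.
def provider_request(transformed_text: str, decision_id: str, route: str) -> dict:
    return {
        "content": transformed_text,     # already policy-transformed
        "control": {
            "decision_id": decision_id,  # links back to the audit record
            "route": route,              # eligible provider route
        },
        # Deliberately absent: raw conversation fields never appear here.
    }
```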

Is sanitization deterministic?

Policy-driven transformations and canonicalization are designed to be repeatable for the workflow, which supports verification, review, and audit evidence.
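Repeatability is what makes a transformation auditable: the same logical payload must always produce the same commitment. A minimal canonicalization sketch, assuming JSON key-sorting as the canonical form:

```python
# Deterministic canonicalization sketch: the same logical payload always
# yields the same commitment, which is what makes later verification and
# audit comparison possible. JSON key-sorting as the canonical form is an
# assumption for illustration.
import hashlib
import json


def commitment(payload: dict) -> str:
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Two payloads that differ only in key order hash identically, while any change to a value produces a different commitment.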

Deployment

Is Clariva a hosted cloud service or customer-deployed?

Clariva's intended production model is a licensed API/SDK control layer deployed into the customer's approved environment. Evaluation environments, customization support, customer-managed deployment requirements, networking, retention, and support obligations are scoped during evaluation and finalized in a written agreement. Any managed-service, private-deployment, VPC, on-prem, or single-tenant requirement is evaluated separately and should not be treated as currently available unless expressly agreed in writing.

What happens if Clariva is unavailable?

For sensitive workflows, Clariva is part of the enforcement path. If required checks cannot be completed, the safer deployment posture blocks provider execution rather than allowing an unverified request to continue. Retry and escalation behavior can be defined as part of deployment.

How are residency constraints handled?

Routing can evaluate provider residency metadata and policy constraints such as allowed or denied execution regions where configured.
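A residency constraint check can be sketched as a simple eligibility filter over provider metadata. Region codes and the metadata field names are assumptions, not Clariva's schema.

```python
# Illustrative residency filter; region codes and provider metadata
# fields are assumptions, not Clariva's schema.
def eligible_routes(providers: list[dict], allowed: set[str], denied: set[str]) -> list[dict]:
    """Keep only providers whose execution region satisfies policy."""
    return [
        p for p in providers
        if p["region"] in allowed and p["region"] not in denied
    ]
```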

What if provider runtime is not ready?

Runtime readiness checks can prevent routing when production provider credentials, readiness, or availability requirements are not satisfied.

How much latency does Clariva add?

Latency is measured during evaluation for the selected workflow, policy depth, payload size, provider route, and audit requirements. Clariva separates the control-layer decision path from model execution so the team can inspect where time is spent before production scope is discussed.

If benchmark data is available for the evaluated workflow, it is shared as part of the technical review package rather than inferred from generic public numbers.

Do evaluation environments include production SLAs?

Production SLA terms are discussed and confirmed during the production contracting phase. Evaluation environments are not scoped under production SLA commitments.

Verification and Review

How does verification work at a high level?

The client generates receipt evidence from the ordered processing path. Clariva verifies the transformed payload against challenge-bound proof evidence, recomputes the required commitments at the server boundary, and rejects stale, replayed, incomplete, or non-admissible submissions.
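The server-boundary checks described above can be sketched as a sequence of gates: recompute the commitment, reject stale challenges, reject replays, and confirm challenge binding. The HMAC construction, field names, and in-memory replay set below are all illustrative assumptions, not Clariva's actual verification protocol.

```python
# Hedged sketch of server-boundary verification: recompute the payload
# commitment, reject stale or replayed challenges, and check challenge
# binding. All names and the HMAC construction are illustrative.
import hashlib
import hmac
import time

SEEN_CHALLENGES: set[str] = set()   # production would use durable storage
MAX_AGE_SECONDS = 300


def verify(payload: bytes, receipt: dict, server_key: bytes) -> bool:
    # 1. Recompute the commitment over the transformed payload.
    if hashlib.sha256(payload).hexdigest() != receipt["commitment"]:
        return False
    # 2. Reject stale challenges.
    if time.time() - receipt["issued_at"] > MAX_AGE_SECONDS:
        return False
    # 3. Reject replayed challenges.
    if receipt["challenge"] in SEEN_CHALLENGES:
        return False
    # 4. Check that the challenge is bound to this specific commitment.
    expected = hmac.new(
        server_key,
        (receipt["challenge"] + receipt["commitment"]).encode(),
        hashlib.sha256,
    ).hexdigest()
    if not hmac.compare_digest(expected, receipt["binding"]):
        return False
    SEEN_CHALLENGES.add(receipt["challenge"])
    return True
```

The ordering matters: the replay set is only updated after every other check passes, so a failed submission cannot burn a valid challenge.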

Does Clariva replace customer security or legal review?

No. Clariva provides control artifacts for a specific AI workflow, but customer security, legal, procurement, and deployment approvals remain customer-side decisions.

What is the difference between an evaluation and a pilot?

An evaluation is the first bounded workflow review. A pilot is a production-scoped follow-up if the workflow is a fit and the customer wants to test Clariva under more realistic operational conditions.

Evaluation Guardrails

Timeline, latency, and retention are scoped per workflow.

Evaluation Scope

How long does a first evaluation take?

Timing depends on integration complexity, review questions, and the customer's internal review schedule. The first scoping discussion is intended to determine whether Clariva fits the selected workflow before broader production scope is discussed.

What should a team bring to the first evaluation?

Bring a high-level workflow description, the source system, the AI task, the intended provider route if known, and the review questions your security, privacy, legal, or platform team needs answered.

Retention Scope

What data does Clariva store?

The retained record depends on the scoped workflow and deployment model. Evaluation should define which metadata, policy decisions, provider-route decisions, rejection reasons, and audit artifacts are retained, and which sensitive payload elements are omitted, transformed, or excluded from persistent storage.

Evidence Pack Scope

Are the evidence examples real customer data?

No. The website evidence examples are generated from controlled synthetic/test-tenant scenarios using Clariva runtime evaluation paths. They are buyer-review evidence, not production customer proof.

Does Clariva publish policy accuracy percentages?

No. Clariva does not publish policy accuracy percentages from this evidence pack. Evaluation review focuses on defined test scenarios, expected versus actual outcomes, and workflow-scoped review.

Does Clariva claim zero retention?

No. Clariva does not claim zero retention from this evidence pack. Retention and evidence scope are reviewed during evaluation for the selected workflow.

Company and Review Questions

Founder, certification, and data handling.

Company

Who built Clariva?

Clariva was founded by Tolga Cengiz, an enterprise strategy and product leader with experience across AI-enabled product development, digital transformation, strategic partnerships, and large-scale commercial initiatives. The company is currently early-stage and founder-led.

Why was Clariva created?

Clariva began from a communication-focused product and shifted after a more fundamental issue became clear: sensitive conversations and workflow data were being sent into LLM systems without enforceable controls over what crossed the boundary or whether execution was eligible to proceed.

Security & Trust Review

Is Clariva SOC 2 or ISO 27001 certified?

Not yet. Clariva has not obtained formal certifications such as SOC 2 or ISO 27001. Certification requirements, procurement controls, and regulated-use obligations are handled during the security review process.

Differentiation

Gateway and proof-model questions.

How is Clariva different from AI gateways or routing proxies?

AI gateways typically focus on provider access, routing, rate limits, observability, or model management. Clariva is focused on pre-execution admissibility: verifying that required privacy and policy controls occurred before a sensitive request can continue to an eligible provider route.

What does “proof” mean in Clariva?

Clariva uses proof artifacts as verifiable receipt evidence: transformed payload commitments, ordered stage evidence, challenge binding, recomputation checks, and replay controls. It is not presented as a zero-knowledge proof system.
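One part of that artifact list, ordered stage evidence, can be illustrated with a hash chain: each processing stage folds into a running digest, so reordering or omitting a stage changes the evidence value. This construction is an assumption for illustration, not Clariva's actual evidence format.

```python
# Illustrative hash chain over ordered processing stages: reordering or
# omitting any stage changes the evidence value. This construction is an
# assumption, not Clariva's actual artifact format.
import hashlib


def stage_chain(stages: list[str]) -> str:
    digest = b""
    for stage in stages:
        # Fold each stage name into the running digest in order.
        digest = hashlib.sha256(digest + stage.encode()).digest()
    return digest.hex()
```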

Review Clariva against your workflow.

Map the request path, provider route, rejection behavior, and evidence requirements for one sensitive AI workflow.