AI Safety

AI you can trust with your audit.

Compliance work is too important for a black box. Here's how we build, deploy, and govern Effy responsibly — and how that aligns with the AI safety standards your auditor and regulator already recognize.

Our practices

Six practices that keep AI accountable.

These aren't aspirational principles. They're built into the platform — every customer gets them by default, on every action, every day.

A human approves every change

AI drafts. AI suggests. AI never ships. A reviewer signs off before a policy publishes, a vendor score updates, or a questionnaire answer goes back to the customer. Effy is a colleague, not an autopilot.
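
In engineering terms, every publish path runs through a gate that fails closed without a reviewer's sign-off. A minimal sketch of the idea; PolicyDraft, ApprovalRecord, and publishPolicy are illustrative names, not Effy's actual API:

```typescript
// A sketch of an approval gate that fails closed. All names here are
// illustrative, not Effy's actual API.
interface ApprovalRecord {
  reviewerId: string;      // a named human reviewer
  artifactVersion: string; // the exact version that was reviewed
  approvedAt: Date;
}

interface PolicyDraft {
  id: string;
  version: string;
  body: string;
}

function publishPolicy(draft: PolicyDraft, approval?: ApprovalRecord): void {
  // No sign-off on this exact version, no publish.
  if (!approval || approval.artifactVersion !== draft.version) {
    throw new Error(`Policy ${draft.id} v${draft.version} has no reviewer approval`);
  }
  // ...publishing proceeds only past this point.
}
```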

Every answer cites its source

When the AI drafts a response, it shows you exactly which policy, control, or evidence file it came from. No invented facts. No hidden reasoning. No surprise answers in front of an auditor.
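
Concretely, a drafted answer carries its citations as structured data rather than prose. A sketch with assumed names (Citation, CitedAnswer); the shape is illustrative, not our actual schema:

```typescript
// Illustrative shape of a cited draft; type and field names are assumptions.
interface Citation {
  sourceId: string; // the policy, control, or evidence file
  excerpt: string;  // the exact passage the claim rests on
}

interface CitedAnswer {
  text: string;
  citations: Citation[]; // every claim traces back to at least one source
}

// A draft with no grounding is rejected before it ever reaches a reviewer.
function assertGrounded(answer: CitedAnswer): CitedAnswer {
  if (answer.citations.length === 0) {
    throw new Error("Draft has no citations; refusing to surface it");
  }
  return answer;
}
```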

Your data stays your data

Your evidence, policies, and vendor information are scoped to your organization at every layer of the platform. We don't train on your data. We don't share it across customers. We don't ship it to public model providers.

You can pause or undo at any time

Conservative defaults across the platform. Reviewers can override AI scores, retract drafted answers, and roll back any AI suggestion before it reaches a published artifact. The AI never makes an irreversible change.
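
One way to picture it: an AI suggestion is staged next to the value it would change, never written over it. A hypothetical sketch, with every name an assumption:

```typescript
// Hypothetical model of reversible AI suggestions: nothing is applied in
// place; a suggestion is staged and can be retracted until it ships.
type SuggestionState = "staged" | "approved" | "retracted";

interface Suggestion<T> {
  previous: T; // preserved so any change can be rolled back
  proposed: T;
  state: SuggestionState;
}

function rollBack<T>(suggestion: Suggestion<T>): T {
  suggestion.state = "retracted";
  return suggestion.previous; // the pre-AI value is always recoverable
}
```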

You can see what the AI did, and why

Every Effy action is recorded — what was asked, what was drafted, what was retrieved, what was approved by whom. Full activity logs are exportable for your auditor or examiner.
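
To make that concrete: each action becomes one structured entry, and the log exports as plain text. The sketch below assumes a JSON Lines format; the entry fields and exportActivityLog are illustrative, not our actual export schema:

```typescript
// Illustrative shape of one activity-log entry. Field names are assumptions
// for the sketch, not Effy's actual schema.
interface ActivityLogEntry {
  timestamp: string;          // ISO 8601
  question: string;           // what was asked
  draftId: string;            // what was drafted
  retrievedSources: string[]; // what was retrieved
  approvedBy: string | null;  // who approved it, or null while pending
}

// One JSON object per line (JSON Lines) drops cleanly into an audit package.
function exportActivityLog(entries: ActivityLogEntry[]): string {
  return entries.map((entry) => JSON.stringify(entry)).join("\n");
}
```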

Reviewer always wins

Where an AI score and human judgment disagree, the human decision is authoritative. The AI score is preserved alongside the human override, so the override extends the reasoning trail rather than replacing it.
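
The precedence rule is simple enough to state in code. A sketch under assumed names (VendorScore, effectiveScore), not Effy's implementation:

```typescript
// Hypothetical precedence rule: the human override, when present, is
// authoritative; the AI score stays on the record for the reasoning trail.
interface VendorScore {
  aiScore: number;
  humanOverride: number | null;
  overrideReason?: string;
}

function effectiveScore(score: VendorScore): number {
  // Reviewer always wins. The AI score is read, never overwritten.
  return score.humanOverride ?? score.aiScore;
}
```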

Standards alignment

Aligned with the AI safety standards your auditor recognizes.

We didn't invent our own framework. We built Effy to map cleanly to the standards that regulators, auditors, and procurement teams are already asking about.

NIST AI RMF 1.0

NIST AI Risk Management Framework

Effy is built around the four NIST AI RMF functions — Govern, Map, Measure, Manage. Risk surfaces, severity tiers, mitigation controls, and reviewer accountability are all first-class platform concepts, not afterthoughts.

ISO/IEC 42001

ISO/IEC 42001 Artificial Intelligence Management System

Our AI lifecycle controls — model selection, change management, performance monitoring, incident response, and ongoing review — align with the ISO/IEC 42001 management system clauses.

OECD AI Principles

Inclusive growth, human-centered values, transparency, robustness, and accountability — the OECD principles map cleanly to our product practices: human-in-the-loop, cited reasoning, undoable actions, and named reviewer accountability.

EU AI Act

Effy operates as a limited-risk AI system under the EU AI Act framework. We disclose AI involvement on every drafted artifact, document the system in scope, and provide the audit trail required for downstream deployer obligations.

What the AI sees

The AI only sees what it needs to see.

When you ask Effy a question, it doesn't load your entire organization into a prompt. It retrieves only the few specific policies, controls, or pieces of evidence it needs to answer — and the answer always cites where each fact came from.

  • Only relevant excerpts — never bulk databases or full document stores
  • Scoped to your organization at every retrieval, every time
  • We don't pass personal data to the AI — we work with policies, controls, and evidence
  • If retrieval doesn't find enough grounded sources, Effy says so — never fabricates (sketched in code below)
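
A minimal sketch of what that scoped, grounded retrieval could look like; the search signature, Excerpt fields, and MIN_SOURCES threshold are all assumptions for illustration, not Effy internals:

```typescript
// A sketch of org-scoped retrieval with a grounding check. Every name and
// threshold here is an assumption for illustration.
interface Excerpt {
  docId: string;
  orgId: string;
  text: string;
  relevance: number;
}

type Search = (query: string, filter: { orgId: string }) => Promise<Excerpt[]>;

const MIN_SOURCES = 2; // assumed minimum grounding, not a real Effy setting

async function retrieveForQuestion(
  search: Search,
  orgId: string,
  question: string,
): Promise<Excerpt[]> {
  // Every retrieval is filtered to the asking organization; there is no
  // bulk-database or cross-tenant path.
  const excerpts = await search(question, { orgId });
  if (excerpts.length < MIN_SOURCES) {
    // Not enough grounded sources: say so rather than fabricate.
    throw new Error("Insufficient grounded sources to answer this question");
  }
  return excerpts;
}
```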

How a question becomes an answer

  1. You ask Effy a question
  2. Effy finds the few sources most relevant to your question
  3. Effy drafts an answer using only those sources
  4. Effy attaches citations so you can verify each claim
  5. A reviewer approves before the answer goes anywhere

No reviewer, no ship. AI never publishes a policy, sends a questionnaire response, or updates a vendor score on its own. The sketch below traces those five steps end to end.
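
Put together, the five steps read as a short pipeline. This sketch is illustrative only; findSources, draftWithCitations, and awaitReviewerApproval are hypothetical stand-ins for platform internals, not Effy's actual API:

```typescript
// The five steps as one pipeline. Every helper is a hypothetical stand-in,
// declared but not implemented, to show the shape of the flow.
interface Draft {
  text: string;
  citations: string[];
}
interface ReviewDecision {
  approved: boolean;
  reviewerId: string;
}

declare function findSources(orgId: string, question: string): Promise<string[]>;
declare function draftWithCitations(question: string, sources: string[]): Promise<Draft>;
declare function awaitReviewerApproval(draft: Draft): Promise<ReviewDecision>;

async function answerQuestion(orgId: string, question: string) {
  const sources = await findSources(orgId, question);        // 2. scoped retrieval
  const draft = await draftWithCitations(question, sources); // 3-4. grounded draft + citations
  const review = await awaitReviewerApproval(draft);         // 5. human sign-off
  if (!review.approved) return null;                         // no reviewer, no ship
  return { ...draft, approvedBy: review.reviewerId };
}
```
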
Accountability

When something goes wrong, you know who to ask.

Responsible AI starts with accountability. Every Effy action has an owner — both at Thirdsentry and at your organization.

Named reviewers

Every AI-drafted artifact has a named human reviewer who approved it. Their decision and timestamp are part of the permanent record.

Exportable activity logs

Your auditor or examiner can request the full activity log for any AI-touched artifact. We export it on demand — formatted to drop into your audit package.

Incident response

If something does go wrong, we have a documented incident response process — disclosed promptly, investigated thoroughly, fixed permanently.

Ready to put our AI safety practices to the test?

Bring your hardest questions. We'll walk through every practice on a live demo with your team.