
Reports

Build configurable PDF or CSV reports from AI red teaming assessments, with section-level controls and findings filters.

The Reports tab lets you build a configurable PDF or CSV report from the assessments in the current project. Unlike the one-click Export PDF on the Overview page, the Reports builder gives you per-section control: pick the sections you want, narrow the findings table with filters, and download the artifact when it’s ready.

Navigate to AI Red Teaming → Reports in your workspace. The builder is scoped to the project currently selected in the header.

  1. Pick your sections. The Sections group lets you include or omit any of:

    | Section | What it shows |
    | --- | --- |
    | Risk score & ASR metrics | Project-level risk score, overall ASR, totals |
    | Severity breakdown | Critical / High / Medium / Low / Info counts |
    | Findings | Row-level findings table (subject to the filters below) |
    | ASR by attack | Per-attack success rates |
    | ASR by category | Per-harm-category success rates |
    | Transform effectiveness | Per-transform success rates + lift over baseline |
    | Compliance coverage | Framework coverage (requires at least one framework selected) |
    | Models used | Target, attacker, and judge models across assessments |

    At least one section is required to build.

  2. (Optional) Narrow the findings table. The Findings filters group scopes which finding rows appear in the Findings section only. Summary metrics (risk score, ASR, severity breakdown, compliance coverage) always reflect the entire project regardless of filters.

    Available filters:

    • Severity — critical, high, medium, low, info
    • Category — derived from the assessment’s goal categories
    • Attack name — derived from the assessment’s attack runs
    • Finding type — jailbreak, partial, refusal, error
    • Minimum score — slider from 0% to 100%
    • Assessments — narrow to a subset of the project’s assessments (includes a “Select all” shortcut)
    • Date range — limit to assessments whose started_at falls within a window. Quick ranges (7d, 30d, 90d, All) are provided.
  3. (Optional) Select compliance frameworks. The Compliance coverage section only renders when you include the section AND select at least one framework:

    • OWASP LLM Top 10
    • OWASP Agentic Top 10
    • MITRE ATLAS
    • NIST AI RMF
    • Google SAIF
  4. Pick a format. PDF (default) or CSV.

    • PDF — an executive-ready document with charts and tables, suitable for sharing with CISOs, governance teams, and auditors.
    • CSV — the findings table as a flat CSV, for downstream pipelines, adversarial training datasets, or ad-hoc analysis.
  5. Click Generate report. The status panel on the right shows lifecycle progress: Submitting → Queued → Rendering → Report ready. When complete, the file downloads automatically in most browsers. If the automatic download is blocked (common in Safari on iOS), click the visible Download button.

    The signed download URL is valid for 1 hour. After expiry, generate the report again to fetch a fresh URL.
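If you chose the CSV format in step 4, the flat findings file can be consumed directly in downstream tooling. A minimal sketch using only the standard library, assuming hypothetical column names (`severity`, `finding_type`, `score`) that may differ from the real export's schema:

```python
import csv
import io

# Illustrative CSV payload with hypothetical column names; the real
# export's schema may differ.
raw = """severity,finding_type,score
critical,jailbreak,0.92
low,refusal,0.10
high,partial,0.55
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Keep only jailbreaks and partial successes above a score threshold,
# e.g. to seed an adversarial training dataset.
keepers = [
    r for r in rows
    if r["finding_type"] in {"jailbreak", "partial"} and float(r["score"]) >= 0.5
]
print([r["severity"] for r in keepers])  # ['critical', 'high']
```

Because the CSV mirrors the Findings section, the filters you set in step 2 already determine which rows land in the file; post-processing like the above is only needed for criteria the builder does not expose.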

As you adjust sections and filters, a background preflight check runs. If any selected section would be empty under the current configuration (for example, Compliance coverage with no frameworks selected, or Findings with filters that exclude every row), a warning banner lists the affected sections; the Generate report button is disabled only when every selected section would be empty.
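The preflight logic can be sketched as follows. This is an illustrative stand-in, not the actual implementation: `section_row_counts` is a hypothetical mapping from each selected section to the number of rows or data points it would render under the current filters.

```python
def preflight(selected_sections: list[str],
              section_row_counts: dict[str, int]) -> tuple[list[str], bool]:
    """Return (empty_sections, generate_disabled) for the current config."""
    empty = [s for s in selected_sections if section_row_counts.get(s, 0) == 0]
    # The warning banner lists `empty`; Generate is disabled only when
    # *every* selected section would be empty.
    disabled = bool(selected_sections) and len(empty) == len(selected_sections)
    return empty, disabled

# "Compliance coverage" included but no frameworks selected -> 0 rows,
# so it is flagged, but Generate stays enabled because Findings has data.
empty, disabled = preflight(
    ["Findings", "Compliance coverage"],
    {"Findings": 12, "Compliance coverage": 0},
)
print(empty, disabled)  # ['Compliance coverage'] False
```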

Building a report requires airt:write on the current workspace. Polling a build job's status and downloading the result require airt:read. The signed URL itself is time-bounded and scoped to your organization's object store key (airt/reports/{org_id}/{job_id}.{ext}).
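The poll-then-download flow (Submitting → Queued → Rendering → Report ready) can be sketched as a generic polling loop. The sketch below stubs out the network call: `fetch_status` stands in for whatever API request returns the job's current lifecycle state, since the actual endpoint is not documented here.

```python
import time
from typing import Callable

# Lifecycle states matching the status panel; "failed" is an assumed
# terminal error state.
TERMINAL = {"ready", "failed"}

def poll_report_job(fetch_status: Callable[[], str],
                    interval_s: float = 2.0,
                    timeout_s: float = 300.0) -> list[str]:
    """Poll a report build job until it reaches a terminal state.

    Returns the sequence of distinct states observed. `fetch_status`
    is a placeholder for a real status request.
    """
    seen: list[str] = []
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = fetch_status()
        if not seen or seen[-1] != state:
            seen.append(state)  # record each transition once
        if state in TERMINAL:
            return seen
        time.sleep(interval_s)
    raise TimeoutError("report build did not finish in time")

# Stubbed status source that walks through the documented lifecycle.
_states = iter(["submitting", "queued", "rendering", "ready"])
history = poll_report_job(lambda: next(_states), interval_s=0.0)
print(history)  # ['submitting', 'queued', 'rendering', 'ready']
```

Once the job reports ready, fetch the signed URL promptly: it expires after 1 hour, and an expired URL means rebuilding the report rather than retrying the download.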

Related:

  • Export — one-click PDF export from the Overview page + CLI dn airt report commands
  • Compliance — framework mapping used by the Compliance coverage section
  • Overview Dashboard — the headline risk metrics that feed the report’s Risk score section
  • Assessments — the underlying per-campaign data a report summarizes