Every benchmark number on the site should be easy to check.

Public benchmarks, reserved data, and what we won't publish.

Name the market and forecast run

Every benchmark claim should identify the territory, horizon, baseline source, metric, and evaluation window.
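The five required elements above can be enforced mechanically before a claim is published. A minimal sketch, assuming a claim is represented as a plain dictionary (the field names and sample values below are illustrative, not a published schema):

```python
# Required elements of a benchmark claim, per the policy above.
# Field names are hypothetical; they are not a real Gramm schema.
REQUIRED_FIELDS = {"territory", "horizon", "baseline_source", "metric", "evaluation_window"}

def missing_fields(claim: dict) -> set:
    """Return the required elements this benchmark claim is still missing."""
    return REQUIRED_FIELDS - claim.keys()

# Illustrative claim with placeholder values.
claim = {
    "territory": "example-territory",
    "horizon": "day-ahead",
    "baseline_source": "published ISO forecast",
    "metric": "MAPE",
    "evaluation_window": "2024-01-01..2024-12-31",
}
assert not missing_fields(claim)  # complete claim passes the check
```

A check like this can gate the publishing pipeline: a claim missing any of the five elements never reaches the public table.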

Keep the public table readable

The public page should be quick to scan. The full benchmark package should carry the detail needed for a real buying decision.

Use the same definitions everywhere

Accuracy pages, documentation, procurement responses, and sales materials should all use the same language for horizons, metrics, and issuance timing.

Do not publish what you cannot defend

If a number cannot survive desk review, engineering review, and procurement review, it should not appear on the site.

What appears in the public benchmark

Territory and market identifier

Forecast run and metric

Baseline source and comparison logic

What lower error means in plain language

How to request the full market package

What is included in a market-specific benchmark package

Exact evaluation dates and issuance schedule

Sample size and any data completeness notes

Horizon-specific metrics

Large-miss days and other edge cases

Known limitations

Model version and update cadence
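The package contents listed above can be treated as a completeness checklist before a market package ships. A minimal sketch, assuming each section is a keyed entry in a package dictionary (section keys are illustrative, not a real manifest format):

```python
# Sections a market-specific benchmark package must fill in, per the list above.
# Keys are hypothetical labels, not a real Gramm manifest.
PACKAGE_SECTIONS = [
    "evaluation_dates_and_issuance_schedule",
    "sample_size_and_completeness_notes",
    "horizon_specific_metrics",
    "large_miss_days_and_edge_cases",
    "known_limitations",
    "model_version_and_update_cadence",
]

def incomplete_sections(package: dict) -> list:
    """List the sections a market package has not yet filled in."""
    return [s for s in PACKAGE_SECTIONS if not package.get(s)]
```

Run against an empty package, this returns every section; run against a fully populated one, it returns an empty list, which can serve as the release gate.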

What Gramm will not do on public benchmark pages

Do not publish blended percentages that hide territory-level variation.
Do not use 'peer-reviewed' unless a paper has been formally accepted, not merely posted as a preprint.
Do not claim enterprise security or SLA posture beyond what operations can support today.
Do not let the homepage, docs, and trust pages drift into different product definitions.

Start with the public table, then request the full market package.

If the public result is relevant to your desk or operating region, request the full package with dates, issuance timing, and the rest of the benchmark detail.