
Benchmark workflows

Benchmarks are here to answer one simple question: does Datastream help you finish real investor work faster without hiding the source trail?

What the benchmark set measures

1. Repeated investor workflows

The benchmark set prioritizes the paths investors and agents repeat all day: entity resolution, filing search, section extraction, statements, and insider or ownership reads.
2. Payload usefulness, not just raw milliseconds

Faster matters. Faster with smaller payloads and clear provenance matters more.
3. One-call investor intelligence

The gold corpus now includes allocator-style prompts that join company, macro, factor, ownership, and filing context into one compact response.
4. Published methodology

Competitive claims stay sourced, dated, and paired with enough context to show what the benchmark measured and what it did not.
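The weighting described above, where speed matters but compact payloads and clear provenance matter more, can be sketched as a single score per workflow run. The `WorkflowResult` shape and the weights below are illustrative assumptions, not the published methodology:

```python
from dataclasses import dataclass

@dataclass
class WorkflowResult:
    # Illustrative fields; the real suite's schema may differ.
    latency_ms: float      # end-to-end call latency
    payload_bytes: int     # size of the returned payload
    has_provenance: bool   # response carries a source trail
    correct: bool          # answer matched the gold corpus

def usefulness_score(r: WorkflowResult) -> float:
    """Score one run: faster is better, but smaller payloads with a
    visible source trail score higher still. Weights are assumptions."""
    if not r.correct:
        return 0.0                                      # wrong answers score nothing
    score = 1_000.0 / (1.0 + r.latency_ms)              # reward speed
    score *= 1_000.0 / (1.0 + r.payload_bytes / 1024)   # reward compact payloads
    if r.has_provenance:
        score *= 2.0                                    # reward provenance
    return score
```

The key design point is that latency never dominates: a fast call with a bloated, unsourced payload still ranks below a slightly slower call that is compact and traceable.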

Current snapshot

Against sec-api.io

On the 2026-03-18 dated suite, OMNI is ahead on the scoped entity, filing, section, and structured-facts workflows that investor agents repeat all day.

Against financialdatasets.ai

On the same suite, OMNI is ahead on the scoped statements, metrics, filings, and insider-trade workflows, with the current published scorecard at 18 wins, 0 losses, and 2 ties.
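A scorecard like 18 wins, 0 losses, and 2 ties is just a fold over per-workflow outcomes. A minimal sketch, assuming each workflow resolves to `"win"`, `"loss"`, or `"tie"` (the actual scorecard.json schema is not shown here, and the workflow names below are hypothetical):

```python
# Map an outcome label to its scorecard key.
PLURAL = {"win": "wins", "loss": "losses", "tie": "ties"}

def tally(outcomes: dict) -> dict:
    """Collapse per-workflow outcomes into a scorecard.

    outcomes maps a workflow name to 'win', 'loss', or 'tie'.
    """
    card = {"wins": 0, "losses": 0, "ties": 0}
    for workflow, outcome in outcomes.items():
        card[PLURAL[outcome]] += 1
    return card

# Hypothetical input shaped like the published 18-0-2 result.
example = {f"workflow_{i}": "win" for i in range(18)}
example.update({"edge_case_a": "tie", "edge_case_b": "tie"})
```

Keeping the tally this dumb is deliberate: the scorecard stays a pure function of the per-workflow results, so anyone re-running the suite can reproduce the headline numbers.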

Investor-intelligence corpus

The named gold corpus validates one-call agent workloads such as return decomposition, factor neutralization, country reports, stress tests, and footnote investigation, backing each with compact outputs and dated latency artifacts.

FinanceBench 150/150

A perfect score on the full 150-question PatronusAI FinanceBench corpus, spanning numeric extraction, filing-section evidence, and multi-hop financial reasoning across 45 public companies.

What the current claim actually is

  • Safe claim: OMNI beats sec-api.io and financialdatasets.ai on the current dated investor-agent benchmark suite.
  • Unsafe claim: OMNI wins every speed, payload, and extraction metric universally.

This distinction matters. The benchmark is strong because it is specific, reproducible, and workload-shaped, not because it tries to say everything.

Source artifacts

  • benchmarks/competitive/latest-report.md
  • benchmarks/competitive/results/scorecard.json
  • benchmarks/competitive/results/omni_apps_sec_workflows_latest.json
  • benchmarks/investor-intelligence/results/latest.json
  • ops/investor-intelligence-gold-corpus/latest.json
The public site will use proof tables and methodology links instead of charts. Keep the structured benchmark dataset authoritative and let the rendering stay simple.
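One way to keep the artifact list honest is a presence check before publishing. A minimal sketch; the relative paths are the ones listed above, and the repository layout is assumed:

```python
from pathlib import Path

# Paths from the source-artifacts list above.
ARTIFACTS = [
    "benchmarks/competitive/latest-report.md",
    "benchmarks/competitive/results/scorecard.json",
    "benchmarks/competitive/results/omni_apps_sec_workflows_latest.json",
    "benchmarks/investor-intelligence/results/latest.json",
    "ops/investor-intelligence-gold-corpus/latest.json",
]

def missing_artifacts(root: str, artifacts=ARTIFACTS) -> list:
    """Return every expected artifact path absent under root."""
    base = Path(root)
    return [rel for rel in artifacts if not (base / rel).exists()]
```

Run against the repository root in CI, a non-empty return value means a claim on the public site would point at an artifact that does not exist yet.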