Token Efficiency
AI agents pay for tokens. OMNI Datastream is designed to minimize token consumption while maximizing information density.

The Token Tax
When an AI agent queries SEC data, every byte of the API response consumes tokens. Bloated responses waste money and slow down agent reasoning. OMNI Datastream solves this with compact, purpose-built responses.

Side-by-Side: OMNI vs sec-api.io
Entity Resolution
| Metric | OMNI | sec-api.io | Savings |
|---|---|---|---|
| Response size | 273 bytes | 412 bytes | 34% |
| Estimated tokens | 68 | 103 | 34% |
| Latency | 62ms | 231ms | 73% |
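The token figures in these tables track the common heuristic of roughly 4 bytes per token. A minimal sketch of that estimation (the 4-bytes-per-token ratio is an approximation, not an exact tokenizer count):

```python
def estimate_tokens(response_bytes: int, bytes_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4 bytes/token heuristic."""
    return round(response_bytes / bytes_per_token)

# Entity-resolution response sizes from the table above:
print(estimate_tokens(273))  # OMNI -> 68
print(estimate_tokens(412))  # sec-api.io -> 103
```

Actual token counts depend on the model's tokenizer, so treat these as planning estimates rather than billing-exact numbers.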
Filing Search
| Metric | OMNI | sec-api.io | Savings |
|---|---|---|---|
| Response size | 500 bytes | 792 bytes | 37% |
| Estimated tokens | 125 | 198 | 37% |
| Latency | 64ms | 281ms | 77% |
Section Extraction
| Metric | OMNI | sec-api.io | Savings |
|---|---|---|---|
| Response size | 1,800 bytes | 2,880 bytes | 38% |
| Estimated tokens | 450 | 720 | 38% |
| Latency | 64ms | 348ms | 82% |
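The savings columns in all three tables follow directly from the raw values. A quick sketch of the calculation:

```python
def savings_pct(omni: float, competitor: float) -> int:
    """Percent saved by OMNI relative to the competitor's value."""
    return round((1 - omni / competitor) * 100)

# Section-extraction row: bytes, tokens, latency
print(savings_pct(1800, 2880))  # -> 38
print(savings_pct(450, 720))    # -> 38
print(savings_pct(64, 348))     # -> 82
```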
Intelligence Bundles: 75% Token Reduction
A typical “company briefing” requires assembling data from multiple sources:

Traditional Approach (sec-api.io)
OMNI Intelligence Bundle
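To illustrate the contrast between the two approaches, here is a hypothetical sketch of the call patterns. The endpoint paths and resource names below are illustrative assumptions, not the documented API; the only grounded claim is that one bundle call replaces 8+ raw calls:

```python
# Hypothetical base URL and paths -- illustrative only, not the documented API.
BASE = "https://api.example.com"

def traditional_briefing_urls(ticker: str) -> list[str]:
    """Traditional approach: one request per data source (8+ round trips)."""
    resources = [
        "profile", "filings", "financials", "ownership",
        "insider-trades", "news", "ratios", "peers",
    ]
    return [f"{BASE}/{r}?ticker={ticker}" for r in resources]

def bundle_briefing_url(ticker: str) -> str:
    """Bundle approach: one pre-computed intelligence response."""
    return f"{BASE}/bundles/company-briefing?ticker={ticker}"

print(len(traditional_briefing_urls("AAPL")))  # 8 separate requests
print(bundle_briefing_url("AAPL"))             # 1 request
```

Beyond token savings, collapsing eight round trips into one also removes seven network latencies from the agent's reasoning loop.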
Why OMNI Responses Are Smaller
- No wrapper bloat. Responses contain data, not framework metadata.
- Compact mode. Use `?view=compact` to get minimal responses for agent consumption.
- Pre-computed intelligence. One bundle call replaces 8+ raw API calls.
- Semantic search. Find relevant content in one call instead of paginating through keyword results.
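A minimal sketch of requesting compact responses; the base URL and endpoint path are assumptions for illustration, and only the `?view=compact` parameter comes from the list above:

```python
from urllib.parse import urlencode

def with_compact_view(base_url: str, **params: str) -> str:
    """Build a request URL with view=compact for minimal agent-facing responses."""
    params["view"] = "compact"
    return f"{base_url}?{urlencode(params)}"

# Hypothetical endpoint -- the path is illustrative, not documented.
url = with_compact_view("https://api.example.com/filings", ticker="AAPL")
print(url)  # -> https://api.example.com/filings?ticker=AAPL&view=compact
```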