# Streaming performance benchmarks
StreamMDX benchmarks are designed to be reproducible. Every comparison starts with local baselines, shared fixtures, and the same harness settings.
**Fixture:** `naive-bayes.md` · **Scenario:** S2_typical (50 ms) · **Mode:** incremental · **Renderer:** V2. See the methodology documentation for details.
## Comparison summary
Results are local and machine-dependent. Lower is better.
| Renderer | Version | First render | Peak memory | Jank (long tasks) |
|---|---|---|---|---|
| StreamMDX | v1.2.4 | 12 ms | 4.2 MB | 0 |
| react-markdown | v9.0.1 | 48 ms | 12.8 MB | 3 (avg 18 ms) |
## Time to first visible render
Latency from first chunk to initial DOM paint. Lower is better.
- StreamMDX: 12 ms
- Alternative A: 48 ms
## Patch to DOM latency (p50 / p95)
Batch processing time across the stream.
(Chart: p50 and p95 patch-apply latency; values are machine-dependent.)
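p50 and p95 summarize the distribution of per-batch latencies: the median and the near-worst case. A minimal sketch of the percentile math, using the nearest-rank method (the harness's exact method may differ):

```typescript
// Nearest-rank percentile: the smallest sample with at least p% of
// all samples at or below it. Throws on an empty input.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Illustrative per-batch patch-apply latencies in milliseconds.
const latencies = [2.1, 3.4, 2.8, 9.7, 2.5, 3.0, 2.9, 14.2, 3.1, 2.6];
const p50 = percentile(latencies, 50);
const p95 = percentile(latencies, 95);
```

A large gap between p50 and p95, as in the sample above, usually points to occasional oversized batches rather than uniformly slow patching.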
## Reproduce
Run the harness locally with the same fixture and scenario.
```shell
NEXT_PUBLIC_STREAMING_DEMO_API=true npm run docs:dev
npm run perf:harness -- --fixture naive-bayes --scenario S2_typical --runs 3 --warmup 1 --out tmp/perf-baselines
npm run perf:compare -- --base tmp/perf-baselines/<base> --candidate tmp/perf-baselines/<candidate>
```
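When reading a base/candidate comparison, the useful number is the relative delta per metric. A sketch of that arithmetic; the `RunSummary` shape and field names here are hypothetical, not the harness's actual output format (check the files under `tmp/perf-baselines` for the real schema):

```typescript
// Hypothetical per-run summary; field names are illustrative only.
interface RunSummary {
  firstRenderMs: number;
  peakMemoryMb: number;
}

// Signed percentage change from base to candidate; positive = regression
// for lower-is-better metrics.
function regressionPct(base: number, candidate: number): number {
  return ((candidate - base) / base) * 100;
}

const base: RunSummary = { firstRenderMs: 12, peakMemoryMb: 4.2 };
const candidate: RunSummary = { firstRenderMs: 15, peakMemoryMb: 4.4 };
const delta = regressionPct(base.firstRenderMs, candidate.firstRenderMs); // +25
```

Because runs are machine-dependent, compare only baselines captured on the same machine with the same fixture, scenario, and warmup settings.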
## Notes on interpretation
- Time to first visible render is the most user-visible metric. Aim for under 50 ms.
- Main-thread tasks should stay under 50 ms; anything longer counts as a long task and risks dropped frames.
- Memory peaks matter for multi-stream dashboards.
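The 50 ms budget above is the same threshold the browser's Long Tasks API uses. A minimal sketch of how recorded task durations (e.g. collected via a `PerformanceObserver` for `longtask` entries) can be reduced to the jank count reported in the comparison table; the helper name is illustrative:

```typescript
// Return the main-thread task durations (in ms) that exceed the
// long-task budget. 50 ms matches the Long Tasks API threshold.
function longTasks(durations: number[], budgetMs = 50): number[] {
  return durations.filter((d) => d > budgetMs);
}

// Illustrative task durations sampled during one streaming run.
const frameTasks = [4, 12, 61, 8, 75, 30];
const jank = longTasks(frameTasks); // [61, 75] -> 2 long tasks
```

A jank count of 0, as in the StreamMDX row above, means no single batch held the main thread long enough to miss a frame budget.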