
Debug Trace Speedups Profile#3252

Draft
Kbhat1 wants to merge 14 commits into release/v6.4 from kartik/debug-trace-profile-v6.4-dbhot

Conversation

Contributor

@Kbhat1 Kbhat1 commented Apr 15, 2026

Describe your changes and provide context

Testing performed to validate your change

Kbhat1 added 8 commits April 14, 2026 12:51
Add an opt-in debug trace profiling endpoint that breaks transaction replay into timing phases and surfaces historical store lookup costs. Extend the store tracer to record iterator samples and per-operation timing so SS-heavy traces show which keys and scans dominated the replay.

Made-with: Cursor
Thread request-scoped read tracing through the SS stack so debug trace profiles can attribute latency to MVCC and Pebble internals like iterator creation, Last, SeekLT, and NextPrefix. Surface the low-level stats alongside the existing store trace so it is clear whether historical lookups are dominated by SS wrapper logic or the underlying Pebble read path.

Made-with: Cursor
Instrument the inner getMVCCSlice path so debug trace profiling attributes historical GET latency across iterator creation, Last, key reads, version decoding, value reads, cloning, and iterator close. This makes the dominant historical lookup path explicit instead of burying it inside a single coarse iterator bucket.

Made-with: Cursor
Add a seidb command that runs debug_traceTransactionProfile across a block range, saves raw results, and generates a summarized report. Relax the release-branch trace test so it still validates the new endpoint while tolerating backend differences in the older v6.4 test setup.

Made-with: Cursor
Polish the generated trace profile report into a dashboard with summary cards, top takeaways, and clearer hotspot tables for phases, low-level ops, store ops, slowest transactions, and slowest blocks. Keep the same underlying analytics while making results easier to scan during live tracing investigations.

Made-with: Cursor
Speed up debug trace historical reads by caching repeated per-request lookups and reusing one Pebble iterator per store through a request-scoped snapshot instead of creating a new iterator for every point read. Keep the existing profiling surface and trace cleanup intact so the before/after impact remains measurable from debug_traceTransactionProfile.

Made-with: Cursor
Add a versionless latest-value index for recent historical reads and fall back to MVCC history only when needed. Cache replayed block state across trace requests so tracing multiple transactions in the same block no longer replays from tx zero every time.

Made-with: Cursor
Switch MVCC version encoding to descending order so historical reads can seek forward to the first visible version instead of creating bounded reverse iterators. Update forward and reverse iterator semantics plus prune behavior so the new ordering stays correct under iteration, pruning, and trace profiling.

Made-with: Cursor

github-actions bot commented Apr 15, 2026

The latest Buf updates on your PR. Results from workflow Buf / buf (pull_request).

Build      Format     Lint       Breaking   Updated (UTC)
✅ passed   ✅ passed   ✅ passed   ✅ passed   Apr 16, 2026, 3:34 PM

Sort Pebble batch writes by comparer order so the new latest-value index and inverted MVCC layout do not trip write-order errors. Tighten inverted-order iteration, old-version reads, and prune behavior so the DB path stays correct while preserving the new trace speedup path.

Made-with: Cursor

codecov bot commented Apr 15, 2026

Codecov Report

❌ Patch coverage is 33.33333% with 4 lines in your changes missing coverage. Please review.
✅ Project coverage is 58.36%. Comparing base (687fab0) to head (42b7077).

Files with missing lines Patch % Lines
sei-db/db_engine/pebbledb/mvcc/db.go 33.33% 1 Missing and 3 partials ⚠️

❌ Your patch check has failed because the patch coverage (33.33%) is below the target coverage (50.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files

Impacted file tree graph

@@               Coverage Diff                @@
##           release/v6.4    #3252      +/-   ##
================================================
- Coverage         58.37%   58.36%   -0.01%     
================================================
  Files              2091     2091              
  Lines            172983   172763     -220     
================================================
- Hits             100972   100832     -140     
+ Misses            63009    62944      -65     
+ Partials           9002     8987      -15     
Flag Coverage Δ
sei-db 70.41% <ø> (ø)

Flags with carried forward coverage won't be shown. Click here to find out more.

Files with missing lines Coverage Δ
evmrpc/block_trace_profiled.go 0.70% <ø> (+<0.01%) ⬆️
evmrpc/simulate.go 72.00% <ø> (-0.07%) ⬇️
evmrpc/tracers.go 62.88% <ø> (ø)
sei-cosmos/store/cachekv/store.go 88.77% <ø> (ø)
sei-cosmos/store/gaskv/store.go 92.47% <ø> (-0.80%) ⬇️
sei-cosmos/store/tracekv/store.go 80.82% <ø> (ø)
sei-cosmos/storev2/state/store.go 17.14% <ø> (-1.28%) ⬇️
sei-cosmos/types/context.go 93.33% <ø> (+0.28%) ⬆️
sei-cosmos/types/tracer.go 83.33% <ø> (-6.67%) ⬇️
sei-db/db_engine/pebbledb/mvcc/batch.go 43.33% <ø> (+1.47%) ⬆️
... and 8 more

Kbhat1 added 5 commits April 15, 2026 18:02
Write the latest-version metadata key outside the comparer-sorted Pebble data batch so inverted MVCC ordering and latest-value indexing do not conflict with metadata key ordering. Keep the read-path optimizations intact while removing the runtime batch ordering failure.

Made-with: Cursor
Strip the latest-value side index entirely to eliminate the write-ordering conflict between s/l: index keys and s/k: MVCC data keys under the custom comparer. Keep the inverted MVCC ordering, request cache, reusable iterators, and replay cache which are the main performance improvements.

Made-with: Cursor
Re-add the latest-value side index for fast recent reads. The actual root cause of the write-ordering error was empty/malformed keys from the state sync import path, not the index itself. Guard the import path and batch writers against empty storeKey or empty key to prevent the collision.

Made-with: Cursor
The s/l: latest-value index keys are fundamentally incompatible with the custom MVCC comparer during Pebble compaction. Remove the index entirely. Keep inverted MVCC ordering, request cache, reusable iterators, and replay cache.

Made-with: Cursor
Store a per-key "latest value" pointer at a reserved sentinel MVCC version
(math.MaxInt64) so the custom comparer sees a fully-formed MVCC key and
never the plain-shape keys that triggered the state-sync ordering bug.

Readers do a single bloom-filter accelerated db.Get on the sentinel key
before falling back to the MVCC scan path. Prune, RawIterate, and
iteration all explicitly skip the sentinel so it never surfaces as real
data. No new prefix, no new DB, no migration.

Made-with: Cursor
