
Rollback for dumptxoutset without invalidating blocks#33477

Open
fjahr wants to merge 5 commits into bitcoin:master from fjahr:202509-better-rollback

Conversation

@fjahr
Contributor

@fjahr fjahr commented Sep 24, 2025

This is an alternative approach to implementing dumptxoutset with rollback that has been discussed a few times. It does not rely on invalidateblock and reconsiderblock; instead it creates a temporary copy of the coins DB, modifies this copy by rolling back as many blocks as necessary, and then creates the dump from this temporary copy. See also #29553 (comment), #32817 (comment) and #29565 discussions.

The nice side-effects of this are that forks cannot interfere with the rollback and network activity does not have to be suspended. But there are also some downsides compared to the current approach: it requires some additional disk space for the copied coins DB and performance is slower (master took 3m 17s vs. 9m 16s with the code here in my last test, rolling back ~1500 blocks). However, not much code is added here, the network can stay active throughout, and performance stays constant with this approach, while on master it would degrade if there were forks that needed to be invalidated as well (see #33444 for the alternative approach). So this could still be considered a good trade-off.
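
As a rough illustration of the idea (a toy model with std::map standing in for the coins DB and undo data; none of these names are from the actual PR), the rollback operates on a copy while the live set stays untouched:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Toy UTXO set: outpoint -> value.
using UtxoSet = std::map<std::string, int>;

// Undo data for one block: the outputs it created, and the inputs it
// spent (with the values needed to restore them).
struct BlockUndo {
    std::vector<std::string> created;  // remove these on rollback
    std::map<std::string, int> spent;  // re-add these on rollback
};

// Roll back `undos` (newest block first) against a *copy* of the live
// set, leaving the original untouched -- the core idea of this PR.
UtxoSet RollbackOnCopy(const UtxoSet& live, const std::vector<BlockUndo>& undos)
{
    UtxoSet copy{live};  // stands in for the temporary coins DB
    for (const auto& undo : undos) {
        for (const auto& op : undo.created) copy.erase(op);
        for (const auto& [op, val] : undo.spent) copy.emplace(op, val);
    }
    return copy;
}
```

Because only the copy is mutated, the live chainstate (and with it network activity) can carry on unaffected, at the cost of the extra disk space for the copy.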

@DrahtBot
Contributor

DrahtBot commented Sep 24, 2025

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage & Benchmarks

For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33477.

Reviews

See the guideline for information on the review process.

Type Reviewers
Concept ACK luke-jr, kevkevinpal, theStack, mzumsande
Stale ACK enirox001

If your review is incorrectly listed, please copy-paste <!--meta-tag:bot-skip--> into the comment that the bot should ignore.

Conflicts

Reviewers, this pull request conflicts with the following ones:

  • #34534 (rpc: Manual prune lock management (Take 2) by fjahr)
  • #34440 (Refactor CChain methods to use references, tests by optout21)

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

@fjahr
Contributor Author

fjahr commented Sep 24, 2025

cc @Sjors since you were asking for this approach a few times :)

@fjahr fjahr force-pushed the 202509-better-rollback branch 2 times, most recently from 716e1db to 6d409d5 Compare September 24, 2025 22:33
@luke-jr
Member

luke-jr commented Sep 25, 2025

Concept ACK, this seems cleaner.

master took 3m 17s vs 9m 16s in my last test with the code here

I suspect if you go back further, this approach will end up performing better because we no longer need to roll back forward at the end

Contributor

@kevkevinpal kevkevinpal left a comment

Concept ACK 6d409d5

This approach makes more sense. I reviewed the code a bit and made some comments, but nothing blocking

I also added comments on possible functional tests for the new JSONRPCError but these can be done in a followup

"Unless the \"latest\" type is requested, the node will roll back to the requested height and network activity will be suspended during this process. "
"Because of this it is discouraged to interact with the node in any other way during the execution of this call to avoid inconsistent results and race conditions, particularly RPCs that interact with blockstorage.\n\n"
"This call may take several minutes. Make sure to use no RPC timeout (bitcoin-cli -rpcclienttimeout=0)",
"This creates a temporary UTXO database when rolling back, keeping the main chain intact. Should the node experience an unclean shutdown the temporary database may need to be removed from the datadir manually.\n\n"
Contributor

It may be worth noting that "network activity will not be suspended during this process."

Contributor Author

I don't think this is necessary; a user wouldn't naturally assume that there is a reason to suspend network activity for this. Previously we had to do this basically as a hack. Now that we no longer do, it would seem odd to me to mention it.

CBlock block;
if (!node.chainman->m_blockman.ReadBlock(block, *block_index)) {
throw JSONRPCError(RPC_INTERNAL_ERROR,
strprintf("Failed to read block at height %d", block_index->nHeight));
Contributor

Might be able to add a functional test for this rpc error

Contributor Author

Hm, this one and the other similar comments about test coverage are all cases that are pretty hard to hit because they should only be possible in a case of db corruption or a similarly unlikely event. Not saying that this wouldn't be valuable coverage, but afaik we hardly ever go through the hassle to cover such cases, and this RPC is far from being a critical path compared to the rest of the code base. So, unless you have a specific suggestion for how they can be hit in a practical way, I would suggest we keep these for a follow-up. The current changes should be a robustness improvement by themselves.

(Marking the other comments as resolved for now but feel free to correct me if you think differently about one of them specifically)

}
~TemporaryUTXODatabase() {
try {
fs::remove_all(m_path);
Contributor

It might be useful to add a log here to inform the user the temp UTXO db was cleaned up since we mentioned in logs that we are creating a temp UTXO db

Contributor Author

Hm, I don't think we log anything on the creation of the DB so I think I would keep it the same on the destruction. It should only be a debug level log if we would add anything like that since it seems a bit low level for the general user.

@ajtowns
Contributor

ajtowns commented Sep 29, 2025

Nice!

performance is slower (master took 3m 17s vs 9m 16s in my last test with the code here,

That probably makes sense. It might be possible to do it faster and with less disk usage for relatively short rollbacks via a two step process:

  • create a read-only snapshot of the db
  • create an empty "coins-delta" db
  • iterate through the rev data to rollback, update the coins-delta db:
    • when you rollback past a coin's creation:
      • if the coin was in the snapshot db, add "[coin] deleted"
      • otherwise, if it was in the coins-delta db, remove "[coin]"
      • (otherwise, there's a bug)
    • when you rollback past a coin's spend, add "[coin]"
  • when you've finished the rollback,
    • iterate through the snapshot coins, skipping any where there is a "[coin] deleted" entry, reporting them
    • iterate through the non-deleted coins-delta coins, reporting them
  • delete the coins-delta db, delete the snapshot
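
The steps above could be sketched roughly like this (hypothetical names, with std::map standing in for the read-only snapshot and the coins-delta LevelDBs):

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>

// Read-only snapshot of the live coins DB: coin -> value.
using Snapshot = std::map<std::string, int>;
// Delta DB: a value means "coin restored", nullopt means "[coin] deleted".
using Delta = std::map<std::string, std::optional<int>>;

// Rolling back past a coin's creation.
void UndoCreation(const Snapshot& snap, Delta& delta, const std::string& coin)
{
    if (snap.count(coin)) delta[coin] = std::nullopt;  // tombstone a snapshot coin
    else delta.erase(coin);                            // coin appeared during rollback
}

// Rolling back past a coin's spend restores it.
void UndoSpend(Delta& delta, const std::string& coin, int value)
{
    delta[coin] = value;
}

// Enumerate the rolled-back set: snapshot minus tombstones, plus restored coins.
std::map<std::string, int> Enumerate(const Snapshot& snap, const Delta& delta)
{
    std::map<std::string, int> out;
    for (const auto& [coin, val] : snap) {
        auto it = delta.find(coin);
        if (it != delta.end() && !it->second) continue;  // "[coin] deleted"
        out.emplace(coin, val);
    }
    for (const auto& [coin, val] : delta) {
        if (val && !out.count(coin)) out.emplace(coin, *val);
    }
    return out;
}
```

For a short rollback the delta stays small, which is where the disk-usage saving would come from.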

Rather than being direct RPC functionality, maybe it would be better to have an RPC function to export a copy of the utxo set at the current height, and have a separate bitcoin-kernel binary that performs the rollback and utxoset stats calculation itself?

@theStack
Contributor

theStack commented Oct 9, 2025

Concept ACK

@enirox001
Contributor

ACK 6d409d5

This is a good change. Using a temporary coins DB for the rollback is a much cleaner and safer solution than the invalidateblock method. It correctly solves fork-related bugs by isolating the process and avoids the need for network suspension, making it a superior approach to what I proposed in #33444.

The code is well-contained, and the new TemporaryUTXODatabase class handles the DB lifecycle cleanly.

I've pulled the branch, compiled, and the full functional test suite passes, including the rpc_dumptxoutset test

Contributor

@mzumsande mzumsande left a comment

Concept ACK

I suspect if you go back further, this approach will end up performing better because we no longer need to roll back forward at the end

It would be interesting if someone could try it out by going back 25k blocks or more, as is usually done for creating snapshots (I can't right now).

WITH_LOCK(::cs_main, cursor = chainstate.CoinsDB().Cursor());

size_t coins_count = 0;
while (cursor->Valid()) {
Contributor

@mzumsande mzumsande Oct 27, 2025

Don't we need to hold cs_main throughout the copying phase? What if the utxo set changes while we are copying coins?

Contributor Author

I don't think so, my understanding is that the cursor/LevelDB iterator works on a snapshot of the DB itself which doesn't get mutated. You can see the same pattern in WriteUTXOSnapshot where we are working with a cursor without holding cs_main as well.
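
A toy illustration of that snapshot behavior (a plain map copy standing in for the pinned view a LevelDB iterator sees; not the actual cursor code):

```cpp
#include <cassert>
#include <map>
#include <string>

using Coins = std::map<std::string, int>;

// Returns the coins visible to a "cursor" created before `mutate` runs.
// The copy models LevelDB's iterator snapshot: later writes to the live
// DB do not show up in an iteration that was already started.
template <typename F>
Coins IterateSnapshot(Coins live, F mutate)
{
    Coins snapshot{live};  // analogous to CoinsDB().Cursor() pinning a view
    mutate(live);          // live set keeps changing underneath
    return snapshot;       // iteration result is unaffected
}
```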

@fjahr fjahr force-pushed the 202509-better-rollback branch from 6d409d5 to b71ee30 Compare November 26, 2025 21:45
@fjahr
Contributor Author

fjahr commented Nov 26, 2025

Thanks for all the feedback so far and sorry for the slow response!

Rather than being direct RPC functionality, maybe it would be better to have an RPC function to export a copy of the utxo set at the current height, and have a separate bitcoin-kernel binary that performs the rollback and utxoset stats calculation itself?

Hm, this feels a bit overengineered for this functionality, considering the overhead for test coverage and build changes. Maybe I am overcomplicating it in my head; I have not done much with the kernel yet. But if this is considerably more complex, I would rather go ahead with this first and keep the kernel idea for consideration in a future change.

Would be interesting if someone could try out by going back 25k blocks or more, as is usually done for creating snapshots (I can't right now).

I tried with 880,000, our v29 param, so a rollback of 45,000 blocks. It took just under 20 min in total and it returned the correct hash.

$ build/bin/bitcoin-cli -rpcclienttimeout=0 -named dumptxoutset ~/Downloads/utxo880k.dat rollback=880000
{
  "coins_written": 184821030,
  "base_hash": "000000000000000000010b17283c3c400507969a9c2afd1dcf2082ec5cca2880",
  "base_height": 880000,
  "path": "/Users/FJ/Downloads/utxo880k.dat",
  "txoutset_hash": "dbd190983eaf433ef7c15f78a278ae42c00ef52e0fd2a54953782175fbadcea9",
  "nchaintx": 1145604538
}

It might be possible to do it faster and with less disk usage for relatively short rollbacks via a two step process:

I actually had a similar idea early on but then stayed with the simpler approach. I will try this out with a POC and check the performance impact.

@fjahr fjahr force-pushed the 202509-better-rollback branch from b71ee30 to 53865e7 Compare November 26, 2025 21:51
@DrahtBot
Contributor

🚧 At least one of the CI tasks failed.
Task Windows-cross to x86_64: https://github.com/bitcoin/bitcoin/actions/runs/19718249847/job/56495362448
LLM reason (✨ experimental): Compile errors in rpc/blockchain.cpp due to RPCHelpMan constructor signature mismatch (brace-initializer/API change).

Hints

Try to run the tests locally, according to the documentation. However, a CI failure may still
happen due to a number of reasons, for example:

  • Possibly due to a silent merge conflict (the changes in this pull request being
    incompatible with the current code in the target branch). If so, make sure to rebase on the latest
    commit of the target branch.

  • A sanitizer issue, which can only be found by compiling with the sanitizer and running the
    affected test.

  • An intermittent issue.

Leave a comment here, if you need help tracking down a confusing failure.

@fjahr fjahr force-pushed the 202509-better-rollback branch from 53865e7 to e0162ab Compare November 26, 2025 22:05
@fjahr
Contributor Author

fjahr commented Nov 27, 2025

It might be possible to do it faster and with less disk usage for relatively short rollbacks via a two step process:

I actually had a similar idea early on but then stayed with the simpler approach. I will try this out with a POC and check the performance impact.

I tried a few different takes on the delta-based idea, including some vibe coding tests. Here is the latest one; something is definitely still broken there, but the dump it generates is correct. All tests I did had in common that the processing actually took longer than the current approach, almost 28 min with the latest code version. I am now thinking that longer rollbacks may not be quicker; only shorter ones will be, because the big copy overhead at the beginning is saved. But then we also don't have that much to gain. Even if the performance can be improved, now that I have played around with it, it doesn't seem worth the additional code complexity it introduces. It would need to be significantly faster to justify that and I am currently not seeing that.

@sedited
Contributor

sedited commented Mar 4, 2026

Even if the performance can be improved, now that I have played around with it, it doesn't seem worth the additional code complexity it introduces. It would need to be significantly faster to justify that and I am currently not seeing that.

Are you saying this PR might not be worth it, or just the other approaches you tried out? I'm also curious whether this runs counter to the purpose of #31560, which would reduce the amount of interim data written to disk for the purpose of collecting the data into a db.

@fjahr
Contributor Author

fjahr commented Mar 6, 2026

Are you saying this PR might not be worth it, or just the other approaches you tried out?

Wording is a bit confusing, sorry. I meant the other approach that I tried out. That approach had a negative impact on performance but even if that could be improved I would still doubt it's our best choice.

I'm also curious if this runs counter to the purpose of #31560 , which would reduce the amount of interim data written to disk for the purpose of collecting the data into a db.

I haven't reviewed that one but from a quick glance I don't see why these would conflict. This PR here deals with how the UTXO set data is collected and #31560 deals with writing that data somewhere else. What #31560 does is just overwriting the temppath variable for this special case, but the call to WriteUTXOSnapshot stays the same as in master and here.

It also seems like #31560 is basically RFM. I would say this can move forward and I will figure the integration in the rebase :)

@ajtowns
Contributor

ajtowns commented Mar 7, 2026

Oh, huh, I kind-of thought this was merged already. I guess ping for review after 31560 is merged and this is rebased on top?

@fjahr
Contributor Author

fjahr commented Mar 7, 2026

Oh, huh, I kind-of thought this was merged already. I guess ping for review after 31560 is merged and this is rebased on top?

Sure, I'm just waiting for 31560 myself now :) FWIW, I've been using this branch for all my txoutset dump needs since I opened it and never saw any issues so it seems to work well (anecdotally).

@fjahr fjahr force-pushed the 202509-better-rollback branch from e0162ab to 921827c Compare March 12, 2026 13:39
@fjahr
Contributor Author

fjahr commented Mar 12, 2026

Rebased on top of the changes from #31560, the conflict was very minor and things seem to just work (TM). I even added a commit which extends the sqlite test for named pipe usage to do a rollback so both functionalities are tested together.

I'm also curious if this runs counter to the purpose of #31560 , which would reduce the amount of interim data written to disk for the purpose of collecting the data into a db.

@sedited I have thought about your comment again and I probably didn't address it correctly. Your point was not about whether things can be integrated mechanically but rather that this offsets the gain someone might aim for when using a pipe, right? Indeed, the negative impact is that more disk space is used temporarily while the temp UTXO set exists. However, this happens only in the rollback case; with the latest UTXO set the gain is still there.

Contributor

@theStack theStack left a comment

Tested manually on signet so far with an arbitrary snapshot height (deep rollback=50000) and it seems to work fine, both with a regular file and a named pipe. Out of curiosity I tried to place the temporary db in /tmp (tmpfs)

diff --git a/src/rpc/blockchain.cpp b/src/rpc/blockchain.cpp
index e2a15d52d8..8453f8fe0d 100644
--- a/src/rpc/blockchain.cpp
+++ b/src/rpc/blockchain.cpp
@@ -3121,7 +3121,7 @@ static RPCHelpMan dumptxoutset()
                                               target_index,
                                               std::move(afile),
                                               path,
-                                              temppath);
+                                              "/tmp/");
     }

     if (!fs::is_fifo(path_info)) {

with the hope that it could be a bit faster, and also experimented with higher .cache_bytes values for the temporary db (256_MiB, 1024_MiB), but none of these changes made a difference in performance. Will re-test on mainnet within the next days.


// Log every 10M coins (optimized for mainnet)
if (coins_count % 10'000'000 == 0) {
LogInfo("Copying UTXO set: %uM coins copied.", coins_count / 1'000'000);
Contributor

logging nit: would be nice to also show the total number of coins, to provide some information about the copying progress. I guess that's not easily possible though, as we would need to call GetUTXOStats first (adding a significant delay and therefore not worth it)?

Contributor Author

@fjahr fjahr Mar 13, 2026

Yeah, I don't like it either, but to my knowledge there isn't an easy enough way that would be worth it, IMO, unless the user runs coinstatsindex. Doing the logging differently with coinstatsindex is certainly doable but also a bit ugly, so I'm not sure it's worth it. Running the stats before getting started also doesn't seem worth the performance hit. One creative idea I had was to take the db size on disk and calculate an estimate of progress based on bytes processed. But that is probably even worse than the other approaches because it takes more code; we would need to deal with all the stuff LevelDB does, like at least triggering a compaction manually before getting the size on disk, etc. And it would likely be inaccurate often, so we might still get user complaints. One more dirty estimate idea: getting the latest assumeutxo coins count and then extrapolating a coins count based on the blocks delta. That would have worked pretty well until 2023, but now it would be off to an annoying degree as well: https://mainnet.observer/charts/utxoset-size/

Happy to consider more ideas but so far I haven't had one where I felt it was worth it for nicer logging.

Contributor

@theStack theStack Mar 13, 2026

I agree that none of the mentioned ideas are worth the additional code complexity / performance penalty just for nicer log messages (though it's interesting to reason about them, even if they are dirty :-)). Forgot to mention this in my review yesterday, but for the rollback a progress indicator could be added, as in that case the necessary information is available, e.g. "Rolled back 500/52000 blocks" (maybe even show the percentage if we want to be extra fancy). But even that I would consider a non-blocking nit.
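
The "extra fancy" variant could look roughly like this (illustrative helper, not the PR's actual logging code; for a rollback the total block count is known up front, so the percentage is cheap to compute):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Format a rollback progress line with a percentage, e.g. for LogInfo.
std::string RollbackProgress(int done, int total)
{
    char buf[64];
    std::snprintf(buf, sizeof(buf), "Rolled back %d/%d blocks (%.1f%%)",
                  done, total, total ? 100.0 * done / total : 0.0);
    return buf;
}
```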

Contributor Author

Good idea. Added the rollback progress, the extra fancy way :)

@fjahr fjahr force-pushed the 202509-better-rollback branch from 921827c to 9da705c Compare March 13, 2026 11:49
// Create temporary database
DBParams db_params{
.path = temp_db_path,
.cache_bytes = 16_MiB,
Contributor

@sedited sedited Mar 13, 2026

Does this make a difference at all? We are just reading once, so I would not expect db-level caching to make a difference.

Contributor Author

You're right; technically we are reading twice, but the cache doesn't really help here. During the rollback phase each coin is looked up only once, so the cache has no impact. And at the end everything is read once for the final dump, where a 16 MB cache is negligible.

Good catch, setting it to 0.

DBParams db_params{
.path = temp_db_path,
.cache_bytes = 16_MiB,
.memory_only = false,
Contributor

Might it be interesting to make this a named argument? On systems with heaps of memory this might reduce the disk load a bit, and for me at least it seems to cut the dump time in half.

Contributor Author

@fjahr fjahr Mar 14, 2026

Sure, that seems a small enough change to be worth it. I have added it as a separate commit so it can be considered as an optional addition to the change here.

strprintf("Failed to read block at height %d", block_index->nHeight));
}

WITH_LOCK(::cs_main, res = chainstate.DisconnectBlock(block, block_index, rollback_cache));
Contributor

Might there be a TOCTOU issue in case pruning happens in the meantime here?

Contributor Author

Hm, yeah, I think there is. The simple way would have been to just throw an error in this case, but that's a bit annoying for the user, because maybe it would have worked and now they have to resync since they lost the blocks. I instead implemented what I think is a nicer solution using a prune lock.

@fjahr fjahr force-pushed the 202509-better-rollback branch from 9da705c to b3ea153 Compare March 14, 2026 22:42
@fjahr
Contributor Author

fjahr commented Mar 14, 2026

Addressed comments from @sedited and @theStack , thanks for the review!

There are now two new commits and a few extra lines of code. As mentioned in some of the comments, there would probably have been simpler solutions available for each of the addressed issues, and I am open to rolling back (pun intended) what I have added in this push in favor of something simpler if that's what reviewers prefer.

EDIT: FWIW, the newly added DeletePruneLock would also be a part of #34534.

Contributor

@sedited sedited left a comment

This looks good to me, but I would recommend splitting the DeletePruneLock from the last commit and adding a unit test for it. To me it would also make sense to just squash it into the commit changing the rest of the rollback logic.


LogInfo("Copying current UTXO set to temporary database.");
{
WITH_LOCK(::cs_main, chainstate.ForceFlushStateToDisk());
Contributor

Should we retain the cache here?

const bool in_memory)
{
// Create a temporary leveldb to store the UTXO set that is being rolled back
std::string temp_db_name{strprintf("temp_utxo_%d", target->nHeight)};
Contributor

How about adding a few random characters to this? Might that prevent a collision bug where a re-org takes place in case we are dumping close to the tip height?
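
A minimal sketch of that suggestion (hypothetical helper name; the PR's actual naming may differ): a random suffix makes two dumps at the same height distinct, so a re-org near the tip cannot reuse a stale temp directory.

```cpp
#include <cassert>
#include <random>
#include <string>

// Build a temp db name like "temp_utxo_880000_k3x9a1qz": height plus an
// 8-character random alphanumeric suffix to avoid name collisions.
std::string TempDbName(int height, std::mt19937_64& rng)
{
    static const char kAlnum[] = "abcdefghijklmnopqrstuvwxyz0123456789";
    std::uniform_int_distribution<size_t> pick(0, sizeof(kAlnum) - 2);
    std::string suffix;
    for (int i = 0; i < 8; ++i) suffix += kAlnum[pick(rng)];
    return "temp_utxo_" + std::to_string(height) + "_" + suffix;
}
```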
