[WIP] Reuse sighash computations across evaluation #8654
jl2012 wants to merge 4 commits into bitcoin:master
Conversation
|
Fixed a bug in #4562. It requires 256 cache slots instead of just 6 for the common types, since nHashType is part of the sighash, so each one is unique. More tests are needed for witness txs.
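To illustrate the "256 slots, not 6" point above, here is a minimal sketch of a per-input sighash cache indexed by the nHashType byte. All names (`SigHashCache256`, `Fetch`, `Wipe`) are hypothetical, not the PR's actual code, and `std::string` stands in for a real 256-bit digest:

```cpp
#include <array>
#include <cstdint>
#include <functional>
#include <optional>
#include <string>

// Hypothetical sketch: one cache slot per possible nHashType byte.
// Because nHashType is committed to in the digest, any of the 256
// values can yield a distinct sighash, so 6 slots for the "common"
// types would not be sufficient.
class SigHashCache256 {
public:
    // Return the cached digest for nHashType, computing it on a miss.
    const std::string& Fetch(uint8_t nHashType,
                             const std::function<std::string(uint8_t)>& compute) {
        auto& slot = m_slots[nHashType];
        if (!slot) slot = compute(nHashType);
        return *slot;
    }
    // OP_CODESEPARATOR changes scriptCode mid-execution, so every
    // slot must be invalidated.
    void Wipe() { m_slots.fill(std::nullopt); }
private:
    std::array<std::optional<std::string>, 256> m_slots;
};
```

The `Wipe()` call models why the cache cannot naively outlive script execution, as the OP_CODESEPARATOR discussion below makes clear.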
|
Nice work with the tests so far!
|
Tests updated to cover #8524. Benchmarking with and without #8524/#8654: https://gist.github.com/jl2012/f3262fd7cf47664ce43f036b9539e831
|
Would it be possible to make the cache a field of the TransactionChecker rather than yet another parameter of CheckSig?
|
@NicolasDorier I believe so, yes!
|
@NicolasDorier Actually, no. Not until we outlaw OP_CODESEPARATOR, as the script execution needs to be able to wipe the cache.
|
@sipa You can add a ScriptCode field to SigHashCache, and add a check in CheckSig to use the cache only if the ScriptCode matches.
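The suggestion above can be sketched as follows. This is an illustrative toy, not the PR's code: names are hypothetical and `std::string` stands in for CScript. The idea is that the cache remembers which scriptCode its entries were computed for, so an OP_CODESEPARATOR (which changes scriptCode) implicitly invalidates stale entries:

```cpp
#include <array>
#include <cstdint>
#include <functional>
#include <optional>
#include <string>

// Hypothetical sketch of a scriptCode-aware sighash cache.
struct ScriptCodeSigHashCache {
    std::string scriptCode;  // the scriptCode the cached digests belong to
    std::array<std::optional<std::string>, 256> slots;

    const std::string& Fetch(const std::string& code, uint8_t nHashType,
                             const std::function<std::string()>& compute) {
        if (code != scriptCode) {       // scriptCode changed: drop everything
            slots.fill(std::nullopt);
            scriptCode = code;
        }
        auto& slot = slots[nHashType];
        if (!slot) slot = compute();    // computed at most once per (code, type)
        return *slot;
    }
};
```

With this shape, the cache can live inside the checker object, since callers no longer need to wipe it explicitly.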
|
I hope nobody intends to outlaw OP_CODESEPARATOR; it is a nice way for a signer to sign a particular execution path without requiring multiple public keys. There may be cool things to do with it and MAST, since with MAST we can have lots of different branches.
|
@NicolasDorier Nice idea. Implemented in https://github.com/sipa/bitcoin/commits/sighashcache (@jl2012: feel free to squash if you like it). I don't understand what you're suggesting about OP_CODESEPARATOR. Perhaps we should discuss that on IRC instead.
|
Code review ACK for https://github.com/sipa/bitcoin/commits/sighashcache (this PR + the commit removing the param to CheckSig). However, I'm not sure it is really useful: in the case of an O(n^2) hashing attack, a 10x performance improvement on a 5-second block verification is not going to make a huge difference.
|
Squashed with @sipa's patch. @NicolasDorier, the effect would be limited for a miner-initiated attack. However, for standard transactions, 10x could be a make-or-break difference.
|
utACK 9e4cf76412e17efa8fdeb3e7626ccf25b578e036
|
@jl2012 As implemented now, the cache is per input. So how can it give a 10x improvement for standard transactions?
|
@NicolasDorier Please move back-and-forth discussions to IRC.
|
utACK 9e4cf76
|
Is this targeted for 0.13.1?
|
I have an alternative implementation as ed71079 in #8756. It only reuses a sighash for a signature within a single CHECKMULTISIG.
|
I think it would take more time to analyse this issue and figure out the best approach. I'm inclined not to include this in 0.13.1.
This makes sure that SignatureHash is performed only once for each signature within a CHECKMULTISIG. Alternative to bitcoin#8654.
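A toy model of that alternative approach (all names here are hypothetical, and `matches` stands in for real ECDSA verification): inside one CHECKMULTISIG, a signature may be tried against several public keys, but its sighash depends only on the transaction, the scriptCode, and its own hashtype byte, so it can be computed once per signature and reused across the key trials:

```cpp
#include <functional>
#include <optional>
#include <string>
#include <vector>

// Illustrative sketch of the CHECKMULTISIG matching loop: signatures
// and keys are matched in order, and each signature's sighash is
// computed lazily, at most once, however many keys it is tried against.
int CountMultisigMatches(
    const std::vector<std::string>& sigs,
    const std::vector<std::string>& keys,
    const std::function<std::string(const std::string&)>& sighash,
    const std::function<bool(const std::string&, const std::string&,
                             const std::string&)>& matches) {
    int found = 0;
    size_t ikey = 0;
    for (const auto& sig : sigs) {
        std::optional<std::string> cached;      // one sighash per signature
        while (ikey < keys.size()) {
            if (!cached) cached = sighash(sig); // computed at most once
            if (matches(sig, keys[ikey++], *cached)) {
                ++found;
                break;
            }
        }
    }
    return found;
}
```

The point of the sketch is the call count: `sighash` runs once per signature, not once per (signature, key) trial, which is where the CHECKMULTISIG speedup comes from.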
|
Tests are updated to cover FindAndDelete. The earlier version (#4562) did not reset the cache after FindAndDelete, and the new tests would have caught that bug. However, the tests require the public key recovery code from https://github.com/petertodd/python-bitcoinlib , which may have copyright issues. Any suggestions?
|
I'm interested in understanding the benchmark results a little bit more. It seems there are huge speedups in big CHECKMULTISIG transactions, but there appears to be a non-negligible slowdown in more regular transactions (the last test in the benchmark). I'd be concerned that this code will, on average, slow down block validation. Has it been benchmarked for typical blocks, either during IBD or normal operation?
|
@morcos: good question. I'll try to do some benchmarking in real operation.
|
I did some benchmarking but did not see any noticeable performance difference either way for typical usage.
|
Untagging this for 0.14, as discussed in the meeting.
|
Needs rebase (though if you don't get to it early this week, maybe just wait until post-15).
|
@jl2012 Should this have the "up for grabs" label so someone can pick it up, in case you're busy elsewhere?
|
@kallewoof This PR is quite risky (it might introduce a consensus bug), and not very helpful unless we have the following softforks: 1. limiting SIGHASH to the 6 common types; 2. disallowing OP_CODESEPARATOR; 3. disallowing FindAndDelete(). The first is already a policy and the other two are proposed in #11423. However, policy alone is not enough.
|
@jl2012 Got it! Maybe add [WIP] to it so we know not to review it until it's ready for merging.
|
Recommend we close this PR for now. If the following events happen, we can reopen. Does that sound reasonable, @jl2012?
|
Agree with @jnewbery.
Rebase of #4562 by @sipa.
The effect of doing this would not be very obvious in normal use, but it could give a >10x improvement against an O(n^2) hashing attack.
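For context, a back-of-the-envelope model (illustrative numbers only, not a claim about any specific transaction) of why legacy sighash verification is quadratic: each input's signature check re-hashes roughly the whole transaction, so the total bytes hashed scale with n_inputs times tx_size, and tx_size itself grows with n_inputs:

```cpp
#include <cstddef>

// Illustrative model: total bytes hashed during legacy sighash
// verification of a transaction with nInputs inputs, each contributing
// roughly bytesPerInput to the serialized size (outputs and overhead
// are ignored for simplicity). Doubling the inputs quadruples the work.
std::size_t LegacyBytesHashed(std::size_t nInputs, std::size_t bytesPerInput) {
    std::size_t txSize = nInputs * bytesPerInput;  // tx grows with its inputs
    return nInputs * txSize;                       // one full-tx hash per input
}
```

A per-input sighash cache cuts the constant factor on repeated signature checks, but as noted earlier in the thread, it does not change the O(n^2) shape itself.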