feat: add signMessageWithoutRand method for kaspa wasm #587
Conversation
Signing without aux rand is dangerous and can leak keys.
This is completely untrue.
Can you elaborate? I'm not familiar with the underlying implementation here but I'm mainly concerned with this leading to nonce reuse.
@apoelstra is the main contributor of rust-secp256k1.
Andrew is obviously right :) To read more about the relevant discussions that arose around adding this randomness, please see: sipa/bips#195
Thanks for the input @elichai. @witter-deland I've reopened the PR while I read up more on this.
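For background on the nonce-reuse concern: BIP-340 derives its nonce deterministically from the secret key, public key, and message, and the auxiliary randomness is only mixed in as extra hardening (e.g. against fault and side-channel attacks). Signing without it is still deterministic per message, so two different messages never share a nonce. A rough sketch of that derivation (illustrative only; the helper names are made up and this is not the rusty-kaspa or rust-secp256k1 code):

```js
const { createHash } = require('crypto');

// BIP-340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || data...)
function taggedHash(tag, ...chunks) {
  const tagHash = createHash('sha256').update(tag, 'utf8').digest();
  const h = createHash('sha256').update(tagHash).update(tagHash);
  for (const c of chunks) h.update(c);
  return h.digest();
}

function xor32(a, b) {
  const out = Buffer.alloc(32);
  for (let i = 0; i < 32; i++) out[i] = a[i] ^ b[i];
  return out;
}

// Returns the 32-byte nonce seed (before reduction mod the curve order).
// Without auxiliary randomness, `aux` is simply 32 zero bytes; the result is
// still a deterministic function of (secret key, public key, message).
function deriveNonceSeed(seckey, pubkeyX, msg, aux = Buffer.alloc(32, 0)) {
  const t = xor32(seckey, taggedHash('BIP0340/aux', aux));
  return taggedHash('BIP0340/nonce', t, pubkeyX, msg);
}
```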
@coderofstuff Thanks
@coderofstuff May I ask when this PR will be merged? I need this feature to continue and complete the integration process. Thank you.
Excuse me, what's going on? This API is very straightforward. Are there any issues blocking the merge? Thank you.
Apologies for the delay, we're currently packed with other things.
The change itself looks sound. I just have some comments on code duplication.
fixed :)
Force-pushed from cff0732 to 9a25b94
@coderofstuff fixed :)
LGTM. Tested the scenarios and existing demo still works.
As a follow-up, please update the wasm/examples/nodejs/javascript/general/message-signing.js to include a demo of the new scenario you added support for (passing noAuxRand: true).
@coderofstuff Sure, done :)
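For reference, a sketch of what the added demo might look like. The module path, the `PrivateKey`/`signMessage`/`verifyMessage` names, and the test key below follow the existing nodejs example layout but are assumptions here; only the `noAuxRand: true` option comes from this PR:

```js
// Hypothetical demo for message-signing.js; verify names and paths against the repo.
const kaspa = require('../../../../nodejs/kaspa');
const { PrivateKey, signMessage, verifyMessage } = kaspa;

const privateKey = new PrivateKey('b7e151628aed2a6abf7158809cf4f3c762e7160f38b4da56a784d9045190cfef');
const message = 'Hello Kaspa!';

// With noAuxRand: true, no auxiliary randomness is mixed into the nonce, so
// signing the same message with the same key always produces the same signature.
const signature = signMessage({ message, privateKey, noAuxRand: true });
console.log('signature:', signature);

const verified = verifyMessage({ message, signature, publicKey: privateKey.toPublicKey() });
console.log('verified:', verified);
```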
Thanks for that!
Commits pulled in from master while this PR was open:
* rothschild: donate funds to external address with custom priority fee (kaspanet#482)
* fix wrong combiner condition (kaspanet#567)
* fix wRPC json notification format (kaspanet#571)
* Documentation updates (kaspanet#570)
* fix wasm rpc method types for methods without mandatory arguments (kaspanet#572)
* cleanup legacy bip39 cfg that interferes with docs.rs builds (kaspanet#573)
* Bump tonic and prost versions, adapt middlewares (kaspanet#553)
* Fix README.md layout and add linting section (kaspanet#488)
* Bump tonic version (kaspanet#579)
* replace statrs and statest deps & upgrade some deps (kaspanet#425)
* enhance tx inputs processing (kaspanet#495)
* Parallelize MuHash calculations (kaspanet#575)
* Muhash parallel reduce -- optimize U3072 mul when LHS = one (kaspanet#581)
* Rust 1.82 fixes + mempool std sig op count check (kaspanet#583)
* typo(cli/utils): kaspa wording (kaspanet#582)
* On-demand calculation for Ghostdag for Higher Levels (kaspanet#494)
* Standartize fork activation logic (kaspanet#588)
* Refactoring for cleaner pruning proof module (kaspanet#589)
* Pruning proof minor improvements (kaspanet#590)
* Add KIP-10 Transaction Introspection Opcodes, 8-byte arithmetic and Hard Fork Support (kaspanet#487)
* Some simplification to script number types (kaspanet#594)
* feat: add signMessage noAuxRand option for kaspa wasm (kaspanet#587)
* Optimize window cache building for ibd (kaspanet#576)
A later merge from master repeats the KIP-10 (kaspanet#487), script number (kaspanet#594), and signMessage noAuxRand (kaspanet#587) commits above and additionally brings in:
* test reflects enabling payload
* Enhance benchmarking: add payload size variations
* Enhance script checking benchmarks
* Add new test case for transaction hashing and refactor code
* Add payload activation test for transactions
* style: fmt
* test: add test that checks that payload change reflects sighash
* rename test