Staging to master v2.4.0 #5071
Merged
Conversation
Extends the 'wallet:transaction:view' command to support viewing transaction details for serialized posted transactions. This allows you to download a transaction from the block explorer, decrypt its notes, and view transaction details on the CLI. Adds a new wallet RPC, getTransactionNotes, which replicates the logic of getUnsignedTransactionNotes, but for posted transactions. Adapts the logic of renderUnsignedTransactionDetails to generalize over unsigned and posted transactions.
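A minimal sketch of driving this flow through the RPC client, assuming the SDK's IronfishSdk.init/connectRpc entry points; getTransactionNotes is the RPC named above, but the request field and response shape used here are assumptions for illustration:

```typescript
import { IronfishSdk } from '@ironfish/sdk'

// Hypothetical helper: decrypt and print the notes of a serialized posted
// transaction (e.g. downloaded from the block explorer). The 'transaction'
// request field and the response shape are assumptions, not the actual API.
async function viewPostedTransactionNotes(serializedTransactionHex: string): Promise<void> {
  const sdk = await IronfishSdk.init()
  const client = await sdk.connectRpc()

  const response = await client.wallet.getTransactionNotes({
    transaction: serializedTransactionHex,
  })

  for (const note of response.content.receivedNotes) {
    console.log(`memo=${note.memo} value=${note.value}`)
  }
}
```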
…#4950) Adds a public function to the Rust Node.js bindings that returns a public address from an incoming view key (IVK).
This code was not correct in the first place, because a migration should never use current encoders or models.
* Feat: support chain/getBlocks (usage sketch below)
* Feat: add getBlocks to rpc client
* Feat: add serializeRpcTransaction
* Feat: add tests for getBlocks
* Fix: clear unnecessary import
* Fix: getBlocks test
* Fix: add serialized back to serializeRpcTransaction
* Fix tests
* Remove MAX_BLOCKS_RANGE
* Make range inclusive
* Rename height to sequence
* Support serialized parameter
* With to At
* Use getBlockRange

---------

Co-authored-by: Derek Guenther <[email protected]>
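A short sketch of calling the new route from the RPC client. The parameter names follow the bullets above ("Rename height to sequence", "Make range inclusive", "Support serialized parameter") but are assumptions as written:

```typescript
import { IronfishSdk } from '@ironfish/sdk'

// Illustrative only: fetch an inclusive range of blocks and include the
// serialized transaction bytes in the response.
async function fetchBlocks(start: number, end: number): Promise<void> {
  const sdk = await IronfishSdk.init()
  const client = await sdk.connectRpc()

  const response = await client.chain.getBlocks({
    start,
    end, // inclusive on both ends
    serialized: true, // also return serialized transactions
  })

  for (const { block } of response.content.blocks) {
    console.log(block.sequence, block.hash)
  }
}
```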
This was merging an AccountValue type and the walletDB into this class. There's no reason to do that, since the intent is for the constructor to accept an AccountInfo and a WalletDB.
* Separate out block getter from RPC serialization

  This adds a new method, getBlockAtSequence, to the blockchain, matching getHeaderAtSequence. It also no longer conflates data fetching with serialization, so getBlocks is a lot cleaner now.

* Update ironfish/src/rpc/routes/chain/utils.ts

  Co-authored-by: Derek Guenther <[email protected]>

* Added a test

---------

Co-authored-by: Derek Guenther <[email protected]>
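Roughly, the separation looks like the sketch below; getBlockAtSequence comes from the commit message, while the surrounding types and the serializer parameter are placeholders:

```typescript
// Placeholder types standing in for the real Blockchain and RPC layers.
interface Block {
  sequence: number
}
interface Blockchain {
  getBlockAtSequence(sequence: number): Promise<Block | null>
}

// Data fetching lives on the chain; the RPC route only serializes.
async function getBlocksForRpc(
  chain: Blockchain,
  start: number,
  end: number,
  serialize: (block: Block) => unknown,
): Promise<unknown[]> {
  const results: unknown[] = []
  for (let sequence = start; sequence <= end; sequence++) {
    const block = await chain.getBlockAtSequence(sequence)
    if (block !== null) {
      results.push(serialize(block))
    }
  }
  return results
}
```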
* adds raw transaction flag to combine notes
* reuses createTransactionBytes variable
* determines raw tx sender from tx fields

  Uses the spends, outputs, and/or mints in a raw transaction to determine which account to use to post the transaction (see the sketch after this list). Deprecates the account flag from the 'wallet:post' CLI command and deprecates the account request parameter from the 'wallet/postTransaction' RPC.

* fixes postTransaction tests

  Correctly loads the account by public address.

* regenerates postTransaction fixtures
* mock broadcast call
* mock post call
* simplify post test

  Do not preserve state between tests.

* replace outputs with mints in raw transactions

  Avoids building chain state.
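A sketch of the matching idea under assumed shapes: collect the sender addresses implied by the raw transaction's fields and pick the unique wallet account that owns one of them. The types and the senders() accessor are illustrative, not the actual SDK API:

```typescript
// Illustrative shapes; the real raw transaction derives senders from its
// spends, outputs, and/or mints.
interface Account {
  name: string
  publicAddress: string
}
interface RawTransactionLike {
  senders(): string[]
}

function findSenderAccount(accounts: Account[], rawTx: RawTransactionLike): Account {
  const senders = new Set(rawTx.senders())
  const matches = accounts.filter((account) => senders.has(account.publicAddress))

  if (matches.length !== 1) {
    throw new Error('Cannot uniquely determine which account to post with')
  }
  return matches[0]
}
```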
* Move wallet/account/encoder to wallet/exporter

  We keep running into issues where people use code in the account and walletDB from the exporter, which they should not do. There should be no code shared between the walletDB and the exporter. This continuously causes issues where people add things to the DB and suddenly it gets exported, or is expected to be decoded when someone imports a wallet.

* Fixed lint
This encoder is primarily used in the DB; its use in the Bech32 encoder will be removed soon.
* Remove AccountValue from account importer/exporter

  The exporter and importer infrastructure for accounts should not use AccountImport. This means that if you store new fields in the account DB, this will change the export format and break compatibility with accounts. If you change the accounts, nothing should change about exported accounts automatically.

* Update ironfish/src/rpc/routes/wallet/exportAccount.ts

  Co-authored-by: Derek Guenther <[email protected]>

* Rename encoded name
* Remove else
* Move viewOnly parameter to toAccountImport

---------

Co-authored-by: Derek Guenther <[email protected]>
Displays account names in alphabetical order when no account is specified by flags.
* Give bech32 encoder its own multisig encoder

  These should not share the same encoder. It's possible to break old wallets by updating the AccountValue multisig encoder.

* Simplified bech32 encoder
Migration 031 is not using the correct multisig encoder that matches AccountValue. This leads to errors when you try to use the new migration for 031 in any future migration, such as 032.
* defaults cli tables to no-truncate

  We typically do not want to truncate data that we display, especially hashes (see the sketch after this list).

  - Wraps oclif 'CliUx.ux.table.flags()' and adds a 'truncate' flag which defaults to false.
  - Includes 'no-truncate' as a hidden flag that defaults to the inverse of the 'truncate' flag. This allows passing table flags to the oclif table constructor without handling our own truncate flag.
  - Replaces all uses of oclif table flags with our own constant.

* removes relationships for no-truncate

  Unnecessary, since default(context) sets the flag to the inverse of truncate.
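A sketch of the wrapper described above, assuming oclif v2's CliUx.ux.table.flags() and Flags.boolean; the flag names mirror the bullets:

```typescript
import { CliUx, Flags } from '@oclif/core'

// Spread oclif's table flags, add our own 'truncate' flag defaulting to
// false, and keep a hidden 'no-truncate' flag whose default is the inverse
// of 'truncate', so the whole set can be passed straight to the oclif table
// constructor without handling truncation ourselves.
const tableFlags = {
  ...CliUx.ux.table.flags(),
  truncate: Flags.boolean({
    default: false,
    description: 'Truncate displayed data to fit the screen',
  }),
  'no-truncate': Flags.boolean({
    hidden: true,
    default: async (context) => !context.flags['truncate'],
  }),
}
```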
---------

Co-authored-by: Jason Spafford <[email protected]>
* Remove unused optional parameter from decryptNotes
* Always scan from earliest account head
* Remove createTransaction isAccountUpToDate check

  The previous version of isAccountUpToDate checked against the wallet's head, but the wallet head is an arbitrary block, so the chain head would be more appropriate. However, I think this is too restrictive to be enforced as a hard requirement at the SDK level, since you might have knowledge that your account only has transactions through a limited number of blocks. A potential downside could be accidentally creating double spends where users would have previously been prevented from creating transactions.
This argument was unused.
The worker pool is a thread pool that allows multiple jobs to run concurrently. The worker pool is normally started only when a full node is initialized. If a full node is not initialized, then no separate thread is started, and jobs run sequentially rather than concurrently. This means that the performance of CLI commands that use the worker pool, like `wallet:rescan`, can vary drastically depending on whether a full node is running. This behavior is not documented and is far from obvious from a user perspective. To always offer the best performance, this commit changes the worker pool to ensure that threads are always spawned before executing any job.
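A minimal sketch of the behavior change, with a simplified pool interface (the real WorkerPool API differs): spawn threads on first use rather than only at full-node startup:

```typescript
import { Worker } from 'worker_threads'

class WorkerPool {
  private workers: Worker[] = []

  start(parallelism: number): void {
    if (this.workers.length > 0) return
    for (let i = 0; i < parallelism; i++) {
      // './worker.js' is a hypothetical thread entry point
      this.workers.push(new Worker('./worker.js'))
    }
  }

  async execute<T>(job: () => Promise<T>): Promise<T> {
    this.start(4) // ensure threads exist before running any job
    return job()
  }
}
```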
* Separate out account decoding from encryption

  This fixes the fact that we attempted to add encryption to our encoder itself, but it was never fully inserted into the encoding, because the decoding flow never used the wallet to try all the secrets. This means the encryption of an encoded account was only half in the encoding system and half out. This introduces a very clearly separated module at wallet/exporter/encryption which has all the methods to handle encryption.

* Removed unused code
* Update ironfish/src/wallet/exporter/encryption.ts

  Co-authored-by: Derek Guenther <[email protected]>

---------

Co-authored-by: Derek Guenther <[email protected]>
An account's 'createdAt' field contains the sequence and hash of the block at which that account was created. We use it to skip decryption of notes during rescans on blocks that came before the 'createdAt' block. We check that the 'createdAt' block is in the chain in several places to protect against block references that may be from other networks or from pruned forks. However, it should only be necessary to check for the account birthday when the account is imported, not when the wallet starts or in the middle of a scan. By reducing the number of places where we might reset an account's 'createdAt', we can reduce the sources of error from resetting it to null.
oclif removes this from its library in the 4.x upgrade. This change is in preparation for that. On top of that, it builds a cleaner abstraction for using progress bars. This will enable us to develop CLI tools that use progress bars faster, while keeping the design more consistent.
* adds networkId to wallet, accountImport

  Adds networkId to the wallet and wallet constructor. Node wallets and standalone wallets can only run on a single network without a reset. The networkId is stored in the 'internal' file, so it is not necessary to persist networkId in wallet metadata.

  Adds a networkId field to the 'AccountImport' type to indicate which networkId an account was running on at export time. The network that an account was running on is only important when importing the account to another node.

  Uses networkId to determine whether the imported account birthday matches the network, instead of looking up the block from the chain (see the sketch after this list). This avoids resetting the account birthday if the account is imported to a node that hasn't synced to the birthday yet.

* fixes wallet tests
* nests networkId into createdAt in account exports
* simplifies account fixture deserialization
* fixes lint
* addresses pr feedback

  Removes the incorrect yup field for networkId in the JSON encoder. Only logs a warning message on import if networkId is defined in the imported account.
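A sketch of the import-time check, with illustrative shapes: compare the export's networkId against the wallet's own networkId instead of looking the createdAt block up in the chain:

```typescript
interface CreatedAt {
  sequence: number
  networkId?: number
}

function resolveCreatedAt(
  createdAt: CreatedAt | null,
  walletNetworkId: number,
  logger: { warn(message: string): void },
): CreatedAt | null {
  if (createdAt === null) return null

  // Only warn when the export actually recorded a networkId.
  if (createdAt.networkId !== undefined && createdAt.networkId !== walletNetworkId) {
    logger.warn('Account was exported from a different network; resetting createdAt')
    return null
  }

  return createdAt
}
```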
The error here is that the error in getAccountTransactionsStream becomes unhandled, because we pause the event loop to run getNetworkInfo. The error handler for getAccountTransactionsStream isn't registered until we await its content stream later on. Unfortunately, because getNetworkInfo prevents that from happening, it is treated as an unhandled exception, which crashes oclif itself.
The Sapling parameters defined in `ironfish::sapling_bls12::SAPLING` are behind a `lazy_static`. As such, the parameters are not loaded until accessed for the first time. The purpose of `initializeSapling` is to access the parameters to ensure that they get loaded early. The current implementation of `initializeSapling` was calling `.clone()` on `ironfish::sapling_bls12::SAPLING`, causing large amounts of memory to be allocated and written, only to be discarded shortly afterwards. The implementation has been changed to just dereference `SAPLING`, without doing anything else. This is enough to trigger `lazy_static` to load the parameters.

Also:
- Removed the `Arc` from `ironfish::sapling_bls12::SAPLING`: `Arc` is used to extend lifetimes in multithreaded environments, but the lifetime of `SAPLING` is static, so there's no point in extending it.
- Removed `ironfish::sapling_bls12::load` because it's not public and redundant.
This commit changes the default value for `nodeWorkersMax` to the number of CPU cores, minus one, capped at 16. Before this change, `nodeWorkersMax` was hardcoded to 6, which resulted in limited performance on systems that have more than 6 cores.

As a consequence of this change, most users with an empty configuration will observe a larger number of workers spawned. Users who run on systems with SMT enabled and a low number of cores may observe a lower number of workers. For example, on a system with 2 cores, SMT enabled (4 logical CPUs in total), and an empty configuration: before this change, the number of workers would have been 4 - 1 = 3; after this change, it is 2 - 1 = 1.

The rationale behind using the number of CPU cores rather than the number of logical CPUs is that the worker pool should mostly be used for CPU-bound and CPU-intensive workloads. For these kinds of workloads, SMT is known to be detrimental to performance (other workloads that are primarily I/O-bound should not use the worker pool and would benefit from async code instead).

This commit also changes the default value for `nodeWorkers` to be based on the number of processing units available to the process, rather than the number of logical CPUs. On most systems, this will not result in any difference. This change will only impact users that limit the Node process through sandboxing techniques like CPU affinity masks or cgroups.

Some examples to understand the impact of this change:

- users with `nodeWorkers` set in their configuration: no change
- users with `nodeWorkers`/`nodeWorkersMax` not set, 2 CPU cores, SMT enabled (4 CPU threads): before = min(4 - 1, 6) = 3; after = min(4 - 1, 2 - 1, 16) = 1
- users with `nodeWorkers`/`nodeWorkersMax` not set, 8 CPU cores, SMT enabled (16 CPU threads): before = min(16 - 1, 6) = 6; after = min(16 - 1, 8 - 1, 16) = 7
- users with `nodeWorkers`/`nodeWorkersMax` not set, 8 CPU cores, SMT disabled (8 CPU threads): before = min(8 - 1, 6) = 6; after = min(8 - 1, 8 - 1, 16) = 7
- users with `nodeWorkers`/`nodeWorkersMax` not set, 32 CPU cores, SMT enabled (64 CPU threads): before = min(64 - 1, 6) = 6; after = min(64 - 1, 32 - 1, 16) = 16
- users with `nodeWorkers`/`nodeWorkersMax` not set, 32 CPU cores, SMT enabled (64 CPU threads), scheduler affinity set to 4 CPUs: before = min(64 - 1, 6) = 6; after = min(4 - 1, 32 - 1, 16) = 3
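The computation can be summarized in a few lines. Node does not expose physical core counts directly, so physicalCores is a parameter in this sketch; os.availableParallelism() (Node 18.14+) respects affinity masks and cgroup limits, unlike os.cpus().length:

```typescript
import os from 'os'

function defaultWorkerCount(physicalCores: number): number {
  const available = os.availableParallelism() // processing units available to the process
  const nodeWorkersMax = Math.min(physicalCores - 1, 16)
  return Math.min(available - 1, nodeWorkersMax)
}

// On an 8-core machine with SMT enabled (availableParallelism() === 16):
// min(16 - 1, min(8 - 1, 16)) === 7
```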
… options

All the use cases for `DecryptNotesTask` involve passing a bunch of notes to decrypt for the same account(s), and with the same options. To save bandwidth, and to allow future optimizations, it makes sense to group all account keys into a single input parameter, so that they don't have to be repeated for each and every note. This changes the request serialization size from `O(accounts * notes)` to `O(accounts + notes)`.
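An illustrative request shape after the change; the field names are assumptions, but the point is that account keys appear once and every note is tried against every account:

```typescript
interface DecryptNotesRequest {
  accountKeys: Array<{
    incomingViewKey: string
    outgoingViewKey: string
    viewKey: string
  }>
  encryptedNotes: Array<{
    serializedNote: Buffer
    currentNoteIndex: number | null
  }>
  options: {
    decryptForSpender: boolean
  }
}
// Payload size grows as O(accounts + notes), not O(accounts * notes).
```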
In the vast majority of cases, a `DecryptNotesResponse` will contain zero or close-to-zero decrypted notes. It makes sense to have the serialization of `DecryptNotesResponse` optimize for those cases. This changes the serialization size of `DecryptNotesResponse` from `O(null_notes + non_null_notes)` to `O(non_null_notes)`. This introduces a small size overhead in the case where there are a lot of decrypted notes. We could, in theory, have `DecryptNotesResponse` choose between sparse and dense serialization based on how many decrypted notes there are, but that case is so rare, and the overhead so small, that it's not worth optimizing for.
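A sketch of the sparse idea: write only the decrypted notes, each tagged with its position in the request, instead of writing a null marker for every input note. Shapes are illustrative:

```typescript
interface DecryptedNoteEntry {
  index: number // position in the original request
  serializedNote: Buffer
}

function serializeSparse(notes: Array<Buffer | null>): DecryptedNoteEntry[] {
  const entries: DecryptedNoteEntry[] = []
  notes.forEach((note, index) => {
    if (note !== null) {
      entries.push({ index, serializedNote: note })
    }
  })
  return entries // O(non_null_notes), regardless of how many inputs were null
}
```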
* removes references to account createdAt hash

  Now that we use networkId in account imports to determine whether the createdAt sequence refers to a block on the same chain, the hash is no longer needed. It is mostly vestigial, like the appendix or the wings of a kiwi bird. These changes remove the remaining references to createdAt.hash.

* uses sequence from previousBlock instead of createdAt
Uses the new DecryptNotesRequest format.
This PR changes it so that if you're running the wallet inside the node, instead of iterating the chain using the RPC system, it uses the chain directly. This is a large performance optimization when scanning the blockchain.

Co-authored-by: Andrea <[email protected]>
Adds a 'createdAt' field to ImportAccountRequest that takes a sequence to use for the account's createdAt field. If a 'createdAt' sequence is passed to importAccount, then the account's createdAt uses that sequence and an empty hash (the hash is no longer used in createdAt). Adds a createdAt flag to the wallet:import command.
Reduces our dependency on web3-validator, which is a large package and is not used in many places. The only dependency used now is keccak, which is used in the checksum validation function.
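For context, a standard keccak-based address checksum check (EIP-55 style) can be written against the keccak package alone; the actual validation function in this change may differ, so treat this as a sketch:

```typescript
import createKeccakHash from 'keccak'

function isChecksumAddress(address: string): boolean {
  const stripped = address.replace(/^0x/, '')
  const hash = createKeccakHash('keccak256')
    .update(stripped.toLowerCase())
    .digest('hex')

  return stripped.split('').every((char, i) => {
    if (!/[a-f]/i.test(char)) return true // digits carry no case information
    const shouldBeUpper = parseInt(hash[i], 16) >= 8
    return shouldBeUpper ? char === char.toUpperCase() : char === char.toLowerCase()
  })
}
```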
* updates error handling to their new API response 400 object
* removes old error type
hughy approved these changes on Jun 24, 2024
Summary
Upgrade to v2.4.0.
Testing Plan
Documentation

Does this change require any updates to the Iron Fish Docs (e.g. the RPC API Reference)? If yes, link a related documentation pull request for the website.
Breaking Change

Is this a breaking change? If yes, add notes below on why this is breaking, and label it with breaking-change-rpc or breaking-change-sdk.