Add InstalledScheduler for blockstore_processor #33875
Conversation
Codecov Report
@@            Coverage Diff            @@
##           master   #33875     +/-   ##
=========================================
  Coverage    81.8%    81.9%
=========================================
  Files         809      809
  Lines      217824   217882     +58
=========================================
+ Hits       178387   178463     +76
+ Misses      39437    39419     -18
ledger/src/blockstore_processor.rs (outdated)

        log_messages_bytes_limit: Option<usize>,
        prioritization_fee_cache: &PrioritizationFeeCache,
    ) -> Result<()> {
        if !bank.has_installed_scheduler() {
nit: I think the ! makes this slightly less straightforward, to me at least. It would read more clearly as if has_scheduler { schedule } else { rebatch_and_execute }. wdyt?
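(For illustration, a minimal self-contained sketch of the suggested shape, dispatching on the positive case first; all names below are stand-ins, not necessarily the ones used in this pr.)

```rust
// Stand-in types and fns; the real ones live in blockstore_processor.rs.
struct Bank {
    scheduler_installed: bool,
}

impl Bank {
    fn has_installed_scheduler(&self) -> bool {
        self.scheduler_installed
    }
}

fn schedule_batches_for_execution(_bank: &Bank) -> Result<(), String> {
    Ok(()) // hand the batches off to the installed scheduler (non-blocking)
}

fn rebatch_and_execute_batches(_bank: &Bank) -> Result<(), String> {
    Ok(()) // the pre-existing blocking execution path
}

// The suggested shape: the positive case reads first, no negation needed.
fn process_batches(bank: &Bank) -> Result<(), String> {
    if bank.has_installed_scheduler() {
        schedule_batches_for_execution(bank)
    } else {
        rebatch_and_execute_batches(bank)
    }
}
```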
done: 9491e33
@@ -294,6 +294,55 @@ fn execute_batches_internal(
    })
}

fn process_batches(
so this fn is a drop-in replacement for the previous execute_batches. In process_entries we still call this drop-in at the same points we called execute_batches, i.e. tick boundaries, entry conflicts, and the end of the loop. Scheduling doesn't block on execution and also cannot fail with an error; how does replay stage become aware of invalid block state without this fn returning it?
with that in mind, i'm curious why the switch was introduced here instead of one layer up. Is there a benefit in collecting the batches until a conflict/tick before scheduling?
oh, thanks for the well-deserved good questions! hope these in-source comments answer both of your questions: 2897280
I think so? I think the main reason is we want to share most of the same code, since there are checks on non-self-conflicting entries. And since scheduling doesn't block on execution, there shouldn't be too much overhead from just collecting multiple entries instead of scheduling them as they come in?
yeah. that's correct.
fyi, I improved the in-source comment a bit: 6e63d87
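(To make the discussion above concrete, a hedged sketch of the accumulate-and-flush control flow in process_entries: batches are collected until a tick boundary or entry conflict and then handed to process_batches in one go. All types and checks below are stand-ins, not the actual code.)

```rust
type Batch = Vec<u64>; // stand-in for a batch of sanitized transactions

enum Entry {
    Tick,
    Transactions(Batch),
}

fn conflicts_with(pending: &[Batch], new: &Batch) -> bool {
    // stand-in for the real account-lock conflict check
    pending.iter().any(|b| b.iter().any(|tx| new.contains(tx)))
}

fn process_batches(batches: &[Batch]) -> Result<(), String> {
    // stand-in: schedules (unified scheduler) or executes (legacy path)
    let _ = batches;
    Ok(())
}

fn process_entries_sketch(entries: Vec<Entry>) -> Result<(), String> {
    let mut pending: Vec<Batch> = Vec::new();
    for entry in entries {
        match entry {
            // tick boundary: flush whatever has accumulated
            Entry::Tick => process_batches(&std::mem::take(&mut pending))?,
            Entry::Transactions(batch) => {
                if conflicts_with(&pending, &batch) {
                    // entry conflict: flush before accepting the new batch
                    process_batches(&std::mem::take(&mut pending))?;
                }
                pending.push(batch);
            }
        }
    }
    // end of loop: flush the remainder
    process_batches(&pending)
}
```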
// This fn diverts the code-path into two variants. Both must provide exactly the same set of
// validations. For this reason, this fn is deliberately inserted into the code path to be called
> Both must provide exactly the same set of validations

I'm aware that apply_cost_tracker_during_replay needs to be handled for the unified scheduler ;)
    );
    // scheduling always succeeds here without being blocked on actual transaction executions.
    // The transaction execution errors will be collected via the blocking fn called
    // BankWithScheduler::wait_for_completed_scheduler(), if any.
fyi, wait_for_completed_scheduler() doesn't exist in the master branch yet; it's written as if it exists to avoid a needless future rewrite. That fn will be added in the very next pr.
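(A hedged sketch of the pattern the comment describes: scheduling is fire-and-forget on a background worker, and execution errors only surface through a blocking wait at the end. The wait_for_completed_scheduler() shape below is an assumption based on the comment; per the note above, the real fn doesn't exist in master yet.)

```rust
use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread::{self, JoinHandle};

type TxResult = Result<(), String>;

// Stand-in scheduler: accepts work without blocking, reports errors on join.
struct SchedulerSketch {
    sender: Sender<u64>, // u64 stands in for a sanitized transaction
    handle: JoinHandle<TxResult>,
}

impl SchedulerSketch {
    fn new() -> Self {
        let (sender, receiver): (Sender<u64>, Receiver<u64>) = channel();
        let handle = thread::spawn(move || -> TxResult {
            for _tx in receiver {
                // execute each transaction here; record the first error, if any
            }
            Ok(())
        });
        Self { sender, handle }
    }

    // Scheduling never blocks and never fails...
    fn schedule_execution(&self, tx: u64) {
        let _ = self.sender.send(tx);
    }

    // ...so errors are only observable via this blocking wait at the end of
    // the block (the role the comment assigns to wait_for_completed_scheduler()).
    fn wait_for_completed_scheduler(self) -> TxResult {
        drop(self.sender); // close the channel so the worker drains and exits
        self.handle.join().unwrap()
    }
}
```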
    // 'a is needed; anonymous_lifetime_in_impl_trait isn't stabilized yet...
    pub fn schedule_transaction_executions<'a>(
        &self,
        transactions: &[SanitizedTransaction],
        transaction_indexes: impl Iterator<Item = &'a usize>,
    ) {
sorry I missed this on my previous pass. Could you explain/justify this interface? There's the comment on 'a which drew my attention.

It feels a bit odd to have transactions be a slice while transaction_indexes is an iterator. AFAICT, the source of both of these is TransactionBatchWithIndexes, which should be able to get us a slice for both of these fields. Which makes me wonder why we need/want transaction_indexes to be an iterator.

I can see iterators are more generic than the slice; if that's the case, why not something like:

    pub fn schedule_transaction_executions(
        &self,
        transactions_with_indexes: impl IntoIterator<Item = (...)>
    ) {
        ...
    }
you're right. I just simplified it and I like this: 5d856f9
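(One possible shape of the simplified interface, for illustration only; the actual signature landed in 5d856f9 may differ. Pairing each transaction with its index in a single iterator item removes the slice/iterator asymmetry and the risk of the two arguments disagreeing in length.)

```rust
// Illustrative only; SanitizedTransaction is stubbed so the sketch stands alone.
pub struct SanitizedTransaction;

pub struct BankWithSchedulerSketch;

impl BankWithSchedulerSketch {
    // One item per transaction: the index travels with its transaction, so
    // the two can never get out of sync.
    pub fn schedule_transaction_executions<'a>(
        &self,
        transactions_with_indexes: impl Iterator<Item = (&'a SanitizedTransaction, &'a usize)>,
    ) {
        for (_transaction, _index) in transactions_with_indexes {
            // hand each (transaction, index) pair to the installed scheduler
        }
    }
}
```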
LGTM - appreciate all the work!
Problem

There's no actual usage of the recently introduced BankWithScheduler (#33704) in the monorepo.

Summary of Changes

Start to use it incrementally, first in blockstore_processor, with a minimal definition of a newly-introduced trait called InstalledScheduler.

The trait is currently really bare-bones; more methods will be added in upcoming prs, especially to properly shut down the installed scheduler and to manage its lifecycle via the installed scheduler pool.
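(As a rough illustration of what "bare-bones" means here, a hypothetical minimal shape for such a trait; the actual trait definition in this pr may differ.)

```rust
use std::fmt::Debug;

// Stand-in for solana_sdk::transaction::SanitizedTransaction.
#[derive(Debug)]
pub struct SanitizedTransaction;

// Hypothetical minimal shape; not necessarily the trait as merged.
pub trait InstalledScheduler: Send + Sync + Debug {
    // Fire-and-forget: hand one transaction (and its index within the block)
    // to the scheduler; errors surface later when the scheduler is waited on.
    fn schedule_execution(&self, transaction: &SanitizedTransaction, index: usize);
}
```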
Also note that the plumbing in this pr isn't complete either: there should be no way to hit the added code path for the unified scheduler in released binaries.
extracted from: #33070