Is your feature request related to a problem? Please describe.
Currently, if the sequencer crashes during batch production, it has to restart the whole process from scratch, since it has no knowledge of intermediate results.
The task queue can't serve as that record on its own, since the sequencer needs to consume task results eagerly in order to create new merging tasks (or other tasks continuing the flow, i.e. runtime -> tx proofs -> block proofs).
Therefore, the sequencer needs a way to recover already created or completed tasks so it can continue block production where it left off.
Describe the solution you'd like
The Flow utilities should accept a module that allows them to persist dispatched tasks and, once completed, their results. Tasks are identified by a hash of the task type plus all inputs, so hash equality implies identical inputs; since the tasks are deterministic, identical inputs always produce the same result => the stored result can be reused.
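A minimal sketch of what such a module could look like in TypeScript. The `TaskPersistence` interface, the `taskKey` helper, and the SHA-256/JSON serialization are assumptions for illustration, not the existing Flow API:

```ts
import { createHash } from "crypto";

// Hypothetical helper: derive a deterministic cache key from the task type
// and its serialized inputs (JSON used here purely for illustration).
function taskKey(taskType: string, inputs: string[]): string {
  return createHash("sha256")
    .update(JSON.stringify([taskType, inputs]))
    .digest("hex");
}

// Hypothetical module the Flow utilities could accept to persist dispatched
// tasks and, once completed, their results.
interface TaskPersistence {
  // Record that a task was dispatched, keyed by taskKey(type, inputs).
  saveDispatched(key: string): Promise<void>;

  // Store the result once the task completes.
  saveResult(key: string, result: string): Promise<void>;

  // On restart: return the stored result if an identical task (same type
  // and inputs => same key) was already completed.
  getResult(key: string): Promise<string | undefined>;
}
```

On restart, the Flow would compute the key for each task it is about to dispatch, consult `getResult` first, and only re-dispatch on a miss.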
Open questions:
Merge pairing happens eagerly and is therefore non-deterministic, which could lead to cache misses.
Example: Tasks A - B - C should be merged. The Flow framework could merge B - C => BC first, while on the restarted flow, A - B => AB could be paired first, leading to a cache miss.
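To make the scenario concrete, here is the hypothetical key derivation from the sketch above applied to both runs; the differing pairings produce different keys, so the persisted BC result is never found:

```ts
import { createHash } from "crypto";

// Condensed version of the hypothetical taskKey helper above.
const key = (type: string, inputs: string[]) =>
  createHash("sha256").update(JSON.stringify([type, inputs])).digest("hex");

// First run eagerly pairs B and C...
const firstRun = key("merge", ["proofB", "proofC"]);
// ...while the restarted flow happens to pair A and B first.
const restartedRun = key("merge", ["proofA", "proofB"]);

console.log(firstRun === restartedRun); // false => cache miss despite identical overall work
```

One conceivable direction (an open design choice, not part of this proposal) would be a canonical pairing order, e.g. always pairing by position in the batch, so that both runs derive the same keys.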