fix(audit): Periodically Remove oldTask data from aggregator #1004
Conversation
What happens if an answer comes after the batch is closed? It should probably be tested with more operators.
I ran a local testnet with 3 operators. The aggregator did not experience any errors and successfully removed the added task information after responding.

Changed this PR to draft since we should prioritize other issues first.
The aggregator, when receiving a response, checks that the batch indeed exists here. Anyway, I tested it on my machine:

```
[2024-09-27 11:54:10.904 -03] INFO (pkg/server.go:63) - Locked Resources: Starting processing of Response
[2024-09-27 11:54:10.904 -03] INFO (pkg/server.go:67) - Unlocked Resources: Task not found in the internal map
```
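The check behind those log lines can be sketched roughly as follows. This is a minimal illustration, not the actual aggregator code: the `Aggregator` struct, the `batches` map, and `ProcessResponse` are hypothetical simplifications of the real internal maps guarded by a mutex.

```go
package main

import (
	"fmt"
	"sync"
)

// Aggregator is a hypothetical simplification of the aggregator's state:
// a mutex-guarded map from a batch merkle root to its batch data.
type Aggregator struct {
	mu      sync.Mutex
	batches map[string]string // merkle root -> batch data (simplified)
}

// ProcessResponse locks the internal maps, checks that the batch still
// exists, and rejects the response if it was already removed.
func (a *Aggregator) ProcessResponse(merkleRoot string) error {
	a.mu.Lock() // "Locked Resources: Starting processing of Response"
	defer a.mu.Unlock()
	if _, ok := a.batches[merkleRoot]; !ok {
		// "Unlocked Resources: Task not found in the internal map"
		return fmt.Errorf("task not found in the internal map")
	}
	// ... aggregate the operator's signature for this batch here ...
	return nil
}

func main() {
	agg := &Aggregator{batches: map[string]string{"0xabc": "batch-1"}}
	fmt.Println(agg.ProcessResponse("0xabc")) // known batch: no error
	fmt.Println(agg.ProcessResponse("0xdef")) // already-removed batch: error
}
```

So a late response for a cleaned-up batch fails the lookup and is dropped instead of crashing the aggregator, matching the log output above.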
Everything worked as expected on my machine!
Imo the deletion of the task in the maps should be delayed, so it will accept responses from operators after the quorum is reached. It can be done with a goroutine like:

```go
go func() {
	time.Sleep(10 * time.Second)
	// Deletion logic
}()
```

The deletion is done once the task was verified in Ethereum (so there is already a delay between the quorum being reached and the transaction being accepted in Ethereum). So I'm not sure if we should add an extra delay to the deletion.
That sounds to me like what we really want is a kind of queue (jargon for "an array") and a periodic task that checks if we're done with stuff. EDIT: actually, I think it's simpler than that. We already have indices for each batch. Every once in a while (the appropriate time is probably around the time Ethereum takes to confirm our transactions) we check the oldest (lowest) index, find its Merkle root, and use that and the index to clean the dictionaries. This could be a long-lived goroutine started by main.
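The long-lived cleanup goroutine described above could be sketched as below. All names here (`Aggregator`, `rootByIdx`, `batchDataByRoot`, `CleanFinalizedBatches`, the 10-block age and 120-second interval) are hypothetical stand-ins for the real internals, chosen to match the numbers mentioned in the PR description:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Aggregator is a hypothetical simplification of the aggregator's maps:
// batch index -> merkle root, and merkle root -> batch data.
type Aggregator struct {
	mu              sync.Mutex
	nextBatchIdx    uint32
	rootByIdx       map[uint32]string
	batchDataByRoot map[string]string
}

// CleanFinalizedBatches removes every batch older than maxAge indices,
// using each stale index to find its merkle root and clean both maps.
func (a *Aggregator) CleanFinalizedBatches(maxAge uint32) {
	a.mu.Lock()
	defer a.mu.Unlock()
	for idx, root := range a.rootByIdx { // deleting during range is safe in Go
		if a.nextBatchIdx-idx > maxAge {
			delete(a.batchDataByRoot, root)
			delete(a.rootByIdx, idx)
		}
	}
}

// StartCleaner launches the periodic cleanup task; in the real service
// this would be started from main.
func (a *Aggregator) StartCleaner(interval time.Duration, maxAge uint32) {
	go func() {
		for range time.Tick(interval) {
			fmt.Println("Cleaning finalized tasks from maps")
			a.CleanFinalizedBatches(maxAge)
		}
	}()
}

func main() {
	agg := &Aggregator{
		nextBatchIdx:    12,
		rootByIdx:       map[uint32]string{1: "0xold", 11: "0xnew"},
		batchDataByRoot: map[string]string{"0xold": "a", "0xnew": "b"},
	}
	// Batch 1 is 11 indices old (> 10), batch 11 is 1 index old.
	agg.CleanFinalizedBatches(10)
	fmt.Println(len(agg.batchDataByRoot)) // prints 1
}
```

Because the interval is longer than Ethereum's confirmation time, by the time a batch is swept it has already been verified on-chain, so no extra per-batch delay is needed.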
biased self-approve
…n responded or expires. (#1004) Co-authored-by: Uriel Mihura <[email protected]> Co-authored-by: Mario Rugiero <[email protected]>
This PR:
Removes task data from the aggregator's response maps after a task has been responded to or has expired.
closes #977
# To Test:
Set up a local devnet as per the guide.
(Optional) Leave a task sender running until the end of the test.
Everything should work as normal.
Also, every 120 seconds, you should see the log `Cleaning finalized tasks from maps`, with some extra information about which tasks are being removed from the Aggregator. These are the tasks being removed, all older than 10 blocks. In prod these variables are set to bigger numbers, but the flow is the same.
Note: in dev, the first iteration may remove only a few tasks because of these constraints, but the second iteration should include more tasks.