Fix: map get data requests by status #189
Conversation
```rust
let total_items = self.len.load(store)?;

// Shouldn't be reachable
if total_items == 0 {
    unreachable!("No items in the map, so key should not exist");
}

let status_len_item = self.get_status_len_item(&req_status);
let req_status_index = status_len_item.load(store)? - 1;
```
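The pattern in the quoted diff can be sketched in plain Rust. This is only an illustration of the logic, not the contract's actual code: the real implementation stores the lengths in `cw-storage-plus` items rather than an in-memory map, and the `Status`/`StatusLens` names here are hypothetical.

```rust
use std::collections::HashMap;

// Illustrative stand-in for the per-status length items kept in storage.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Status {
    Committing,
    Revealing,
}

struct StatusLens {
    lens: HashMap<Status, u32>,
}

impl StatusLens {
    /// The last occupied index for a status is its length minus one.
    fn last_index(&self, status: Status) -> u32 {
        let len = *self.lens.get(&status).unwrap_or(&0);
        // Shouldn't be reachable: callers only look up existing keys.
        if len == 0 {
            unreachable!("No items in the map, so key should not exist");
        }
        len - 1
    }
}

fn main() {
    let mut lens = HashMap::new();
    lens.insert(Status::Committing, 3u32);
    let s = StatusLens { lens };
    // Three items occupy indexes 0, 1, 2, so the last index is 2.
    assert_eq!(s.last_index(Status::Committing), 2);
    println!("ok");
}
```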
Oops, I wrote this comment over the weekend but never posted it. I realized this line was wrong.
No longer relevant, but I thought I had posted it 😅
My only comment would be to move the EnumerableSet to its own file to further encapsulate the logic (it's a data type, and the data request code is just one consumer of it).
Motivation
We need this to work properly, or sadness will happen.
Explanation of Changes
New
I pulled the Status tracking out into an EnumerableSet type, based on OpenZeppelin's https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/utils/structs/EnumerableSet.sol, and used it in the DataRequestStatusMap. I also removed index tracking of DataRequests, as it was legacy and unused.
This simplified a lot of code.
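The EnumerableSet idea can be sketched in plain Rust: a dense `Vec` plus an index map gives O(1) add, remove, and position lookup, with `swap_remove` keeping the vector dense. This is a minimal sketch under assumed names, not the PR's actual API; the contract keeps these structures in storage maps rather than in memory.

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// Minimal in-memory sketch of an EnumerableSet, modeled on
/// OpenZeppelin's EnumerableSet.sol. Names are illustrative.
struct EnumerableSet<T: Eq + Hash + Clone> {
    items: Vec<T>,            // dense, 0-based storage
    index: HashMap<T, usize>, // item -> position in `items`
}

impl<T: Eq + Hash + Clone> EnumerableSet<T> {
    fn new() -> Self {
        Self { items: Vec::new(), index: HashMap::new() }
    }

    /// Returns false if the item was already present.
    fn add(&mut self, item: T) -> bool {
        if self.index.contains_key(&item) {
            return false;
        }
        self.index.insert(item.clone(), self.items.len());
        self.items.push(item);
        true
    }

    /// swap_remove keeps the vector dense: the last item takes the
    /// removed item's slot, and its index entry is updated.
    fn remove(&mut self, item: &T) -> bool {
        let Some(pos) = self.index.remove(item) else { return false };
        let last = self.items.len() - 1;
        self.items.swap_remove(pos);
        if pos != last {
            self.index.insert(self.items[pos].clone(), pos);
        }
        true
    }

    fn at(&self, i: usize) -> Option<&T> {
        self.items.get(i)
    }

    fn len(&self) -> usize {
        self.items.len()
    }
}

fn main() {
    let mut set = EnumerableSet::new();
    set.add("dr_a");
    set.add("dr_b");
    set.add("dr_c");
    set.remove(&"dr_a"); // "dr_c" is swapped into index 0
    assert_eq!(set.at(0), Some(&"dr_c"));
    assert_eq!(set.len(), 2);
    println!("ok");
}
```

The swap-into-the-hole behavior is exactly what made the old global-index scheme break, and why per-status indexes are needed once `swap_remove` is used.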
Old
I made the update easier by leveraging swap_remove. I added an insert method that takes a status and is for internal use only. I also added checks in the tests to ensure that each status was of the correct length.
Explanation of problem and fix:
The problem was that, because it was using the global index, the indexes per status were not 0-based. For example, in the new test (before I changed it), the commits started at index 15 instead of 0, at least once the update was changed to use swap_remove.
Which makes sense: index 0 was removed and moved from committing to revealing because it met the replication factor, so it got shuffled to the last index. Since the status map also used this global index, that is where the problem came from.
A more straightforward fix would have been to drop the bounds and use the iterator .skip and .take methods. However, that would mean loading every dr in that status each time. So, to resolve this, I added an index per status. This way each status is always 0-indexed as well, with proper removes.
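The per-status index fix can be illustrated with a small sketch: each status keeps its own length counter, so index assignment within a status starts at 0 no matter how many requests sit in other statuses. All names here (`Status`, `StatusMap`, `add`) are hypothetical stand-ins for the storage-backed versions.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Status {
    Committing,
    Revealing,
    Tallying,
}

struct StatusMap {
    // (status, index) -> dr id; mirrors a per-status map in storage.
    by_index: HashMap<(Status, u32), String>,
    // status -> number of entries, i.e. the next free 0-based index.
    len: HashMap<Status, u32>,
}

impl StatusMap {
    fn new() -> Self {
        Self { by_index: HashMap::new(), len: HashMap::new() }
    }

    /// Appends `id` under `status`, returning the index it was given.
    fn add(&mut self, status: Status, id: &str) -> u32 {
        let idx = *self.len.get(&status).unwrap_or(&0);
        self.by_index.insert((status, idx), id.to_string());
        self.len.insert(status, idx + 1);
        idx
    }
}

fn main() {
    let mut map = StatusMap::new();
    // Fifteen requests already committing...
    for i in 0..15 {
        map.add(Status::Committing, &format!("dr_{i}"));
    }
    // ...yet the first revealing request still gets index 0,
    // because each status tracks its own length.
    let idx = map.add(Status::Revealing, "dr_reveal");
    assert_eq!(idx, 0);
    println!("ok");
}
```

With a single global index, that first revealing entry would have landed at 15, which is exactly the off-by-status behavior the test exposed.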
Testing
All tests still pass.
Related PRs and Issues
Closes #188.