FLUID-5542: Queued DataSource #566
base: main
Conversation
Three requestQueues have been implemented: 1) a basic queue that executes all requests one at a time, in the order they were received; 2) a debounce queue that only keeps the latest request; 3) a throttled queue that only accepts new requests after a specified delay has elapsed. A wrapper dataSource still needs to be implemented to make use of one or more of these queues with a supplied dataSource.
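As a rough illustration of the first two policies described above, here is a minimal plain-JavaScript sketch (the factory names and the `request(done)` callback shape are assumptions for illustration, not the actual Infusion grades; the throttle policy is omitted since it was later removed from the pull request):

```javascript
// FIFO queue: each request is a function taking a `done` callback.
// Requests run one at a time, in arrival order.
function makeFifoQueue() {
    var requests = [];
    var active = false;
    function next() {
        if (active || requests.length === 0) { return; }
        active = true;
        var request = requests.shift();
        request(function done() {
            active = false;
            next();
        });
    }
    return {
        queue: function (request) {
            requests.push(request);
            next();
        }
    };
}

// Debounce queue: while a request is in flight, only the most recently
// queued request is kept; older waiting requests are dropped.
function makeDebounceQueue() {
    var pending = null;
    var active = false;
    function next() {
        if (active || !pending) { return; }
        active = true;
        var request = pending;
        pending = null;
        request(function done() {
            active = false;
            next();
        });
    }
    return {
        queue: function (request) {
            pending = request;
            next();
        }
    };
}
```

With the FIFO queue, a second request only starts after the first calls `done`; with the debounce queue, intermediate requests queued while one is in flight are discarded.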
@amb26 could you please review this when you have a chance?
 * A basic request queue that will execute each request, one-by-one
 * in the order that they are received.
 */
fluid.defaults("fluid.requestQueue", {
Since this contains a definition of the "add" operation, it should be given a specialised name such as "fluid.requestQueue.fifo" and/or split into two grades, with the "fluid.requestQueue" reserved for a queue without a policy
Renamed "queue" member to "requests" and "add" invoker to "queue"
Corrected the signatures of the DataSource to take in a model for the set operation. Also updated the comments to better reflect the architecture of a DataSource. (See: http://wiki.fluidproject.org/display/fluid/Notes+on+Kettle)
Separated the FIFO functionality into a new grade and left the base grade without a queue method implemented.
Removed the throttle request queue, as there isn't currently a use case for supporting it. If one does arise, we can implement it as needed.
This loop was only needed for the throttling tests, to prevent the requests from all firing at the same time. Since the throttle queue has been removed, a simple for loop can be used instead.
Spoke with Antranig (@amb26) in the channel today about this in more detail. The architecture will need to be restructured so that queues are keyed by the type of operation, read (get) or write (set/delete), as well as by the directModel. We'll need to create a hash of the directModel to key the queues off of; this should be done by serializing the directModel after sorting its keys alphabetically (to keep the hash consistent). Additionally, we'll need to add a comment explaining that this is only a partial implementation, and that a full implementation would include a consistent cache where, for example, read operations would fetch data locally and make use of pending write requests rather than going out to persistent storage.
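The key-derivation step described above could be sketched as follows (the function name and the exact key format are illustrative assumptions, not the actual implementation):

```javascript
// Derive a stable queue key from the operation type and the directModel:
// sorting the directModel's keys alphabetically ensures that equivalent
// directModels always map to the same queue.
function requestKey(type, directModel) {
    var keys = Object.keys(directModel || {}).sort();
    var parts = keys.map(function (key) {
        return key + ":" + JSON.stringify(directModel[key]);
    });
    return type + "|" + parts.join("|");
}
```

Two requests with the same operation type and the same directModel contents land on the same queue, regardless of property order, while read and write operations on the same resource get separate queues.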
Further conversations in today's channel established that each queue per hash is a separate component. This way, requests for different types and resources can all operate concurrently, each queue with its own isActive state.
Queues are separated by read/write and resource (as specified by the directModel).
@amb26 I've made some substantial changes based on our channel discussions. The pull request is ready for another round of review.
 * TODO: A fully realized implementation should provide a mechanism for working with a local
 * cache. For example, pending write requests could be used to service get requests directly.
 */
fluid.defaults("fluid.queuedDataSource", {
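The caching behaviour the TODO describes, where a pending write services a read directly, might look roughly like this (a minimal sketch with assumed names; `backend` stands in for the underlying persistent DataSource, and the explicit `flush` models the deferred write):

```javascript
// Sketch of a write-through cache: set() records a pending write, get()
// serves reads from pending writes before falling back to the backend,
// and flush() pushes the pending writes out to persistent storage.
function makeWriteThroughCache(backend) {
    var pendingWrites = {};  // key -> model awaiting write
    return {
        set: function (key, model) {
            pendingWrites[key] = model;  // queued, not yet flushed
        },
        get: function (key) {
            return Object.prototype.hasOwnProperty.call(pendingWrites, key)
                ? pendingWrites[key]       // served locally
                : backend.get(key);        // fall back to storage
        },
        flush: function () {
            Object.keys(pendingWrites).forEach(function (key) {
                backend.set(key, pendingWrites[key]);
            });
            pendingWrites = {};
        }
    };
}
```

A read issued between `set` and `flush` sees the pending model without a round trip to storage, which is the consistency property the TODO is pointing at.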
The naming is still confusing here. The comment above suggests that this is some kind of generic component that permits different queue policies - but in fact this grade is already bound to an implementation that operates a particular hardcoded policy that enforces a queue size of 1 (via "fluid.queuedDataSource.enqueueImpl"). This top-level grade should be better factored into a generic (possibly abstract) grade that doesn't commit the user to anything, with functions like enqueueImpl more clearly named (bound in a different grade/namespace) to emphasize that they are part of a particular concrete realisation of the queuedDataSource.
See above comment
Created a new grade called debouncedDataSource that attaches the enqueue methods.
Increased the test namespace to prevent future collisions.
@amb26 this is ready for another round of review.
Thanks, @jobara - the tests are looking great now. There are a few review comments from the original round of review still to be addressed.
Split off the debounce queue methods from the queuedDataSource base grade and attached these in a debouncedDataSource grade.
Only an addToQueue method is needed for integration. The get, set, and delete methods use this to update the queue.
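The integration shape described here, with get/set/delete all funnelling through a single addToQueue method, could be sketched as below (a plain-JS approximation with assumed names; the real component wires this up via Infusion's IoC, and `wrapped` stands in for the underlying DataSource):

```javascript
// Queued DataSource wrapper: the only integration point is addToQueue,
// which get, set, and delete all use to enqueue their operations.
// Reads and writes go onto separate queues, per the channel discussion.
function makeQueuedDataSource(wrapped, makeQueue) {
    var queues = {};  // one queue per request type ("read" / "write")
    function addToQueue(type, op) {
        if (!queues[type]) { queues[type] = makeQueue(); }
        queues[type].queue(op);
    }
    return {
        get: function (directModel, callback) {
            addToQueue("read", function (done) {
                callback(wrapped.get(directModel));
                done();
            });
        },
        set: function (directModel, model, callback) {
            addToQueue("write", function (done) {
                wrapped.set(directModel, model);
                if (callback) { callback(model); }
                done();
            });
        },
        "delete": function (directModel, callback) {
            addToQueue("write", function (done) {
                wrapped["delete"](directModel);
                if (callback) { callback(); }
                done();
            });
        }
    };
}
```

Swapping in a FIFO or debounce queue for `makeQueue` changes the policy without touching the get/set/delete wiring, which is the point of routing everything through addToQueue.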
@amb26 I think this is ready for another review. Please let me know if there is anything else I missed.
requestsQueue[key] = queue;
}

// queue.requests[0] = request;
Stray commented line here
@amb26 I think I've gotten to all those old comments now. This should be ready for another round of review.
Mothballed till the projects that require it become active again.
Added a Queued DataSource that can be combined with different requestQueues to synchronize, throttle, and debounce requests to a DataSource.
http://issues.fluidproject.org/browse/FLUID-5542
This is an extension of the work started in fluid-project/metadata#43