Support a single change stream to listen to all op log events #2540
base: development/8.6
Conversation
Codecov Report: all modified and coverable lines are covered by tests ✅ (plus 8 files with indirect coverage changes).

```
@@            Coverage Diff             @@
##           development/8.6    #2540      +/-   ##
===================================================
+ Coverage        69.37%     69.62%    +0.24%
===================================================
  Files              194        198        +4
  Lines            12791      12916      +125
===================================================
+ Hits              8874       8993      +119
- Misses            3907       3913        +6
  Partials            10         10
===================================================
```

Flags with carried forward coverage won't be shown.
I wonder if this is the right approach... This change tries to fit the single-change-stream approach into the existing code flow (computing which connector should handle each bucket, etc.), while with this approach there is mostly nothing to do: it really comes down to setting up one connector on system startup and calling it a day...
(We would still need to keep the reconcile loop, to ensure we restart failed connectors... and we may need to consider what to do when changing the overall mode.)
I don't know whether or how we should change it, but I think we really need to dig a bit deeper here.
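To make the suggestion concrete, here is a minimal sketch of that startup flow. The `ConnectorsManager` and connector methods used here (`addConnector`, `isRunning`, `restart`) are assumptions and may not match the actual codebase:

```js
// Sketch only: set up one wildcard connector at startup, and keep the
// reconcile loop purely to restart it if it fails.
class SingleConnectorBootstrap {
    constructor(connectorsManager, constants, logger) {
        this._manager = connectorsManager;
        this._constants = constants;
        this._logger = logger;
    }

    async start() {
        // One connector whose pipeline matches all buckets: no per-bucket
        // allocation logic is needed after this point.
        this._connector = await this._manager.addConnector({
            pipeline: [this._constants.wildCardForAllBuckets],
        });
        this._timer = setInterval(() => this._reconcile(), 60 * 1000);
    }

    async _reconcile() {
        if (!this._connector.isRunning()) {
            this._logger.warn('wildcard connector is down, restarting it');
            await this._connector.restart();
        }
    }
}
```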
@francoisferrand shouldn't we also handle "old" connectors on startup? We could detect and close them in order to recreate a single connector, or leave them untouched and fold any new connector into the single one. If we only create one connector, I'm not sure this will work for existing deployments. Edit: as discussed, the current code should cover our test needs. If a customer needs this approach, we will handle it manually, so there is no need to handle old connectors here.
- Each factory manages a specific type of pipeline.
- We use the pipeline in the allocation strategy.
- The Connector now uses the factory from the allocation strategy to create the right pipeline.
- The ConnectorsManager now relies on the allocation strategy to decide whether an old connector should be kept, based on the old connector's config and the current allocation strategy.

Issue: BB-602
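For illustration, a rough sketch of how these pieces could fit together; the class and method names (`WildcardPipelineFactory`, `isConnectorCompatible`, `shouldKeepConnector`) are assumptions, not the actual code:

```js
const constants = { wildCardForAllBuckets: '*' }; // placeholder value

// Each factory manages a specific type of pipeline.
class WildcardPipelineFactory {
    getPipeline() {
        return [constants.wildCardForAllBuckets];
    }

    // Whether an old connector's config matches this pipeline type.
    isConnectorCompatible(oldConfig) {
        return !!oldConfig.pipeline?.includes(constants.wildCardForAllBuckets);
    }
}

// The strategy carries the factory: the Connector asks it for the right
// pipeline, and the ConnectorsManager asks it whether to keep an old one.
class AllocationStrategy {
    constructor(pipelineFactory) {
        this._pipelineFactory = pipelineFactory;
    }

    getPipeline() {
        return this._pipelineFactory.getPipeline();
    }

    shouldKeepConnector(oldConfig) {
        return this._pipelineFactory.isConnectorCompatible(oldConfig);
    }
}
```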
- This ensures we never listen to a bucket twice.
- Handling of "transitions", i.e., ensuring no event is missed or seen twice, is out of scope and will be covered by a procedure for now.

Issue: BB-602
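To illustrate the "never listen twice" property: with per-bucket connectors, correctness depends on the bucket lists never overlapping, while with a single wildcard connector it holds by construction. The filter shapes below assume a MongoDB change-stream style pipeline with the `ns.coll` event field, and may not match the actual connector configs:

```js
// Per-bucket strategy: each connector filters an explicit bucket list,
// so two connectors must never share a bucket.
const perBucketPipeline = buckets => [
    { $match: { 'ns.coll': { $in: buckets } } },
];

// Wildcard strategy: a single connector matches every event, so each
// bucket has exactly one listener by construction.
const wildcardPipeline = [
    { $match: {} },
];
```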
/wait
```js
if (!connectors.length) {
    return null;
}
return connectors.find(conn => conn.config.pipeline?.includes(constants.wildCardForAllBuckets)) || null;
```
Nit: should we fail or crash when we don't find the connector? Looking at the code, this should never happen. Also, if we do fall into that case and return null, a new "wildcard" connector will be created, but we're not certain no other connectors are lying around, which would be a problem. Maybe we should check for that?
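One possible shape for the stricter check suggested here (a sketch; the error handling convention is an assumption):

```js
function getWildcardConnector(connectors, constants) {
    if (!connectors.length) {
        return null; // nothing running yet: the caller may safely create one
    }
    const wildcard = connectors.find(
        conn => conn.config.pipeline?.includes(constants.wildCardForAllBuckets));
    if (!wildcard) {
        // Connectors exist but none is the wildcard one: creating a new
        // wildcard connector now would listen to some buckets twice,
        // so fail loudly instead of returning null.
        throw new Error('existing connectors found, but none is the wildcard connector');
    }
    return wildcard;
}
```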
```js
getOldConnectorBucketList(oldConfig) {
    const bucketList = this.pipelineFactory.extractBucketsFromConfig(oldConfig);
    if (this.pipelineFactory.isValid(bucketList)) {
        return bucketList;
    }
    return null;
}
```
this function has nothing to do with the allocation strategy: the validation could be moved into the pipeline factory
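A sketch of that refactor, with the validation folded into the factory; `getValidatedBucketList` and the config shape are illustrative, not the real API:

```js
class PipelineFactory {
    extractBucketsFromConfig(config) {
        // Assumed config shape, simplified for the sketch.
        return config.buckets || null;
    }

    isValid(bucketList) {
        return Array.isArray(bucketList) && bucketList.length > 0;
    }

    // The validation now lives next to the extraction it depends on, so
    // the allocation strategy no longer needs its own helper for this.
    getValidatedBucketList(oldConfig) {
        const bucketList = this.extractBucketsFromConfig(oldConfig);
        return this.isValid(bucketList) ? bucketList : null;
    }
}
```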
```js
 * @param {Logger} params.logger logger object
 */
constructor(params) {
    this._pipelineFactory = params.pipelineFactory;
```
is the pipeline factory really a part of the strategy?
it seems it is only used as a vessel, to avoid passing an extra parameter to ConnectorsManager: but it may be cleaner to just keep things separate (lower coupling) and add the parameter where it belongs.
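The lower-coupling alternative could look like this (parameter names are assumptions):

```js
class ConnectorsManager {
    constructor(params) {
        this._allocationStrategy = params.allocationStrategy;
        // Passed directly instead of being fetched from the strategy,
        // so the strategy and the factory stay decoupled.
        this._pipelineFactory = params.pipelineFactory;
        this._logger = params.logger;
    }
}
```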