mcs: reorganize cluster start and stop process #7155
Conversation
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
Codecov Report
@@ Coverage Diff @@
## master #7155 +/- ##
==========================================
+ Coverage 74.58% 74.61% +0.02%
==========================================
Files 441 441
Lines 47292 47388 +96
==========================================
+ Hits 35275 35358 +83
+ Misses 8940 8934 -6
- Partials 3077 3096 +19
Flags with carried forward coverage won't be shown.
        continue
    }
}

log.Info("schedulers updating notifier is triggered, try to update the scheduler")
If the server is stopped here, is there a data race?
I think it is the same as the current PD.
In other words, is it possible to hit a data race when a scheduler is added and the coordinator waits at the same time?
I think so, but the possibility is much smaller than before.
Another way: we can check the cluster status before adding a scheduler every time.
But there is still a gap between checking the status and adding the scheduler: if the server stops after checking the cluster status and before adding the scheduler, a data race is still possible.
The problem is that the way we use the wait group for the scheduler controller is improper, not the wait group itself.
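To make the point concrete, here is a minimal sketch of the pattern the thread converges on, checking the running status and registering with the wait group under the same lock. The names (schedulerController, AddScheduler, Stop) are illustrative for this sketch and are not the actual PD code.

package main

import (
	"errors"
	"fmt"
	"sync"
)

// schedulerController is an illustrative stand-in for the real controller.
type schedulerController struct {
	mu      sync.RWMutex
	running bool
	wg      sync.WaitGroup
}

// AddScheduler checks the running flag and registers with the wait group
// while holding the read lock, so Stop cannot slip in between the check
// and the wg.Add call.
func (c *schedulerController) AddScheduler(run func()) error {
	c.mu.RLock()
	defer c.mu.RUnlock()
	if !c.running {
		return errors.New("cluster is not running")
	}
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		run()
	}()
	return nil
}

// Stop flips the flag under the write lock first, then waits for every
// scheduler goroutine to finish.
func (c *schedulerController) Stop() {
	c.mu.Lock()
	c.running = false
	c.mu.Unlock()
	c.wg.Wait()
}

func main() {
	c := &schedulerController{running: true}
	_ = c.AddScheduler(func() { fmt.Println("scheduler running") })
	c.Stop()
}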
Signed-off-by: Ryan Leung <[email protected]>
Signed-off-by: Ryan Leung <[email protected]>
@JmPotato PTAL
pkg/mcs/scheduling/server/cluster.go
Outdated
    return
case <-ticker.C:
    // retry
    notifier <- struct{}{}
Is it possible that we have a deadlock here? Since the channel's capacity is only 1, if the scheduler config watcher has just sent to it, this send could block.
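To illustrate the concern with a contrived example (not the PR's code): a second unconditional send to a capacity-1 channel blocks until someone receives, so the retry path can hang if the watcher has just filled the buffer. A non-blocking send sidesteps this:

package main

import "fmt"

func main() {
	notifier := make(chan struct{}, 1)

	// The config watcher has just filled the single buffer slot.
	notifier <- struct{}{}

	// An unconditional `notifier <- struct{}{}` here would block until a
	// receiver drains the channel. A non-blocking send avoids the hang:
	select {
	case notifier <- struct{}{}:
		fmt.Println("retry notification queued")
	default:
		fmt.Println("a notification is already pending, skip the retry")
	}
}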
Signed-off-by: Ryan Leung <[email protected]>
The rest LGTM.
pkg/mcs/scheduling/server/cluster.go
Outdated
select {
case notifier <- struct{}{}:
    // If the channel is not empty, it means the check is triggered.
default:
}
What about wrapping this in a trySend function to reuse the code?
sounds good
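A sketch of what such a helper might look like (trySend is the name suggested above; whether the merged code uses exactly this shape is an assumption):

package main

// trySend performs a non-blocking send on a buffered notifier channel.
// If a notification is already pending, the new one is dropped, since one
// pending signal is enough to trigger the next scheduler update.
func trySend(notifier chan<- struct{}) {
	select {
	case notifier <- struct{}{}:
	default:
	}
}

func main() {
	notifier := make(chan struct{}, 1)
	trySend(notifier) // queues a notification
	trySend(notifier) // dropped: one is already pending
}

Both the watcher path and the ticker retry path could then call trySend(notifier) instead of repeating the select block.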
Signed-off-by: Ryan Leung <[email protected]>
/merge
@rleungx: It seems you want to merge this PR, I will help you trigger all the tests: /run-all-tests. You only need to trigger /merge once.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
This pull request has been accepted and is ready to merge. Commit hash: 891a322
close tikv#7106, close tikv#7140

Signed-off-by: Ryan Leung <[email protected]>
Co-authored-by: ti-chi-bot[bot] <108142056+ti-chi-bot[bot]@users.noreply.github.com>
What problem does this PR solve?
Issue Number: Close #7140, close #7106
What is changed and how does it work?
This PR reorganizes the cluster start/stop process and fixes the race.
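As a rough illustration of the reorganized lifecycle (the real change lives in pkg/mcs/scheduling/server/cluster.go; the names below such as Cluster, Start, Stop, and updateScheduler are assumptions for this sketch, not the actual implementation): the background loops run under one cancellable context and one wait group, so shutdown happens in a single ordered place.

package main

import (
	"context"
	"sync"
	"time"
)

// Cluster is an illustrative stand-in for the scheduling cluster.
type Cluster struct {
	ctx    context.Context
	cancel context.CancelFunc
	wg     sync.WaitGroup
}

// Start launches the background loops under one cancellable context so that
// Stop can shut them all down from a single place.
func (c *Cluster) Start() {
	c.ctx, c.cancel = context.WithCancel(context.Background())
	c.wg.Add(1)
	go c.updateScheduler()
}

// updateScheduler waits for a retry tick (or a notification in the real code)
// and exits as soon as the cluster context is cancelled, so no work races
// with the teardown.
func (c *Cluster) updateScheduler() {
	defer c.wg.Done()
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-c.ctx.Done():
			return
		case <-ticker.C:
			// check and update schedulers here
		}
	}
}

// Stop cancels the context first, then waits for every background goroutine.
func (c *Cluster) Stop() {
	c.cancel()
	c.wg.Wait()
}

func main() {
	c := &Cluster{}
	c.Start()
	c.Stop()
}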
Check List
Tests
Release note