Request to create an official fork of go-yaml #72
I'm cautiously in favor, for a couple of reasons:
I would want to structure it similarly to https://sigs.k8s.io/json, where we expose as little of the API surface as possible while still providing the functions callers need. |
Thanks for filing this, Natasha. I also support this request. Maintaining a fork is not the ideal outcome, but it is better than being unable to make critical fixes to such a key dependency. And if we do need a fork, here makes more sense than inside Kustomize.
Just so we're all aware, this may require exposing quite a bit more of the existing surface than you might expect. kyaml extensively uses, and even exposes, the intermediate yaml.Node struct from yaml.v3, for one thing. |
In other discussions about creating larger internal k8s forks (https://github.com/gogo/protobuf ?), it was noted that the better investment would be to convince the existing (mostly unavailable) maintainers to grant k8s maintainers maintainer rights. That seems like the better option on paper; has it been considered and communicated outside of GitHub, e.g. via email with the existing maintainers? It is also worth noting that this repository has not been actively maintained, which can be confirmed by the number of tickets rotted and closed by bots: |
Who would we want to add as maintainers to the go-yaml library? To me it seems like we can either:
The maintenance work that would fall on k8s maintainers would be the same in either case, so I'm not convinced that the question of bandwidth is relevant in choosing one of these options over the other. Additionally, there are some companies that have strict rules about which repos they can contribute to. Some k8s members can only contribute to kubernetes-sponsored repos. In this case, the fact that the go-yaml code lives elsewhere would be an unnecessary barrier that I would like to remove unless there is some other obvious benefit to not forking. |
Who is volunteering to maintain this new repo? |
A subpackage of this repo actually seems like the most natural location if a fork were to be brought in-org. |
I have some contributions/improvements that could be of value at some point for the fork (see https://github.com/amurant/go-yaml/tree/v3_next). The main idea in that branch is to add a YAML test suite and to make the implementation more conformant, based on go-yaml/yaml#798. BTW: I'm pro fork; once the original maintainer responds, the changes can be upstreamed. I think that, also for tools like Helm, it might be useful to use the fork directly instead of the gopkg.in/yaml.v2 package. |
From the SIG API Machinery meeting on 3/9/22, we have reached consensus that this fork would be accepted provided that we add a disclaimer to the README indicating that we will only be accepting a small set of bug fixes, and will not be responsible for any major features. Here is a draft of the disclaimer:

Disclaimer: This package is a fork of the go-yaml library and is intended solely for consumption by kubernetes projects. In this fork, we plan to support only critical changes required for kubernetes, such as small bug fixes and regressions. Larger, general-purpose feature requests should be made in the upstream go-yaml library, and we will reject such changes in this fork unless we are pulling them from upstream. |
That disclaimer LGTM |
I don't think so, go for it!
(In reply to Natasha Sarkar's question: "@lavalamp @liggitt is there anything else I need to do before submitting a PR to fork?") |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/lifecycle rotten |
/remove-lifecycle rotten |
/lifecycle stale |
/remove-lifecycle stale |
/lifecycle rotten |
/close not-planned |
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". |
/reopen |
@KnVerey: Reopened this issue. |
/lifecycle stale |
/lifecycle rotten |
/remove-lifecycle rotten I just updated the corresponding PR yesterday, so this is still in progress! |
/lifecycle stale |
/lifecycle rotten |
/close not-planned |
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". |
Currently, the go-yaml library is used heavily by kubernetes and many subprojects. In kustomize, we use both go-yaml.v2 and go-yaml.v3, as well as this library, for all of our YAML processing.
An update to go-yaml v3 resulted in what would have been a breaking change in kustomize. We talked to the go-yaml maintainer, who agreed to make the fixes, but could not complete them in time for our deadline. As a result, we created a temporary fork of go-yaml v3 with the intent to remove the fork once the changes were made upstream. More details can be found in this issue: kubernetes-sigs/kustomize#4033
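For reference, one common way a temporary fork like this gets wired in during development is a `go.mod` replace directive; this is only a sketch with hypothetical module paths, not kustomize's actual setup:

```
// go.mod (sketch; example.com/mytool and ../yaml-fork are hypothetical)
module example.com/mytool

go 1.17

require gopkg.in/yaml.v3 v3.0.0

// Point the build at a local checkout of the fork. Note that replace
// directives only take effect in the main module: they do NOT propagate
// to downstream importers of a library, which is one reason a properly
// re-pathed, officially published fork (as proposed here) is preferable
// for something that gets vendored into k/k.
replace gopkg.in/yaml.v3 => ../yaml-fork
```

This limitation of `replace` is part of why the fork-inside-kustomize situation described below is awkward for everyone downstream.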
Unfortunately, shortly after we created the fork, the maintainer stopped responding to our issues and did not review the PRs we made to upstream the fix. As I understand it, there is only one go-yaml maintainer and they are very busy. The last response we received was in June 2021: go-yaml/yaml#720 (comment)
Maintaining the fork within kustomize is undesirable not only because of the overhead to the kustomize maintainers, but also because kustomize is built into kubectl, so our fork is now also vendored in k/k. It was accepted into kubectl with the understanding that it would be a temporary fork, but now I'm not so sure we will ever be able to remove it. Besides the fixes that we (kustomize) need to have upstream, it seems that we are not the only ones with this problem; there are lots of open issues and PRs in the go-yaml library with no response.
We would like to propose an official, SIG-sponsored fork of the go-yaml library. I had a very brief exchange in Slack about this and I think it's time to push this forward. In that Slack exchange I was advised that if we do have this official fork, we would probably want to fold it into this repo, which is why I am creating the issue here.
I would appreciate some advice on next steps and some indication about whether or not a permanent fork would be acceptable.
cc @liggitt @apelisse @kevindelgado @KnVerey @mengqiy