Using exec auth in kubectl triggers one connection per applied/described object #111911
/triage accepted

labeling as a regression, since a configuration that worked prior to 1.25 no longer has an equivalent that works properly
/kind regression
I see a few directions a fix could go:
All three of those changes seem likely to be invasive and inappropriate to make at the last minute for 1.25. Option 3 seems like the proper fix, but also the one with the biggest surface area, since it would require checking all the kubectl commands and likely some significant restructuring in how clients are constructed.
I would like to avoid this approach, as projects such as pinniped exclusively use client certificates for end-user-facing authentication.
me too, that doesn't really seem like a fix
it looks like the thing in kubectl that is repeatedly constructing clients is the Builder... there might be a central way to make that use the "construct http.Client once, reuse for multiple clientsets" approach that @aojea made last release... I'm looking into that now
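For reference, a minimal sketch (not from this thread) of that pattern, assuming the `rest.HTTPClientFor` and `NewForConfigAndClient` helpers in client-go; error handling is elided:

```go
package main

import (
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Sketch only: errors ignored for brevity.
	config, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)

	// Build the transport-bearing http.Client exactly once...
	httpClient, _ := rest.HTTPClientFor(config)

	// ...then share it across clientsets, so every request reuses the same
	// connection pool even if the underlying transport config is uncacheable.
	typedClient, _ := kubernetes.NewForConfigAndClient(config, httpClient)
	dynamicClient, _ := dynamic.NewForConfigAndClient(config, httpClient)

	_, _ = typedClient, dynamicClient
}
```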
/cc @cici37 |
/cc @soltysh @atiratree
adding them to the loop; there was also a WIP PR #108459 to fix this
oof... I didn't know this was a known issue... I'd have pushed to resolve this before forcing the migration to exec auth :-/ |
expanding on #111911 (comment), I see our options for 1.25 as:
As loath as I am to pick path 3, I think it's the right call for users and the release schedule, and I think we still have a clear path to actually remove the gcp and azure credential providers in 1.26 if we:
revert for 1.25 is open at #111918 😞
Thank you for addressing this and raising the revert PR! |
Reverting this really kills me, but I agree that we should:
Let's get this fixed correctly early in the 1.26 release so that we can backport the fix to 1.23+
The revert PR is merged for 1.25 and is in the release-1.25 branch. Dropping the regression label from this issue now (since it's just an exec auth bug now, not a 1.25 regression) and adding the important-soon priority for 1.26. Will work with @soltysh to get #108459 into a backport-safe shape, targeting merge first thing in 1.26.
Optimizations on top of cutting down the number of new transports we make: for some cases it could be useful for tokens to be strictly single use.

We need to update kubernetes/website#35981 to reflect the new situation.
The exec auth plugin can already do both of those things (it can return expiration timestamps that avoid additional calls to the plugin to get new credentials). In this case, we weren't actually calling the authenticator multiple times; the same cached credentials were getting used across multiple transports, talking to the server over multiple connections.
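To illustrate the expiration-timestamp mechanism, a minimal sketch (not from this thread) of an exec plugin emitting an ExecCredential with an expiry, so client-go caches the token instead of re-invoking the plugin; the token value and 15-minute lifetime are placeholders:

```go
package main

import (
	"encoding/json"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientauthv1 "k8s.io/client-go/pkg/apis/clientauthentication/v1"
)

func main() {
	// Placeholder lifetime; a real plugin would report the credential's actual expiry.
	expiry := metav1.NewTime(time.Now().Add(15 * time.Minute))

	cred := clientauthv1.ExecCredential{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "client.authentication.k8s.io/v1",
			Kind:       "ExecCredential",
		},
		Status: &clientauthv1.ExecCredentialStatus{
			Token:               "example-token", // placeholder
			ExpirationTimestamp: &expiry,
		},
	}

	// Exec plugins hand credentials back to client-go by printing JSON to stdout.
	json.NewEncoder(os.Stdout).Encode(cred)
}
```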
thanks for pointing out the docs update needed |
I left a comment earlier in kubernetes/website#35981 (comment) and the update is done. The CHANGELOG update is covered in kubernetes/sig-release#1995. We still need a release note update; since the change was merged after the rc1 cut, the release note has to be added manually. I am working with the release notes team on it: kubernetes/sig-release#1996 (comment)
I can try to improve #108459 next week and make it safer for backports |
/assign |
I have opened #112017 to fix this bug. I believe the change is small and safe enough to backport. |
What happened?
Switching from the gcp auth provider to the exec auth provider in 1.25 made `kubectl apply` requests, which previously made a single connection to the API server, make one connection per applied object. If enough items are applied, the connections can start to fail because of ephemeral port exhaustion.
/kind bug
/milestone v1.25
/sig auth cli api-machinery
What did you expect to happen?
Switching to the exec auth plugin would allow commands that previously worked to continue to work
How can we reproduce it (as minimally and precisely as possible)?
A reproducer integration test at https://github.com/liggitt/kubernetes/commits/exec-auth-reproducer demonstrates one transport (and therefore a new connection) for each item in an applied file when using exec auth.
on that branch:
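run something like the following (a hypothetical invocation; the exact make target arguments and test name come from the reproducer branch and may differ):

```sh
# Hypothetical: the standard Kubernetes integration-test entry point; the
# -run pattern is a placeholder for the test on the reproducer branch.
make test-integration WHAT=./test/integration/auth \
  KUBE_TEST_ARGS='-run TestExecPluginTransportCache -v'
```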
and observe 10 transport creations to apply 10 items (plus a couple others for discovery):
Anything else we need to know?
This happens because exec auth setup unconditionally sets the GetCert field in the transport config (even if the plugin is only going to return token credentials):

kubernetes/staging/src/k8s.io/client-go/plugin/pkg/client/auth/exec/exec.go, line 303 at 58c10aa

That makes the transport config uncacheable here:

kubernetes/staging/src/k8s.io/client-go/transport/cache.go, lines 141 to 144 at 58c10aa
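A simplified sketch (not the actual client-go code) of why a function-valued field defeats the transport cache: a function cannot be compared or serialized into a cache key, so any config carrying one gets a brand-new transport, and with it a brand-new connection pool:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"sync"
)

// config stands in for client-go's transport config: a mix of serializable
// fields and an optional function-valued credential callback.
type config struct {
	serverName string
	getCert    func() (*tls.Certificate, error)
}

type transportCache struct {
	mu    sync.Mutex
	items map[string]*http.Transport
}

func (c *transportCache) get(cfg config) *http.Transport {
	// A function can't be part of a map key, so a config that carries one
	// is uncacheable: every caller gets a fresh transport.
	if cfg.getCert != nil {
		return &http.Transport{}
	}
	key := fmt.Sprintf("serverName=%s", cfg.serverName)
	c.mu.Lock()
	defer c.mu.Unlock()
	if t, ok := c.items[key]; ok {
		return t
	}
	t := &http.Transport{}
	c.items[key] = t
	return t
}

func main() {
	cache := &transportCache{items: map[string]*http.Transport{}}

	withCert := config{
		serverName: "api",
		getCert:    func() (*tls.Certificate, error) { return nil, nil },
	}
	fmt.Println("reused with GetCert set:   ", cache.get(withCert) == cache.get(withCert)) // false

	plain := config{serverName: "api"}
	fmt.Println("reused without GetCert set:", cache.get(plain) == cache.get(plain)) // true
}
```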
Commands like `kubectl describe` and `kubectl apply` repeatedly construct transports when dealing with multiple objects. This poor behavior was masked in past configurations by the transports being cached.

Kubernetes version
tried with 1.24 and 1.25, reproduced with both
Cloud provider
n/a
OS version
n/a
Install tools
n/a
Container runtime (CRI) and version (if applicable)
n/a
Related plugins (CNI, CSI, ...) and versions (if applicable)
n/a