Fixing issue in generatePodSandboxWindowsConfig for hostProcess containers #110140
Conversation
Prior to this change, the following pod spec is expected to have its containers run as HostProcess containers, but instead fails because the pod sandbox wasn't configured to support running HostProcess containers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: container-sc
spec:
  hostNetwork: true
  containers:
  - name: ping-test
    securityContext:
      windowsOptions:
        hostProcess: true
        runAsUserName: "NT AUTHORITY\\SYSTEM"
    image: mcr.microsoft.com/windows/nanoserver:1809
    command:
    - "ping"
    - "-t"
    - "127.0.0.1"
  nodeSelector:
    "kubernetes.io/os": windows
```

This is called out in the KEP: https://github.com/kubernetes/enhancements/tree/master/keps/sig-windows/1981-windows-privileged-container-support#example-deployment-spec |
/assign @dcantah @jsturtevant for windows review |
/triage accepted |
Force-pushed from cec8a16 to 7ff7fe9
/lgtm |
A couple of minor comments. Having the `if kubecontainer.HasWindowsHostProcessContainer(pod)` check twice in the same function is a bit confusing, though I see why it is done and how it was missed initially. It seems there could be cleanup to make this easier to read, but that shouldn't be done in this PR.
Can you add a release note? This seems like something that should be called out as a bug fix in the release notes.
/test pull-kubernetes-e2e-capz-windows-containerd |
…iners where pod sandbox won't have HostProcess bit set if pod does not have a security context but containers specify HostProcess. Signed-off-by: Mark Rossetti <marosset@microsoft.com>
/retest |
/test pull-kubernetes-e2e-capz-windows-containerd |
/test pull-kubernetes-e2e-capz-windows-containerd |
/milestone v1.25 |
```go
// Pods containing HostProcess containers should fail to schedule if feature is not
// enabled instead of trying to schedule containers as regular containers as stated in
// PRR review.
if !utilfeature.DefaultFeatureGate.Enabled(features.WindowsHostProcessContainers) {
```
Since this is breaking the previous behavior, it may break some customers on upgrade. Is there a possibility a customer marked a container as hostProcess and forgot to clean it up afterwards?
The situation where behavior changes is when users set `hostProcess=true` for only the containers. Before this change those containers start as normal containers; after this change they start as hostProcess containers. The expected behavior, as outlined in the KEP, is that the containers should be started as hostProcess containers.
We've had several people report this issue recently (including https://kubernetes.slack.com/archives/C0SJ4AFB7/p1652916420701559?thread_ts=1652890944.995179&cid=C0SJ4AFB7)
Talked offline. Seems like an edge case where things may break for users. Unless I'm missing something, this is not blocking the PR.
If a customer thinks they are running in a HostProcess pod but aren't, then due to the limitations of process-isolated containers, functionality in the pod will likely fail in other ways, leading them to see this as a bug. We have seen folks come to the sig-windows Slack when this occurs.
As an example, the Prometheus windows_exporter technically runs inside a process-isolated Windows container, but all the stats it collects are wildly off, and in some cases, like querying for containers, it just doesn't work.
I would anticipate the same for most of the use cases we envisioned, where customers are modifying the host registry or interacting with host services.
/assign @dchen1107 |
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: marosset, mrunalp The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
The Kubernetes project has merge-blocking tests that are currently too flaky to consistently pass. This bot retests PRs for certain kubernetes repos according to the following rules:
You can:
/retest |
…0140-upstream-release-1.24 Automated cherry pick of #110140: Fixing issue in generatePodSandboxWindowsConfig for
…0140-upstream-release-1.23 Automated cherry pick of #110140: Fixing issue in generatePodSandboxWindowsConfig for
Signed-off-by: Mark Rossetti marosset@microsoft.com
What type of PR is this?
/kind bug
What this PR does / why we need it:
Fixes an issue in generatePodSandboxWindowsConfig for hostProcess containers
where the pod sandbox won't have the HostProcess bit set if the pod does not
have a security context but its containers specify HostProcess.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
/sig windows node