Don't force detach volume from healthy nodes #110721
Conversation
@jsafrane: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from 1dccbe0 to 7cc4d09 (Compare)
Added a unit test
```go
@@ -154,6 +155,15 @@ func (rc *reconciler) hasOutOfServiceTaint(nodeName types.NodeName) (bool, error
	return false, nil
}

// isHealthy returns true if the node looks healthy.
func (rc *reconciler) isHealthy(nodeName types.NodeName) (bool, error) {
```
How about naming it isNodeHealthy()?
fixed
```go
// isHealthy returns true if the node looks healthy.
func (rc *reconciler) isHealthy(nodeName types.NodeName) (bool, error) {
	node, err := rc.nodeLister.Get(string(nodeName))
	if err != nil {
```
Besides the node object not being found, is there any other possible error when getting the node from nodeLister, like some temporary error?
It is an informer, so it won't error on network hiccups. I've never seen a temporary error returned.
```go
// Act
// Delete the pod and the volume will be detached only after the maxLongWaitForUnmountDuration expires as volume is
// not unmounted. Here maxLongWaitForUnmountDuration is used to mimic that node is out of service.
// But in this case the node does not have the node.kubernetes.io/out-of-service taint and hence it will wait for
```
out-of-service is an alpha feature; do we need a feature gate to do the test?
That's a copied comment that I forgot to edit :-)
Fixed.
Since this PR introduces a behavior change that might affect user workloads in certain cases (e.g., a pod stuck in terminating, where we still have some bugs, on a healthy node means the volume will not be detached), is a release note mentioning this change good enough?
Force-pushed from 7cc4d09 to a9bdfe8 (Compare)
The 6 minute force-detach timeout should be used only for nodes that are not healthy. In case a CSI driver is being upgraded or is simply slow, NodeUnstage can take more than 6 minutes. In that case, the Pod is already deleted from the API server and thus the A/D controller will force-detach a mounted volume, possibly corrupting the volume and breaking CSI: a CSI driver expects NodeUnstage to succeed before Kubernetes can call ControllerUnpublish.
Force-pushed from a9bdfe8 to 3b94ac2 (Compare)
@jsafrane: The following test failed:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: bswartz, jsafrane. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@jsafrane, does it make sense to backport?
@ialidzhikov I am afraid it could break someone. What do you think? Can someone other than me test it thoroughly?
It has been in 1.25 for quite some time without any issues reported; I'm approving the 1.24 backport in #114168.
…0721-upstream-release-1.24 Automated cherry pick of #110721: Don't force detach volume from healthy nodes
WIP: PR for discussion. Missing unit tests.

What type of PR is this?
/kind bug
What this PR does / why we need it:
The 6 minute force-detach timeout should be used only for nodes that are not healthy.
In case a CSI driver is being upgraded or is simply slow, NodeUnstage can take more than 6 minutes. In that case, the Pod is already deleted from the API server and thus the A/D controller will force-detach a mounted volume, possibly corrupting the volume and breaking CSI: a CSI driver expects NodeUnstage to succeed before Kubernetes can call ControllerUnpublish.
In the context of this PR, an unhealthy node means `node.status.conditions["Ready"] != true`.

Which issue(s) this PR fixes:
Fixes #106710, #106902
cc @gnufied @jingxu97 @bswartz
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: