
Fix volume reconstruction for CSI ephemeral volumes #108997

Merged: 2 commits into kubernetes:master on Jun 4, 2022

Conversation

@dobsonj (Member) commented Mar 25, 2022

What type of PR is this?

/kind bug

What this PR does / why we need it:

This solves a couple of issues with volume reconstruction for CSI ephemeral volumes.

Which issue(s) this PR fixes:

Fixes #79980

Special notes for your reviewer:

/cc @gnufied @jsafrane @pohly @jingxu97

CSI inline volume with these changes:

root@ubuntu2110:/workspace/csi-driver-host-path# kubectl apply -f examples/csi-app-inline.yaml
pod/my-csi-app-inline created
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl get pod/my-csi-app-inline
NAME                READY   STATUS    RESTARTS   AGE
my-csi-app-inline   1/1     Running   0          16s
root@ubuntu2110:/workspace/csi-driver-host-path# mount | grep my-csi-volume
/dev/mapper/ubuntu--vg-ubuntu--lv on /var/lib/kubelet/pods/55703bf4-9ee4-4fd5-8f4c-1e20f82391e4/volumes/kubernetes.io~csi/my-csi-volume/mount type ext4 (rw,relatime)

root@ubuntu2110:/workspace/csi-driver-host-path# /root/stopkubelet.sh
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl delete --force pod/my-csi-app-inline
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-csi-app-inline" force deleted

root@ubuntu2110:/workspace/csi-driver-host-path# /root/startkubelet.sh
root@ubuntu2110:/workspace/kubernetes# mount | grep my-csi-volume
root@ubuntu2110:/workspace/kubernetes# mount | grep 55703bf4-9ee4-4fd5-8f4c-1e20f82391e4
root@ubuntu2110:/workspace/csi-driver-host-path# grep 55703bf4-9ee4-4fd5-8f4c-1e20f82391e4 /tmp/kubelet.log
...
I0324 20:45:21.709899 2213546 operation_generator.go:864] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/55703bf4-9ee4-4fd5-8f4c-1e20f82391e4-my-csi-volume" (OuterVolumeSpecName: "my-csi-volume") pod "55703bf4-9ee4-4fd5-8f4c-1e20f82391e4" (UID: "55703bf4-9ee4-4fd5-8f4c-1e20f82391e4"). InnerVolumeSpecName "my-csi-volume". PluginName "kubernetes.io/csi", VolumeGidValue ""
I0324 20:45:21.798304 2213546 reconciler.go:300] "Volume detached for volume \"my-csi-volume\" (UniqueName: \"kubernetes.io/csi/55703bf4-9ee4-4fd5-8f4c-1e20f82391e4-my-csi-volume\") on node \"127.0.0.1\" DevicePath \"\""
I0324 20:45:23.682333 2213546 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=55703bf4-9ee4-4fd5-8f4c-1e20f82391e4 path="/var/lib/kubelet/pods/55703bf4-9ee4-4fd5-8f4c-1e20f82391e4/volumes"
I0324 20:45:23.682673 2213546 kubelet_volumes.go:236] "Orphaned pod found, removing" podUID=55703bf4-9ee4-4fd5-8f4c-1e20f82391e4

PV spec with these changes:

root@ubuntu2110:/workspace/csi-driver-host-path# kubectl apply -f examples/csi-storageclass.yaml
storageclass.storage.k8s.io/csi-hostpath-sc created
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl apply -f examples/csi-pvc.yaml
persistentvolumeclaim/csi-pvc created
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl apply -f examples/csi-app.yaml
pod/my-csi-app created
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl get pod/my-csi-app
NAME         READY   STATUS    RESTARTS   AGE
my-csi-app   1/1     Running   0          68s
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl get pod/my-csi-app -o yaml | grep uid
  uid: bf5d71e6-da6a-448a-bab2-f35b2a938d45
root@ubuntu2110:/workspace/csi-driver-host-path# mount | grep pvc-
/dev/mapper/ubuntu--vg-ubuntu--lv on /var/lib/kubelet/pods/bf5d71e6-da6a-448a-bab2-f35b2a938d45/volumes/kubernetes.io~csi/pvc-1787a6c7-ccd3-44a1-b6ad-474da06c5e2b/mount type ext4 (rw,relatime)

root@ubuntu2110:/workspace/csi-driver-host-path# /root/stopkubelet.sh
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
127.0.0.1   NotReady   <none>   24h   v1.24.0-alpha.3.562+9eb3043a08b339-dirty
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl delete --force pod/my-csi-app
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-csi-app" force deleted

root@ubuntu2110:/workspace/csi-driver-host-path# /root/startkubelet.sh
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
127.0.0.1   Ready    <none>   24h   v1.24.0-alpha.3.562+9eb3043a08b339-dirty
root@ubuntu2110:/workspace/csi-driver-host-path# mount | grep pvc-
root@ubuntu2110:/workspace/csi-driver-host-path# grep pvc-1787a6c7-ccd3-44a1-b6ad-474da06c5e2b /tmp/kubelet.log
...
I0324 23:09:04.774114 2453285 csi_mounter.go:388] kubernetes.io/csi: Unmounter.TearDownAt successfully unmounted dir [/var/lib/kubelet/pods/bf5d71e6-da6a-448a-bab2-f35b2a938d45/volumes/kubernetes.io~csi/pvc-1787a6c7-ccd3-44a1-b6ad-474da06c5e2b/mount]
I0324 23:09:04.774132 2453285 operation_generator.go:864] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^7ede2757-abc4-11ec-80df-b6e25f7c1fda" (OuterVolumeSpecName: "pvc-1787a6c7-ccd3-44a1-b6ad-474da06c5e2b") pod "bf5d71e6-da6a-448a-bab2-f35b2a938d45" (UID: "bf5d71e6-da6a-448a-bab2-f35b2a938d45"). InnerVolumeSpecName "pvc-1787a6c7-ccd3-44a1-b6ad-474da06c5e2b". PluginName "kubernetes.io/csi", VolumeGidValue ""

subpath volumes with these changes:

root@ubuntu2110:/workspace/csi-driver-host-path# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
127.0.0.1   Ready    <none>   25h   v1.24.0-alpha.3.562+9eb3043a08b339-dirty
root@ubuntu2110:/workspace/csi-driver-host-path# cat examples/csi-app-subpath.yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data/subpath1"
        name: my-csi-volume
        subPath: subpath1
      - mountPath: "/data/subpath2"
        name: my-csi-volume
        subPath: subpath2
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-csi-volume
      persistentVolumeClaim:
        claimName: csi-pvc # defined in csi-pvc.yaml
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl apply -f examples/csi-app-subpath.yaml
pod/my-csi-app created
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl get pod/my-csi-app
NAME         READY   STATUS    RESTARTS   AGE
my-csi-app   1/1     Running   0          11s
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl get pod/my-csi-app -o yaml | grep uid 
  uid: 17bef72d-c2f7-4e9d-8988-dd8c725c54c8
root@ubuntu2110:/workspace/csi-driver-host-path# mount | grep subpath
/dev/mapper/ubuntu--vg-ubuntu--lv on /var/lib/kubelet/pods/17bef72d-c2f7-4e9d-8988-dd8c725c54c8/volume-subpaths/pvc-1787a6c7-ccd3-44a1-b6ad-474da06c5e2b/my-frontend/0 type ext4 (rw,relatime)
/dev/mapper/ubuntu--vg-ubuntu--lv on /var/lib/kubelet/pods/17bef72d-c2f7-4e9d-8988-dd8c725c54c8/volume-subpaths/pvc-1787a6c7-ccd3-44a1-b6ad-474da06c5e2b/my-frontend/1 type ext4 (rw,relatime)

root@ubuntu2110:/workspace/csi-driver-host-path# /root/stopkubelet.sh
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
127.0.0.1   NotReady   <none>   25h   v1.24.0-alpha.3.562+9eb3043a08b339-dirty
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl delete --force pod/my-csi-app
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-csi-app" force deleted

root@ubuntu2110:/workspace/csi-driver-host-path# /root/startkubelet.sh
root@ubuntu2110:/workspace/csi-driver-host-path# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
127.0.0.1   Ready    <none>   25h   v1.24.0-alpha.3.562+9eb3043a08b339-dirty

root@ubuntu2110:/workspace/csi-driver-host-path# mount | grep subpath
root@ubuntu2110:/workspace/csi-driver-host-path# mount | grep pvc-
root@ubuntu2110:/workspace/csi-driver-host-path# grep pvc-1787a6c7-ccd3-44a1-b6ad-474da06c5e2b /tmp/kubelet.log | grep 'TearDown succeeded'
I0324 23:41:31.759945 2511547 operation_generator.go:864] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^7ede2757-abc4-11ec-80df-b6e25f7c1fda" (OuterVolumeSpecName: "pvc-1787a6c7-ccd3-44a1-b6ad-474da06c5e2b") pod "17bef72d-c2f7-4e9d-8988-dd8c725c54c8" (UID: "17bef72d-c2f7-4e9d-8988-dd8c725c54c8"). InnerVolumeSpecName "pvc-1787a6c7-ccd3-44a1-b6ad-474da06c5e2b". PluginName "kubernetes.io/csi", VolumeGidValue ""

Does this PR introduce a user-facing change?

Fix for volume reconstruction of CSI ephemeral volumes

@k8s-ci-robot k8s-ci-robot added the release-note Denotes a PR that will be considered when it comes time to generate release notes. label Mar 25, 2022
@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Mar 25, 2022
@dobsonj (Member Author) commented Mar 25, 2022

/triage accepted
/priority important-soon

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Mar 25, 2022
@dobsonj (Member Author) commented Mar 25, 2022

/sig storage

@k8s-ci-robot k8s-ci-robot added sig/storage Categorizes an issue or PR as relevant to SIG Storage. area/kubelet area/test sig/node Categorizes an issue or PR as relevant to SIG Node. sig/testing Categorizes an issue or PR as relevant to SIG Testing. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Mar 25, 2022
@dobsonj (Member Author) commented Mar 25, 2022

How do we invoke the affected subpath tests...
/test ?

@k8s-ci-robot (Contributor):
@dobsonj: The following commands are available to trigger required jobs:

  • /test pull-kubernetes-conformance-kind-ga-only-parallel
  • /test pull-kubernetes-dependencies
  • /test pull-kubernetes-dependencies-go-canary
  • /test pull-kubernetes-e2e-gce
  • /test pull-kubernetes-e2e-gce-100-performance
  • /test pull-kubernetes-e2e-gce-big-performance
  • /test pull-kubernetes-e2e-gce-canary
  • /test pull-kubernetes-e2e-gce-large-performance
  • /test pull-kubernetes-e2e-gce-network-proxy-http-connect
  • /test pull-kubernetes-e2e-gce-no-stage
  • /test pull-kubernetes-e2e-gce-ubuntu
  • /test pull-kubernetes-e2e-gce-ubuntu-containerd
  • /test pull-kubernetes-e2e-gce-ubuntu-containerd-canary
  • /test pull-kubernetes-e2e-kind
  • /test pull-kubernetes-e2e-kind-ipv6
  • /test pull-kubernetes-files-remake
  • /test pull-kubernetes-integration
  • /test pull-kubernetes-integration-go-canary
  • /test pull-kubernetes-kubemark-e2e-gce-scale
  • /test pull-kubernetes-node-e2e-containerd
  • /test pull-kubernetes-typecheck
  • /test pull-kubernetes-unit
  • /test pull-kubernetes-unit-go-canary
  • /test pull-kubernetes-update
  • /test pull-kubernetes-verify
  • /test pull-kubernetes-verify-go-canary
  • /test pull-kubernetes-verify-govet-levee

The following commands are available to trigger optional jobs:

  • /test check-dependency-stats
  • /test pull-kubernetes-conformance-image-test
  • /test pull-kubernetes-conformance-kind-ga-only
  • /test pull-kubernetes-conformance-kind-ipv6-parallel
  • /test pull-kubernetes-cross
  • /test pull-kubernetes-e2e-aks-engine-azure-disk-windows-containerd
  • /test pull-kubernetes-e2e-aks-engine-azure-file-windows-containerd
  • /test pull-kubernetes-e2e-aks-engine-windows-containerd
  • /test pull-kubernetes-e2e-capz-azure-disk
  • /test pull-kubernetes-e2e-capz-azure-disk-vmss
  • /test pull-kubernetes-e2e-capz-azure-file
  • /test pull-kubernetes-e2e-capz-azure-file-vmss
  • /test pull-kubernetes-e2e-capz-conformance
  • /test pull-kubernetes-e2e-capz-ha-control-plane
  • /test pull-kubernetes-e2e-containerd-gce
  • /test pull-kubernetes-e2e-gce-alpha-features
  • /test pull-kubernetes-e2e-gce-correctness
  • /test pull-kubernetes-e2e-gce-csi-serial
  • /test pull-kubernetes-e2e-gce-device-plugin-gpu
  • /test pull-kubernetes-e2e-gce-iscsi
  • /test pull-kubernetes-e2e-gce-iscsi-serial
  • /test pull-kubernetes-e2e-gce-kubetest2
  • /test pull-kubernetes-e2e-gce-network-proxy-grpc
  • /test pull-kubernetes-e2e-gce-storage-disruptive
  • /test pull-kubernetes-e2e-gce-storage-slow
  • /test pull-kubernetes-e2e-gce-storage-snapshot
  • /test pull-kubernetes-e2e-gci-gce-autoscaling
  • /test pull-kubernetes-e2e-gci-gce-ingress
  • /test pull-kubernetes-e2e-gci-gce-ipvs
  • /test pull-kubernetes-e2e-iptables-azure-dualstack
  • /test pull-kubernetes-e2e-ipvs-azure-dualstack
  • /test pull-kubernetes-e2e-kind-canary
  • /test pull-kubernetes-e2e-kind-dual-canary
  • /test pull-kubernetes-e2e-kind-ipv6-canary
  • /test pull-kubernetes-e2e-kind-ipvs-dual-canary
  • /test pull-kubernetes-e2e-kind-multizone
  • /test pull-kubernetes-e2e-kops-aws
  • /test pull-kubernetes-e2e-ubuntu-gce-network-policies
  • /test pull-kubernetes-e2e-windows-gce
  • /test pull-kubernetes-kubemark-e2e-gce-big
  • /test pull-kubernetes-local-e2e
  • /test pull-kubernetes-node-crio-cgrpv2-e2e
  • /test pull-kubernetes-node-crio-cgrpv2-e2e-kubetest2
  • /test pull-kubernetes-node-crio-e2e
  • /test pull-kubernetes-node-crio-e2e-kubetest2
  • /test pull-kubernetes-node-e2e-containerd-features
  • /test pull-kubernetes-node-e2e-containerd-features-kubetest2
  • /test pull-kubernetes-node-e2e-containerd-kubetest2
  • /test pull-kubernetes-node-kubelet-serial-containerd
  • /test pull-kubernetes-node-kubelet-serial-containerd-kubetest2
  • /test pull-kubernetes-node-kubelet-serial-cpu-manager
  • /test pull-kubernetes-node-kubelet-serial-cpu-manager-kubetest2
  • /test pull-kubernetes-node-kubelet-serial-crio-cgroupv1
  • /test pull-kubernetes-node-kubelet-serial-crio-cgroupv2
  • /test pull-kubernetes-node-kubelet-serial-hugepages
  • /test pull-kubernetes-node-kubelet-serial-memory-manager
  • /test pull-kubernetes-node-kubelet-serial-topology-manager
  • /test pull-kubernetes-node-kubelet-serial-topology-manager-kubetest2
  • /test pull-kubernetes-node-memoryqos-cgrpv2
  • /test pull-kubernetes-node-swap-fedora
  • /test pull-kubernetes-node-swap-fedora-serial
  • /test pull-kubernetes-node-swap-ubuntu-serial
  • /test pull-kubernetes-unit-experimental
  • /test pull-publishing-bot-validate

Use /test all to run the following jobs that were automatically triggered:

  • pull-kubernetes-conformance-kind-ga-only-parallel
  • pull-kubernetes-conformance-kind-ipv6-parallel
  • pull-kubernetes-dependencies
  • pull-kubernetes-e2e-gce-100-performance
  • pull-kubernetes-e2e-gce-csi-serial
  • pull-kubernetes-e2e-gce-storage-slow
  • pull-kubernetes-e2e-gce-storage-snapshot
  • pull-kubernetes-e2e-gce-ubuntu-containerd
  • pull-kubernetes-e2e-kind
  • pull-kubernetes-e2e-kind-ipv6
  • pull-kubernetes-integration
  • pull-kubernetes-node-e2e-containerd
  • pull-kubernetes-typecheck
  • pull-kubernetes-unit
  • pull-kubernetes-verify
  • pull-kubernetes-verify-govet-levee

In response to this:

How do we invoke the affected subpath tests...
/test ?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dobsonj (Member Author) commented Mar 25, 2022

How do we invoke the affected subpath tests...

/test pull-kubernetes-e2e-gce-storage-disruptive
(I think)

@dobsonj (Member Author) commented Mar 29, 2022

Aside from the in-tree hostPath failures, the pull-kubernetes-e2e-gce-storage-disruptive results also have failures for the in-tree iscsi, ceph, rbd volume plugins... maybe we have to skip these 2 tests for those drivers as well?

/test pull-kubernetes-e2e-gce-storage-disruptive
on latest changes

@dobsonj (Member Author) commented Mar 30, 2022

The results for pull-kubernetes-e2e-gce-storage-disruptive now look pretty consistent with runs for other PR jobs. The subpath tests always fail for the in-tree iscsi, rbd, and ceph drivers.
I'll leave #61446 to be addressed in a separate PR so we can at least get this working for the CSI ephemeral volumes.

@dobsonj dobsonj changed the title Fix volume reconstruction for CSI ephemeral volumes and subpaths Fix volume reconstruction for CSI ephemeral volumes Mar 30, 2022
@SergeyKanzhelev SergeyKanzhelev moved this from Triage to Archive-it in SIG Node CI/Test Board Mar 30, 2022
This resolves a couple of issues for CSI volume reconstruction.
1. IsLikelyNotMountPoint is known not to work for bind mounts and was
   causing problems for subpaths and hostpath volumes.
2. Inline volumes were failing reconstruction due to calling
   GetVolumeName, which only works when there is a PV spec.
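To illustrate the first point in the commit message, here is a minimal sketch of why a same-device heuristic misses bind mounts while a mount-table scan does not. `likelyNotMountPoint`, `isMountPoint`, and the fake device map are hypothetical stand-ins for the real `IsLikelyNotMountPoint` logic and `stat` calls, not the kubelet code itself.

```go
package main

import "fmt"

// mountEntry mimics one line of /proc/mounts.
type mountEntry struct {
	device string
	path   string
}

// likelyNotMountPoint is a stand-in for the heuristic behind
// IsLikelyNotMountPoint: a path is "likely not a mount point" when it sits
// on the same device as its parent directory. Bind mounts keep the device
// of the source filesystem, so they look identical to their parent.
func likelyNotMountPoint(devOf map[string]uint64, path, parent string) bool {
	return devOf[path] == devOf[parent]
}

// isMountPoint scans the mount table instead; bind mounts do appear there.
func isMountPoint(table []mountEntry, path string) bool {
	for _, m := range table {
		if m.path == path {
			return true
		}
	}
	return false
}

func main() {
	sub := "/var/lib/kubelet/pods/x/volume-subpaths/pvc-y/app/0"
	parent := "/var/lib/kubelet/pods/x/volume-subpaths/pvc-y/app"

	// A subpath bind mount lives on the same ext4 device as its parent.
	devOf := map[string]uint64{sub: 2049, parent: 2049}
	table := []mountEntry{{"/dev/mapper/vg-lv", sub}}

	fmt.Println("same-device heuristic says not a mount point:", likelyNotMountPoint(devOf, sub, parent)) // true: the bind mount is missed
	fmt.Println("mount table scan says mount point:", isMountPoint(table, sub))                           // true: the bind mount is detected
}
```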
@dobsonj (Member Author) commented Jun 1, 2022

/test pull-kubernetes-e2e-gce-storage-disruptive

@@ -246,14 +245,6 @@ func (h *hostpathCSIDriver) PrepareTest(f *framework.Framework) (*storageframewo
NodeName: node.Name,
}

// Disable volume lifecycle checks due to issue #103651 for the one
A contributor commented:
So by removing this, the reconstruction part for the csi-hostpath driver will be tested?

@dobsonj (Member Author) replied:

Right, this check was added in f1e1f3a and e99b945 to work around #103651 (comment)
So removing this check allows csi-hostpath to be tested again, and I can see those tests passing for csi-hostpath in pull-kubernetes-e2e-gce-storage-disruptive now.

@jingxu97 (Contributor) commented Jun 3, 2022

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 3, 2022
@jingxu97 (Contributor) commented Jun 3, 2022

/approve

@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dobsonj, jingxu97

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 3, 2022
@dobsonj (Member Author) commented Jun 3, 2022

/test pull-kubernetes-e2e-kind-ipv6

@k8s-ci-robot (Contributor) commented Jun 3, 2022

@dobsonj: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-kubernetes-e2e-gce-storage-disruptive
Commit: c8d3cc5
Required: false
Rerun command: /test pull-kubernetes-e2e-gce-storage-disruptive

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.


@k8s-triage-robot:
The Kubernetes project has merge-blocking tests that are currently too flaky to consistently pass.

This bot retests PRs for certain kubernetes repos according to the following rules:

  • The PR does not have any do-not-merge/* labels
  • The PR does not have the needs-ok-to-test label
  • The PR is mergeable (does not have a needs-rebase label)
  • The PR is approved (has cncf-cla: yes, lgtm, approved labels)
  • The PR is failing tests required for merge

You can:

/retest

@k8s-ci-robot k8s-ci-robot merged commit 1f90b79 into kubernetes:master Jun 4, 2022
SIG Node CI/Test Board automation moved this from Archive-it to Done Jun 4, 2022
SIG Node PR Triage automation moved this from Needs Reviewer to Done Jun 4, 2022
@k8s-ci-robot k8s-ci-robot added this to the v1.25 milestone Jun 4, 2022
k8s-ci-robot added a commit that referenced this pull request Dec 1, 2022
…997-upstream-release-1.24

Automated cherry pick of #108997: kubelet: fix volume reconstruction for CSI ephemeral
k8s-ci-robot added a commit that referenced this pull request Dec 1, 2022
…997-upstream-release-1.23

Automated cherry pick of #108997: kubelet: fix volume reconstruction for CSI ephemeral
Successfully merging this pull request may close these issues:

CSI volume reconstruction does not work for ephemeral volumes