Promote CSIMigrationAWS to GA #111479
Conversation
Ready for review, CI tests are passing now. Since my Windows test turned out to be basically a superset of the upgrade test (upgrading EKS from 1.22 -> 1.23), I think it's sufficient. However, I'm still going to repeat the test on a kops 1.25 cluster without involving Windows, and the checklist reflects this.
/triage accepted
/milestone v1.25
/lgtm
/lgtm
/assign @dims
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: dims, wongma7, xing-yang. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment
What type of PR is this?
/kind feature
/sig storage
What this PR does / why we need it:
CSIMigrationAWS has been beta since 1.17 and on by default since 1.23; this promotes it to GA for 1.25.
kubernetes/enhancements#1487
Which issue(s) this PR fixes:
Special notes for your reviewer:
I am still working on the Windows and upgrade tests and aim to complete them this week, but I'm opening the PR now as a placeholder.
Testing:
For promoting the gate from off to on by default, there was no official vendor-supported AWS + Windows Nodes setup to test, so I had to hack my own VM/driver images. As of April this year, EKS has Windows Nodes with CSI support (https://docs.aws.amazon.com/eks/latest/userguide/eks-ami-versions-windows.html), so for GA I will be using that to test. However, there is no official EKS 1.23 Kubernetes control plane with migration enabled (yet), so I have to hack that together again, and since it is not public/reproducible you will have to take my word for it that the tests succeed!
Some test cases fail on EKS due to an EKS-specific issue; it's not related to CSI: Clear ephemeral container resources field when creating one in volume test #111521

```
ip-192-168-71-159.us-west-2.compute.internal   Ready   <none>   37m   v1.23.7-eks-4721010   192.168.71.159   34.222.140.157   Windows Server 2019 Datacenter   10.0.17763.3165   docker://20.10.9
```
PASSED with caveat: the 'should mount multiple PV pointing to the same storage on the same node' test case fails because it's not applicable to 1.23 Nodes; I am executing master e2e.test against 1.23 Nodes.

MISC NOTES FOR SELF (sorry, I am editing this PR frequently with notes/progress, so to keep the above checklist comprehensible I'm dumping stuff here):
Disable migration means:
Windows test
```shell
./_output/bin/e2e.test --ginkgo.focus="aws.*ntfs" \
  --ginkgo.skip="LinuxOnly|expand" \
  --kubeconfig=$HOME/.kube/config \
  --node-os-distro=windows \
  --gce-zone=us-west-2a \
  --provider=aws
```

```
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]","completed":19,"skipped":6573,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]"]}
```
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: