Corresponding kubernetes/kubernetes issue: https://github.com/kubernetes/kubernetes/issues/62822
Thanks for the update!
/assign @leblancd
/kind feature
/sig network
/milestone 1.11
@leblancd any design document available?
/cc @thockin @dcbw @luxas @kubernetes/sig-network-feature-requests
@idvoretskyi - No design doc yet, but we'll start collaborating on one shortly.
Does this mean Kubernetes Ingress will support dual-stack?
Does this mean CNI plugins (e.g. Calico) would need to run dual-stack (both BIRD and BIRD6 daemons, for example)?
@sb1975 - Regarding dual-stack ingress support, that's something we'll need to hash out, but here are my preliminary thoughts:
Regarding Calico and other CNI plugins:
@leblancd: So here is the scenario:
@sb1975 - Good question re. the NGINX ingress controller with dual-stack. I'm not an expert on the NGINX ingress controller (maybe someone more familiar can jump in), but here's how I would see the workflow:
As for helping and getting involved, this would be greatly appreciated! We're about to start working in earnest on dual-stack (it's been a little delayed by the work in getting CI working for IPv6-only). I'm hoping to come out with an outline for a spec (Google Doc or KEPs WIP doc) soon, and would be looking for help in reviewing, and maybe writing some sections. We'll also DEFINITELY need help with official documentation (beyond the design spec), and with defining and implementing dual-stack E2E tests. Some of the areas which I'm still a bit sketchy on for the design include:
If you've thought about any of these, maybe you could help with those sections?
We're also considering an intermediate "dual-stack at the edge" (with IPv6-only inside the cluster) approach, where access from outside the cluster to K8s services would be dual-stack, but this would be mapped (e.g. via NGINX ingress controller) to IPv6-only endpoints inside the cluster (or use stateless NAT46). Pods and services in the cluster would need to be all IPv6, but the big advantage would be that dual-stack external access would be available much more quickly from a time-to-market perspective.
/milestone 1.12
@leblancd / @caseydavenport - I'm noticing a lot of discussion here and a milestone change.
Should this be pulled from the 1.11 milestone?
@justaugustus - Yes, this should be moved to 1.12. Do I need to delete a row in the release spreadsheet, or is there anything I need to do to get this changed?
@leblancd I've got it covered. Thanks for following up! :)
@leblancd @kubernetes/sig-network-feature-requests --
This feature was removed from the previous milestone, so we'd like to check in and see if there are any plans for this in Kubernetes 1.12.
If so, please ensure that this issue is up-to-date with ALL of the following information:
Set the following:
Please make sure all PRs for features have relevant release notes included as well.
Happy shipping!
/cc @justaugustus @kacole2 @robertsandoval @rajendar38
@leblancd --
Feature Freeze is today. Are you planning on graduating this to Beta in Kubernetes 1.12?
If so, can you make sure everything is up-to-date, so I can include it on the 1.12 Feature tracking spreadsheet?
Hi @justaugustus - Beta status will need to slip into Kubernetes 1.13. We are making (albeit slow) progress on the design KEP (https://github.com/kubernetes/community/pull/2254), and we're getting close to re-engaging with the CI test PR, but the Kubernetes 1.12 target was a bit too optimistic.
I'll update the description/summary above with the information you requested earlier. Thank you for your patience.
/remove-stage alpha
/stage beta
No worries, @leblancd. Thanks for the update!
Hi, @justaugustus @leblancd
I just read the update that beta has moved to 1.13 for dual-stack. What is the expected release date of 1.13? We are actually looking for dual-stack support; it's a go/no-go decision for our product's move to containers.
@navjotsingh83 - I don't think the release date for Kubernetes 1.13 has been solidified. I don't see 1.13 listed in the Kubernetes releases documentation.
@navjotsingh83 @leblancd The 1.13 release schedule is published. It's a short release cycle, with code freeze on Nov 15th. Do you think there's enough time to graduate this feature to beta? Can you please update this issue with your level of confidence and what's pending in terms of code, test, and docs completion?
As per discussion in the SIG Network meeting, though there will be considerable work done on this feature in 1.13, it is not expected to go to beta in 1.13. Removing the milestone accordingly.
/milestone clear
@kacole2 to remove this from the 1.13 enhancements spreadsheet
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
@leblancd Hello - I'm the enhancements lead for 1.14 and I'm checking in on this issue to see what work (if any) is being planned for the 1.14 release. Enhancements freeze is Jan 29th, and I want to remind you that all enhancements must have a KEP.
@leblancd Wanted to follow up on your prior comment relative to creating a delineation at the edge of the cluster for IPv4/IPv6:
“We're also considering an intermediate "dual-stack at the edge" (with IPv6-only inside the cluster) approach, where access from outside the cluster to K8s services would be dual-stack, but this would be mapped (e.g. via NGINX ingress controller) to IPv6-only endpoints inside the cluster (or use stateless NAT46). Pods and services in the cluster would need to be all IPv6, but the big advantage would be that dual-stack external access would be available much more quickly from a time-to-market perspective.”
This use case would be a good one for a current project, so I wanted to get your thoughts on the timeframe and see if there is anything I or someone in our group could contribute to help with this quicker time-to-market path.
@KevinAtDesignworx If the edge-dual-stack but internal-IPv6-only approach can still reach external IPv4 endpoints from inside a container (e.g. `curl -v 93.184.216.34 -H "Host: example.com"`), I genuinely think it's the best approach. If your infrastructure can use IPv6, why bother using IPv4 except at the edge for compatibility reasons? However, if this approach means I cannot reach legacy IPv4-only websites from inside my cluster, I'm not so sure anymore.
Well, there is 464XLAT, so IPv6-only inside the container would be feasible.
@KevinAtDesignworx - If using an ingress controller would work in your scenario, it's possible to configure an NGINX ingress controller for dual-stack operation from outside (proxying to single-family inside the cluster): https://github.com/leblancd/kube-v6#installing-a-dual-stack-ingress-controller-on-an-ipv6-only-kubernetes-cluster
The ingress controllers would need to run on the host network of each node, so the controllers would need to be set up as a daemonset (one ingress controller on each node). This assumes:
This would be in addition to a NAT64/DNS64 for connections from V6 clients inside the cluster to external IPv4-only servers.
Stateless NAT46 is also an option, but I haven't tried that, so I don't have any config guides for that.
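To make the daemonset idea above concrete, here's a minimal sketch of what a host-network ingress controller might look like (the namespace, labels, and image tag are illustrative assumptions, not a tested configuration):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller   # illustrative name
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      # hostNetwork lets the controller bind the node's own IPv4 and IPv6
      # addresses, so external clients of either family can reach it even
      # though the cluster's pods and services are IPv6-only.
      hostNetwork: true
      containers:
      - name: controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0  # illustrative tag
        ports:
        - containerPort: 80
        - containerPort: 443
```

With one controller per node, external dual-stack traffic terminates at the node and is proxied to IPv6-only endpoints inside the cluster.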
@leblancd any work planned here for 1.15? Looks like a KEP hasn't been accepted yet at this point either. Thanks!
From inside a container (which is IPv6-only), sending out a curl request (e.g. `curl -v 93.184.216.34 -H "Host: example.com"`) to a destination outside the cluster will, I think, fail with an unknown-destination or destination-unreachable error, unless an IPv4 route exists on the host where the container runs.
@GeorgeGuo2018 If k8s implemented DNS64/NAT64, it would work. It heavily depends on how far k8s goes with 464XLAT/PLAT solutions and what would need to be handled at edge routers, etc.
Actually, I think it would be possible using a DaemonSet/Deployment with host networking running Tayga in the kube-system namespace, so that the internal DNS64 would use Tayga to reach outside the network.
Sounds like a solution to me.
We run an IPv6-only network internally and NAT64/DNS64 works quite well for us. For some legacy stuff where there was no IPv6 support at all, we ended up using clatd directly where it was needed. (In our case directly on a VM.)
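For anyone exploring the Tayga suggestion above, here's a rough sketch of the NAT64 configuration piece (the names, addressing, and deployment details are assumptions, not a tested setup). DNS64 complements this by synthesizing AAAA records inside the well-known 64:ff9b::/96 prefix, e.g. 93.184.216.34 becomes 64:ff9b::5db8:d822:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tayga-config        # hypothetical name
  namespace: kube-system
data:
  tayga.conf: |
    # Translate between the NAT64 prefix (v6 side) and a private v4 pool.
    prefix 64:ff9b::/96
    tun-device nat64
    ipv4-addr 192.168.255.1          # illustrative private addressing
    dynamic-pool 192.168.255.0/24
    data-dir /var/lib/tayga
```

A DaemonSet (or Deployment on dedicated gateway nodes) with `hostNetwork: true` would mount this config and run Tayga, giving IPv6-only pods a path to IPv4-only destinations.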
@kacole2 - I would like this tracked for 1.15. I'm working to get the following PR merged - https://github.com/kubernetes/enhancements/pull/808
Specifically for 1.15 we would be adding support for the following:
cc @caseydavenport for milestone tracking ^
@kacole2 the KEP is now merged. Let me know if there is anything else we need to get this tracked in 1.15
Hey @leblancd @lachie83 Just a friendly reminder we're looking for a PR against k/website (branch dev-1.15) due by Thursday, May 30. It would be great if it's the start of the full documentation, but even a placeholder PR is acceptable. Let me know if you have any questions!
> @kacole2 the KEP is now merged. Let me know if there is anything else we need to get this tracked in 1.15

@lachie83 Hi Lachie, did you mean that the IPv4/IPv6 dual-stack KEP is finished?
> @kacole2 the KEP is now merged. Let me know if there is anything else we need to get this tracked in 1.15

Actually, I want to figure out whether dual-stack support will definitely be added in k8s 1.15.
@leblancd The placeholder PR against k8s.io dev-1.15 is due Thursday, May 30th.
Could I consider that dual-stack support will be available in release-1.15?
@GeorgeGuo2018 It is still on the enhancement sheet for 1.15 but only enhancement lead @kacole2 can provide you with better details on that.
Hi @lachie83 @leblancd. Code Freeze is Thursday, May 30th 2019 @ EOD PST. All enhancements going into the release must be code-complete, including tests, and have docs PRs open.
Please list all current k/k PRs so they can be tracked going into freeze. If the PRs aren't merged by freeze, this feature will slip for the 1.15 release cycle. Only release-blocking issues and PRs will be allowed in the milestone.
I see kubernetes/kubernetes#62822 in the original post is still open. Are there other PRs we are expecting to be merged as well?
If you know this will slip, please reply back and let us know. Thanks!
@simplytunde - Appreciate the heads up. I am working on getting the docs PR together this week.
@GeorgeGuo2018 - This is going to be a multi-release KEP. We plan on landing phase 1 in 1.15. Please take a look at the implementation plan in the KEP for further detail - https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/20180612-ipv4-ipv6-dual-stack.md#implementation-plan.
@simplytunde - I've created the initial placeholder docs PR here with a WIP https://github.com/kubernetes/website/pull/14600. I plan to complete and have it ready for review over the next couple of days.
@kacole2 Thanks for the ping. I've updated the 1.15 enhancements spreadsheet with the k/k PR that we are tracking (https://github.com/kubernetes/kubernetes/pull/73977) along with the draft docs PR (https://github.com/kubernetes/website/pull/14600). We are still currently on track to get this PR merged before code freeze. LMK if I'm missing anything else.
@kacole2 after discussion with @claurence and the release team we've decided to remove this from the 1.15 milestone. Please go ahead and remove it and update the spreadsheet as appropriate. Thanks for all your assistance thus far.
/milestone clear
@simplytunde I've also commented on the docs PR. Can you please make sure that's removed from the 1.15 milestone also?
Hi @lachie83 @leblancd , I'm the 1.16 Enhancement Shadow. Is this feature going to be graduating alpha/beta/stable stages in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet.
Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29.
Thank you.
https://github.com/kubernetes/dns/issues/315 covers adding IPv6 / AAAA to the DNS service discovery specification.
@lachie83 @leblancd any idea if this will be graduating in 1.16, so we can track it?
@evillgenius75 @kacole2 This needs to be tracked in 1.16. This feature will be in alpha state. We will be implementing phase 1 and phase 2 as defined in the KEP in 1.16.
Tracking KEP
Merged k/k PRs (currently in master; will be in 1.16)
Associated PRs
Hey, @leblancd I'm the v1.16 docs release lead.
Does this enhancement (or the work planned for v1.16) require any new docs (or modifications)?
Just a friendly reminder we're looking for a PR against k/website (branch dev-1.16) due by Friday, August 23rd. It would be great if it's the start of the full documentation, but even a placeholder PR is acceptable. Let me know if you have any questions!
@simplytunde here is the docs PR - https://github.com/kubernetes/website/pull/16010
@lachie83 friendly reminder code freeze for 1.16 is on Thursday 8/29. (as if you didn't know that). Looks like these PRs are still outstanding:
Phase 2 Services/Endpoints - kubernetes/kubernetes#79386
Phase 2 kube-proxy - kubernetes/kubernetes#79576
Associated:
Support multiple Mask Sizes for cluster cidrs - kubernetes/kubernetes#79993
E2e Prow Job for dualstack kubernetes/test-infra#12966
Hi @lachie83 @leblancd it looks as though https://github.com/kubernetes/kubernetes/pull/79576 and https://github.com/kubernetes/kubernetes/pull/79993 didn't merge before code freeze and aren't in the Tide merge pool. This feature is going to be bumped from v1.16. If you would still like this to be a part of the 1.16 release, please file an exception.
@kacole2 Apologies for the delays in response. The primary PR we were tracking was https://github.com/kubernetes/kubernetes/pull/79386. As for kubernetes/kubernetes#79576, we made a decision to defer it to 1.17 and instead focus on https://github.com/kubernetes/kubernetes/pull/82091 (in agreement with sig-network), which fulfills the same phase 2 goals laid out in the KEP. The other related PR tracked in this release was https://github.com/kubernetes/kubernetes/pull/80485, which is also merged. kubernetes/kubernetes#79993 has also been deferred to 1.17.
Hey there @lachie83 @leblancd -- 1.17 Enhancements lead here. I wanted to check in and see if you think this Enhancement will be graduating to alpha/beta/stable in 1.17?
The current release schedule is:
If you do, please list all relevant k/k PRs in this issue so they can be tracked properly. 👍
Thanks!
/milestone clear
Hi Bob. Thanks for reaching out. I'm still planning phase 3 of this enhancement which will round out the enhancement to completion. This enhancement will still be in alpha at the end of this release but there will be phase 3 related work that will land in k/k as part of 1.17.
Here is a list of high level deliverables for 1.17 for dual-stack. I will update this list throughout the release.
Much appreciated, thank you kindly @lachie83 ❤️ I'll go ahead and add it to the tracking sheet.
/milestone v1.17
@mrbobbytables I've also added a PR to detail the work listed above as part of phase 3 in the KEP, after communicating the plan via sig-network. The KEP itself is still in the `implementable` state and these changes are merely documenting the planned work as part of 1.17 specifically.
At some point, I'd like to ensure that https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ covers IPv6 DNS. https://github.com/kubernetes/website/issues/15434 tracks that change; mentioning it here to note a cross-reference.
Updated KEP to add phase 2 e2e tests - https://github.com/kubernetes/enhancements/pull/1311
Hello @lachie83 I'm one of the v1.17 docs shadows.
Does this enhancement (or the work planned for v1.17) require any new docs (or modifications to existing docs)? If not, can you please update the 1.17 Enhancement Tracker Sheet (or let me know and I'll do so)?
If so, just a friendly reminder we're looking for a PR against k/website (branch dev-1.17) due by Friday, November 8th; it can just be a placeholder PR at this time. Let me know if you have any questions!
@lachie83
Since we're approaching the docs placeholder PR deadline on Nov 8th, please try to get one in against the k/website dev-1.17 branch.
Hey there @lachie83, I know you're keepin' tabs, but I need to pop in and mention it anyway 🙈
Code freeze is just around the corner (November 14th). How are things looking? Is everything on track to be merged before then?
Thanks!
Hey @mrbobbytables! Thanks for the ping. We are tracking the following PRs to land in 1.17. There may be one or two more PRs associated with this change that come in too. These changes will need docs; I will raise a placeholder docs PR.
@irvifa - Here is the placeholder docs PR. https://github.com/kubernetes/website/pull/17457
Cool thanks 🎉 @lachie83
@lachie83 tomorrow is the code freeze for the 1.17 release cycle. It looks like the k/k PRs have not yet been merged. 😬 We're flagging this as At Risk in the 1.17 Enhancement Tracking Sheet.
Do you think they will be merged by the EoD of the 14th (Thursday)? After that point, only release-blocking issues and PRs will be allowed in the milestone with an exception.
Thanks Bob - I'll be discussing this with sig-network today and will provide an update.
Hey @mrbobbytables. Here is a list of PRs that we are working on getting merged by EoD today and have been approved by sig-network.
The remaining PR is most likely going to be punted to 1.18 - https://github.com/kubernetes/kubernetes/pull/82462
@mrbobbytables just confirming that all stated PRs above have been merged and that we are indeed going to punt kubernetes/kubernetes#82462 to 1.18. This enhancement can still be tracked, as these PRs add meaningful changes to the dual-stack behavior in 1.17. Now I just need to get the docs PR ready! We are hoping to land kubernetes/kubernetes#82462 in 1.18 and progress this work to beta.
Great, thanks @lachie83!
We plan to move this enhancement to beta in 1.18. Enhancement graduation criteria and test plans can be found in the KEP along with this PR - https://github.com/kubernetes/enhancements/pull/1429
/milestone 1.18
@lachie83: The provided milestone is not valid for this repository. Milestones in this repository: [keps-beta, keps-ga, v1.17, v1.18, v1.19, v1.20, v1.21]
Use `/milestone clear` to clear the milestone.
In response to this:
> /milestone 1.18
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/milestone v1.18
> We plan to move this enhancement to beta in 1.18. Enhancement graduation criteria and test plans can be found in the KEP along with this PR - #1429
Thanks for the update @lachie83, I've marked this as tracked in the 1.18 spreadsheet!
Please track the following PR as part of the work to land in 1.18. https://github.com/kubernetes/kubernetes/pull/82462
Adding other related PRs for tracking:
https://github.com/kubernetes/test-infra/pull/15893
https://github.com/kubernetes-sigs/kind/pull/692
Thanks @lachie83!
Hi @lachie83, do you have any other PRs that we should track other than the above-mentioned ones?
Hello, @lachie83 @leblancd - I'm a Docs shadow on the 1.18 release team.
Does the work planned for this enhancement in 1.18 require any new docs or modifications to existing docs?
If not, can you please update the 1.18 Enhancement Tracker Sheet (or let me know and I'll do so)
If doc updates are required, reminder that the placeholder PRs against k/website (branch dev-1.18) are due by Friday, Feb 28th.
Let me know if you have any questions!
If anyone wants help documenting IPV6 or dual-stack stuff for v1.18, give me a nudge. I may be able to help.
Hey @lachie83,
Looks like kubernetes-sigs/kind#692 hasn't merged yet. Is that critical for your Beta graduation?
Hey @jeremyrickard @sethmccombs we're going to have to pull this from graduating to beta given this PR https://github.com/kubernetes/kubernetes/pull/86895. Until we have a reasonable way forward I don't think it is wise to move this to beta for 1.18
/milestone clear
@lachie83 Thank you for the update. I've removed this enhancement from the milestone. Looking forward to this on 1.19. :)
I would like to confirm that the dual-stack enhancement remains in alpha in 1.18. I am currently working with the community to assess the work planned for 1.19. It's likely this enhancement will still remain in alpha in 1.19, but I would like to confirm. I will also take an action to get the docs updated to reflect the enhancement's state in the 1.18 docs.
If there are pages on the website that show dual-stack Kubernetes as beta, please file those against k/website as priority/important-soon bugs.
Hi @lachie83 -- 1.19 Enhancements Lead here. I wanted to check in and see if you think this enhancement will graduate in 1.19.
The current release schedule is:
> If there are pages on the website that show dual-stack Kubernetes as beta, please file those against k/website as priority/important-soon bugs.
@sftim I've raised two PRs to address the release labelling in 1.17 and 1.18
@palnabarun We are working to get the dual-stack KEP updated in the 1.19 release timeframe; however, we don't currently think we will land code changes in the 1.19 release. We have one blocking issue with the work that's already been done (thanks to having it in the alpha state). The blocking issue is https://github.com/kubernetes/kubernetes/pull/86895. We plan to address it via the following KEP update, https://github.com/kubernetes/enhancements/pull/1679, but it's going to take time to get consensus on the proposed change. At this stage the dual-stack enhancement will remain in alpha until we address this blocking issue with the current implementation. I will provide updates as things progress.
Thank you, Lachie for the updates. I appreciate all the efforts! :slightly_smiling_face:
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
We would like this enhancement to be tracked in 1.20. It will be reimplemented in alpha state according to the updated KEP - https://github.com/kubernetes/enhancements/pull/1679. Please track the following PR for the implementation - https://github.com/kubernetes/kubernetes/pull/91824. We are planning to complete the review and merge the PR early in the 1.20 release cycle.
Latest dual-stack graduation-to-beta status, as discussed in the Sept 17th SIG Network meeting, for those playing along at home:
All these items are being actively worked on, and 1.20 is still the target for dual-stack API beta graduation. However, despite our best efforts, there is always a chance something will not be resolved in time; if so, SIG Network will decide in our public meetings whether to continue graduation to beta. All are welcome to join.
@dcbw thank you very much for the update (sorry I couldn't make the call). Does it make sense to get this to enhancement to beta in 1.20 or simply remain in alpha? If we want to go to beta does the graduation criteria in the KEP still make sense given that this is a reimplementation https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/20180612-ipv4-ipv6-dual-stack.md#graduation-criteria
It's not really a reimplementation, though. All of the previous work is still valid and the work in 1.20 is building on top of it to finalize the last changes needed that have been identified. My interpretation of the sig-network discussion is that the list @dcbw posted is the set of remaining known issues needed to be resolved for graduation.
Hi all,
1.20 Enhancements Lead here, I'm going to set this as tracked please update me if anything changes :)
As a reminder Enhancements Freeze is October 6th.
As a note, the KEP is using an old format; we have since updated to: https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template
Best,
Kirsten
/milestone v1.20
Hi, @russellb -
> It's not really a reimplementation, though. All of the previous work is still valid and the work in 1.20 is building on top of it to finalize the last changes needed that have been identified.
Given the API changes in https://github.com/kubernetes/kubernetes/pull/91824, enough is different that marking dual-stack as alpha for 1.20 will allow room for any further re-implementations that prove necessary. I know we're all eager for beta, but let's first land the PR with +9,319 −3,261 and let the dust settle. :)
> Given the API changes in kubernetes/kubernetes#91824, enough is different that marking dual-stack as alpha for 1.20 will allow room for any further re-implementations that prove necessary. I know we're all eager for beta, but let's first land the PR with +9,319 −3,261 and let the dust settle. :)
@bridgetkromhout yeah, we need to land https://github.com/kubernetes/kubernetes/pull/91824 before we can make any determination about API readiness. I really hope we can do that ASAP.
Hi all,
1.20 Enhancement shadow here 👋
Since this Enhancement is scheduled to be in 1.20, please keep in mind these important upcoming dates:
Friday, Nov 6th: Week 8 - Docs Placeholder PR deadline
Thursday, Nov 12th: Week 9 - Code Freeze
As a reminder, please link all of your k/k PR as well as docs PR to this issue so we can track them.
Thank you!
Hi @kinarashah @kikisdeliveryservice - I have confirmed on the sig-network call that we need this reclassified to alpha for 1.20. It's a complete reimplementation that needs time to soak and be tested in alpha stage.
Hello @lachie83, 1.20 Docs shadow here.
Does the enhancement work planned for 1.20 require any new docs or modifications to existing docs?
If so, please follow the steps here to open a PR against the `dev-1.20` branch in the `k/website` repo. This PR can be just a placeholder at this time and must be created before Nov 6th.
Also take a look at Documenting for a release to familiarize yourself with the docs requirements for the release.
Thank you!
Thanks @reylejano-rxm - we've opened kubernetes/website#24725
Hi @lachie83
Thanks for creating the docs PR!
Please keep in mind the important upcoming dates:
As a reminder, please link all of your k/k PR as well as docs PR to this issue for the release team to track.
> Hi @kinarashah @kikisdeliveryservice - I have confirmed on the sig-network call that we need this reclassified to alpha for 1.20. It's a complete reimplementation that needs time to soak and be tested in alpha stage.
Hey @lachie83
Given the above, I presume this is still intended for alpha as-is? I don't see any outstanding PRs that need to merge; the work was already merged.
_Just a reminder that Code Freeze is coming up in 2 days on Thursday, November 12th. All PRs must be merged by that date, otherwise an Exception is required._
Thanks!
Kirsten
Hi, @kikisdeliveryservice - yes, IPv4/IPv6 dual-stack support (reimplemented) will be alpha for 1.20.
Here's the progress we have for this enhancement:
1) Code is merged from https://github.com/kubernetes/kubernetes/pull/91824 - will be alpha for 1.20
2) Documentation updates covering that code change are in https://github.com/kubernetes/website/pull/24725/ - reviewed and merged into the dev-1.20 branch
Is there anything else needed for 1.20 that we haven't completed on this enhancement?
@bridgetkromhout Thanks for the clear update, you're all good!
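For anyone trying the reimplemented alpha in 1.20, a minimal sketch of a dual-stack Service using the new API fields might look like the following (the name and selector are placeholders; this assumes the cluster is configured with dual-stack pod and service CIDRs and the IPv6DualStack feature gate enabled):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-dual-stack-service     # placeholder name
spec:
  ipFamilyPolicy: PreferDualStack # SingleStack | PreferDualStack | RequireDualStack
  ipFamilies:                     # request order; first entry is the primary family
  - IPv6
  - IPv4
  selector:
    app: my-app                   # placeholder selector
  ports:
  - protocol: TCP
    port: 80
```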
It looks like `LoadBalancerIP` in `ServiceSpec` is not part of the dual-stack implementation yet. Is there any plan to support it, or did I miss it?
Hi @chenwng - Changes to cloud provider code for load balancers are currently out of scope, as defined in the KEP here - https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/20180612-ipv4-ipv6-dual-stack.md#load-balancer-operation.
You can help by providing your use case and suggested changes to understand and decide if we need to make any modifications to the KEP.
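To illustrate why this comes up: `loadBalancerIP` in `ServiceSpec` is a single string, so a Service of type LoadBalancer can only request one address of one family today (a sketch with placeholder values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service          # placeholder name
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10 # one IPv4 *or* one IPv6 address - not both
  selector:
    app: my-app                # placeholder selector
  ports:
  - port: 80
```

Requesting per-family load-balancer IPs is what the KEP linked below explores.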
@chenwng There is a KEP being worked on for `LoadBalancerIPs` in dual-stack clusters - https://github.com/kubernetes/enhancements/pull/1992
Thanks for the info, @aramase, @lachie83.