Is there currently a way to customize the ports exposed by the minikube container when running on Linux with the Docker driver?
Previously, I was using the VirtualBox driver and ran the following commands to expose the ports I needed to access:
vboxmanage controlvm "minikube" natpf1 "didowi-web,tcp,127.0.0.1,30000,,30000"
vboxmanage controlvm "minikube" natpf1 "didowi-web-classic,tcp,127.0.0.1,4200,,30000"
vboxmanage controlvm "minikube" natpf1 "didowi-database,tcp,127.0.0.1,30300,,30300"
vboxmanage controlvm "minikube" natpf1 "didowi-database-classic,tcp,127.0.0.1,5984,,30300"
vboxmanage controlvm "minikube" natpf1 "didowi-gate,tcp,127.0.0.1,30100,,30100"
vboxmanage controlvm "minikube" natpf1 "didowi-gate-debug,tcp,127.0.0.1,30101,,30101"
vboxmanage controlvm "minikube" natpf1 "didowi-cli-debug,tcp,127.0.0.1,30200,,30200"
vboxmanage controlvm "minikube" natpf1 "didowi-ingress-http,tcp,127.0.0.1,30480,,30480"
vboxmanage controlvm "minikube" natpf1 "didowi-ingress-https,tcp,127.0.0.1,30443,,30443"
This allowed me to easily expose the ports used by my ingress within the Kubernetes cluster. It felt clean: I could add and remove rules easily, through a stable solution with stable ports, without having to fiddle with complex configs.
Thanks to this, I could access https://whatever.local:30443 (stable name/stable port) and have access to all the services exposed by my ingress; whatever.local being in the hosts file and also configured within the app.
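For reference, the name-resolution side of this is just a hosts-file entry along these lines (the exact hostname is whatever the app is configured with):

```
# /etc/hosts - maps the stable name to the loopback address
# that the VirtualBox rules above bind to
127.0.0.1   whatever.local
```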
If it's not currently possible, then do you think it could be added?
I don't think there is a (user-accessible) way to add ports to a running container in Docker:
$ docker port minikube
22/tcp -> 127.0.0.1:32771
2376/tcp -> 127.0.0.1:32770
5000/tcp -> 127.0.0.1:32769
8443/tcp -> 127.0.0.1:32768
You can work around it with kubectl port-forward (sketch below), but we don't expose any -p flags (yet?)
// control plane specific options
params.PortMappings = append(params.PortMappings,
    oci.PortMapping{
        ListenAddress: oci.DefaultBindIPV4,
        ContainerPort: int32(params.APIServerPort),
    },
    oci.PortMapping{
        ListenAddress: oci.DefaultBindIPV4,
        ContainerPort: constants.SSHPort,
    },
    oci.PortMapping{
        ListenAddress: oci.DefaultBindIPV4,
        ContainerPort: constants.DockerDaemonPort,
    },
    oci.PortMapping{
        ListenAddress: oci.DefaultBindIPV4,
        ContainerPort: constants.RegistryAddonPort,
    },
)
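For instance, a minimal sketch of the port-forward workaround, assuming a Service named didowi-web serving port 30000 (name and port borrowed from the VirtualBox rules quoted above):

```
# Forward host port 30000 to port 30000 of the (assumed) didowi-web service.
kubectl port-forward service/didowi-web 30000:30000
# Adding --address 0.0.0.0 would make it listen on all host interfaces
# instead of only loopback.
```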
There was no such flag before either, but VirtualBox looks more flexible than Docker here?
The situation is even worse on Docker Desktop, because there you have the VM network too.
Indeed, ports have to be exposed when containers are created (afaik).
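To illustrate the limitation with plain Docker (a sketch, not minikube-specific; container name and image are placeholders):

```
# Publishing works only at creation time:
docker run -d --name web -p 127.0.0.1:8080:80 nginx
# There is no "docker port add"; to change the mappings, the container
# has to be removed and recreated with different -p flags:
docker rm -f web
docker run -d --name web -p 127.0.0.1:8080:80 -p 127.0.0.1:8443:443 nginx
```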
With VirtualBox it works fine, since it is possible to create port forwarding rules, which is a great match for my use case: I have multiple ingresses, but more importantly I use and expose different services/ports during development, so that I can fully develop and debug code running in my development containers.
You could of course just continue to use the VirtualBox driver.
@afbjorklund yes, of course, that's what I'm doing for now. I was actually not aware that the default had switched to the docker driver; I probably missed it in the release notes.
The thing is that I'd hoped to use fewer resources by switching from the virtualbox driver to the docker one.
But indeed, for now at least I can keep on working ;-))
@afbjorklund I'd like to access the k8s API running in the minikube container on more interfaces than just loopback. Inspecting the minikube container, I saw that the API port is only exposed on the loopback interface:
...
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32788"
}
]
...
Can an option to configure the "HostIp" field be provided so I can set it to match the IP of the desired interface?
EDIT: It can be done in Docker when starting the container (https://docs.docker.com/config/containers/container-networking/#published-ports)
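A sketch of what the linked docs describe, with 192.168.1.10 and the image name standing in as placeholders:

```
# Bind the published port to a specific host interface instead of loopback:
docker run -d -p 192.168.1.10:8443:8443 some-image
# Or listen on all interfaces:
docker run -d -p 0.0.0.0:8443:8443 some-image
```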
@bolipereira:
Minikube is only intended for local development; there are other options for deploying a publicly available cluster.
We should provide some better ingress options for exposing apps, but I'm not so sure about the apiserver itself.
Maybe you could describe your use case in a new issue?
Is it something similar to the generic driver #4733 (for none)?
@afbjorklund I'll open a new issue if you think it makes sense to support my use case:
Currently, I need to SSH into the machine running the cluster to interact with the k8s API. What I'd like to do is issue kubectl commands to the cluster without having to access it remotely with SSH. That would be possible by exposing the apiserver port on the host. I don't think enabling it by default would be sensible, but I feel that I should have the option to expose the container ports on a network interface other than loopback.
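For now I work around it roughly like this (a sketch, assuming the host port 32788 from the inspect output above; user and server names are placeholders):

```
# Tunnel the loopback-only apiserver port from the server to my machine.
ssh -N -L 8443:127.0.0.1:32788 user@linux-server &
# Point kubectl at the tunnel (reusing the cluster's kubeconfig
# credentials; certificate/SAN details elided here).
kubectl --server=https://127.0.0.1:8443 get nodes
```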
So if I understand you correctly, it works OK on the workstation but you _also_ want to access it externally (from the laptop).
This scenario is slightly different from the one that I described, where kubernetes is running on a dedicated Linux server.
@afbjorklund Yes, that's basically it. Do you think adding this feature makes sense? Should I open another issue if that's the case?
That seems like a good feature to add