Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Feature request
I'd like to run minikube in one docker container and connect to it from another.
I'd like to be able to run an integration test layer against minikube, with the tests connecting to minikube from their own container, but the config minikube generates seems to use a random IP, which complicates this.
version: "2"
services:
  minikube:
    image: minikube
    build:
      context: "."
      dockerfile: "docker/minikube"
    volumes:
      - "/etc/ssl/certs:/etc/ssl/certs"
      - "/var/run/docker.sock:/var/run/docker.sock"
    privileged: true
  golang-tests:
    ...
Please provide the following details:
Environment:
Trying to run minikube in a docker container and connect to it from another docker container.
minikube version: v0.25.0
VM Driver = none
ISO Version = v1.9.0
docker-compose version 1.18.0, build 8dd22a9
docker version: 1.13.1
What happened:
I'm unable to connect to minikube from another linked container via
kubectl get pods --server=minikube:8080
The connection to the server minikube:8080 was refused - did you specify the right host or port?
What you expected to happen:
A list of pods returned.
How to reproduce it (as minimally and precisely as possible):
With the following Dockerfile

# build stage: fetch kubectl, minikube and the docker CLI
FROM alpine:latest
ADD https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/linux/amd64/kubectl kubectl
RUN chmod +x kubectl
ADD https://storage.googleapis.com/minikube/releases/v0.25.0/minikube-linux-amd64 minikube
RUN chmod +x minikube
ADD https://download.docker.com/linux/static/stable/x86_64/docker-17.09.0-ce.tgz docker-17.09.0-ce.tgz
RUN tar xzvf docker-17.09.0-ce.tgz

# runtime stage: copy the binaries from the build stage
FROM debian:stable-slim
COPY --from=0 kubectl minikube docker/docker /usr/local/bin/
COPY start.sh start.sh
CMD ["sh", "./start.sh"]
start.sh
#!/bin/sh
/usr/local/bin/minikube start --vm-driver=none
/usr/local/bin/minikube logs -f
With the following docker-compose.yml
version: "2"
services:
  minikube:
    image: minikube
    build:
      context: "."
      dockerfile: "docker/minikube"
    volumes:
      - "/etc/ssl/certs:/etc/ssl/certs"
      - "/var/run/docker.sock:/var/run/docker.sock"
    privileged: true
  get-pods:
    image: "google/cloud-sdk:190.0.1"
    links:
      - "minikube"
    command: ["kubectl", "get", "pods", "--server=minikube:8080"]
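One possible workaround (an untested sketch, not something the repro above uses): share the kubeconfig and certificates that minikube start generates with the second container via named volumes, so kubectl uses whatever address and port were actually written instead of the hard-coded minikube:8080. The volume names and the /root/.kube and /root/.minikube mount paths are assumptions based on minikube's default file locations:

version: "2"
services:
  minikube:
    image: minikube
    build:
      context: "."
      dockerfile: "docker/minikube"
    volumes:
      - "/etc/ssl/certs:/etc/ssl/certs"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "kube-config:/root/.kube"        # kubeconfig written by minikube start (assumed path)
      - "minikube-data:/root/.minikube"  # certificates referenced by that kubeconfig (assumed path)
    privileged: true
  get-pods:
    image: "google/cloud-sdk:190.0.1"
    links:
      - "minikube"
    volumes:
      - "kube-config:/root/.kube"
      - "minikube-data:/root/.minikube"
    command: ["kubectl", "get", "pods"]
volumes:
  kube-config:
  minikube-data:

With the config shared this way, the get-pods service would not need a --server flag at all, since the generated kubeconfig already carries the server address and certificate paths.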
Then run docker-compose run get-pods
Sounds like you are trying to do docker-in-docker, which is not what minikube does...
But you should be able to use minikube from other containers running in the same VM
I'm trying to connect to minikube to test some stuff that relies on kubernetes functionality.
The simplest version of what I'm trying to do is
minikube start
in one container, and
kubectl get pods
in another.
But I believe the port is randomized on minikube start, and the kube config file would need to be shared with the other container as well?
I'm probably missing something obvious?
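Sharing the config is probably the simpler half: if the kubeconfig and the /root/.minikube certificates generated inside the minikube container are mounted into the test container (for example via the named volumes sketched earlier), the test side no longer needs to guess the port. A rough, unverified sketch of what the test container would run:

# assumes /root/.kube and /root/.minikube from the minikube container are mounted here
export KUBECONFIG=/root/.kube/config
kubectl config view --minify   # shows which server address/port minikube actually generated
kubectl get pods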
I'm still trying to parse what "run minikube in a container" means. Normally you will run it in a VM ?
Meaning just the controller, using an existing docker daemon and vm-driver=none. Anything minikube runs in terms of pods runs via the existing docker daemon, but the actual kube service runs in its own docker container rather than on the host machine.
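If the randomized/guessed address remains the sticking point, later minikube releases also expose flags to pin the apiserver name and port so that a fixed --server value can work; I have not verified these flags against v0.25.0, and the certificate paths below may differ by version:

# in the minikube container (flags assumed to be available in this minikube version)
minikube start --vm-driver=none --apiserver-name=minikube --apiserver-port=8443

# in the test container, pointing kubectl at the pinned address
kubectl get pods --server=https://minikube:8443 \
  --certificate-authority=/root/.minikube/ca.crt \
  --client-certificate=/root/.minikube/client.crt \
  --client-key=/root/.minikube/client.key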
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
May I /reopen ? @jimmiebtlr's explanation of what he wants sounds good to me and I would like to have this feature too :blush:.
If someone is willing to write up a tutorial, one of these would make a perfect home for it:
https://minikube.sigs.k8s.io/docs/tutorials/
https://minikube.sigs.k8s.io/docs/reference/networking/