Currently we delegate all image building to Docker, using minikube docker-env.
This requires the user to install Docker on their machine, and then learn how to set it up...
https://kubernetes.io/docs/tutorials/hello-minikube/
FROM node:6.14.2
EXPOSE 8080
COPY server.js .
CMD node server.js
var http = require('http');

// Respond to every request with a plain "Hello World!"
var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);
www.listen(8080);
For more information on the docker build command, read the Docker documentation.
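Today, building that image for minikube means pointing a local docker client at the VM, roughly like this (the existing docker-env workflow, run from the directory containing the Dockerfile and server.js):

$ eval $(minikube docker-env)
$ docker build -t hello-node .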
If the user doesn't already have a local installation of docker, they can't build the image!
We could do better, by providing an abstraction that will simply do the build for them:
$ minikube build -- -t hello-node minikube
💾 Downloading docker 18.09.8
Sending build context to Docker daemon 3.072kB
Step 1/4 : FROM node:6.14.2
6.14.2: Pulling from library/node
3d77ce4481b1: Pull complete
7d2f32934963: Pull complete
0c5cf711b890: Pull complete
9593dc852d6b: Pull complete
4e3b8a1eb914: Pull complete
ddcf13cc1951: Pull complete
2e460d114172: Pull complete
d94b1226fbf2: Pull complete
Digest: sha256:62b9d88be259a344eb0b4e0dd1b12347acfe41c1bb0f84c3980262f8032acc5a
Status: Downloaded newer image for node:6.14.2
---> 00165cd5d0c0
Step 2/4 : EXPOSE 8080
---> Running in 2a302085e433
Removing intermediate container 2a302085e433
---> 9172f65af846
Step 3/4 : COPY server.js .
---> 035625e5e23f
Step 4/4 : CMD node server.js
---> Running in 1771091ed23a
Removing intermediate container 1771091ed23a
---> 250208286ec5
Successfully built 250208286ec5
Successfully tagged hello-node:latest
Then the image is built right on the VM, and ready to be used from the minikube pods:
$ minikube ssh sudo crictl images hello-node
IMAGE               TAG                 IMAGE ID            SIZE
hello-node          latest              250208286ec58       660MB
$ minikube kubectl -- create deployment hello-node --image=hello-node
💾 Downloading kubectl v1.15.0
deployment.apps/hello-node created
As usual, the image pull policy has to be edited when using local images rather than a registry.
containers:
- image: hello-node:latest
imagePullPolicy: IfNotPresent
name: hello-node
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
Change the default Always to IfNotPresent, as per https://kubernetes.io/docs/concepts/containers/images/
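That edit could also be done non-interactively with a JSON patch, for example (a sketch, assuming the deployment is named hello-node as above):

$ minikube kubectl -- patch deployment hello-node --type=json \
    -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'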
Eventually we could improve this by not using the Docker daemon, but e.g. buildah:
https://github.com/containers/libpod/blob/master/docs/podman-build.1.md
https://github.com/containers/buildah/blob/master/docs/buildah-bud.md
That way the user doesn't have to have dockerd running, but can use containerd or cri-o.
This project could also be interesting, eventually:
https://github.com/GoogleContainerTools/kaniko
That is, building the images in Kubernetes instead?
With enough kernel support, also doable with buildah.
https://opensource.com/article/19/3/tips-tricks-rootless-buildah
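A buildah-based build would presumably be run remotely on the VM, in the same spirit (a sketch, assuming buildah is present on the node and the build context is already there):

$ minikube ssh -- sudo buildah bud -t hello-node /path/to/context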
+1
Nice talk about the general problem space (i.e. beyond just what minikube can provide and uses):
https://kccnceu18.sched.com/event/Dqu1/building-docker-images-without-docker-matt-rickard-google-intermediate-skill-level-slides-attached
It could be implemented in the same way as minikube kubectl. So simply running minikube docker ... will relay everything to the docker client / daemon inside the VM.
It does require the docker argument, which is slightly longer to type than just minikube build ... but it does provide a clear purpose (docker-based commands). It will also do everything the docker client can, instead of just build.
This suggestion would mostly just save having to do minikube ssh first...
Problem point with this would be the context for any command other than build, since the user would expect their CWD on the host to be the context, but in reality it would be the CWD inside the VM. As long as a user is in their homedir it could be mapped, of course (given that the homedir is mounted under most vm-drivers, though not all?).
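In other words, something along these lines (a sketch of the suggested pass-through; minikube docker does not exist today, the kubectl line is just for comparison):

$ minikube kubectl -- get pods                # existing: relays arguments to kubectl
$ minikube docker -- build -t hello-node .    # suggested: would relay to the docker client in the VM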
It could be implemented in the same way as minikube kubectl
It _was_ implemented the same way, the only difference being the use of .tgz and .zip instead of .exe.
See 8983b8d73fe7e1b7a4eb42d47a8ae7369c6849e4
Problem point with this would be context for any command other than build,
The use case was _only_ build, but it still remains to handle the "build context" for podman.
As long as we are using the docker client on the host, it will transparently handle directories.
But when running docker build or podman build on the VM, we need to transport the files.
This is done by creating a tar archive on the client, and then copying that to the machine.
There are some other minor details as well, like handling .dockerignore files. But nothing much.
https://docs.docker.com/engine/reference/commandline/build/#build-with-path
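Conceptually that boils down to streaming a tarball of the directory to the remote build, something like this sketch (assuming minikube ssh passes stdin through, and leaving out .dockerignore handling):

$ tar -czf - Dockerfile server.js | minikube ssh -- docker build -t hello-node -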
I implemented the podman version now, although it does not transport any build context.
So it can only build files that are already on the VM, or build tarballs provided by a URL.
The Docker version sends the command over port 2376 to dockerd, using the docker client.
docker $(minikube docker-config) build ...
Where "docker-config" is an imaginary command that does the same as docker-machine config
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM busybox
---> 19485c79a9bb
Step 2/2 : RUN true
---> Using cache
---> 5b5b3c378749
Successfully built 5b5b3c378749
Successfully tagged testbuild:latest
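As for the imaginary docker-config above: docker-machine config just prints the TLS connection flags, so a minikube equivalent could hypothetically print something like this (illustrative values only, the command does not exist):

$ minikube docker-config
--tlsverify
--tlscacert="/home/user/.minikube/certs/ca.pem"
--tlscert="/home/user/.minikube/certs/cert.pem"
--tlskey="/home/user/.minikube/certs/key.pem"
-H=tcp://192.168.99.100:2376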
The Podman version is instead running the podman command remotely, using sudo over ssh.
minikube ssh -- sudo podman build ...
This means that we don't need to install a client, and don't need to have a daemon running.
STEP 1: FROM busybox
STEP 2: RUN true
--> Using cache 4fdf7ea3e9292032ccf15cd1fed43cf483724640923b48511c906b9ce632fcd0
STEP 3: COMMIT testbuild
4fdf7ea3e9292032ccf15cd1fed43cf483724640923b48511c906b9ce632fcd0
They both take more or less the same flags, such as -t for tagging the container image.
Handling a directory ("build context") is done by creating a tar stream, including the Dockerfile.
Thanks @afbjorklund. I was able to achieve exactly what I wanted based on your notes, moving the build step inside minikube instead of relying on my co-developers' local docker environments being correctly set up.
@afbjorklund are you still working on this one?
@josedonizetti also showed interest in this.
this would be a cool feature.
@medyagh : I was trying to build critical mass for including it as a feature, over initial concerns.
Seems like we have it, so I can do a rebase and finish that "build context" implementation for podman...
Something like this: https://github.com/fsouza/go-dockerclient/blob/master/tar.go#L20
So that you can build a directory, and it will automatically create a tarball and scp it...
Adding varlink support to the minikube ISO and using podman-remote/podman-env (#6350) means that one does not _have_ to use podman over ssh (directly) anymore. It will handle the build context.
We should probably still support the use case, for when one is not able to run any kind of local docker client or podman remote whatsoever. But it is less urgent now, when there is an alternative for both.
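With varlink and podman-env, the remote build looks something like this (a sketch, assuming the setup from #6350; newer podman versions use podman --remote instead of the separate podman-remote binary):

$ eval $(minikube podman-env)
$ podman-remote build -t hello-node .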
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
This is still a good thing to have. To clarify:
minikube already has 5 different ways to build images: https://minikube.sigs.k8s.io/docs/handbook/pushing/
but this issue is about adding a feature that removes the dependency on docker on the user's host.
We do still have a way to build images using minikube ssh, which means the user won't have to install docker, but they would still need to transfer their Dockerfile and files into minikube to use minikube ssh.
So this issue proposes that we hide that from the user, for a lower-friction build.
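Today that manual transfer amounts to something like this (a sketch; it assumes a VM-based driver where the node user is docker, and uses minikube ssh-key and minikube ip for the connection details):

$ scp -i $(minikube ssh-key) -r . docker@$(minikube ip):/home/docker/hello-node
$ minikube ssh -- docker build -t hello-node /home/docker/hello-node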
There are actually only two ways of building (docker and podman), maybe four if you count ssh.
Please note that the user will still run a local docker (cli) or podman-remote "under the hood".
While we _could_ have re-implemented the client inside of minikube, it wasn't really worth doing...
The implementation handles both scenarios though, since it was written before varlink worked.
For some scenarios, it could be good to avoid running a local client (beyond just tar and ssh)
The main idea was the same as with minikube kubectl, to shield the user from all the details.
Having to log in to the master node using ssh is considered a workaround here.
Similar to having to log in to the master just to run kubectl, it should not be needed!
Instead the user is supposed to be able to edit the Dockerfile and the yaml file locally.
And then use the provided "build" and "kubectl" commands, to talk to their cluster.
The same method that is used for running docker or podman remotely, also works for kaniko.
https://github.com/GoogleContainerTools/kaniko#using-kaniko
echo -e 'FROM alpine \nRUN echo "created from standard input"' > Dockerfile | tar -cf - Dockerfile | gzip -9 | docker run \
  --interactive -v $(pwd):/workspace gcr.io/kaniko-project/executor:latest \
  --context tar://stdin \
  --destination=<gcr.io/$project/$image:$tag>
This build wrapper could handle all those ugly bits (creating and piping the tarball) for you...
@afbjorklund the idea of docker $(minikube docker-config) build ... sounds great. is there any progress on that?
No, forgot about it. There was the workaround with the subshell: (eval $(minikube docker-env); docker ...)
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
Also needs an implementation for buildkit, for use with the containerd container runtime.
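For the containerd runtime, a buildkit-based build would presumably look something like this (a sketch, assuming buildkitd is running on the node and the build context is already there):

$ minikube ssh -- sudo buildctl build --frontend=dockerfile.v0 \
    --local context=/path/to/context --local dockerfile=/path/to/context \
    --output type=image,name=docker.io/library/hello-node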