Follow the Quickstart docs, and it prints "Hello World" - assuming there is an intention to support microk8s, which might not be the case...
Exits with:

```
FATA[0033] exiting dev mode because first build failed: build failed: building [skaffold-example]: build artifact: denied: requested access to the resource is denied
```
11. Based on #3486, edit `skaffold.yaml` and `k8s-pod.yaml`:
```yaml
# skaffold.yaml
apiVersion: skaffold/v2alpha2
kind: Config
build:
  local:
    push: true
    useDockerCLI: true
  artifacts:
    - image: 127.0.0.1:32000/skaffold-example
deploy:
  kubectl:
    manifests:
      - k8s-*
# k8s-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: getting-started
spec:
  containers:
    - name: getting-started
      image: 127.0.0.1:32000/skaffold-example
```

```
Listing files to watch...
 - 127.0.0.1:32000/skaffold-example
Generating tags...
 - 127.0.0.1:32000/skaffold-example -> 127.0.0.1:32000/skaffold-example:v1.2.0-59-g4c38a79-dirty
Checking cache...
 - 127.0.0.1:32000/skaffold-example: Found. Pushing
The push refers to repository [127.0.0.1:32000/skaffold-example]
4b23326d73d7: Preparing
531743b7098c: Preparing
531743b7098c: Layer already exists
4b23326d73d7: Layer already exists
v1.2.0-59-g4c38a79-dirty: digest: sha256:5058f56bf126d7ec7968de8ce5415a5a7b9cb3b153bc50f9dc7284a169128116 size: 739
Tags used in deployment:
 - 127.0.0.1:32000/skaffold-example -> 127.0.0.1:32000/skaffold-example:v1.2.0-59-g4c38a79-dirty@sha256:5058f56bf126d7ec7968de8ce5415a5a7b9cb3b153bc50f9dc7284a169128116
Starting deploy...
 - pod/getting-started created
Cleaning up...
 - pod "getting-started" deleted
FATA[0009] starting logger: initializing aggregate pod watcher: getting k8s client: getting client config for Kubernetes client: error creating REST client config in-cluster: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
```
Any suggestions please?
Hi @sfhardman, thanks for the detailed issue!
Step 11 - you could just pass in the registry as well with --default-repo 127.0.0.1:32000 as a flag.
Not sure yet why you need the dockerCLI vs why the go-containerregistry doesn't work - probably you'd have to add 127.0.0.1:32000 to insecure-registries?
Step 12 - now that is an odd one - skaffold expects a `~/.kube/config` to exist, i.e. if an existing `kubectl` would work, then skaffold should work. From what I gather, microk8s does not rely on that file but manages its own and provides a `microk8s.kubectl` command. What happens if you run `sudo microk8s.kubectl config view --raw > $HOME/.kube/microk8s.cfg`, and then run skaffold with `--kube-context=microk8s --kubeconfig=$HOME/.kube/microk8s.cfg`?
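Put together, the suggested workaround might look like this (a sketch only; it assumes a snap-based microk8s install, that `$HOME/.kube` exists, and combines the registry flag from the step 11 suggestion):

```shell
# Export microk8s' managed kubeconfig to a file skaffold can read
# (microk8s keeps its own config instead of writing ~/.kube/config).
sudo microk8s.kubectl config view --raw > $HOME/.kube/microk8s.cfg

# Point skaffold explicitly at that file and its context.
skaffold dev \
  --kube-context=microk8s \
  --kubeconfig=$HOME/.kube/microk8s.cfg \
  --default-repo=127.0.0.1:32000
```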
Thanks @balopat I've been trying to do the same thing the OP is trying.
I tried your suggestions (just a side note: the flag is `--kubeconfig`, not `--kube-config`) and it still does not work. This is what I got:
```
INFO[0000] starting gRPC server on port 50051
INFO[0000] starting gRPC HTTP server on port 50052
INFO[0000] Skaffold &{Version:v1.2.0 ConfigVersion:skaffold/v2alpha2 GitVersion: GitCommit:80f82f42fe271aea1058f4a37776d52ab5a7c441 GitTreeState:clean BuildDate:2020-01-17T01:04:55Z GoVersion:go1.13.6 Compiler:gc Platform:linux/amd64}
INFO[0000] Activated kube-context "microk8s"
INFO[0000] Using kubectl context: microk8s
Listing files to watch...
 - localhost:32000/skaffold-example
INFO[0000] List generated in 1.898702ms
Generating tags...
 - localhost:32000/skaffold-example -> localhost:32000/skaffold-example:v1.2.0-76-g7978e172e-dirty
INFO[0000] Tags generated in 6.697932ms
Checking cache...
 - localhost:32000/skaffold-example: Found Remotely
INFO[0000] Cache check complete in 3.344587ms
Tags used in deployment:
 - localhost:32000/skaffold-example -> localhost:32000/skaffold-example:v1.2.0-76-g7978e172e-dirty@sha256:bcf3f33232460aa3ebd678163f498d3617b22352e095481c10870564ebf98793
Starting deploy...
 - pod/getting-started created
INFO[0000] Deploy complete in 638.878794ms
Cleaning up...
 - pod "getting-started" deleted
INFO[0004] Cleanup complete in 3.692586598s
FATA[0004] starting logger: initializing aggregate pod watcher: getting k8s client: getting client config for Kubernetes client: error creating REST client config for kubeContext 'microk8s': context "microk8s" does not exist
```
So it turns out the issue here was that I was using `kubectl` as a snap alias for `microk8s.kubectl`. I had to run the following:

```shell
sudo snap unalias kubectl
sudo snap install kubectl --classic  # installs a standalone kubectl; it doesn't need to be done using snap
microk8s.kubectl config view --raw > $HOME/.kube/config
```

Then everything worked just fine.
@dungahk Would you mind posting the command line that's working for you? I'm using:

```shell
skaffold dev --default-repo=127.0.0.1:32000 --kubeconfig="$HOME/.kube/microk8s.cfg" --kube-context=microk8s --insecure-registry=127.0.0.1:32000
```

which is failing, mostly with:

```
FATA[0002] starting logger: initializing aggregate pod watcher: getting k8s client: getting client config for Kubernetes client: error creating REST client config for kubeContext 'microk8s': context "microk8s" does not exist
```

but every three or four runs with:

```
rpc error: code = Unknown desc = failed to resolve image "127.0.0.1:32000/skaffold-example@sha256:5058f56bf126d7ec7968de8ce5415a5a7b9cb3b153bc50f9dc7284a169128116": no available registry endpoint: failed to do request: Head https://127.0.0.1:32000/v2/skaffold-example/manifests/sha256:5058f56bf126d7ec7968de8ce5415a5a7b9cb3b153bc50f9dc7284a169128116: http: server gave HTTP response to HTTPS client
```
Thanks for the feedback @dungahk! @sfhardman does this solve your problem too?
If it does, we could document this in our getting started guides for local development.
@sfhardman -
Did you run `microk8s.kubectl config view --raw > $HOME/.kube/microk8s.cfg`?
Regarding the `server gave HTTP response to HTTPS client` error - if you're using `--insecure-registry`, can you try running it without the `useDockerCLI: true` setting?
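Concretely, that suggestion amounts to dropping the flag from the step 11 config so skaffold pushes with go-containerregistry instead of the Docker CLI (a sketch, untested on microk8s):

```yaml
# skaffold.yaml (sketch): same as step 11, but without useDockerCLI
apiVersion: skaffold/v2alpha2
kind: Config
build:
  local:
    push: true
  artifacts:
    - image: 127.0.0.1:32000/skaffold-example
deploy:
  kubectl:
    manifests:
      - k8s-*
```

combined with `skaffold dev --insecure-registry=127.0.0.1:32000` so the push to the local registry goes over plain HTTP.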
@sfhardman This is my skaffold file (the only change is the `localhost:32000` added):

```yaml
apiVersion: skaffold/v2alpha2
kind: Config
build:
  artifacts:
    - image: localhost:32000/skaffold-example
deploy:
  kubectl:
    manifests:
      - k8s-*
```

This is my k8s-pod.yaml file; again, the only change is the `localhost:32000` added:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: getting-started
spec:
  containers:
    - name: getting-started
      image: localhost:32000/skaffold-example
```
The command I run now is `skaffold dev`, simple as that.
I suppose I don't even need to add `localhost:32000` to the files and could instead use the CLI parameter `--default-repo`, but I haven't had the chance to check that yet.
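For reference, with `--default-repo` the files could stay close to the upstream example (a sketch, assuming the example's image name is plain `skaffold-example`; skaffold rewrites image references by prefixing the default repo):

```yaml
# skaffold.yaml (sketch): no registry baked into the image name
apiVersion: skaffold/v2alpha2
kind: Config
build:
  artifacts:
    - image: skaffold-example
deploy:
  kubectl:
    manifests:
      - k8s-*
```

Run as `skaffold dev --default-repo=localhost:32000`; the image should then resolve to `localhost:32000/skaffold-example` at build and deploy time.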
Thanks - I now have things working. Things seem to be a bit sensitive to how the local registry is referred to: it looks like it wants to be called `localhost`, not referenced by IP address, and `localhost` can't resolve to an IPv6 address.
The getting started example works without changes to the yaml with these extra steps:
```shell
microk8s.kubectl config view --raw > $HOME/.kube/config
skaffold dev --default-repo=localhost:32000
```

@dungahk

> The command I run now is `skaffold dev`, simple as that.
Yeah, I'm just running `skaffold dev` as-is, given I have similarly declarative configs. Thank you very much for posting your fix - I, too, was using `kubectl` as an alias and suspect I would have squandered an hour debugging had I not stumbled across your comment.