Minikube: How to access the etcd service (or other services) from the host OS?

Created on 19 Jun 2018 · 13 comments · Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG

Please provide the following details:

Environment:

  • Minikube version (use minikube version): v0.27.0
  • OS (e.g. from /etc/os-release): macOS Sierra
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.26.0
  • Install tools:
  • Others:
    The above can be generated in one go with the following commands (can be copied and pasted directly into your terminal):
minikube version
echo "";
echo "OS:";
cat /etc/os-release
echo "";
echo "VM driver": 
grep DriverName ~/.minikube/machines/minikube/config.json
echo "";
echo "ISO version";
grep -i ISO ~/.minikube/machines/minikube/config.json

What happened:
https://github.com/kubernetes/minikube/blob/master/docs/accessing_etcd.md
Following this, when I run minikube ssh -- "sudo /usr/local/bin/localkube --host-ip", I get:

sudo: /usr/local/bin/localkube: command not found

In fact, there is no local directory under /usr at all.

When I run ip addr, there is no IP address under vboxnet1:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:18:91:a8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 82034sec preferred_lft 82034sec
    inet6 fe80::a00:27ff:fe18:91a8/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:f8:08:7f brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
       valid_lft 1028sec preferred_lft 1028sec
    inet6 fe80::a00:27ff:fef8:87f/64 scope link 
       valid_lft forever preferred_lft forever
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/sit 0.0.0.0 brd 0.0.0.0
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:e1:a9:31:b3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:e1ff:fea9:31b3/64 scope link 
       valid_lft forever preferred_lft forever
7: veth92b20cc@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 9e:82:9c:06:06:4b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::9c82:9cff:fe06:64b/64 scope link 
       valid_lft forever preferred_lft forever
9: veth13bc058@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 6e:b0:ff:9e:af:45 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::6cb0:ffff:fe9e:af45/64 scope link 
       valid_lft forever preferred_lft forever
11: veth679cbc4@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether d2:8a:d1:e9:95:73 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::d08a:d1ff:fee9:9573/64 scope link 
       valid_lft forever preferred_lft forever

What you expected to happen:
I want to access etcd with etcdctl, but

error #0: dial tcp 127.0.0.1:4001: connect: connection refused  
error #1: dial tcp 127.0.0.1:2379: connect: connection refused

Where is etcd in minikube? I am sure I can access etcd in Kubernetes with localhost:2379, but that does not seem to work on minikube. How can I access etcd with etcdctl?

How to reproduce it (as minimally and precisely as possible):
As above

Output of minikube logs (if applicable):

Anything else do we need to know:
Thanks

good first issue  help wanted  kind/documentation  lifecycle/rotten  priority/backlog  2019q2

Most helpful comment

I hope it's not too late and that this is what you are looking for.
I was able to get to etcd with etcdctl on macOS Mojave 10.14.4:

$ etcdctl --version
etcdctl version: 3.3.13
API version: 2

minikube version
minikube version: v1.1.0

etcdctl --ca-file=ca.crt --key-file=server.key --cert-file=server.crt --endpoints https://<IP from etcd pod>:2379 member list
4c7f7dc22d568a00: name=minikube peerURLs=https://<IP from etcd pod>:2380 clientURLs=https://<IP from etcd pod>:2379 isLeader=true

crt & key files are from etcd pod

All 13 comments

You got

minikube ssh -- "sudo /usr/local/bin/localkube --host-ip"
sudo: /usr/local/bin/localkube: command not found

because localkube has been deprecated in recent versions of minikube. The current default bootstrapper is kubeadm. If you want to use localkube as the bootstrapper, you need to start minikube with

minikube start --bootstrapper=localkube
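
Under kubeadm, etcd runs as a static pod in the kube-system namespace instead. A quick way to locate it, as a sketch (the pod name etcd-minikube assumes the default node name), is:

# List the etcd static pod that kubeadm creates:
kubectl -n kube-system get pods -l component=etcd

# Inspect its args for the advertised client URL (port 2379):
kubectl -n kube-system describe pod etcd-minikube | grep client-urls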

vboxnet interfaces are created on the physical host, so you should run ip addr on your computer, not in the minikube VM.
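
For example, on the macOS host you could check the host-only interfaces with something like:

# Run these on the macOS host, not inside the minikube VM:
VBoxManage list hostonlyifs
ifconfig | grep -A 3 vboxnet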

Yeah, I checked that there is no localkube directory there. Actually, I do not care which bootstrapper it uses. I just want to access the minikube etcd with curl or etcdctl from my Mac; any ideas? I am sure I can access it with localhost inside the minikube VM, but I have no idea how to access it from outside.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

+1

I hope it's not too late and that this is what you are looking for.
I was able to get to etcd with etcdctl on macOS Mojave 10.14.4:

$ etcdctl --version
etcdctl version: 3.3.13
API version: 2

minikube version
minikube version: v1.1.0

etcdctl --ca-file=ca.crt --key-file=server.key --cert-file=server.crt --endpoints https://<IP from etcd pod>:2379 member list
4c7f7dc22d568a00: name=minikube peerURLs=https://<IP from etcd pod>:2380 clientURLs=https://<IP from etcd pod>:2379 isLeader=true

crt & key files are from etcd pod

@dpandhi-git thank you for providing these tips! This could be added under a new doc page, called "How to" or "Tutorials": how to connect to etcd. This would be useful for other users.

/assign rajalokan

@rajalokan - is this something you are still pursuing?

It might be nice to list a quick blurb about using ssh tunnels on https://minikube.sigs.k8s.io/docs/reference/networking/
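
As a rough sketch of that idea (untested; it assumes the etcd server certificate includes 127.0.0.1 in its subject alternative names, which kubeadm's defaults do), such a tunnel could look like:

# Forward local port 2379 to etcd inside the VM, using minikube's own
# SSH key and the default "docker" user:
ssh -i $(minikube ssh-key) -N -L 2379:127.0.0.1:2379 docker@$(minikube ip)

# In another terminal, point etcdctl at the tunnel; the client certs
# from the earlier comment are still required:
etcdctl --ca-file=ca.crt --key-file=server.key --cert-file=server.crt \
  --endpoints https://127.0.0.1:2379 member list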

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
