Hi,
I'm using the stable Helm chart (https://github.com/kubernetes/charts/tree/master/stable/kong) to run Kong in k8s.
I would like to run the service over HTTP (not HTTPS), but that's not possible with the current Helm chart (https://github.com/kubernetes/charts/blob/master/stable/kong/templates/service-kong-admin.yaml#L17), since admin.https exists in the default values.yaml.
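One way to double-check what the chart actually renders for the admin Service, without installing it, is to fetch the chart locally and run helm template over it (a sketch; rendered.yaml is just an example filename):
$ helm fetch stable/kong --untar
$ helm template ./kong > rendered.yaml   # then inspect the admin Service section in rendered.yaml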
-- Sven
+1
@svenmueller it has been fixed with Kong chart version 0.2.0. Please give it a try.
Closing this ticket, feel free to reopen if you run into any issue with the Kong chart.
@shashiranjan84
It does not work for me; here are the steps I followed:
$ helm repo update
$ helm search kong
NAME CHART VERSION APP VERSION DESCRIPTION
stable/kong 0.2.0 0.12.2 Kong is open-source API Gateway and Microservic...
$ helm install stable/kong --name kong --set=admin.useTLS=false,proxy.useTLS=false
Looking at the Kubernetes dashboard, I see:
Back-off restarting failed container
Liveness probe failed: Get https://172.17.0.9:8444/status: http: server gave HTTP response to HTTPS client
Maybe it should be this instead, but I get the same errors again:
helm install stable/kong --name kong --set=admin.useTLS=false,proxy.useTLS=false,livenessProbe.httpGet.scheme=HTTP
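To confirm which scheme the probes were actually rendered with, something like this should work (a sketch; it assumes the Deployment is named kong-kong, matching the pod names below):
$ kubectl get deployment kong-kong -o yaml | grep -B 2 -A 6 'Probe:'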
More details:
$ minikube version
minikube version: v0.25.0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-09T21:51:54Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"", Minor:"", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2018-01-26T19:04:38Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
$ helm version
Client: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
$ kubectl describe po/kong-kong-556666847d-gplwv
Name: kong-kong-556666847d-gplwv
Namespace: default
Node: minikube/192.168.64.17
Start Time: Thu, 15 Mar 2018 16:53:54 +0800
Labels: app=kong
pod-template-hash=1122224038
release=kong
Annotations: <none>
Status: Running
IP: 172.17.0.9
Controlled By: ReplicaSet/kong-kong-556666847d
Containers:
kong:
Container ID: docker://720bb48eb0bdb0cd6b684305316f86c0e715b9af9f7bf0ac656e1b435a3d84c0
Image: kong:0.12.2
Image ID: docker-pullable://kong@sha256:e0f313f0831daee2113f50dcf7c165a3820cfc38c4b333135feafcad24c1fdb2
Ports: 8444/TCP, 8443/TCP
State: Running
Started: Thu, 15 Mar 2018 17:12:05 +0800
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 15 Mar 2018 17:02:24 +0800
Finished: Thu, 15 Mar 2018 17:12:05 +0800
Ready: False
Restart Count: 3
Liveness: http-get https://:admin/status delay=300s timeout=60s period=60s #success=1 #failure=5
Readiness: http-get https://:admin/status delay=300s timeout=60s period=60s #success=1 #failure=5
Environment:
KONG_ADMIN_LISTEN: 0.0.0.0:8444
KONG_ADMIN_SSL: off
KONG_PROXY_LISTEN: 0.0.0.0:8443
KONG_SSL: off
KONG_NGINX_DAEMON: off
KONG_PROXY_ACCESS_LOG: /dev/stdout
KONG_ADMIN_ACCESS_LOG: /dev/stdout
KONG_PROXY_ERROR_LOG: /dev/stderr
KONG_ADMIN_ERROR_LOG: /dev/stderr
KONG_DATABASE: postgres
KONG_PG_HOST: kong-postgresql
KONG_PG_PASSWORD: <set to the key 'postgres-password' in secret 'kong-postgresql'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7fb48 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-7fb48:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7fb48
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned kong-kong-556666847d-gplwv to minikube
Normal SuccessfulMountVolume 19m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-7fb48"
Normal Pulling 19m kubelet, minikube pulling image "kong:0.12.2"
Normal Pulled 11m kubelet, minikube Successfully pulled image "kong:0.12.2"
Warning BackOff 11m (x2 over 11m) kubelet, minikube Back-off restarting failed container
Warning Unhealthy 1m (x5 over 5m) kubelet, minikube Liveness probe failed: Get https://172.17.0.9:8444/status: http: server gave HTTP response to HTTPS client
Warning Unhealthy 1m (x5 over 5m) kubelet, minikube Readiness probe failed: Get https://172.17.0.9:8444/status: http: server gave HTTP response to HTTPS client
Normal Created 1m (x4 over 11m) kubelet, minikube Created container
Normal Started 1m (x4 over 11m) kubelet, minikube Started container
Normal Pulled 1m (x3 over 11m) kubelet, minikube Container image "kong:0.12.2" already present on machine
Normal Killing 1m kubelet, minikube Killing container with id docker://kong:Container failed liveness probe.. Container will be killed and recreated.
You would need to update the probe scheme type to HTTP.
@shashiranjan84
Yes, I did, but it still doesn't work:
helm install stable/kong --name kong --set=admin.useTLS=false,proxy.useTLS=false,livenessProbe.httpGet.scheme=HTTP
You're missing the readinessProbe scheme change. I just tested this command:
helm install stable/kong --name kong --set=admin.useTLS=false,proxy.useTLS=false,livenessProbe.httpGet.scheme=HTTP,readinessProbe.httpGet.scheme=HTTP
Give it 3–4 minutes to reach the desired state (a values-file equivalent is sketched after the output below).
(master) ✔ [charts] kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
po/kong-kong-6f66484c69-fvb9l 1/1 Running 2 15m
po/kong-postgresql-f57cb6745-vwjwg 1/1 Running 0 15m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kong-kong-admin NodePort 10.103.215.157 <none> 8444:32152/TCP 15m
svc/kong-kong-proxy NodePort 10.102.199.83 <none> 8443:31258/TCP 15m
svc/kong-postgresql ClusterIP 10.100.116.82 <none> 5432/TCP 15m
svc/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19d
(master) ✔ [charts] http http://192.168.99.100:32152
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Mon, 19 Mar 2018 06:26:34 GMT
Server: kong/0.12.3
Transfer-Encoding: chunked
{
"configuration": {
"admin_access_log": "/dev/stdout",
"admin_error_log": "/dev/stderr",
"admin_http2": false,
"admin_ip": "0.0.0.0",
"admin_listen": "0.0.0.0:8444",
"admin_listen_ssl": "127.0.0.1:8444",
"admin_port": 8444,
"admin_ssl": false,
"admin_ssl_cert_csr_default": "/usr/local/kong/ssl/admin-kong-default.csr",
"admin_ssl_cert_default": "/usr/local/kong/ssl/admin-kong-default.crt",
"admin_ssl_cert_key_default": "/usr/local/kong/ssl/admin-kong-default.key",
"admin_ssl_ip": "127.0.0.1",
"admin_ssl_port": 8444,
"anonymous_reports": true,
"cassandra_consistency": "ONE",
"cassandra_contact_points": [
"127.0.0.1"
],
"cassandra_data_centers": [
"dc1:2",
"dc2:3"
],
"cassandra_keyspace": "kong",
"cassandra_lb_policy": "RoundRobin",
"cassandra_port": 9042,
"cassandra_repl_factor": 1,
"cassandra_repl_strategy": "SimpleStrategy",
"cassandra_schema_consensus_timeout": 10000,
"cassandra_ssl": false,
"cassandra_ssl_verify": false,
"cassandra_timeout": 5000,
"cassandra_username": "kong",
"client_body_buffer_size": "8k",
"client_max_body_size": "0",
"client_ssl": false,
"client_ssl_cert_csr_default": "/usr/local/kong/ssl/kong-default.csr",
"client_ssl_cert_default": "/usr/local/kong/ssl/kong-default.crt",
"client_ssl_cert_key_default": "/usr/local/kong/ssl/kong-default.key",
"custom_plugins": {},
"database": "postgres",
"db_cache_ttl": 3600,
"db_update_frequency": 5,
"db_update_propagation": 0,
"dns_error_ttl": 1,
"dns_hostsfile": "/etc/hosts",
"dns_no_sync": false,
"dns_not_found_ttl": 30,
"dns_order": [
"LAST",
"SRV",
"A",
"CNAME"
],
"dns_resolver": {},
"dns_stale_ttl": 4,
"error_default_type": "text/plain",
"http2": false,
"kong_env": "/usr/local/kong/.kong_env",
"latency_tokens": true,
"log_level": "notice",
"lua_package_cpath": "",
"lua_package_path": "./?.lua;./?/init.lua;",
"lua_socket_pool_size": 30,
"lua_ssl_verify_depth": 1,
"mem_cache_size": "128m",
"nginx_acc_logs": "/usr/local/kong/logs/access.log",
"nginx_admin_acc_logs": "/usr/local/kong/logs/admin_access.log",
"nginx_conf": "/usr/local/kong/nginx.conf",
"nginx_daemon": "off",
"nginx_err_logs": "/usr/local/kong/logs/error.log",
"nginx_kong_conf": "/usr/local/kong/nginx-kong.conf",
"nginx_optimizations": true,
"nginx_pid": "/usr/local/kong/pids/nginx.pid",
"nginx_worker_processes": "auto",
"pg_database": "kong",
"pg_host": "kong-postgresql",
"pg_password": "******",
"pg_port": 5432,
"pg_ssl": false,
"pg_ssl_verify": false,
"pg_user": "kong",
"plugins": {
"acl": true,
"aws-lambda": true,
"basic-auth": true,
"bot-detection": true,
"correlation-id": true,
"cors": true,
"datadog": true,
"file-log": true,
"galileo": true,
"hmac-auth": true,
"http-log": true,
"ip-restriction": true,
"jwt": true,
"key-auth": true,
"ldap-auth": true,
"loggly": true,
"oauth2": true,
"rate-limiting": true,
"request-size-limiting": true,
"request-termination": true,
"request-transformer": true,
"response-ratelimiting": true,
"response-transformer": true,
"runscope": true,
"statsd": true,
"syslog": true,
"tcp-log": true,
"udp-log": true
},
"prefix": "/usr/local/kong",
"proxy_access_log": "/dev/stdout",
"proxy_error_log": "/dev/stderr",
"proxy_ip": "0.0.0.0",
"proxy_listen": "0.0.0.0:8443",
"proxy_listen_ssl": "0.0.0.0:8443",
"proxy_port": 8443,
"proxy_ssl_ip": "0.0.0.0",
"proxy_ssl_port": 8443,
"real_ip_header": "X-Real-IP",
"real_ip_recursive": "off",
"server_tokens": true,
"ssl": false,
"ssl_cert_csr_default": "/usr/local/kong/ssl/kong-default.csr",
"ssl_cert_default": "/usr/local/kong/ssl/kong-default.crt",
"ssl_cert_key_default": "/usr/local/kong/ssl/kong-default.key",
"ssl_cipher_suite": "modern",
"ssl_ciphers": "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256",
"trusted_ips": {},
"upstream_keepalive": 60
},
"hostname": "kong-kong-6f66484c69-fvb9l",
"lua_version": "LuaJIT 2.1.0-beta3",
"node_id": "4405f37d-e111-4d1e-8c5c-4a2beee97f1f",
"plugins": {
"available_on_server": {
"acl": true,
"aws-lambda": true,
"basic-auth": true,
"bot-detection": true,
"correlation-id": true,
"cors": true,
"datadog": true,
"file-log": true,
"galileo": true,
"hmac-auth": true,
"http-log": true,
"ip-restriction": true,
"jwt": true,
"key-auth": true,
"ldap-auth": true,
"loggly": true,
"oauth2": true,
"rate-limiting": true,
"request-size-limiting": true,
"request-termination": true,
"request-transformer": true,
"response-ratelimiting": true,
"response-transformer": true,
"runscope": true,
"statsd": true,
"syslog": true,
"tcp-log": true,
"udp-log": true
},
"enabled_in_cluster": []
},
"prng_seeds": {
"pid: 17": 174133321681,
"pid: 18": 121240122483
},
"tagline": "Welcome to kong",
"timers": {
"pending": 5,
"running": 0
},
"version": "0.12.3"
}
(master) ✔ [charts]
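For readability, the same overrides can also be kept in a values file instead of the long --set string (a sketch; the keys just mirror the --set paths above, and kong-http-values.yaml is an example filename):
$ cat > kong-http-values.yaml <<'EOF'
admin:
  useTLS: false
proxy:
  useTLS: false
livenessProbe:
  httpGet:
    scheme: HTTP
readinessProbe:
  httpGet:
    scheme: HTTP
EOF
$ helm install stable/kong --name kong -f kong-http-values.yaml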
@shashiranjan84 thx for your help!
@shashiranjan84 thanks
maybe it's more convenient to set the HTTP port mappings at the same time :)
helm install --name kong --set admin.useTLS=false,admin.servicePort=8001,admin.containerPort=8001,proxy.useTLS=false,proxy.servicePort=8000,proxy.containerPort=8000,livenessProbe.httpGet.scheme=HTTP,readinessProbe.httpGet.scheme=HTTP stable/kong
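To sanity-check the plain-HTTP admin API after that install, something like this should work (a sketch; <kong-pod-name> is a placeholder that will differ per install):
$ kubectl get pods -l app=kong,release=kong   # find the Kong pod name
$ kubectl port-forward <kong-pod-name> 8001:8001
$ curl -i http://127.0.0.1:8001/status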