1. Describe Pods
$ kubectl describe pod superset-6d55cbd6cd-n4g26 -n superset
Name: superset-6d55cbd6cd-n4g26
Namespace: superset
Priority: 0
Node: ip-XXXXXXXXXXXXXXXXXX.internal/XXXXXXXXXXXXXXX
Start Time: Tue, 20 Oct 2020 19:00:33 -0400
Labels: app=superset
io.cattle.field/appId=superset
pod-template-hash=6d55cbd6cd
release=superset
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 172.77.103.174
IPs:
IP: 172.77.103.174
Controlled By: ReplicaSet/superset-6d55cbd6cd
Init Containers:
wait-for-postgres:
Container ID: docker://6f69681cfae08d9e718387584743c9b5016f685709d308026d91c80ddcfae1df
Image: busybox:latest
Image ID: docker-pullable://busybox@sha256:a9286defaba7b3a519d585ba0e37d0b2cbee74ebfe590960b0b1d6a5e97d1e1d
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
until nc -zv $DB_HOST $DB_PORT -w1; do echo 'waiting for db'; sleep 1; done
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 20 Oct 2020 19:00:34 -0400
Finished: Tue, 20 Oct 2020 19:00:34 -0400
Ready: True
Restart Count: 0
Environment Variables from:
superset-env Secret Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-47hg6 (ro)
Containers:
superset:
Container ID: docker://2be8afd1300a905c7bc5fcb4638d2b9a8bf224ecccd2020203fa992514e42150
Image: preset/superset:latest
Image ID: docker-pullable://preset/superset@sha256:211d58cfbefa1daa0cb0ca2f850bef62b99ab730575ee9b6bdfa394a2cb9d031
Port: 8088/TCP
Host Port: 0/TCP
Command:
/bin/sh
-c
. /app/pythonpath/superset_bootstrap.sh; /usr/bin/docker-entrypoint.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Wed, 21 Oct 2020 10:29:17 -0400
Finished: Wed, 21 Oct 2020 10:29:20 -0400
Ready: False
Restart Count: 185
Environment Variables from:
superset-env Secret Optional: false
Environment:
SUPERSET_PORT: 8088
Mounts:
/app/pythonpath from superset-config (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-47hg6 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
superset-config:
Type: Secret (a volume populated by a Secret)
SecretName: superset-config
Optional: false
default-token-47hg6:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-47hg6
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 80s (x4249 over 15h) kubelet, ip-XXXXXXXXXXXXXX.internal Back-off restarting failed container
2. Kubectl Logs:
$ kubectl logs -f superset-6d55cbd6cd-n4g26 -n superset
/app/pythonpath/superset_bootstrap.sh: : not found
/app/pythonpath/superset_bootstrap.sh: : not found
ERROR: Invalid requirement: ''
WARNING: You are using pip version 20.2.3; however, version 20.2.4 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
/app/pythonpath/superset_bootstrap.sh: : not found
[2020-10-21 14:29:18 +0000] [13] [INFO] Starting gunicorn 20.0.4
[2020-10-21 14:29:18 +0000] [13] [INFO] Listening at: http://0.0.0.0:8088 (13)
[2020-10-21 14:29:18 +0000] [13] [INFO] Using worker: gthread
[2020-10-21 14:29:18 +0000] [16] [INFO] Booting worker with pid: 16
Found but failed to import local superset_config
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/cachelib/redis.py", line 41, in __init__
import redis
ModuleNotFoundError: No module named 'redis'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/superset/config.py", line 956, in <module>
import superset_config # pylint: disable=import-error
File "/app/pythonpath/superset_config.py", line 39, in <module>
key_prefix='superset_results'
File "/usr/local/lib/python3.7/site-packages/cachelib/redis.py", line 43, in __init__
raise RuntimeError('no redis module found')
RuntimeError: no redis module found
Failed to create app
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/cachelib/redis.py", line 41, in __init__
import redis
ModuleNotFoundError: No module named 'redis'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/superset/app.py", line 59, in create_app
app.config.from_object(config_module)
File "/usr/local/lib/python3.7/site-packages/flask/config.py", line 174, in from_object
obj = import_string(obj)
File "/usr/local/lib/python3.7/site-packages/werkzeug/utils.py", line 568, in import_string
__import__(import_name)
File "/app/superset/config.py", line 956, in <module>
import superset_config # pylint: disable=import-error
File "/app/pythonpath/superset_config.py", line 39, in <module>
key_prefix='superset_results'
File "/usr/local/lib/python3.7/site-packages/cachelib/redis.py", line 43, in __init__
raise RuntimeError('no redis module found')
RuntimeError: no redis module found
[2020-10-21 14:29:19 +0000] [16] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/cachelib/redis.py", line 41, in __init__
import redis
ModuleNotFoundError: No module named 'redis'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/gthread.py", line 92, in init_process
super().init_process()
File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 119, in init_process
self.load_wsgi()
File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load
return self.load_wsgiapp()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python3.7/site-packages/gunicorn/util.py", line 411, in import_app
app = app(*args, **kwargs)
File "/app/superset/app.py", line 69, in create_app
raise ex
File "/app/superset/app.py", line 59, in create_app
app.config.from_object(config_module)
File "/usr/local/lib/python3.7/site-packages/flask/config.py", line 174, in from_object
obj = import_string(obj)
File "/usr/local/lib/python3.7/site-packages/werkzeug/utils.py", line 568, in import_string
__import__(import_name)
File "/app/superset/config.py", line 956, in <module>
import superset_config # pylint: disable=import-error
File "/app/pythonpath/superset_config.py", line 39, in <module>
key_prefix='superset_results'
File "/usr/local/lib/python3.7/site-packages/cachelib/redis.py", line 43, in __init__
raise RuntimeError('no redis module found')
RuntimeError: no redis module found
[2020-10-21 14:29:19 +0000] [16] [INFO] Worker exiting (pid: 16)
[2020-10-21 14:29:20 +0000] [13] [INFO] Shutting down: Master
[2020-10-21 14:29:20 +0000] [13] [INFO] Reason: Worker failed to boot
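For what it's worth, the traceback shape above ("During handling of the above exception, another exception occurred") comes from cachelib deferring `import redis` until RedisCache is instantiated: the config file imports cleanly at build time and only fails when the app boots. A minimal runnable sketch of that pattern (the module name below is a deliberate stand-in, not a real package):

```python
# Sketch of cachelib.redis.RedisCache's deferred-import behavior: the
# dependency is imported inside __init__, so a missing package surfaces
# as a RuntimeError at app startup rather than at install time.
class DeferredRedisCache:
    def __init__(self):
        try:
            # Stand-in for `import redis`; intentionally not installed.
            import redis_client_that_is_not_installed
        except ImportError:
            # Mirrors the error cachelib raises when redis is absent.
            raise RuntimeError("no redis module found")

try:
    DeferredRedisCache()
except RuntimeError as exc:
    print(exc)  # prints: no redis module found
```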
@cguan7 can you plz look into this?
@cgivre any ideas?
@craig-rueda 👀 🙏
Maintainer listed in the Chart.yaml:
https://github.com/apache/incubator-superset/blob/master/helm/superset/Chart.yaml#L21-L24
- name: Chuan-Yen Chiang
email: [email protected]
url: https://github.com/cychiang
Pointer to the lib that seems to be missing:
https://github.com/apache/incubator-superset/blob/master/helm/superset/values.yaml#L28
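That mechanism is the chart's bootstrap script, which pip-installs extra Python packages into the container before Superset starts. A hedged sketch of a values override (the key name and package list are assumptions based on the linked values.yaml, not verified against a specific chart version):

```yaml
# my-values.yaml: sketch only; check the chart's values.yaml for the exact key.
bootstrapScript: |
  #!/bin/bash
  # Install the deps superset_config.py needs: redis for the cache/results
  # backend, psycopg2 for the Postgres metadata database.
  pip install redis psycopg2-binary
```

Applied with something like `helm install superset ./helm/superset -f my-values.yaml`.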
@mistercrunch I did email them.
I just tried this out and was able to have an installation up in ~2min.
What I tried:
- Fresh clone
- cd helm/superset
- helm dependency update
- helm install superset .
After a few min, the thing was up :)
Verified by running: kubectl port-forward svc/superset 8088:8088 and logging into http://localhost:8088 with admin/admin
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/superset-5bdc9554d9-l9cwq 1/1 Running 0 5m27s
pod/superset-init-db-9jxwh 0/1 Completed 0 5m27s
pod/superset-postgresql-0 1/1 Running 0 5m27s
pod/superset-redis-master-0 1/1 Running 0 5m27s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/superset NodePort 172.20.60.220 <none> 8088:30817/TCP 5m27s
service/superset-postgresql ClusterIP 172.20.216.19 <none> 5432/TCP 5m27s
service/superset-postgresql-headless ClusterIP None <none> 5432/TCP 5m27s
service/superset-redis-headless ClusterIP None <none> 6379/TCP 5m27s
service/superset-redis-master ClusterIP 172.20.238.102 <none> 6379/TCP 5m27s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/superset 1/1 1 1 5m27s
NAME DESIRED CURRENT READY AGE
replicaset.apps/superset-5bdc9554d9 1 1 1 5m27s
NAME READY AGE
statefulset.apps/superset-postgresql 1/1 5m27s
statefulset.apps/superset-redis-master 1/1 5m27s
NAME COMPLETIONS DURATION AGE
job.batch/superset-init-db 1/1 101s 5m27s
@joehoeller - just to be clear around this chart: it's meant to allow admins to stand up Superset (potentially in a production env). We don't make any assumptions about how it is configured, nor do we load examples. This is done on purpose, as you can imagine that after running Superset for some time, you would not want to re-load examples, etc. on upgrade.
Regarding the redis issue you mentioned, I couldn't repro it. Our base Docker image doesn't include any deps like redis, postgres, etc., again on purpose: we don't want to make assumptions, but rather want to encourage people running it in their own envs to add their own layers for the drivers they need. There is a mechanism to add extra deps, which @mistercrunch pointed out above, and it added redis correctly :).
It does deploy, but it actually fails silently once you check the logs. Run:
kubectl get pods -A
kubectl describe pod <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace>
The commands you ran do not demonstrate that you cannot reproduce this.
@mistercrunch plz see above. I've left multiple messages in the Slack channel. You didn't do the right things to try and reproduce the issue. I'm a bit frustrated, tbh: you also didn't answer any of my questions to clarify certain things.
Hey, sorry k8s / helm isn't my specialty. Personally can't help. I'm sorry you feel that way.
@mistercrunch and @craig-rueda How are you building your base docker image? Is it possible I can get that file?
Sure, here you go: https://github.com/apache/incubator-superset/blob/master/Dockerfile
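If you'd rather bake the drivers into the image than install them at boot, a thin custom layer on top of the published image also works. A sketch (the base tag, package list, and the assumption that the base image runs as a non-root `superset` user are all unverified, adjust to your setup):

```dockerfile
# Hypothetical custom layer; base tag and packages are illustrative only.
FROM preset/superset:latest
USER root
RUN pip install --no-cache-dir redis psycopg2-binary
USER superset
```

Point the chart's image repository/tag values at the resulting image.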
In my case, the database was never set up correctly, which I would think could cause some issues.
$ kubectl -n superset logs superset-postgresql-0
postgresql 19:12:44.91
postgresql 19:12:44.91 Welcome to the Bitnami postgresql container
postgresql 19:12:44.92 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
postgresql 19:12:44.92 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
postgresql 19:12:44.92 Send us your feedback at [email protected]
postgresql 19:12:44.92
postgresql 19:12:44.94 INFO ==> * Starting PostgreSQL setup *
postgresql 19:12:45.03 INFO ==> Validating settings in POSTGRESQL_* env vars..
postgresql 19:12:45.04 INFO ==> Loading custom pre-init scripts...
postgresql 19:12:45.05 INFO ==> Initializing PostgreSQL database...
postgresql 19:12:45.07 INFO ==> postgresql.conf file not detected. Generating it...
postgresql 19:12:45.09 INFO ==> pg_hba.conf file not detected. Generating it...
postgresql 19:12:45.09 INFO ==> Generating local authentication configuration
postgresql 19:12:49.12 INFO ==> Starting PostgreSQL in background...
postgresql 19:12:53.94 INFO ==> Creating user superset
postgresql 19:12:53.97 INFO ==> Grating access to "superset" to the database "superset"
postgresql 19:12:54.00 INFO ==> Configuring replication parameters
postgresql 19:12:54.04 INFO ==> Configuring fsync
postgresql 19:12:54.05 INFO ==> Loading custom scripts...
postgresql 19:12:54.06 INFO ==> Enabling remote connections
postgresql 19:12:54.08 INFO ==> Stopping PostgreSQL...
postgresql 19:12:55.09 INFO ==> * PostgreSQL setup finished! *
postgresql 19:12:55.17 INFO ==> * Starting PostgreSQL *
2020-11-17 19:12:55.199 GMT [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2020-11-17 19:12:55.201 GMT [1] LOG: could not create IPv6 socket for address "::": Address family not supported by protocol
2020-11-17 19:12:55.204 GMT [1] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-11-17 19:12:55.226 GMT [213] LOG: database system was shut down at 2020-11-17 19:12:54 GMT
2020-11-17 19:12:55.235 GMT [1] LOG: database system is ready to accept connections
2020-11-17 19:12:56.984 GMT [226] LOG: incomplete startup packet
2020-11-17 19:12:56.985 GMT [227] LOG: incomplete startup packet
2020-11-17 19:12:58.854 GMT [228] LOG: incomplete startup packet
2020-11-17 19:13:42.394 GMT [272] ERROR: relation "ab_user" already exists
2020-11-17 19:13:42.394 GMT [272] STATEMENT:
CREATE TABLE ab_user (
id INTEGER NOT NULL,
first_name VARCHAR(64) NOT NULL,
last_name VARCHAR(64) NOT NULL,
username VARCHAR(64) NOT NULL,
password VARCHAR(256),
active BOOLEAN,
email VARCHAR(64) NOT NULL,
last_login TIMESTAMP WITHOUT TIME ZONE,
login_count INTEGER,
fail_login_count INTEGER,
created_on TIMESTAMP WITHOUT TIME ZONE,
changed_on TIMESTAMP WITHOUT TIME ZONE,
created_by_fk INTEGER,
changed_by_fk INTEGER,
PRIMARY KEY (id),
UNIQUE (username),
UNIQUE (email),
FOREIGN KEY(created_by_fk) REFERENCES ab_user (id),
FOREIGN KEY(changed_by_fk) REFERENCES ab_user (id)
)
2020-11-17 19:13:42.420 GMT [271] ERROR: relation "ab_permission_view" already exists
2020-11-17 19:13:42.420 GMT [271] STATEMENT:
CREATE TABLE ab_permission_view (
id INTEGER NOT NULL,
permission_id INTEGER,
view_menu_id INTEGER,
PRIMARY KEY (id),
UNIQUE (permission_id, view_menu_id),
FOREIGN KEY(permission_id) REFERENCES ab_permission (id),
FOREIGN KEY(view_menu_id) REFERENCES ab_view_menu (id)
)
2020-11-17 19:13:54.604 GMT [288] ERROR: relation "ab_permission_view_role" does not exist at character 219
2020-11-17 19:13:54.604 GMT [288] STATEMENT: SELECT ab_permission_view.id AS ab_permission_view_id, ab_permission_view.permission_id AS ab_permission_view_permission_id, ab_permission_view.view_menu_id AS ab_permission_view_view_menu_id
FROM ab_permission_view, ab_permission_view_role
WHERE 1 = ab_permission_view_role.role_id AND ab_permission_view.id = ab_permission_view_role.permission_view_id
2020-11-17 19:13:54.612 GMT [288] ERROR: current transaction is aborted, commands ignored until end of transaction block
2020-11-17 19:13:54.612 GMT [288] STATEMENT: SELECT ab_view_menu.id AS ab_view_menu_id, ab_view_menu.name AS ab_view_menu_name
FROM ab_view_menu
WHERE ab_view_menu.name = 'ResetMyPasswordView'
...
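The `relation "ab_user" already exists` errors suggest the init job ran against a Postgres volume left over from an earlier install attempt. One way to retry from a clean slate is to remove the release together with its persistent volume claims, since PVCs survive an uninstall by default (release name, namespace, and PVC label selector below are assumptions):

```
helm uninstall superset -n superset
kubectl delete pvc -n superset -l app.kubernetes.io/name=postgresql
helm install superset ./helm/superset -n superset
```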