In short: the cache warm-up tasks launched by the Celery workers all fail silently. They perform GET requests against the main server's URL without providing the required authentication, yet dashboards cannot be loaded without being logged in.
Related bugs:
the Celery worker must be launched with the --beat flag to listen on CeleryBeat schedules (cf. the docker-compose.yml configuration)
At stake: long dashboard load times for our users, or outdated dashboards.
Main files to be fixed:
superset/tasks/cache.py
When the Celery worker logs this (notice 'errors': []):
superset-worker_1 | [2020-04-20 13:05:00,299: INFO/ForkPoolWorker-3] Task cache-warmup[73c09754-4dcb-4674-9ac2-087b04b6e209]
succeeded in 0.1351924880000297s:
{'success': [
'http://superset:8088/superset/explore/?form_data=%7B%22slice_id%22%3A%2031%7D',
'http://superset:8088/superset/explore/?form_data=%7B%22slice_id%22%3A%2032%7D',
'http://superset:8088/superset/explore/?form_data=%7B%22slice_id%22%3A%2033%7D'],
'errors': []}
... we would expect to have something (more or less) like this in the Superset server logs:
superset_1 | 172.20.0.6 - - [2020-04-20 13:05:00,049] "POST /superset/explore_json/?form_data=%7B%22slice_id%22%3A HTTP/1.1"
200 738 "http://superset:8088/superset/dashboard/1/" "python-urllib2"
Of course, we would also expect a bunch of cached entries to appear in Redis, and dashboards to load lightning-fast.
But we get these logs instead, which show a 302 redirect to the login page, followed by a 200 on the login page. The redirect is interpreted as a success by the task's checks.
superset_1 | 172.20.0.6 - - [20/Apr/2020 08:12:00] "GET /superset/explore/?form_data=%7B%22slice_id%22%3A%2030%7D HTTP/1.1"
302 -
superset_1 | INFO:werkzeug:172.20.0.6 - - [20/Apr/2020 08:12:00] "GET /superset/explore/?form_data=%7B%22slice_id%22%3A%2030%7D HTTP/1.1"
302 -
superset_1 | 172.20.0.6 - - [20/Apr/2020 08:12:00] "GET /login/?next=http%3A%2F%2Fsuperset%3A8088%2Fsuperset%2Fexplore%2F%3Fform_data%3D%257B%2522slice_id%2522%253A%252030%257D HTTP/1.1"
200 -
(I added a few line breaks for readability.)
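To see why: urllib transparently follows the 302, so what the task gets back is the 200 from the login page. A quick illustration (the URL assumes the docker-compose service name):

# Illustrative check of urllib's redirect behaviour (assumes the docker network):
from urllib import request

resp = request.urlopen(
    "http://superset:8088/superset/explore/?form_data=%7B%22slice_id%22%3A%2030%7D"
)
print(resp.status)    # 200 -- the final response, after the redirect was followed
print(resp.geturl())  # http://superset:8088/login/?next=... -- we landed on the login page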
In Redis, here is the only stored key:
$ docker-compose exec redis redis-cli
127.0.0.1:6379> KEYS *
1) "_kombu.binding.celery"
Finally, the dashboards are slow to load their data on the first visit.
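For reference, the warm-up task in superset/tasks/cache.py boils down to a fetch loop along these lines (my paraphrase, not the exact code). A URL only lands in 'errors' when the fetch raises, which is why the redirect to the login page still counts as a success:

# Paraphrase of the warm-up loop in superset/tasks/cache.py (illustrative, not verbatim).
import logging
from urllib import request
from urllib.error import URLError

logger = logging.getLogger("tasks.cache_warmup")

def warm_up_cache(urls):
    results = {"success": [], "errors": []}
    for url in urls:
        try:
            logger.info("Fetching %s", url)
            request.urlopen(url)            # unauthenticated GET: the server answers
            results["success"].append(url)  # with a 302 to /login/, which urlopen
        except URLError:                    # silently follows, so no error is raised
            logger.exception("Error warming up cache!")
            results["errors"].append(url)
    return results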
I had to patch the master branch to get this to work. In particular, I must admit it was not clear to me whether the config was read from docker/pythonpath_dev/superset_config.py or from superset/config.py, so I adapted superset/config.py and copied it over to the pythonpath one (which appears to be read by the Celery worker, but not by the server).
Anyway, this reproduces the bug:
$ docker system prune --all (to remove all dangling images, exited containers and volumes)
$ git checkout master && git pull origin master
$ wget -O configs.patch https://gist.githubusercontent.com/Pinimo/c339ea828974d2141423b6ae64192aa4/raw/e449c97c11f81f7270d6e0b2369d55ec41b079a9/0001-bug-Patch-master-to-reproduce-sweetly-the-cache-warm.patch && git apply configs.patch (the patch adds the --beat flag and specifies a cache warm-up task on all dashboards every minute; see the sketch just after these steps)
$ docker-compose up -d
$ docker-compose logs superset-worker | grep cache-warmup
$ docker-compose logs superset | grep slice
$ docker-compose exec redis redis-cli, then type KEYS *
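Roughly, the scheduling part of the patched config amounts to something like this (an illustrative sketch, not the literal contents of the gist -- here using the built-in dummy strategy, which warms up every chart):

# Illustrative beat schedule (assumption, not the literal gist contents):
# warm up the cache for all charts every minute with the "dummy" strategy.
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    "cache-warmup-every-minute": {
        "task": "cache-warmup",
        "schedule": crontab(minute="*", hour="*"),
        "kwargs": {"strategy_name": "dummy"},
    },
}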
BTW, I noticed there is such a login procedure (with a headless browser) in the email report generator. Perhaps that procedure could be factored out and reused to warm up the cache in our case?
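Something along these lines, purely as a sketch (this is not the actual email-report code; the selectors, the dedicated account and the cookie reuse via requests are assumptions on my part):

# Hypothetical sketch: log in through a headless browser, then reuse the session
# cookies for the warm-up GETs. Not the actual Superset email-report code.
import requests
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

BASE_URL = "http://superset:8088"              # assumption: docker-compose service name
USERNAME, PASSWORD = "cache_worker", "secret"  # hypothetical dedicated account

options = Options()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)
try:
    driver.get(f"{BASE_URL}/login/")
    driver.find_element(By.NAME, "username").send_keys(USERNAME)
    driver.find_element(By.NAME, "password").send_keys(PASSWORD)
    driver.find_element(By.CSS_SELECTOR, "[type=submit]").click()  # selector is illustrative

    session = requests.Session()
    for cookie in driver.get_cookies():  # reuse the authenticated cookies outside the browser
        session.cookies.set(cookie["name"], cookie["value"])
finally:
    driver.quit()

# Warm-up requests now carry the login session instead of being anonymous.
resp = session.get(f"{BASE_URL}/superset/explore/?form_data=%7B%22slice_id%22%3A%2031%7D")
print(resp.status_code)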
@Pinimo what does your cache config look like?
I set up Redis caching with a timeout of 5 minutes and cache warm-up with the top_n_dashboards strategy every 2 minutes (just to test).
I can see this in Celery worker logs:
[2020-04-21 00:36:00,009: INFO/ForkPoolWorker-1] cache-warmup[e41de539-0bf7-4e70-b02b-4d2a132d8d0e]: Loading strategy
[2020-04-21 00:36:00,010: INFO/ForkPoolWorker-1] cache-warmup[e41de539-0bf7-4e70-b02b-4d2a132d8d0e]: Loading TopNDashboardsStrategy
[2020-04-21 00:36:00,014: INFO/ForkPoolWorker-1] cache-warmup[e41de539-0bf7-4e70-b02b-4d2a132d8d0e]: Success!
[2020-04-21 00:36:00,043: INFO/ForkPoolWorker-1] cache-warmup[e41de539-0bf7-4e70-b02b-4d2a132d8d0e]: Fetching http://0.0.0.0:8088/superset/explore/?form_data=%7B%22slice_id%22%3A%201%7D
[2020-04-21 01:06:00,131: INFO/ForkPoolWorker-2] cache-warmup[d2d68627-adce-4fa5-852e-522e95350a6c]: {'success': ['http://0.0.0.0:8088/superset/explore/?form_data=%7B%22slice_id%22%3A%201%7D'], 'errors': []}
but in Superset logs, I only see:
superset_1 | 2020-04-21 00:36:00,049 [DEBUG] [stats_logger] (incr) explore
Needless to say, my charts are not being updated.
This is my config:
from celery.schedules import crontab  # needed for the beat schedule below

CACHE_DEFAULT_TIMEOUT = 300
CACHE_CONFIG = {
    'CACHE_TYPE': 'redis',
    'CACHE_DEFAULT_TIMEOUT': 180,
    'CACHE_KEY_PREFIX': 'superset_results',
    'CACHE_REDIS_URL': 'redis://localhost:6379/0',
}

class CeleryConfig(object):
    BROKER_URL = 'redis://localhost:6379/0'
    CELERY_IMPORTS = (
        'superset.sql_lab',
        'superset.tasks',
    )
    CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
    CELERYD_LOG_LEVEL = 'DEBUG'
    CELERYD_PREFETCH_MULTIPLIER = 10
    CELERY_ACKS_LATE = True
    CELERYBEAT_SCHEDULE = {
        'cache-warmup-hourly': {
            'task': 'cache-warmup',
            'schedule': crontab(minute='*/2', hour='*'),
            'kwargs': {
                'strategy_name': 'top_n_dashboards',
                'top_n': 5,
                'since': '7 days ago',
            },
        },
    }

CELERY_CONFIG = CeleryConfig
@jayhjha Perhaps it would be worth changing your config variable SUPERSET_SERVER_ADDRESS to "superset".
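For context, the worker builds the warm-up URLs from the webserver settings in the config, roughly like this (my paraphrase of superset/tasks/cache.py from memory; the exact code, signature and setting names may differ between versions):

# Rough paraphrase (not verbatim): the base URL comes from the config,
# not from any incoming request.
def get_url(chart, config):
    """Return an absolute URL used to warm up a given chart's cache."""
    baseurl = "{SUPERSET_WEBSERVER_PROTOCOL}://{SUPERSET_WEBSERVER_ADDRESS}:{SUPERSET_WEBSERVER_PORT}".format(
        **config
    )
    return f"{baseurl}{chart.url}"

If that address resolves to 0.0.0.0 or localhost from inside the worker container, the warm-up request never reaches the superset service on the Docker network.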
Any news on this?
A colleague made a POC on this, but came to the conclusion that it is already quite difficult just to get the email reports working... He wanted to reuse part of that code (headless browser + login) to work around the login problem. I think he found that the dependencies for that feature were not included in the Dockerfile.
To my knowledge the feature is (and will stay...) broken :cry:
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. For admin, please label this issue .pinned to prevent stale bot from closing the issue.
Any news from the community on this issue?
@Pinimo any workaround you find to this issue? I am getting the same exact issue.
@mukulsaini I have not found the time to address the issue; to the best of my knowledge it has not been solved yet. If you too find this issue is a real problem, I invite you to talk it over on Superset's Slack :left_speech_bubble:
Here are a few educated guesses as to how to solve the issue:
We could work around the auth by signing in through a headless browser in the Celery process. After thinking it over, this seems difficult to me.
Perhaps a better solution would involve setting up an API server with its own authentication procedure -- or a new auth method on the same server, to allow the Celery worker to perform cache requests:
POST requests that only return empty documents (not usable to extract data from the instance).
Not sure at all about this last idea: we could code a new CLI route to return the chart data. The Celery worker would then execute the CLI (if I remember correctly, all the configs and Docker images are the same). However, that might exceed the memory limits of the Celery worker.
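A very rough sketch of the trigger-only endpoint idea above (everything here is hypothetical: the header name, the config key and the decorator do not exist in Superset):

# Hypothetical decorator: let an authenticated user through as usual, but also
# accept a shared-secret header from the cache worker and return an empty body,
# so the worker can trigger the computation without being able to read any data.
import functools
from flask import abort, current_app, request
from flask_login import current_user

def login_or_cache_token_required(view):
    @functools.wraps(view)
    def wrapper(*args, **kwargs):
        token = request.headers.get("X-Cache-Warmup-Token")      # hypothetical header
        expected = current_app.config.get("CACHE_WARMUP_TOKEN")  # hypothetical config key
        if expected and token == expected:
            view(*args, **kwargs)  # run the view so the cache gets filled ...
            return "", 204         # ... but return an empty document to the caller
        if current_user.is_authenticated:
            return view(*args, **kwargs)
        abort(401)
    return wrapper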
Yet another draft solution:
Create a dedicated user during the db-init step, with a very specific caching role. I find it important that this role should never be able to actually extract any data (so we need not worry too much about the password being stolen), just to ping the server and get it to cache the data. It would even be possible to modify the @login_required decorator to add that constraint for a dedicated __cache_worker user.
Mmmh, maybe I'm missing something, but it seems like we shouldn't have to go through the web server to do this.
Refactoring / mimicking what explore_json does might be an option.
https://github.com/apache/incubator-superset/blob/master/superset/views/core.py#L525-L536
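A sketch of that in-process direction (assuming an app factory like superset.app.create_app, whose import path may vary by version; the helper is hypothetical, and permissions would still need to be handled, e.g. by logging in a dedicated low-privilege user):

# Hypothetical sketch: run the warm-up request in-process with Flask's test client,
# so the Celery worker never goes through the web server over the network.
from superset.app import create_app  # assumption: exact import path may differ

app = create_app()

def warm_up_in_process(url: str) -> int:
    with app.test_client() as client:
        response = client.get(url)  # executes the same view code, filling the same cache
        return response.status_code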
@mistercrunch From what I see in the previous commits, the route used for cache warm-up in cache.py's get_url was previously /explore_json. Any reason it was changed to /explore?