Some users are reporting that reprocessing sometimes gets stuck. This mostly happens with self-hosted installations, but we have also had two support issues. It seems to be triggered by internal server errors hitting the processing pipeline at unfortunate points.
Related: https://forum.sentry.io/t/stuck-there-are-x-events-pending-reprocessing/1518/6
Let me know how we can help with this issue.
Same here when uploading symbol files!
@servomac @st3fan in case you use sentry.io, it would be great to know which org/project this happens for (you can mail me at [email protected]).
If it happens on a self-hosted installation, please reach out to me by mail as well and we can chat about how it gets stuck there.
Thanks @mitsuhiko, I have just sent you a mail. I'm using a self-hosted Sentry, recently upgraded to 8.19.
The worker logs show an IOError when trying to read the symbol file:
```
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/celery/app/trace.py", line 438, in __protected_call__
    return self.run(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/sentry/tasks/base.py", line 54, in _wrapped
    result = func(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/sentry/tasks/store.py", line 165, in process_event_from_reprocessing
    return _do_process_event(cache_key, start_time, event_id)
  File "/usr/local/lib/python2.7/site-packages/sentry/tasks/store.py", line 117, in _do_process_event
    new_data = process_stacktraces(data)
  File "/usr/local/lib/python2.7/site-packages/sentry/stacktraces.py", line 378, in process_stacktraces
    if processor.preprocess_step(processing_task):
  File "/usr/local/lib/python2.7/site-packages/sentry/lang/native/plugin.py", line 151, in preprocess_step
    on_dsym_file_referenced=on_referenced
  File "/usr/local/lib/python2.7/site-packages/sentry/lang/native/symbolizer.py", line 115, in __init__
    project, to_load, on_dsym_file_referenced=on_dsym_file_referenced
  File "/usr/local/lib/python2.7/site-packages/sentry/models/dsymfile.py", line 330, in fetch_dsyms
    project, image_uuid, on_dsym_file_referenced=on_dsym_file_referenced
  File "/usr/local/lib/python2.7/site-packages/sentry/models/dsymfile.py", line 365, in fetch_dsym
    with dsf.file.getfile() as sf:
  File "/usr/local/lib/python2.7/site-packages/sentry/models/file.py", line 169, in getfile
    mode=kwargs.get('mode'),
  File "/usr/local/lib/python2.7/site-packages/sentry/models/file.py", line 228, in __init__
    self.open()
  File "/usr/local/lib/python2.7/site-packages/sentry/models/file.py", line 250, in open
    self.seek(0)
  File "/usr/local/lib/python2.7/site-packages/sentry/models/file.py", line 268, in seek
    self._nextidx()
  File "/usr/local/lib/python2.7/site-packages/sentry/models/file.py", line 239, in _nextidx
    self._curfile = self._curidx.blob.getfile()
  File "/usr/local/lib/python2.7/site-packages/sentry/models/file.py", line 138, in getfile
    return storage.open(self.path)
  File "/usr/local/lib/python2.7/site-packages/django/core/files/storage.py", line 33, in open
    return self._open(name, mode)
  File "/usr/local/lib/python2.7/site-packages/django/core/files/storage.py", line 159, in _open
    return File(open(self.path(name), mode))
IOError: [Errno 2] No such file or directory: u'/tmp/sentry-files/17394/35156/05b11a671f444f499e3d49eba01a3203'

09:45:59 [ERROR] celery.worker.job: Task sentry.tasks.store.process_event_from_reprocessing[1bccbbae-99a8-47f2-8eb9-7a1960e34472] raised unexpected: IOError(2, 'No such file or directory') (data={u'hostname': 'celery@ec1e35be7f18', u'name': 'sentry.tasks.store.process_event_from_reprocessing', u'args': '[]', u'internal': False, u'kwargs': "{'event_id': 'CD958D73B34745B991F0BC4E9C87A5AF', 'start_time': 1502876758.463073, 'cache_key': 'e:CD958D73B34745B991F0BC4E9C87A5AF:29'}", u'id': '1bccbbae-99a8-47f2-8eb9-7a1960e34472'})
```
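In case it helps anyone hitting the same thing, here is a rough check from `sentry shell` that counts how many referenced blobs are still on disk. The `FileBlob` model name comes from `sentry/models/file.py` in the traceback above; the filestore root is an assumption taken from the IOError path, adjust it to your `filestore.options` location:

```python
# Rough diagnostic sketch (run inside `sentry shell`): count FileBlob rows
# whose backing file no longer exists in the filesystem filestore.
# FILESTORE_ROOT is an assumption taken from the IOError path above.
import os

from sentry.models import FileBlob  # model name per sentry/models/file.py

FILESTORE_ROOT = '/tmp/sentry-files'

missing = [
    blob.path for blob in FileBlob.objects.all()
    if not os.path.exists(os.path.join(FILESTORE_ROOT, blob.path))
]
print('%d of %d blobs are missing on disk' % (len(missing), FileBlob.objects.count()))
```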
@servomac you need to configure a persistent volume to use this feature. That IOError should be pretty self-explanatory.
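For reference, this is roughly what a persistent filestore looks like in `sentry.conf.py`. This is a sketch only; the location below is an example path, and any volume that survives restarts works:

```python
# sentry.conf.py -- sketch of a persistent filesystem filestore.
# The location is an example; point it at any volume that survives restarts.
SENTRY_OPTIONS['filestore.backend'] = 'filesystem'
SENTRY_OPTIONS['filestore.options'] = {
    'location': '/var/lib/sentry/files',
}
```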
@servomac as dcramer said, this is because you did not configure persistent storage for the uploads. The files were uploaded to /tmp only, so when something cleans up temp files (e.g. a reboot) this error follows. You will need to wipe the sentry_fileblob tables in your database (see the sketch below) and upload the files again after fixing the configuration error.
(This is definitely not the source of this bug. It's expected behavior when your files go away, but it looks similar because reprocessing cannot happen.)
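If you prefer the ORM over raw SQL for that cleanup, a sketch via `sentry shell` follows. The model names are taken from `sentry/models/file.py` as seen in the traceback; only run this after the filestore points at persistent storage:

```python
# Cleanup sketch (run inside `sentry shell`): removes the stale blob records
# so the symbol files can be re-uploaded. Only do this after fixing the
# filestore configuration, or the new uploads will vanish again.
from sentry.models import FileBlob, FileBlobIndex

FileBlobIndex.objects.all().delete()  # index rows referencing the blobs
FileBlob.objects.all().delete()       # the sentry_fileblob rows themselves
```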
Thank you both, I will try to configure S3 and delete the fileblob entries.
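If I read the options correctly, something along these lines in `sentry.conf.py` should do it, assuming this Sentry version ships the S3 filestore backend; all values below are placeholders:

```python
# Sketch of an S3-backed filestore in sentry.conf.py -- assumes the S3
# backend is available in this Sentry version; all values are placeholders.
SENTRY_OPTIONS['filestore.backend'] = 's3'
SENTRY_OPTIONS['filestore.options'] = {
    'access_key': 'AKIA...',
    'secret_key': '<secret>',
    'bucket_name': 'my-sentry-files',
}
```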
But even in that case, when the problem is caused by a bad configuration or deployment (as in my case), showing the user a stalled "Reprocessing" state at the product level (when it is really failing) does not convey much information.
We've added a new button called "Discard all" which can be found above your processing issues list.
This will discard all processing issues and the corresponding events.
We've also found an error in our processing pipeline that we've yet to fix.
I will close this issue for now and link new issues regarding processing errors later.