Sidekiq: Can a job be executed twice at the same time?

Created on 19 Jun 2015 · 3 comments · Source: mperham/sidekiq

I've been reading about Sidekiq's features, including concurrency and idempotent jobs, but the level of concurrency Sidekiq operates with is still not clear to me.

My problem is that I've seen a couple of jobs executed twice even though the first run succeeded. I even suspect that a single job sometimes gets grabbed at the same time by two different workers (with only milliseconds between them).

As I said, I understand that Sidekiq can process a single job multiple times, but I'm not sure whether it's expected to be processed concurrently.

Thank you.
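For context, the idempotency the question refers to means designing a job so that a duplicate delivery is harmless. A minimal plain-Ruby sketch of the idea (the class name, the in-memory `Set` standing in for a shared store such as Redis, and the return values are all illustrative, not Sidekiq APIs):

```ruby
require 'set'

# Illustrative idempotent job: running perform twice with the same
# order_id does the real work only once.
class ChargeOrderJob
  PROCESSED = Set.new # stand-in for a shared store like Redis

  def self.perform(order_id)
    # Set#add? returns nil if the id was already present, so a
    # duplicate run is skipped instead of charging twice.
    return :skipped unless PROCESSED.add?(order_id)
    # ... charge the customer here ...
    :charged
  end
end
```

With a guard like this, a job that runs twice (which Sidekiq's at-least-once delivery allows) has no visible effect the second time.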


All 3 comments

No, it should not be possible to see the same JID processing at the same time.

That's what I would expect. I've set up an extra check to ensure I'm not queueing the same job twice, since that's the only other way this could happen.

I'll report back if I find a real issue. Thanks.
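An enqueue-side check like the one mentioned above can be sketched as follows. This is a plain-Ruby illustration, not Sidekiq's API: the `UniquePusher` class is hypothetical, and the in-memory hash stands in for something like a Redis `SET NX` key with a TTL.

```ruby
require 'digest'

# Illustrative enqueue-side dedup: fingerprint the job class and its
# arguments, and refuse to push a second copy while one is pending.
class UniquePusher
  PENDING = {} # stand-in for Redis SET NX with an expiry

  def self.push(job_class, args)
    key = Digest::SHA1.hexdigest("#{job_class}:#{args.inspect}")
    return false if PENDING.key?(key) # duplicate: skip the push
    PENDING[key] = Time.now
    # In a real app you would call job_class.perform_async(*args) here
    # and delete the key when the job finishes.
    true
  end
end
```

The second push of an identical class/arguments pair returns `false`, so accidental double-enqueues never reach the queue.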

I'm currently debugging an issue (that may be this issue), and after reading through the relevant threads #2398, #3470, #3742, and #3311 (am I missing any?) I'm a bit disappointed they are all closed. It seems there is an issue here, and there should be at least one open issue tracking it.

Issue Timeline

  1. Duplicate JIDs are discovered, correlated with workers experiencing connection issues (which trigger reconnects)
  2. A potential fix, reconnect_attempts: 0, is suggested and confirmed to resolve the issue
  3. The proposed fix is merged in 2749c12
  4. The released fix causes problems for users expecting spurious connection errors to resolve themselves
  5. The fix is reverted so that users' connections behave as they had in the past

This leaves us where we started. While it's true there is a potential workaround (setting reconnect_attempts: 0 manually), the problem should be tracked, or it should at least be documented that a job can in fact run twice with the same JID.
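The workaround can be applied in a Sidekiq initializer; a sketch, assuming the redis-rb client (which accepts a `reconnect_attempts` option) and a placeholder Redis URL:

```ruby
# config/initializers/sidekiq.rb
# Disable automatic reconnects so a dropped connection surfaces as an
# error instead of silently retrying (and potentially re-fetching a job).
Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://localhost:6379/0', reconnect_attempts: 0 }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://localhost:6379/0', reconnect_attempts: 0 }
end
```

The trade-off described in the timeline applies: with zero reconnect attempts, transient network blips raise errors rather than healing themselves.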

I'm only writing this up to 1) ensure I'm not missing something important, and 2) encourage visibility into this issue for future developers. There may still be a good solution out there.

@mperham thoughts about re-opening this issue?
