Hi,
I'm upgrading our whole system (currently Ruby 2.0.0, Rails 4.0.2, and Sidekiq 2.7 in production) to Ruby 2.4, Rails 5.0.2, and Sidekiq 4.2.10. Sidekiq works correctly in this Rails 5 environment in production (AWS EC2, deployed with Capistrano, running behind nginx with Puma), but some workers hang on every one of our developers' machines (macOS). I believe it is related to the database connection, because the worker hangs when it touches Active Record (one simple `find` takes about 2 minutes). Strangely, the lock releases itself when I run `Sidekiq.redis { |c| c.flushdb }` in the console while the job is frozen.
Ruby version: 2.4.0
Sidekiq: 4.2.10 open source (also tried 5.0.0 beta; same error)
Dev environment: macOS
Relevant gems (possibly involved):
- pg (0.20.0)
- rails (5.0.2)
- sidekiq-status (0.6.0)
My initializer:

```ruby
require 'sidekiq'
require 'sidekiq-status'
require 'sidekiq/middleware/i18n'

Sidekiq.configure_client do |config|
  config.redis = { :namespace => ENV['REDIS_NAMESPACE'],
                   :url => ENV['REDIS_URL'] }
end

Sidekiq.configure_server do |config|
  config.redis = { :namespace => ENV['REDIS_NAMESPACE'],
                   :url => ENV['REDIS_URL'],
                   :network_timeout => 5 }
end

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add Sidekiq::Status::ClientMiddleware
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add Sidekiq::Status::ServerMiddleware, expiration: 30.minutes # default
  end
  config.client_middleware do |chain|
    chain.add Sidekiq::Status::ClientMiddleware
  end
end
```
sidekiq.yml:

```yaml
development:
  :concurrency: 15
production:
  :concurrency: 15
:queues:
  - default
  - mailers
  - crawler
  - chewy
```

I launch Sidekiq like this:

```
sidekiq -C config/sidekiq.yml
```
A worker that works in production on Rails 5, and also in both development and production on Rails 4.0.2:
```ruby
class ThaebusWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :crawler, :retry => false

  def perform(bus_travel_id, gateways)
    ActiveRecord::Base.connection_pool.with_connection do
      # Hangs here first. If I run Sidekiq.redis { |c| c.flushdb } it goes
      # through the rest of the worker normally; if not, each of these
      # lines takes minutes.
      bus_travel = BusTravel.find(bus_travel_id)
      bus_operator = BusOperator.find_by_internal_name('thaebus') # ~2 minutes later it passes here
      gateway = GatewayThaebus.new(direction: Scheduleable::OUTBOUND,
                                   bus_travel: bus_travel,
                                   bus_operator: bus_operator,
                                   gateways: gateways) # ~2 minutes later it comes here
      gateway.run
      # From here on it takes no time, because no more Active Record calls are made.
      if bus_travel.travel_with_return_trip?
        # return trip
        gateway = GatewayThaebus.new(direction: Scheduleable::INBOUND,
                                     bus_travel: bus_travel,
                                     bus_operator: bus_operator,
                                     gateways: gateways)
        gateway.run
      end
    end
  end
end
```
I call the worker like this: `ThaebusWorker.perform_async(770, [{"dep":1,"dest":3}])`
Any ideas?
Use the TTIN signal to get backtraces and determine where it's locked up.
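For context, Sidekiq's TTIN handler dumps a backtrace for every live thread, which is how you find the stuck one. Here is a minimal, standalone sketch of the idea (this is not Sidekiq's actual handler code; the variable names are invented for illustration):

```ruby
# Minimal sketch of what a TTIN handler does: collect a backtrace for
# every live thread so you can see where each one is blocked.
dumps = []

Signal.trap("TTIN") do
  Thread.list.each do |thread|
    dumps << [thread.object_id, thread.backtrace || []]
  end
end

# Against a real Sidekiq process you would run `kill -TTIN <sidekiq_pid>`
# from a shell; here the process signals itself just to exercise the handler.
Process.kill("TTIN", Process.pid)
sleep 0.2 # give the trap handler a chance to run

dumps.each do |tid, backtrace|
  puts "Thread #{tid}:"
  puts backtrace.first(5)
end
```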
Wrong thread, that's a quiet thread blocked on Redis waiting for a job. Normal.
On Mar 21, 2017, at 15:52, Benjamín Silva wrote:
Hi Mike, thanks for the quick response. See the trace:
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/connection/ruby.rb:87:in `select'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/connection/ruby.rb:87:in `rescue in _read_from_socket'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/connection/ruby.rb:83:in `_read_from_socket'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/connection/ruby.rb:75:in `gets'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/connection/ruby.rb:380:in `read'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:262:in `block in read'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:250:in `io'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:261:in `read'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:120:in `block in call'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:231:in `block (2 levels) in process'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:367:in `ensure_connected'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:221:in `block in process'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:306:in `logging'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:220:in `process'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:120:in `call'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/newrelic_rpm-4.0.0.332/lib/new_relic/agent/instrumentation/redis.rb:69:in `call'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:209:in `block in call_with_timeout'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:280:in `with_socket_timeout'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis/client.rb:208:in `call_with_timeout'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis.rb:1137:in `block in _bpop'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis.rb:58:in `block in synchronize'
/Users/benja/.rvm/rubies/ruby-2.4.0/lib/ruby/2.4.0/monitor.rb:214:in `mon_synchronize'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis.rb:58:in `synchronize'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis.rb:1134:in `_bpop'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-3.3.3/lib/redis.rb:1179:in `brpop'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-namespace-1.5.3/lib/redis/namespace.rb:435:in `call_with_namespace'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/redis-namespace-1.5.3/lib/redis/namespace.rb:321:in `method_missing'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/fetch.rb:36:in `block in retrieve_work'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq.rb:95:in `block in redis'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/connection_pool-2.2.1/lib/connection_pool.rb:64:in `block (2 levels) in with'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/connection_pool-2.2.1/lib/connection_pool.rb:63:in `handle_interrupt'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/connection_pool-2.2.1/lib/connection_pool.rb:63:in `block in with'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/connection_pool-2.2.1/lib/connection_pool.rb:60:in `handle_interrupt'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/connection_pool-2.2.1/lib/connection_pool.rb:60:in `with'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq.rb:92:in `redis'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/fetch.rb:36:in `retrieve_work'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:91:in `get_one'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:101:in `fetch'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:84:in `process_one'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:73:in `run'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/util.rb:17:in `watchdog'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/util.rb:26:in `block in safe_thread'
2017-03-21T22:51:40.934Z 9284 TID-owyd92odo WARN: Thread TID-owydzlpqk processor
Thanks in advance
Gist the entire output. You've only shown one thread. Sidekiq should have 4-5 threads, minimum. One of those will be the locked up one.
On Mar 21, 2017, at 15:58, Benjamín Silva wrote:
What do you mean by "normal"? This keeps happening to all of us on the team; how do we solve this?
Thanks
I deleted the other comments, here is the full output, my bad
https://gist.github.com/silva96/08a4824454bb92a3bc18ef9390ee3fcd
Looks like there is something here
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/util.rb:26:in `block in safe_thread'
2017-03-21T23:09:22.572Z 11360 TID-ouzu1aoss WARN: Thread TID-ouzunqm2s processor
2017-03-21T23:09:22.573Z 11360 TID-ouzu1aoss WARN: /Users/benja/.rvm/rubies/ruby-2.4.0/lib/ruby/2.4.0/monitor.rb:111:in `sleep'
/Users/benja/.rvm/rubies/ruby-2.4.0/lib/ruby/2.4.0/monitor.rb:111:in `wait'
/Users/benja/.rvm/rubies/ruby-2.4.0/lib/ruby/2.4.0/monitor.rb:111:in `wait'
/Users/benja/.rvm/rubies/ruby-2.4.0/lib/ruby/2.4.0/monitor.rb:123:in `wait_while'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/concurrency/share_lock.rb:220:in `wait_for'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/concurrency/share_lock.rb:82:in `block (2 levels) in start_exclusive'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/concurrency/share_lock.rb:186:in `yield_shares'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/concurrency/share_lock.rb:81:in `block in start_exclusive'
/Users/benja/.rvm/rubies/ruby-2.4.0/lib/ruby/2.4.0/monitor.rb:214:in `mon_synchronize'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/concurrency/share_lock.rb:76:in `start_exclusive'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/concurrency/share_lock.rb:148:in `exclusive'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/dependencies/interlock.rb:11:in `loading'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/dependencies.rb:37:in `load_interlock'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/dependencies.rb:358:in `require_or_load'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/dependencies.rb:511:in `load_missing_constant'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/dependencies.rb:203:in `const_missing'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/dependencies.rb:543:in `load_missing_constant'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/dependencies.rb:203:in `const_missing'
/Users/benja/dev/ibilbidea/app/workers/thaebus_worker.rb:9:in `block in perform'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activerecord-5.0.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:398:in `with_connection'
/Users/benja/dev/ibilbidea/app/workers/thaebus_worker.rb:8:in `perform'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:202:in `execute_job'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:173:in `block (2 levels) in process'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/middleware/chain.rb:128:in `block in invoke'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-status-0.6.0/lib/sidekiq-status/server_middleware.rb:37:in `call'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/middleware/i18n.rb:22:in `call'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/newrelic_rpm-4.0.0.332/lib/new_relic/agent/instrumentation/sidekiq.rb:33:in `block in call'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/newrelic_rpm-4.0.0.332/lib/new_relic/agent/instrumentation/controller_instrumentation.rb:362:in `perform_action_with_newrelic_trace'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/newrelic_rpm-4.0.0.332/lib/new_relic/agent/instrumentation/sidekiq.rb:29:in `call'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-superworker-1.0.0/lib/sidekiq/superworker/server/middleware.rb:11:in `call'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/middleware/chain.rb:133:in `invoke'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:172:in `block in process'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:140:in `block (5 levels) in dispatch'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/job_retry.rb:97:in `local'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:139:in `block (4 levels) in dispatch'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/rails.rb:43:in `block in call'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/execution_wrapper.rb:85:in `wrap'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/reloader.rb:68:in `block in wrap'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/execution_wrapper.rb:85:in `wrap'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/activesupport-5.0.2/lib/active_support/reloader.rb:67:in `wrap'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/rails.rb:42:in `call'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:135:in `block (3 levels) in dispatch'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:218:in `stats'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:130:in `block (2 levels) in dispatch'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/job_logger.rb:9:in `block in call'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/logging.rb:32:in `with_context'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/job_logger.rb:5:in `call'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:129:in `block in dispatch'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/job_retry.rb:72:in `global'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:128:in `dispatch'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:171:in `process'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:85:in `process_one'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/processor.rb:73:in `run'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/util.rb:17:in `watchdog'
/Users/benja/.rvm/gems/ruby-2.4.0@rails_5/gems/sidekiq-5.0.0.beta2/lib/sidekiq/util.rb:26:in `block in safe_thread'
I'm actually doing the same thing as in https://github.com/mperham/sidekiq/issues/3284.
We have multiple workers that need to complete before a final worker runs; the one I showed is one of those "multiple workers".
The main worker (which monitors the statuses of the "multiple workers") looks like this:
```ruby
class BusGatewaySuperWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :crawler, :retry => false

  def perform(bus_travel_id, outdated_bus_operator_names)
    StartWorker.perform_async(bus_travel_id)
    @jobs = []
    outdated_bus_operator_names.each do |internal_name, gateways|
      job = case internal_name
            when 'thaebus' then ThaebusWorker.perform_async(bus_travel_id, gateways)
            when 'bus_sur' then BusSurWorker.perform_async(bus_travel_id, gateways)
            when 'tacoha'  then TacohaWorker.perform_async(bus_travel_id, gateways)
            # (... many other workers, one for each bus operator)
            end
      @jobs << job
    end
    logger.info "#{DateTime.now}: Launching: #{outdated_bus_operator_names.keys.join(',')}."
    while job_incomplete?
      sleep 0.2
    end
    EndWorker.perform_async(bus_travel_id)
  end

  def job_incomplete?
    @jobs.any? { |job| Sidekiq::Status::queued?(job) || Sidekiq::Status::working?(job) }
  end
end
```
You can't do that safely. Sidekiq Pro's Batches solves that use case.
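For readers landing here: the fan-out-then-finalize use case above is what Sidekiq Pro's Batch API addresses. A sketch, under the assumption that the commercial sidekiq-pro gem is installed (the worker and callback class names are taken from or modeled on this thread, not from real documentation):

```ruby
# Sketch only: requires the commercial sidekiq-pro gem, so this is not
# runnable with OSS Sidekiq. The :success callback fires once every job
# in the batch has succeeded, replacing the sleep/poll loop entirely.
class BusGatewayCallback
  def on_success(status, options)
    EndWorker.perform_async(options['bus_travel_id'])
  end
end

batch = Sidekiq::Batch.new
batch.on(:success, BusGatewayCallback, 'bus_travel_id' => bus_travel_id)
batch.jobs do
  # Enqueue the per-operator workers inside the batch.
  ThaebusWorker.perform_async(bus_travel_id, gateways)
  # ...
end
```

Because no job ever blocks waiting on other jobs, this pattern cannot trip the dev-mode reloading interlock discussed below.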
We have been doing this for more than a year with Rails 4.0.2 in production, with hundreds of jobs per minute.
I still wonder why it works in production with Rails 5, and why it works in both dev and prod with Rails 4.0.2, but not in the Rails 5 dev environment.
Any idea?
Because Rails 5 can't reload job code and execute further jobs until your job returns, and your job currently spins forever. This is a requirement of code reloading in dev.
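To make that concrete, here is a toy model of the deadlock in plain Ruby threads. This is not Rails' actual ShareLock implementation, and all names are invented for illustration: the "job" registers itself as running and polls a flag forever, while the "reloader" refuses to load code until no job is running.

```ruby
# Toy model of the autoload-interlock deadlock: neither thread can make
# progress, by construction.
mutex = Mutex.new
running = 0          # number of "jobs" currently holding a shared lock
code_loaded = false  # flag the reloader would set after loading code

job = Thread.new do
  mutex.synchronize { running += 1 }
  sleep 0.01 until code_loaded   # spins, never releasing its share
  mutex.synchronize { running -= 1 }
end

reloader = Thread.new do
  # Wants exclusive access: waits until no job is running before "loading".
  sleep 0.01 while mutex.synchronize { running } > 0
  code_loaded = true             # never reached while the job spins
end

sleep 0.3                        # let both threads settle into their waits
deadlocked = !code_loaded && mutex.synchronize { running } > 0
job.kill
reloader.kill
puts "deadlocked: #{deadlocked}" # prints "deadlocked: true"
```

This is exactly the shape of the SuperWorker above: its sleep/poll loop holds the job's share of the interlock while the child workers block on constant loading, which needs the reloader's exclusive turn.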
I see. I've been reading about the interlock and now I understand a little better what's going on here. I was wondering if there is some configuration that disables automatic code reloading for Sidekiq while preserving it for Rails, since we don't change the worker code very often, but while developing, our Rails app code obviously changes a lot.
I decided to set:
`config.eager_load = true`
in my dev environment, and that solved the problem.
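For anyone else applying this workaround, the setting goes in the development environment config; the path and wrapper below are the standard Rails locations, shown here as a sketch:

```ruby
# config/environments/development.rb
Rails.application.configure do
  # Eager-load all application code at boot, as in production.
  # This sidesteps the autoload interlock that polling Sidekiq jobs can
  # deadlock on, at the cost of losing code reloading in this process.
  config.eager_load = true
end
```

The trade-off: the Sidekiq process (and the Rails server, if it shares this file) must be restarted to pick up code changes.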
Same issue here: my workers hang (in monitor.rb#wait / active_support/concurrency/share_lock.rb#wait_for) whenever they have to look up any models or other constants.
Setting `config.eager_load = true` in the dev environment fixes it, as @silva96 suggested.
@Agiley the `config.eager_load = true` works, but for some reason I haven't figured out, it still randomly hangs sometimes.
I kept searching and found this Rails issue: rails/rails#26847
What I ended up doing was wrapping the body of the inner worker (the one that hangs) in a `Rails.application.executor.wrap` block, like this:
```ruby
class TheWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :crawler, :retry => false

  def perform(arg1, arg2)
    Rails.application.executor.wrap do
      user = User.find(arg1)
      # ...
    end
  end
end
```
And then in the main worker, where I keep checking the statuses in the while loop, I wrapped that loop in `ActiveSupport::Dependencies.interlock.permit_concurrent_loads`, like this:
```ruby
class SuperWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :crawler, :retry => false

  def perform(arg0)
    AWorkerThatMustBeRanFirst.perform_async(arg0)
    @jobs = []
    arg0.each do |name|
      job = case name
            when 'the_name' then TheWorker.perform_async(the_name)
            # ...
            end
      @jobs << job
    end
    ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
      while job_incomplete?
        sleep 0.2
      end
      AWorkerThatClosesTheWholeProcess.perform_async(arg0)
    end
  end

  def job_incomplete?
    @jobs.any? { |job| Sidekiq::Status::queued?(job) || Sidekiq::Status::working?(job) }
  end
end
```
Note that the `Sidekiq::Status` API is provided by the sidekiq-status gem.
Enabling eager_load just fixed my issue as well. This has been a huge PITA. After 9 hours, I'm happy to finally move on with my life. Thanks for this suggestion.