Every time a delayed job is dispatched and later executed, a new idle connection is left behind (not correctly closed).
I explained this in detail here: http://stackoverflow.com/questions/42838840/laravel-jobs-leaves-an-idle-postgresql-process-on-deallocate
Here is the original post:
Every time a Delayed Job has run on my server, I can see a new idle process in PostgreSQL. Running select * from pg_stat_activity; I can see:
DEALLOCATE pdo_stmt_00000018
I tried to understand, and one more line (and one more process in htop) appears each time a delayed queued job has just run.
The last line of my Job is:
```php
$this->log->info("Invitation {$this->invitation->uuid} sent");
```
And I can see this in my logs, so everything is alright, BUT it does not clean up afterwards. Every time, I'm left with an idle process showing "DEALLOCATE pdo_stmt_00000xxx".
What can I do to avoid this problem? What is causing this?
Here is my supervisor config:
```ini
[program:laravel-queue-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/my/site/artisan queue:work --queue=invitation,default --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/path/to/my/logs/worker.log
```
Side note: the idle processes disappear when I run php artisan queue:restart
To reproduce: dispatch a delayed job on an Ubuntu 16.04 server using the Redis queue, Supervisor, and PostgreSQL. Then check pg_stat_activity in PostgreSQL: you will see idle processes on DEALLOCATE which never(?) disappear.
I found a (quick and dirty) workaround. Adding this at the end of my Job handle function:
\DB::disconnect();
sleep(1);
I was inspired by this: https://github.com/laravel/framework/issues/17785
Not sure it's the same problem though. And I still don't understand why I have to do that.
_EDIT: my quick solution seems to not work anyway. I still have 'DEALLOCATE'_
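For reference, a minimal sketch of where that attempted workaround sits inside a queued job; the SendInvitation class name and everything except the last two lines of handle() are assumptions for illustration, not code from this issue:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\DB;

class SendInvitation implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    public function handle()
    {
        // ... send the invitation, write the log line, etc. ...

        // Quick-and-dirty workaround from above: close the worker's DB
        // connection before the worker process goes back to sleep.
        DB::disconnect();
        sleep(1);
    }
}
```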
I'm not a PostgreSQL user myself, and I only did some quick googling. It seems that the query you see is the last executed one, not something that is stuck. The worker process itself is sleeping, waiting to check the database again in a few seconds for any new jobs.
The query column now means "current or last query" --- there's a separate column to check for the idle status. Those DEALLOCATEs are evidently what your app does before going idle.
Source: https://www.postgresql.org/message-id/20442.1350597286%40sss.pgh.pa.us
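If you want to verify this yourself, here is a hedged sketch (run from tinker or any console command) that reads the state column next to the query column, so you can confirm the DEALLOCATE entries belong to idle backends rather than stuck ones:

```php
use Illuminate\Support\Facades\DB;

$rows = DB::select(
    "SELECT pid, state, query FROM pg_stat_activity WHERE datname = current_database()"
);

foreach ($rows as $row) {
    // An 'idle' state with a DEALLOCATE query just means that was the last
    // statement the backend ran before it went back to waiting.
    echo "{$row->pid}\t{$row->state}\t{$row->query}\n";
}
```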
Closing for lack of activity.
+1, I have this issue too.
My application uses laravel/horizon as the queue job manager, and sometimes a job releases itself back onto the queue with:
$this->release(1);
return;
So, after some time, I connect to the database and see the following idle queries:
select * from pg_stat_activity;
DEALLOCATE pdo_stmt_00003bec
DEALLOCATE pdo_stmt_000035d2
DEALLOCATE pdo_stmt_00000dd4
DEALLOCATE pdo_stmt_00000ddd
DEALLOCATE pdo_stmt_00004204
DEALLOCATE pdo_stmt_000045c3
DEALLOCATE pdo_stmt_00003efd
DEALLOCATE pdo_stmt_000046c9
DEALLOCATE pdo_stmt_00003e2a
DEALLOCATE pdo_stmt_000038a2
DEALLOCATE pdo_stmt_0000387b
DEALLOCATE pdo_stmt_000046f3
DEALLOCATE pdo_stmt_000037a3
DEALLOCATE pdo_stmt_0000420a
DEALLOCATE pdo_stmt_00003f44
DEALLOCATE pdo_stmt_000034e5
DEALLOCATE pdo_stmt_000042c8
DEALLOCATE pdo_stmt_0000403a
DEALLOCATE pdo_stmt_000039ef
DEALLOCATE pdo_stmt_0000385a
DEALLOCATE pdo_stmt_000043ef
DEALLOCATE pdo_stmt_000040cd
DEALLOCATE pdo_stmt_0000338d
DEALLOCATE pdo_stmt_00003817
DEALLOCATE pdo_stmt_000040c2
DEALLOCATE pdo_stmt_00003f59
DEALLOCATE pdo_stmt_00003ad7
DEALLOCATE pdo_stmt_00003f2f
DEALLOCATE pdo_stmt_00003a41
DEALLOCATE pdo_stmt_00003c9d
DEALLOCATE pdo_stmt_00003ce0
DEALLOCATE pdo_stmt_000037f8
DEALLOCATE pdo_stmt_00000380
DISCARD ALL
DISCARD ALL
select * from pg_stat_activity
DISCARD ALL
DISCARD ALL
Horizon config:
```php
return [
    'use' => 'queue-connection',
    'prefix' => env('HORIZON_PREFIX', 'horizon:'),
    'waits' => [
        'redis:queue-connection' => 60,
    ],
    'environments' => [
        'production' => [
            'supervisor' => [
                'connection' => env('QUEUE_DRIVER', 'redis'),
                'queue' => [
                    QueuesPriorityManager::HIGH_PRIORITY_QUEUE_NAME,
                    QueuesPriorityManager::DEFAULT_PRIORITY_QUEUE_NAME,
                    QueuesPriorityManager::LOW_PRIORITY_QUEUE_NAME,
                ],
                'balance' => 'false',
                'processes' => (int) env('HORIZON_PRODUCTION_SUPERVISOR_PROCESSES', 32),
                'tries' => 20,
                'timeout' => 40,
                'sleep' => 1,
            ],
        ],
    ],
];
```
Database connection config:
```php
return [
    'default' => env('DB_CONNECTION', 'pgsql'),
    'connections' => [
        'pgsql' => [
            'driver' => 'pgsql',
            'host' => env('DB_HOST', '127.0.0.1'),
            'port' => (int) env('DB_PORT', 5432),
            'database' => env('DB_DATABASE', 'application'),
            'username' => env('DB_USERNAME', 'application'),
            'password' => env('DB_PASSWORD', 'application'),
            'charset' => env('DB_CHARSET', 'utf8'),
            'prefix' => env('DB_PREFIX', ''),
            'schema' => env('DB_SCHEMA', 'public'),
            'options' => [],
        ],
    ],
];
```
Can you tell me the best way to fix this issue?
+1
+1
@Katerou22 Have you read my previous answer and the link to the documentation? If so, what's the actual problem you're having?
I have the same issue with Postgres. After a few days of running queues with Horizon, those idle processes rise to the maximum number of connections the database allows, and this is a problem because regular app connections are then refused. For now we need to restart the queues every day to keep things running smoothly.
@themsaid Not sure why this was closed? Can confirm this is still a bug that is happening and impacting numerous people.
Is there a solution? I have the same issue; my DB collapses with just a few users.
I had this problem using SQS as the queue driver.
A hacky solution could be closing the connection explicitly at the end of the job with:
DB::disconnect('foo');
We use an AWS t2.micro, which has some limitations; it can only handle 50 connections at once.
Basically, we run our own migrations for each company on its own schema (you get the idea). We iterate through the companies and migrate each one, but after each iteration the idle connection count stays there instead of (supposedly) being released(?), and keeps going up until it hits the max connections.
We use a workaround to close the connection after each iteration (we also use a Migrator to execute each company's migration folder). Is this how we are supposed to handle this?
```php
->each(function ($companyDatabase) use ($command) {
    $generator = new Generator($companyDatabase);
    $engine = $generator->engine();
    $connection = $generator->newConnection();
    $migration = new Executor($connection);
    $engine->assign($engine->config('username'), $engine->config('database'));
    $migration->setOutput($command->output);
    $migration->migrate();
    $connection->disconnect(); // we add this
});
```
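For comparison, a generic sketch of the same pattern using only stock Laravel calls; the 'tenant' connection name, the $companySchemas list, and the migration path are assumptions, not part of the code above. The point is purging the connection after each company so the worker does not keep one idle backend per schema:

```php
use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\DB;

foreach ($companySchemas as $schema) {
    // Point a dedicated connection at this company's schema.
    config(['database.connections.tenant.schema' => $schema]);

    // Run this company's migrations against the tenant connection.
    Artisan::call('migrate', [
        '--database' => 'tenant',
        '--path' => 'database/migrations/tenant',
        '--force' => true,
    ]);

    // Close and forget the connection so no idle backend is left behind.
    DB::purge('tenant');
}
```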
Same issue here. Quick & dirty workaround:
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'dbname' AND pid <> pg_backend_pid() AND state in ('idle');
Out of curiosity, does everyone experiencing this issue call DB::connection()->... at some point in their jobs/application logic?
@atrauzzi I've got this issue; I did a quick project search and I'm not seeing any calls to DB::connection, so the answer would be no.
Check whether you called beginTransaction without a commit or without a rollback. The proper way is:
```php
\DB::beginTransaction();
try {
    // some matter stuff
    \DB::commit();
} catch (Exception $e) {
    \DB::rollback();
}
```
I've screenshotted the only bit of code in our code base which is using transactions.
Transactions should always be committed unless I'm missing something blatantly obvious.

The problem is how you use transactions; you should move the last 3 lines into the try block:
```php
try {
    // current code
    // 3 last lines with
    DB::commit();
} catch (Exception $e) {
    DB::rollback();
}
```
DB::commit() always goes at the end of the try block, while DB::rollback() goes at the beginning of the catch block.
In your case, if the last 3 lines fail (for example a broken DB connection, a model validation error, event errors, etc.), you will be left with an uncommitted transaction.
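If you would rather not manage commit/rollback by hand, Laravel's closure-based helper does both for you (commit when the closure returns, rollback and re-throw on any exception); a minimal sketch:

```php
use Illuminate\Support\Facades\DB;

DB::transaction(function () {
    // current code, including the 3 last lines, goes here;
    // no explicit DB::commit() or DB::rollback() is needed.
});
```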
Also, to catch issues like the current one, I added this code to AppServiceProvider::boot:
```php
Queue::looping(static function () {
    $level = DB::transactionLevel();
    if ($level > 0) {
        Log::critical("Rollback Transactions. Level: {$level}");
    }
    while ($level > 0) {
        DB::rollBack();
        $level--;
    }
});
```
I was under the impression I had to commit the rollback as well. I'll update the code and see if this solves our issue! Thanks in advance!
I have tried moving the commit & apiUpdateStatus code within the try block. This has not solved our problem; I will have a look at the code you added to the AppServiceProvider today.
Any suggestions for this issue?
We ended up closing idle connections manually through the use of a cron job.
```php
$result = DB::select(
    "SELECT count(*) FROM pg_stat_activity WHERE state = 'idle' AND datname = :database",
    ["database" => config('database.connections.pgsql.database')]
);

if ($result[0]->count > 5) {
    DB::select(
        "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = :database AND pid <> pg_backend_pid() AND state IN ('idle')",
        ["database" => config('database.connections.pgsql.database')]
    );
}
```
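For anyone wiring that cleanup through Laravel's scheduler instead of a system cron entry, a hedged sketch; the five-connection threshold, the five-minute frequency, and the pgsql connection name are assumptions:

```php
use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;
use Illuminate\Support\Facades\DB;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        $schedule->call(function () {
            $database = config('database.connections.pgsql.database');

            $idle = DB::select(
                "SELECT count(*) FROM pg_stat_activity WHERE state = 'idle' AND datname = :database",
                ['database' => $database]
            );

            if ($idle[0]->count > 5) {
                // Terminate every idle backend except our own connection.
                DB::select(
                    "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = :database AND pid <> pg_backend_pid() AND state = 'idle'",
                    ['database' => $database]
                );
            }
        })->everyFiveMinutes();
    }
}
```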