Loopback-next: throws MaxListenersExceededWarning

Created on 26 Dec 2018 · 46 comments · Source: strongloop/loopback-next

Description

I'm using LB4 with a PostgreSQL DataSource. Currently I have 36 models with multiple relations between them. The application throws the following warning:

(node:39148) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 connected listeners added. Use emitter.setMaxListeners() to increase limit

Setup

"dependencies": {
    "@loopback/boot": "^1.0.7",
    "@loopback/context": "^1.3.0",
    "@loopback/core": "^1.1.2",
    "@loopback/openapi-v3": "^1.1.4",
    "@loopback/repository": "^1.1.0",
    "@loopback/rest": "^1.5.0",
    "@loopback/rest-explorer": "^1.1.3",
    "@loopback/service-proxy": "^1.0.4",
    "loopback-connector-postgresql": "^3.5.0"
  }
Labels: 2019Q3, bug, help wanted, p1

Most helpful comment

Ideally, we should find out the root cause - why is LB4 registering so many event listeners and LB2/LB3 does not? Can we improve LB4 code to avoid so many listeners being registered?

All 46 comments

I have the same issue with PostgreSQL.

I have the same problem with PostgreSQL. I haven't tested with other databases.

The juggler uses events to manage relations internally. This is only a warning message; it does not crash the application.

https://nodejs.org/api/events.html#events_eventemitter_defaultmaxlisteners

You can increase the max listeners as a workaround for now.
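For example (a minimal sketch, not an official API or setting): since the juggler DataSource inherits from Node's EventEmitter, you can raise its listener limit yourself. In an LB3 app this could be a boot script; the "db" datasource name, the file name, and the value 30 below are assumptions for illustration. In LB4 you can call setMaxListeners() on your datasource instance in the same way.

// server/boot/increase-max-listeners.js (hypothetical file name)
module.exports = function(app) {
  // DataSource inherits from EventEmitter, so the standard Node API applies.
  // Pick any value comfortably above the number of models on the datasource.
  app.dataSources.db.setMaxListeners(30);
};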

We will investigate, increase the max limit on the event emitter, and provide a simple unit test.

Ideally, we should find out the root cause - why is LB4 registering so many event listeners and LB2/LB3 does not? Can we improve LB4 code to avoid so many listeners being registered?

Yes, fix the root cause, not treat the symptom. MaxListenersExceededWarning is an indication something sub-optimal is happening somewhere.

I'm getting the same error, and since I haven't added any special logic to the project, I believe this is an internal issue in LoopBack.
To explain more: I've created a project and added a couple of models. I've only added a single custom endpoint that creates one entry in each of two related models and returns the result.

@reyraa thank you for chiming in and confirming the problem.

What we really need now is a small application we can use to reproduce the issue on our machines. See https://loopback.io/doc/en/contrib/Reporting-issues.html#loopback-4x-bugs

@reyraa Can you share the sample app with us to help reproduce the issue?

There is a way to trace where it came from: https://github.com/nodejs/help/issues/1051

@Joel-Raju, @sbacem, @zhaoqin-github, @reyraa

If any of you can provide a repository with an application (and exact steps) that can reproduce this warning message, it would be greatly appreciated.

In an effort to create as many models as @Joel-Raju did, I have placed an application with multiple belongsTo and hasMany relations between CustomerN and OrderN (where N = 1 through 35) in https://github.com/emonddr/relations_app, but I wasn't able to reproduce the issue.

The application connects to PostgreSQL running in a Docker container.

Please see the README.md.

Also, if you still have your application showing the warning message, you can try using the --trace-warnings flag to output better information to help locate the cause. :)

For example:

node --trace-warnings app.js

@emonddr
That's the output from my application:

MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 connected listeners added. Use emitter.setMaxListeners() to increase limit
    at _addListener (events.js:280:19)
    at PostgresDbDataSource.addListener (events.js:297:10)
    at PostgresDbDataSource.once (events.js:341:8)
    at PostgreSQL.SQLConnector.execute (E:\workspace\boom\backend\node_modules\loopback-connector\lib\sql.js:626:28)
    at PostgreSQL.showFields (E:\workspace\boom\backend\node_modules\loopback-connector-postgresql\lib\migration.js:28:10)
    at PostgreSQL.SQLConnector.getTableStatus (E:\workspace\boom\backend\node_modules\loopback-connector\lib\sql.js:240:8)
    at E:\workspace\boom\backend\node_modules\loopback-connector\lib\sql.js:84:10
    at E:\workspace\boom\backend\node_modules\async\dist\async.js:3110:16
    at eachOfArrayLike (E:\workspace\boom\backend\node_modules\async\dist\async.js:1069:9)
    at eachOf (E:\workspace\boom\backend\node_modules\async\dist\async.js:1117:5)
    at Object.eachLimit (E:\workspace\boom\backend\node_modules\async\dist\async.js:3172:5)
    at PostgreSQL.SQLConnector.autoupdate (E:\workspace\boom\backend\node_modules\loopback-connector\lib\sql.js:78:9)
    at PostgresDbDataSource.DataSource.autoupdate (E:\workspace\boom\backend\node_modules\@loopback\repository\node_modules\loopback-datasource-juggler\lib\datasource.js:1117:18)
    at Object.dsUpdate (E:\workspace\boom\backend\dist\src\migrate.js:24:8)
    at <anonymous>

I hope that helps.

@emonddr Sorry for the delay. I missed the notifications.
This is what I've made. There are a few models created, but the one that throws the error is the Customer model.
As you can see, the only thing I've done is to create or find an instance of Customer and IdToken and return the values.

I had this error even before I added all the other models, although even now they shouldn't be the cause, since they are basic models generated by the CLI.

@reyraa, thank you for providing this. I will investigate.

The LB3 application provided by @reyraa has the following 12 models listed in server/model-config.json:

{
  "_meta": {
    "sources": [
      "loopback/common/models",
      "loopback/server/models",
      "../common/models",
      "./models"
    ],
    "mixins": [
      "loopback/common/mixins",
      "loopback/server/mixins",
      "../common/mixins",
      "./mixins"
    ]
  },
  "Customer": {
    "dataSource": "my_postgresql",
    "public": true
  },

  "User": {
    "dataSource": "my_postgresql"
  },
  "AccessToken": {
    "dataSource": "my_postgresql",
    "public": false
  },
  "ACL": {
    "dataSource": "my_postgresql",
    "public": false
  },
  "RoleMapping": {
    "dataSource": "my_postgresql",
    "public": false,
    "options": {
      "strictObjectIDCoercion": true
    }
  },
  "Role": {
    "dataSource": "my_postgresql",
    "public": false
  },
  "Hotel": {
    "dataSource": "my_postgresql",
    "public": true
  },
  "Country": {
    "dataSource": "my_postgresql",
    "public": true
  },
  "City": {
    "dataSource": "my_postgresql",
    "public": true
  },
  "Order": {
    "dataSource": "my_postgresql",
    "public": true
  },
  "IdToken": {
    "dataSource": "my_postgresql",
    "public": false
  },
  "Activity": {
    "dataSource": "my_postgresql",
    "public": true
  }
}

When starting the application, the warning message appears immediately.

Some of these models have hasMany or belongsTo relations; a few (like Activity) don't.

I experimented a bit.

If each model above is placed as the ONLY model in server/model-config.json and the app is started, no warning message appears.

I found that 10 models in server/model-config.json will NOT cause the warning when the app is started. Adding an 11th model to the list DOES cause the warning.

I decided to create 12 simple models (like Activity) that do not have any belongsTo or hasMany relations, and ONLY place them in server/model-config.json (none of the models provided by @reyraa).

e.g.

SimpleModel1.(js/json) through SimpleModel12.(js/json)

models/SimpleModel1.js:

'use strict';

module.exports = function(SimpleModel1) {

};

models/SimpleModel1.json:

{
  "name": "SimpleModel1",
  "plural": "SimpleModel1ies",
  "base": "PersistedModel",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "name": {
      "type": "string",
      "required": true
    },
    "color": {
      "type": "string",
      "required": true
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": {}
}

And again... 10 models are fine. But the 11th will trip the warning message.

Debugger listening on ws://127.0.0.1:7000/3c6b1e4d-097a-4d00-b55f-3e37111a7cc5
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
(node:86743) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 connected listeners added. Use emitter.setMaxListeners() to increase limit
    at _addListener (events.js:247:17)
    at DataSource.addListener (events.js:263:10)
    at DataSource.once (events.js:292:8)
    at PostgreSQL.SQLConnector.execute (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/loopback-connector/lib/sql.js:626:28)
    at PostgreSQL.showFields (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/loopback-connector-postgresql/lib/migration.js:28:10)
    at PostgreSQL.SQLConnector.getTableStatus (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/loopback-connector/lib/sql.js:240:8)
    at /Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/loopback-connector/lib/sql.js:116:10
    at /Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/async/dist/async.js:3110:16
    at eachOfArrayLike (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/async/dist/async.js:1069:9)
    at eachOf (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/async/dist/async.js:1117:5)
    at Object.eachLimit (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/async/dist/async.js:3172:5)
    at PostgreSQL.SQLConnector.isActual (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/loopback-connector/lib/sql.js:115:9)
    at DataSource.isActual (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/loopback-datasource-juggler/lib/datasource.js:1951:20)

The dependencies in the package.json are:

"dependencies": {
    "compression": "=1.0.3",
    "cors": "=2.5.2",
    "helmet": "=3.10.0",
    "jsonwebtoken": "^8.5.1",
    "loopback": "=3.22.0",
    "loopback-boot": "^2.6.5",
    "loopback-component-explorer": "=6.2.0",
    "loopback-connector-postgresql": "=3.6.1",
    "request": "^2.88.0",
    "serve-favicon": "=2.0.1",
    "strong-error-handler": "=3.0.0"
  },

I will try with a DB2 connector and see if the same thing happens.

I first tried an earlier version of the postgres connector:

"loopback-connector-postgresql": "=3.1.0",

but the outcome was the same: the warning appeared when the 11th model was added to server/model-config.json.

I tried with

"loopback-connector-db2": "=2.1.2",

and the app started without the warning using the original models with hasMany and belongsTo, and the simple models I created.

I think the root cause is https://github.com/strongloop/loopback-connector/blob/master/lib/sql.js#L626. We try to defer SQL executions (in your case, migration statements) if the datasource is not in a connected state. Even though once is used (the listener is removed once the event is triggered), there is a chance that we get a spike of listeners added to the datasource. It should be safe to increase the maximum number of listeners for the datasource, as these listeners are transient.

It seems there is different code (not common code) running when we use the loopback-connector-postgresql connector vs the loopback-connector-db2 connector. I would think the common code would add the same number of listeners regardless of the relational database used.

> It seems there is different code (not common code) running when we use the loopback-connector-postgresql connector vs the loopback-connector-db2 connector. I would think the common code would add the same number of listeners regardless of the relational database used.

It might be related to how fast the DB connection can be established and if there are SQL executions upon startup such as migration.

> It seems there is different code (not common code) running when we use the loopback-connector-postgresql connector vs the loopback-connector-db2 connector. I would think the common code would add the same number of listeners regardless of the relational database used.
>
> It might be related to how fast the DB connection can be established and if there are SQL executions upon startup such as migration.

That's the thing... I am running the same LB3 app code upon startup but using a different connector. So perhaps the database connection speed differs, or the connections are handled differently. Not sure yet.

Definitely some migration is going on for the postgres connector.

This line, https://github.com/strongloop/loopback-connector-postgresql/blob/master/lib/migration.js#L22, is responsible for the debugging output I am getting:

> DEBUG=loopback:connector:postgresql node --trace-warnings --inspect-brk=7000 .

Debugger listening on ws://127.0.0.1:7000/f8d3ad64-8e78-4c62-80cc-f5ee3ee67eb1
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
  loopback:connector:postgresql Settings {"host":"localhost","port":5432,"url":"","database":"postgres","password":"mysecretpassword","name":"my_postgresql","user":"postgres","connector":"postgresql","debug":false} +0ms
(node:90122) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 connected listeners added. Use emitter.setMaxListeners() to increase limit
    at _addListener (events.js:247:17)
    at DataSource.addListener (events.js:263:10)
    at DataSource.once (events.js:292:8)
    at PostgreSQL.SQLConnector.execute (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/loopback-connector/lib/sql.js:626:28)
    at PostgreSQL.showFields (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/loopback-connector-postgresql/lib/migration.js:28:10)
    at PostgreSQL.SQLConnector.getTableStatus (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/loopback-connector/lib/sql.js:240:8)
    at /Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/loopback-connector/lib/sql.js:116:10
    at /Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/async/dist/async.js:3110:16
    at eachOfArrayLike (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/async/dist/async.js:1069:9)
    at eachOf (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/async/dist/async.js:1117:5)
    at Object.eachLimit (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/async/dist/async.js:3172:5)
    at PostgreSQL.SQLConnector.isActual (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/loopback-connector/lib/sql.js:115:9)
    at DataSource.isActual (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/loopback-datasource-juggler/lib/datasource.js:1951:20)
    at /Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/server/boot/schemaUpdate.js:9:8
    at /Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/async/dist/async.js:3110:16
    at replenish (/Users/dremond/Documents/PRs_LoopBack/community_relations_emit/reyraa-api/node_modules/async/dist/async.js:1011:17)
  loopback:connector:postgresql SQL: SELECT column_name AS "column", data_type AS "type", datetime_precision AS time_precision, is_nullable AS "nullable", character_maximum_length as "length" FROM "information_schema"."columns" WHERE table_name='simplemodel1' and table_schema='public' +9m
  loopback:connector:postgresql SQL: SELECT column_name AS "column", data_type AS "type", datetime_precision AS time_precision, is_nullable AS "nullable", character_maximum_length as "length" FROM "information_schema"."columns" WHERE table_name='simplemodel2' and table_schema='public' +6s

Notice the warning appears before any of this migration output:

loopback:connector:postgresql SQL: SELECT column_name AS "column", data_type AS "type", datetime_precision AS time_precision, is_nullable AS "nullable", character_maximum_length as "length" FROM "information_schema"."columns" WHERE table_name='simplemodel1' and table_schema='public' +9m

No migration code runs for the db2 connector.

Observations:

/reyraa-api/server/boot/schemaUpdate.js

async.eachSeries(datasources, function(dsName, cb) {
  ...
  ds.isActual(function(err, actual) {
    ...
  });
});

which calls:

/reyraa-api/node_modules/loopback-datasource-juggler/lib/datasource.js

DataSource.prototype.isActual = function(models, cb) {
  ...
  this.connector.isActual(models, cb);
  ...
};

which calls:

/reyraa-api/node_modules/loopback-connector/lib/sql.js

SQLConnector.prototype.isActual = function(models, cb) {
  ...
  async.each(models, function(model, done) {
    self.getTableStatus(model, function(err, fields) {
      ...
    });
  });
  ...
};

which calls:

SQLConnector.prototype.getTableStatus = function(model, cb) {
  ...
  this.showFields(model, function(err, data) {
    ...
  });
};

which calls :

/reyraa-api/node_modules/loopback-connector-postgresql/lib/migration.js

PostgreSQL.prototype.showFields = function(model, cb) {
  var sql = 'SELECT column_name AS "column", data_type AS "type", ' +
    'datetime_precision AS time_precision, ' +
    'is_nullable AS "nullable", character_maximum_length as "length"' // , data_default AS "Default"'
    + ' FROM "information_schema"."columns" WHERE table_name=\'' +
    this.table(model) + '\' and table_schema=\'' +
    this.schema(model) + '\'';
===>  this.execute(sql, function(err, fields) {
    ...
};

which calls :

/reyraa-api/node_modules/loopback-connector/lib/sql.js

SQLConnector.prototype.execute = function(sql, params, options, callback) {
  ...
  var self = this;
  if (!this.dataSource.connected) {
    return this.dataSource.once('connected', function() {
      self.execute(sql, params, options, callback);
    });
  }
  ...

The anonymous callback passed to once() is added to the datasource (which inherits from EventEmitter) as a listener for the 'connected' event for EVERY model in
/community_relations_emit/reyraa-api/server/model-config.json.

If the number of models in model-config.json is greater than 10, the warning message appears.
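A standalone illustration of the threshold observed above: Node's default limit is 10 listeners per event name per emitter, and each deferred execute() adds one 'connected' listener, so the 11th model trips the warning.

// repro.js - run with `node repro.js`
const {EventEmitter} = require('events');

const ds = new EventEmitter(); // stands in for the DataSource
for (let i = 1; i <= 11; i++) {
  // the 11th once() call prints MaxListenersExceededWarning
  ds.once('connected', () => {});
}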

I think we have found out the root cause. Can we work on the fix based on https://github.com/strongloop/loopback-next/issues/2198#issuecomment-515107435 and https://github.com/strongloop/loopback-next/issues/2198#issuecomment-515108571?

One option is to modify:

https://github.com/strongloop/loopback-datasource-juggler/blob/master/lib/datasource.js#L201

copied here for convenience:

DataSource.prototype._setupConnector = function() {
  this.connector = this.connector || this.adapter; // The legacy JugglingDB adapter will set up `adapter` property
  this.adapter = this.connector; // Keep the adapter as an alias to connector
  if (this.connector) {
    if (!this.connector.dataSource) {
      // Set up the dataSource if the connector doesn't do so
      this.connector.dataSource = this;
    }
    const dataSource = this;
    this.connector.log = function(query, start) {
      dataSource.log(query, start);
    };

    this.connector.logger = function(query) {
      const t1 = Date.now();
      const log = this.log;
      return function(q) {
        log(q || query, t1);
      };
    };
    // Configure the connector instance to mix in observer functions
    jutil.mixin(this.connector, OberserverMixin);
  }
};

and change it to:

DataSource.prototype._setupConnector = function() {
  this.connector = this.connector || this.adapter; // The legacy JugglingDB adapter will set up `adapter` property
  this.adapter = this.connector; // Keep the adapter as an alias to connector
  if (this.connector) {
    if (!this.connector.dataSource) {
      // Set up the dataSource if the connector doesn't do so
      this.connector.dataSource = this;
    }
    const dataSource = this;

    let maxListeners = 10;
    if (this.settings && this.settings.maxListeners) {
      if (this.settings.maxListeners > maxListeners) {
        maxListeners = this.settings.maxListeners;
      }
    }
    dataSource.setMaxListeners(maxListeners);

    this.connector.log = function(query, start) {
      dataSource.log(query, start);
    };

    this.connector.logger = function(query) {
      const t1 = Date.now();
      const log = this.log;
      return function(q) {
        log(q || query, t1);
      };
    };
    // Configure the connector instance to mix in observer functions
    jutil.mixin(this.connector, OberserverMixin);
  }
};

Users would then add a new maxListeners property in server/datasources.json:

{
  "my_postgresql": {
    "host": "localhost",
    "port": 5432,
    "url": "",
    "database": "postgres",
    "password": "mysecretpassword",
    "name": "my_postgresql",
    "user": "postgres",
    "connector": "postgresql",
    "maxListeners": 12
  }
}

@raymondfeng, what are your thoughts?

maxListeners exposes implementation details. In our case, it's really the maximum number of requests that can be queued on the datasource before it's connected. I suggest that we use something like maxOfflineRequests.

  1. We should increase the default value to a higher number, such as 16.
  2. We probably should reject a request if the number of pending ones exceeds the max limit.

> maxOfflineRequests

Since the datasource is an event emitter, setting the max listeners means setting maxListeners for every event name that the datasource emits, and I think it emits several event names. What would 'maxOfflineRequests' mean exactly?
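For reference, a small standalone snippet of the Node semantics in question: setMaxListeners() sets a single limit on the emitter, and the warning check is applied separately to each event name.

const {EventEmitter} = require('events');

const ds = new EventEmitter();
ds.setMaxListeners(20);
for (let i = 0; i < 15; i++) ds.once('connected', () => {});
for (let i = 0; i < 15; i++) ds.once('error', () => {});
// no warning: 'connected' and 'error' each stay below the limit of 20
console.log(ds.listenerCount('connected'), ds.listenerCount('error')); // 15 15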

Based on the analysis in https://github.com/strongloop/loopback-next/issues/2198#issuecomment-516153206, it's my understanding that we start automigration before the connection is set up. During automigration, we check whether model tables are actual (getTableStatus) and we run this check in parallel (one check for each model, see the async.each call). getTableStatus calls into lower-level methods and eventually we end up in execute. Now, because we are not connected yet, execute calls this.dataSource.once('connected'). This happens once for each model, because we are running everything in parallel.

Instead of tweaking maxListeners settings, I'd like us to fix automigrate/autoupdate to wait for the connection to be established before firing a bunch of getTableStatus calls in parallel.

In fact, I have recently made such a change in https://github.com/strongloop/loopback-datasource-juggler/pull/1756:

I would hope that change has fixed the problem with maxListeners too.

@emonddr could you please verify?

(IMO, we should not need to install N event listeners when migrating N models - a single listener should be enough.)
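A minimal sketch of that approach (the helper name below is hypothetical, and this is not the actual change from loopback-datasource-juggler PR #1756): register a single 'connected' listener for the whole migration run instead of one listener per model.

const async = require('async');

function checkModelsWhenConnected(dataSource, connector, models, cb) {
  if (!dataSource.connected) {
    // one listener for the entire batch, regardless of how many models exist
    return dataSource.once('connected', () =>
      checkModelsWhenConnected(dataSource, connector, models, cb));
  }
  // connected: now it is safe to fan out the per-model table checks in parallel
  async.each(models, (model, done) => connector.getTableStatus(model, done), cb);
}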

@bajtos Please note we defer the SQL execution for each statement when the datasource is not connected - see https://github.com/strongloop/loopback-connector/blob/master/lib/sql.js#L626. This code cannot coordinate multiple operations with one listener.

Of course, we can optimize the migration by checking the datasource's connected flag first to avoid multiple listeners. But we still have to fix the listener issue caused by https://github.com/strongloop/loopback-connector/blob/master/lib/sql.js#L626.

I have confirmed that

https://github.com/strongloop/loopback-connector/blob/master/lib/sql.js#L626

if (!this.dataSource.connected) {
  ====>  return this.dataSource.once('connected', function() {
      self.execute(sql, params, options, callback);
    });
  }

is never entered when migration is happening in an LB4 application.

By the time that migration-related SQL is executed, the datasource IS connected, and no listeners are added for the 'connected' event. So we never run into the warning.

I tried running the migration:

  • from the command line, e.g. npm run migrate, which uses a pre-generated migrate.ts file:
import {RelationsApp} from './application';

export async function migrate(args: string[]) {
  const existingSchema = args.includes('--rebuild') ? 'drop' : 'alter';
  console.log('Migrating schemas (%s existing schema)', existingSchema);

  const app = new RelationsApp();
  await app.boot();
  await app.migrateSchema({existingSchema});

  // Connectors usually keep a pool of opened connections,
  // this keeps the process running even after all work is done.
  // We need to exit explicitly.
  process.exit(0);
}

migrate(process.argv).catch(err => {
  console.error('Cannot migrate database schema', err);
  process.exit(1);
});
  • inside the application, e.g.:
export async function main(options: ApplicationConfig = {}) {
  const app = new RelationsApp(options);
  await app.boot();
  await app.migrateSchema();
  await app.start();

  const url = app.restServer.url;
  console.log(`Server is running at ${url}`);
  console.log(`Try ${url}/ping`);

  return app;
}

and never encountered the warning, despite my app having many, many models.

> Based on the analysis in #2198 (comment), it's my understanding that we start automigration before the connection is set up. During automigration, we check whether model tables are actual (getTableStatus) and we run this check in parallel (one check for each model, see the async.each call). getTableStatus calls into lower-level methods and eventually we end up in execute. Now, because we are not connected yet, execute calls this.dataSource.once('connected'). This happens once for each model, because we are running everything in parallel.
>
> Instead of tweaking maxListeners settings, I'd like us to fix automigrate/autoupdate to wait for the connection to be established before firing a bunch of getTableStatus calls in parallel.
>
> In fact, I have recently made such a change in strongloop/loopback-datasource-juggler#1756:
>
> I would hope that change has fixed the problem with maxListeners too.
>
> @emonddr could you please verify?

@bajtos @raymondfeng, the particular change you made was in the master branch of loopback-datasource-juggler and didn't make it into the 3.x branch of loopback-datasource-juggler.

@reyraa kindly provided his LB3 application, and his server/boot/schemaUpdate.js looks like this:
https://github.com/reyraa/reyraa-api/blob/develpment/server/boot/schemaUpdate.js#L5

Not certain if this is a typical schema update for an LB3 application. @raymondfeng @bajtos, can you confirm?

Regardless, with @b-admike's help, I created my own local branch (using the 3.x branch as a base and cherry-picking your commit).

Using this code (with your commit), I still get the warning about max listeners.

When I add my suggested code change to datasource.js mentioned earlier, I do NOT get the warning anymore.

> Not certain if this is a typical schema update for an LB3 application. @raymondfeng @bajtos, can you confirm?

Replying to my own question.

https://loopback.io/doc/en/lb3/Creating-a-database-schema-from-models.html mentions this code:

dataSource.isActual(models, function(err, actual) {
  if (!actual) {
    dataSource.autoupdate(models, function(err, result) {
      // ...
    });
  }
});

which is identical to https://github.com/reyraa/reyraa-api/blob/develpment/server/boot/schemaUpdate.js#L5.

So it looks like this follows a recommended approach. Not sure why @bajtos's commit mentioned earlier doesn't prevent migration from occurring before the datasource is connected.

> @emonddr By the time that migration-related SQL is executed, the datasource IS connected, and no listeners are added for the 'connected' event. So we never run into the warning.

It only happens when the DB connection takes time to establish, for example, a remote DB over a slow network. Did you test against a local DB?

In general, DB operations such as data loading, schema migration, and data seeding can be performed upon the startup of an application. Using a 'connected' listener to defer DB calls is a workaround for the fact that we don't use async/await in LB3.
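A sketch of the async/await shape being alluded to (roughly what LB4's migrateSchema flow achieves). It assumes the datasource methods (connect, isActual, autoupdate) return promises when no callback is passed, which newer juggler versions support; treat that as an assumption, not a guarantee for every version.

async function updateSchema(dataSource, models) {
  await dataSource.connect();            // resolves only once the datasource is connected
  const actual = await dataSource.isActual(models);
  if (!actual) {
    // no deferred 'connected' listeners are needed at this point
    await dataSource.autoupdate(models);
  }
}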

> @emonddr By the time that migration-related SQL is executed, the datasource IS connected, and no listeners are added for the 'connected' event. So we never run into the warning.
>
> It only happens when the DB connection takes time to establish, for example, a remote DB over a slow network. Did you test against a local DB?

@raymondfeng, I am using a docker image for PostgreSQL. So yes, it is on my local laptop.

docker pull postgres

docker run --name my-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres

So I've created a PR to merge into the 3.x branch:
https://github.com/strongloop/loopback-datasource-juggler/pull/1766

I cherry-picked this commit from master:

Miroslav's commit for automigrate and autoupdate
https://github.com/strongloop/loopback-datasource-juggler/pull/1756/commits/40286fcd28cc0119e8530d56ff0d82bc55f4517d

Other commits of interest:
Miroslav fixed a typo from the commit above:
https://github.com/strongloop/loopback-datasource-juggler/commit/dd7167b533c620824252aeb827fa662054e5d4ce

I didn't cherry-pick this one; I just fixed the typo in my PR. I also needed to delete a test suite for the new DataSource.execute capability.

@dhmlau mentioned to me afterwards that we usually commit to master and then cherry-pick back to 3.x. We can delete this PR and do that too.

I am just wondering what test I can create to verify this fix. I was using @reyraa's application to test it.

> We probably should reject a request if the number of pending ones exceeds the max limit.

I think this is unnecessary complexity. There is also a timer that removes the 'connected' listeners, so we may run into timing issues when trying to obtain an accurate count of how many are queued.

If we simply have the code suggested above, the only problem the user would face is the same warning message concerning max listeners, and then he/she only needs to bump up the number. My 2 cents.

Create a mock connector from the SQL connector and artificially defer the connection.
Then the mocha test can start sending SQL queries to the connector. Use the events API to determine the number of listeners for the 'connected' event; the numbers should line up.
And once the connection is made, a timer empties out the listener list; we check that it is down to zero.
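A rough mocha sketch of that idea, using a plain EventEmitter standing in for the datasource and a fake execute() that defers work until 'connected' fires (the names here are hypothetical, not the juggler/connector test utilities):

const {EventEmitter} = require('events');
const assert = require('assert');

describe('deferred execution while disconnected', () => {
  it('queues one listener per deferred query and drains them on connect', () => {
    const dataSource = new EventEmitter();
    dataSource.connected = false;
    dataSource.setMaxListeners(32); // keep the warning out of the test output

    // simplified stand-in for SQLConnector.prototype.execute
    function execute(sql, cb) {
      if (!dataSource.connected) {
        return dataSource.once('connected', () => execute(sql, cb));
      }
      cb(null, 'ran: ' + sql);
    }

    const results = [];
    for (let i = 0; i < 12; i++) {
      execute('SELECT ' + i, (err, res) => results.push(res));
    }
    // one 'connected' listener was queued per deferred query
    assert.strictEqual(dataSource.listenerCount('connected'), 12);

    dataSource.connected = true;
    dataSource.emit('connected');

    // all queued queries ran and the listener list drained back to zero
    assert.strictEqual(results.length, 12);
    assert.strictEqual(dataSource.listenerCount('connected'), 0);
  });
});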

We should emit a new error in here,
https://github.com/strongloop/loopback-connector/blob/master/lib/sql.js#L626,
if the number of listeners for the 'connected' event has reached the limit,
and send back callback(err) where err is the new error.
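A sketch of what that guard could look like (a hypothetical helper, not the actual loopback-connector code): instead of queueing yet another 'connected' listener once the limit is reached, fail fast with an error.

function deferOrReject(dataSource, runQuery, callback) {
  if (dataSource.connected) return runQuery(callback);

  const max = dataSource.getMaxListeners();
  if (dataSource.listenerCount('connected') >= max) {
    const err = new Error(
      'Too many requests are queued while the datasource is connecting; ' +
      'increase maxListeners or wait for the connection to be established.');
    return process.nextTick(() => callback(err));
  }
  // still below the limit: defer the query until the datasource connects
  dataSource.once('connected', () => runQuery(callback));
}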
