In Winston 3, the logging functions no longer take a callback. Previously this could be used to wait until logging had completed:
logger.info('some message', { foo: 42 }, callback); // Winston 2.x
Winston 3 does allow you to listen to the `logged` event on your transports, but that doesn't give me any easy way to tell when _this message that I am currently logging_ has completed. Since everything is async, the next `logged` event that occurs after you write your message may not be related to the message you just logged. And it's more complex if you are listening for the event on multiple transports.
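To make the ambiguity concrete, here's a minimal sketch (assuming a transport that emits a `logged` event per entry, which is the convention winston transports follow):

const winston = require('winston');

const consoleTransport = new winston.transports.Console();
const logger = winston.createLogger({ transports: [consoleTransport] });

consoleTransport.on('logged', (info) => {
  // Fires for every entry this transport handles; nothing here ties the
  // event back to a specific logger.info(...) call site, and with several
  // transports you would need a handler like this on each of them.
});

logger.info('message A');
logger.info('message B'); // which 'logged' event belongs to which call?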
That makes Winston difficult to use in certain environments. For example, in AWS Lambda we set the `callbackWaitsForEmptyEventLoop` parameter to `false` (documentation). (Sidenote: you do this if you have a database connection or something that cannot be `unref`ed, because otherwise Lambda will never freeze your process.) If set to `false`, Lambda freezes your process as soon as you return results to the caller and may even terminate your process. If your logging transport hasn't finished writing by the time that happens, then you either lose logs or (worse) the logs get written later when Lambda unfreezes your process.
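For reference, the setting in question is just a flag on the handler's context object; a minimal sketch (`queryDatabase` is a hypothetical helper standing in for whatever holds the event loop open):

exports.handler = (event, context, callback) => {
  // Freeze the process as soon as callback() fires, instead of waiting
  // for open handles (e.g. a pooled DB connection) to drain.
  context.callbackWaitsForEmptyEventLoop = false;
  queryDatabase(event).then(result => callback(null, result), callback);
};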
TL;DR: Our Lambda process would normally await (or the callback equivalent) on Winston before returning results to the caller, guaranteeing that logging is done before the process is frozen.
Is there another way you'd recommend to detect when logging is complete?
We're using a different solution ourselves, but that doesn't make this a non-issue. Thanks for bringing it up.
EDIT: this does not work
============
I'm not sure if this is the best way, nor whether it works in all cases, but so far we are handling this by doing the following:
I've created a `wait-for-logger.js` function as such:
function waitForLogger(logger) {
return new Promise((resolve) => {
logger.on('close', resolve);
logger.close();
});
}
module.exports = waitForLogger;
At the end of our handler, we wait for the function output before returning:
const logger = require('./logger');
const waitForLogger = require('./wait-for-logger');
async function handler(event, context) {
// your functionality
logger.log('info', 'This should make it through all the transports');
// ...
await waitForLogger(logger);
}
We are using the nodejs8.10 Lambda runtime - if you are using an older runtime, you can probably do something like the following:
function handler(event, context, callback) {
// your functionality
logger.log('info', 'This should make it through all the transports');
// ...
waitForLogger(logger).then(() => callback(null));
}
Thanks @dpraul. One of our transports logs to an external service, and that could take a few extra ms. Will `logger.close()` wait for that to complete?
@natesilva I thought about this some more. In winston@3, both the `Logger` and the `Transport` are Node.js streams, i.e. they have an `.end` method and emit a `'finish'` event. Therefore in theory this _should_ work, but I have not tested it yet:
const winston = require('winston');
const logger = winston.createLogger({
transports: [
new winston.transports.Console()
]
});
process.on('exit', function () {
console.log('Your process is exiting');
});
logger.on('finish', function () {
console.log('Your logger is done logging');
});
logger.log('info', 'Hello, this is a raw logging event', { 'foo': 'bar' });
logger.log('info', 'Hello, this is a raw logging event 2', { 'foo': 'bar' });
logger.end();
Verified:
{"foo":"bar","level":"info","message":"Hello, this is a raw logging event"}
{"foo":"bar","level":"info","message":"Hello, this is a raw logging event 2"}
Your logger is done logging
Your process is exiting
Going to add an example with this as a way to resolve the issue. Please feel free to reopen if this does not work for you in Lambda. Functions-as-a-service platforms are 100% in `winston`'s target user base, so we care about this working correctly.
Would be great if you could contribute an end-to-end AWS Lambda example using `winston` if you have the time, @natesilva.
I am reviewing this as we speak, and I believe I have verified that neither of these methods works. I'll try to share a full test case here shortly.
const winston = require('winston');
class CustomTransport extends winston.Transport {
log({ message }, cb) {
setTimeout(() => {
console.log(`custom logger says: ${message}`);
cb(null);
}, 3000);
}
}
const logger = winston.createLogger({
transports: [
new winston.transports.Console(),
new CustomTransport(),
],
});
logger.on('finish', () => {
console.log('done!');
});
logger.log('info', 'here is some content');
logger.end();
Expected output:
{"level":"info","message":"here is some content"}
custom logger says: here is some content
done!
Actual output:
{"level":"info","message":"here is some content"}
done!
custom logger says: here is some content
The `finish` event is firing before all the transports have finished. The same behavior occurs when using `.on('close', fn)` and calling `.close()` at the end.
Sorry, I deleted my comment. My understanding of these stream events is limited, but I can see how `finish` might fire at the wrong time, per @dpraul's example.
I've gone through the same example with the `end`, `finish`, and `close` events, utilizing both `stream.Transform.close()` and `stream.Transform.end()`. All have the same result: the events trigger before `CustomTransport` has called its callback.
Might there be some issue in how events are passed between the `Transport`s and the `Logger`?
Regardless, @natesilva or @indexzero, you may want to reopen this issue.
@dpraul that seems plausible, but I don't see where that would be:
- `Transport` in `winston-transport` implements the `_write` method to fulfill Node.js' `Writable` interface. This receives the `callback` and passes it to the `log` method.
- The `Console` transport in `winston` calls `process.{stderr,stdout}.write` and then invokes the `callback`.
Afaik writes to `process.stdout` and `process.stderr` are synchronous operations, so there _shouldn't_ be any potential risk of back pressure. Clearly there is something being missed in this flow. Going to chat with some Node.js folks and see what I can dig up.
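For readers following along, the flow being described looks roughly like this (a paraphrased sketch, not the actual winston-transport source):

const { Writable } = require('stream');

class TransportSketch extends Writable {
  constructor(opts) {
    super({ objectMode: true, ...opts });
  }
  _write(info, encoding, callback) {
    // The stream's callback is handed straight to log(), so the transport
    // controls when the write is considered complete.
    this.log(info, callback);
  }
  log(info, callback) {
    // Console-style transport: synchronous stdout write, then callback.
    process.stdout.write(JSON.stringify(info) + '\n');
    callback();
  }
}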
I still don't know what's causing the issue, but I found a workaround that waits for all the transports to finish!
const winston = require('winston');
class CustomTransport extends winston.Transport {
log({ message }, cb) {
setTimeout(() => {
console.log(`custom logger says: ${message}`);
cb(null);
}, 3000);
}
}
const logger = winston.createLogger({
transports: [
new winston.transports.Console(),
new CustomTransport(),
],
});
async function waitForLogger(l) {
const transportsFinished = l.transports.map(t => new Promise(resolve => t.on('finish', resolve)));
l.end();
return Promise.all(transportsFinished);
}
logger.log('info', 'here is some content');
waitForLogger(logger).then(() => console.log('all done!'));
Output:
{"level":"info","message":"here is some content"}
custom logger says: here is some content
all done!
It seems that `logger.end()` properly propagates to the transports, but fires the `finish` event before all the transports have fired theirs. The workaround is to wait on each `Transport` to fire its own `finish` event and not rely on the `Logger` at all.
Thanks for continuing to investigate @dpraul. The `logger` is piped to all of the transports; it was my understanding that when the logger is closed, it would not emit `finish` until all of its pipe targets had, but I may be mistaken about that.
Going to verify with some folks.
Thanks to some input from @mcollina, @davidmarkclements & @mafintosh, I learned that my understanding of `pipe` semantics was flawed:
- `'error'` and `'finish'` events do not back propagate through the pipe chain.
- `'error'` events cause an unpipe to occur.
Going to explore how to either back propagate these events or expose an explicit method on `Logger` that implements the technique you're using now @dpraul (maybe using end-of-stream).
@natesilva going to leave this issue open until the explicit fix is in, but in the meantime you should be able to use the technique above:
1. Listen for `'finish'` events on all transports.
2. Call `logger.end()`.
3. Once all `'finish'` events have been emitted, you are done logging.
Important modules that might be handy: pump and end-of-stream.
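A sketch of those steps using end-of-stream (untested here; it assumes `logger.transports` is the array of transport streams, as in the workaround above):

const eos = require('end-of-stream');

function waitForLogger(logger) {
  const transportsDone = logger.transports.map(
    t => new Promise((resolve, reject) => {
      eos(t, err => (err ? reject(err) : resolve()));
    })
  );
  logger.end();
  return Promise.all(transportsDone);
}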
@mcollina theoretically, from a Node.js streams perspective, we should be able to implement `_final` on the logger to force `finish` events to be emitted later, right?
@indexzero yes, that would be correct.
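For anyone curious, the shape of that idea is roughly the following (a sketch only, using a plain Writable for simplicity; winston's Logger is actually a Transform, and this is not winston's implementation):

const { Writable } = require('stream');

class LoggerSketch extends Writable {
  constructor(transports) {
    super({ objectMode: true });
    this.transports = transports; // array of writable streams, for illustration
  }
  _write(info, enc, callback) {
    this.transports.forEach(t => t.write(info)); // fan each entry out
    callback();
  }
  _final(callback) {
    // 'finish' on this logger is held back until callback() is invoked,
    // so end every transport and wait for their 'finish' events first.
    const pending = this.transports.map(
      t => new Promise(resolve => t.once('finish', resolve))
    );
    this.transports.forEach(t => t.end());
    Promise.all(pending).then(() => callback());
  }
}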
Tell me please, in winston@3 can I not use callbacks like in version 2? (There is no answer in the documentation.)
// ... check the connection to the DB ... and if it fails:
logger.error('message', function() {
process.exit(1);
});
And if I don't use a callback, my app closes faster than the message can be logged =(
@TuxGit FYI this has been documented in UPGRADE-3.0 (see: the PR code).
@natesilva @dpraul PR is open to address this now: https://github.com/winstonjs/winston/pull/1346
I believe this is still broken. Although the `finish` event is now propagating correctly and is being awaited, at least in the `File` transport the underlying `WritableStream` has not finished flushing its write to disk yet.
I believe that the `PassThrough` stream is instead having its `finish` event emitted, which is why it's not awaiting the writes.
Never mind - the `PassThrough` stream is fine, but listening to the `finish` event doesn't work.
I can confirm that wiring up `end-of-stream` and using that to listen to the `transport._stream` does indeed work.
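Something like this, presumably (a sketch; note that `transport._stream` is an internal, undocumented property of the File transport, so this leans on implementation details that may change between versions):

const eos = require('end-of-stream');

function waitForFileTransport(transport) {
  return new Promise((resolve, reject) => {
    eos(transport._stream, err => (err ? reject(err) : resolve()));
  });
}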
Any update on this? I am trying `logger.on('finish', () => process.exit(0))`, but it does not seem to exit. It just exits after the timeout expires.
I am having the same issue when using the Console transport (haven't tried File, to be honest). It seems that `Logger` extends `stream.Transform` from the `readable-stream` module, and the `_final` method is not getting called in `_stream_transform`, only in `_stream_writable`. I really didn't want to dwell on it since this is my first time using any of these modules, and I apologize if I misunderstood something.
../lib/winston $ grep -n _final *.js
logger.js:249: _final(callback) {
.../readable-stream/lib $ grep -n _final *.js
_stream_writable.js:130: // if _final has been called
_stream_writable.js:276: if (typeof options.final === 'function') this._final = options.final;
_stream_writable.js:601: stream._final(function (err) {
_stream_writable.js:613: if (typeof stream._final === 'function') {
Add a custom transport, check the log's level, and add custom logic for the respective level. This is not the equivalent of doing something in a callback, but if you want to tweak and send some logs to some other destination like Airbrake, this would help.
const Transport = require('winston-transport');
class YourCustomTransport extends Transport {
log(info, callback) {
setImmediate(() => {
this.emit('logged', info);
});
if (info.level === 'error') {
console.log('DO WHATEVER YOU WANT HERE.....', info.message);
}
callback();
}
}
module.exports = YourCustomTransport;
And in the file where you create your custom winston logger using `winston.createLogger(...)`, include your `CustomTransport` in the config, in the `transports` option's array:
transports: [
new winston.transports.Console({ silent: isTestEnvironment }),
new CustomTransport(),
],
Hello, I've used the recommended code below, but it does not finish writing the files before closing the process. I do see the errors written to the Console. If I don't call `process.exit`, the files do write correctly. I see this issue is closed, but has it been resolved?
logger.log('info', 'some message');
logger.on('finish', () => process.exit());
logger.end();
Does anybody have this working correctly? Any ideas?
@tsaockham
I have a solution :)
function waitForLogger() {
return new Promise(resolve => setTimeout(resolve, 2500));
}
After 2h of searching for a solution I implemented this; even the `waitForLogger` function by @dpraul didn't work. I am done.
I keep getting alerts on this thread, so figure I'll share why I think this still isn't working for a lot of people. Background: our use-case (and the context wherein this issue was opened) is AWS Lambda, so what I talk about here only applies there.
Lambda runs Node under the Lambda Execution Context. Pertinent information being:
After a Lambda function is executed, AWS Lambda maintains the execution context for some time in anticipation of another Lambda function invocation.
and
- Any declarations in your Lambda function code (outside the handler code, see Programming Model) remains initialized, providing additional optimization when the function is invoked again.
i.e. to speed up launches, Lambda "freezes" and "thaws" environments instead of fully shutting them down. To do this, Lambda doesn't wait for Node to exit, it waits for your handler function to exit. Any asynchronous processes are run outside of the handler function, and thus are apt to be frozen with the rest of the Lambda Execution Context if they are not waited for.
Now let's look at the suggested solution for waiting for Winston - adapted from UPGRADE-3.0.md, assuming we're running in Lambda:
async function lambdaHandler(event, context) {
const logger = winston.createLogger({
transports: [
new winston.transports.Console(),
new CustomAsyncTransport(),
],
});
logger.log('info', 'some message');
logger.on('finish', () => process.exit());
logger.end();
}
Spot the problem? `logger` fires `logger.end()` within the handler function context, but the function fired by `logger.on('finish')` runs outside the handler context. Any async processes tied up by `CustomAsyncTransport` will stall the `finish` event from firing, making it likely that the Execution Context freezes before that event fires.
To solve this, `lambdaHandler` must wait for the logger to exit before resolving:
async function lambdaHandler(event, context) {
const logger = winston.createLogger({
transports: [
new winston.transports.Console(),
new CustomAsyncTransport(),
],
});
const loggerFinished = new Promise(resolve => logger.on('finish', resolve));
logger.log('info', 'some message');
logger.end();
await loggerFinished;
}
Since `lambdaHandler` doesn't exit until `logger` fires the `finish` event, our `CustomAsyncTransport` should close before our Lambda handler, saving those processes from being frozen (assuming the `finish` event is correctly implemented by @indexzero).
This can be generalized to something similar to the code I've shared previously:
async function waitForLogger(logger) {
const loggerDone = new Promise(resolve => logger.on('finish', resolve));
// alternatively, use end-of-stream https://www.npmjs.com/package/end-of-stream
// although I haven't tested this
// const loggerDone = new Promise(resolve => eos(logger, resolve));
logger.end();
return loggerDone;
}
async function lambdaHandler(event, context) {
const logger = winston.createLogger({
transports: [
new winston.transports.Console(),
new CustomAsyncTransport(),
],
});
logger.log('info', 'some message');
await waitForLogger(logger);
}
Hope this helps some people.
I guess this doesn't work for the default logger? `winston.on()` isn't a function.
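One way around that (a sketch, not an official recommendation): create an explicit logger instance instead of using the module-level default, so you have a stream to attach listeners to:

const winston = require('winston');

const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
});

logger.on('finish', () => console.log('done logging')); // works on an instance
logger.info('hello');
logger.end();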
@dpraul's solution worked for me.
Adapted it a little bit:
async close() {
  return new Promise((resolve, reject) => {
    logger.on('close', () => resolve())
    logger.on('error', err => reject(err))
    logger.close()
  })
}
Just to note, it's good practice to set up your error handler at the same time as you create your `logger`. Errors can be fired at any time; if you only listen for them when you close the logger, you will likely miss them.
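In other words, something like this at creation time (a sketch):

const winston = require('winston');

const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
});

// Attach the error handler immediately, not at shutdown, so transport
// errors emitted mid-run are not silently dropped.
logger.on('error', err => console.error('logging error:', err));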
I want to use winston3 under Node.js 8, logging to MariaDB with the sqlTransport module. For this, I adapted the sqlTransport for Winston3 according to docs/transports.md.
Winston3 caused the program to hang at the end.
I experimented with various proposals from this issue and came to the following conclusion:
There must be a close() method implemented in the sqlTransport module that closes the module (i.e. the client inside) correctly.
Note: do not implement an end() method! - I really don't know why.
Now, logger.end() is sufficient to close the program correctly. Avoid logger.close(), which causes an immediate abort of the logger and data loss.
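To illustrate the conclusion, a sketch of a hypothetical SQL transport with such a close() method (`opts.client`, `insertLog`, and `end` on the client are stand-ins for whatever your DB driver provides):

const Transport = require('winston-transport');

class SqlTransportSketch extends Transport {
  constructor(opts) {
    super(opts);
    this.client = opts.client; // hypothetical DB client handle
  }
  log(info, callback) {
    this.client.insertLog(info, callback); // hypothetical insert call
  }
  close() {
    // winston-transport invokes close(), if defined, when the transport is
    // detached from the logger - release the connection here so the
    // process can exit.
    this.client.end();
  }
}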
If an error handler is required at the end of the program, you can use
function waitForLogger(myLogger) {
const loggerDone = new Promise((resolve, reject) => {
myLogger.on('finish', () => { return resolve() })
myLogger.on('error', err => { return reject(err) })
})
myLogger.end();
return loggerDone;
}
and
waitForLogger(logger).then(() => { console.log('all done!')}).catch((err) => { console.error(err)});
I still can't get this to work. I'm using winston 3.1.0 and the Slack transport winston-slack-webhook-transport, which makes an HTTP request. I started receiving this error:
Error: write after end
at writeAfterEnd (/var/task/node_modules/winston/node_modules/readable-stream/lib/_stream_writable.js:257:12)
at DerivedLogger.Writable.write (/var/task/node_modules/winston/node_modules/readable-stream/lib/_stream_writable.js:301:21)
at DerivedLogger.(anonymous function) [as error] (/var/task/node_modules/winston/lib/winston/create-logger.js:81:14)
at errorHandler (/var/task/error/error-handler.js:27:36)
at Layer.handle_error (/var/task/node_modules/express/lib/router/layer.js:71:5)
at trim_prefix (/var/task/node_modules/express/lib/router/index.js:315:13)
at /var/task/node_modules/express/lib/router/index.js:284:7
at Function.process_params (/var/task/node_modules/express/lib/router/index.js:335:12)
at Immediate.next (/var/task/node_modules/express/lib/router/index.js:275:10)
at Immediate._onImmediate (/var/task/node_modules/express/lib/router/index.js:635:15)
EDIT:
I noticed now that I must create the logger inside the handler; if I don't, the Lambda may reuse the context and try to use an "ended" logger instance.
I'm getting the same error as @iudelsmann. Is it possible to check whether the logger is already closed?
With AWS API Gateway, we're able to log 1-2 messages, but after `logger.end();` it's still trying to use the same logger and we're getting:
UnhandledPromiseRejectionWarning: Error: write after end
at writeAfterEnd (/var/task/node_modules/winston/node_modules/readable-stream/lib/_stream_writable.js:257:12)
at DerivedLogger.Writable.write (/var/task/node_modules/winston/node_modules/readable-stream/lib/_stream_writable.js:301:21)
at DerivedLogger.log (/var/task/node_modules/winston/lib/winston/logger.js:223:12)
We're using Papertrail logging with winston-syslog.
Here is the code.
const winston = require('winston')
require('winston-syslog')
const hostname = require("os").hostname()
const options = {
host: PAPERTRAIL_URL,
port: PAPERTRAIL_PORT,
app_name: PAPERTRAIL_LOG_CHANNEL,
localhost: hostname
}
const ptTransport = new winston.transports.Syslog(options)
const logger = winston.createLogger({
transports: [ptTransport],
format: winston.format.combine(
winston.format.colorize({
all: true
}),
winston.format.simple()
)
})
@hdpa that's probably because your function is reused after you close it; our current solution is this:
// logger.js
let logger;
const recreateLogger = () => {
  logger = winston.createLogger({
    ....
  });
};
const getLogger = () => {
if (logger && !logger.writable) {
recreateLogger();
}
return logger;
}
const closeLogger = async () => {
const loggerClosed = new Promise(resolve => logger.on('finish', resolve));
// https://github.com/winstonjs/winston/issues/1250
logger.end();
return loggerClosed;
}
Then in our function we just call `getLogger()` to get the logger, which automatically reopens it if it's closed, and at the end of the function we use `closeLogger()` to make sure it flushes all the logs to CloudWatch.
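So a handler using it looks roughly like this (a sketch; `doWork` is hypothetical business logic):

exports.handler = async (event) => {
  const logger = getLogger(); // recreated automatically if no longer writable
  logger.info('handling event');
  const result = doWork(event);
  await closeLogger(); // flush all transports before Lambda freezes us
  return result;
};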
Implemented the above changes but still getting the same issue. I printed the logger object in the console. Below is the response for the first and second requests:
DerivedLogger { _readableState: ReadableState { objectMode: true, highWaterMark: 16, buffer: BufferList { head: null, tail: null, length: 0 }, length: 0, pipes: Syslog { _writableState: [Object], writable: true, domain: null, _events: [Object], _eventsCount: 6
DerivedLogger { _readableState: ReadableState { objectMode: true, highWaterMark: 16, buffer: BufferList { head: null, tail: null, length: 0 }, length: 0, pipes: null, pipesCount: 0, flowing: false, ended: true, endEmitted: false, reading: false, sync: false, needReadable:
Error: write after end at writeAfterEnd (/var/task/node_modules/winston/node_modules/readable-stream/lib/_stream_writable.js:257:12) at DerivedLogger.Writable.write (/var/task/node_modules/winston/node_modules/readable-stream/lib/_stream_writable.js:301:21) at DerivedLogger.(anonymous function) [as error] (/var/task/node_mod
Looks like the Syslog transport is being removed after the end method, and that's where we're getting an error.
Forget my post; it seems that on the first request it does work, but on the second one something just times out 😕 It would be good to just wait for it to flush instead of closing it.
Oh, the problem with the above is that I try to close an already-closed logger, so the promise never resolves. This fixes it:
const closeLogger = async () => {
if (logger && !logger.writable) {
// If it's already closed don't try to close it again
return Promise.resolve();
}
const loggerClosed = new Promise(resolve => logger.on('finish', resolve));
// https://github.com/winstonjs/winston/issues/1250
logger.end();
return loggerClosed;
}
Issue #1081 refers to this thread, but there is no real solution to CloudWatch logs stripping off the request ID when using logger transports. The only way I was able to include the requestID is as below, using the Lambda `context` object -
createLogger({
defaultMeta: { service: process.env.AWS_LAMBDA_FUNCTION_NAME,
requestID: context.awsRequestId
},
transports: [
new transports.Console({
level: 'debug',
format: format.combine(
format.timestamp({
format: 'YYYY-MM-DD HH:mm:ss'
}),
format.errors({ stack: true }),
format.splat(),
format.json()
)
})
]
});
However, this method doesn't work if your `Logger.js` happens to be outside of your Lambda handler, for example in a Lambda Layer. Apparently, `context` isn't accessible from imports, so if you have your Logger definition included in a Layer and you import it in your Lambda(s), then you have to inject `defaultMeta` into the `logger` object inside of the handler function.
PS: I feel defining a Logger per individual Lambda function is a terrible idea, thus the usage of Layers.
const logger = require('./opt/nodejs/Logger'); //import from Lambda Layer
exports.handler = async (event, context) => {
//inject defaultMeta
logger.log.defaultMeta = { service: process.env.AWS_LAMBDA_FUNCTION_NAME,
requestID: context.awsRequestId };
logger.log.info(`This is just an info message`);
logger.log.error("This is an error message");
await logger.waitForLogger(logger.log);
return event;
};
`logger.end()` works only for the first function call. The second call leads to an error:
{
"errorType": "Error",
"errorMessage": "write after end",
"code": "ERR_STREAM_WRITE_AFTER_END",
"stack": [
"Error: write after end",
" at writeAfterEnd (/var/task/node_modules/readable-stream/lib/_stream_writable.js:257:12)",
" at DerivedLogger.Writable.write (/var/task/node_modules/readable-stream/lib/_stream_writable.js:301:21)",
" at DerivedLogger.(anonymous function) [as info] (/var/task/node_modules/winston/lib/winston/create-logger.js:81:14)",
" at Runtime.handler (/var/task/lambda_logger.js:65:12)",
" at Runtime.handleOnce (/var/runtime/Runtime.js:66:25)"
]
}
I use `winston` as a decorator for the Lambda handler:
// lambda_logger.js
const winston = require('winston')
const WinstonCloudWatch = require('winston-cloudwatch')
const { format } = winston
async function closeLogger(logger) {
const transportsFinished = logger.transports.map(
t => new Promise(resolve => t.on('finish', resolve))
)
logger.end()
return Promise.all(transportsFinished)
}
const decorator = (wrapped, loggerName = null) => {
return async (event, context, callback) => {
const LAMBDA_LOGGER_NAME =
loggerName || `${process.env.AWS_LAMBDA_FUNCTION_NAME}-lambda-logger`
winston.loggers.add(LAMBDA_LOGGER_NAME, {
format: format.combine(
format.timestamp({ format: 'DD-MM-YYYY HH:mm:ss' }),
format.errors(),
format.label({
label: `${process.env.AWS_LAMBDA_FUNCTION_NAME}:${process.env.AWS_LAMBDA_FUNCTION_VERSION}`,
}),
format.splat(),
format.json()
),
transports: [
new WinstonCloudWatch({
logGroupName: process.env.AWS_LAMBDA_LOG_GROUP_NAME,
logStreamName: process.env.AWS_LAMBDA_LOG_STREAM_NAME,
awsRegion: process.env.AWS_REGION,
jsonMessage: true,
retentionInDays: 3,
}),
],
defaultMeta: { context, logger_name: LAMBDA_LOGGER_NAME },
})
const logger = winston.loggers.get(LAMBDA_LOGGER_NAME)
logger.info({ env: process.env, event })
let res
try {
res = await wrapped(event, {
context,
callback,
loggerName: LAMBDA_LOGGER_NAME,
})
logger.debug('RES:', res)
} catch (e) {
console.error(e)
throw e
}
await closeLogger(logger)
return res
}
}
module.exports = decorator
// handler.js
const winston = require('winston')
const loggingDecorator = require('./lambda_logger')
const hello = async (event, opts) => {
const { loggerName } = opts
const logger = winston.loggers.get(loggerName)
logger.warn({ logger })
const res = {
statusCode: 200,
body: JSON.stringify({
message: 'We use the Winston logger as a decorator for the lambda handler!!'
}),
}
return res
}
module.exports.hello = loggingDecorator(hello)
Any ideas how to fix it?
I think this issue should be given first priority.
For process with code like:
process.on('uncaughtException', err => {
logger.error(err, () => {
process.exit(1);
});
});
It will cause the process to hang forever, unable to restart gracefully.
Will this issue be fixed, or will it just stay closed?
@dpraul's solution here (https://github.com/winstonjs/winston/issues/1250#issuecomment-452128291) explained the problem and workaround very well. This worked for me in AWS lambda.
One note that he called out earlier in the thread that I'll reiterate: you will need to create your logger AND transport objects at the beginning of the Lambda handler. If you try to re-use them, you'll get errors, since the transport stream will be closed when the Lambda execution environment is frozen.
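A sketch of that pattern, with the logger and transports built per invocation and drained before returning:

const winston = require('winston');

exports.handler = async (event, context) => {
  // Fresh logger + transports for every invocation; nothing survives the freeze.
  const logger = winston.createLogger({
    transports: [new winston.transports.Console()], // plus any async transports
  });
  const finished = new Promise(resolve => logger.on('finish', resolve));
  logger.info('handling request', { requestId: context.awsRequestId });
  logger.end();
  await finished;
};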
A simpler way I really like myself:
const { createLogger } = require('winston')
const Transport = require('winston-transport')

function noop() {}

class AsyncTransport extends Transport {
  constructor(opts) {
    super(opts)
    this.callsInProgress = new Map()
    this.index = 0
  }
  async finish() {
    return Promise.all(this.callsInProgress.values())
  }
  log(info, callback = noop) {
    const promise = someAsyncFn(info) // your actual async logging call
    const i = this.index++
    this.callsInProgress.set(
      i,
      new Promise(resolve => {
        promise.then(resolve)
        setTimeout(resolve, 3000) // give up waiting after 3s per call
      }).then(() => {
        this.callsInProgress.delete(i)
      }),
    )
    callback() // don't block the logger stream on the async work
  }
}

const asyncTransport = new AsyncTransport({ level: 'info' })
const logger = createLogger({
  transports: [
    asyncTransport,
  ],
})
logger.flush = async () => asyncTransport.finish()
module.exports = logger
Then in your Lambda you can simply do:
exports.handler = async function() {
logger.info('I will be awaited at the end!')
await logger.flush()
}
There is documentation floating around that says you can call `end()` on a Winston logger to flush, but it doesn't say that `end()` is the same as calling `close()`. You just have to know. Therefore, there is no way to flush a logger and continue using it, which is a real use case for JavaScript AWS Lambdas.
There is a reusable flush option for the WinstonCloudWatch transport: kthxbye. Search for kthxbye for more information.
If you want to write a generic `process.on('exit')` handler that closes a Winston logger, here's what you need to do.
// Send kthxbye to WinstonCloudWatch transports
if (logger.writable) {
  await new Promise(resolve => {
    logger
      .on('error', resolve)
      .on('close', resolve)
      .on('finish', () => logger.close())
      .end();
  });
}
// Send kthxbye to WinstonCloudWatch transports again
The act of closing loggers can cause unhandled exceptions (including those thrown before the above code executes) to be sent to transports (particularly WinstonCloudWatch), so you need to call kthxbye again after the above code finishes. Yes. Really. Hopefully other transports like File get flushed in this scenario - I've tested this and they seem to get flushed, probably because Node.js flushes file handles at exit (which in some cases might not be soon enough).
Therefore, Winston 3 still doesn't have a flush operation that works with all transports.