Outcome of discussion in diagnostics WG
ref: https://github.com/nodejs/diagnostics/pull/361
Is your feature request related to a problem? Please describe.
The async_storage API has landed, but there may be a long way to go before large-scale field adoption - in terms of API stability, interfaces, performance, and so on.
Describe the solution you'd like
One proven method is to consume it ourselves and let it refine and evolve through that use.
We have a good number of client-server tests (net, http, http2, ...), and a subset of those may have opportunities to pass data elements through the async-storage mechanism. It is quite possible that other types of tests have such opportunities too.
So the proposal is to develop new tests, based on existing qualifying tests, that exercise async_storage.
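For context, here is a minimal sketch (not from this thread) of the kind of thing such a test could do: carry a per-request value through the AsyncLocalStorage class from async_hooks instead of a closure. The server setup, paths, and variable names are illustrative only.

```js
'use strict';
// Illustrative only (not an actual core test): propagate a per-request value
// through AsyncLocalStorage instead of capturing it in a closure.
const { AsyncLocalStorage } = require('async_hooks');
const http = require('http');
const assert = require('assert');

const als = new AsyncLocalStorage();

const server = http.createServer((req, res) => {
  // Each request runs inside its own store.
  als.run({ id: req.url }, () => {
    setImmediate(() => {
      // The store is still visible after an async hop, without threading
      // the value through callback arguments.
      assert.strictEqual(als.getStore().id, req.url);
      res.end(als.getStore().id);
    });
  });
});

server.listen(0, () => {
  http.get({ port: server.address().port, path: '/42' }, (res) => {
    res.resume();
    res.on('end', () => server.close());
  });
});
```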
cc @nodejs/testing @vdeturckheim @Qard
@gireeshpunathil I really like this plan, but I am unsure how much work that is at the moment. Would you have an example of a test you would like to see using this API? I guess, based on how many of them there are, we can ask around for help (with "help wanted" and "good first issue") :)
@vdeturckheim - I haven't looked at the tests yet, but will try to find one and let you know.
While the mechanism is still experimental, I'd be reluctant to switch over to it wholesale, so I think the plan to "develop new tests based on existing qualifying tests" is good. I would just be reluctant to require those to be green to pass CI. Perhaps moving them into a separate tests/experimental bucket would be good?
I'd suggested "develop new tests based on existing qualifying tests" due to the concern (i.e. changing existing tests to depend on an experimental feature), which I share, but for new tests I think they should be green. If a test can't be green then we should not add it, or potentially mark it as flaky temporarily.
I don't think we've said tests for other experimental features don't have to be green.
I agree. "Experimental" gives us a certain amount of discretion in making breaking changes, removing the feature, or even having the feature be incomplete in some sense, but we don't knowingly ship bugs in experimental features or ignore test failures related to them.
I have attempted to port one of the existing tests to a corresponding cls variant. My plan is to identify unique patterns in the existing bucket, and develop one cls variant test case per pattern. The general approach is sketched below:
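A hedged sketch of what one such cls variant might look like, assuming the pattern is simply to replace a closure-captured expected value with a store lookup; the test below is hypothetical and is not one of the actual ported tests.

```js
'use strict';
// Hypothetical cls variant of a net echo test. An existing test would
// typically capture the expected payload in a closure:
//
//   const expected = 'hello';
//   client.on('data', (chunk) => assert.strictEqual(chunk, expected));
//
// Here the same value is kept in AsyncLocalStorage and read back inside the
// callbacks, so the test also exercises context propagation across the
// client/server round trip.
const { AsyncLocalStorage } = require('async_hooks');
const net = require('net');
const assert = require('assert');

const als = new AsyncLocalStorage();
const server = net.createServer((socket) => socket.pipe(socket)); // echo

server.listen(0, () => {
  als.run({ expected: 'hello' }, () => {
    const client = net.connect(server.address().port, () => {
      client.end(als.getStore().expected);
    });
    client.setEncoding('utf8');
    client.on('data', (chunk) => {
      assert.strictEqual(chunk, als.getStore().expected);
    });
    client.on('end', () => server.close());
  });
});
```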
please let me know if you have suggestions!
I am also looking to see if there are benchmarks that I can work with, to see whether non-closure variants can bring in improvements in terms of performance or footprint.
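As a starting point, here is a rough micro-benchmark sketch (not part of the core benchmark suite, names illustrative) that times a chain of async hops with and without an active AsyncLocalStorage, since the propagation cost is where the closure and non-closure variants differ; a real comparison would use the core benchmark harness and also measure memory footprint.

```js
'use strict';
// Rough sketch: time N async hops with and without an active
// AsyncLocalStorage. Numbers are illustrative only.
const { AsyncLocalStorage } = require('async_hooks');

const als = new AsyncLocalStorage();
const HOPS = 1e5;

function measure(label, useStore) {
  return new Promise((resolve) => {
    const start = process.hrtime.bigint();
    let i = 0;
    const hop = () => {
      // Sanity check that the store survives every hop in the cls case.
      if (useStore && als.getStore().value !== 42)
        throw new Error('context lost');
      if (++i < HOPS) return setImmediate(hop);
      const ms = Number(process.hrtime.bigint() - start) / 1e6;
      console.log(`${label}: ${ms.toFixed(1)} ms for ${HOPS} hops`);
      resolve();
    };
    if (useStore) als.run({ value: 42 }, () => setImmediate(hop));
    else setImmediate(hop);
  });
}

(async () => {
  await measure('baseline (no store)', false);
  await measure('async_local_storage', true);
})();
```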