Node: Implement window.fetch into core

Created on 16 Mar 2018  ·  158 Comments  ·  Source: nodejs/node

https://github.com/bitinn/node-fetch

It would make sense if window.fetch were implemented in core. It seems to be a stable enough API that it would make a good candidate for inclusion. Not sure what the process is from here but thought I'd raise an issue :)

feature request http http2

Most helpful comment

This would definitely be useful in simple cross-platform APIs and I think is what a lot of people use node-fetch for already. Also, it would be nice if HTTP/1 vs HTTP/2 negotiation could be handled automatically, as in browsers.

All 158 comments

this has come up from time to time but has not had much traction just yet. Let's see what folks think tho :-)

Bradley and I were discussing this in a roundabout way on the subject of importing from URLs. If that feature were introduced (and I think in general we do want it) we would need to implement this: https://html.spec.whatwg.org/multipage/webappapis.html#fetch-a-single-module-script, which uses the fetch spec. As another note, if this were added to core I would want to pull in an existing C++ implementation from one of the browsers. However, at a bare minimum Node will definitely be adding Request and Response objects; it just might not add a function called fetch

-1 this kind of higher-level functionality is best left to userland

This would definitely be useful in simple cross-platform APIs and I think is what a lot of people use node-fetch for already. Also, it would be nice if HTTP/1 vs HTTP/2 negotiation could be handled automatically, as in browsers.

I would love this! ❤️

Isomorphic JS is one of the big reasons people who start with JS on the front end eventually pick up Node.js on the backend.

You get to run the exact same function in the browser and on the server, and the one place of contention I keep finding is window.fetch.

Some of the code that runs on the server and client needs to make HTTP requests to another server (think microservices with server side rendering and client side rendering).

One case for bringing it into core is that making HTTP requests (client) is closely tied to responding to HTTP requests (server).

And we now have isomorphic URL parsing, so all we need is fetch! Let's make fetch happen!

Fetch is 'low level' according to its author and major supporter, hence missing a bunch of features: support for content types, JSON as a default, and query string encoding. Apps that need a quality, high-level HTTP client available in all JavaScript environments can continue using superagent.

@mikemaccana the fetch we are talking about is https://fetch.spec.whatwg.org/ and I don't think it's appropriate to be plugging other HTTP libraries

@devsnek Yes I know, that's the one I was specifically referring to. I don't have any particular attachment to superagent aside from it being:

  • a full-featured HTTP client
  • available in Node and the browser
  • more popular than fetch
  • JSON by default
  • able to encode query strings
  • able to use content types to determine the response body
  • built around HTTP verbs as method names, so you can happily .get() and .post() things rather than 'fetching with method POST', which is a somewhat odd mental model

If fetch supported these I'd suggest it be included in Node. To repeat: I've asked fetch's author and main proponent, Jake Archibald, why it doesn't support these things, and he's stated that fetch is designed to be a low-level API and that things like sensible defaults can/should be added by higher-level APIs. As a result I see no reason to use fetch in Node, or in the browser.

@mikemaccana

These are actually desirable properties from the standpoint of including fetch in Node.js core, if we want to keep the core low-level and small. I believe we will need to eventually provide a promisified API for http/http2 anyway, and fetch, as an existing spec for a similar set of low-level functionality as our http, is something worth considering. In my experience the missing pieces in fetch feel pretty similar to the missing pieces in Node.js's http; the major difference is that you have a spec'ed API to count on.

Also, I kind of doubt the "more popular than fetch" part; I don't think there are as many people using superagent in the browser as in Node, given that fetch simply exists in the browser (modulo situations needing polyfills)?

Although, before we start implementing fetch, I believe we would need to introduce the browser's stream API? Is introducing yet another kind of stream in Node.js on the table?

The one time I've been forced to resort to XHR in the browser was when I needed progress, although I think with the browser's ReadableStreams it's possible to do the same thing with an API that's a bit awkward (res.body.read().byteLength). If I were to implement progress in Node.js, I think I'd need to use chunk.byteLength from the chunk emitted in the data event, which is where the difference between the two streams starts to matter.

Also, the fetch spec does not seem to include timeouts or agents at the moment (yet?); there may be more functionality missing compared to our existing http API. For reference, node-fetch seems to implement these as non-standard options. Again, not sure if our implementation should implement non-standard functionality even if it supplies the missing pieces relative to the old API.

also cc @TimothyGu you might be interested?

@thecodingdude

entirely disagree; Node uses v8 and, by extension, should implement as many v8 features as possible that make sense. fetch is one of those: developers wouldn't need to npm install request or node-fetch, which are very popular libraries, so this functionality warrants being in core.

Technically fetch is not a v8 feature though, it's an API spec'ed by WHATWG, whereas v8 implements ECMA-262, a spec by ECMA TC39 - if v8 implemented fetch then there would not be this feature request, because we basically just expose what v8 exposes.

BTW: I don't think we are talking about pulling npm packages into the core? Rather, we are talking about implementing the spec on our own and pulling in the WPT to test compliance, possibly with a bunch of code written in C++, much like what we did for WHATWG URL.

@thecodingdude the continued popularity of non-fetch libraries isn't a personal opinion, it is a fact - superagent had 1.6 million downloads this week. Nor is the lack of reasonable high-level defaults in fetch: again (again), that is acknowledged by fetch's author. Please don't reframe verifiable, objective technical facts as irrelevant subjective opinions because you do not like them.

please don't plug other libraries here, they are irrelevant to our discussion.

developers wouldn't need to npm install request or node-fetch

😂👍

I'm personally -1 for two reasons:

  1. Having two different APIs (fetch() and http.request()) for doing the same thing is not cohesive.

  2. fetch() is not flexible enough to support things like proxying, custom TLS certificates, etc. It's handled transparently by the browser but that won't work in node.js.

currently, 84 people agree

I wouldn't put too much stock in that. GH polls are easily gamed through Twitter brigading and whatnot. I'd take them more seriously if it cost a dollar to cast an upvote.

@thecodingdude I have commented with a bullet-pointed list of technical reasons why fetch is unsuitable as a general-purpose HTTP client. I am aware you "do not care how good/bad fetch is". I think this is wrong: Node should pick and adopt quality APIs. That you're personally uninterested in fetch's quality does not make it irrelevant to the discussion. The browser vendors were comparing fetch to XMLHttpRequest; Node is not.

@mikemaccana Why does fetch need to have high level features like "HTTP verbs as method names" for it to be included in core? You can trivially write your own methods that do this, in addition to the other high level features you've mentioned.

The nice thing about having standard APIs shared across both the browser and Node is that by writing the high-level API to target the low-level one (in this case fetch) you can get consistent behaviour much more easily than having to make your API work around differences between different APIs in node and the browser.
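To make that layering concrete, here is a hypothetical high-level helper built on top of a spec-shaped fetch. The name postJSON and its defaults are invented for illustration; the fetch implementation is injectable, so the same code would run in the browser, against node-fetch, or against a future core fetch:

```javascript
// Hypothetical high-level helper layered on the low-level fetch API.
// It bakes in the JSON defaults discussed above; fetchImpl is injectable
// so the same code targets any spec-shaped fetch implementation.
async function postJSON(url, body, fetchImpl = globalThis.fetch) {
  const response = await fetchImpl(url, {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      accept: 'application/json',
    },
    body: JSON.stringify(body),
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json();
}
```

Because the underlying primitive behaves identically everywhere, a wrapper like this needs no per-platform branches — which is the argument for standardising the low-level API.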

@thecodingdude any reason this was closed? discussion still seems to be moving

@devsnek yeah, I am not satisfied with the way this discussion and issue went, and I realise that github issues are simply not the appropriate forum for discussing whether features should land in core. I much prefer node-eps, since that was a proper document outlining the proposal. My intention was to gather the thoughts of the core team, and whoever else is involved in merging features into core (planning, coding, testing etc). Instead the discussion here so far is basically meaningless. It should be about feasibility, what would be required, and when such a feature could be merged.

Go take a look at this issue and try and follow the avalanche of conversation about websockets in core. There's too much noise and not a straightforward approach to proposing new features. I think PHP's rfc system is a good example of getting new features added. Clear, concise, accountable.

I'm down for discussing an _rfc_ and its merits in a separate issue but it's clear this isn't going anywhere, with little to no interaction from TSC or anyone else that matters so there's no point leaving it up just to spam people's inboxes.

I'll refrain from making such issues in the future, my bad.
cc @jasnell

@Jamesernator Why does fetch need to have high level features like "HTTP verbs as method names" for it to be included in core? You can trivially write your own methods that do this, in addition to the other high level features you've mentioned

Sure. I, and everyone else that needs to use query strings, can write a method to encode query strings. Better yet: let's include a good quality, batteries included HTTP client in node where everyone doesn't need to fix it.

@thecodingdude looks like you deleted your comment, but to answer anyway: HTTP2 is necessary because it is required to speak the current version of HTTP. fetch is not necessary, because it is but one of many HTTP clients, and not a very good one, for the reasons discussed.

@mikemaccana But the reason for including fetch in Node core isn't about batteries included, it's so that you can write code that targets both the browser and Node at once. That's why Node also supports the URL constructor (and it's even now global like in the browser in node 10) and the docs even refer to the old url.Url objects as the legacy url API.

This is a good thing, as it means a module that uses dynamic import, for instance, doesn't need to include any library to be able to load resources relative to itself, e.g.:

...

export default async function isWordForLanguage(language="english") {
    const wordDataUrl = new URL(`./resources/${ language }.csv`, import.meta.url)
    const words = await loadSomehow(wordDataUrl)
    return makeIsWord(words)
}

Now that code has no platform specific behavior (assuming a loadSomehow method which could potentially be fetch). The benefits to this should be fairly obvious:

  • Using APIs that are standardised and included in both the browser and Node means you'll get support in both contexts if there's any issues with implementations (and hence no chance of dead or deprecated libraries)
  • Smaller libraries for high-level features as they don't need to smooth over platform differences, they just use the universal API (URL or fetch or whatever) and build directly on top of that
  • Ditto for your own libraries that need high-level behaviour

I think it's a pointless endeavour to create another http library in Node if it isn't for interoperability. If Node creates another one that isn't in the browser then you simply get the same situation you have now: if you want browser-interoperable code, you need another library built on top of it just to get interoperability with browsers.

If you think it's important for browsers to support high-level APIs out of the box then it's probably worth supporting the efforts to make high-level APIs part of the browser, and making clear what you want to see out of them. I'm sure many developers would appreciate getting high-level API specifications.

Even better would be if there were a way to get Node and browsers to collaborate on high-level features. I'm not sure how this might work, but if it could be done then it'd be a lot easier to propose high-level APIs that target all environments instead of being siloed into Node or the browsers.

Thanks @Jamesernator - it's nice to know there's an effort to resolve this on a larger scale. Thanks also for being polite and actually reading my comments before responding.

I tend to think that if Node.js were being developed today, there's no doubt that fetch would have been implemented in the name of aligning with browser APIs. Yes, there are some really interesting edge cases, but @TimothyGu has handled these really well in the node-fetch integration.

@thecodingdude do you mind if I reopen? We potentially could still have things line up here - I don't think the detailed discussion has run its course fully quite yet and I think there's still fruitful discussion to be had personally.

@bnoordhuis node-fetch handles the differences here by supporting the Node.js-specific agent option that delegates features like proxying: https://github.com/bitinn/node-fetch#options. It breaks the universal API, but could also be done conditionally quite easily, so it seems a fairly good compromise for this stuff. It could certainly also be decided that such a divergence is a bad idea, though.

The key thing though is that this fetch API is purposely restricted - HTTP and HTTP/2 APIs still remain fundamental, this could just provide the easy universal experience. There's nothing wrong with having two APIs to do the same thing, when one builds on top of the other to provide a compelling user experience.

@guybedford feel free to make a new issue discussing this further if that's the direction that is most suitable. I'd much prefer technical discussions about the implementation itself, and not what fetch does/doesn't do vs alternatives which is why I closed the issue in the first place.

Quite honestly, I don't see why this discussion needs to happen on github - at the least it should be locked to collaborators who are going to be working on the code, with discussion focused entirely on implementing fetch, identical to the browser where possible, nothing more and nothing less.

@thecodingdude I'm still not quite sure I follow why you need a new issue here. Discussing the problem boundaries should surely be the first step for any new feature, and allowance should be made for a wider discussion at the start as well. 16 comments hardly seems off the rails, either.

Certainly the technicalities of whether it should be JS or C++, or where to draw the line on pooling and proxies are important, but things have to run their own course.

There's nothing wrong with having two APIs to do the same thing, when one builds on top of the other to provide a compelling user experience.

But it's not 'on top of' - they do the same thing, just with different interfaces.

Arguments in favor of fetch() boil down to 'convenience' - not a good reason for inclusion in core.

Thanks @bnoordhuis these are important points as to where the line is in Node, and I must admit I'm not familiar with these sorts of discussions historically, but so long as Node strives to provide core utilities, this seems like an important one to include to me.

But it's not 'on top of' - they do the same thing, just with different interfaces.

Currently node-fetch is built on top of the http module, I'd personally prefer such a JS-based implementation here, working from exactly what @TimothyGu has already built as one of the most popular libraries. The pattern of having high and low level APIs for the same thing is an established one.

Arguments in favor of fetch() boil down to 'convenience' - not a good reason for inclusion in core.

Convenience is a hugely important argument, the question is by how much to justify the maintenance costs. Node doesn't just support modules and native bindings - it provides server utilities and universal APIs where appropriate. As I've said, fetch seems to me a critical part of the universal toolbox these days.

I'd even push for fetch to transparently integrate HTTP/2 in future, allowing such a feature to not only build on top of the existing builtins, but also to provide a unification that we don't currently have. I know there are complexities to be tackled here, and it's not a given, but it feels like it would be a huge win for Node and its users.

Currently node-fetch is built on top of the http module

That's an implementation detail. Looking at it from a user's perspective it's different ways of doing the same thing, neither one clearly superior.

Convenience is a hugely important argument, the question is by how much to justify the maintenance costs.

The public interface of built-in modules cannot be changed easily, sometimes not at all. Best case it's still a years long affair. Modules on npm don't have that issue.

fetch() has going for it that it's a standard, except we'd have to extend it to make it usable (e.g., that agent option you mentioned.)

It seems like we would need to assess whether the agent approach used in node-fetch would be suitable, and how stable this API, or something similar, could be considered to be. Perhaps @TimothyGu can share thoughts on the stability of these cases to ensure backwards compatibility by default.

I am conflicted on this issue. As a perennial supporter of the small core, I point to the flexibility in making breaking changes that being a userland module allows. However, node-fetch (and the Fetch API) has been remarkably stable: node-fetch has had only one breaking release in the past few years, and the currently scheduled breaking changes are all logistical (bumping the supported versions of Node.js and a third-party package).

On the other hand, I feel it would be a much-improved experience for users if they did not have to install a package for the very basic purpose of fetching a file in a script, with an interface they may already be familiar with. No, http.request does not count as a solution, since one would have to put in more work to support redirects, compression, and now transparent HTTP/2.

Regarding the functionality gap @bnoordhuis mentioned:

fetch() has going for it that it's a standard, except we'd have to extend it to make it usable (e.g., that agent option you mentioned.)

In my opinion, if node-fetch had all of its extensions (grep for "node-fetch extension" in the README) removed, it would still be a perfectly useful tool, especially for single-file scripts. Telling the user "fetch() is offered only for convenience; for more features/customizations (like the agent option), use node-fetch/request/etc." is a perfectly viable solution.

In the end, I would be happy to see fetch() and peripheral APIs be a part of core.

For node to support fetch we may need to make a couple of spec changes to allow it to bypass things like CORS, but it sounds doable.

However, I don't think it's worth considering unless/until node implements https://streams.spec.whatwg.org/, which I'd love to see! Edit: Although, browsers shipped fetch before streams landed, so I guess node could do the same.

@mikemaccana

Fetch is 'low level' according to its author

It's low-level for the web. But in some ways that makes it high-level for node due to its relatively relaxed security requirements. Although this could be tackled pretty easily in the spec.

and major supporter hence missing a bunch of features like support for content types

Fetch supports the Content-Type header.

JSON not being default

For what? Response bodies? Having a type that changes depending on a header from the sender sounds risky to me. Eg, you could easily end up with code that works great when the response pops out as an object, but if an attacker could manipulate headers to change that into a binary format, bad things could happen.

no query string encoding

This isn't true. The fetch API supports URL objects which support searchParams: https://url.spec.whatwg.org/#interface-urlsearchparams. These can be used as request bodies too.

'fetching with method POST' which is a somewhat odd mental model

It's… how HTTP works. The method name is a string.

I've asked fetch's author and main proponent, Jake Archibald

I'm not fetch's author, although I have contributed. I don't know what qualifies me as the main proponent either.

The list of "dislikes" seems pretty weak to me. A better way to prove this could be with code. Being aggressive & uninformed is a bad combo, try to shake at least one of these.

@joyeecheung

The one time I've been forced to resort to XHR in the browser was when I needed progress, although I think with the browser's ReadableStreams it's possible to do the same thing with an API that's a bit awkward

It's currently:

const response = await fetch(url);
const total = Number(response.headers.get('content-length'));
let bytes = 0;
for await (const chunk of iterateStream(response.body)) {
  bytes += chunk.length;
  console.log(bytes, ' - ', total);
}

I kinda like that this is explicit that you're trusting the Content-Length header for the total. Although I'd like to introduce some kind of observer object to make this easier. And you won't need iterateStream once ReadableStream implements Symbol.asyncIterator.
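The iterateStream helper used above isn't part of any spec; a minimal version can be sketched over getReader() (one possible shape, not a standard API):

```javascript
// A possible iterateStream: adapts a WHATWG ReadableStream to an async
// iterator by pulling chunks through a reader until the stream is done.
// Once ReadableStream implements Symbol.asyncIterator, this goes away.
async function* iterateStream(stream) {
  const reader = stream.getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) return;
      yield value;
    }
  } finally {
    reader.releaseLock();
  }
}
```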

Also, the fetch spec does not seem to include timeout

You can do this with abort signals:

const controller = new AbortController();
const { signal } = controller;
setTimeout(() => controller.abort(), 3000);
const response = await fetch(url, { signal });

This gives you the flexibility to timeout in more complex ways, eg if no bytes are sent for some kind of timeout:

const controller = new AbortController();
const { signal } = controller;
let timeoutId;
const resetTimer = () => {
  clearTimeout(timeoutId);
  timeoutId = setTimeout(() => controller.abort(), 3000);
};
resetTimer();
const response = await fetch(url, { signal });

for await (const chunk of iterateStream(response.body)) {
  resetTimer();
  doSomething(chunk);
}
clearTimeout(timeoutId);

or agents

Yeah, we don't have anything like this yet. We'd have to think about the security implications of this for the web. Or, we just reserve that option for node in the spec, so we'll never use something of the same name to mean something different.

(I’ll re-open this because the discussion is obviously still ongoing and this issue hasn’t really come to a conclusion so far.)

@jakearchibald oh dear, here we go again.

Fetch supports the Content-Type header.

Yes. That's not the same thing as actually using a content type: if I accept JSON, then give me JSON. Don't ask every user to manually decode it.

JSON not being default: for what? Response bodies?

Content Type and accepts.

The fetch API supports URL objects which support searchParams

Cool. Is this documented for users? The only fetch documentation I've ever seen has people manually encoding query strings and the link you just gave is to an implementation guide for browser developers.

It's… how HTTP works. The method name is a string.

That doesn't make 'fetching with method GET', or however you want to put it, less awkward. More like someone really, really wanted to use the name 'fetch' instead of http.

I don't know what qualifies me as the main proponent either.

(Edit: mention of Jake's behavior on other social media platforms removed)

A better way to prove this could be with code.

😂. You contributed to a spec that doesn't do in 2018 what superagent did in 2012 and you want code, like nobody's ever written an HTTP client that doesn't suck before?

This is not the place to discuss general opinions about fetch itself. Please keep the discussion focused on the merits of implementing fetch in core.

I think this is one DOM API that'll be very hard to get right in core (due to header safelists etc) but easy to get well enough in userland.

@benjamingr I don't think node would have to comply with the parts of fetch that are there for web security. It'd be nice to get changes into the spec that allow node to do that while still being compliant, as long as isn't too complicated.

@jasnell my main concern is that adding fetch to core would still require users to either write a bunch of boilerplate code for common tasks, or install a high level library - as a result adding fetch would achieve very little for end users.

@jakearchibald you've always been very willing to take the extra step for the web platform - and as you know - I'm a fetch fan myself :) There are many things in fetch that are very different from Node's PoV as a client.

The API surface of fetch is actually quite large and browsers (and polyfills) don't fully implement it. While fetch itself has been quite stable, the browser implementations haven't really caught up:

  • Fetch introduces a lot of classes (Request, Response, Headers, etc) that would exist alongside node's existing infrastructure for those things.
  • Browsers haven't really figured out .body.getReader() yet; it works in Chrome and _sorta_ in Firefox behind a flag. What type would it return in Node? Is it async-iterable? What does .cancel on the body do?
  • AbortController would mean discussing and settling how cancellation works in Node, compare it with our current cancellation mechanisms etc.

Things that are geared towards ServiceWorker won't work (although I'm not sure if you saw Cloudflare's new-ish "server-side service workers" thing, which is really cool!).

Fetch is great, but I don't agree it's as stable or ready in browsers as people here mention - most of the polyfills blatantly ignore a lot of the edge cases.

Worse - we both know fetch is a lower-level API than XHR that is quite a bit more powerful, builds on better primitives, and adds _capabilities_. None of the comments here focused on that - it sounds like people (not participants from core, though) want this _as a convenience_ rather than with the directed goal of web-node platform compatibility or exposing capabilities.

I think it would be interesting to make a _concrete proposal_ evaluating all the changes it would require in Node and evaluate it then.

@thecodingdude

Quite honestly, I don't see why this discussion needs to happen on github - at the least it should be locked to collaborators who are going to be working on the code, with discussion focused entirely on implementing fetch, identical to the browser where possible, nothing more and nothing less.

We really dislike locking issues here and only do so when we're unable to moderate them. In this issue (while heated) people have made an effort to stay polite. We see everyone responding or reading here as potential contributors and I would like to invite anyone who is reading this and is not sure how to start participating more in node to reach out (my email is benjamingr at gmail) and we'll do our best to find you interesting ways to contribute.

So far this discussion has been going on pretty well with people raising arguments for both sides. Thanks for opening the issue and bringing it here.


@mikemaccana I request that you please consider self-moderating this comment

oh dear, here we go again.

I've enjoyed your perspective other than that particular comment and I request that we keep a welcoming discussion environment here - as much as possible.

@benjamingr and @addaleax please file a new issue in regards to this discussion. I never intended to open the can of worms that has seemingly happened, and I dislike the general tone and attitude @mikemaccana has demonstrated throughout this issue, with repeated offtopic remarks and general unhelpfulness. His contributor status should be reconsidered.

Heated discussions are not appropriate for Node and the request was quite simple: fetch in core. I would like to request you close/lock this issue and create a proposal if inclusion is viable. We do not need 50 more comments arguing to and fro as we end up going nowhere.

I think it would be interesting to make a concrete proposal evaluating all the changes it would require in Node and evaluate it then.

That pretty much ends the discussion. It'll be much easier to understand viability when there's a proposal (which is what node-eps was designed to do but sadly doesn't happen anymore).

@thecodingdude similarly, I dislike that you're not tolerant of discussion of the merits of including fetch and wish to only limit the discussion to how it should be done, rather than whether it should be done - which is a reasonable concern whether you personally like it or not.

@benjamingr

Fetch introduces a lot of classes (Request, Response, Headers, etc) that would exist alongside node's existing infrastructure for those things.

Yeah, I can't see how you'd implement fetch without also implementing those. Node implemented WHATWG url despite having its own methods, so I guess it'd be the same here. Question is if it's worth it in this case.

Browsers haven't really figured out .body.getReader() yet

I don't think this is true. The streams spec is pretty stable, with implementations in Chrome, Edge, & Safari. Firefox isn't there yet, but I hear they're finalising their implementation.

What type would it return in Node?

response.body would have to be a WHATWG readable stream (which is what I meant in https://github.com/nodejs/node/issues/19393#issuecomment-376443373).

What does .cancel on the body do?

This is standardised. https://streams.spec.whatwg.org/#rs-cancel covers the stream-specific parts, and the fetch spec reacts to cancelation in multiple places.

AbortController would mean discussing and settling how cancellation works in Node

Yeah, and AbortSignal is an EventTarget which I guess would need to be an EventEmitter in node. That's a compatibility issue we can't really avoid right now.

Things that are geared towards ServiceWorker won't work

Are there particular things you're thinking of? Fetch's references to service worker are pretty minor compared to CORS (which we'd need to find workarounds for).

Fetch is great, but I don't agree it's as stable or ready in browsers

I don't think that's fair to say. It's implemented in Chrome, Firefox, Edge, & Safari. Some implementations have been around for years. Ok, streams aren't part of the Firefox implementation yet, but it's the outlier.

I think the "should node implement?" question comes down to compatibility & suitability rather than stability. I don't have strong feelings here, but if you said "Node will implement either fetch without streams, or WHATWG streams but not fetch. Choose." I'd absolutely pick WHATWG streams.

I think it would be interesting to make a concrete proposal evaluating all the changes it would require in Node and evaluate it then.

+1 and the changes required in the fetch spec.

@mikemaccana fetch is fetch - if you dislike the standard for any reason then take it up with WHATWG; this is not the right place to air your grievances with what it does/doesn't do. Whether this actually ends up in core depends entirely on the Node developers who will be lumbered with the responsibility of maintaining it for a number of years - this is where 'userspace' can be more practical, because it's outside Node's scope. If we're really requiring _this_ much discussion then maybe it's not a suitable candidate to land in core.

Perhaps you'd be so kind as to bug WHATWG with all the problems you've discussed here and get them to update the spec for you? You can file an issue with your concerns here.

@thecodingdude I kindly request that you consider your phrasing here. As a part of Node I've found the discussion (including Mike's most of the time) interesting. Please be respectful of different viewpoints and experiences and use welcoming and inclusive language.

We do _a lot more_ discussion than this before introducing a feature like fetch; it is then brought up (as a concrete proposal) to the TSC, who decide whether or not it's worth taking into core, and then we do _more_ discussion on the implementation before landing it.

@thecodingdude This isn't about personal grievances. Again I'm referring to the boilerplate code or libraries most node users will need to install in order to use fetch as an HTTP client. As @Jamesernator already mentioned, https://github.com/drufball/layered-apis/ is the place where discussion around high level APIs is already taking place (thanks James). I've said everything I need to here in previous comments. I feel that you continually mischaracterise these as somehow being personal grievances - if you stopped doing that, I wouldn't have to reply.

I appreciate there may be good reasons to include standard things even if they're suboptimal. Fair enough. But technical limits requiring most users to apply the same boilerplate are not personal grievances.

Edit: I have used technical means to stop the discussion between you and I.

Edit 2 to use more welcoming language per @benjamingr request.

@mikemaccana as it currently stands your comment violates our code of conduct. Please moderate it asap to meet our standards.

@benjamingr Sorry, and done.

Fetch introduces a lot of classes (Request, Response, Headers, etc) that would exist alongside node's existing infrastructure for those things.

These are classes Fetch would need, but there are also APIs which work with Fetch which I think developers would expect to be there if Fetch is implemented in core. I'm thinking of FormData and URLSearchParams, which are both handy APIs when working with form data and URL parameters. URLSearchParams is already in core.

If Fetch is to be implemented in core would one expect FormData to be implemented also?

I would find it a bit weird to have Fetch and URLSearchParams in core but not FormData.

I have gone ahead and deleted the last few comments. Actions of an individual on social media are not in scope for our issue tracker. I highly encourage you to edit your comments to keep the discussion on-topic and constructive.

Just a question: If Node core gets a global fetch, does that mean that we could implement that by shipping node-fetch as a baked-in dependency, similar to what we do with node-inspect?

@addaleax technically yes but I think one thing we want to do is keep the underlying http code separate, and I for one want to implement as much of it natively as possible

I agree with @devsnek. Also, node-fetch does not seem to integrate Web Platform Tests in its test suite. The tests there are good enough for an npm package, but if we want to put it into core for platform isomorphism and claim conformance, actually testing our implementation with WPT would be better, even if we need to port the tests so that they can run in a shell - and I suspect we would need to hack some internals to make the tests work as expected.

@trygve-lie Probably also AbortController [*] and Blob.

[*] Depending on what happens with the cancellation proposal it could be another CancelSignal or CancelToken or whatever. I personally think it would be a bad idea to implement DOM Events just to get AbortController in Node though as then there'd be two distinct event types in Node.

Regarding the implementation details, there are a few things to think of, derived from my experience maintaining node-fetch and work in both the standardization field and in Node.js.

We should not implement parts of the Fetch API.

I believe @jakearchibald has already touched on this point. Browsers have a significantly different threat model from Node.js, and require a lot of things that don't make sense in the Node.js environment. Some examples include:

As a corollary, only a small subset of Web Platform Tests is actually applicable to Node.js. There are a substantial number of Web Platform Tests available for Fetch, but many of them focus on the things I mentioned above that are outside the scope of a possible Node.js implementation.

Web vs. Node.js

Response#body depends on the WHATWG Streams Standard for streaming data (both input and output). Frankly, web streams are almost always better designed than the current Node.js streams infrastructure; however, there is currently little to no userland support for them. Even the means by which they are implemented in Chrome is considered obsolescent; and while work on the spec continues to happen, it is certainly no longer at its most active. This poses a dilemma for a Node.js implementation: implement web streams and risk platform consistency and adoption, or use Node.js streams for Response#body (what node-fetch does) and forgo compatibility with browsers (arguably one of the most important reasons for adding this feature to Node.js in the first place). A set of unoptimized conversion utilities between Node.js streams and web streams does exist, though.

That is not to mention the implementation of web streams, which exposes quite a heavyweight API surface and poses additional complexity in bridging between JS and C++, a heavily optimized routine in existing Node.js streams.

Other places where the Web vs. Node.js dilemma exists include the use of DOMException in abort fetch, EventTarget in AbortSignal, FormData, and the support for File API through Blob (a topic I remember @jasnell to be interested in).

JavaScript vs. C++

This goes back to the question posed by @addaleax:

If Node core gets a global fetch, does that mean that we could implement that by shipping node-fetch as a baked-in dependency, similar to what we do with node-inspect?

The obvious answer in this case would be "if we use web streams, start from scratch and use C++; if we keep using Node.js streams, continue using JavaScript." But before one jumps into the first option, it is important to remember that despite its many quirks, our current http implementation is very well optimized, and it will surely take many many human-hours to get such a clean-room implementation up to par with the current http module performance-wise.

I am in support of essentially having an in-tree copy of node-fetch, perhaps with some core-specific tweaks. But I would encourage, and volunteer to help with, any experimentation with a Web Streams version.

Edit: Delete misleading statements about the Streams standard.

@TimothyGu I definitely think response.body being a Node stream would be a bad idea and does generally defeat the point of compatibility as you say.

However, it could be the case that there's another Node-specific property response.readableStream that could be used if Node doesn't want to commit to a full stream implementation right away (or ever). People could temporarily use a simple if-branch in the meantime, and if both types of ReadableStream get Symbol.asyncIterator then they would have at least a common consumption method.

People could temporarily use a simple if-branch in the meantime, and if both types of ReadableStream get Symbol.asyncIterator then they would have at least a common consumption method.

This is already the case with Node streams :)
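For illustration, the shared consumption path would look the same either way. A sketch, where `res` is a hypothetical fetch response whose body is any async-iterable stream (Node stream or spec-compliant web stream):

```javascript
// Sketch: consuming a response body via async iteration works identically
// for Node.js streams and for WHATWG ReadableStreams that implement
// Symbol.asyncIterator. `res` is a hypothetical fetch response.
async function readBody(res) {
  const chunks = [];
  for await (const chunk of res.body) {
    chunks.push(Buffer.from(chunk));
  }
  return Buffer.concat(chunks);
}
```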

@TimothyGu

while work on the [whatwg streams] spec continues to happen, it is certainly no longer at its most active.

This is pretty misleading. The spec is pretty complete for the set of features it has, and work is now focused on creating transform streams to expose existing browser behaviour, eg https://github.com/whatwg/encoding/issues/72.

The next chunk of work in the streams spec will be around transferability, so streams can move between workers. We're also actively investigating ways to expose the streaming HTML parser.

Speaking for at least Chrome: Development of streams is really important to our goals for at least the next few years.

Node folks, web standards folks, and browser folks aren't on different planets. We're all right here. If you're worried that a standard is abandoned, you can just ask rather than guess using github stats.

@jakearchibald I certainly was not trying to imply that the Streams standard is unmaintained, rather to capture the sentiment that few userland libraries are prepared to deal with web streams, which I think is a fair characterization of the status quo. Apologies for the misleading interpretation of GitHub stats.

@TimothyGu that's fair. I think that's as we're just shipping transform streams, which are essential to userland IMO.

I think it's important to note that, as Jake mentions, "standards folks" don't live on another planet, which means if such a huge ecosystem as Node decided to ship a faithful and interoperable implementation of the Fetch Standard then it would be reasonable and expected for the standard to change and accommodate (with reasonable flexibility and following proper and thorough discussion) in areas where such changes make sense, as long as they don't break or otherwise adversely affect the already-existing developer ecosystem.

As seen in @TimothyGu's thorough assessment, the Node and Browser worlds have already diverged quite a lot and this will continue in the future unless steps are taken to converge the two worlds for the greater common good. This convergence will have to start _somewhere_, and Fetch is both a popular and powerful API that happens to make a lot of sense in both worlds (hence node-fetch and all other userland implementations of it). I don't think a few global classes and affordances (required by the implementation) will bother too many people when they don't use them, but they will most certainly result in a slightly increased maintenance burden and documentation work, which needs to be carefully evaluated.
My hunch here is, though, that gains from platform convergence and easing developer experience (from lower context-switching cost for full-stack devs to simplifying Electron, IoT, server-side rendering and other codebases where FE+BE usecases co-exist) will justify this increased maintenance cost.

In my opinion starting an experimental/MVP implementation of the API, would make a lot of sense even if that means some limitations and omissions initially (but only if these won't jeopardize future implementation and support for those omitted features). This requires an MVP that focuses on future "web-compatibility", as clearly (and many have pointed this out before me) the resulting software isomorphism & cross-platform consistency are one of the biggest wins of the convergence on Fetch.
On the long term, this should mean convergence in underlying structures (like WhatWG Streams), but this work does not necessarily need to be part of the initial explorations.

In the same vein, it wouldn't be impossible (rather, it would be preferable) to eventually instrument and/or refactor existing WPT tests related to Fetch to support the Node ecosystem as well. Again, these aren't set in stone, so if the long-term convergence between Node and the browser is pursued it is possible to change these tests in a way that caters for both sides' requirements.

Another very interesting application of fetch in core is that currently when loading WASM, it is recommended to use instantiateStreaming which is specifically defined against a Response object following the fetch specification. Fast universal Web Assembly loading could be an interesting aspect of this!
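For illustration, a sketch of that pattern: instantiateStreaming takes a promise for a Response, so with a core fetch the argument could simply be fetch(url). The URL below is a placeholder, and the helper name is invented:

```javascript
// Sketch: WebAssembly.instantiateStreaming is specified against a fetch
// Response, so a core fetch would enable the same streaming-compile
// pattern as in browsers.
async function loadWasm(responsePromise, imports = {}) {
  const { instance } = await WebAssembly.instantiateStreaming(responsePromise, imports);
  return instance;
}

// With a core fetch this would read just like browser code:
//   const instance = await loadWasm(fetch('https://example.com/module.wasm'));
```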

I think it's quite telling here that one of the first features implemented in Deno is a fetch global - https://github.com/ry/deno/blob/7762b55432ba73f07f90414aa7d023bb0779f5de/globals.ts#L52. This isn't just a "nice to have", but a critical part of the story.

@guybedford I think that’s only because deno is very much focused on browser-likeness… see its support for HTTP URLs in import paths. In fact to implement import the browser way you would have to implement the Fetch Standard.

@guybedford that's hardly fetch - it's just a very basic API that makes HTTP calls, it's missing a lot of the capabilities that make fetch fetch and mostly just shares the name.

can we please not start a thing of comparing node.js to deno

There is a huge amount of value in an experimental project like deno in terms of what we can learn from it, which seems unnecessary to try and disregard in general. I disagree with a lot of the assumptions of the project (eg HTTPS imports), but in my opinion there can only be value in considering ideas and insights from any other projects for inspiration in Node.

Since this issue is closed I am really confused about what we will do moving forward.

Will Node implement the fetch API or not?

We got stuck in an endless conversation about personal preferences regarding the specification, when most of us just want a common API between the browser and Node, aka the fetch specification.

Maybe be a little pragmatic and improve over time.

Look at these numbers:

cross-fetch: 145,954 weekly downloads
isomorphic-fetch: 1,015,885 weekly downloads

The fact that we have 1M (with the big M) on isomorphic-fetch shows that we all need this, and the last thing we want is to have a discussion about the specification.

We just need the feature; 1M weekly downloads show that a lot of source code today has to depend on packages that are basically trying to fix this issue.

Just be a little pragmatic.

Please.

The fact that we have 1M (with the big M) on isomorphic-fetch shows that we all need this; and the last thing we want is to have a discussion about the specification.

1M downloads is less than I have on several packages I help maintain (like bluebird or sinon) - but I have no intention of, or desire for, pursuing adding them to core. There are several other HTTP libraries:

  • request: 3,260,795 weekly downloads
  • axios: 835,153 weekly downloads

The fact that something is popular does not indicate we should add it to core (like lodash, request or express).

The primary motivation to add something like fetch is standardisation and not popularity :)

The primary motivation to add something like fetch is standardization and not popularity :)
@benjamingr

But I hope we do not go sideways as this thread already does instead of focusing on how we could move forward.

Yes, you are right: the downloads do not mean it belongs in Node, but they do justify the need for such standardization in Node. It would be foolish to think otherwise.

My issue is having to rely on isomorphic-[insert name here for cross-platform package wanted] packages because a mistake was made back in 2009.

And for those already bashing the Deno conversation: learn from Deno about the irreparable mistakes in Node.js that we now face and that the Deno project is trying to fix (for some reason I don't know, as a brand-new runtime).

Unless we are willing to introduce breaking changes and have a conversation about it we will be forced to repeat history.

Not that long ago there was io.js vs Node.js, don't forget, which was one of the best moves ever for the good of the community: joining forces even when it meant completely different directions for the orgs, and even when it meant a lot of moving pieces.

Maybe this issue, and other issues that could be introduced by the same fundamental problem, is a sign that Node.js itself needs a refresh of its architecture and implementation.

But @benjamingr yes, you are 100% right.

because a mistake was made back in 2009.

If this is talking about fetch, the fetch standard only started to take shape around 2014, e.g. this was the commit that made fetch return a promise.

Maybe this issue and other issues that could be introduce by the same fundamental issue is a sign that NodeJS itself needs a refresh on the architecture and implementation.

I doubt that's possible without introducing compatibility pain - refactoring can be done for sure, but the current implementation has to stay and be deprecated gradually, if it is deprecatable at all (otherwise we would've been able to get rid of new Buffer now that TypedArrays have been around for so many years). Some users might be upset about having to learn a set of home-grown APIs that predate their Web API counterparts, but there will be even more existing users upset if their current code stops working.

Since this issue is closed I am really confused about what we will do moving forward.
Will Node implement the fetch API or not?

As far as I can tell, neither side is going to convince the other by simply arguing that a certain API is popular enough to be in core, or that a certain API is not designed for servers so it should not be in core. So far @TimothyGu's note in https://github.com/nodejs/node/issues/19393#issuecomment-376721984 seems to be the most viable suggestion to me, but nothing is going to change until someone opens a PR or publishes a proof of concept, and has the determination to make whatever changes are needed (including in the spec, in the implementation, and in the WPT) to make the end result reasonable for Node, so that people who don't want it in core can be convinced. Repeating the same arguments that have been made numerous times in this repo is not likely to help make progress, as work doesn't magically do itself, I'm afraid, and AFAICT this is a lot of work that you can't really force anybody to volunteer for.

In January axios had 15,943,234 downloads, isomorphic-fetch had 15,621,053, and cross-fetch had 2,444,751.

To put it into perspective, the react package has 20,767,762 downloads.

Please consider reopening this issue, fetch is one of the most used features in node, and it's not even in the core.

It can be quite hard to sum up the parties in this thread. Some do not want fetch() because it is too low level, some do not want it because "high level APIs are better handled in userland".

How can it be both low level and high level?

All of your favourite popular HTTP clients like axios and requests could easily build their existing APIs on top of this core. Why? Because standardization means we have a better core, and for those who do not want high level API features they can just use that core.

HTTP/2 support is over here if you want some. https://github.com/grantila/fetch-h2

I love that browser-land JavaScript is standardizing around a single HTTP client, it makes building middlewares much easier.

PHP had to go through a laborious design by committee approach to hit standardization and produced several PSRs in the process: https://www.php-fig.org/psr/

Node currently has a bajillion different HTTP clients, which only work with a smattering of middlewares, and there are handfuls of feature-weak VCR packages which are split between only supporting 1 or 2 of these clients. If we could standardize a fetch() implementation, then more work could go into tools that work with this standard core, instead of spreading the work between so many different client-specific solutions.

Standards are good, and putting a standard implementation for common activities like making an HTTP request into Node would save a lot of faffing about for everyone.

I think the main thing missing here is someone to actually work on it. This is an undertaking probably measured in hundreds of thousands of lines, and Node doesn't have employees like browsers do to implement this.

Can't Node just copy-paste the implementation from a browser like Chromium?

@sheerun no, we can't, the internal infrastructure is too different, and there are also differing goals with the implementations. browsers have a lot of indirection to make sure things like CORS wouldn't be breached, but node's impl wouldn't even have CORS and we would probably want to focus on performance.

Then maybe a better approach would be to copy-paste a polyfill like https://www.npmjs.com/package/node-fetch as the official API and then gradually rewrite it with native code. All anyone cares about is a standardised way to perform an HTTP fetch in Node, not that it's implemented natively in C++.

Performance is just one consideration. It's an important one, but not the only one. The other issue is whether Node.js can implement it with proper spec compliance. At the very least, the Streams API would also need to be implemented, and some global state issues would need to be looked at. I have an implementation that I've done that I chose not to pursue further because the impl is not fully compliant with the spec, due to adaptations necessary to fit it into Node.js.

A subset of the fetch API would probably work as well. I understand that 100% compliance can be problematic, but I don't think it's necessary for most applications (or even possible: you simply can't implement some features like CORS because they don't apply to this environment).

I think the best approach would be to decide what the "Common API" of fetch is, and then diverge implementations for the Browser (e.g. CORS is included) and Node (e.g. buffering is implemented instead of streams). Everyone should be fine as long as Node doesn't change the fetch API, but instead skips implementing some parts of it and also adds some extra Node-only API.

Please note that API of node-fetch is also not 100% browser-compatible yet it's good enough for isomorphic apps and developers have no problems using it.

Looking at the types of Request and Response, the only field that uses streams is body, and most of the API doesn't need streams at all. I think it's fine to skip supporting this field, at least for 1.0.

For another datapoint, the Next.js project is strongly considering adding fetch automatically to the server runtime.

Serverless is blurring the lines between executing code on the server and the client. Every single one of our customers installs Next.js, then proceeds to bring isomorphic-fetch, node-fetch, fetch-h2, etc.

It's ultimately a waste of time and resources, and a huge source of security problems, incomplete and disparate implementations, node_modules black hole deepening, etc.

I was initially against fetch being in Node.js, but it's just where we're headed inevitably. I'd even be happy to just run Chrome as my server runtime, since I want more feature parity between the platforms and a consistent developer experience :)

I was initially against fetch being in Node.js, but it's just where we're headed inevitably. I'd even be happy to just run Chrome as my server runtime, since I want more feature parity between the platforms and a consistent developer experience :)

I tend to agree with this more and more over time. I think we need to figure out how to reconcile EventTarget and EventEmitter and our streams with web streams which is the biggest blocker in my opinion right now for landing a fetch-like abstraction in Node.

My main concern is that Node.js is not a browser. It never had the same guarantees and the same level of user-isolation and security that is part of a web browser. Those trade-off were needed to provide a performant and reliable server-side platform.

In a browser, there is only one single user. In Node.js, we have potentially tens of thousands. The fetch API is designed with a single user in mind: it uses the browser connection pool, TLS configs, keep-alive settings, caching, etc. Those trade-offs were needed to provide a performant and reliable server-side platform. I'm firm in saying that we cannot implement the full fetch spec, and what we would end up with would diverge from the standard significantly enough that we should not put it in a global.

It’s worth noting that AWS Lambda guarantees this 1-1 between a process and a user. It has a steep cost: TLS connections are not kept open, and latency is higher, among other things. (Other serverless environments do not have the same limitation).

I concur that we should do something about this and start the process of building a consensus about fetch.

@joyeecheung is this something the standards team can work on?

@mcollina Node's WHATWG URL implementation isn't on the global either, so I don't think that's worth worrying about too much.

I think the best approach would be to have a http.createFetch(opts) API that wraps the agent model, http/https/http2/http3 in a way that is a) coherent with Node.js API b) compatible with the most common usecases for the browser spec c) easy to intercept (our current API is extremely hard to wrap, see http://npm.im/nock).

I think this could give us enough flexibility to solve most of the use cases. There will definitely be some key differences at the semantics/behavior level (caching, connection pooling and security) that makes me wonder if we should call this “fetch” to begin with - however fetch is extremely popular, and the community is happy with node-fetch and other polyfills, so I’m not so much concerned about having something significantly different. We definitely need a new HTTP client API in core, as our current model is aging badly.

(Note that there are significant difference in the WHATWG URL implementation as well, and we are not doing a good job in telling these).

The WHATWG fetch currently ignores caching, which whilst makes me sad, I’d rather have a standard API without caching than no standard API.

--
Phil Sturgeon
@philsturgeon


@philsturgeon

The WHATWG fetch currently ignores caching

That's… incorrect. Large parts of the spec and api are devoted to negotiating caches.

What made you think fetch ignores caches?

Sorry, I'm talking out of my backside. I meant the GitHub implementation of fetch() (the one on npm as "node-fetch") ignores caching.


+1 to a new impl, maybe we can even use napi to integrate it into core

I’m all for vendoring in an npm module that we then expose as fetch, regardless of whether it’s a new implementation or not.

@mcollina I don't think anyone is concerned with the full fetch specification in terms of caching. Node.js is not a browser but more and more code is shared between client and server and while we don't have the same guarantees as our API surface gets modernised (like fs.promises) I think fetch is not a bad fit. It does streams pretty cleanly, backpressure, cache handling and other things.

There are parts of it that deal with the cross-origin policy which we don't need/want but other than that I think there is merit in a single modern low-level primitive for making http requests in core (fetch being a low level primitive for requests).

What about something like (and I'm bikeshedding):

  • Make an EventTarget subclass of EventEmitter in core.
  • Make a WhatWG stream subclass (or close to) of ReadableStream.
  • Explicitly don't support blobs (but maybe add it later?).
  • Add fetch as experimental on top of http.

I’m pretty firm in not having a single global connection pool, security settings, caching, etc. I’m good with your plan as long as we can create all of that and tear it down if we want to. Node.js became extraordinarily better when it got rid of the default global agent, and I truly do not want that situation to happen again.

(minus some details on streams interop, but those are API details).

On one hand, I would love to see Fetch being in the nodejs core.
On the other, I think people should temper their expectations, as I suspect the end result is going to disappoint many users.

Reasons:

  • People stumping for native Fetch support want to use it in an isomorphic environment: ie. they want to write the same code and run on browsers, nodejs and even react-native.
  • Even if we skip topics like cookie, cache, protocol-relative url and error handling, there is still the problem with WHATWG Stream.
  • My strong preference is for nodejs to implement Fetch with WHATWG Stream, but that means the API won't play with most existing libraries (sure, you can convert them into nodejs streams, but the interop step itself isn't isomorphic).
  • On the flip side, with multiple implementations of Fetch using nodejs streams, I don't see the reason for yet another Fetch that's inherently non-isomorphic. (We can't even agree on simple things like highWaterMark, which causes code to run fine in browsers but face backpressure on nodejs.)
  • I urge people to read the limitations of existing Fetch implementations, as well as what we had to do to make Fetch more palatable on the server side, then consider whether this is the Fetch you wish for.

https://github.com/bitinn/node-fetch/blob/master/LIMITS.md

@bitinn what about a node stream subclass that implements the whatwg stream interface?

How important do you think cookies/caches/protocol-relative URLs are here? I think error handling and streams are solvable, and Node streams are async-iterable anyway already :)

@benjamingr That could be a good compromise.

Pros:

  • People who just want a unified Fetch that abstracts away the H1/H2 difference will be happy; they are not necessarily using Fetch in an isomorphic sense, but like the fact that the API works similarly, so there are fewer things to learn and master.
  • Having Fetch API in node, even a limited one, will move the surrounding infrastructure, so HTTP-related code will become more isomorphic over time.

Cons:

  • People who use the Fetch API in an isomorphic sense will demand more than just fetch(): to start there are new Response(), new Request(), new Headers(), Blob, FormData, AbortSignal; then the inherent lack of cache and cookies will make people write non-isomorphic code.
  • People who want more control than standard Fetch will demand more than just fetch(): to start they will need to use custom agent, then there will be things like controlling download size, dealing with infinite response, socket support etc.

The Fetch API is designed to do a few things very well, but not others. Moving the spec in those directions will be difficult, so you almost always have to make judgement calls that alienate people.

See our tests (which are 2,700 lines, when the actual code is only 500 lines):

https://github.com/bitinn/node-fetch/blob/master/test/test.js

See also how error handling is non-trivial given existing Fetch spec:

https://github.com/bitinn/node-fetch/issues/549

@bitinn are you attending the summit (https://github.com/nodejs/summit/issues) by any chance? I think it would be great to talk this over with other interested parties (like @mcollina and maybe @jakearchibald if he's attending) and enumerate the challenges in adding fetch to core.

cc @nodejs/open-standards

I'd also like to add that I've found contributing to the fetch spec in the past a pleasant experience and the people working on fetch to be very welcoming and friendly - so I definitely think we should engage those people when doing this.

I think it would be important for Node to ensure that it doesn't behave differently when non-default options are used with fetch.

For example take credentials (cookies), supposing Node didn't ship with credentials support then my opinion would be that it would be better for Node to throw an error on fetch(someUrl, { credentials: 'include' }) instead of silently ignoring the credentials option. Node would instead introduce its own new option e.g. 'bypass' and use fetch(someUrl, { credentials: 'bypass' }) (bikeshed name).

Now obviously it would be a huge pain to have to do fetch(someUrl, { credentials: isNode ? 'bypass' : 'omit' }); instead, one could just say that any options that are not provided will use the host's default, which in Node's case would be 'bypass'. Even better would be if the fetch spec itself could reflect that non-browser fetch implementations may use different defaults for various options.
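The default-merging idea above can be sketched in a few lines. This is purely illustrative: 'bypass' is the bikeshed name from this comment, not a real fetch option, and resolveInit is a hypothetical helper name.

```javascript
// Hypothetical sketch of host-specific defaults: options the caller omits
// fall back to the host's default ('bypass' is the bikeshed name above),
// so the same call site works unchanged in browsers and in Node.
const NODE_DEFAULTS = { credentials: 'bypass' };

function resolveInit(init = {}) {
  // Caller-provided options always win over host defaults.
  return { ...NODE_DEFAULTS, ...init };
}

console.log(resolveInit().credentials); // 'bypass'
console.log(resolveInit({ credentials: 'omit' }).credentials); // 'omit'
```

The point of this shape is that isomorphic code never has to mention the Node-only value at all; it simply omits the option.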

Similarly I think it's very important that if Node is to diverge with the spec in a non-compatible way (e.g. to expose extra data or such) it should do so using new properties rather than behaving in a different way to the existing properties.


In my personal opinion it might even be worth considering features like cookies/CORS/caches/etc within Node (whether a global fetch is provided with these features or not I don't personally care).

For example Node could consider a factory with things like cookie jars and such similar to other http libraries:

import os from 'os'
import { makeFetch, createPersistentCookieJar, createInMemoryCache } from 'fetch'

const cache = createInMemoryCache({ limit: os.totalmem() / 4 })
const cookieJar = createPersistentCookieJar('./cookies.cookiejar')

const fetch = makeFetch({ cache, cookieJar, origin: 'https://mycoolwebsite.com' })

// Use fetch like a browser fetch

Having said that I still think it would be a lot better for Node to have a good subset of fetch than to not implement it all or be delayed for ages working on a complete implementation.

For example take credentials (cookies), supposing Node didn't ship with credentials support then my opinion would be that it would be better for Node to throw an error on fetch(someUrl, { credentials: 'include' }) instead of silently ignoring the credentials option.

I'm not so sure. In the browser, if you call fetch(someUrl, { credentials: 'include' }) when there are no credentials (a fresh site visit, incognito mode, just-cleared cookies), it does not reject.

It feels like Node should behave like a browser with no credentials.

Node would instead introduce its own new option e.g. 'bypass' and use fetch(someUrl, { credentials: 'bypass' }) (bikeshed name).

How would this differ from 'omit'? I don't think Node should add new options without adding them to the spec.

@jakearchibald I think I was getting confused with CORS when trying to make an example, my example should've been fetch(someUrl, { mode: 'bypass-cors' }) where we completely ignore CORS but can still read the response as this is the typical behaviour most people want in Node.

Obviously there are alternatives:

  • Simply allow reading normally opaque responses in Node (treating file:/// as the origin)
  • Add another mechanism just for Node for reading opaque responses (treating file:/// as the origin)
  • Simply ignore all cors-related flags and read any response

I'd prefer to err on the side of caution and go with one of the first two ideas rather than just ignoring existing flags (instead throwing on use of things Node doesn't want to support, like no-cors, or actually supporting them)

To @Jamesernator and others reading this thread.

The tough part about getting Fetch as close to spec as possible is the lack of context; in browsers you take that for granted: cookies, cache and sessions can all be considered a part of it.

This is probably why fetch-h2 invented its context object, because it's more natural for HTTP/2.

This is also why node-fetch didn't, because we like the fact nodejs HTTP/1 is context-less, and you control connections through custom agent.

My take: Things will be easier if nodejs can provide a new HTTP abstraction layer where you don't think about HTTP/1 or HTTP/2; yes, it's as hard as writing a spec-compliant Fetch, but you don't have to be spec-compliant when you invent a new API, _you can effectively delay the compliant step to when you wrap this API into Fetch._


The second issue is simply the API surface: you are allowed to do things like new Response(new FormData()).blob() and per spec it should just work.

My take: Things will be easier if nodejs can provide things like Blob, FormData and WHATWG streams in the first place. Without them, you rely on userland to implement a compliant solution, but neither Blob nor FormData exposes a way to reliably "detect" itself, so you rely on name guessing and API duck-typing. (This is what node-fetch does.)
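To make the duck-typing point concrete, here is a sketch of the kind of check a userland fetch is forced into when Blob is not a platform primitive. The shape tested is an assumption modeled on the standard Blob interface; it is not node-fetch's actual code.

```javascript
// Without a shared platform Blob, a userland fetch cannot rely on
// `instanceof` across packages/realms, so it falls back to name guessing
// plus checking for the methods a spec-compliant Blob would have.
function looksLikeBlob(obj) {
  return (
    obj !== null &&
    typeof obj === 'object' &&
    typeof obj.arrayBuffer === 'function' &&
    typeof obj.stream === 'function' &&
    typeof obj.size === 'number' &&
    /^(Blob|File)$/.test(obj[Symbol.toStringTag] || obj.constructor.name)
  );
}

// A "Blob" from any polyfill that mimics the spec shape passes the check...
console.log(looksLikeBlob({
  [Symbol.toStringTag]: 'Blob',
  size: 0,
  arrayBuffer() {},
  stream() {},
})); // true

// ...while an arbitrary object does not.
console.log(looksLikeBlob({})); // false
```

This is exactly the fragility the comment describes: the check can be fooled by anything with the right names, and broken by a legitimate Blob subclass with a different tag.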


The third issue is spec change. There were instances where a Fetch spec change broke the existing API. For example, reading Set-Cookie headers is still a hack in node-fetch because the Headers API expects get(name) to return a single string, while a response can have multiple Set-Cookie headers.

Browsers don't have to worry about this due to their context (there used to be a getAll(), but that API was dropped), so the spec never needed to move and add an extra API.

My take: if nodejs officially shows intent to implement Fetch, then I have a feeling the Fetch spec team could be convinced to add a special API for things like Set-Cookie (barring the security issues). This potentially extends to other spec-related dilemmas.
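The Set-Cookie problem is easy to demonstrate: get() must return one combined string, and that combination is lossy because cookie attributes such as Expires legitimately contain commas. A minimal sketch (not node-fetch's actual code):

```javascript
// Two Set-Cookie headers as a server would send them.
const setCookies = [
  'a=1; Expires=Wed, 21 Oct 2015 07:28:00 GMT',
  'b=2; Path=/',
];

// Headers.get() is specified to combine duplicate headers into one string.
const combined = setCookies.join(', ');

// Naively splitting on ", " to recover the original headers over-splits,
// because the Expires date itself contains ", ".
const recovered = combined.split(', ');
console.log(recovered.length); // 3, not the 2 headers we started with
```

No amount of clever splitting fixes this in general, which is why a dedicated accessor (like the old getAll()) is the only clean answer.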

Thx for reading.


@benjamingr Unfortunately I am not going to the summit (work mostly in C# nowadays), but I sure hope nodejs team get to discuss this issue with fetch spec team and browser vendors.

Maybe it would be better to turn the tables and, instead of trying to match the official fetch spec 100%, create Node's own fetch spec that is simpler in scope than the original fetch and also dead simple to polyfill with the original fetch (ideally a 1:1 API). With this change you don't need to expose a global fetch, Request or Response, because you're no longer trying to implement the original fetch.

An API could look something like this:

const { fetch, Response, Request } = require('http')

And on browser-side it would be transformed to something like:

const { fetch, Response, Request } = window

I talked with @jakearchibald in private (asking about the summit) and since a lot of involved parties won't attend - I was thinking about maybe spinning this into a small team of interested parties.

@jakearchibald @bitinn and any other party maybe interested: I think this is worthwhile and I think we should discuss this. I think the start of June (after JSConf and the summit) would be a good time frame to discuss this.

@nodejs/open-standards wdyt?

@sheerun about your comment,

I would prefer that we put those things in the global object so we do not need to create special cases unless there is a technical reason for it (I don't mind re-exporting things).

const { fetch, Response, Request } = globalThis; // don't forget that we have this now

If there are going to be implementation differences, a module approach makes more sense like @sheerun describes, as a browser shim can be designed for the Node case, just like with workers.

@yordis It's just an example. I'm pretty sure the shim wouldn't be as simple, and rather something like:

const { fetch, Response, Request } = __makeNodeFetch();

where __makeNodeFetch is injected by polyfilling bundler / library

EDIT: I would also suggest official shim be shipped by node team instead of created by 3rd party

Hey, just an update that we had a meeting about this in the summit and here are the notes https://docs.google.com/document/d/1tn_-0S_FG_sla81wFohi8Sc8YI5PJiTopEzSA7UaLKM/edit there is also the PR linked to above that is in a very early stage.

We actively need people to step up and help with the above issues.

@benjamingr A question out of ignorance.

How hard would it be to adapt Deno's implementation of fetch to NodeJS? (please don't focus on Deno vs NodeJS 🙏)

Looking at the code it seems that could be portable but I am not sure of the walls I will face doing so.

Did anyone try that?

Should we try that?

How hard would it be to adapt Deno's implementation of fetch to NodeJS?

Very, Deno's implementation is very incomplete and lacks a lot of features. Starting off from node-fetch like this PR is a lot more feature complete already. Moreover, Deno's implementation works by hooking into tokio with flatbuffers (we use neither) and really wouldn't make much sense for node.

(There is no enmity between Deno and Node.js, it's really not a competition :) )

I can't possibly imagine how someone is going to read all the comments in this thread and make a decision. Clearly it's good to have a public debate about this, but it's also clearly practically impossible to reach a consensus. I imagine the core team will read some (or most) of the feedback and make their decisions.

@bitinn comment:

The third issue is spec change. There were instances where Fetch spec change breaks existing API. For example, reading Set-Cookie headers are still a hack in node-fetch because Headers API expect get(name) to be a single string, when you can have multiple Set-Cookie headers.

I just opened https://github.com/whatwg/fetch/issues/973 to start the discussion again about getAll.

Also, fetch has filtered responses where some headers are hidden, which means fetch is not a full-featured HTTP client per spec.

From a development perspective, introducing a native fetch isn't a good idea, and performance isn't really an argument for it.

But supporting the same API (fetch) for the browser and the backend in the same way (without an external dependency) would be a win for the entire JS/TS ecosystem.

@gkatsanos we have discussed this in the May/June summit - here is what things were blocked on https://docs.google.com/document/d/1tn_-0S_FG_sla81wFohi8Sc8YI5PJiTopEzSA7UaLKM/edit?usp=sharing

IMO, the real win for the web, if node was to get fetch, would actually be that there's suddenly a ubiquitous cancellation mechanism via AbortSignal. RxJS would leverage that immediately. Other libraries have already started to use it.

Would love this as well.

I don't like to distinguish between the browser and Node;
both are ECMAScript (or say JavaScript) built on C++ underneath, like babies from the same mom. I don't like to watch the babies fight; we should give them the same love.
No matter the API, we can standardize it.

The situation with fetch in core has not changed. There are many who would like to have it but there are a number of very-non-trivial technical challenges that make it difficult, not the least of which is the fact that, to be compatible with the implementation in browsers, core would first need an implementation of the browser's streams API, which is (in many ways) incompatible with core's existing streams and would need to be implemented from the ground up to be performant. There are also issues around the way fetch uses internal global state -- which works just fine in a browser process but not so well on the server side. Yes, there are implementations like node-fetch that come close but are not fully spec compliant (for instance, it uses Node.js streams instead). These are not impossible problems to solve, of course, they just need someone willing to put the time and effort into implementing it and taking it through the process to get landed. While I don't personally see the need for fetch in core, I'm happy to work with anyone willing to put in the time.

If anyone wants to pick this up: https://github.com/nodejs/node/pull/22352

These are not impossible problems to solve, of course, they just need someone willing to put the time and effort into implementing it and taking it through the process to get landed.

I think a good first step would be to get just parts of fetch in, namely:

  • EventTarget (or an EventTarget compatible API)
  • AbortController ( https://github.com/openjs-foundation/summit/issues/273 btw )
  • Eventually: fetch

I think streams can be handled through Symbol.asyncIterator - pulling whatwg streams into core would be a huge undertaking and would have to be done very carefully.

I'm excited about EventTarget and AbortController in Node core, which seem like important steps on the way, but there are some more.

@mcollina has articulated a plan for eventually moving WHATWG streams into core, based on interoperation with async iterators and prototyping in a user module. I am optimistic that, if someone puts in the work, then this will be possible! I see this as an achievable prerequisite for fetch in core, in addition to what @benjamingr mentions, if compatibility with web semantics is a goal.

Another possible aspect of fetch in core (that @mcollina has emphasized) is a sort of "user agent API" to explain how things like cookies, caching, and possibly proxying would work. Browsers' fetch capability is based on a built-in user agent directly, without full abilities to manipulate it, e.g., alternating between different profiles. Something more may be needed for server environments.

I think these are both doable, if someone is able to put in the (large amount of) work. My coworker @ptomato in Igalia is starting on AbortController, based on sponsorship by Bloomberg, see https://github.com/nodejs/node/issues/31971.

I'm particularly excited about WHATWG streams landing in core! Thanks to everyone working on all this stuff.

  1. I am also excited about it, so can we have a roadmap?
  2. By the way, from the beginning, please don't separate browser and server APIs for the same function! Those are just "user agents"; customers don't care what browser, what server, or what API you use.
  3. I really want to contribute, but I don't write C++. Can you teach me?

Another possible aspect of fetch in core is a sort of "user agent API" to explain how things like cookies, caching, and possibly proxying would work. Browsers' fetch capability supports this directly, without full abilities to manipulate it, e.g., alternating between different profiles. Something more may be needed for server environments.

Is actually the point I'm most looking forward to, so I can see if I can copy the approach for our CMS script language.

An agent might also be the point where you would decide how to handle HTTP authentication (which the browser would also do natively, e.g. for Basic HTTP auth) and perhaps even how and whether to reuse HTTP connections (needed for NTLM authentication, which works at the connection level).

@jakearchibald slowly :] First AbortController, there would need to be a significant amount of work done before whatwg-streams land in core. Presumably fetch would land with async iterator support and not whatwg streams.

For those who wish to work on this, may I suggest the following strategy that I've used for developing other large new features:

  1. Create a separate fork of nodejs/node (as I did for both the http2 and quic work ... e.g. https://github.com/nodejs/quic) and do the work on the feature in isolation there. This will allow the work to progress much faster. Once key milestones are hit, open PRs to merge those things in with core.

  2. Be patient. When it comes time to open those PRs to merge things into core, there may be lots of feedback/discussion/change requests. Those can take some time to get through, especially with large changes.

  3. Do not discount the possibility of vendoring in existing dependencies. We may not need to write everything from scratch. The key reasons that we would want to write something from scratch are (a) if we can achieve significantly better performance, (b) to achieve greater parity and consistency with existing Node.js API, or (c) to achieve better spec compliance.

  4. Keep in mind that adding a new top level module or global is always semver-major in Node.js and that whatever is done here will be experimental status for some time. Although experimental features are not subject to the same semver rules, unless the feature is behind a compile or runtime flag, the PR that adds the initial feature is still either semver-minor or semver-major depending on how it's done. This is important because semver-major adds, even experimental ones, cannot be backported to older versions unless those backports are done in a semver-minor way (e.g. we cannot add a new top level module or global to 13.x, 12.x, or 10.x). This is important because things like fetch and AbortController are globals in the browser and adding them as globals in Node.js would be semver-major.

  5. The bias for the Node.js implementation should always be on adhering to the standard spec and compatibility as closely as possible. We should absolutely minimize the number of Node.js specific details that developers need to keep in mind (that is, minimize the "It works this way in browsers, but this other way in Node.js") without solid justification. When in doubt, it should adhere to the spec.

4. Keep in mind that adding a new top level module or global is _always_ semver-major in Node.js

Does that mean V8 updates that add new globals are always semver-major & cannot be backported?

By the way, from the beginning, please don't separate browser and server APIs for the same function! Those are just "user agents"; customers don't care what browser, what server, or what API you use.

I really want to contribute, but I don't write C++. Can you teach me?

We can't teach you C++ but there are plenty of places to contribute to fetch in core, lots of research and JavaScript tasks.

You are welcome to get involved (there is a session in the summit on AbortController in core) and get up to speed on the last session https://docs.google.com/document/d/1tn_-0S_FG_sla81wFohi8Sc8YI5PJiTopEzSA7UaLKM/edit?disco=AAAAGdykWVg&usp_dm=true&ts=5eaa5dc3

Does that mean V8 updates that add new globals are always semver-major & cannot be backported?

That's a total grey area, to be honest :-) ... specifically, Node.js changes that add new globals are always semver-major

i really want to contribute, but i don't write C++, can you teach me?

With regards to fetch, there really shouldn't be a reason to touch c++ as the entire API can exist at the JavaScript layer. In fact, a proper implementation in core will need to span and abstract away the http/1, http/2, and eventually http/3 apis in order to work consistently and correctly -- all of which are exposed at the JavaScript level.

I think that we could backport a new module or a new global to existing release lines, if it requires a command-line flag to be enabled

Another possible aspect of fetch in core (that @mcollina has emphasized) is a sort of "user agent API" to explain how things like cookies, caching, and possibly proxying would work. Browsers' fetch capability is based on a built-in user agent directly, without full abilities to manipulate it, e.g., alternating between different profiles. Something more may be needed for server environments.

Essentially this is my main objection to having a full spec-compliant fetch in core. We can implement something "fetch-like", but the caching and security parts of it are extremely hard and possibly not even wanted on the server side. Or potentially they are, but they will need a different API to be exposed.

To recap all the above, fetch() assumes that there is only one user for the entire JS vm. Node.js cannot (and shouldn't) make this assumption. node-fetch does not make that assumption either and it works extremely well on Node.js because of it. I do not think we could ever be spec-compliant in this case.

As I said several times, I'm willing to support anybody that is willing to take this major effort.

Obviously, you have many words, because what you say is just based on the existing standard, and the nodejs team needs more collaboration with the whatwg team. Historically, the browser came first, then node. From the beginning, node didn't conform to web standards when designing its APIs, so node has much pain!

  1. Why not give up needless compatibility? @jasnell
  2. Why not redesign the fetch standard so that it fits both browser and node use cases? @jakearchibald
  3. Why not implement EventTarget?
  4. I reviewed the node http, https and http2 module docs, and they seem to say node does cache HTTP requests/responses; why does your Google document say node does not do caching of HTTP requests/responses?

Compatibility is not needless. Without it there's no actual reason to implement this in core at all! We could just let users go off implement their own various incompatible versions. Further, given that fetch is being used in production, whatwg cannot and won't simply redesign it. Implementing EventTarget is reasonable but the question remains whether we should given that we already have EventEmitter.

As for caching, Node.js does not implement any part of http caching.

  1. Sooner or later, you should give up compatibility, just as node itself has LTS support.
  2. Have you communicated with the whatwg team?
  3. From node's point of view, implementing the standard is easier than for a browser, because node is just a runtime; unlike with browsers, customers are not always willing to upgrade their browser!
  4. EventEmitter is not so amazing, and is needless compared with EventTarget.

This is node's own historical legacy problem, because node didn't conform to the standard when designing its APIs from the beginning.

Many of the people who work on node.js are also involved with standards bodies, including WHATWG. This issue is about adding fetch, not any random http api, so I think it makes sense to scope discussion here to compliant fetch.

If you think about it, the great request package only exists because using the built-in HTTP module required too much boilerplate. Let's also look at it this way:

  • Browsers: XMLHttpRequest -> fetch
  • Node.js: http/https/http2 -> fetch?

The fetch API was created as an easier way to make web requests in the browser, as a replacement for XMLHttpRequest. The same API should be used as a replacement for http, https and http2 in Node.js.

@Richienb

The fetch API was created as an easier way to make web requests in the browser, as a replacement for XMLHttpRequest

Actually, IIUC fetch was created as a _lower level API_ in browsers with _more capabilities_ and not as a higher level API that's ergonomic. Fetch makes it easy to stream responses and do low level stuff like Node's built in HTTP.

If you go to the fetch repo and read the discussions the fact fetch is about a modern API with capabilities rather than syntactic sugar is made very clear.

@benjamingr You've supported your own case here. By your explanation, window.fetch is a modern, low-level API for browsers that has more features than XMLHttpRequest. This is the same for Node.

For example, take the redirect: 'follow' option in the window.fetch API. In Node.js, you need https://www.npmjs.com/package/follow-redirects which allows for this.

@Richienb to be clear - I am not opposed to fetch in core - I am just saying that historically fetch wasn't added to browsers because it was higher level - it was added because it was _lower level_.

Redirects are different between browsers and Node due to the agent model, and redirect: 'follow' is actually a _lower level API_ than what XHR had (XHR always follows redirects; that is not configurable). Node's HTTP is a low-level API as well, so unlike XHR it doesn't always follow redirects.


To be clear fetch is currently only blocked in core because it's a _lot of work and effort_ to add to core well and there are a lot of stepping stones first (like figuring out what to do with AbortController and WHATWG streams). Fetch is not blocked in core because we don't want it in core.

What I think we'd like to avoid is a non-compliant Deno-like fetch situation, although I think that working with standards bodies together may give us a good middle ground where we have a subset of fetch that's truly universal and makes sense. (To be clear, this is no disrespect to Deno, where they are/were making a real effort to make a spec-compliant fetch for server-side JavaScript and are hitting a bunch of walls with the spec that make it very, very hard.)

I encourage you to get involved in any of those efforts - they are mostly moving slowly because of the amount of effort they take and priority - this isn't one of the places where we're stuck on disagreement.

Quick update: We now have experimental EventTarget and AbortController support landed in master. In the not too distant future I'll be exploring Blob/File API support and a bit more of the fundamentals for WHATWG streams. I don't know if we'll get all the way to implementing fetch but we should be able to provide many of the required components.

Deno is awesome, but it is written in Rust, whose syntax is very difficult, oh my god. I hope nodejs supports more and more ECMAScript features, or nodejs will become legacy and be replaced by Deno. @jasnell When will the EventTarget and AbortController features be released?

I don't know if we'll get all the way to implementing fetch but we should be able to provide many of the required components.

What about exploring a way to contribute to the fetch spec and asking for an API that lets us return custom types and not use whatwg streams (or negotiation)?

@anlexN deno and node are not in conflict or are enemies. Node should add features when it is helpful, compassionate and good for our users and platform health.

If anything, the fact Deno pioneered a platform built with EventTarget helped us when adding EventTarget, ditto for promises and AbortController.

Let's not take an ecosystem with more than one project which is a net-positive IMO as anything but that (a net positive).

I think the best way to deal with streams is to eventually (a long time away) switch to whatwg streams from Node streams entirely but I think for now, implementing them as a global should be a good step forward.

Most people keep focusing on APIs and on Streams. APIs and code are the least of the problems in having a standard-compliant fetch.

I don't think we can ever implement a compliant fetch as a global, mainly because the spec assumes there is only one user per JS context. Node.js as a server runtime supports multiple concurrent users. This is reflected in so many critical parts of the spec, such as the security model, caching, handling of cookies, handling of keep-alive sockets, and handling of HTTP/1.1 vs HTTP/2 and, in the future, HTTP/3. Essentially, fetch is _low level_ for browsers but _high level_ for Node.js.

Solving those issues is a __significant amount of work__ and so far no one has been willing to sponsor it: my estimate is that it's a 3-6 month effort for an engineer proficient with standards to have something experimental done, and likely another 6-12 months of standards work to have a spec for what a "server-side fetch" looks like.

The key problem is that node-fetch solves the problem really well, and there is _no perceived benefit_ to justify the development effort.

Helping out a bit with node-fetch, I know there are a lot of components involved, so it will most likely stick around for a while before node core gets it. Giving us access to the smaller components to begin with will help in the long run.

Some kind of dependency tree, from lowest to highest component:

  • Blob + blobStorage
    • Files
      • FormData
  • EventTarget
    • AbortController
      • AbortSignal
  • whatwg streams
  • Headers (think this could be a separate thing that could work with other libs and even http(s))

My personal goal is really not to implement fetch itself in core but to implement the constituent parts to enable a userland implementation to be better, particularly when those parts can be used effectively for other uses in core. EventTarget and AbortController are where I'm starting but once I get a bit further down the road with a couple other items on my todo list I plan on looking at the Blob and File APIs ... in no small part because I believe Blob and File can be used to boost performance in other areas (respondWithFile APIs). There's a lot of work to do in that regard.

Is it possible to take the Chromium implementation and then wire it up to Node?

@wood1986 not likely due to the caching and "agency" issues outlined above. The Chrome HTTP stack and fetch implementation contain a lot of logic Node can't reuse (for security, limiting concurrency per host and caching) and it's also tightly coupled internally and unless they do a significant refactoring I don't see a way to share that code.

I think it'd be great if an environment-defined proxy could be supported in a future core fetch API. There is certainly considerable demand for this feature, and Node seems to be the only major server-side "language" that does not feature such built-in support.

Hello everyone, based on the comments above I feel like fetch in node core is still a long way off, due to the amount of work involved and the technical challenges.

I'd like to provide a high level data point from my side and perhaps a new (not sure) perspective. It's not purely on fetch but networking in general. (Sorry if you find my comment off-topic and if discussions like this have already inevitably happened before).

There are IMO two main usages of nodejs in the wild:

1) high traffic production servers
2) build tool chains of front-end JS devs, command line tools & code snippets

I feel that most of the core contributors discussing here focus solely on use case 1). (Maybe I'm wrong.)
I personally have been using node mostly for use case 2) for a good decade.

When it comes to production servers, developers can use the http module or battle-tested libraries; it doesn't make much difference because _you have a lot of code anyway, and a lot of npm deps too._ I also agree with the "keep core small" sentiment.

But when it comes to command line tools, having an easy way to do a _dependency-less_ HTTP GET in one line of code is IMO _highly_ desirable. Let's see what other programming languages offer:

go: http.Get("https://example.com/")
python3: urllib.request.urlopen("https://example.com/").read()
php: file_get_contents("https://example.com/")

I don't use any of those languages on a daily basis, and probably those APIs have limits and gotchas, but
you can probably admit the APIs look really simple and easy to remember.

Whereas doing a similar thing in node with http/https requires copy-pasting sizable boilerplate, and honestly, it's pretty frustrating.

IMO even if spec-compliant fetch is not possible to implement in node, or it's a long way to go to have it, something else (even if very limited) would still be desirable.

(Having said that I fully understand that it's not possible to ship a feature and tell people "don't use in prod servers" because the moment it's there, people _will_ start misusing it).

I'd love to one day be able to do something similar to the snippet below with node (it works with browsers) and use it in my quick throwaway scripts, PoCs, aliases etc.

node -p "(await (await fetch('https://api.example.com/stuff')).json()).list.map(o => o.id).join(',')"

without having to add asterisk "remember to npm install some-package".

A terse way to do HTTP calls makes it easier to experiment, share code over Slack, use in documentation, etc.
The added value IMO outweighs the "keep core small" rule.

Some final points: the features like:

  • API-wise similarity to fetch,
  • Bells and whistles like accepting object as param, urlencoding it,
  • Support for streaming,

etc. are all nice to have but (IMO) not necessary. A really simple and limited convenience wrapper over http(s) would really go a long way.

I'm generally +1 on adding something like that.

I think we are very close to landing fetch-without-streams-and-security-rules in terms of what code we need (with AbortController and EventTarget in and not doing streams).

I am in favour of doing node-fetch-without-streams-or-node-streams for the first iteration and adding streams support later.

I would want fetch to eventually be "real whatwg fetch" and not "something called fetch but entirely different".

I want to say that we can still decide to do http/promises in addition to or instead of fetch and not have to implement whatwg streams etc. I also agree with https://github.com/nodejs/node/issues/19393#issuecomment-647133028

I think we are very close to landing fetch-without-streams-and-security-rules

I am in favour of doing node-fetch-without-streams-or-node-streams for the first iteration and adding streams support later.

I think a lower-level async iterator or a Node stream would be better than nothing, even if it isn't on the same track as the spec.
Then you could do either stream.Readable.from(iterable) or whatwg.ReadableStream.from(iterable), or simply just do:

for await (let chunk of res.body) { ... }

This code would work for whatever kind of body it is (async iterable, Node stream, or WHATWG stream), since all of them behave the same way: they're async iterable and yield Uint8Arrays.

I would rather prefer the lower level of just a simpler async iterable if WHATWG streams seem too far off. Then you would still have the benefit of producing and consuming large bodies with fetch.
Of course, a console warning could be useful when using iterators (letting the user know that real streams aren't supported yet), and maybe deprecate iterators later on when/if WHATWG streams get added to Node. I think transforming an iterator into a stream is very easy with just x.from(iterable)

But I'm okay with node-fetch-without-streams-or-node-streams, and fetch-without-streams-and-security-rules is a good first iteration. Got to start somewhere.

But I also think it could be good to support both WHATWG streams and Node streams during a transition period, simply treating both as async iterators, since they can be consumed the same way that way. Plus, there are still many built-in Node APIs that use Node streams.

I want to say that we can still decide to do http/promises in addition to or instead of fetch and not have to implement whatwg streams etc. I also agree with #19393 (comment)

I like this idea too. It would be cool if http got some fetch-like methods added, like .text(), .json(), and .arrayBuffer(), to start with. But I still want WHATWG streams in Node as well.

The first implementation should be strictly compatible with the existing promise-based APIs; later, we can extend it with more advanced stuff like iterators/streams (hopefully in a standardized fashion).

In my opinion, fetch can internally be built however is best for Node. As a Node.js developer, I do not care whether fetch internally uses streams or whatever, nor do I care if it is built entirely differently from window.fetch().

Instead, the ONLY thing I care about, as a Node.js developer, is that it has exactly the same API and API behavior as the window.fetch() API.

Example: I want to use fetch and its promise-based API, just like window.fetch() or node-fetch. But I do not care how the promise is resolved internally, i.e. inside a stream, an event, a callback, or whatever.

Sometimes it sounds here like a fetch API in Node.js has to "work" exactly the same way as the original fetch API. But I do not think this is true. It must "do" the same things and "behave" the same way, and it should "have" exactly the same API, but it in no way has to "work" exactly the same way.

In my opinion.

@MBODM

exactly same api and api behavior

This is why implementing fetch in Node.js is challenging. There's plenty of context in this issue as to why.

@bnb note that they mention node-fetch and window.fetch having the same API, confirming the idea that _we do not have to be spec compliant_.

I don't want to rub more salt in the wound, but yeah, for many of us, from our perspective, node-fetch does the job. It may not be perfect, but it does.

Personally, I wouldn't mind if fetch has constraints and follows the browser spec, while the http module lets you handle more complex situations and gives you more control over whatever networking setup you may have.

In the worst case, we will have to deal, in our application layer, with the differences between each implementation, if we want a single API that can swap out the HTTP client 🤷🏻

As we do today with isomorphic-fetch or any other of those packages. All we ask for is some support from the core contributors, making our lives easier with abstractions that definitely pay off.

I had previously proposed embedding node-fetch in node and exposing it as http.fetch.

It might be worth exploring this idea some more.
