GraphQL Engine returns {"path":"$","error":"resource does not exist","code":"not-found"} when using the {useGETForQueries: true} Apollo Link option.
Is it possible to enable a GET endpoint for Hasura Engine?
@ontofractal Hasura does not support GET for GraphQL queries as of now. Is there a particular use case where you need GET rather than POST?
In a GET request, how would the payload be sent? Through the query string? That does not seem clean.
Also, we should probably return 405 Method Not Allowed instead of a 404 on GET /v1alpha1/graphql.
AFAIK the GraphQL spec does not say anything about GET or POST.
But graphql.org has a best-practices section that covers serving over HTTP with both GET and POST.
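For what it's worth, those best practices answer the query-string question directly: the query and variables are sent as URL parameters. A minimal sketch of what that encoding looks like (the endpoint and query here are made up for illustration):

```javascript
// Sketch: encoding a GraphQL query into a GET URL, following the
// graphql.org "Serving over HTTP" best practices. Endpoint and query
// are hypothetical examples.
function buildGraphQLGetUrl(endpoint, query, variables) {
  const params = new URLSearchParams({
    query,
    variables: JSON.stringify(variables),
  });
  return `${endpoint}?${params.toString()}`;
}

const url = buildGraphQLGetUrl(
  'https://example.com/v1/graphql',
  '{ user(id: 1) { name } }',
  { limit: 10 }
);
// The server decodes the same parameters back out of the query string.
```

It is not pretty, but it is exactly the URL-addressable shape a CDN needs in order to cache.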
@shahidhk I mostly used GET queries to cache server responses
Is this on the roadmap? It would be massively useful to be able to use commodity CDN services like Cloudflare with Hasura Engine.
Seconding @ontofractal's point. Cloudflare will not cache POST requests; it only caches based on the URL. An option to process GET requests would be helpful. Apollo Client, for example, has the useGETForQueries option for exactly this reason: setting up caching is easier with GET requests.
I was about to set up Hasura in production when I noticed this is not currently possible. If anyone knows a workaround, I'd love to hear it. Thanks!
Apollo Server has automatic persisted queries (APQ): the client switches queries from POST to GET, while mutations stay POST.
https://www.apollographql.com/docs/apollo-server/features/apq/#cdn-integration
@rhaksw a simple/quick solution might be to put nginx or similar in front. There could be a few edge cases around the transformation of the request payload that need to be handled with care.
location /v1/graphql {
    if ($request_method = GET) {
        proxy_method POST;
        # transform request payloads
        proxy_set_header Content-Type "application/json";
        proxy_set_body '{"some_key":"$arg_some_key"}';
    }
    proxy_pass http://hasura:8080;
}
Hey @abn, great point! I'm using Hasura's Digital Ocean droplet, which ships with Caddy. My GraphQL config is currently the original,
api.revddit.com {
    proxy / graphql-engine:8080 {
        websocket
    }
}
I'll look into how to configure your suggestion for Caddy.
Service workers can also cache intercepted GET fetches, but not POST, so for the new hotness, GET is the new POST
Once the GET method is supported, the ETag header can be implemented: #2792
Supporting the ETag header avoids re-transferring data when it matches the version in the browser cache.
__Use case__:
When the application starts, initial state data is loaded.
If you refresh the page, or return later and the data has not changed since your last visit, the application loads immediately, without waiting for a network transfer.
Very useful for static data (or data that changes rarely), or for data that is usually modified only by you (your profile, your last sent messages...).
I figured out how to convert GET requests to POST in order to cache idempotent GraphQL queries via CDN!
I'm writing this out because I've received a lot of help from Hasura devs and want to give something back. There may be a better way. This is what I did,
Basic
Send a GET to /graphql-get/, which proxies to a customized OpenResty build of nginx. OpenResty converts the GET to a POST and proxies it back to Hasura's /v1/graphql/. Then set up Cloudflare as you would for any API.
Details
Using Hasura's Digital Ocean droplet, which comes with Caddy, I modified the Caddyfile:
# Write out the prefixes http://, https:// rather than just the domain.
# This disables the auto redirect from http -> https, which would
# mistakenly convert a POST to a GET.
http://api.revddit.com, https://api.revddit.com {
    # 172.17.0.1 is an IP for accessing localhost outside the docker container
    # 8282 is the port used below in the openresty configuration
    proxy /graphql-get/ 172.17.0.1:8282 {
        websocket
    }
    proxy / graphql-engine:8080 {
        websocket
    }
}
Then,
sudo docker-compose -f /etc/hasura/docker-compose.yaml restart caddy
sudo ufw allow 8282/tcp
Then, install OpenResty. It will fail to start on its own because it listens on port 8080 by default, which is already used by Hasura's Caddy installation.
Move the default configuration aside and create a new one,
sudo mv /etc/openresty/nginx.conf /etc/openresty/nginx.conf.old
touch ~/resty.conf
sudo ln -s ~/resty.conf /etc/openresty/nginx.conf
vim ~/resty.conf
# ~/resty.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8282;
        server_name api.revddit.com;

        location /graphql-get/ {
            set $query $arg_query;
            set $variables $arg_variables;
            proxy_method POST;
            rewrite_by_lua_block {
                ngx.var.query = ngx.unescape_uri(ngx.var.query)
                ngx.var.variables = ngx.unescape_uri(ngx.var.variables)
            }
            proxy_set_body '{"query":"$query","variables":$variables}';
            add_header Access-Control-Allow-Headers content-type;
            proxy_pass http://api.revddit.com/v1/graphql/;
            set $args '';
        }
    }
}
Start openresty,
sudo systemctl restart openresty
Note that above I am proxying to http://..., not https. This is fine for me, but you may want to look into it if that concerns you; configuring SSL on nginx to reuse Caddy's certificates seemed unnecessary. At first, requests were received by Hasura as GET requests despite having been rewritten as POST. That confused me until I found this issue, which describes how to configure Caddy to disable the auto-redirect from http to https.
Note that Caddy v2 is coming soon, and after that, support may be added so that the proxy via openresty/nginx is not necessary. I discussed that with Caddy developers in this thread.
Then, point DNS to Cloudflare and set up a page rule that looks like this. Cache Level "Cache Everything" enables caching JSON, and "Edge Cache TTL" tells Cloudflare how long to keep a result before requesting it again. By default, the browser cache lasts 4 hours, and you can set it as low as 30 minutes.
My client-side JavaScript looks like this,
// Disable the preflight request, which can't be cached by Cloudflare, by using customFetch
const customFetch = (uri, options) => {
    const fetchOptions = {
        credentials: 'same-origin',
        method: 'GET',
        headers: {
            accept: '*/*'
        },
        signal: options.signal
    }
    return fetch(uri, fetchOptions)
}

const apolloClient = new ApolloClient({
    cache: new InMemoryCache(),
    link: new HttpLink({
        uri: 'https://api.revddit.com/graphql-get/',
        fetch: customFetch,
        useGETForQueries: true
    }),
})
The above sends only minimal, CORS-safelisted headers, which keeps the request a "simple request," so the browser does not send an OPTIONS (preflight) request.
By the way, I first tried to convert GET requests to POST via a Cloudflare Worker, since they have a template for just this task. That was not working for me, perhaps because I had not yet set "Cache Level -> Cache Everything" and "Edge Cache TTL" in the Cloudflare page rule, which I later discovered was necessary. I may try this again later to satisfy my curiosity. Anyway, Workers have some limits, and you can avoid them by implementing the conversion yourself.
An easier way to cache GraphQL requests would be to hash the POST body, as Akamai suggests; however, it looks like Cloudflare does not support caching POST requests.
That's it! Now watch Hasura unveil support for GET requests and make all this pointless =)
@rhaksw you should consider writing a blog post about this :smile:
this should be the way to achieve Server Side Caching #1964
I'd also love GET support, it'd really help to implement caching by just sticking Hasura behind a CDN.
@rhaksw: I always get this error:
{"errors":[{"extensions":{"path":"$","code":"invalid-json"},"message":"Error in $: Failed reading: satisfy. Expecting ',' or '}' at 'OK}}){'"}]}
I tried with:
proxy_set_body '{"query":"$query","variables":$variables}';
proxy_set_body '{"query":"$query","variables":"$variables"}';
proxy_set_body '{"query":$query,"variables":$variables}';