Hello,
I tried to migrate from hasura/graphql-engine:pull2395-5107d1d1 (I wanted to try remote schemas) to hasura/graphql-engine:v1.0.0-beta.8 this morning, and I ran into this issue:
{
"internal": {
"statement": "CREATE TABLE hdb_catalog.hdb_computed_field\n (\n table_schema TEXT,\n table_name TEXT,\n computed_field_name TEXT,\n definition JSONB NOT NULL,\n comment TEXT NULL,\n\nPRIMARY KEY (table_schema, table_name, computed_field_name),\nFOREIGN KEY (table_schema, table_name) REFERENCES hdb_catalog.hdb_table(table_schema, table_name) ON UPDATE CASCADE\n );\n\nCREATE VIEW hdb_catalog.hdb_computed_field_function AS\n(\n SELECT\n table_schema,\n table_name,\n computed_field_name,\n CASE\n WHEN (definition::jsonb -> 'function')::jsonb ->> 'name' IS NULL THEN definition::jsonb ->> 'function'\n ELSE (definition::jsonb -> 'function')::jsonb ->> 'name'\n END AS function_name,\n CASE\n WHEN (definition::jsonb -> 'function')::jsonb ->> 'schema' IS NULL THEN 'public'\n ELSE (definition::jsonb -> 'function')::jsonb ->> 'schema'\n END AS function_schema\n FROM hdb_catalog.hdb_computed_field\n);\n\nCREATE OR REPLACE VIEW hdb_catalog.hdb_function_agg AS\n(\nSELECT\n p.proname::text AS function_name,\n pn.nspname::text AS function_schema,\n pd.description,\n\n CASE\n WHEN (p.provariadic = (0) :: oid) THEN false\n ELSE true\n END AS has_variadic,\n\n CASE\n WHEN (\n (p.provolatile) :: text = ('i' :: character(1)) :: text\n ) THEN 'IMMUTABLE' :: text\n WHEN (\n (p.provolatile) :: text = ('s' :: character(1)) :: text\n ) THEN 'STABLE' :: text\n WHEN (\n (p.provolatile) :: text = ('v' :: character(1)) :: text\n ) THEN 'VOLATILE' :: text\n ELSE NULL :: text\n END AS function_type,\n\n pg_get_functiondef(p.oid) AS function_definition,\n\n rtn.nspname::text as return_type_schema,\n rt.typname::text as return_type_name,\n rt.typtype::text as return_type_type,\n\n p.proretset AS returns_set,\n ( SELECT\n COALESCE(json_agg(\n json_build_object('schema', q.\"schema\",\n 'name', q.\"name\",\n 'type', q.\"type\"\n )\n ), '[]')\n FROM\n (\n SELECT\n pt.typname AS \"name\",\n pns.nspname AS \"schema\",\n pt.typtype AS \"type\",\n pat.ordinality\n FROM\n unnest(\n COALESCE(p.proallargtypes, (p.proargtypes) :: oid [])\n ) WITH ORDINALITY pat(oid, ordinality)\n LEFT JOIN pg_type pt ON ((pt.oid = pat.oid))\n LEFT JOIN pg_namespace pns ON (pt.typnamespace = pns.oid)\n ORDER BY pat.ordinality ASC\n ) q\n ) AS input_arg_types,\n to_json(COALESCE(p.proargnames, ARRAY [] :: text [])) AS input_arg_names,\n p.pronargdefaults AS default_args,\n p.oid::integer AS function_oid\nFROM\n pg_proc p\n JOIN pg_namespace pn ON (pn.oid = p.pronamespace)\n JOIN pg_type rt ON (rt.oid = p.prorettype)\n JOIN pg_namespace rtn ON (rtn.oid = rt.typnamespace)\n LEFT JOIN pg_description pd ON p.oid = pd.objoid\nWHERE\n pn.nspname :: text NOT LIKE 'pg_%'\n AND pn.nspname :: text NOT IN ('information_schema', 'hdb_catalog', 'hdb_views')\n AND (NOT EXISTS (\n SELECT\n 1\n FROM\n pg_aggregate\n WHERE\n ((pg_aggregate.aggfnoid) :: oid = p.oid)\n )\n )\n);\n",
"prepared": false,
"error": {
"exec_status": "FatalError",
"hint": null,
"message": "cannot change name of view column \"has_variadic\" to \"description\"",
"status_code": "42P16",
"description": null
},
"arguments": []
},
"path": "$",
"error": "postgres query error",
"code": "unexpected"
}
What is wrong?
Thank you, have a nice day!
@Tchoupinax the remote joins feature is in a separate PR and has not been merged into master (and can have a totally different underlying catalog schema). Migrating between them is not possible; you have to run the two in separate databases.
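For context, the 42P16 error itself is ordinary Postgres behaviour: CREATE OR REPLACE VIEW may append new columns at the end, but it cannot rename or reorder columns that already exist, which is presumably what the beta.8 catalog migration runs into against the hdb_function_agg view left behind by the preview image. A minimal, standalone reproduction (the view name here is made up, nothing Hasura-specific):

```sql
-- A view whose first column is has_variadic, as the old catalog defined it
CREATE VIEW demo_v AS
  SELECT false AS has_variadic, 'old'::text AS description;

-- Replacing it with a query whose existing columns are named or ordered
-- differently is rejected by Postgres:
CREATE OR REPLACE VIEW demo_v AS
  SELECT 'new'::text AS description, false AS has_variadic;
-- ERROR:  cannot change name of view column "has_variadic" to "description"
```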
Oh, I was afraid of that answer... Oh well, my fault.
What is the best way to "transfer" my work on the database and Hasura to the master branch? I guess I can load my schema without any problem, but what about the Hasura tables, should I start over from scratch? :/
Thank you for your answer and your explanation.
@Tchoupinax You essentially have to "reset" hasura metadata.
1. Get an export of the metadata using the console on the Remote Joins instance: https://docs.hasura.io/1.0/graphql/manual/migrations/manage-metadata.html#exporting-hasura-metadata
2. Stop Hasura and drop the hdb_views and hdb_catalog schemas (this essentially drops all Hasura metadata) using psql or some other client; a sketch of this step is below.
3. Run Hasura beta.8 and then import the metadata again: https://docs.hasura.io/1.0/graphql/manual/migrations/manage-metadata.html#importing-hasura-metadata

There might be a few hiccups (the exported metadata will contain remote_relationships, but I think those will just be ignored), but let me know if this works.
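For reference, here is a minimal sketch of step 2, assuming a plain Postgres database and the default Hasura schema names; run it only after exporting your metadata, and with graphql-engine stopped:

```sql
-- Run with psql (or any Postgres client) against the database Hasura uses.
-- This drops only Hasura's bookkeeping schemas; your own tables and data
-- in public (or other schemas) are left untouched.
DROP SCHEMA IF EXISTS hdb_views CASCADE;
DROP SCHEMA IF EXISTS hdb_catalog CASCADE;
-- On the next start, graphql-engine (beta.8) recreates hdb_catalog and
-- hdb_views from scratch; then re-import the exported metadata via the console.
```

The CASCADE is needed because both schemas contain tables and views; without it Postgres refuses to drop a non-empty schema.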
Hello,
It works like a charm! Thank you very much.
When I exported the metadata, I edited the JSON to remove the remote schema (I did not need it anymore) to make sure there would be no problems. :)
Issue closed for me, thanks for your expert support. :)