This is on CockroachDB 2.0.0 on 64-bit Linux.
These statements:
```sql
start transaction;
create table t_parent (id varchar primary key);
create table t_child (id varchar primary key, parent_id varchar);
create index idx_t_child_parent_id on t_child (parent_id);
alter table t_child add constraint fk_t_child_parent_id foreign key (parent_id) references t_parent (id);
commit;
```
Give this error:
```
pq: foreign key requires an existing index on columns ("parent_id")
```
I am not sure if this is a bug or a known limitation. I am working on a Java application that runs automatic migrations from the create scripts, but they fail when run inside a transaction. I would like to avoid having to move just the foreign keys into their own transaction, separating the index statements from their corresponding ALTER statements.
Hi @ruzkant
First of all, thanks for your interest in CockroachDB!
The reason you observe this error is that adding an index to an existing table is an asynchronous schema change: it runs at the end of the transaction, after the ALTER statement has already been processed.
The workaround for this limitation is to organize your transaction as follows:
```sql
start transaction;
create table t_parent (id varchar primary key);
create table t_child (id varchar primary key, parent_id varchar, index idx_t_child_parent_id (parent_id));
alter table t_child add constraint fk_t_child_parent_id foreign key (parent_id) references t_parent (id);
commit;
```
Would that be a suitable solution?
@knz Thanks for the prompt response. Unfortunately this doesn't help me as much as I'd like, because Postgres does not support the above syntax.
At the moment, because CockroachDB aims for Postgres compatibility with existing tools, it is hard to tell the two apart. For example, the Java JDBC driver's metadata always reports PostgreSQL as the product, and the version numbers are Postgres-aligned. Since tools cannot easily detect CockroachDB, the path of least resistance for now seems to be seeing how much mileage I can get out of maintaining a single set of scripts and changes. I am dealing with a mix of pure SQL migration/create scripts as well as Liquibase changesets.
In the case of Liquibase it is slightly easier to maintain both, since I can move the ALTERs to a separate changeset, but I am awaiting feedback from the project maintainers. There is also the option of extending Liquibase to make CockroachDB a first-class citizen. That would first require a query (or something similar) to differentiate it from Postgres; a bit of poking didn't reveal a simple, clean one that I could see, but I am new to CockroachDB. Even then it is non-trivial, since it would need to detect the index-with-foreign-key situation and refactor the statements so they can be executed in one changeset.
I am glad to know the additional syntax though, thx
@ruzkant You can use `SELECT version()` to differentiate between Postgres and CockroachDB.
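For example, that check could be wired into a Java migration tool along these lines (a minimal sketch; the `VersionCheck` class and its method names are invented for illustration, and it assumes CockroachDB's `version()` output begins with the product name `CockroachDB`):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class VersionCheck {
    /** Returns true if the version() string identifies a CockroachDB server. */
    static boolean isCockroachDB(String versionString) {
        // CockroachDB reports something like "CockroachDB CCL v2.0.0 (...)",
        // while Postgres reports "PostgreSQL 10.3 on x86_64-pc-linux-gnu ...".
        return versionString != null && versionString.startsWith("CockroachDB");
    }

    /** Queries the server and inspects the reported product string. */
    static boolean isCockroachDB(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT version()")) {
            return rs.next() && isCockroachDB(rs.getString(1));
        }
    }
}
```

A tool could call this once at startup and pick the CockroachDB-specific or Postgres-specific migration path accordingly.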
cc @dt @jordanlewis @vivekmenezes I think there's a possible workaround: if a CREATE INDEX is issued in the same txn as CREATE TABLE, we could modify the descriptor in place instead of queuing a schema change (possibly the same for other mutations). What do you think?
cc @awoods187 for roadmapping
@knz Thanks for the query...
@knz agreed we should not need to run schema changers on tables when they are created in the same transaction.
@knz @vivekmenezes Is there an appropriate venue for discussing project intent regarding JDBC driver support? For example, I was contemplating submitting a pull request against the Postgres JDBC driver to return the correct product name, but this would have far-reaching consequences for existing tools that rely on the Postgres name. The driver currently does:
```java
/**
 * Retrieves the name of this database product. We hope that it is PostgreSQL, so we return that
 * explicitly.
 *
 * @return "PostgreSQL"
 */
@Override
public String getDatabaseProductName() throws SQLException {
  return "PostgreSQL";
}
```
On startup it receives this data:
```java
} else if ("server_version_num".equals(name)) {
  setServerVersionNum(Integer.parseInt(value));
```
which could be extended with:
```java
} else if ("crdb_version".equals(name)) {
  setXXX
```
Then the Postgres driver itself wouldn't have to hardcode that anymore either. Technically, I suppose they could choose to do that any day, irrespective of any discussion here, and break a number of client tools that have been built to rely on the PostgreSQL product name.
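A hypothetical sketch of what such a driver-side extension could look like (the `ServerInfo` class, field names, and method names here are invented for illustration and are not part of the actual pgjdbc code):

```java
public class ServerInfo {
    private int serverVersionNum;   // set from the "server_version_num" parameter
    private String crdbVersion;     // hypothetical: set from a "crdb_version" parameter

    /** Dispatches one ParameterStatus message received during connection startup. */
    void handleParameterStatus(String name, String value) {
        if ("server_version_num".equals(name)) {
            serverVersionNum = Integer.parseInt(value);
        } else if ("crdb_version".equals(name)) {
            // If CockroachDB announced itself this way during startup, the
            // driver could report the real product name instead of hardcoding it.
            crdbVersion = value;
        }
    }

    String getDatabaseProductName() {
        return crdbVersion != null ? "CockroachDB" : "PostgreSQL";
    }
}
```

With something like this, the product name would be derived from what the server actually sent, so existing Postgres connections would keep reporting `PostgreSQL` unchanged.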
> break a number of client tools which have been built to rely on Postgresql product name.
yes that would be my concern as well.
When creating a table with a foreign key in the CREATE statement, an implicit index is created, but not with an explicit ALTER. I am not sure whether this inconsistency has a good reason; otherwise it might be useful for the ALTER statement to also create an implicit index if one is not present. I thought it might relate to this issue.
@ruzkant thanks for your feedback. The reason why ALTER does not add an index is that adding an index can be a very costly operation if the table already has many rows. This concern does not exist during CREATE, since the table is still empty at that time.
@vivekmenezes moving this to the bulk io project. Needs to be triaged.
Is this limitation the reason I'm getting a "foreign keys not supported" error during CSV IMPORT when I try to add a FK to the IMPORT TABLE statement -- as described in this docs issue?
Asking because I was going to file a cockroach issue against that behavior but found this issue, which may cover that.
that's an unrelated limitation: IMPORT prepares all the data in bulk and then just slurps it in -- it doesn't read existing data during that process, so it can't check a foreign key to existing data.
I did some QA on the fix for this and I ran into this: #25663.
reopening this issue because I found some corner cases regarding FKs and interleaved tables still not covered.
Adding a comment here to aid GoogleBot and people searching for this. I was trying to find the relevant issue for transactional DDL statements (i.e., in CockroachDB, DDL statements are still non-transactional).