Hello, I've run into a problem reading data from Kafka.
So, I need to transfer data from a Kafka queue (JSON messages inside), which I created and fill from my Java code, to ClickHouse (CH).
In our CH setup we have a cluster with 6 replicas, and the data must be accessible from each of them. That means I need to use a Distributed table 'data'.
I also need a 'queue' table (Kafka engine) and a materialized view 'consumer', both on each replica (please correct me if I'm wrong).
'data', as a Distributed table, must point to a source table (the one it gets data from), so I pointed it at 'queue' (I'm not sure about this).
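For illustration, here is roughly what that definition looks like; `my_cluster` and `default` below are placeholder names, not my real ones:

```sql
-- Sketch only: 'my_cluster' and 'default' are placeholders.
-- Distributed(cluster, database, source_table[, sharding_key]):
-- the third argument is the table the Distributed table reads from,
-- which I pointed at the Kafka engine table 'queue'.
CREATE TABLE data AS queue
ENGINE = Distributed(my_cluster, default, queue, rand());
```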
While I'm putting data into Kafka, I'm pretty sure the tables accept data (I run a simple `SELECT count(*) FROM data`, the Distributed table), but I always get this:
"Progress: 1.55 thousand rows, 1.24 MB (297.46 rows/s., 237.18 KB/s.) Received exception from server (version 18.14.17):
Code: 159. DB::Exception: Received from host:port. DB::Exception: Failed to claim consumer: .
0 rows in set. Elapsed: 5.313 sec. Processed 1.55 thousand rows, 1.24 MB (291.94 rows/s., 232.78 KB/s.)"
When I stop filling Kafka, I have a short time window in which my query completes. But after a few seconds I receive 0 counts on every table I have created.
I'm kind of stuck and frustrated by this problem, and I'm pretty sure I don't understand something.
Could you please help me out or share some ideas on how I can achieve my goal?
The problem was on my side: invalid columns in the materialized view 'consumer'.
By the way, if anyone needs to do the same task, here's the road map (a DDL sketch follows the list):
1) Create 'local' tables on all hosts in the cluster;
2) Create Distributed tables on all hosts in the cluster;
3) Create the Kafka engine table 'queue' + materialized view 'consumer' on one host.
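A minimal DDL sketch of those three steps. The column set (`ts`, `message`), the cluster name `my_cluster`, and the Kafka broker/topic/group settings are all placeholders; substitute your own. I'm also assuming the view writes into the Distributed table so rows get spread across the cluster:

```sql
-- 1) Local MergeTree table, created on every host in the cluster.
--    The schema (ts, message) is just an example.
CREATE TABLE data_local
(
    ts DateTime,
    message String
) ENGINE = MergeTree()
ORDER BY ts;

-- 2) Distributed table, created on every host, pointing at the
--    local table. 'my_cluster' must match a cluster from your config.
CREATE TABLE data AS data_local
ENGINE = Distributed(my_cluster, default, data_local, rand());

-- 3) Kafka engine table + materialized view, created on ONE host only.
--    Broker address, topic, and consumer group are placeholders.
--    The columns must match what the JSON messages actually contain;
--    mismatched columns here were exactly my problem.
CREATE TABLE queue
(
    ts DateTime,
    message String
) ENGINE = Kafka('kafka-host:9092', 'my_topic', 'my_group', 'JSONEachRow');

-- The materialized view reads from 'queue' and writes into the
-- Distributed table, which spreads the rows across the cluster.
CREATE MATERIALIZED VIEW consumer TO data
AS SELECT ts, message FROM queue;
```

With this layout, `SELECT count(*) FROM data` on any replica reads from the MergeTree tables rather than from Kafka directly, so the query no longer needs to claim a Kafka consumer.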