Clickhouse: DB::Exception: Cannot read all data

Created on 17 Feb 2017  ·  24 comments  ·  Source: ClickHouse/ClickHouse

I can't find an introduction at https://clickhouse.yandex/reference_en.html#
Is there any documentation about building a distributed environment across multiple servers?

All 24 comments

You can find an example in the Quick start guide, in the paragraph "ClickHouse deployment to cluster".

Let me know if you have difficulties with that guide; we can enhance it.

Could you add to this manual the case when a couple of shards are located on the same server?

In all our installations, we have only one shard per server.
Having multiple shards on the same server is possible, but more unusual.

You should place the tables for these shards in different databases on that server.
Then specify <default_database> (next to <host>, <port>) with the name of the corresponding database in your cluster configuration. When creating the distributed table, specify an empty string '' as the database name; it means "use default_database from the cluster configuration".
So, different shards can be located on the same server in different databases.
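As an illustrative sketch (the cluster, database, and table names below are made up for this example, not taken from a real config):

<remote_servers>
        <two_shards_one_host>
                <shard>
                        <replica>
                                <host>example-host</host>
                                <port>9000</port>
                                <default_database>shard_1</default_database>
                        </replica>
                </shard>
                <shard>
                        <replica>
                                <host>example-host</host>
                                <port>9000</port>
                                <default_database>shard_2</default_database>
                        </replica>
                </shard>
        </two_shards_one_host>
</remote_servers>

The distributed table is then created with an empty database name, so that queries against each shard go to its own database:

CREATE TABLE hits_distributed AS shard_1.hits
ENGINE = Distributed(two_shards_one_host, '', hits, rand())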

Hi, thank you for the reply. I tried to configure that today.
I have five "fast" servers and two "slow" ones; the "slow" servers contain five replicas of the "fast" servers.

The slow servers contain five replicas from the fast servers, so each slow server accordingly has five databases, while the table name is always the same.

Here is an example:

<shard>
                                <!--  ###########     SHARD 01           -->
                                <internal_replication>true</internal_replication>
                                <replica>
                                        <host>clickhouse-hw-01.app.com</host>
                                        <port>9000</port>
                                        <default_database>dbclick</default_database>
                                </replica>
                                <replica>
                                        <host>historical03.app.com</host>
                                        <port>9090</port>
                                        <default_database>dbclick_1</default_database>
                                </replica>
                                <replica>
                                        <host>historical04.app.com</host>
                                        <port>9090</port>
                                        <default_database>dbclickc_1</default_database>
                                </replica>
                        </shard>

default_database for the whole config is commented out.

In the log I get this error, on the server where the distributed table is:

2017.02.27 11:37:00.923840 [ 5 ] <Warning> ConnectionPoolWithFailover: Connection failed at try №3, reason: Code: 210, e.displayText() = DB::NetException: Connection refused: (192.168.0.85:9090), e.what() = DB::NetException
2017.02.27 11:37:00.923864 [ 5 ] <Trace> Connection (192.168.1.151:9000): Connecting. Database: (not specified). User: default
2017.02.27 11:37:00.924113 [ 5 ] <Warning> ConnectionPoolWithFailover: Connection failed at try №3, reason: Code: 210, e.displayText() = DB::NetException: Connection refused: (192.168.1.151:9000), e.what() = DB::NetException
2017.02.27 11:37:00.930614 [ 4 ] <Trace> events_tv11.Distributed.DirectoryMonitor: Started processing `/opt/clickhouse/data/admanic/events_tv11/default@192%2E168%2E1%2E149:9000,default@192%2E168%2E0%2E83:9090,default@192%2E168%2E0%2E85:9090/1.bin`
2017.02.27 11:37:00.930643 [ 4 ] <Trace> Connection (192.168.0.83:9090): Connecting. Database: (not specified). User: default
2017.02.27 11:37:00.930883 [ 4 ] <Warning> ConnectionPoolWithFailover: Connection failed at try №1, reason: Code: 210, e.displayText() = DB::NetException: Connection refused: (192.168.0.83:9090), e.what() = DB::NetException

If needed, I'll attach additional information (configs or logs).

I found the issue: I forgot to correct 192.168.xxx.xxx.

But I hit another issue. When I run select count() from dbclick.events (my distributed table),
I always get a different value.

And I found this issue too.

I filled in this clause incorrectly when creating the distributed table:

Distributed(calcs, default, hits[, sharding_key])
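Given the advice earlier in this thread, presumably the corrected clause passes an empty database name so that each replica's <default_database> is used; a sketch (the sharding key here is illustrative):

ENGINE = Distributed(calcs, '', events_tv11, rand())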

I noticed that over time the row count of the distributed table keeps increasing:

:) select count() from events_tv11

SELECT count()
FROM events_tv11 

β”Œβ”€β”€count()─┐
β”‚ 10614198 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1 rows in set. Elapsed: 0.012 sec. Processed 10.61 million rows, 10.61 MB (908.67 million rows/s., 908.67 MB/s.) 

:) select count() from events_tv11

SELECT count()
FROM events_tv11 

β”Œβ”€β”€count()─┐
β”‚ 10757530 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1 rows in set. Elapsed: 0.013 sec. Processed 10.76 million rows, 10.76 MB (814.93 million rows/s., 814.93 MB/s.) 

:) select count() from events_tv11

SELECT count()
FROM events_tv11 

β”Œβ”€β”€count()─┐
β”‚ 10757530 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1 rows in set. Elapsed: 0.011 sec. Processed 10.76 million rows, 10.76 MB (959.33 million rows/s., 959.33 MB/s.) 

:) select count() from events_tv11

SELECT count()
FROM events_tv11 

β”Œβ”€β”€count()─┐
β”‚ 10759026 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1 rows in set. Elapsed: 0.012 sec. Processed 10.76 million rows, 10.76 MB (910.45 million rows/s., 910.45 MB/s.) 

:) use admanic_1

USE admanic_1

Ok.

0 rows in set. Elapsed: 0.001 sec. 

:) select count() from events_tv11

SELECT count()
FROM events_tv11 

β”Œβ”€β”€count()─┐
β”‚ 10820929 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1 rows in set. Elapsed: 0.010 sec. Processed 10.82 million rows, 10.82 MB (1.13 billion rows/s., 1.13 GB/s.) 

:) 
:) select count() from events_tv11

SELECT count()
FROM events_tv11 

β”Œβ”€β”€count()─┐
β”‚ 10820929 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1 rows in set. Elapsed: 0.031 sec. Processed 10.82 million rows, 10.82 MB (352.98 million rows/s., 352.98 MB/s.) 

:) select count() from events_tv11

SELECT count()
FROM events_tv11 

β”Œβ”€β”€count()─┐
β”‚ 10839811 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1 rows in set. Elapsed: 0.017 sec. Processed 10.84 million rows, 10.84 MB (631.21 million rows/s., 631.21 MB/s.) 

:) select count() from events_tv11

SELECT count()
FROM events_tv11 

β”Œβ”€β”€count()─┐
β”‚ 10858850 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1 rows in set. Elapsed: 0.043 sec. Processed 10.86 million rows, 10.86 MB (255.24 million rows/s., 255.24 MB/s.) 

Maybe I should open a new thread.

host1

:] desc ontime_local

DESCRIBE TABLE ontime_local

β”Œβ”€name───────┬─type───┬─default_type─┬─default_expression─┐
β”‚ FlightDate β”‚ Date   β”‚              β”‚                    β”‚
β”‚ ID         β”‚ UInt32 β”‚              β”‚                    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
↗ Progress: 2.00 rows, 94.00 B (905.59 rows/s., 42.56 KB/s.)
2 rows in set. Elapsed: 0.002 sec. 

:] desc ontime_all

DESCRIBE TABLE ontime_all

β”Œβ”€name───────┬─type───┬─default_type─┬─default_expression─┐
β”‚ FlightDate β”‚ Date   β”‚              β”‚                    β”‚
β”‚ ID         β”‚ UInt32 β”‚              β”‚                    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
→ Progress: 2.00 rows, 94.00 B (733.85 rows/s., 34.49 KB/s.)
2 rows in set. Elapsed: 0.003 sec. 

:] select * from ontime_local;

SELECT *
FROM ontime_local 

β”Œβ”€FlightDate─┬──ID─┐
β”‚ 2017-03-05 β”‚ 111 β”‚
β”‚ 2017-03-05 β”‚ 333 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
↘ Progress: 2.00 rows, 12.00 B (573.72 rows/s., 3.44 KB/s.)
2 rows in set. Elapsed: 0.004 sec. 

:] 

host2

:) desc ontime_local

DESCRIBE TABLE ontime_local

β”Œβ”€name───────┬─type───┬─default_type─┬─default_expression─┐
β”‚ FlightDate β”‚ Date   β”‚              β”‚                    β”‚
β”‚ ID         β”‚ UInt32 β”‚              β”‚                    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
→ Progress: 2.00 rows, 94.00 B (1.94 thousand rows/s., 91.02 KB/s.)
2 rows in set. Elapsed: 0.001 sec. 

:) desc ontime_all

DESCRIBE TABLE ontime_all

β”Œβ”€name───────┬─type───┬─default_type─┬─default_expression─┐
β”‚ FlightDate β”‚ Date   β”‚              β”‚                    β”‚
β”‚ ID         β”‚ UInt32 β”‚              β”‚                    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
↘ Progress: 2.00 rows, 94.00 B (1.54 thousand rows/s., 72.29 KB/s.)
2 rows in set. Elapsed: 0.001 sec. 

:) select * from ontime_local;

SELECT *
FROM ontime_local 

β”Œβ”€FlightDate─┬────ID─┐
β”‚ 2017-03-01 β”‚ 44444 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
β”Œβ”€FlightDate─┬───ID─┐
β”‚ 2017-03-04 β”‚ 2222 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
↓ Progress: 2.00 rows, 12.00 B (799.41 rows/s., 4.80 KB/s.) 
2 rows in set. Elapsed: 0.003 sec. 

:) 

select from ontime_all

:) select * from ontime_all;

SELECT *
FROM ontime_all 

β”Œβ”€FlightDate─┬────ID─┐
β”‚ 2017-03-01 β”‚ 44444 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
β”Œβ”€FlightDate─┬───ID─┐
β”‚ 2017-03-04 β”‚ 2222 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
β”Œβ”€FlightDate─┬──ID─┐
β”‚ 2017-03-05 β”‚ 111 β”‚
β”‚ 2017-03-05 β”‚ 333 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
↙ Progress: 4.00 rows, 24.00 B (460.15 rows/s., 2.76 KB/s.) 
4 rows in set. Elapsed: 0.009 sec. 

:) 

config.xml

 <remote_servers>
                <my_test>
                        <shard>
                                <replica>
                                        <host>120.132.42.190</host>
                                        <port>9000</port>
                                </replica>
                        </shard>
                        <shard>
                                <replica>
                                        <host>120.132.42.189</host>
                                        <port>9000</port>
                                </replica>
                        </shard>
                </my_test>
        </remote_servers>

I have a problem: if I stop host2, I get this error:

:] select * from ontime_all;

SELECT *
FROM ontime_all 

β”Œβ”€FlightDate─┬──ID─┐
β”‚ 2017-03-05 β”‚ 111 β”‚
β”‚ 2017-03-05 β”‚ 333 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
Received exception from server:
Code: 279. DB::Exception: Received from localhost:9000, ::1. DB::NetException. DB::NetException: All connection tries failed. Log: 

Code: 210, e.displayText() = DB::NetException: Connection refused: (120.132.42.189:9000), e.what() = DB::NetException
Code: 210, e.displayText() = DB::NetException: Connection refused: (120.132.42.189:9000), e.what() = DB::NetException
Code: 210, e.displayText() = DB::NetException: Connection refused: (120.132.42.189:9000), e.what() = DB::NetException

. 

2 rows in set. Elapsed: 0.097 sec. 

This is not HA. How can I use ClickHouse to build an HA system where the data is also distributed?

Sharding is not used for HA, but to utilize resources of many machines. For high availability you need to replicate your data (keep several copies of the same table on different nodes). So instead of two shards, each with one replica, you need one shard with two replicas.

For replication you need to set up a ZooKeeper cluster and create a ReplicatedMergeTree table on each of the two nodes (relevant section in the docs).
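A minimal sketch of such a cluster section, reusing the two hosts from the config above (internal_replication assumes the local tables are ReplicatedMergeTree, which performs the replication itself):

<remote_servers>
        <my_test>
                <shard>
                        <internal_replication>true</internal_replication>
                        <replica>
                                <host>120.132.42.190</host>
                                <port>9000</port>
                        </replica>
                        <replica>
                                <host>120.132.42.189</host>
                                <port>9000</port>
                        </replica>
                </shard>
        </my_test>
</remote_servers>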

Could you give me an example of ReplacingMergeTree?

host1
config.xml

        <zookeeper>
                <node index="1">
                        <host>120.132.42.190</host>
                        <port>2181</port>
                </node>
                <node index="2">
                        <host>120.132.42.189</host>
                        <port>2181</port>
                </node>
        </zookeeper>

  <macros>
                <layer>05</layer>
                <shard>02</shard>
                <replica>120.132.42.189</replica>
        </macros>

host2
config.xml

<zookeeper>
                <node index="1">
                        <host>120.132.42.189</host>
                        <port>2181</port>
                </node>
                <node index="2">
                        <host>120.132.42.190</host>
                        <port>2181</port>
                </node>
        </zookeeper>

 <macros>
                <layer>05</layer>
                <shard>02</shard>
                <replica>120.132.42.190</replica>
        </macros>

host1

:] select * from tx;

SELECT *
FROM tx

β”Œβ”€β”€EventDate─┬─UserID─┬─CounterID─┬───────────EventTime─┐
β”‚ 2018-01-10 β”‚     22 β”‚        34 β”‚ 2018-01-03 15:01:12 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
β”Œβ”€β”€EventDate─┬─UserID─┬─CounterID─┬───────────EventTime─┐
β”‚ 2017-01-02 β”‚     22 β”‚        13 β”‚ 2018-01-01 12:33:22 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
β”Œβ”€β”€EventDate─┬─UserID─┬─CounterID─┬───────────EventTime─┐
β”‚ 2017-01-01 β”‚     11 β”‚        12 β”‚ 2017-01-03 12:22:12 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

3 rows in set. Elapsed: 0.007 sec.

:]

host2

:) select * from tx;

SELECT *
FROM tx

β”Œβ”€β”€EventDate─┬─UserID─┬─CounterID─┬───────────EventTime─┐
β”‚ 2018-01-10 β”‚     22 β”‚        34 β”‚ 2018-01-03 15:01:12 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1 rows in set. Elapsed: 0.005 sec.

:)
[root@localhost zookeeper]# netstat -an |grep 2181
tcp        0      0 120.132.42.190:44791    120.132.42.190:2181     ESTABLISHED
tcp6       0      0 :::2181                 :::*                    LISTEN
tcp6       0      0 120.132.42.190:2181     120.132.42.190:44791    ESTABLISHED
[root@localhost zookeeper]#
[root@localhost bin]# netstat -an |grep 2181
tcp        0      0 120.132.42.189:45211    120.132.42.189:2181     ESTABLISHED
tcp6       0      0 :::2181                 :::*                    LISTEN
tcp6       0      0 120.132.42.189:2181     120.132.42.189:45211    ESTABLISHED

I can't modify the ZooKeeper config.

I assume you mean ReplicatedMergeTree, not ReplacingMergeTree.

A table should have a unique path in ZooKeeper and each table replica should have a unique id. So, first you need to configure a replica id in macros:

<macros>
    <replica>120.132.42.189</replica>
</macros>

(and a different replica id on a different node)

Then you can create a table

CREATE TABLE tx ... Engine = ReplicatedMergeTree('/clickhouse/tables/tx', '{replica}', ...)
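Spelled out in full for the tx table from this thread (the column list and old-style engine parameters are taken from the SHOW CREATE TABLE output below; a sketch, not the exact original statement):

CREATE TABLE tx
(
    EventDate Date,
    UserID UInt32,
    CounterID UInt32,
    EventTime DateTime
) ENGINE = ReplicatedMergeTree(
    '/clickhouse/tables/tx',  -- ZooKeeper path, identical on every replica
    '{replica}',              -- expands to a different value on each node via macros
    EventDate,                                             -- date column
    intHash32(UserID),                                     -- sampling expression
    (CounterID, EventDate, intHash32(UserID), EventTime),  -- primary key
    8192                                                   -- index granularity
)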

Then if you insert data into one tx table, it should appear in all tx tables on different nodes.

If this doesn't work, please send the output of SHOW CREATE TABLE tx.

Note: it is better to run a 3-node ZooKeeper cluster. A 2-node cluster will lose quorum and will be unable to process write requests if 1 node is lost. It is also better to run ZooKeeper on different nodes than ClickHouse servers.
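For example, a three-node <zookeeper> section in the same format as the configs above (the hostnames are placeholders):

<zookeeper>
        <node index="1">
                <host>zk1.example.com</host>
                <port>2181</port>
        </node>
        <node index="2">
                <host>zk2.example.com</host>
                <port>2181</port>
        </node>
        <node index="3">
                <host>zk3.example.com</host>
                <port>2181</port>
        </node>
</zookeeper>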

@ztlpn

:] SHOW CREATE TABLE tx;

SHOW CREATE TABLE tx

β”Œβ”€statement────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
β”‚ CREATE TABLE default.tx ( EventDate Date,  UserID UInt32,  CounterID UInt32,  EventTime DateTime) ENGINE = ReplicatedMergeTree(\'/clickhouse/tables/{layer}-{shard}/hits\', \'{replica}\', EventDate, intHash32(UserID), (CounterID, EventDate, intHash32(UserID), EventTime), 8192) β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1 rows in set. Elapsed: 0.005 sec.

:]

This is a test, not a production system, so I use a 2-node cluster.

What happened in this error:

2017.03.07 09:56:20.898209 [ 3 ] <Error> default.tx (StorageReplicatedMergeTree, RestartingThread): Couldn't start replication: DB::Exception, DB::Exception: Replica /clickhouse/tables/tx/replicas/120.132.42.189 appears to be already active. If you're sure it's not, try again in a minute or remove znode /clickhouse/tables/tx/replicas/120.132.42.189/is_active manually, stack trace:
0. clickhouse-server(StackTrace::StackTrace()+0x16) [0x1176496]
1. clickhouse-server(DB::Exception::Exception(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)+0x1f) [0xf7ad9f]
2. clickhouse-server(DB::ReplicatedMergeTreeRestartingThread::activateReplica()+0xd02) [0x12cdbb2]
3. clickhouse-server(DB::ReplicatedMergeTreeRestartingThread::tryStartup()+0x24) [0x12cdd34]
4. clickhouse-server(DB::ReplicatedMergeTreeRestartingThread::run()+0x470) [0x12ced40]
5. clickhouse-server() [0x2ff3f0f]
6. /lib64/libpthread.so.0(+0x7dc5) [0x7f7ac6d9cdc5]
7. /lib64/libc.so.6(clone+0x6d) [0x7f7ac65c573d]

@sangli00 It means that you have configured tables on different nodes with the same replica parameter (second parameter to the ReplicatedMergeTree engine). Please check the <replica> macro (it should be different on two nodes) and recreate the tables.

@ztlpn
Does the node1 macro need to equal the node2 macro?

@sangli00 replica macro values should be different for each node.

You can use hostnames, IP addresses (120.132.42.189 and 120.132.42.190, as you originally configured), or just node1 and node2. It doesn't matter; they just have to be unique for each node.

2017.03.08 10:00:09.892721 [ 20 ] <Error> DB::StorageReplicatedMergeTree::queueTask()::<lambda(DB::StorageReplicatedMergeTree::LogEntryPtr&)>: Code: 33, e.displayText() = DB::Exception: Cannot read all data, e.what() = DB::Exception, Stack trace:

0. clickhouse-server(StackTrace::StackTrace()+0x16) [0x1176496]
1. clickhouse-server(DB::Exception::Exception(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)+0x1f) [0xf7ad9f]
2. clickhouse-server(DB::DataPartsExchange::Fetcher::fetchPartImpl(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)+0x19d3) [0x12488d3]
3. clickhouse-server(DB::DataPartsExchange::Fetcher::fetchPart(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, bool)+0x67) [0x1249157]
4. clickhouse-server(DB::StorageReplicatedMergeTree::fetchPart(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool, unsigned long)+0x1ff) [0x121b72f]
5. clickhouse-server(DB::StorageReplicatedMergeTree::executeLogEntry(DB::ReplicatedMergeTreeLogEntry const&)+0x7c7) [0x121ca27]
6. clickhouse-server() [0x121f99e]
7. clickhouse-server(DB::ReplicatedMergeTreeQueue::processEntry(std::function<std::shared_ptr<zkutil::ZooKeeper> ()>, std::shared_ptr<DB::ReplicatedMergeTreeLogEntry>&, std::function<bool (std::shared_ptr<DB::ReplicatedMergeTreeLogEntry>&)>)+0x3b) [0x12c3ccb]
8. clickhouse-server(DB::StorageReplicatedMergeTree::queueTask()+0x132) [0x1200272]
9. clickhouse-server(DB::BackgroundProcessingPool::threadFunction()+0x3cc) [0x1243b5c]
10. clickhouse-server() [0x2ff3f0f]
11. /lib64/libpthread.so.0(+0x7dc5) [0x7f7699d99dc5]
12. /lib64/libc.so.6(clone+0x6d) [0x7f76995c273d]

@ztlpn

IP 120.132.42.190 config

<zookeeper>
                <node index="1">
                        <host>120.132.42.190</host>
                        <port>2181</port>
                </node>
                <node index="2">
                        <host>120.132.42.189</host>
                        <port>2181</port>
                </node>
        </zookeeper>
 <macros>
        <!--    <layer>05</layer>
            <shard>02</shard> -->
            <replica>120.132.42.189</replica>
    </macros>

IP 120.132.42.189 config

<zookeeper>
                <node index="1">
                        <host>120.132.42.190</host>
                        <port>2181</port>
                </node>
                <node index="2">
                        <host>120.132.42.189</host>
                        <port>2181</port>
                </node>
        </zookeeper>
<macros>
                <!-- <layer>05</layer>
                <shard>02</shard> -->
                <replica>120.132.42.190</replica>
        </macros>

I executed this SQL:

:] insert into tx values('2017-01-01',12,2,'2017-01-01 12-12-11');

INSERT INTO tx VALUES

Ok.

1 rows in set. Elapsed: 0.022 sec.

:] select * from tx;

SELECT *
FROM tx

β”Œβ”€β”€EventDate─┬─UserID─┬─CounterID─┬───────────EventTime─┐
β”‚ 2017-01-01 β”‚     12 β”‚         2 β”‚ 2017-01-01 12:12:11 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1 rows in set. Elapsed: 0.006 sec.

The error is on IP 120.132.42.189.
How should I modify the config?
Thanks.

:] show create tx

SHOW CREATE TABLE tx

β”Œβ”€statement──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
β”‚ CREATE TABLE default.tx ( EventDate Date,  UserID UInt32,  CounterID UInt32,  EventTime DateTime) ENGINE = ReplicatedMergeTree(\'/clickhouse/tables/tx\', \'{replica}\', EventDate, intHash32(UserID), (CounterID, EventDate, intHash32(UserID), EventTime), 8192) β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

1 rows in set. Elapsed: 0.003 sec.

Is table tx OK?
ReplicatedMergeTree('/clickhouse/tables/tx', ...)

I have the exact same problem:
<Error> DB::StorageReplicatedMergeTree::queueTask()::<lambda(DB::StorageReplicatedMergeTree::LogEntryPtr&)>: Code: 33, e.displayText() = DB::Exception: Cannot read all data, e.what() = DB::Exception, Stack trace:
with a very similar setup (I have 3 nodes for both ClickHouse and ZooKeeper).

Can someone have a look and help?

I am noticing this as well on the latest stable build (1.1.54310)

You can get rare "Cannot read all data" errors when fetching data from a replica due to network issues.
All other reasons are considered to be fixed as of now.

Please reopen if you experience recurring "Cannot read all data" errors in the latest stable version.

In my experience, this issue typically indicates an incorrect configuration of the "interserver_http_host" node in config.xml. ;-)
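For reference, that setting is a top-level element of config.xml; a sketch with a placeholder hostname:

<!-- Hostname (or IP address) that other replicas use to fetch parts from this server.
     It must be resolvable and reachable from the other nodes;
     part fetches go over interserver_http_port (9009 by default). -->
<interserver_http_host>clickhouse-node-1.example.com</interserver_http_host>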

Hi,

I want to have a 1-shard, 2-replica setup with ENGINE = ReplicatedMergeTree.

I created the attached configs, and when I run the create table query I can see the tables created on both nodes.
But when I insert data into the table on one node and query the table from the other node, it doesn't show the contents.

Could you please help me figure out why the inserted data is not replicating?

clickhouse_query.txt

Thanks
