ClickHouse: Some merges may get stuck

Created on 20 Apr 2020 · 4 comments · Source: ClickHouse/ClickHouse

This bug was discovered on Yandex.Metrica servers.

If a merge has been assigned but some parts within the range of parts to merge have been lost on all replicas, the merge cannot proceed and the following messages are printed in the log:

Executing log entry to merge parts ...
Don't have all parts for merge ...; will try to fetch it instead
No active replica has part ...
Checking part ...
Checking if anyone has a part covering ...
Found parts with the same min block and with the same max block as the missing part ... Hoping that it will eventually appear as a result of a merge.
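
To see what a replica actually has around the missing range, you can inspect block numbers in system.parts. A minimal sketch, assuming the affected table is db.hits and the partition id is 202004 (all placeholders):

-- Placeholders: database 'db', table 'hits', partition_id '202004'.
-- Compare min/max block numbers of active parts with the part name from the
-- "No active replica has part" message to see which block range has no owner.
SELECT name, active, min_block_number, max_block_number, level
FROM system.parts
WHERE database = 'db' AND table = 'hits' AND partition_id = '202004'
ORDER BY min_block_number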

And in system.replication_queue you will see the following entries:

Not executing log entry for part ... because it is covered by part ... that is currently executing
No active replica has part ... or covering part
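
These entries typically surface in the postpone_reason and last_exception columns of system.replication_queue; a minimal query to pull them out (the filter is illustrative):

-- Show stuck merge entries along with the reason they are being postponed
-- and the last exception they hit.
SELECT database, table, node_name, new_part_name, num_tries, postpone_reason, last_exception
FROM system.replication_queue
WHERE type = 'MERGE_PARTS' AND (postpone_reason != '' OR last_exception != '')
FORMAT Vertical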

Actually, this logic exists as a safety measure for cases when automated action is not possible: https://github.com/ClickHouse/ClickHouse/pull/1251

And it's unclear how to fix it in code.

Labels: bug, comp-replication


All 4 comments

Maybe simply check all parts participating in the merge?

How to fix it manually:

  1. SELECT * FROM system.replication_queue WHERE create_time < now() - INTERVAL 1 DAY AND type = 'MERGE_PARTS' AND last_exception LIKE '%No active replica has part%' \G
  2. SELECT replica_path || '/queue/' || node_name FROM system.replication_queue JOIN system.replicas USING (database, table) WHERE create_time < now() - INTERVAL 1 DAY AND type = 'MERGE_PARTS' AND last_exception LIKE '%No active replica has part%'
  3. Delete these nodes from ZooKeeper.
    Example:
clickhouse-client --query "SELECT replica_path || '/queue/' || node_name FROM system.replication_queue JOIN system.replicas USING (database, table) WHERE create_time < now() - INTERVAL 1 DAY AND type = 'MERGE_PARTS' AND last_exception LIKE '%No active replica has part%'" | while read i; do zk-cli.py --host ... -n $i rm; done
  4. SYSTEM RESTART REPLICAS

Do all these steps on all replicas.
Repeat until the issue disappears (a quick check is sketched below).
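
To confirm the cleanup worked, a quick check using the same filter as step 1 should return 0 on every replica after SYSTEM RESTART REPLICAS:

-- Same filter as step 1; stale MERGE_PARTS entries should no longer be present.
SELECT count()
FROM system.replication_queue
WHERE type = 'MERGE_PARTS'
  AND last_exception LIKE '%No active replica has part%'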

"will try to fetch it instead" -> "No active replica has part"

a) Introduce some counter fetch_try_no_active and give up after 10 (or 100, configurable) tries?
b) Without automation: alter table drop part xxx force / optimize table partition yyyy force
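
For reference, a hedged sketch of what option (b) might look like with documented syntax: ALTER TABLE ... DROP PART is available in newer ClickHouse versions, and OPTIMIZE ... FINAL asks for a fresh merge over the parts that actually exist. Table, part and partition names below are placeholders:

-- Placeholders: db.tbl, part name '202004_100_200_3', partition id '202004'.
-- Drop the part that blocks the range (newer versions support DROP PART),
-- then request a new merge of the partition.
ALTER TABLE db.tbl DROP PART '202004_100_200_3';
OPTIMIZE TABLE db.tbl PARTITION ID '202004' FINAL;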

related #2755
