ClickHouse: max_execution_time does not work when filtering a distributed table

Created on 30 Nov 2016  ·  3 comments  ·  Source: ClickHouse/ClickHouse

In this example, fact_event is a distributed table built over fact_event_shard.
A query against fact_event_shard with a filter is interrupted by the timeout.
A query against fact_event with no filter is interrupted by the timeout.
A query against fact_event with a filter keeps running past the limit.
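
For context, a layout like the one described could be declared roughly as follows. This is an illustrative sketch only: the column list, the old-style MergeTree parameters, and the cluster name my_cluster are assumptions, not taken from the report.

-- Local shard table, present on every node of the cluster.
-- (Columns and the old-style MergeTree parameters are assumed.)
CREATE TABLE fact_event_shard
(
    event_date Date,
    user_key   UInt64,
    hash       Int64
) ENGINE = MergeTree(event_date, (user_key, hash), 8192);

-- Distributed table that fans SELECTs out to fact_event_shard on every
-- shard of 'my_cluster' and merges the results on the initiator.
CREATE TABLE fact_event AS fact_event_shard
ENGINE = Distributed(my_cluster, default, fact_event_shard, rand());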

:) select * from system.settings where name='max_execution_time'

┌─name───────────────┬─value─┬─changed─┐
│ max_execution_time │ 10    │       1 │
└────────────────────┴───────┴─────────┘

:) SELECT uniq(user_key) FROM fact_event WHERE hash > 0

→ Progress: 102.27 billion rows, 1.64 TB (1.41 billion rows/s., 22.48 GB/s.) 40%
Cancelling query.
Ok.
Query was cancelled.

0 rows in set. Elapsed: 73.504 sec. Processed 102.27 billion rows, 1.64 TB (1.39 billion rows/s., 22.26 GB/s.)

:) SELECT uniq(user_key) FROM fact_event

→ Progress: 161.71 billion rows, 1.29 TB (11.87 billion rows/s., 94.99 GB/s.) 64%
Received exception from server:
Code: 159. DB::Exception: Received from localhost:9000, ::1. DB::Exception: Timeout exceeded: elapsed 13.708067533 seconds, maximum: 10.

0 rows in set. Elapsed: 13.723 sec. Processed 161.71 billion rows, 1.29 TB (11.78 billion rows/s., 94.27 GB/s.)

:) SELECT uniq(user_key) FROM fact_event_shard WHERE hash > 0

→ Progress: 5.06 billion rows, 81.00 GB (510.53 million rows/s., 8.17 GB/s.) 26%
Received exception from server:
Code: 159. DB::Exception: Received from localhost:9000, ::1. DB::Exception: Timeout exceeded: elapsed 10.000049294 seconds, maximum: 10.

0 rows in set. Elapsed: 10.036 sec. Processed 5.06 billion rows, 81.00 GB (504.47 million rows/s., 8.07 GB/s.)

All 3 comments

This bug has regressed to the point where max_execution_time does not work for any query against a distributed table:

event_rep is a replicated table
event_dist is a distributed table over event_rep

select * from system.settings where name='max_execution_time'
┌─name───────────────┬─value─┬─changed─┐
│ max_execution_time │ 5     │       1 │
└────────────────────┴───────┴─────────┘

SELECT uniq(UID)
FROM event_rep

→ Progress: 79.22 million rows, 18.99 GB (15.99 million rows/s., 3.83 GB/s.) █▋ 1%
Received exception from server:
Code: 159. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: Timeout exceeded: elapsed 5.003876393 seconds, maximum: 5.

SELECT uniq(UID)
FROM event_dist 
↑ Progress: 538.66 million rows, 127.35 GB (28.43 million rows/s., 6.72 GB/s.) ██████▍ 5%
Cancelling query.
Ok.
Query was cancelled.

0 rows in set. Elapsed: 19.164 sec. Processed 538.66 million rows, 127.35 GB (28.11 million rows/s., 6.65 GB/s.)
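
Until this is fixed, one manual way to spot queries that have blown past the limit is to look at system.processes. The query below is a diagnostic sketch, not part of the original report; the 5-second threshold mirrors the setting above.

-- Run on the initiator and on each shard: system.processes is per-server.
SELECT query_id, elapsed, read_rows, query
FROM system.processes
WHERE elapsed > 5
ORDER BY elapsed DESC;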

How to reproduce:

SET max_execution_time = 3;
SELECT * FROM remote('127.0.0.{2,3}', system.numbers) WHERE number < 10;
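
The same limit can also be attached to the query itself with a SETTINGS clause; this is just an equivalent form of the reproduction above (assuming a server version that supports per-query SETTINGS). Once the fix is in, it should be interrupted with a Code: 159 timeout instead of scanning system.numbers indefinitely.

SELECT *
FROM remote('127.0.0.{2,3}', system.numbers)
WHERE number < 10
SETTINGS max_execution_time = 3;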

Fixed in master.
