Etcd: Defrag without compaction increases DB size

Created on 21 Mar 2017 · 9 Comments · Source: etcd-io/etcd

Hi guys!

We have found that running defrag on a non-compacted cluster doubled our DB size.

ETCDCTL_API=3 etcdctl -w=table endpoint status
+----------------+------------------+---------+---------+-----------+-----------+------------+
|    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------+------------------+---------+---------+-----------+-----------+------------+
| 127.0.0.1:2379 | 345c0e51d9cde9e3 | 3.0.15  | 785 MB  | false     |        84 |  295342344 |
+----------------+------------------+---------+---------+-----------+-----------+------------+
ETCDCTL_API=3 etcdctl defrag
Failed to defragment etcd member[127.0.0.1:2379] (context deadline exceeded)
... after defrag finished ...
ETCDCTL_API=3 etcdctl -w=table endpoint status
+----------------+------------------+---------+---------+-----------+-----------+------------+
|    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------+------------------+---------+---------+-----------+-----------+------------+
| 127.0.0.1:2379 | 345c0e51d9cde9e3 | 3.0.15  | 1.5 GB  | false     |        84 |  295351367 |
+----------------+------------------+---------+---------+-----------+-----------+------------+

Is this expected behaviour?

Some logs:

2017-03-20T15:34:51.890866+00:00 <host> etcd[28823]: starting to defragment the storage backend...
2017-03-20T15:34:59.072544+00:00 <host> etcd[28823]: got unexpected response error (etcdserver: request timed out)
2017-03-20T15:35:00.652672+00:00 <host> etcd[28823]: got unexpected response error (etcdserver: request timed out) [merged 11 repeated lines in 1.58s]
2017-03-20T15:35:02.649353+00:00 <host> etcd[28823]: got unexpected response error (etcdserver: request timed out) [merged 30 repeated lines in 2s]
2017-03-20T15:35:03.649335+00:00 <host> etcd[28823]: got unexpected response error (etcdserver: request timed out) [merged 9 repeated lines in 1s]
2017-03-20T15:35:04.649353+00:00 <host> etcd[28823]: got unexpected response error (etcdserver: request timed out) [merged 15 repeated lines in 1s]
2017-03-20T15:35:06.649316+00:00 <host> etcd[28823]: got unexpected response error (etcdserver: request timed out) [merged 8 repeated lines in 2s]
2017-03-20T15:35:07.649386+00:00 <host> etcd[28823]: got unexpected response error (etcdserver: request timed out) [merged 9 repeated lines in 1s]
2017-03-20T15:35:08.649326+00:00 <host> etcd[28823]: got unexpected response error (etcdserver: request timed out) [merged 4 repeated lines in 1s]
2017-03-20T15:35:09.649349+00:00 <host> etcd[28823]: got unexpected response error (etcdserver: request timed out) [merged 11 repeated lines in 1s]
2017-03-20T15:35:11.650874+00:00 <host> etcd[28823]: got unexpected response error (etcdserver: request timed out) [merged 26 repeated lines in 2s]
2017-03-20T15:35:13.649332+00:00 <host> etcd[28823]: got unexpected response error (etcdserver: request timed out) [merged 15 repeated lines in 2s]
2017-03-20T15:35:14.492284+00:00 <host> etcd[28823]: finished defragmenting the storage backend
2017-03-20T15:35:14.492698+00:00 <host> etcd[28823]: apply entries took too long [22.567389203s for 1 entries]
2017-03-20T15:35:14.492979+00:00 <host> etcd[28823]: avoid queries with large range/delete range!
2017-03-20T15:35:14.649339+00:00 <host> etcd[28823]: got unexpected response error (etcdserver: request timed out) [merged 6 repeated lines in 1s]
2017-03-20T15:36:22.051540+00:00 <host> etcd[28823]: start to snapshot (applied: 295352265, lastsnap: 295342263)
2017-03-20T15:36:22.275259+00:00 <host> etcd[28823]: saved snapshot at index 295352265
2017-03-20T15:36:22.275561+00:00 <host> etcd[28823]: compacted raft log at 295347265
2017-03-20T15:36:38.952759+00:00 <host> etcd[28823]: purged file /data/etcd/member/snap/0000000000000054-000000001199f473.snap successfully

etcd version 3.0.15
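For anyone hitting the same thing: defragmentation only returns space freed by a prior compaction, so compacting first is what actually shrinks the file. A hedged sketch, assuming a reachable endpoint, `jq` installed, and that the JSON field path matches your etcdctl version:

```shell
# Sketch: compact away old revisions before defragmenting.
# (Field path .[0].Status.header.revision may differ between etcdctl versions.)
rev=$(ETCDCTL_API=3 etcdctl endpoint status -w json | jq '.[0].Status.header.revision')

# Discard all revisions older than the current one.
ETCDCTL_API=3 etcdctl compaction "$rev"

# Only now does defrag shrink the file; raise --command-timeout for large DBs.
ETCDCTL_API=3 etcdctl --command-timeout=60s defrag
```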

Label: bug

All 9 comments

Failed to defragment etcd member[127.0.0.1:2379] (context deadline exceeded)

The defrag request didn't succeed?

Did you run this command while the etcd server was logging `got unexpected response error`?

The defrag request didn't succeed?

etcdctl is frozen for 5 seconds and then fails with a timeout.

time ETCDCTL_API=3 etcdctl defrag
Failed to defragment etcd member[127.0.0.1:2379] (context deadline exceeded)

real    0m5.092s
user    0m0.019s
sys 0m0.022s

But the logs say that the defrag did finish after a while :)

2017-03-20T15:34:51.890866+00:00 <host> etcd[28823]: starting to defragment the storage backend...
2017-03-20T15:35:14.492284+00:00 <host> etcd[28823]: finished defragmenting the storage backend

~ 24 seconds for defrag.
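The ~5 s client-side failure matches etcdctl's default `--command-timeout` of 5 s; the server keeps defragmenting after the client gives up. A hedged sketch of raising it:

```shell
# etcdctl aborts client-side after --command-timeout (default 5s), which
# matches the 5.092s failure above; the server-side defrag continues anyway.
# Raising the timeout lets the client wait out the ~24s operation:
ETCDCTL_API=3 etcdctl --command-timeout=60s defrag
```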

Did you run this command while the etcd server was logging `got unexpected response error`?

Yep, I ran this command while the etcd server was logging it.

@mwf We did not set fillPercentage explicitly in the key defrag code, which might lead to db growth. I am going to fix it today.

@mwf But it seems that your disk is very slow. Writing 1.5 GB in 24 seconds?

@xiang90 could you please also have a look at the auto-compaction param from the YAML config?

We set auto-compaction-retention: 24, but we don't observe compaction messages in the logs.
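For reference, a minimal sketch of the setting in question (in etcd 3.0.x the retention unit is hours; flag equivalent shown in the comment):

```yaml
# etcd YAML config: keep ~24 hours of history, auto-compacting older revisions.
# Flag equivalent: --auto-compaction-retention=24
auto-compaction-retention: 24
```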

I've tried auto-compaction in the etcd Docker image (the same 3.0.15 version) with the command-line parameter, and it worked OK. There were messages:

2017-03-16 08:31:44.457840 N | etcdserver: Starting auto-compaction at revision 1689
2017-03-16 08:31:44.489054 I | mvcc: store.index: compact 1689
2017-03-16 08:31:44.490835 W | etcdserver: apply entries took too long [23.181202ms for 1 entries]
2017-03-16 08:31:44.490949 W | etcdserver: avoid queries with large range/delete range!
2017-03-16 08:31:44.492409 I | mvcc: finished scheduled compaction at 1689 (took 1.108457ms)
2017-03-16 08:31:44.492652 N | etcdserver: Finished auto-compaction at revision 1689

And no such messages on the production server :(
We only see them when compacting manually.

Have you ever seen this before?

@mwf Can you create a separate issue for the auto compaction issue?

@mwf Can you create a separate issue for the auto compaction issue?

Sure.
Done in #7555

@mwf But it seems that your disk is very slow. Writing 1.5 GB in 24 seconds?

Maybe :) It's a staging server; it could share a disk with other processes. I don't know the details.

@xiang90 will it appear in the 3.1.4 release?

@mwf Yes. v3.1.4 will be released this week.

