Origin: Device-mapper storage config uses only half of dedicated space

Created on 11 Aug 2017 · 5 comments · Source: openshift/origin

Nodes run out of space in the thin pool even though it still reports plenty of unused space. This may be a Docker issue rather than an OpenShift one, but I am not sure and am asking for advice.
Maybe you can point me to a URL describing how to set up automatic cleanup of a node's thin pool space.
Thank you very much for any help!

Version

oc v1.5.1

Steps To Reproduce
  1. Configure Docker storage setup with device mapper with default config
  2. Start pulling images
Current Result

Pulling a 1 GB image shows that my 12 GB docker-pool gains ~16% more used space

Expected Result

Pulling a 1 GB image shows that my 12 GB docker-pool gains ~8% more used space

Additional Information

initial setup with /usr/lib/docker-storage-setup/docker-storage-setup:

STORAGE_DRIVER=devicemapper
VG=docker
DATA_SIZE=50%FREE
pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda2  cl     lvm2 a--  14.00g    0
  /dev/sda3  docker lvm2 a--  25.00g    0
lvs
  LV          VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root        cl     -wi-ao---- 12.00g
  swap        cl     -wi-a-----  2.00g
  docker-pool docker twi-aot--- 12.44g             27.35  3.92
lsblk
NAME                                                                                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                            8:0    0   40G  0 disk
├─sda1                                                                                         8:1    0    1G  0 part /boot
├─sda2                                                                                         8:2    0   14G  0 part
│ ├─cl-root                                                                                  253:0    0   12G  0 lvm  /
│ └─cl-swap                                                                                  253:1    0    2G  0 lvm
└─sda3                                                                                         8:3    0   25G  0 part
  ├─docker-docker--pool_tmeta                                                                253:2    0   28M  0 lvm
  │ └─docker-docker--pool                                                                    253:4    0 12.5G  0 lvm
  │   ├─docker-253:0-131993-6a76e53e26a79f8d754cc8c22791d43c214d3b34d211551a722b8b473dbe595f 253:8    0   10G  0 dm
  │   ├─docker-253:0-131993-577b9b67ecffb5b4b198fa09fb5924920b631ebd48da2a4439fe410b4712be2e 253:9    0   10G  0 dm
  │   ├─docker-253:0-131993-80de6fbc9609f11d2a0849561e5d93ce2ced074efd52331695dbd9b55fd8582c 253:10   0   10G  0 dm
  │   ├─docker-253:0-131993-2b4f03ce3c40ce4e84034e5958381c57dad3f36266daeff094071716990a86ee 253:11   0   10G  0 dm
  │   ├─docker-253:0-131993-702f502754aa740835fd380d5a49db1bcf0d54a556e5b1681a0c86696e4df675 253:12   0   10G  0 dm
  │   └─docker-253:0-131993-f9abfac9ea9772ea1c55422f3ee2c8022336a5689358688ab6fd67a71e360f0d 253:13   0   10G  0 dm
  ├─docker-docker--pool_tdata                                                                253:3    0 12.5G  0 lvm
  │ └─docker-docker--pool                                                                    253:4    0 12.5G  0 lvm
  │   ├─docker-253:0-131993-6a76e53e26a79f8d754cc8c22791d43c214d3b34d211551a722b8b473dbe595f 253:8    0   10G  0 dm
  │   ├─docker-253:0-131993-577b9b67ecffb5b4b198fa09fb5924920b631ebd48da2a4439fe410b4712be2e 253:9    0   10G  0 dm
  │   ├─docker-253:0-131993-80de6fbc9609f11d2a0849561e5d93ce2ced074efd52331695dbd9b55fd8582c 253:10   0   10G  0 dm
  │   ├─docker-253:0-131993-2b4f03ce3c40ce4e84034e5958381c57dad3f36266daeff094071716990a86ee 253:11   0   10G  0 dm
  │   ├─docker-253:0-131993-702f502754aa740835fd380d5a49db1bcf0d54a556e5b1681a0c86696e4df675 253:12   0   10G  0 dm
  │   └─docker-253:0-131993-f9abfac9ea9772ea1c55422f3ee2c8022336a5689358688ab6fd67a71e360f0d 253:13   0   10G  0 dm
  └─docker-gluster                                                                           253:5    0 12.5G  0 lvm
sr0                                                                                           11:0    1 1024M  0 rom
docker info

Containers: 6
 Running: 6
 Paused: 0
 Stopped: 0
Images: 9
Server Version: 1.12.6
Storage Driver: devicemapper
 Pool Name: docker-docker--pool
 Pool Blocksize: 524.3 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 3.938 GB
 Data Space Total: 13.36 GB
 Data Space Available: 9.42 GB
 Metadata Space Used: 1.249 MB
 Metadata Space Total: 29.36 MB
 Metadata Space Available: 28.11 MB
 Thin Pool Minimum Free Space: 1.336 GB
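(Editor's note on units, worth keeping in mind when comparing the outputs above: lvs prints binary GiB while docker info prints decimal GB, so the 12.44g LSize from lvs and the 13.36 GB "Data Space Total" describe the same pool. A minimal conversion sketch; the helper name `gib_to_gb` is made up for illustration:)

```shell
# Convert lvs' binary-GiB figures to the decimal GB that `docker info`
# reports, so the two tools can be compared directly.
# 1 GiB = 1073741824 bytes; 1 GB = 1000000000 bytes.
gib_to_gb() {
    awk -v gib="$1" 'BEGIN { printf "%.2f\n", gib * 1073741824 / 1000000000 }'
}

gib_to_gb 12.44   # the LSize that lvs reports for docker-pool
```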
Labels: component/containers, kind/bug, priority/P2

All 5 comments

@rhvgoyal suggestions?

I am not sure what the problem is. The initial description seems to suggest that there is free space in the volume group but the thin pool does not grow.

Later, the problem description becomes that pulling 1 GB of images bumps data usage by 16% of the pool size when it should have been an 8% bump.

Let's first make the problem statement more concrete. So what is the problem, actually?

Fair enough, maybe I was not clear about the task. As I see it, there are 4 problems (3 of them concern Docker's tools; maybe you can give me a hand with them):

1) The device-mapper documentation (https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/) is a little bit confusing, so please explain. When I make these settings in /usr/lib/docker-storage-setup/docker-storage-setup

VG=docker
DATA_SIZE=50%FREE
GROWPART=false
AUTO_EXTEND_POOL=yes

I expect to get a 12 GB LV from my 24 GB PV with a 24 GB VG on it. Is this LV my thin pool? What are the root size and data size for it (maybe you can show me a link explaining this)?

2) lvs shows me that its size is 12 GB (is that the size of my thin pool?). The problem here is that I don't have any tool that shows me what is using how much space in my pool.

3) I don't really know how much space is actually used:

  • lvs shows that I am using 30% of the 12.5 GB LV;
  • docker info shows that I am using 5 GB of the 12.5 GB LV (which is 40%);
  • lsblk does not give any useful info about space usage.

4) Finally, the most important problem, which follows from the previous 3: when I pull a completely different image (not built on any image already pulled to the node) that weighs 1 GB (8% of my 12.5 GB LV), lvs shows not 8% but ~16% more space used.

So to summarize this question: I can't see what is using the space, so I can't predict the needed VG/LV size, and I can't predict how usage will grow as images are pulled.
Any advice or help is very much appreciated!
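(One possible angle on problem 2, sketched here as an assumption rather than official tooling: each container's thin device shows up in `dmsetup status`, where a thin target's status line has the form `<name>: <start> <length> thin <mapped-sectors> <highest-mapped-sector>`. The line below is stubbed sample output so the parsing can be shown standalone; on a node you would pipe `dmsetup status` in directly.)

```shell
# Sketch: estimate per-device usage in the thin pool from dmsetup status.
# Sectors are 512 bytes each; field 5 is the mapped-sector count.
parse_thin_usage() {
    awk '$4 == "thin" {
        printf "%s %.0f MiB\n", $1, $5 * 512 / 1048576
    }'
}

# Stubbed sample line standing in for real `dmsetup status` output:
echo 'docker-253:0-131993-6a76e53e: 0 20971520 thin 1048576 1048575' | parse_thin_usage
```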

Note: Please edit /etc/sysconfig/docker-storage-setup, not /usr/lib/docker-storage-setup/docker-storage-setup

  1. Yes, 12 GB is the size of your thin pool. A thin pool is built on top of a data LV and a metadata LV, and DATA_SIZE effectively controls how big your data LV should be, which in turn translates into your thin pool size. So if you have a volume group of size 24 GB and you specify 50%FREE, the thin pool should be around 12 GB and the rest of the space stays free in the volume group.

  2. lvs is the right tool to figure out the size of your thin pool.

  3. I think lvs gives the right information about how much space is used, and docker info should be close too. If they are not, maybe there is a bug somewhere. Make sure you are running the latest version of the software, and if the problem is still there, we can look into it.

  4. Does this happen with the overlay2 graph driver as well? I suspect that 1 GB is the compressed size of the image and that after extraction it grows to 2 GB. How did you determine the size of the image?
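(The compressed-vs-extracted gap suggested in point 4 can be demonstrated in miniature: registries serve compressed layers, and Docker extracts them into the thin pool, so a "1 GB" download can map considerably more pool space. A self-contained sketch using highly compressible data:)

```shell
# Miniature demo of the gap between a layer's compressed (download) size
# and its extracted (on-pool) size.
head -c 1048576 /dev/zero > layer.raw        # 1 MiB of very compressible data
gzip -c layer.raw > layer.raw.gz             # what a registry would serve
raw=$(wc -c < layer.raw)
gz=$(wc -c < layer.raw.gz)
echo "extracted=${raw} bytes, compressed=${gz} bytes"
rm -f layer.raw layer.raw.gz
```

Real image layers compress far less than zeros do, but the direction of the effect is the same: the pool pays for the extracted size, not the downloaded one.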

@rhvgoyal thank you for the detailed answer. As I found out, lvs really is closer to the truth than all the other tools. I am not sure how to check this with the overlay2 graph driver. Playing with LV sizes showed that the tools are not very helpful with a thin pool smaller than 15 GB;
at ~20 GB and above, lvs becomes more accurate.

The only question left: how would you recommend monitoring a node's Docker thin pool usage? Is there a CLI command or API request? Parsing lvs output is OK, but maybe there are OpenShift-integrated capabilities for this?
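(A sketch of what parsing lvs for monitoring could look like; the function name and threshold are assumptions, not OpenShift-provided tooling. On a node the input would come from `lvs --noheadings -o data_percent,metadata_percent docker/docker-pool`; here it is stubbed so the threshold logic can be shown standalone.)

```shell
# Alert when the thin pool's data usage crosses a threshold.
# $1: lvs output line "<data_percent> <metadata_percent>", $2: threshold %.
check_pool() {
    data_pct=$(echo "$1" | awk '{print $1}')
    if awk -v p="$data_pct" -v t="$2" 'BEGIN { exit !(p >= t) }'; then
        echo "ALERT: thin pool ${data_pct}% full"
    else
        echo "OK: thin pool ${data_pct}% full"
    fi
}

# Stubbed lvs output matching the Data%/Meta% figures from this issue:
check_pool '  27.35  3.92' 80
```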
