Gocryptfs: "No space left" errors on Btrfs

Created on 17 Apr 2019 · 22 Comments · Source: rfjakob/gocryptfs

Hi,

I looked everywhere to find out whether there is a file size limit for gocryptfs containers, but didn't find any such detail. Still, I can't move a 4GB file into my container: the copy stops at 1.1GB and throws an error saying there is not enough space. The drive has 80GB of free space left, though.

Filesystem: BTRFS
Gocryptfs version: 1.7

Most helpful comment

I have written a reproducer for one half of the problem ( https://github.com/rfjakob/fallocate_write ).

I have posted to the linux-btrfs mailing list: fallocate does not prevent ENOSPC on write

All 22 comments

Hi! There should be no limit on either individual files or the whole gocryptfs filesystem. I wonder if we have a problem with Btrfs. Could you run journalctl -f and check whether messages appear when this happens?

Here are the records I saw appearing in the log:

apr 18 17:54:42 aardbol gocryptfs[2907]: OpenDir ".": invalid entry ".owncloudsync.log": illegal base64 data at input byte 0
apr 18 17:54:42 aardbol kdeinit5[2957]: Seeking in video failed
apr 18 17:54:43 aardbol kdeinit5[2957]: Seeking in video failed
apr 18 17:54:50 aardbol plasmashell[1435]: org.kde.plasmaquick: Applet "Meldingen" loaded after 0 msec
apr 18 17:54:50 aardbol plasmashell[1435]: org.kde.plasmaquick: Increasing score for "Meldingen" to 79
apr 18 17:54:55 aardbol gocryptfs[2907]: ino14925 fh14: doWrite: WriteAt off=1128430098 len=33024 failed: write 7QnKvV-ekSdDbaoxQHAdofZQtcdBVKJub7K6UzluVQ4: no space left on device
apr 18 17:54:56 aardbol gocryptfs[2907]: OpenDir ".": invalid entry "._sync_5e1e0c89252d.db": illegal base64 data at input byte 0
apr 18 17:54:56 aardbol gocryptfs[2907]: OpenDir ".": invalid entry ".owncloudsync.log": illegal base64 data at input byte 0
apr 18 17:55:01 aardbol gocryptfs[2907]: OpenDir ".": invalid entry "._sync_5e1e0c89252d.db": illegal base64 data at input byte 0
apr 18 17:55:01 aardbol gocryptfs[2907]: OpenDir ".": invalid entry ".owncloudsync.log": illegal base64 data at input byte 0

Thanks, this is interesting:

apr 18 17:54:55 aardbol gocryptfs[2907]: ino14925 fh14: doWrite: WriteAt off=1128430098 len=33024 failed: write 7QnKvV-ekSdDbaoxQHAdofZQtcdBVKJub7K6UzluVQ4: no space left on device

This means that btrfs told gocryptfs that there is no space left.

Copying this file outside gocryptfs works?

Yes, and I tried different files too. I copied them to the same drive, into the gocryptfs container folder, and everything works; it only fails inside the gocryptfs mount point.

Just FYI, the fstab line:

UUID=06f4f0f9-eb1a-41e2-9ec7-33dc637cf32b /mnt/256GBBTRFS btrfs defaults,compress=zstd,space_cache,ssd,noatime,nofail 0 2

The space left on the drive:
/dev/sdb1 239G 156G 83G 66% /mnt/256GBBTRFS

Space left on the gocryptfs mount point:
gocryptfs@/mnt/256GBBTRFS/zo 239G 156G 83G 65% /home/ik/.SiriKali/zo

I see you are using a compression option with btrfs.

Are the plaintext files highly compressible?

The ciphertext files definitely are not, and that could explain why you get a
"no space left" error when copying them to the gocryptfs mount, but not outside
that mount.

I can't give a definite answer to that question because I don't know. What I can show you are the estimated compression results for the zo folder:

Processed 318 files, 250851 regular extents (250883 refs), 33 inline.
Type       Perc     Disk Usage   Uncompressed Referenced  
TOTAL       99%       30G          30G          30G       
none       100%       30G          30G          30G       
zstd        36%      148K         404K         128K     

It's not much. Even for the whole drive it has very limited effect, because the compression setting I use is a soft setting that doesn't force compression, and most of the data is already compressed (videos, pictures).

I doubt it's that option, though (although I'm no expert), because the btrfs compression heuristic just checks a chunk of data at the beginning of the file and then decides whether compression is worthwhile. As you can see in the stats above, compression is mostly skipped.
Also, the out-of-space error always happens after 1.1GB of transferred data.

I just tested with multiple files that are each a few hundred MB in size but together total more than 1.1GB. That works. So the issue is limited to single files larger than 1.1GB.

Can you try mounting gocryptfs with -noprealloc?

Also, what is your kernel version (uname -a)?

That solves it, yes, although it copies part of the data, freezes for a short time, and then copies the rest, instead of being one fluid process.

Linux aardbol 5.0.7-arch1-1-ARCH #1 SMP PREEMPT Mon Apr 8 10:37:08 UTC 2019 x86_64 GNU/Linux

Good. It looks like Btrfs still has problems with preallocation. Fun fact: the reason -noprealloc exists in the first place was problems on Btrfs:
https://github.com/rfjakob/gocryptfs/commit/0f8d3318a321bf19f92e0872d741266cd0431463

Does that mean it's a BTRFS bug? If so, I should report it then.

Thanks for the help!

At the moment it looks like a btrfs bug, yes. However, before reporting it: I will try to reproduce it with a small test program outside gocryptfs. I'll get to it next week, hopefully.

It's still possible that gocryptfs is doing something stupid, or could do things differently to avoid the problem.

Good idea. It reminds me of torrent applications, which can also preallocate space before downloading a file (I've used that feature before), and I never had any trouble like that on BTRFS file systems. Perhaps they would be a good place to check how they do it?

I created a 10GB file, formatted it as btrfs, and mounted it like this via fstab:

/var/tmp/btrfs10g.img /var/tmp/btrfs10g.mnt btrfs defaults,compress=zstd,space_cache,ssd,noatime,nofail 0 2

Mounted a gocryptfs there and copied a few 2GB files, but did not see any errors. My kernel:
Linux brikett 5.0.4-200.fc29.x86_64 #1 SMP Mon Mar 25 02:27:33 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Could you try as well with a fresh btrfs filesystem?

I tried again on a different BTRFS drive. I created a gocryptfs container there and copied a 6.5GB file into it. It worked until 5.3GB, and then the same error popped up.

Secondly, I copied the original zo folder to a different BTRFS drive (a third one, unrelated to the first and second), made sure -noprealloc wasn't set, copied the same 6.5GB file to it, and the transfer was successful.

fstab of second and third drives:

UUID=616a00c0-6b76-44ee-8397-8a2a03a6c437 / btrfs defaults,noatime,compress=zstd,ssd,space_cache,subvol=_current/@ 0 1
UUID=46af6e6b-02cd-4225-b404-4fda4587cd8f /mnt/256GBEVO850 btrfs defaults,compress=zstd,ssd,space_cache,noatime 0 2

Linux aardbol 5.0.8-arch1-1-ARCH #1 SMP PREEMPT Wed Apr 17 14:56:15 UTC 2019 x86_64 GNU/Linux

This is my /tmp (not sure if it's related to this):
/dev/nvme0n1p5 11G 748M 9,6G 8% /tmp

I could now reproduce this by writing more data. I got the error when btrfs was 79% full:

gocryptfs.mnt$ pv < /dev/zero > zero
5.45GiB 0:01:38 [ 106MiB/s] [                          <=>                ]
pv: write failed: No space left on device

$ df -Th /var/tmp/btrfs10g.mnt
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/loop0     btrfs   10G  6.4G  1.8G  79% /var/tmp/btrfs10g.mnt

Directly on btrfs I managed to fill it to 100%. Note that I used /dev/urandom here, because the zeros from /dev/zero would compress to nothing.

btrfs10g.mnt$ pv < /dev/urandom > blob
7.45GiB 0:01:59 [2.44MiB/s] [                <=>          ]
pv: write failed: No space left on device

$ df -Th /var/tmp/btrfs10g.mnt
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/loop0     btrfs   10G  8.2G  4.0K 100% /var/tmp/btrfs10g.mnt

I have written a reproducer for one half of the problem ( https://github.com/rfjakob/fallocate_write ).

I have posted to the linux-btrfs mailing list: fallocate does not prevent ENOSPC on write

So the post to linux-btrfs sparked an interesting discussion. My conclusion is that fallocate has many problems on btrfs, and we should automatically set -noprealloc when we detect btrfs.

I read the discussion too; it's interesting, even though I don't understand much of the technical back-and-forth at the end.

So the permanent solution is that mount option? Does it have any negative impact? Or is the "negative" impact limited to not being able to check in advance whether there's enough free space left before copying?

is the "negative" impact limited to not being able to check if there's enough free space left in advance before copying?

Yes, exactly. And that check does not work on btrfs anyway, as we can still get out-of-space errors on write.

Alright then. Thanks for all the effort and help!

Pushed commit https://github.com/rfjakob/gocryptfs/commit/13055278f56b941be0ea1ff4eb4840d88fba7e37 ; gocryptfs now automatically sets -noprealloc when Btrfs is detected.
