DietPi-Config | Add ZRAM-Swap Configure

Created on 16 Oct 2015 · 42 comments · Source: MichaIng/DietPi

DietPi currently only supports a swapfile; please add support for zram swap.

see https://www.raspberrypi.org/forums/viewtopic.php?t=19255&p=189838

Labels: Feature Request, Solution available


All 42 comments

Thanks Septs, will take a look at your request when I can.

?

@septs
Not a priority at the moment.
As DietPi optimizes most installations based on a split of the memory you have, zram would only benefit systems with <=256 MB RAM. The benefit could also turn into a negative due to the CPU time required for compression/decompression, especially on a single-core CPU.

Request rejected.
DietPi will not benefit from Zram.

DietPi would benefit greatly from zram. Swapping to disk is much slower than RAM compression.

The sad thing is it would be such a no-effort inclusion.

Since the new RPi releases (more cores + RAM), can we reconsider zram?

@thierryzoller
Indeed low priority, since the benefit is limited to special use cases: constant high RAM usage, but only rare cases where a large amount of data would need to be swapped.

See here where I discussed pros/cons a bit: https://github.com/MichaIng/DietPi/issues/2738
The big problem is indeed that you sacrifice the swap size from RAM right from the beginning. So your system needs to swap earlier, in cases where it would survive without swapping at all if there were no zram. And since the zram size cannot be large when RAM is scarce anyway, swapping to disk is still required during RAM usage peaks, so the swapped data ends up fragmented across multiple zram devices and the disk. So IMO a regular swapfile on an external drive, e.g. an SSD for speed and to avoid regular spinning, is the best solution. And of course: do not set up your system in a way that >50% of RAM is used on average. Swapping should be a worst-case event only, during high peaks.
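
For readers who want to follow that route, a conventional swapfile on an external drive can be set up roughly like this (a sketch only; the mount point, file path and size are placeholders):

# Sketch: 1 GiB swapfile on an external SSD mounted at /mnt/ssd (placeholder path)
fallocate -l 1G /mnt/ssd/swapfile   # or: dd if=/dev/zero of=/mnt/ssd/swapfile bs=1M count=1024
chmod 600 /mnt/ssd/swapfile         # swap files must not be readable by other users
mkswap /mnt/ssd/swapfile
swapon /mnt/ssd/swapfile
# Make it permanent:
echo '/mnt/ssd/swapfile none swap sw 0 0' >> /etc/fstab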

@mrred128
The effort to do this correctly is larger than you might think:

  • As per the above, this is nothing we can or want to ship enabled by default with DietPi.
  • It would not make much sense to implement a menu entry that only installs zram-tools. Everyone can do that with a single command (see the sketch after this list) and will have 10% of RAM used for one zram device per CPU core, easily adjustable via /etc/default/zram-tools.
  • If there was any benefit to implement, then only with a GUI where you can configure amount, sizes, priority, compression algorithm, optional mount point and such things.
  • And most importantly, it needs proper documentation and reasonable defaults based on current RAM usage. Otherwise we'll have users like "sounds good, let's do it" that enable zram only to increase their CPU usage without any benefit.
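
For reference, the single-command route mentioned in the second point boils down to something like the following. This is a sketch: the PERCENTAGE variable is quoted further below in this thread, while the sed edit and the zramswap service name are my assumptions about the Debian package and should be double-checked.

apt install zram-tools
# Adjust the share of RAM used for the zram swap devices (package default: 10%)
sed -i 's/^#\?PERCENTAGE=.*/PERCENTAGE=25/' /etc/default/zram-tools
systemctl restart zramswap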

However, I will run some tests at some point to compare write speeds and CPU usage, and to check whether the zram space is really pre-allocated in RAM (like swapfiles are on disk) or whether it is only used on demand, which would invalidate some of the counterarguments 😉.

Understood, maybe a customised version with a default set of settings?

In my use case I completely deactivated the disk swap and only have zram running, which gives me a buffer in case the system needs to swap for whatever reason.

@thierryzoller

Understood, maybe a customised version with a default set of settings?

Definitely not a customised version; that would cause a lot of extra work. The APT package is fine and it can be easily configured. Additionally there is zramctl, which allows some extra fine tuning.
It is not so much about the tools/methods, but indeed about which values are good defaults in which cases.

E.g. the 10% of real RAM that is AFAIK the default is IMO not very useful. It also depends on the compression ratio; do you have some values for this? But let's say it is 0.5, then the system can store 10% more in RAM... not enough as a failsafe for high usage peaks, so a swapfile would still be required, and the system starts to swap earlier. I am not sure how the priorities are used, but AFAIK even though the zram swap is filled first, writes to the disk swap are done as well, even if theoretically everything would still fit in RAM.
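
Regarding the priorities: as far as I understand the kernel's behaviour, higher-priority swap areas are filled first and lower-priority ones are only used once they are full (areas of equal priority are used round-robin). A sketch of how that would be set up, with placeholder device/file paths and assuming the zram devices are already formatted with mkswap:

swapon -p 100 /dev/zram0   # zram devices get a high priority...
swapon -p 100 /dev/zram1
swapon -p 10 /var/swap     # ...the disk swapfile a low one, as overflow only
cat /proc/swaps            # lists every swap area together with its priority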

ARMbian uses 50% by default with their own zram implementation. But this means that the system is swapping practically all the time, causing unnecessarily large CPU usage and slowing down RAM reads/writes in general.

Something in the middle sounds good to me, but again this depends on the general RAM usage level, as well as on theoretical max peaks that can occur.

If you find time to test faster than me, or know it already:

  • How large is your zram swap?
  • Is this size allocated in RAM up front, or is it free until something has actually been written to swap?
    E.g. if you increase the zram swap size, does your physical RAM usage rise immediately with it? free -m
  • And what is your compression ratio?
cat /sys/block/zram0/compr_data_size
cat /sys/block/zram0/orig_data_size

Thank you for the explanations and reply - My use cases do not require tons of CPU and the focus is on resilience. Before going the ZRAM route I did a bit of research.

  • ZRAM manual claims 2:1 ratio
  • You can choose from a range of compression algorithms (speed vs compression)
  • Gentoo Wiki [1] :" I found it to vary from 1.5:1 for a 1.5G disk with only 5% space used, to over 3:1 when nearly full. It also is much faster at swapping pages than typical hard disk swap."

Setup:

  • The recommended number of devices for swap is 1 per CPU core for kernels prior to 3.15.

As to the stats, I chose 250 MB per core, resulting in 1 GB of zram (which I believe to be too much).

NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lzo         243.9M  2.3M  490B   28K       4 [SWAP]
/dev/zram1 lzo         243.9M  2.6M   78B   12K       4 [SWAP]
/dev/zram2 lzo         243.9M  2.8M   78B   12K       4 [SWAP]
/dev/zram3 lzo         243.9M  2.4M  4.3K  100K       4 [SWAP]
---------------------------------------------------------------------------------

There is no compr_data_size or orig_data_size in /sys/block/zram0/
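
Side note: on newer kernels those per-attribute files were folded into a combined statistics file, so the ratio can still be read there (column meaning taken from the kernel's zram documentation; the exact set and order may vary between kernel versions):

# Columns: orig_data_size compr_data_size mem_used_total mem_limit mem_used_max same_pages pages_compacted
cat /sys/block/zram0/mm_stat
# Or let zramctl compute it, as used further below in this thread:
zramctl -o NAME,DISKSIZE,DATA,COMPR,TOTAL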

[1] https://wiki.gentoo.org/wiki/Zram

Here is the bash script used to set it up:

#!/bin/bash
cores=$(nproc --all)
modprobe zram num_devices=$cores

# Disable all existing swap devices/files first
swapoff -a

# Total RAM in KiB (free outputs KiB by default)
totalmem=$(free | grep -e "^Mem:" | awk '{print $2}')
# Per-device disksize in bytes (total RAM split evenly across the devices)
mem=$(( ($totalmem / $cores) * 1024 ))

core=0
while [ $core -lt $cores ]; do
  echo $mem > /sys/block/zram$core/disksize
  mkswap /dev/zram$core
  swapon -p 5 /dev/zram$core
  let core=core+1
done

Here is the "free" output:

              total        used        free      shared  buff/cache   available
Mem:         999036      217172      145016       20192      636848      696652
Swap:        999024       11520      987504

Source: https://github.com/StuartIanNaylor/zram-config

Compressor name   | Ratio | Compression | Decompress.
zstd 1.3.4 -1     | 2.877 |  470 MB/s   | 1380 MB/s
zlib 1.2.11 -1    | 2.743 |  110 MB/s   |  400 MB/s
brotli 1.0.2 -0   | 2.701 |  410 MB/s   |  430 MB/s
quicklz 1.5.0 -1  | 2.238 |  550 MB/s   |  710 MB/s
lzo1x 2.09 -1     | 2.108 |  650 MB/s   |  830 MB/s
lz4 1.8.1         | 2.101 |  750 MB/s   | 3700 MB/s
snappy 1.1.4      | 2.091 |  530 MB/s   | 1800 MB/s
lzf 3.6 -1        | 2.077 |  400 MB/s   |  860 MB/s

@Fourdee, requesting to reopen this issue.

@thierryzoller
Many thanks for sharing. So you have 1 GB physical RAM and allow 1 GB of zram for swap as well, so theoretically all RAM could be occupied by the swap. Of course that doesn't make sense 😉.
BUT: this also means the allocation is dynamic, since the 1 GB is not taken from RAM directly. So in theory the RAM can still be used nearly in full conventionally, depending on swappiness.

Could you paste:

free -m
swapoff -a
free -m
swapon -a
free -m

If we could configure it in a way, e.g. with 50% of the physical RAM size as swap (which effectively raises the RAM size by ~50%), so that it does not start to swap before ~80% of RAM is actually used, then this actually doesn't sound bad.
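
A rough sketch of what such a middle-ground setup could look like: one zram device sized at 50% of the physical RAM, plus the swappiness knob. Note that vm.swappiness only tunes the kernel's tendency to swap versus dropping caches; it is not a hard "start swapping at 80% RAM" threshold, so the exact behaviour would need testing.

totalmem_kib=$(awk '/MemTotal/{print $2}' /proc/meminfo)
modprobe zram num_devices=1
echo $(( totalmem_kib / 2 ))K > /sys/block/zram0/disksize   # 50% of RAM as uncompressed capacity
mkswap /dev/zram0
swapon -p 100 /dev/zram0
sysctl vm.swappiness=60    # tendency only; the value is a placeholder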

Okay, will reopen this. Feel free to carry on the discussion and collect info and testing data; however, I will now concentrate on getting v6.26 ready, there are still things to do. I'll have a look here again after that is done.

I just made changes, reducing it to 500 MB split over 4 devices. I ran 7-Zip on a large backup to get it to swap.

free -m
              total        used        free      shared  buff/cache   available
Mem:            975         179         584          13         211         730
Swap:           487          12         475

sudo swapoff -a
free -m
              total        used        free      shared  buff/cache   available
Mem:            975         180         572          25         223         718
Swap:             0           0           0

sudo swapon -a
free -m
              total        used        free      shared  buff/cache   available
Mem:            975         179         572          25         223         718
Swap:             0           0           0

A lot of tests have been done by Stuart Naylor: https://github.com/StuartIanNaylor/zram-config/tree/master/swap-performance

Also using Stuart's tools may make things easier.

@thierryzoller
Looks fine. swapon -a has no effect, AFAIK because you do not create the swap devices via fstab entries but with the tool.
It should be possible to define the zram sizes via udev rules and then have the swap definitions in fstab, so no tool needs to be executed on boot.
Looks like we'll indeed go with our own implementation 😄.
Hmm, the number of zram devices can be set via modprobe.d entries; perhaps the sizes can be defined via kernel module options as well?
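
The modprobe.d part of that idea would be minimal (a sketch; as far as I can tell the per-device sizes cannot be passed as module options, so they would still have to be set via sysfs or udev):

# Ensure the module is loaded at boot and created with one device per CPU core
echo 'zram' > /etc/modules-load.d/zram.conf
echo "options zram num_devices=$(nproc)" > /etc/modprobe.d/zram.conf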

You can obviously do that, or you can look at how Stuart configured a way to have logs compressed in a zram "drive" but rotated and synced to disk (or not). Neat feature.
No need for a "RAM log" either.

I chose a stronger text compression algorithm for my logs; performance doesn't matter there.

@thierryzoller
Yeah, ARMbian also has their ramlog as zram with an ext4 file system. However, this leads to some suboptimal behaviour/overhead: zram cannot recognise when ext4 pages are freed, so the space stays occupied until being overwritten. Also the overhead of a real file system with things like lost+found and such is not something I am keen on.

It does not make much sense actually:

  • If you have a simple tmpfs for /var/log and it fills the RAM, the least used data gets swapped out anyway.
  • As long as there is plenty of free RAM, why pay the overhead of compressing and decompressing all logging by default? Just leave the decision of WHAT and WHEN to swap+compress to the swap system.
  • Also, with hourly log clearing, /var/log should not grow that much anyway, and I plan to switch most logging to the journal by default, where you can more easily search and filter what is going on, and which is in RAM anyway.

So zram file systems do not make sense to me at all when zram swapping works well, since the swap system can decide much more intelligently what to swap to zram and when, based on actual RAM usage and data access.
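
For comparison, the plain-tmpfs approach referred to above is a single fstab line (size and mount options are illustrative), with the zram swap then deciding what to compress and when:

echo 'tmpfs /var/log tmpfs defaults,noatime,nosuid,nodev,size=50M 0 0' >> /etc/fstab
mount /var/log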

@MichaIng: I agree, though in my case I disabled swapping completely, so all logs go to RAM (compressed) and are wiped every hour. The compression allows me to reduce the RAM allocated for logs by a factor of 10-15, thus freeing up more "real" RAM. See my point?

@thierryzoller
Yep, got it. And when changing the compression method to one optimised for text, this can be enhanced even further.

However, this only makes sense if the log files grow very large within the hour, let's say >10M. Otherwise, even if the compression ratio is great, saving <10 MiB is not worth the effort, taking the downsides into account.
And if logging exceeds 10M in one hour, then I would say it is very badly configured. No software should log that verbosely, or let's say you must have very strong reasons for such detailed logs, and then most probably you don't want an hourly clean.

Thank you for the time spent on this, however I cannot resist reacting to something you said, namely:
taking the downsides into account.

I am not so sure about that. Unless I see measurements showing that the downsides outweigh the benefits, I will stick with my experience: the benefits outweigh the downsides in most of the scenarios I use the Pi for.

@thierryzoller
Yeah, as said, my first assumption was that a zram swap would occupy its size immediately. Since this does not seem to be the case, the downsides are not that large after all. But as said, I need to run tests; I will include some benchmarks of tmpfs vs. a zram file system as well, respectively tmpfs with zram swap disabled vs. tmpfs with swap enabled and full RAM, so that it is forced to swap.

Also how a zram swap plays together with a swapfile, although this can surely be controlled via swap priorities.

Here are a few things you need to understand that will save you time.
Quoting [1]:

echo "lz4" > /sys/block/zram0/comp_algorithm
echo "1000M" > /sys/block/zram0/disksize
echo "333M" > /sys/block/zram0/mem_limit
lz4 usually doesn't dip below 300% compression and the effective disksize is always a bit of guesswork, so I err low rather than high.
If the alg is changed to "deflate" prob will not dip below 400% but you could have highly compressed files that get minimal compression so zram hard control is done by actual mem_limit even if it does seem slightly back to front.

So what you see when you run top or free is the disksize (the uncompressed amount of data, i.e. a guesstimate), while the mem figures reflect the "real" size it takes in RAM. This is important and will save you time.

[1] https://www.raspberrypi.org/forums/viewtopic.php?f=66&t=173063&p=1464110&hilit=zram#p1464110

Here is my example of how I split the mem_limit and disksize per core:

#!/bin/bash
cores=$(nproc --all)
modprobe zram num_devices=$cores

# Disable all existing swap devices/files first
swapoff -a

# Total RAM in KiB (free outputs KiB by default)
totalmem=$(free | grep -e "^Mem:" | awk '{print $2}')

# Per-device mem_limit in bytes (hard limit on the compressed, real RAM usage)
mem=$(( ($totalmem / $cores) * 250 ))
# Per-device disksize in bytes: 3x the mem_limit, allowing for ~3:1 lz4 compression
disk_size=$(( ($totalmem / $cores) * 250 * 3 ))

echo "Start"
echo $mem
echo $disk_size

core=0
while [ $core -lt $cores ]; do
  # comp_algorithm must be set before disksize, which initializes the device
  echo lz4 > /sys/block/zram$core/comp_algorithm
  echo $mem > /sys/block/zram$core/mem_limit
  echo $disk_size > /sys/block/zram$core/disksize
  mkswap /dev/zram$core
  swapon -p 5 /dev/zram$core
  let core=core+1
done

# Global setting, only needs to be applied once (was 5 before)
sysctl vm.swappiness=70

@thierryzoller
Thanks for sharing; yep, your method matches the ones from zram-tools and the ARMbian zram swap implementation.

But your assumption about disksize vs mem_limit does NOT match what one can derive from certain guides and from zram-tools. The latter has this setting:

# Specifies the amount of RAM that should be used for zram
# based on a percentage the total amount of available memory
#PERCENTAGE=10

The start script indeed uses this percentage to split across disksize and not mem_limit:

    if [ -n "$PERCENTAGE" ]; then
        totalmemory=$(awk '/MemTotal/{print $2}' /proc/meminfo) # in KiB
        ALLOCATION=$((totalmemory * 1024 * $PERCENTAGE / 100))
    fi

    # Assign memory to zram devices, initialize swap and activate
    # Decrementing $CORE, because cores start counting at 0
    for CORE in $(seq 0 $(($CORES - 1))); do
        echo $(($ALLOCATION / $CORES)) > /sys/block/zram$CORE/disksize
        mkswap /dev/zram$CORE
        swapon -p $PRIORITY /dev/zram$CORE
    done

So since you say it is a guess, I would currently assume disksize is indeed the max compressed size, thus the RAM it can use.
However, if this were true, mem_limit would not be reasonable, as it would just duplicate the meaning of disksize, so perhaps you are right and the zram-tools devs interpret it wrongly.
Or things depend on the kernel (module) version... We need to read some documentation and ideally do some testing to be sure, and if needed file a bug report to the zram-tools devs for implementing their feature wrong, respectively having misleading info about what PERCENTAGE does.
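
A quick way to check the semantics without waiting for real swap pressure would be to write directly to a spare zram block device and compare the reported sizes (a sketch; the device index and sizes are arbitrary, and the device must not be in use as swap):

echo 64M > /sys/block/zram1/disksize              # sets the uncompressed capacity only
dd if=/dev/urandom of=/dev/zram1 bs=1M count=32   # incompressible data
zramctl -o NAME,DISKSIZE,DATA,COMPR,MEM-USED
# With random input COMPR/MEM-USED stay close to the 32M written; with compressible
# input they stay far below it. Setting disksize alone does not change RAM usage.
echo 1 > /sys/block/zram1/reset                   # tear the test device down again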

Explain this: my disksize is 1.6 GB while I only have 1 GB of RAM.
I believe Stuart is right:
https://www.raspberrypi.org/forums/viewtopic.php?f=66&t=173063&p=1464110&hilit=zram#p1464110

@thierryzoller
Yeah, you are right, "disksize" is definitely the uncompressed disk size; the actual RAM usage is not visible there. I would actually not define mem_limit, but rather use a disksize that would not cause problems even when full with no compression. This assures that one is safe and writing to this device has no chance of running into "no space left on device". E.g. I am not sure what would happen to a zram swap if compression for some reason is not as good as usual (data with high randomness), so that the "disksize" still has free space left but the memory limit is reached.

And instead of running a script on boot, I would implement it via /etc/modprobe.d (load the module with a defined number of zram devices) and udev rules to set the disksize and apply swap on device creation.

The more I think about it, the more I like it. Will add to v6.27 milestone.
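
A minimal sketch of what the udev side of that could look like (rule file name, device count, size and priority are placeholders and untested):

cat > /etc/udev/rules.d/99-zram-swap.rules <<'EOF'
KERNEL=="zram[0-3]", SUBSYSTEM=="block", ACTION=="add", ATTR{disksize}="128M", RUN+="/sbin/mkswap /dev/%k", RUN+="/sbin/swapon -p 100 /dev/%k"
EOF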

There seems to be room to dig deeper into the zram logic. Stuart noticed:
"zram hard control is done by actual mem_limit even if it does seem slightly back to front." So at this point I am confused.

I did some digging for you: if you add the -o parameter to zramctl, you can choose further details. Note that I followed your advice and set no mem_limit in this example.

pi@luxsmarthome:/var/www/html $ zramctl --o NAME,DISKSIZE,DATA,COMPR,TOTAL,MEM-LIMIT,MEM-USED,MIGRATED
NAME       DISKSIZE  DATA COMPR TOTAL MEM-LIMIT MEM-USED MIGRATED
/dev/zram0   119.1M    3M  446B    8K        0B       8K       0B
/dev/zram1   119.1M    3M  4.4K  100K        0B     100K       0B
/dev/zram2   119.1M  2.9M 66.1K  168K        0B     168K       0B
/dev/zram3   119.1M  2.9M 55.4K   92K        0B      92K       0B

        NAME  zram device name
    DISKSIZE  limit on the uncompressed amount of data
        DATA  uncompressed size of stored data
       COMPR  compressed size of stored data
   ALGORITHM  the selected compression algorithm
     STREAMS  number of concurrent compress operations
  ZERO-PAGES  empty pages with no allocated memory
       TOTAL  all memory including allocator fragmentation and metadata overhead
   MEM-LIMIT  memory limit used to store compressed data
    MEM-USED  memory zram have been consumed to store compressed data
    MIGRATED  number of objects migrated by compaction
  MOUNTPOINT  where the device is mounted

Impressive compression ratios for my use case (at this moment).

@thierryzoller
Is the legend part of the zramctl output?

DISKSIZE limit on the uncompressed amount of data

Pretty clear then 😃.

Impressive compression ratios for my use case (at this moment).

Indeed, actually too good to be correct, probably because of the overall small size, so that some metadata interferes. I mean:

3M 446B

This would be a compression ratio of >7000 😄.

yes part of zramctl

NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4         119.1M  8.3M 477.2K  976K       4 [SWAP]
/dev/zram1 lz4         119.1M  8.2M 484.3K  984K       4 [SWAP]
/dev/zram2 lz4         119.1M  7.7M 423.7K  976K       4 [SWAP]
/dev/zram3 lz4         119.1M  7.9M 552.2K 1008K       4 [SWAP]

@thierryzoller
Ah, that looks more reasonable. Ratio > 16 is still fantastic.

Stats Update:

NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4         178.7M   15M  1.8M  2.3M       4 [SWAP]
/dev/zram1 lz4         178.7M 15.4M    2M  2.5M       4 [SWAP]
/dev/zram2 lz4         178.7M 15.1M  1.7M  2.2M       4 [SWAP]
/dev/zram3 lz4         178.7M 14.5M  1.7M  2.2M       4 [SWAP]

Here is a script for the RPi: https://github.com/ple91/rpi_zram

This will never happen :P

@thierryzoller
Indeed, it might take a while unless someone else makes a start. From my end there are more urgent and higher-voted tasks/features, which I would work on first.

Nagging you to get this more to the top of the list.

zramctl
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram3 lzo         243.9M 23.4M  9.1M  9.9M       4 [SWAP]
/dev/zram2 lzo         243.9M 23.3M  9.1M   10M       4 [SWAP]
/dev/zram1 lzo         243.9M 23.3M  9.3M 10.2M       4 [SWAP]
/dev/zram0 lzo         243.9M 22.7M  8.8M  9.7M       4 [SWAP]

since https://github.com/MichaIng/DietPi/issues/94#issuecomment-559101447

Offered PR https://github.com/MichaIng/DietPi/pull/3705 to address this issue
