DietPi: USB drive doesn't follow the global idle duration timer, never spins down

Created on 3 Jan 2020 · 16 comments · Source: MichaIng/DietPi

Creating a bug report/issue

Required Information

  • DietPi version |
    G_DIETPI_VERSION_CORE=6
    G_DIETPI_VERSION_SUB=27
    G_DIETPI_VERSION_RC=2
    G_GITBRANCH='master'
    G_GITOWNER='MichaIng'
  • Distro version | stretch
  • Kernel version | 4.19.66-v7+ #1253 SMP
  • SBC device | RPi 3 Model B+ (armv7l)
  • Power supply used | 5V 1A Generic Chinese power adapter
  • SDcard used | Sandisk

Additional Information (if applicable)

  • Software title | dietpi-drive_manager

Steps to reproduce

Connect an NTFS USB drive.
Wait for the global idle duration timer to elapse.

Expected behaviour

Drive spins down

Actual behaviour

Drive doesn't spin down

My USB drive, a SAMSUNG HM500JI, is actually capable of using APM, so a call to hdparm fixes the issue, which suggests that somehow the auto-mount facility in DietPi is simply not calling it on the drive.
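
For reference, this is the kind of manual hdparm call meant above (a hedged example; the 127/241 values mirror the ones that come up later in this thread and are illustrative only):

# Enable APM at a level that still permits spin-down, then set the standby timer.
hdparm -B 127 /dev/sda
# -S 241 means one unit of 30 minutes, i.e. spin down after 30 minutes of idle time.
hdparm -S 241 /dev/sda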

Labels: Debian Buster, External Bug, Solution available

All 16 comments

@tesseract241
Many thanks for your report.

The hdparm values are applied here: https://github.com/MichaIng/DietPi/blob/master/dietpi/dietpi-drive_manager#L990-L991
After reboot, hdparm should apply those for all drives via /etc/hdparm.conf entries.

So drive spin-down in the same session does not work? If so, does it work after reboot? How do you call hdparm to make it work?

@MichaIng I think I got one step further into this.
First off, the answers to your questions:

  • Drive spin down on the same session does not work
  • It does not work after reboot
  • I call hdparm -B and then hdparm -S, just like the script you linked
  • It might be obvious, but that doesn't persist across reboots either

Now, what I've realized by looking at /etc/hdparm.conf is that it has generated an entry for a /dev/sdb drive, while my drive appears under the /dev/sda1 name.
Maybe there's some problem with whatever generates the names in the list.

@tesseract241

Now, what I've realized by looking at /etc/hdparm.conf is that it has generated an entry for a /dev/sdb drive, while my drive appears under the /dev/sda1 name.

Strange, dietpi-drive_manager should go through all detected drives and add entries for all of them. Does the script detect your /dev/sda1 correctly?

@MichaIng If by script you mean dietpi-drive_manager, then yes, that's what I use to mount the drive; it sees it correctly as sda1 and mounts it.
My guess is that dietpi-drive_manager and whatever generates hdparm.conf differ in how they derive those names, and that's where the discrepancy arises.

@tesseract241
Hmm it works fine here:

[  OK  ] DietPi-Drive_Manager | hdparm -B 127 /dev/sda
[  OK  ] DietPi-Drive_Manager | hdparm -S 241 /dev/sda
[  OK  ] DietPi-Drive_Manager | hdparm -B 127 /dev/sdb
[  OK  ] DietPi-Drive_Manager | hdparm -S 241 /dev/sdb
root@VM-Buster:~# cat /etc/hdparm.conf
/dev/sda
{
   apm = 127
   spindown_time = 241
}
/dev/sdb
{
   apm = 127
   spindown_time = 241
}
root@VM-Buster:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0    8G  0 disk
└─sda1   8:1    0    8G  0 part /
sdb      8:16   0    1G  0 disk
└─sdb1   8:17   0 1022M  0 part

In your case it is a regular block device, right?

ls -l /sys/block/sda

This is what is checked to detect a block device as physical instead of e.g. network drive and such. For the latter, applying spindown times is skipped.
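
A minimal sketch (an assumption about the kind of check meant here, not DietPi's actual code) of telling a physical disk apart from a virtual one via sysfs:

# Physical disks expose a backing device node under /sys/block/<dev>/device,
# while e.g. loop or RAM devices do not.
dev='sda'
if [[ -e /sys/block/$dev/device ]]; then
    echo "$dev looks like a physical block device"
else
    echo "$dev has no backing device, spindown settings would be skipped"
fi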

@MichaIng

 lsblk

Gives me this

NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 465,8G  0 disk
└─sda1        8:1    0 465,8G  0 part /mnt/HDD_Esterno
mmcblk0     179:0    0  29,7G  0 disk
├─mmcblk0p1 179:1    0  41,8M  0 part /boot
└─mmcblk0p2 179:2    0  29,7G  0 part /

While

ls -l /sys/block/sda

Gives me this

lrwxrwxrwx 1 root root 0 gen  4 10:26 /sys/block/sda -> ../devices/platform/soc/3f980000.usb/usb1/1-1/1-1.3/1-1.3:1.0/host0/target0:0:0/0:0:0:0/block/sda

Where do you get the log from dietpi-drive_manager? I've looked for it in journalctl but couldn't find it, is there a dedicated dietpi logging utility?

@tesseract241
Okay, I see no reason currently why it's skipped.

Where do you get the log from dietpi-drive_manager?

It is not a daemon but a foreground program, hence all its output goes to the console. Since v6.27 it should be assured that everything is preserved in the scrollback buffer (if enabled), so you can scroll up via the mouse wheel or Shift+PageUp/PageDown.

However there is a debug mode which makes it loop through all drives and print what it found (with all details it reads/uses) and exit:

G_DEBUG=1 dietpi-drive_manager

@MichaIng Good news: after forcing it to set the global spindown timer from dietpi-drive_manager, it actually does it now, and hdparm.conf lists the correct drive, sda.
But it still doesn't apply automatically at boot, it resets to apm OFF.
No idea honestly.

@tesseract241
I checked bug reports and found some interesting ones:

There was a bug about how hdparm.conf was parsed, which has since been fixed. A backport for Debian Buster is available: https://packages.debian.org/buster-backports/hdparm

Luckily your system is ARMv7, hence you can try it:

cd /tmp
wget https://deb.debian.org/debian/pool/main/h/hdparm/hdparm_9.58+ds-4~bpo10+1_armhf.deb
dpkg -i hdparm_9.58+ds-4~bpo10+1_armhf.deb
rm hdparm_9.58+ds-4~bpo10+1_armhf.deb

What one also gathers from these reports is that APM in general seems to be rarely supported any more. If so, both settings we add to hdparm.conf are ignored. There is a new (?) option which forces the spindown time to be applied (hdparm -S called) even if APM is not supported: force_spindown_time
This is the relevant function of the dedicated APM script /usr/lib/pm-utils/power.d/95hdparm-apm:

resume_hdparm_spindown()
{
    for dev in /dev/sd? /dev/hd? ; do
        # Check for force_spindown_time option
        # If defined apply hdparm -S even if APM is not supported
        # See also #758988
        ignore_apm=
        options=$(hdparm_options $dev)
        case $options in
            (*'force_spindown_time'*)
                ignore_apm='true'
                ;;
        esac
        apm_opt=
        if [ -b $dev ]; then
            if hdparm_try_apm $dev || [ "$ignore_apm" = true ] ; then
                for option in $(hdparm_options $dev); do
                    # Convert manually introduced option "force_spindown_time"
                    # back to "-S" understandable by hdparm
                    option=$(echo $option| sed 's/force_spindown_time/-S/')
                    case $option in
                        -S*)
                            apm_opt=$option
                            ;;
                        *)
                            ;;
                    esac
                done
                if [ -n "$apm_opt" ]; then
                    hdparm $apm_opt $dev
                fi
            fi
        fi
    done
}

Sadly this new option is not documented anywhere so far: https://manpages.debian.org/testing/hdparm/index.html


It would be great if you could go through the following steps:

  • Upgrade hdparm to the backports package as above
  • Reboot
  • Check if drive spins down now without manual hdparm -S call
  • If not, do sed -i 's/[[:blank:]]spindown_time/ force_spindown_time/' /etc/hdparm.conf
  • Reboot
  • Check if drive spins down now without manual hdparm -S call (one way to check is sketched below)
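
One way to check the spin-down state without waking the drive (my suggestion, not part of the original steps): hdparm -C queries the current power mode and does not spin the disk up.

hdparm -C /dev/sda
# expected once the idle timer has elapsed:
#  drive state is:  standby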

From what I can see, force_spindown_time is not supported on Debian Stretch, hence we can only switch from Buster upwards. Worse, since RPi has no backports repo, we can only switch for non-RPi Buster+ after installing/upgrading hdparm from backports. Of course RPis can install the package manually, but this is not true for RPi1/Zero (ARMv6), which are not compatible with Debian armhf...

It's annoying that the old spindown option is not simply applied in every case, regardless of APM support. One can set it or leave it; nothing is applied by default. Having such a clearly named option that only has an effect in increasingly rare cases, but could actually have an effect in all cases, doesn't make much sense to me.

@MichaIng Actually good news! It seems to be fixed!
I updated hdparm to the backports package like you said, and it seemed to be keeping the config over reboots. But, as the apm value was not the one I wanted, I used dietpi-drive_manager to change it.
That broke it.
I then tried changing hdparm.conf like you said, but that didn't seem to fix it.
I've then removed hdparm and reinstalled it from the backports package, adding this to the conf manually:

/dev/sda {
        apm = 127
        spindown_time = 241
 }

And now it keeps the config at the correct values.
No force_spindown_time necessary, apparently.
I've gone back and forth repeating these steps to make sure this is exactly what is happening, and I can confirm it.
The only other line in the config that isn't commented out is 'quiet', so I don't know why the one generated by dietpi-drive_manager wouldn't work; they're exactly the same as far as the USB drive is concerned.

@tesseract241
So far so good. Strange that dietpi-drive_manager broke it, since the entry should match 100%. What was the result of it after changing the value there? cat /etc/hdparm.conf

It's a pain that this bug fix, which turns hdparm for the most common use case (auto spindown) from broken to functional, is not merged into the Buster stable repo, which makes it unavailable for Raspbian Buster. I'll check and ask about this if needed, as it seems important enough to me, and it is no upstream update/version bump, which is usually the reason not to merge, to prevent regressions.

What we can do for now in dietpi-drive_manager:

  • Make spindown time selection a drive-specific feature. Add it to the partition menus of rotating drives, but add a note that this applies to the whole drive and all its partitions (obviously 😉).
  • Check whether APM is supported or not: hdparm -B 127 /dev/sdX returns APM_level = not supported if not (a quick check is sketched after this list)
  • On Stretch, if APM is not supported, simply hide the option, since then there is currently no way to enable automated spindown via hdparm, sadly.

    • What could be implemented instead at a later date is calling hdparm -S <choice> for those drives manually on boot (DietPi-Boot script), which should actually work. Or add a custom udev rule for this (also sketched after this list)... Actually, custom udev rules are what we could generally add instead of using hdparm.conf, but that bears the risk of conflicts/race conditions/doubling with custom hdparm.conf entries.

  • On Buster

    • On non-RPi, install/upgrade the hdparm package from buster-backports repo. If APM is supported, add settings as before, if APM is not supported, add force_spindown_time instead and skip apm setting.

    • On RPi, do the same and hope the fix will be merged into Raspbian Stable soon? Or hide the option for now (sounds more reasonable to me)? If I understood the bug correctly, force_spindown_time is not handled correctly on current Buster and when adding multiple settings, white spaces are missing. Hence a single spindown_time entry should work, although only with APM = 127 and below and I don't know the default value. Finally some testing on RPi is required to verify the result of those entries.
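
A rough sketch of the APM check and the udev-rule idea mentioned in the list above (both are assumptions on my side, not tested DietPi code):

# Read the current APM setting; drives without APM support report 'not supported'.
if hdparm -B /dev/sda 2>/dev/null | grep -qi 'not supported'; then
    echo 'APM not supported: skip apm=, rely on (force_)spindown_time or a udev rule'
else
    echo 'APM supported: apm= and spindown_time can both be set'
fi

# Hypothetical custom udev rule (file name and values are examples only),
# e.g. in /etc/udev/rules.d/99-spindown.rules:
#   ACTION=="add", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", RUN+="/sbin/hdparm -S 241 /dev/%k"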

@MichaIng
I've attached both configurations: the one that I manually edited and the one generated by dietpi-drive_manager. The first one works; the second one doesn't.

I thought about it a little more and decided to investigate whether it was actually strictly something to do with the configuration, so I removed the backported hdparm and reinstalled the one from the repository, and... it still works with my manually edited one.
Of course, this is with a drive that does support APM, but I still don't understand what's happening here. Hopefully you can see something that I don't in those conf files.

hdparm_drive_manager.zip
hdparm_manual.zip

@tesseract241
Okay that is strange, I do not see a syntax or settings difference.

  • Did you remove the hdparm package only, or purge it? There is a small chance that some (fixed) scripts remained as config files.
  • Is an empty line required between two drive blocks?
  • Does the script exit if applying settings to one drive fails, omitting settings for the drives declared later? Of course it does not make sense to add a block for the SD card (mmcblk0), but I would have guessed it then simply continues without effect on that drive.

I'll run some tests. /lib/udev/hdparm, btw, is the script that calls the hdparm command with options based on /etc/hdparm.conf when a drive is added, hence it can be called manually for testing. Since the quiet option is translated into the hdparm -q command option, it suppresses non-error output; for testing, it should be removed.
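
For manual testing, something like this should trigger the same code path the udev rule uses (assuming the script picks the device up from udev's DEVNAME environment variable, as Debian udev RUN scripts typically do; remove 'quiet' from /etc/hdparm.conf first so hdparm's output is visible):

DEVNAME=/dev/sda bash -x /lib/udev/hdparm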

@tesseract241
As of #3420, the additional newline in our hdparm.conf might just have been the issue in your case as well, right? The special handling for devices which do not support APM, with hdparm versions which support force_spindown_time, is still a topic, but for now the syntax fix should resolve things in most cases.

I found a better solution compared to adding a dedicated block for every drive, checking if it is even a spinning drive, etc.: https://github.com/MichaIng/DietPi/commit/8f5fe825d62aae43e7b479afb641ee0e40553555
If settings are added outside a block, they are interpreted as defaults to be used for all drives for which no block is present. Much cleaner, exactly what we want, and compatible with all Debian versions.
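
In hdparm.conf terms, the idea looks roughly like this (a hedged illustration based on the description above; the values are examples, not necessarily what the commit writes):

# Options outside of any block act as defaults for all drives...
apm = 127
spindown_time = 241

# ...while a per-drive block still overrides them for that drive only.
/dev/sdb {
    apm = 254
}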

This solves the force_spindown_time issue on Bullseye as well:

  • On Buster this setting has been implemented, but it is interpreted correctly only by the pm-utils script, which is not called by the plain udev rules. The udev rules script would interpret force_spindown_time wrongly, and spindown_time is applied regardless of APM support, the same as it was on Stretch.
  • Since Bullseye, the pm-utils script is used by the udev rules as well, hence spindown_time is only effective if the drive supports APM. force_spindown_time needs to be used there to override this behaviour, but luckily it is interpreted/translated by the udev script as well.
  • So basically we now use force_spindown_time from Bullseye on and spindown_time on Buster and Stretch (see the sketch below), which guarantees spindown in every case, as long as the drive supports it at all, regardless of APM support.
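
A condensed sketch of that version switch (not the actual dietpi-drive_manager code; the Debian major version is read from /etc/debian_version here purely for illustration):

debian_major=$(cut -d. -f1 /etc/debian_version)
if [[ $debian_major =~ ^[0-9]+$ ]] && (( debian_major >= 11 )); then
    setting='force_spindown_time'   # Bullseye and later: the udev-called script honours it
else
    setting='spindown_time'         # Stretch/Buster: applied regardless of APM support
fi
echo "$setting = 241" >> /etc/hdparm.conf   # 241 = spin down after 30 minutes idle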

Changelog: https://github.com/MichaIng/DietPi/commit/809077463dc55006e66dc81170d381c225b42b54

The problem with this in general is that hdparm does not work with all external HDDs. I have two Western Digitals (from 2019 and 2020) and a Seagate (from 2019), and all three are not supported by hdparm for some reason.
After reading the guide linked below, I just installed hd-idle (with hdparm still installed, since dietpi-drive_manager needs it), and out of the box, without any additional config (just apt install hd-idle), all three HDDs, spread over two Intel mini-PCs, now spin down after 10 minutes. 10 minutes is fine for me, so I didn't bother trying to adjust it. Also, it doesn't seem to interfere with the existing hdparm installation.
Perhaps dietpi-drive_manager should migrate to hd-idle?

guide used: https://www.htpcguides.com/spin-down-and-manage-hard-drive-power-on-raspberry-pi/
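
If someone wants a specific timeout instead of the default, this is roughly how hd-idle is tuned on Debian (hedged: -i sets the idle time in seconds, and /etc/default/hd-idle is where the package usually reads its options; older packages may additionally need START_HD_IDLE=true there):

apt install hd-idle
# then in /etc/default/hd-idle, e.g.:
#   HD_IDLE_OPTS="-i 600"        # spin down all drives after 10 minutes idle
systemctl restart hd-idle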
