
I have two drives (500 GB and 4 TB) that I use with my Raspberry Pi. Usually only one of them is plugged in, though I occasionally need the other. I created the mount points and added these lines to /etc/fstab:

UUID=0e399206-35fc-4ef2-bc90-925db7c34270 /mnt/4TB ext4 defaults,nofail,x-systemd.device-timeout=4 0 0
UUID=575A-EC15  /mnt/500GB exfat defaults,nofail,x-systemd.device-timeout=4,uid=1000,gid=1000,umask=003 0 0

The last time I booted it up the 500 GB disk was attached and was mounted at boot, and the system started up properly without the 4 TB one because of nofail and x-systemd.device-timeout.

However, today I had to plug it in and was surprised to see that it was automatically mounted according to the fstab. Even though I haven't set up any automount, I wouldn't mind this behaviour by itself, but after checking the journal I found that systemd had apparently kept trying to mount the disk after boot, at variable intervals, until it actually became available, which is definitely not what I want.

The last lines from the journal regarding this:

ago 01 20:58:55 Gawain systemd[1]: mnt-4TB.mount: Job mnt-4TB.mount/start failed with result 'dependency'.
-- Subject: Unit mnt-4TB.mount has failed
-- Unit mnt-4TB.mount has failed.
ago 02 00:00:05 Gawain systemd[1]: mnt-4TB.mount: Job mnt-4TB.mount/start failed with result 'dependency'.
-- Subject: Unit mnt-4TB.mount has failed
-- Unit mnt-4TB.mount has failed.
ago 02 00:20:03 Gawain systemd[1]: mnt-4TB.mount: Job mnt-4TB.mount/start failed with result 'dependency'.
-- Subject: Unit mnt-4TB.mount has failed
-- Unit mnt-4TB.mount has failed.
ago 02 11:27:35 Gawain systemd[1]: mnt-4TB.mount: Job mnt-4TB.mount/start failed with result 'dependency'.
-- Subject: Unit mnt-4TB.mount has begun start-up
-- Unit mnt-4TB.mount has begun starting up.
-- Subject: Unit mnt-4TB.mount has finished start-up
-- Unit mnt-4TB.mount has finished starting up.

And from dmesg:

[Wed Aug  2 14:01:52 2017] sd 1:0:0:0: [sdb] 7814037167 512-byte logical blocks: (4.00 TB/3.64 TiB)
[Wed Aug  2 14:01:52 2017] sd 1:0:0:0: [sdb] 4096-byte physical blocks
[Wed Aug  2 14:01:52 2017] sd 1:0:0:0: [sdb] Write Protect is off
[Wed Aug  2 14:01:52 2017] sd 1:0:0:0: [sdb] Mode Sense: 47 00 00 08
[Wed Aug  2 14:01:52 2017] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[Wed Aug  2 14:01:52 2017] sd 1:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[Wed Aug  2 14:01:52 2017]  sdb: sdb1 sdb2
[Wed Aug  2 14:01:52 2017] sd 1:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[Wed Aug  2 14:01:52 2017] sd 1:0:0:0: [sdb] Attached SCSI disk
[Wed Aug  2 14:02:24 2017] EXT4-fs (sdb2): mounted filesystem with ordered data mode. Opts: (null)
user2859982

1 Answer


The default option `auto` is assumed, and this creates a dependency for `local-fs.target`, which might be wanted by something. You can override this with `noauto`, and then try adding `x-systemd.automount` to have the filesystem mounted when you refer to the mount point.
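
For example, the 4 TB line from the question could be rewritten along these lines (a sketch, not tested on your setup):

UUID=0e399206-35fc-4ef2-bc90-925db7c34270 /mnt/4TB ext4 noauto,x-systemd.automount,nofail,x-systemd.device-timeout=4 0 0

With `noauto` the mount is no longer pulled in (or retried) by the boot-time dependency machinery, and `x-systemd.automount` makes systemd set up an autofs trap on `/mnt/4TB`, so the real mount happens on first access to the path. Run `sudo systemctl daemon-reload` after editing /etc/fstab so systemd regenerates the units.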

meuh
  • `local-fs.target` is wanted by `sysinit.target`, which is in turn wanted by quite a few other units. However, `local-fs.target` is "active" (because of `nofail`, I guess). So I'd say it's probably systemd itself periodically trying to start the remaining dependency anyway. Would that be documented somewhere? I haven't had much luck searching. – user2859982 Aug 02 '17 at 18:05
  • It's true that `auto` is the default, and _also_ that adding `noauto` would remove this behaviour. It then wouldn't mount during boot either, which is where you want some friendly mounter software. `x-systemd.automount` is one such option. In general the problem without `noauto` is the device unit gets `Wants=foo.mount` which causes the mount to be activated on hotplug. This should be documented under the automatic dependencies in `man systemd.mount`, but is not. I also thought I'd seen a developer explain this somewhere but I can't find it. – sourcejedi Feb 06 '18 at 17:01