Setup
I have a USB enclosure (Buffalo DriveStation Quad) containing four drives, connected to my NAS server (Ubuntu Server 14.04). The enclosure is configured in JBOD mode, so all four disks show up individually in Linux.
Two of the disks (sdb and sdc) are configured with software RAID as /dev/md0 (RAID 1), and /dev/md0 is mounted as a single partition (/mnt/part1) with an ext4 filesystem without journalling.
The other two disks (sdd and sde) are set up with LVM as one volume group, from which I have mounted two logical volumes: one spanning 90% of the volume group's capacity (/mnt/part2) and one spanning the remaining 10% (/mnt/part3). Both are also ext4 without journalling.
APM Issues
My problems started with the default APM mode, as I noticed that the drive heads parked quite aggressively, every couple of minutes. After researching the topic for a bit, I ended up using hdparm -B198 /dev/sd[bcde]. This seems to allow some level of power saving without any head parking.
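For completeness, I also made the APM setting persistent across reboots via /etc/hdparm.conf (this is the Debian/Ubuntu style of that file; sketched here for one drive, repeated for sdc–sde):

```
/dev/sdb {
    apm = 198
}
```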
Any sleep?
I'm sort of happy with the current situation, but I'd still like the drives to go to sleep when there is no activity, especially sdb and sdc (/mnt/part1), which see no activity 95% of the time. Whatever I've tried, the problem is that the drives don't stay asleep longer than a minute or two.
Unmounting all partitions and issuing hdparm -y /dev/sd[bcde] puts the drives into standby mode, but only for a few minutes; after that they all wake up one by one. I've tried to debug the issue by enabling block_dump (echo 1 > /proc/sys/vm/block_dump), but I don't see any access to the disks.
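For reference, this is roughly how I captured the block_dump output (run as root; the sleep interval is just what I happened to use):

```shell
# Log every block-layer access to the kernel ring buffer
echo 1 > /proc/sys/vm/block_dump

sleep 300   # wait long enough to span the wake-ups

# block_dump lines look like "task(pid): READ block N on sdb1";
# filter for the four enclosure disks
dmesg | grep -E 'sd[bcde]'

# Turn the logging back off
echo 0 > /proc/sys/vm/block_dump
```

Nothing matching sd[bcde] ever shows up, which is why I don't think a userspace process is touching the disks.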
I also tried disabling APM entirely with hdparm -B255 /dev/sd[bcde] and then commanding the drives to standby, but the same thing happens: they wake up after a couple of minutes.
I don't have mdadm running in daemon mode (just a single check once a day), nor should there be anything else probing the drives. Any ideas on what to try next? Is the Buffalo USB enclosure just crappy, and doing this on its own?
Update #1
I timed how long it takes for the disks to wake up after issuing hdparm -y /dev/sd[bc]. The following timestamps illustrate the pattern:
00:00 hdparm -y /dev/sd[bc]
00:40 disks start to wake up
00:59 disks fully awake
01:00 hdparm -y /dev/sd[bc]
03:40 disks start to wake up
03:59 disks fully awake
04:00 hdparm -y /dev/sd[bc]
06:40 disks start to wake up
06:59 disks fully awake
I.e. it seems that something checks/wakes the disks every 3 minutes (at 0:40, 3:40, 6:40, ...); my first standby command just happened to land 40 seconds before the first checkpoint.
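This is how I watched the power states to get those timestamps (hdparm -C issues CHECK POWER MODE, which reports the state without waking a sleeping drive; the 10-second interval is my choice):

```shell
# Poll each drive's power state and log it with a timestamp,
# to pinpoint the ~3-minute wake pattern
while true; do
    for dev in /dev/sd[bcde]; do
        # hdparm -C prints a line like " drive state is:  active/idle";
        # keep only the last field (standby / active/idle)
        state=$(hdparm -C "$dev" | awk '/drive state/ {print $NF}')
        echo "$(date +%T) $dev $state"
    done
    sleep 10
done
```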
Update #2
Rebooted the machine with acpi=off apm=off on the kernel command line. That did not help either. By the way, the machine is a Lenovo L520 laptop, in case someone finds that relevant.