
Leaving out many details, I need to create a read/write file system on a device with the following main goals:

  • Eliminate all writes while data is not being explicitly written.
  • Reduce all indirect writes when data is written.
  • Run fsck on boot after unclean unmount.

Currently I am using ext3, mounted with noatime. I am not familiar with the details of ext3. In particular, is data written to an ext3 filesystem during "idle" time, when no programs are explicitly writing data (specifically, I'm thinking of kjournald and the commit= mount option)?

If I switch to ext2, will that meet all the above requirements? In particular, do I have to set anything up to force an fsck after a sudden power cut?

My options are fat32, ext, ext2, and ext3, plus all of the settings available via mount. Performance is not critical, and neither is robustness with respect to bad sectors developing over time.

Jason C

1 Answer


You don't need to switch to ext2; you can tune ext3.

  • You can change the fsck requirements of a filesystem using `tune2fs`. A quick look tells me the correct command is `tune2fs -c <mount-count>`, but see the man page for the details.
  • You can change how data is written to an ext3 filesystem via mount options. You want either `data=journal` or `data=ordered`. You can further tune journal commits via other options (e.g. `commit=`). Please see this page.
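Put together, a minimal sketch of both steps might look like this (the device name `/dev/sdb1` and mount point `/data` are placeholders; substitute your own):

```shell
# Force a consistency check on every boot: with a max mount
# count of 1, e2fsck is triggered each time the filesystem
# is mounted.
tune2fs -c 1 /dev/sdb1

# The journaling mode must be chosen at mount time, e.g. via
# an /etc/fstab entry like:
#   /dev/sdb1  /data  ext3  noatime,data=ordered,commit=60  0  2
# commit=60 batches journal flushes into 60-second intervals
# instead of the default 5 seconds.
```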

Last but not least, on big drives fsck can take a long time with ext3. Why don't you consider ext4 as an option?

Please comment on this answer if I left anything in the dark.

bayindirh
  • Thanks! During the periodic metadata/data syncs, will any writes to the device still occur even if no data has been modified? Basically, it's a read-write file system that is very, very rarely written to, and I need to be certain that filesystem-management-related writes aren't occurring during times when nothing is writing data to the device. – Jason C Oct 17 '13 at 20:24
  • The back story is that it's on an SD card, and we just had a large number of devices fail within a week of each other due to eventual SD card write failures. However, I need to leave an area on the card writable for rare but inevitable remote software updates, configuration changes and such. I need the card to not be written to at all outside of these situations. (The root filesystem is also on the SD card, and we've taken the paranoid approach of mounting everything read-only, then fixing broken things case by case.) – Jason C Oct 17 '13 at 20:28
  • 1
    You can bypass idle journal to disk commits with `data=ordered` mount option, but none of these will override the disk caching mechanisms of Linux, which is a different world that I have no experience. You may try to drop the caches periodically, but this doesn't sound correct at all. – bayindirh Oct 17 '13 at 20:28
  • Great; thanks. I am also going to hunt around to see if I can find a tool that can monitor disk writes so I can verify. Thanks again. – Jason C Oct 17 '13 at 20:31
  • 1
    You're welcome! First of all, you need to use professional grade SD cards for such jobs. Second you really should use `noatime` while mounting. Third, you can partition the card and put the writeable files here. Professional flash drives have better write leveling and move static data around to ensure endurance. Try Sandisk Extreme (Pro), Toshiba Exceria or Lexar Professional. – bayindirh Oct 17 '13 at 20:34
  • There are many tools for disk write monitoring. `sar` and `iotop` are the ones I can remember in a flash. `sar` can collect statistics; `iotop` can show realtime data. – bayindirh Oct 17 '13 at 20:37
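For reference, the two tools might be invoked like this (a sketch; `iotop` needs root, and `sar` comes from the sysstat package):

```shell
# Show only processes actually doing I/O, accumulating totals
# over the whole run, so rare writes stand out (run as root):
iotop -o -a

# Sample per-block-device activity every 5 seconds, 12 times:
sar -d 5 12
```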
  • Yup, I'm already mounting with noatime, and we are using the Sandisk Extreme Pro (switched to it after this last round of failures), and right now I'm putting the final tweaks on the writable partition. You just gave me a lot of confidence that I'm doing the right thing. :) Just tried `iotop -oa`, it's exactly the info I was looking for. – Jason C Oct 17 '13 at 20:47
  • 1
    I'm glad to hear that helped. Good luck and oh, why don't you ask to the Raspberry Pi guys too? They're running over SD cards too! – bayindirh Oct 17 '13 at 20:55
  • 1
    I thought I'd post back here with an update. Ultimately, I took your last bit of advice and switched both filesystems to ext4, because I have a lot of tuning control and also it supports trim (a major downside to ext2 on a flash drive). I ended up disabling the journal on both the ro and rw file system, and I mount the rw system with `discard`, and of course both with `noatime`. Disabling the journal lets me not worry about tuning it. Since I clone these SD cards from images, I run `fstrim` on each filesystem initially immediately after writing the image. I also disable filesystem checks ... – Jason C Oct 19 '13 at 19:35
  • 1
    ... on boot. The caveat is I have to make copious use of sync on the rare occasions I modify data, but I can confirm *no* unsolicited writes, *and* I have the bonus of effective wear leveling. Performance is not 100% of potential, there are of course risks (fsck would have to be run manually after a power failure during a large write, this is a potential risk but is extremely rare, still I may ultimately sacrifice the 6 seconds of boot time and re-enable it), and I certainly wouldn't recommend this setup outside of a very controlled situation, but for our usage here it's perfect. Thanks again. – Jason C Oct 19 '13 at 19:41
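A sketch of the setup described in the comments above (the device name and mount point are hypothetical; the filesystem must be unmounted and clean before removing the journal):

```shell
DEV=/dev/mmcblk0p2   # hypothetical writable partition

# Drop the journal entirely, then verify the filesystem:
tune2fs -O ^has_journal "$DEV"
e2fsck -f "$DEV"

# Mount with online TRIM and no access-time updates:
mount -o discard,noatime "$DEV" /data

# After cloning from an image, discard all unused blocks once:
fstrim /data

# After any rare write, flush everything to the card:
sync
```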
  • 1
    @JasonC, you are most definitely welcome! Your setup feels very tuned and robust. In some cases, performance can go out of window if stability is an absolute requirement. I can suggest another bit of polish about `fsck`. If I were you, I would enable `fsck`but set the `maximum mount`to something very big (even to unlimited if possible). In that case the system will only `fsck` the card if it's not clean (i.e. power-loss during write). This will make system more maintenance free and dependable. Again, this is a pure suggestion (feel no pressure please). :) Congrats again! – bayindirh Oct 20 '13 at 09:44
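In `tune2fs` terms, that suggestion might look like the following (device name is hypothetical; a count/interval of 0 disables the periodic checks while an uncleanly unmounted filesystem is still checked):

```shell
# Disable the mount-count and time-interval based checks;
# e2fsck still runs at boot if the filesystem was not cleanly
# unmounted (e.g. after a power loss mid-write).
tune2fs -c 0 -i 0 /dev/mmcblk0p2
```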