We write software that runs on third-party devices. On one of the devices we support, the manufacturer tells us not to write to the flash drive, or we risk exhausting the limited number of write (erase) cycles it supports. Unfortunately, one of our application's requirements is to persist some data across boots, and we have no alternative storage.
I don't know exactly what the drive inside the device is, or how it is configured, so one question is: how can I go about finding this information? Here is what I have managed to find so far:
bash-3.2$ df | grep mtd
/dev/mtdblock5 65536 7824 57712 12% /apps
bash-3.2$ dmesg | grep -i mtd
Kernel command line: root=/dev/mtdblock4 rootfstype=jffs2 rw ip=none console= mem=128M init=/sbin/init mtdparts=mtd:512k(bootloader),512k(env),2M(kernel_a),2M(kernel_b),59M(filesystem),64M(user) loglevel=3 panic=5 reboot=h
6 cmdlinepart partitions found on MTD device <NULL>
Creating 6 MTD partitions on "<NULL>":
I've had a look in /proc and /sys and didn't find anything useful. The device environment doesn't have any useful tools installed, such as hdparm, lshw, etc., that I can find.
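For reference, this is the kind of thing I've been poking at. The mtd5 index is my assumption based on the /apps mount in the df output above, and I'm not sure these sysfs files exist on this kernel, so the sketch skips anything unreadable:

```shell
# Places that usually identify the flash chip without extra tools.
# mtd5 is an assumption (matches the /apps partition above); the sysfs
# entries may not all exist on older kernels, so unreadable files are skipped.
for f in /proc/mtd \
         /sys/class/mtd/mtd5/type \
         /sys/class/mtd/mtd5/erasesize \
         /sys/class/mtd/mtd5/name; do
    if [ -r "$f" ]; then
        echo "== $f =="
        cat "$f"
    fi
done
```

If mtd-utils happened to be installed, I gather `mtdinfo /dev/mtd5` would report much of the same in one go, but it isn't on this device.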
Another question is whether there are any heuristics software could use to detect that the write limit is approaching?
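One idea I had: on NAND flash, a growing bad-block or ECC-failure count seems like the obvious wear signal. A sketch, assuming the per-device ECC statistics that reasonably recent kernels expose under /sys/class/mtd (I can't confirm this kernel has them):

```shell
# Wear-signal sketch: read the bad-block and ECC-failure counters for mtd5.
# Assumptions: mtd5 is the partition of interest, and the kernel is new
# enough to export these counters (availability varies by kernel and driver).
for f in /sys/class/mtd/mtd5/bad_blocks /sys/class/mtd/mtd5/ecc_failures; do
    if [ -r "$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$f")"
    fi
done
```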
Finally, are there any best practices that could be observed while writing to the flash to limit the negative effects? For example, are small bursts of writing better than sustained write operations? Is raw data throughput the problem, or is it a file-system thing? If I open a file once and keep streaming data to it, is that better than opening, writing, and closing it for each new piece of data?
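To make that last question concrete, the pattern I'm considering is buffering records in RAM and flushing them to flash in larger, less frequent writes (the kernel command line above shows the root filesystem is JFFS2, which I understand already wear-levels at the filesystem layer). A minimal sketch; the paths and the 4 KiB threshold are illustrative assumptions, with /tmp presumed to be RAM-backed (tmpfs) and /apps the flash partition from the df output above:

```shell
# Write-batching sketch: append records to a RAM-backed staging file and
# copy to flash only when the buffer is large, so the flash filesystem sees
# one big write instead of many small ones.
BUF=/tmp/app.buf          # staging file (assumption: /tmp is tmpfs)
DEST=/apps/app.log        # persistent copy on the JFFS2 partition

log_record() {
    printf '%s\n' "$1" >> "$BUF"
    # Flush only once the buffer has grown past ~4 KiB (illustrative threshold)
    if [ "$(wc -c < "$BUF")" -ge 4096 ]; then
        cat "$BUF" >> "$DEST" && : > "$BUF"
    fi
}

log_record "boot count: 42"
```

The obvious trade-off is that anything still in the RAM buffer is lost on power failure, so the threshold would have to be tuned against how much data we can afford to lose.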
Many thanks for any help you can provide, Dan.