For the last ten years, I have been building inexpensive servers for small businesses. Each server has two identical drives mirrored with RAID1. Later SATA became available, then eSATA. In the past, when a drive failed, the policy was simply to replace both drives. Since the drives were internal, there was no real way to avoid downtime or hourly charges while replacing them.
I have recently switched to a small xi3 server with two external SATA drives. I couldn't do it with squeeze, but with wheezy and beyond the external drives are easily hot-pluggable. The main reason for replacing both drives in the past was that the cost of the second drive was small compared to the labor of taking the box down and working on the drives. The other reason was that by the time one drive died, identical drives were no longer available on the market. With eSATA drives, replacing just a single drive on a small, cheap server has finally become practical.
I have been reading that LVM is now capable of RAID1 mirroring itself, but also that the technology is still new enough that little has been written about it. The answer here was really informative, but doesn't really help me determine how easily I can replace a failed drive. The option I am still most comfortable with is creating the md partitions and then making the md device a PV. Today, I can replace a failed drive with an identical drive: the new drive only has to be partitioned like the first one (I am considering eliminating partitions entirely and placing /boot inside LVM, which is also a recent feature). If the new drive is larger than the failed one, I have been assured that I can create a partition at least as big as the md partition on the running drive, and that will work fine, simply leaving extra space unused on the new drive.
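For reference, the replacement procedure I'm describing with md underneath looks roughly like this (the device names /dev/sda, /dev/sdb and the array name /dev/md0 are placeholders for my setup, not anything specific):

```shell
# /dev/sda is the surviving drive, /dev/sdb the hot-plugged replacement.
# If the replacement is the same size, clone the partition table:
sfdisk -d /dev/sda | sfdisk /dev/sdb
# If the replacement is larger, instead create /dev/sdb1 by hand
# (fdisk/parted), at least as big as /dev/sda1; the extra space is unused.

# Re-add the new member to the degraded mirror and watch the resync:
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat
```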
Please keep in mind that the goal of these servers is to provide as much functionality and reliability as possible for the lowest price, in an environment where there is absolutely no in-house IT. There will be no RAID other than type 1 mirroring. Primarily, what I am looking for is minimal service hours after a drive failure, which I can achieve with external hot-plug drives. Secondarily, I would like to allocate all of the available mirrored space to file storage, which is harder when using preset md partitions. I can see from the answer linked above that lvextend will work when the drives get bigger, but that answer was unclear about reassembly, especially with a simple RAID1. That is what is most problematic about traditional RAID under LVM, and it is why I am investigating using LVM's RAID1 directly instead of mdraid underneath. The question remains: is it ready yet? If so, where should I go to RTFM?
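For completeness, my understanding of the grow path with md underneath is roughly the following, once both mirror members have been replaced with larger drives (VG and LV names here are placeholders, and ext4 is assumed):

```shell
# Extend the md array to the full size of the new, larger partitions:
mdadm --grow /dev/md0 --size=max
# Make LVM notice the extra space on the PV, then grow the LV and fs:
pvresize /dev/md0
lvextend -l +100%FREE /dev/vg0/storage
resize2fs /dev/vg0/storage
```

It is the equivalent of this sequence in the LVM-RAID1-only world that I can't find documented.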
To cut a long story short:
- Is LVM's RAID1 reliable?
- Can I convert a volume currently using LVM over dm-raid RAID1 to LVM RAID1?
- Alternatively, can I enlarge the dm-raid RAID1 volume when I replace a disk with a larger one?
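From what I have read so far, the conversion in the second question would look something like the following, but this is exactly the part I would like confirmed before trusting it in production (VG/LV names and /dev/sdb1 are placeholders, and this assumes the second drive has been freed from the old mirror):

```shell
# Bring the freed drive into the volume group as a second PV:
pvcreate /dev/sdb1
vgextend vg0 /dev/sdb1
# Convert the existing linear LV into an LVM RAID1 with one extra copy:
lvconvert --type raid1 -m 1 /dev/vg0/storage
# Watch the mirror synchronize:
lvs -a -o name,copy_percent,devices vg0
```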