6.3.3.4. Configuring Multidisk Devices (Software RAID)


If you have more than one hard drive⁵ in your computer, you can use mdcfg to set up your drives for increased performance and/or better reliability of your data. The result is called a Multidisk Device (or, after its most famous variant, software RAID).

MD is basically a bunch of partitions located on different disks and combined to form a logical device. This device can then be used like an ordinary partition (i.e. in partman you can format it, assign a mountpoint, etc.).

What benefits this brings depends on the type of MD device you are creating. Currently supported are:


5. To be honest, you can construct an MD device even from partitions residing on a single physical drive, but that won’t bring any benefits.


RAID0

Is mainly aimed at performance. RAID0 splits all incoming data into stripes and distributes them equally over each disk in the array. This can increase the speed of read/write operations, but when one of the disks fails, you will lose everything (part of the information is still on the healthy disk(s), the other part was on the failed disk).

The typical use for RAID0 is a partition for video editing.
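
The striping idea can be sketched in a few lines of Python. This is a toy model, not the kernel's md implementation; the chunk size and the byte-array "disks" are illustrative assumptions. It shows why losing one member destroys the whole array: each disk holds only every n-th chunk of the data.

    # Toy model of RAID0 striping: cut the data into fixed-size chunks
    # and hand them out round-robin over the member disks.
    CHUNK_SIZE = 4  # illustrative; real arrays use chunks of e.g. 512 KiB

    def stripe(data: bytes, disks: list[bytearray]) -> None:
        """Distribute data over the disks, one chunk per disk in turn."""
        for i in range(0, len(data), CHUNK_SIZE):
            disks[(i // CHUNK_SIZE) % len(disks)].extend(data[i:i + CHUNK_SIZE])

    disks = [bytearray(), bytearray()]
    stripe(b"ABCDEFGHIJKLMNOP", disks)
    print(disks)  # disk 0 holds ABCD+IJKL, disk 1 holds EFGH+MNOP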


RAID1

Is suitable for setups where reliability is the first concern. It consists of several (usually two) equally-sized partitions where every partition contains exactly the same data. This essentially means three things. First, if one of your disks fails, you still have the data mirrored on the remaining disks. Second, you can use only a fraction of the available capacity (more precisely, it is the size of the smallest partition in the RAID). Third, file-reads are load-balanced among the disks, which can improve performance on a server, such as a file server, that tends to be loaded with more disk reads than writes.

Optionally you can have a spare disk in the array which will take the place of the failed disk in the case of failure.
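
Mirroring is even simpler to sketch. In this illustrative Python model (again a toy under assumed names, not md itself), every write lands on every member, and reads rotate over the members, which is the read load balancing mentioned above.

    import itertools

    # Toy model of RAID1: identical data on every member.
    disks = [bytearray(), bytearray()]
    _next_reader = itertools.cycle(range(len(disks)))  # round-robin reads

    def mirror_write(data: bytes) -> None:
        for d in disks:        # the same bytes go to all members
            d.extend(data)

    def balanced_read() -> bytes:
        return bytes(disks[next(_next_reader)])  # any member can serve it

    mirror_write(b"precious data")
    print(balanced_read())  # b'precious data', from either disk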


RAID5

Is a good compromise between speed, reliability and data redundancy. RAID5 splits all incoming data into stripes and distributes them equally on all but one disk (similar to RAID0). Unlike RAID0, RAID5 also computes parity information, which gets written on the remaining disk. The parity disk is not static (that would be called RAID4), but rotates from stripe to stripe, so the parity information is distributed equally over all disks. When one of the disks fails, the missing part of the information can be computed from the remaining data and its parity. RAID5 must consist of at least three active partitions. Optionally you can have a spare disk in the array which will take the place of the failed disk in the case of failure.

As you can see, RAID5 has a similar degree of reliability to RAID1 while achieving less redundancy. On the other hand, it might be a bit slower on write operations than RAID0 due to the computation of parity information.
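
The recovery property comes from XOR parity: the parity chunk of a stripe is the XOR of its data chunks, so any single missing chunk equals the XOR of everything that survives. A minimal Python sketch of this arithmetic (the real md driver is far more involved):

    # XOR all chunks together, byte by byte.
    def xor_parity(chunks: list[bytes]) -> bytes:
        out = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, b in enumerate(chunk):
                out[i] ^= b
        return bytes(out)

    data = [b"AAAA", b"BBBB"]    # data chunks on two disks
    parity = xor_parity(data)    # parity chunk on the third disk

    # The disk holding b"BBBB" fails; XOR of the survivors restores it.
    assert xor_parity([data[0], parity]) == b"BBBB"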


RAID6

Is similar to RAID5 except that it uses two parity devices instead of one. A RAID6 array can survive up to two disk failures.

RAID10

RAID10 combines striping (as in RAID0) and mirroring (as in RAID1). It creates n copies of incoming data and distributes them across the partitions so that none of the copies of the same data are on the same device. The default value of n is 2, but it can be set to something else in expert mode. The number of partitions used must be at least n. RAID10 has different layouts for distributing the copies. The default is near copies. Near copies have all of the copies at about the same offset on all of the disks. Far copies have the copies at different offsets on the disks. Offset copies copy the stripe, not the individual copies.

RAID10 can be used to achieve reliability and redundancy without the drawback of having to calculate parity.
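
A toy Python model of the default near layout (assuming n = 2 copies over four disks; the placement rule is simplified from the description above) shows how the copies end up on different devices:

    N_COPIES = 2  # the default value of n

    def raid10_near(chunks: list[bytes], n_disks: int) -> list[list[bytes]]:
        """Place each chunk on N_COPIES adjacent disks at the same offset."""
        disks: list[list[bytes]] = [[] for _ in range(n_disks)]
        for i, chunk in enumerate(chunks):
            first = (i * N_COPIES) % n_disks
            for c in range(N_COPIES):
                disks[(first + c) % n_disks].append(chunk)
        return disks

    print(raid10_near([b"A", b"B", b"C", b"D"], n_disks=4))
    # disks 0 and 1 hold A,C; disks 2 and 3 hold B,D. No disk carries two
    # copies of the same chunk, so the array survives any single failure.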

To sum it up:


Type      Minimum   Spare      Survives         Available Space
          Devices   Device     disk failure?
RAID0     2         no         no               Size of the smallest partition
                                                multiplied by the number of
                                                devices in RAID
RAID1     2         optional   yes              Size of the smallest partition
                                                in RAID
RAID5     3         optional   yes              Size of the smallest partition
                                                multiplied by (number of
                                                devices in RAID minus one)
RAID6     4         optional   yes              Size of the smallest partition
                                                multiplied by (number of
                                                devices in RAID minus two)
RAID10    2         optional   yes              Total of all partitions divided
                                                by the number of chunk copies
                                                (defaults to two)
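
The "Available Space" column can be restated as one small Python function. This is just the table's formulas in code (sizes are per-partition, all in the same unit; the function name is my own):

    def available_space(raid_type: str, sizes: list[int], copies: int = 2) -> int:
        """Usable capacity of an MD device, per the table above."""
        smallest, n = min(sizes), len(sizes)
        if raid_type == "RAID0":
            return smallest * n
        if raid_type == "RAID1":
            return smallest
        if raid_type == "RAID5":
            return smallest * (n - 1)
        if raid_type == "RAID6":
            return smallest * (n - 2)
        if raid_type == "RAID10":
            return sum(sizes) // copies
        raise ValueError(f"unknown type: {raid_type}")

    print(available_space("RAID5", [100, 100, 100]))  # 200 (GB)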


If you want to know more about Software RAID, have a look at the Software RAID HOWTO (http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html).

To create an MD device, you need to have the desired partitions it should consist of marked for use in a RAID. (This is done in partman in the Partition settings menu where you should select Use as: → physical volume for RAID.)


Note: Make sure that the system can be booted with the partitioning scheme you are planning. In general it will be necessary to create a separate file system for /boot when using RAID for the root (/) file system. Most boot loaders do support mirrored (not striped!) RAID1, so using for example RAID5 for / and RAID1 for /boot can be an option.


Next, you should choose Configure software RAID from the main partman menu. (The menu will only appear after you mark at least one partition for use as physical volume for RAID.) On the first screen of mdcfg simply select Create MD device. You will be presented with a list of supported types of MD devices, from which you should choose one (e.g. RAID1). What follows depends on the type of MD you selected.


• RAID0 is simple: you will be shown the list of available RAID partitions, and your only task is to select the partitions which will form the MD.


• RAID1 is a bit more tricky. First, you will be asked to enter the number of active devices and the number of spare devices which will form the MD. Next, you need to select from the list of available RAID partitions those that will be active and then those that will be spare. The count of selected partitions must match the numbers provided earlier. Don't worry: if you make a mistake and select a different number of partitions, debian-installer won't let you continue until you correct the issue. (These device-count rules are sketched in code after this list.)

• RAID5 has a setup procedure similar to RAID1 with the exception that you need to use at least three active partitions.

• RAID6 also has a setup procedure similar to RAID1 except that at least four active partitions are required.

• RAID10 again has a setup procedure similar to RAID1 except in expert mode. In expert mode, debian-installer will ask you for the layout. The layout has two parts. The first part is the layout type. It is either n (for near copies), f (for far copies), or o (for offset copies). The second part is the number of copies to make of the data. There must be at least that many active devices so that all of the copies can be distributed onto different disks.
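
The device-count rules from this list can be summed up in a short validation sketch (illustrative Python mirroring the constraints described above; the real checks are performed by debian-installer itself):

    MIN_ACTIVE = {"RAID0": 2, "RAID1": 2, "RAID5": 3, "RAID6": 4, "RAID10": 2}

    def check_md(raid_type: str, active: int, copies: int = 2) -> None:
        """Raise if the proposed MD device violates the minimums."""
        if active < MIN_ACTIVE[raid_type]:
            raise ValueError(f"{raid_type} needs at least "
                             f"{MIN_ACTIVE[raid_type]} active partitions")
        if raid_type == "RAID10" and active < copies:
            raise ValueError("RAID10 needs at least as many active "
                             "partitions as there are copies")

    check_md("RAID5", active=3)             # fine
    check_md("RAID10", active=2, copies=2)  # fine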

It is perfectly possible to have several types of MD at once. For example, if you have three 200 GB hard drives dedicated to MD, each containing two 100 GB partitions, you can combine the first partitions on all three disks into a RAID0 (a fast 300 GB video-editing partition) and use the other three partitions (2 active and 1 spare) for a RAID1 (a quite reliable 100 GB partition for /home).

After you have set up the MD devices to your liking, you can choose Finish mdcfg to return to partman to create filesystems on your new MD devices and assign them the usual attributes like mountpoints.


6.3.3.5. Configuring the Logical Volume Manager (LVM)
