Question:
What is the difference between RAID 0 and RAID 1?
arunkn
2006-03-17 03:28:21 UTC
Three answers:
must_zen
2006-03-17 03:51:27 UTC
RAID 1

A RAID 1 creates an exact copy (or mirror) of a set of data on two or more disks; RAID 0, in contrast, simply stripes data across two or more disks for speed, with no redundancy at all. RAID 1 is useful when read performance and reliability matter more than minimizing the storage capacity used for redundancy. The array can only be as big as its smallest member disk. A classic RAID 1 mirrored pair contains two disks, which increases reliability by a factor of two over a single disk, but it is possible to have many more than two copies. Since each member can be addressed independently if another fails, reliability grows linearly with the number of members. To get the full redundancy benefits of RAID 1, independent disk controllers are recommended, one for each disk; some refer to this practice as splitting or duplexing.



When reading, both disks can be accessed independently. As with RAID 0, the average seek time drops when reading randomly, but because each disk holds exactly the same data, the requested sectors can always be split evenly between the disks, so seek time stays low and the transfer rate is roughly doubled. With three disks the seek time would be roughly a third and the transfer rate tripled; the only limits are how many disks can be attached to the controller and the controller's maximum transfer speed. Many older IDE RAID 1 cards read from only one disk of the pair, so their read performance is that of a single disk. Some older RAID 1 implementations would also read both disks simultaneously and compare the data to catch errors, though the error detection and correction on modern disks makes this less useful in environments requiring normal commercial availability. When writing, the array performs like a single disk, as all mirrors must be written with the data.
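
As a rough illustration of the read and write behavior just described, here is a minimal Python sketch of a mirror set. The class and method names are invented for illustration; writes go to every member, while reads rotate across members:

    # A toy RAID 1: every "disk" is a Python list holding the same blocks.
    class MirrorSet:
        def __init__(self, num_disks, num_blocks):
            self.disks = [[None] * num_blocks for _ in range(num_disks)]
            self.next_disk = 0  # round-robin pointer for reads

        def write(self, block_no, data):
            # Writes hit every mirror, so the array writes like one disk.
            for disk in self.disks:
                disk[block_no] = data

        def read(self, block_no):
            # Reads are spread across mirrors, which is where the
            # near-linear read scaling described above comes from.
            disk = self.disks[self.next_disk]
            self.next_disk = (self.next_disk + 1) % len(self.disks)
            return disk[block_no]

    array = MirrorSet(num_disks=2, num_blocks=4)
    array.write(0, b"A1")
    assert array.read(0) == b"A1"  # served by disk 0
    assert array.read(0) == b"A1"  # served by disk 1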



RAID 1 has many administrative advantages. For instance, in some 365*24 environments, it is possible to "Split the Mirror": declare one disk as inactive, do a backup of that disk, and then "rebuild" the mirror. This requires that the application support recovery from the image of data on the disk at the point of the mirror split. This procedure is less critical in the presence of the "snapshot" feature of some filesystems, in which some space is reserved for changes, presenting a static point-in-time view of the filesystem. Alternatively, a set of disks can be kept in much the same way as traditional backup tapes are.



Also, one common practice is to create an extra mirror of a volume (also known as a Business Continuance Volume or BCV) which is meant to be split from the source RAID set and used independently. In some implementations, these extra mirrors can be split and then incrementally re-established, instead of requiring a complete RAID set rebuild.



Traditional RAID 1

A1  A1
A2  A2
A3  A3
A4  A4



Note: A1, A2, et cetera each represent one data block; each column represents one disk.





RAID 2

A RAID 2 stripes data at the bit (rather than block) level, and uses a Hamming code for error correction. The disks are synchronized by the controller to run in perfect tandem. This is the only original level of RAID that is not currently used. Extremely high data transfer rates are possible.





RAID 3

A RAID 3 uses byte-level striping with a dedicated parity disk. RAID 3 is very rare in practice. One of the side effects of RAID 3 is that it generally cannot service multiple requests simultaneously. This comes about because any single block of data will by definition be spread across all members of the set and will reside in the same location, so any I/O operation requires activity on every disk.



In our example below, a request for block "A", consisting of bytes A1-A9, would require all three data disks to seek to the beginning (A1) and reply with their contents. A simultaneous request for block B would have to wait.



Traditional RAID 3

A1  A2  A3  Ap(1-3)
A4  A5  A6  Ap(4-6)
A7  A8  A9  Ap(7-9)
B1  B2  B3  Bp(1-3)

Note: A1, B1, et cetera each represent one data byte; each column represents one disk.
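
The byte-to-disk mapping in that layout can be sketched in a few lines of Python (the helper names are illustrative, not from any real implementation). Consecutive bytes of a block land on different disks, which is why reading any single block touches every data disk:

    from functools import reduce

    DATA_DISKS = 3  # matches the three data disks in the layout above

    def place_byte(i):
        """Map byte index i to (disk, offset) on the data disks."""
        return (i % DATA_DISKS, i // DATA_DISKS)

    def parity_byte(row):
        """The parity disk stores the XOR of each row of data bytes."""
        return reduce(lambda a, b: a ^ b, row)

    print([place_byte(i) for i in range(6)])
    # [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
    print(hex(parity_byte([0x41, 0x42, 0x43])))  # Ap(1-3) = A1 ^ A2 ^ A3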





RAID 4

A RAID 4 uses block-level striping with a dedicated parity disk. RAID 4 looks similar to RAID 3 except that it stripes at the block, rather than the byte level. This allows each member of the set to act independently when only a single block is requested. If the disk controller allows it, a RAID 4 set can service multiple read requests simultaneously.



In our example below, a request for block "A1" would be serviced by disk 1. A simultaneous request for block B1 would have to wait, but a request for B2 could be serviced concurrently.



Traditional RAID 4

A1  A2  A3  Ap
B1  B2  B3  Bp
C1  C2  C3  Cp
D1  D2  D3  Dp



Note: A1, B1, et cetera each represent one data block; each column represents one disk.
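
A small Python sketch of this layout (names are again illustrative): block n lives on disk n mod 3, and the parity disk is fixed, which is exactly what lets blocks on different data disks be read concurrently:

    DATA_DISKS = 3
    PARITY_DISK = 3  # fixed; this is what distinguishes RAID 4 from RAID 5

    def place_block(n):
        """Map logical block n to (disk, stripe)."""
        return (n % DATA_DISKS, n // DATA_DISKS)

    print(place_block(0))  # (0, 0): A1 on disk 0
    print(place_block(3))  # (0, 1): B1 also on disk 0, so it queues behind A1
    print(place_block(4))  # (1, 1): B2 on disk 1, can be serviced concurrently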







RAID 5

A RAID 5 uses block-level striping with parity data distributed across all member disks. RAID 5 has achieved popularity due to its low cost of redundancy. Generally RAID 5 is implemented with hardware support for parity calculations.



In the example below, a read request for block "A1" would be serviced by disk 1. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently.



Traditional RAID 5

A1  A2  A3  Ap
B1  B2  Bp  B3
C1  Cp  C2  C3
Dp  D1  D2  D3



Note: A1, B1, et cetera each represent one data block; each column represents one disk.



Every time a block is written to a disk in a RAID 5, a parity block is generated within the same stripe. A block is often composed of many consecutive sectors on a disk, and a series of blocks (one from each of the disks in the array) is collectively called a "stripe". If another block, or some portion of a block, is written on that same stripe, the parity block (or some portion of it) is recalculated and rewritten. For small writes, this requires reading the old data, reading the old parity, then writing the new data and the new parity. The disk used for the parity block is staggered from one stripe to the next, hence the term "distributed parity". RAID 5 writes are therefore expensive in terms of disk operations and traffic between the disks and the controller.
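
The small-write sequence and the parity rotation can be sketched as follows. This is a simplified Python illustration over one-byte "blocks", and the placement formula assumes the right-to-left rotation shown in the diagram above, which is only one of several rotations real implementations use:

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def small_write(old_data, old_parity, new_data):
        # Read-modify-write: two reads (old data, old parity)
        # followed by two writes (new data, new parity).
        new_parity = xor(xor(old_parity, old_data), new_data)
        return new_data, new_parity

    def parity_disk(stripe, num_disks=4):
        # Parity walks right to left across the diagram above.
        return (num_disks - 1 - stripe) % num_disks

    print(small_write(b"\x41", b"\x40", b"\x45"))
    print([parity_disk(s) for s in range(4)])  # [3, 2, 1, 0] -> Ap, Bp, Cp, Dp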



The parity blocks are not read on data reads, since this would be unnecessary overhead and would diminish performance. The parity blocks are read, however, when a read of a data sector results in a cyclic redundancy check (CRC) error. In that case, the sectors in the same relative position within each of the remaining data blocks in the stripe, and within the stripe's parity block, are used to reconstruct the errant sector, so the CRC error is hidden from the main computer. Likewise, should a disk fail in the array, the parity blocks from the surviving disks are combined mathematically with the data blocks from the surviving disks to reconstruct the data on the failed drive "on the fly".
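
Reconstruction is just the XOR of the surviving blocks in the stripe, because the parity was itself the XOR of all the data blocks. A toy Python sketch:

    from functools import reduce

    def rebuild(surviving_blocks):
        """XOR together same-size blocks from the intact disks."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                      surviving_blocks)

    a1, a2, a3 = b"\x41", b"\x42", b"\x43"
    ap = rebuild([a1, a2, a3])          # the stripe's parity block
    assert rebuild([a1, a3, ap]) == a2  # the disk holding A2 failed; recovered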



This is sometimes called Interim Data Recovery Mode. The computer knows that a disk drive has failed, but only so that the operating system can notify the administrator that the drive needs replacement; applications running on the computer are unaware of the failure. Reading and writing to the drive array continue seamlessly, though with some performance degradation. One difference between RAID 4 and RAID 5 is that, in interim data recovery mode, RAID 5 might be slightly faster: on the stripes whose parity block was on the failed disk, no reconstruction calculation has to be performed at all, whereas with RAID 4, if one of the data disks fails, the reconstruction has to be performed on every access to that disk's data.



In RAID 5, where there is a single parity block per stripe, the failure of a second drive results in total data loss.



The maximum number of drives in a RAID 5 redundancy group is theoretically unlimited, but it is common practice to limit the number of drives. The tradeoffs of larger redundancy groups are greater probability of a simultaneous double disk failure, the increased time to rebuild a redundancy group, and the greater probability of encountering an unrecoverable sector during RAID reconstruction. As the number of disks in a RAID 5 group increases, the MTBF can become lower than that of a single disk. This happens when the likelihood of a second disk failing out of (N-1) dependent disks, within the time it takes to detect, replace and recreate a first failed disk, becomes larger than the likelihood of a single disk failing. RAID 6 is an alternative that provides dual parity protection thus enabling larger numbers of disks per RAID group.
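
To get a feel for the numbers, here is a back-of-envelope version of that argument in Python. The group size, per-disk MTBF, and rebuild window are purely illustrative assumptions, and the model ignores unrecoverable read errors:

    N = 12            # disks in the RAID 5 group (assumption)
    mtbf = 500_000.0  # per-disk MTBF in hours (assumption)
    rebuild = 24.0    # hours to detect, replace, and rebuild (assumption)

    rate = 1.0 / mtbf  # constant failure rate model
    # Probability that at least one of the remaining N-1 disks fails
    # during the rebuild window (small-rate approximation):
    p_loss = 1 - (1 - rate * rebuild) ** (N - 1)
    print(f"{p_loss:.3%} chance of a second failure per rebuild")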



Some RAID vendors will avoid placing disks from the same manufacturing run in a redundancy group to minimize the odds of simultaneous early life and end of life failures as evidenced by the bathtub curve.



RAID 5 implementations suffer from poor performance when faced with a workload that includes many writes smaller than a single stripe; this is because parity must be updated on each write, requiring read-modify-write sequences for both the data block and the parity block. More complex implementations often include a non-volatile write-back cache to reduce the performance impact of incremental parity updates.



In the event of a system failure while writes are in flight, the parity of a stripe may become inconsistent with the data; if this is not detected and repaired before a disk or block fails, data loss may ensue, as the incorrect parity will be used to reconstruct the missing block in that stripe. This potential vulnerability is sometimes known as the "write hole". Battery-backed cache and other techniques are commonly used to shrink this window of vulnerability.





RAID 6

A RAID 6 extends RAID 5 by adding an additional parity block, thus it uses block-level striping with two parity blocks distributed across all member disks. It was not one of the original RAID levels.



RAID 5 can be seen as a special case of a Reed-Solomon code where the syndrome used is the one built from generator 1. Thus RAID 5 only requires addition in the Galois field. Since we are operating on bytes, the field used is a binary Galois field GF(2^m), typically with m = 8, i.e. GF(2^8). In binary Galois fields, addition is computed by a simple XOR.



After understanding RAID 5 as a special case of a Reed-Solomon code, it is easy to see that it is possible to extend the approach to produce redundancy simply by producing another syndrome using a different generator; for example, 2 in GF(2^8). By adding additional generators it is possible to achieve any number of redundant disks, and recover from the failure of that many drives anywhere in the array.
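
A Python sketch of both syndromes follows. The reducing polynomial 0x11D is the one commonly used by RAID 6 implementations, but treat that choice, and the helper names, as assumptions of this sketch:

    def gf_mul(a, b):
        """Carry-less ("peasant") multiplication in GF(2^8), reduced mod 0x11D."""
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= 0x11D
        return p

    def gf_pow(g, n):
        r = 1
        for _ in range(n):
            r = gf_mul(r, g)
        return r

    def syndromes(data_bytes):
        """P uses generator 1 (plain XOR); Q uses generator 2."""
        p = q = 0
        for i, d in enumerate(data_bytes):
            p ^= d                        # P = d0 + d1 + ... (XOR)
            q ^= gf_mul(gf_pow(2, i), d)  # Q = sum over i of 2^i * d_i
        return p, q

    print(syndromes([0x41, 0x42, 0x43]))  # one byte from each of three data disks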



As in RAID 5, the parity is distributed in stripes, with the parity blocks in a different place in each stripe.



Traditional RAID 5        Typical RAID 6

A1  A2  A3  Ap            A1  A2  A3  Ap  Aq
B1  B2  Bp  B3            B1  B2  Bp  Bq  B3
C1  Cp  C2  C3            C1  Cp  Cq  C2  C3
Dp  D1  D2  D3            Dp  Dq  D1  D2  D3



Note: A1, B1, et cetera each represent one data block; each column represents one disk; p and q represent the two Reed-Solomon syndromes.



RAID 6 is inefficient when used with a small number of drives, but as arrays become bigger and gain more drives, the loss in storage capacity matters less while the probability of two disks failing at once grows. RAID 6 provides protection against double disk failures, and against a failure while a single disk is rebuilding. Where there is only one array, it can make more sense than keeping a "hot spare" disk.
2006-03-17 03:41:43 UTC
Raid zero came before and is not as advanced as Raid 1.

Raid one is meaner, kills more effectively and at a greater distance. It also has newer chemicals in it, because the roaches were getting used to Raid zero; they watch TV too, you know. They will also get used to Raid 1, but don't worry, the company is working very hard on this and they will come out with Raid 2. Someone told me that there is actually a Raid 6 already, but I am not sure of that.
willbanks
2016-12-18 14:52:38 UTC
RAID 0 isn't strictly RAID in any respect, just striping: the data is spread between two or more drives, theoretically combining the performance of each. RAID 1 is mirroring, where identical copies of your data are kept on each drive; you only get the performance of a single drive, but the data remains available even if a drive breaks down. Be careful of any advice you receive about performance. Traditionally RAID is implemented with smart controllers that are effectively special-purpose computers in their own right and are capable of blistering levels of performance; that is the setup most reviews of RAID have in mind. However, firmware-based RAID, now included on even ordinary consumer motherboards, is becoming increasingly common. These rely on the host PC to coordinate all disk activity, and as a consequence performance is often little better than that of a single drive.


This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.