Software RAID 0 Linux
| mdadm | |
|---|---|
| Original author(s) | Neil Brown |
| Developer(s) | Jes Sorensen |
| Initial release | 2001 |
| Stable release | 4.0 (January 9, 2017)[1] |
| Repository | git.kernel.org/pub/scm/utils/mdadm/mdadm.git/ |
| Written in | C |
| Operating system | Linux |
| Available in | English |
| Type | Disk utility |
| License | GNU GPL |
| Website | neil.brown.name/blog/mdadm |
- Software RAID 0 configuration in Linux. RAID is one of the most heavily used technologies for data performance and redundancy. RAID setups are classified into different levels based on requirements and functionality, and selecting a level always depends on the kind of operations you want to perform on the disks.
- Software RAID in Linux: overview. This article focuses on managing software RAID level 1 (RAID 1) in Linux, but a similar approach can be applied to other RAID levels. Software RAID in Linux is managed with the mdadm tool. The devices used by RAID are /dev/mdX, where X is the number of the RAID device, for example /dev/md0 or /dev/md1.
About software RAID. As the name implies, this is a RAID (Redundant Array of Inexpensive Disks) setup that is done completely in software instead of using a dedicated hardware card. The main advantage of such a setup is cost, as a dedicated card is an added premium on top of the base configuration of the system.
mdadm is a Linux utility used to manage and monitor software RAID devices. It is used in modern GNU/Linux distributions in place of older software RAID utilities such as raidtools2 or raidtools.[2][3][4]
mdadm is free software maintained by, and copyrighted to, Neil Brown of SUSE, and licensed under the terms of version 2 or later of the GNU General Public License.
Name
The name is derived from the md (multiple device) device nodes it administers or manages, and it replaced a previous utility mdctl.[citation needed] The original name was 'Mirror Disk', but was changed as more functions were added.[citation needed] The name is now understood to be short for Multiple Disk and Device Management.[2]
Overview
Linux software RAID configurations can include anything presented to the Linux kernel as a block device. This includes whole hard drives (for example, /dev/sda), and their partitions (for example, /dev/sda1).
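As a hedged illustration of how such block devices are combined into an array, the sketch below builds a simple RAID 1 mirror out of two partitions. The device names and the configuration-file path are assumptions and vary between systems and distributions.

```sh
# Sketch: build a RAID 1 mirror from two partitions (device names are examples).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Watch the initial resync and inspect the array.
cat /proc/mdstat
mdadm --detail /dev/md0

# Persist the array definition so it is assembled at boot
# (/etc/mdadm/mdadm.conf on Debian/Ubuntu, /etc/mdadm.conf on many other distros).
mdadm --detail --scan >> /etc/mdadm.conf

# Put a filesystem on the new MD device and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
```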
RAID configurations
- RAID 0 – Block-level striping. MD can handle devices of different lengths; the extra space on the larger device is then not striped.
- RAID 1 – Mirror.
- RAID 4 – Like RAID 0, but with an extra device for the parity.
- RAID 5 – Like RAID 4, but with the parity distributed across all devices.
- RAID 6 – Like RAID 5, but with two parity segments per stripe.
- RAID 10 – Take a number of RAID 1 mirrorsets and stripe across them RAID 0 style.
RAID 10 is distinct from RAID 0+1, which consists of a top-level RAID 1 mirror composed of high-performance RAID 0 stripes directly across the physical hard disks. A single-drive failure in a RAID 10 configuration results in one of the lower-level mirrors entering degraded mode, but the top-level stripe performing normally (except for the performance hit). A single-drive failure in a RAID 0+1 configuration results in one of the lower-level stripes completely failing, and the top-level mirror entering degraded mode. Which of the two setups is preferable depends on the details of the application in question, such as whether or not spare disks are available, and how they should be spun up.
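To make the levels above (and the difference between md's native RAID 10 and a manually nested 1+0) concrete, here is a rough sketch of the corresponding mdadm invocations. All device names are assumptions chosen for illustration.

```sh
# RAID 0 stripe across two devices (no redundancy).
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1

# RAID 5 across three devices (distributed parity).
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

# md's native RAID 10 across four devices.
mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sd[b-e]1

# The equivalent nested 1+0: two RAID 1 mirrors striped together with RAID 0.
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
mdadm --create /dev/md5 --level=0 --raid-devices=2 /dev/md3 /dev/md4
```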
Non-RAID configurations
- Linear – concatenates a number of devices into a single large MD device.
- Multipath – provides multiple paths with failover to a single device.
- Faulty – a single device which emulates a number of disk-fault scenarios for testing and development.
- Container – a group of devices managed as a single device, in which one can build RAID systems.
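For illustration, a minimal sketch of two of these modes; the device names are assumptions, and the faulty target is only meant for testing.

```sh
# Linear (concatenation): the second device is appended to the first,
# giving one large MD device with no striping or redundancy.
mdadm --create /dev/md6 --level=linear --raid-devices=2 /dev/sdb1 /dev/sdc1

# Faulty: wraps a single device and can inject simulated I/O errors,
# useful for testing how filesystems and applications react to disk faults.
mdadm --create /dev/md7 --level=faulty --raid-devices=1 /dev/sdd1
```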
Features
The original (standard) form of names for md devices is /dev/md<n>, where <n> is a number between 0 and 99. More recent kernels have support for names such as /dev/md/Home. Under 2.4.x kernels and earlier these two were the only options. Both of them are non-partitionable.
Since 2.6.x kernels, a new type of MD device was introduced, a partitionable array. The device names were modified by changing md to md_d. The partitions were identified by adding p<n>, where <n> is the partition number; thus /dev/md/md_d2p3 for example. Since version 2.6.28 of the Linux kernel mainline, non-partitionable arrays can be partitioned, the partitions being referred to in the same way as for partitionable arrays – for example, /dev/md/md1p2.
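As a hedged illustration of the naming schemes described above (array names, device names, and partition layout here are all assumptions):

```sh
# A named array: recent kernels/mdadm expose it as /dev/md/home
# (usually a symlink to a numbered node such as /dev/md127).
mdadm --create /dev/md/home --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# On kernels >= 2.6.28 an ordinary array can be partitioned directly;
# the partitions then appear as /dev/md0p1, /dev/md0p2, and so on.
parted /dev/md0 mklabel gpt
parted /dev/md0 mkpart primary ext4 0% 100%
lsblk /dev/md0   # should now list md0p1
```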
Since version 3.7 of the Linux kernel mainline, md supports TRIM operations for the underlying solid-state drives (SSDs), for linear, RAID 0, RAID 1, RAID 5 and RAID 10 layouts.[5]
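A short sketch of how this is typically exercised in practice; the array and mount point are assumptions.

```sh
# Check whether discard/TRIM is reported for the array
# (non-zero DISC-GRAN/DISC-MAX columns).
lsblk --discard /dev/md0

# Ask the filesystem to discard unused blocks; on a supported layout
# md passes the TRIM requests down to the underlying SSDs.
fstrim -v /mnt
```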
Booting
Since support for MD is found in the kernel, there is an issue with using it before the kernel is running. Specifically, it will not be present if the boot loader is either (e)LILO or GRUB legacy, and although GRUB 2 normally includes MD support, it may be absent. To circumvent this problem, a /boot filesystem must either be used without MD support or else reside on a RAID 1 array. In the latter case the system boots by treating the RAID 1 device as a normal filesystem; once the system is running, it can be remounted as MD and the second disk added to it. This results in a catch-up (resynchronization), but /boot filesystems are usually small.
With more recent bootloaders it is possible to load the MD support as a kernel module through the initramfs mechanism. This approach allows the /boot filesystem to be inside any RAID system without the need of a complex manual configuration.
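A hedged sketch of what this usually involves in practice; the exact commands are distribution-specific assumptions, not part of mdadm itself.

```sh
# Make sure the initramfs contains the md/mdadm bits needed to assemble
# the array before the root filesystem is mounted.
update-initramfs -u          # Debian/Ubuntu
# dracut --force             # Fedora/RHEL equivalent

# Install the boot loader on every member disk, so the machine can still
# boot if the disk holding the "primary" boot sector fails.
grub-install /dev/sda
grub-install /dev/sdb
```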
External metadata
Besides its own formats for RAID volume metadata, Linux software RAID also supports external metadata formats, since version 2.6.27 of the Linux kernel and version 3.0 of the mdadm userspace utility. This allows Linux to use various firmware- or driver-based RAID volumes, also known as 'fake RAID'.[6]
As of October 2013, there are two supported formats of the external metadata:
- DDF (Disk Data Format), an industry standard defined by the Storage Networking Industry Association for increased interoperability.[7]
- Volume metadata format used by the Intel Matrix RAID, implemented on many consumer-level motherboards.[6]
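For illustration, a rough sketch of how an IMSM container and a volume inside it might be created with mdadm. The device names are assumptions, and the platform firmware must actually support Intel Matrix RAID for the resulting volume to be visible to it.

```sh
# Show what IMSM support (if any) the platform's firmware advertises.
mdadm --detail-platform

# Create a container holding the external (IMSM) metadata on two whole disks.
mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/sda /dev/sdb

# Create a RAID 1 volume inside that container.
mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0
```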
mdmpd
mdmpd is a daemon used for monitoring MD multipath devices, developed by Red Hat as part of the mdadm package.[8] It monitors multipath (RAID) devices, is usually started at boot time as a service, and afterwards runs as a daemon.
Enterprise storage requirements often include the desire to have more than one way to talk to a single disk drive, so that in the event of a failure to reach a disk drive via one controller, the system can automatically switch to another controller and keep going. This is called multipath disk access. The Linux kernel implements multipath disk access via the software RAID stack known as the md (Multiple Devices) driver.

The kernel portion of the md multipath driver only handles routing I/O requests to the proper device and handling failures on the active path. It does not try to find out whether a path that has previously failed might be working again; that is what this daemon does. Upon startup, it reads the current state of the md RAID arrays, saves that state, and then waits for the kernel to tell it that something interesting has happened. It then wakes up, checks whether any paths on a multipath device have failed, and if they have, it starts to poll the failed path once every 15 seconds until it starts working again. Once the path starts working again, the daemon adds it back into the multipath md device it was originally part of, as a new spare path.
If one is using the /proc filesystem, /proc/mdstat lists all active md devices with information about them. Mdmpd requires this to find arrays to monitor paths on and to get notification of interesting events.
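mdmpd itself is an older Red Hat daemon; as a hedged sketch of the same monitoring workflow on a current system using plain mdadm (option names are taken from current mdadm, not from mdmpd):

```sh
# Snapshot of all active md devices, their members, and resync progress.
cat /proc/mdstat

# Run the stock mdadm monitor as a daemon and mail alerts for events
# such as a failed member or a degraded array.
mdadm --monitor --scan --daemonise --mail=root
```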
See also
- bioctl on OpenBSD/NetBSD
References
- ^ Sorensen, Jes (2017-01-09). 'ANNOUNCE: mdadm 4.0 - A tool for managing md Soft RAID under Linux'. Retrieved 2017-12-26.
- ^ a b Bresnahan, Christine; Blum, Richard (2016). LPIC-2: Linux Professional Institute Certification Study Guide. John Wiley & Sons. pp. 206–221. ISBN 9781119150817.
- ^ Vadala, Derek (2003). Managing RAID on Linux. O'Reilly Media, Inc. ISBN 9781565927308.
- ^ Nemeth, Evi (2011). UNIX and Linux System Administration Handbook. Pearson Education. pp. 242–245. ISBN 9780131480056.
- ^ 'Linux kernel 3.7, Section 5. Block'. kernelnewbies.org. 2012-12-10. Retrieved 2014-09-21.
- ^ a b 'External Metadata'. RAID Setup. kernel.org. 2013-10-05. Retrieved 2014-01-01.
- ^ 'DDF Fake RAID'. RAID Setup. kernel.org. 2013-09-12. Retrieved 2014-01-01.
- ^ 'Updated mdadm package includes multi-path device enhancements'. RHEA-2003:397-06. Red Hat. 2004-01-16.
External links
- Wikiversity has learning resources about mdadm (quick reference).
- Krafft, Martin F. 'mdadm recipes'. Debian. Archived from the original on 2013-07-04.
- 'Quick HOWTO: Ch26: Linux Software RAID'. linuxhomenetworking.com.
- 'Installation/SoftwareRAID'. Ubuntu Community Documentation. 2012-03-01.
- Lonezor (2011-11-13). 'Setting up a RAID volume in Linux with >2TB disks'. Archived from the original on 2011-11-19.
For a long time, I've been thinking about switching to RAID 10 on a few servers. Now that Ubuntu 10.04 LTS is live, it's time for an upgrade. The servers I'm using are HP ProLiant ML115 (very good value). Each has four internal 3.5" slots. I'm currently using one drive for the system and a software RAID 5 array for the remaining three disks.
The problem is that this creates a single point of failure on the boot drive. Hence I'd like to switch to a RAID 10 array, as it would give me both better I/O performance and more reliability. The problem is that good controller cards that support RAID 10 (such as 3Ware) cost almost as much as the server itself. Moreover, software RAID 10 does not seem to work very well with GRUB.
What is your advice? Should I just keep running RAID 5? Has anyone been able to successfully install a software RAID 10 without boot issues?
3 Answers
I would be inclined to go for RAID 10 in this instance, unless you need the extra space offered by the single+RAID5 arrangement. You get the same guaranteed redundancy (any one drive can fail and the array will survive) and slightly better redundancy in worse cases (RAID 10 can survive 4 of the 6 'two drives failed at once' scenarios), and you don't have the write penalty often experienced with RAID 5.
You are likely to have trouble booting off RAID 10, whether implemented as a traditional nested array (two RAID 1s in a RAID 0) or using Linux's recent all-in-one RAID 10 driver, because both LILO and GRUB expect to have all the information needed to boot on one drive, which may not be the case with RAID 0 or 10 (or software RAID 5, for that matter; it works in hardware because the boot loader only sees one drive and the controller deals with where the data is actually spread amongst the drives).
There is an easy way around this though: just have a small partition (128MB should be more than enough - you only need room for a few kernel images and associated initrd files) at the beginning of each of the drives and set these up as a RAID 1 array which is mounted as /boot. You just need to make sure that the boot loader is correctly installed on each drive, and all will work fine (once the kernel and initrd are loaded, they will cope with finding the main array and dealing with it properly).
The software RAID 10 driver has a number of options for tweaking block layout that can bring further performance benefits depending on your I/O load pattern (see here for some simple benchmarks), though I'm not aware of any distributions that support this form of RAID 10 at install time yet (only the more traditional nested arrangement). If you want to try the RAID 10 driver and your distro doesn't support it at install time, you could install the entire base system into a RAID 1 array as described for /boot above and build the RAID 10 array with the rest of the disk space once booted into that.
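A rough sketch of the arrangement described in this answer, for a four-disk box like the one in the question; partition names, sizes, and filesystem choices are assumptions.

```sh
# Small RAID 1 across the first partition of each disk, to be mounted as /boot.
mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[a-d]1

# The rest of each disk goes into the RAID 10 array holding everything else.
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[a-d]2

mkfs.ext4 /dev/md0    # /boot
mkfs.ext4 /dev/md1    # root filesystem (or an LVM PV, depending on the setup)

# Install the boot loader on every disk so any one of them can start the system.
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do grub-install "$d"; done
```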
For up to 4 drives, or as many SATA drives as you can connect to the motherboard, you are in many cases better served by using the motherboard SATA connectors and Linux MD software RAID than by HW RAID. For one thing, the on-board SATA connections go directly to the southbridge, with a speed of about 20 Gbit/s; many HW controllers are slower. Linux MD software RAID is also often faster and much more flexible and versatile than HW RAID. For example, the Linux MD RAID10-far layout gives you almost RAID 0 reading speed. And you can have multiple partitions of different RAID types with Linux MD RAID, for example /boot on RAID 1, and then the root and other partitions in raid10-far for speed, or RAID 5 for space. A further argument is cost: buying an extra RAID controller is often more costly than just using the on-board SATA connections ;-)
A setup with /boot on raid can be found on https://raid.wiki.kernel.org/index.php/Preventing_against_a_failing_disk .
More info on Linux RAID can be found on the Linux RAID kernel group wiki at https://raid.wiki.kernel.org/
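As a hedged sketch of what the raid10-far layout mentioned above looks like on the command line (device names are assumptions):

```sh
# RAID 10 with the "far 2" layout: data is striped RAID 0 style across the
# first half of every disk and mirrored into the second half, which gives
# close to RAID 0 sequential read performance.
mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=4 /dev/sd[a-d]2
```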
Well, you pretty much answered your own question.
- You don't have the money for a hardware RAID 10 card
- GRUB doesn't support booting from a software RAID 10 array
This implies that you can't use RAID10.
How about using RAID 5 across all disks? This doesn't sound like a high-end (or high-traffic) server to me, so the performance penalty probably won't be that bad.
Edit: I just googled a bit, and it seems like GRUB can't read software RAID. It needs a boot loader on every disk that you want to boot from (in RAID 5: every disk). This seems extremely clumsy to me; have you considered buying a used RAID 5 card from eBay?
pauska