Logical Volume Manager

Logical Volume Management is a method of partitioning hard disk drives that provides more flexibility in managing storage space than traditional disk partitioning. The Linux implementation, the Logical Volume Manager (LVM), has been a feature of the Linux kernel since about 1999 and was contributed by Sistina Software, Inc., a company later acquired by Red Hat.

What is LVM?

Other UNIX-like operating systems, such as AIX, HP-UX, and Sun Solaris, have their own implementations of logical volume management. Until recently, the BSD distributions had no feature-equivalent technology. FreeBSD only recently added experimental support for ZFS (originally the Zettabyte File System), a Sun Solaris technology that includes LVM-like volume management as a subset of its capabilities.

Features of LVM

  • Use and allocate disk space more efficiently and flexibly

  • Move logical volumes between different physical devices

  • Have very large logical volumes span a number of physical devices

  • Take snapshots of whole filesystems easily, allowing on-line backup of those filesystems

  • Replace drives on-line without interrupting services (a command sketch of these operations follows this list)
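These features map onto a small set of commands. The following is only a minimal sketch of a typical workflow; the device names, the volume group name vg_data, the logical volume names, and the sizes are all hypothetical:

# Mark the physical devices for LVM use
pvcreate /dev/sdb1 /dev/sdc1

# Group them into a volume group
vgcreate vg_data /dev/sdb1 /dev/sdc1

# Carve out a logical volume that may span both devices
lvcreate -L 20G -n lv_home vg_data

# Take a snapshot of the volume for an on-line backup
lvcreate -s -L 2G -n lv_home_snap /dev/vg_data/lv_home

# Move all extents off /dev/sdb1 so the drive can be replaced
pvmove /dev/sdb1
vgreduce vg_data /dev/sdb1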

New and Changed Features for Red Hat Enterprise Linux 6.0

Red Hat Enterprise Linux 6.0 includes the following feature updates and changes.

  1. You can define how a mirrored logical volume behaves in the event of a device failure with the mirror_image_fault_policy and mirror_log_fault_policy parameters in the activation section of the lvm.conf file. When a policy is set to remove, the system attempts to remove the faulty device and run without it. When a policy is set to allocate, the system removes the faulty device and tries to allocate space on a new device to replace it; this policy acts like the remove policy if no suitable device and space can be allocated for the replacement.
  2. For the Red Hat Enterprise Linux 6 release, the Linux I/O stack has been enhanced to process vendor-provided I/O limit information. This allows storage management tools, including LVM, to optimize data placement and access. This support can be disabled by changing the default values of data_alignment_detection and data_alignment_offset_detection in the lvm.conf file, although disabling it is not recommended.

For information on data alignment in LVM, and on changing the default values of data_alignment_detection and data_alignment_offset_detection, see the inline documentation in the /etc/lvm/lvm.conf file. An illustrative excerpt of the settings mentioned in the two items above follows.
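The values shown here are only illustrative; check the comments in your own lvm.conf for the authoritative defaults:

# Excerpt from /etc/lvm/lvm.conf (values illustrative)
devices {
    # Set to 0 to disable use of vendor-provided I/O limit information
    data_alignment_detection = 1
    data_alignment_offset_detection = 1
}

activation {
    # How a mirror reacts when an image or log device fails:
    #   "remove"   - drop the faulty device and keep running
    #   "allocate" - drop it and try to allocate a replacement elsewhere
    mirror_image_fault_policy = "remove"
    mirror_log_fault_policy = "allocate"
}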

  3. In Red Hat Enterprise Linux 6, the Device Mapper provides direct support for udev integration. This synchronizes the Device Mapper with all udev processing related to Device Mapper devices, including LVM devices.

  4. For the Red Hat Enterprise Linux 6 release, you can use the lvconvert --repair command to repair a mirror after disk failure. This brings the mirror back into a consistent state.

  5. As of the Red Hat Enterprise Linux 6 release, you can use the --merge option of the lvconvert command to merge a snapshot into its origin volume.

  6. As of the Red Hat Enterprise Linux 6 release, you can use the --splitmirrors argument of the lvconvert command to split off a redundant image of a mirrored logical volume to form a new logical volume.
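A brief sketch of the lvconvert operations described in items 4 to 6; the volume group and logical volume names are hypothetical:

# Repair a mirrored logical volume after a disk failure
lvconvert --repair vg_data/lv_mirror

# Merge a snapshot back into its origin volume
lvconvert --merge vg_data/lv_home_snap

# Split one image off a two-way mirror into a new logical volume
lvconvert --splitmirrors 1 --name lv_copy vg_data/lv_mirror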

Disadvantages

  • The levels of indirection that volume managers introduce can complicate the boot process and make disaster recovery difficult, especially when the base operating system and other essential tools are themselves on an LV.

  • Logical volumes can suffer from external fragmentation when the underlying storage devices do not allocate their PEs contiguously. This can reduce I/O performance on slow-seeking media (such as magnetic disks), which have to seek over the gaps between extents during large sequential reads or writes. Volume managers which use fixed-size PEs, however, typically make PEs relatively large (a default of 4 MB on the Linux LVM, for example) in order to amortize the cost of these seeks.
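The extent size and the way a volume's extents are laid out on disk can be inspected with standard LVM tools; the volume group and device names below are hypothetical:

# Show the physical extent size of a volume group (4 MB by default)
vgdisplay vg_data | grep "PE Size"

# Show which physical extents each segment of each LV occupies
pvdisplay --maps /dev/sdb1
lvs -o +seg_pe_ranges vg_data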

RAID Levels

What is RAID?

  • RAID allows information to be spread across several disks. RAID uses techniques such as disk striping (RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Level 5) to achieve redundancy, lower latency, increased bandwidth, and maximized ability to recover from hard disk crashes.

  • RAID breaks the data down into consistently sized chunks (commonly 32 KB or 64 KB, although other values are acceptable) and distributes them evenly across each drive in the array (see the example after this list).

  • Each chunk is then written to a hard drive in the RAID array according to the RAID level employed. When the data is read, the process is reversed, giving the illusion that the multiple drives in the array are actually one large drive.
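For example, a striped (RAID Level 0) array with an explicit 64 KB chunk size could be created with mdadm roughly as follows; the device names are hypothetical:

# Two-disk stripe set with 64 KB chunks; successive chunks are
# written to /dev/sdb1 and /dev/sdc1 in turn
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sdb1 /dev/sdc1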

Who Should Use RAID?

System Administrators and others who manage large amounts of data would benefit from using RAID technology. Primary reasons to deploy RAID include:

  • Enhances speed

  • Increases storage capacity using a single virtual disk

  • Minimizes data loss from disk failure

There are two possible RAID approaches: Hardware RAID and Software RAID.

Hardware RAID

  • The hardware-based array manages the RAID subsystem independently from the host. It presents a single disk per RAID array to the host.

  • A Hardware RAID device connects to the SCSI controller and presents the RAID arrays as a single SCSI drive.

  • An external RAID system moves all RAID handling intelligence into a controller located in the external disk subsystem. The whole subsystem is connected to the host via a normal SCSI controller and appears to the host as a single disk.

  • RAID controller cards function like a SCSI controller to the operating system, and handle all the actual drive communications.

  • The user plugs the drives into the RAID controller (just like a normal SCSI controller) and then adds them to the RAID controller's configuration; the operating system won't know the difference.

Software RAID

  • Software RAID implements the various RAID levels in the kernel disk (block device) code. It offers the cheapest possible solution, as expensive disk controller cards or hot-swap chassis are not required.

  • Software RAID works with cheaper IDE disks as well as SCSI disks. With today's faster CPUs, Software RAID generally outperforms Hardware RAID.

  • The Linux kernel contains an MD driver that allows the RAID solution to be completely hardware independent. The performance of a software-based array depends on the server CPU performance and load.

The key features of Software RAID are listed below; an example of monitoring an array from the command line follows the list:

  • Threaded rebuild process

  • Kernel-based configuration

  • Portability of arrays between Linux machines without reconstruction

  • Backgrounded array reconstruction using idle system resources

  • Hot-swappable drive support

  • Automatic CPU detection to take advantage of certain CPU optimizations
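For instance, a reconstruction running in the background can be watched and throttled roughly as follows; the array name is hypothetical:

# The MD driver reports array state and rebuild progress here
cat /proc/mdstat

# Detailed state of a single array
mdadm --detail /dev/md0

# Minimum and maximum rebuild speed (KB/s per device) used by the
# backgrounded reconstruction
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max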

Configuring Software RAID

Users can configure Software RAID during the graphical installation process (Disk Druid), the text-based installation process, or during a kickstart installation.

  • Creating software RAID partitions on the physical hard drives.

To include the boot partition (/boot/) in a RAID configuration, it must be placed on a RAID1 partition.

  • Creating RAID devices from the software RAID partitions.

  • Optional: Configuring LVM from the RAID devices.

  • Creating file systems from the RAID devices (a rough command-line equivalent of these steps is sketched below).
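Outside the installer, roughly the same steps can be performed by hand. The sketch below assumes two disks and uses hypothetical partition, device, and volume group names:

# 1. Create software RAID partitions on each disk (e.g. /dev/sda1 and /dev/sdb1
#    for /boot, /dev/sda2 and /dev/sdb2 for the rest)

# 2. Create the RAID devices; the /boot partition must live on RAID1
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# 3. Optional: put LVM on top of a RAID device
pvcreate /dev/md1
vgcreate vg_raid /dev/md1
lvcreate -L 20G -n lv_root vg_raid

# 4. Create file systems on the RAID devices (or logical volumes)
mkfs.ext4 /dev/md0
mkfs.ext4 /dev/vg_raid/lv_root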

Linear mode

Ok, so you have two or more partitions which are not necessarily the same size (but of course can be), which you want to append to each other.

Spare disks are not supported here. If a disk dies, the array dies with it; there is no redundant information that could be rebuilt onto a spare. Using mdadm, the array can be created with a single command like:

mdadm --create --verbose /dev/md0 --level=linear --raid-devices=2 /dev/sdb6 /dev/sdc5

The output might look like this:

mdadm: chunk size defaults to 64K
mdadm: array /dev/md0 started.

Have a look in /proc/mdstat. You should see that the array is running.
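For this example, /proc/mdstat might look roughly like the following; the block count is purely illustrative:

Personalities : [linear]
md0 : active linear sdc5[1] sdb6[0]
      488383744 blocks 64k rounding

unused devices: <none>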

Now, you can create a filesystem, just like you would on any other device, mount it, and include it in your /etc/fstab and so on.
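For example, assuming an ext4 filesystem and a /data mount point (both arbitrary choices):

# Create a filesystem on the new array and mount it
mkfs.ext4 /dev/md0
mkdir -p /data
mount /dev/md0 /data

# Sample /etc/fstab entry
/dev/md0    /data    ext4    defaults    0 2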
