
LVM – An Introduction

by SupportPRO Admin

LVM HOWTO

An Introduction

This is an attempt to collect everything you need to know to get LVM up and running. The entire process of getting, compiling, installing, and setting up LVM will be covered. Pointers to LVM configurations that have been tested will also be included. This version of the HowTo is for LVM 2 with device-mapper and for LVM 1.0.8. All previous versions of LVM are considered obsolete and are only kept for historical reasons. This document makes no attempt to explain or describe the workings or use of those versions.

What is LVM?

LVM is a Logical Volume Manager for the Linux operating system. There are two versions of LVM for Linux:

* LVM 2 – The latest and greatest version of LVM for Linux.

LVM 2 is almost completely backward-compatible with volumes created with LVM 1. The exception to this is snapshots (You must remove snapshot volumes before upgrading to LVM 2)

LVM 2 uses the device-mapper kernel driver. Device-mapper support is in the 2.6 kernel tree and there are patches available for current 2.4 kernels.

* LVM 1 – The version that is in the 2.4 series kernel, LVM 1 is a mature product that has been considered stable for a couple of years. The kernel driver for LVM 1 is included in the 2.4 series kernels, but this does not mean that your 2.4.x kernel is up to date with the latest version of LVM. Look at the README for the latest information about which kernels have the current code in them.

Why Logical Volume Management?

* Logical volume management provides a higher-level view of the disk storage on a computer system than the traditional view of disks and partitions. This gives the system administrator much more flexibility in allocating storage to applications and users. Storage volumes created under the control of the logical volume manager can be resized and moved around almost at will, although this may need some upgrading of file system tools.
The logical volume manager also allows the management of storage volumes in user-defined groups, allowing the system administrator to deal with sensibly named volume groups such as development and sales rather than physical disk names such as sda and sdb.

* Logical volume management is traditionally associated with large installations containing many disks but it is equally suited to small systems with a single disk or maybe two.

Benefits of Logical Volume Management on a Small System

One of the difficult decisions facing a new user installing Linux for the first time is how to partition the disk drive. The need to estimate just how much space is likely to be needed for system files and user files makes the installation more complex than is necessary, and some users simply opt to put all their data into one large partition in an attempt to avoid the issue. Once the user has guessed how much space is needed for /home, /usr, and / (or has let the installation program do it), it is quite common for one of these partitions to fill up even if there is plenty of disk space in one of the other partitions.

With logical volume management, the whole disk would be allocated to a single volume group and logical volumes created to hold the /, /usr, and /home file systems. If, for example, the /home logical volume later filled up but there was still space available on /usr, then it would be possible to shrink /usr by a few megabytes and reallocate that space to /home.
Another alternative would be to allocate minimal amounts of space for each logical volume and leave some of the disk unallocated. Then, when the partitions start to fill up, they can be expanded as necessary.
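
As a rough illustration of that reallocation, the sketch below assumes the disk is in a single volume group called vg00 (as in Jane's example further down), that both file systems are ext2, and that 500MB is being moved; the e2fsadm helper used here to resize a logical volume and its file system together is described later in this HOWTO:

# umount /usr ; umount /home
# e2fsadm -L-500M /dev/vg00/usr    # shrink /usr's logical volume and its ext2 file system
# e2fsadm -L+500M /dev/vg00/home   # hand the reclaimed space to /home
# mount /usr ; mount /home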

As an example: Joe buys a PC with an 8.4 Gigabyte disk on it and installs Linux using the following partitioning system:

/boot /dev/hda1 10 Megabytes
swap /dev/hda2 256 Megabytes
/ /dev/hda3 2 Gigabytes
/home /dev/hda4 6 Gigabytes

This, he thinks, will maximize the amount of space available for all his MP3 files.

Sometime later Joe decides that he wants to install the latest office suite and desktop UI available, but realizes that the root partition isn't large enough. But, having archived all his MP3s onto a new writable DVD drive, there is plenty of space on /home.

His options are not good:

1. Reformat the disk, change the partitioning scheme and reinstall.

2. Buy a new disk and figure out some new partitioning scheme that will require the minimum of data movement.

3. Set up a symlink farm on / pointing to /home and install the new software on /home.

With LVM this becomes much easier:

Jane buys a similar PC but uses LVM to divide up the disk in a similar manner:

/boot /dev/hda1 10 Megabytes
swap /dev/vg00/swap 256 Megabytes
/ /dev/vg00/root 2 Gigabytes
/home /dev/vg00/home 6 Gigabytes

Note: /boot is not included in the LV because bootloaders don't understand LVM volumes yet. It's possible that /boot on LVM will work, but you run the risk of having an unbootable system.

Root on LVM should be used by advanced users only.
Root on LVM requires an initrd image that activates the root LV. If a kernel is upgraded without building the necessary initrd image, that kernel will be unbootable. Newer distributions support LVM in their mkinitrd scripts as well as their packaged initrd images, so this becomes less of an issue over time.

When she hits a similar problem she can reduce the size of /home by a gigabyte and add that space to the root partition.

Suppose that Joe and Jane then manage to fill up the /home partition as well and decide to add a new 20 Gigabyte disk to their systems. Joe formats the whole disk as one partition (/dev/hdb1), moves his existing /home data onto it, and uses the new disk as /home. But he has 6 gigabytes unused, or he has to use symlinks to make that disk appear as an extension of /home, say /home/joe/old-mp3s. Jane simply adds the new disk to her existing volume group and extends her /home logical volume to include the new disk. Or, in fact, she could move the data from /home on the old disk to the new disk and then extend the existing root volume to cover all of the old disk.
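
Jane's side of that upgrade is only a handful of commands. A minimal sketch, assuming the new disk shows up as /dev/hdb, the volume group is vg00 as above, and /home is ext2 (sizes are illustrative):

# pvcreate /dev/hdb                # initialise the new 20 GB disk as a physical volume
# vgextend vg00 /dev/hdb           # add it to the existing volume group
# umount /home
# lvextend -L+19G /dev/vg00/home   # grow the logical volume onto the new disk
# resize2fs /dev/vg00/home         # grow the ext2 file system to match
# mount /home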

Benefits of Logical Volume Management on a Large System

The benefits of logical volume management are more obvious on large systems with many disk drives.

* Managing a large disk farm is a time-consuming job, made particularly complex if the system contains many disks of different sizes. Balancing the (often conflicting) storage requirements of various users can be a nightmare.

* User groups can be allocated to volume groups and logical volumes and these can be grown as required. It is possible for the system administrator to hold back disk storage until it is required.

* It can then be added to the volume (user) group that has the most pressing need.
When new drives are added to the system, it is no longer necessary to move users' files around to make the best use of the new storage; simply add the new disk to an existing volume group or groups and extend the logical volumes as necessary.

* It is also easy to take old drives out of service by moving the data from them onto newer drives; this can be done online, without disrupting user service.

Anatomy of LVM

* volume group (VG) – The Volume Group is the highest level abstraction used within the LVM. It gathers together a collection of Logical Volumes and Physical Volumes into one administrative unit.

* physical volume (PV) – A physical volume is typically a hard disk, though it may well just be a device that looks like a hard disk (eg. a software raid device).

* logical volume (LV) – The equivalent of a disk partition in a non-LVM system. The LV is visible as a standard block device; as such, the LV can contain a file system (eg. /home).

* physical extent (PE) – Each physical volume is divided into chunks of data, known as physical extents; these extents have the same size as the logical extents for the volume group.

* logical extent (LE) – Each logical volume is split into chunks of data, known as logical extents. The extent size is the same for all logical volumes in the volume group.

Tying it all together:

A concrete example will help:

Let us suppose we have a volume group called VG1 with a physical extent size of 4MB. Into this volume group we introduce two hard disk partitions, /dev/hda1 and /dev/hdb1. These partitions become physical volumes PV1 and PV2 (more meaningful names can be given at the administrator's discretion). The PVs are divided up into 4MB chunks, since this is the extent size for the volume group. The disks are different sizes and we get 99 extents in PV1 and 248 extents in PV2. We can now create a logical volume of any size between 1 and 347 (248 + 99) extents. When the logical volume is created, a mapping is defined between logical extents and physical extents; eg. logical extent 1 could map onto physical extent 51 of PV1, and data written to the first 4MB of the logical volume would, in fact, be written to the 51st extent of PV1.
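
To see such a mapping on a real system, a sketch along these lines works with the LVM 2 tools (-s sets the 4MB extent size used in the example above, and lvdisplay -m prints the logical-to-physical extent mapping):

# vgcreate -s 4M VG1 /dev/hda1 /dev/hdb1    # PV1 and PV2, 4 MB extents
# lvcreate -l 100 -n LV1 VG1                # a 100-extent logical volume
# lvdisplay -m /dev/VG1/LV1                 # show which physical extents back each logical extent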

mapping modes (linear/striped):

The administrator can choose between a couple of general strategies for mapping logical extents onto physical extents:

1. Linear mapping will assign a range of PEs to an area of an LV in order, eg. LE 1 – 99 map onto PV1 and LE 100 – 347 map onto PV2.

2. Striped mapping will interleave the chunks of the logical extents across a number of physical volumes, eg.

1st chunk of LE[1] -> PV1[1],
2nd chunk of LE[1] -> PV2[1],
3rd chunk of LE[1] -> PV3[1],
4th chunk of LE[1] -> PV1[2],

and so on. In certain situations, this strategy can improve the performance of the logical volume.

In LVM 2, striped LVs can be extended by concatenating another set of devices onto the end of the first set. So you can get into a situation where your LV is a 2-stripe set concatenated with a linear set concatenated with a 4-stripe set. Are you confused yet?

Snapshots

A wonderful facility provided by LVM is snapshots. This allows the administrator to create a new block device which is an exact copy of a logical volume, frozen at some point in time. Typically this would be used when some batch processing, a backup for instance, needs to be performed on the logical volume, but you don't want to halt a live system that is changing the data. When the snapshot is no longer needed, the system administrator can simply remove the device. This facility does require that the snapshot be made at a time when the data on the logical volume is in a consistent state; later sections of this document give some examples of this.

Acquiring LVM

The first thing you need to do is get a copy of LVM.

* Download via FTP a tarball of LVM.

* Download the source that is under active development via CVS

Download the source

* DeviceMapper
* LVM2

Download the development source via CVS

To build LVM from the CVS sources, you must have several GNU tools:

the CVS client version 1.9 or better
GCC 2.95.2
GNU make 3.79
autoconf, version 2.13 or better
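
A quick way to check what you already have installed (a sketch; output formats vary between versions):

# cvs --version | head -1
# gcc --version | head -1
# make --version | head -1
# autoconf --version | head -1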

Initial Setup

To make life easier in the future with regards to updating the CVS tree create the file
$HOME/.cvsrc and insert the following lines. This configures useful defaults for the three most commonly used CVS commands. Do this now before proceeding any further.

diff -u -b -B
checkout -P
update -d -P

Checking Out Source Code

Device Mapper library and tools

The device-mapper library is required to build LVM 2.

The first time you download from cvs, you must log in
# cvs -d :pserver:cvs@sources.redhat.com:/cvs/dm login cvs

The password is cvs. The command outputs nothing if successful and an error message if it fails. Only an initial login is required; all subsequent CVS commands read the password stored in the file $HOME/.cvspass for authentication.

Use the following to check out a copy of the code

# cvs -d :pserver:cvs@sources.redhat.com:/cvs/dm checkout device-mapper

This will create a new directory device-mapper in your current directory containing the latest, up-to-the-minute device mapper code.

LVM 1

The first time you download from cvs, you must log in

# cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm login cvs

The password is cvs. The command outputs nothing if successful and an error message if it fails. Only an initial login is required; all subsequent CVS commands read the password stored in the file $HOME/.cvspass for authentication.

Use the following to check out a copy of the code

# cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm checkout LVM

This will create a new directory LVM in your current directory containing the latest, up-to-the-minute LVM 1 code.

CVS commands work from anywhere inside the source tree and recurse downward. So if you happen to issue an update from inside the tools subdirectory it will work fine, but only update the tools directory and its subdirectories. In the following command examples, it is assumed that you are at the top of the source tree.

Code Updates

Code changes are made fairly frequently in the CVS repository. Announcements of this are automatically sent to the lvm-commit list.

You can update your copy of the sources to match the master repository with the update command. It is not necessary to check out a new copy. Using update is significantly faster and simpler, as it will download only patches instead of entire files and update only those files that have changed since your last update. It will automatically merge any changes in the CVS repository with any local changes you have made as well. Just cd to the directory you’d like to update and then type the following.

# cvs update

If you did not specify a tag when you checked out the source, this will update your sources to the latest version on the main branch. If you specified a branch tag, it will update to the latest version on that branch. If you specified a version tag, it will not do anything.

Hacking the Code

So, have you found a bug you want to fix? Want to implement a feature from the TODO list? Got a new feature to implement? Hacking the code couldn’t be easier. Just edit your copy of the sources. No need to copy files to .orig or anything. CVS has copies of the originals. When you have your code in a working state and have tested as best you can with the hardware you have, generate a patch against the current sources in the CVS repository.

# cvs update
# cvs diff > patchfile

If someone else has been working on the same files as you have, you may find that there are conflicting modifications. You'll discover this when you try to update your sources.
# cvs update

RCS file: LVM/tools/pvcreate.c,v
retrieving revision 1.5
retrieving revision 1.6
Merging differences between 1.5 and 1.6 into pvcreate.c
rcsmerge: warning: conflicts during merge
cvs server: conflicts found in tools/pvcreate.c
C tools/pvcreate.c

Don't panic! Your working file, as it existed before the update, is saved under the filename .#pvcreate.c.1.5. You can always recover it should things go horribly wrong. The file named pvcreate.c now contains both the old (i.e. your) version and the new version of the lines that conflicted. You simply edit the file and resolve each conflict by deleting the unwanted version of the lines involved.

<<<<<<< pvcreate.c
j++;
=======
j;
>>>>>>> 1.6

Building the kernel modules

* Build the LVM 1 kernel module

To use LVM 1 you will have to build the LVM 1 kernel module (recommended), or, if you prefer, rebuild the kernel with the LVM 1 code statically linked into it.
Your Linux system is probably based on one of the popular distributions (eg. Red Hat, SuSE, Debian), in which case it is possible that you already have the LVM 1 module. Check the version of the tools you have on your system. You can do this by running any of the LVM command line tools with the -h flag; use pvscan -h if you don't know any of the commands. If the version number listed at the top of the help listing is LVM 1.0.8, you can use your current setup and skip the rest of this section.

* Building a patch for your kernel

In order to patch the Linux kernel to support LVM 1.0.8, you must do the following:

1. Unpack LVM 1.0.8

# tar zxf lvm_1.0.8.tar.gz

2. Enter the root directory of that version.

# cd LVM/1.0.8

3. Run configure

# ./configure

4. You will need to pass the option --with-kernel_dir to configure if your Linux kernel source is not in /usr/src/linux. (Run ./configure --help to see all the options available.)

5. Enter the PATCHES directory

# cd PATCHES

6. Run make

# make

7. You should now have a patch called lvm-1.0.8-$KERNELVERSION.patch in the patches directory. This is the LVM kernel patch referenced in later sections of the howto.

8. Patch the kernel

# cd /usr/src/linux ; patch -pX < /directory/lvm-1.0.8-$KERNELVERSION.patch

Building the LVM module for Linux 2.2.17+

The 2.2 series kernel needs to be patched before you can start building; look elsewhere for instructions on how to patch your kernel.
Patches:

1. rawio patch

Stephen Tweedie's raw_io patch, which can be found at http://www.kernel.org/pub/linux/kernel/people/sct/raw-io

2. lvm patch

The relevant LVM 1 patch, which should be built out of the PATCHES sub-directory of the LVM distribution.

Once the patches have been correctly applied, you need to make sure that the module is actually built. LVM 1 lives under the block devices section of the kernel config; you should probably request that the LVM /proc information is compiled in as well. Build the kernel and modules as usual (a typical sequence is sketched below).
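
For reference, a typical 2.2-era sequence looks roughly like this sketch; substitute your own configuration target and install procedure:

# cd /usr/src/linux
# make menuconfig                   # enable LVM (and the LVM /proc information) under the block devices section
# make dep clean bzImage modules    # 'make dep' is still required on 2.2 kernels
# make modules_install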

Building the LVM modules for Linux 2.4

The 2.4 kernel comes with LVM 1 already included, although you should check the Sistina web site for updates (eg. v2.4.9 kernels and earlier must have the latest LVM 1 patch applied). When configuring your kernel, look for LVM 1 under Multi-device support (RAID and LVM). LVM 1 can be compiled into the kernel or as a module. Build your kernel and modules and install them in the usual way. If you chose to build LVM as a module it will be called lvm-mod.o. If you want to use snapshots with ReiserFS, make sure you apply the Linux-2.4.x-VFS-lock patch (there are copies of this in the LVM/1.0.8/PATCHES directory).

Checking the proc file system

If your kernel was compiled with the /proc file system (most are) then you can verify that LVM is present by looking for a /proc/lvm directory. If this doesn't exist then you may have to load the module with the command:
# modprobe lvm-mod

If /proc/lvm still does not exist then check your kernel configuration carefully.
When LVM is active you will see entries in /proc/lvm for all your physical volumes, volume groups and logical volumes. In addition, there is a file called /proc/lvm/global which gives a summary of the LVM status and also shows just which version of the LVM kernel you are using.
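
For example, once LVM is active:

# ls /proc/lvm
# cat /proc/lvm/global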

LVM 1 Boot time scripts

Boot-time scripts are not provided as part of the LVM distribution; however, they are quite simple to write yourself.
The startup of LVM requires just the following two commands:

# vgscan
# vgchange -ay

And the shutdown only one:

# vgchange -an

* Caldera

It is necessary to edit the file /etc/rc.d/rc.boot. Look for the line that says Mounting local filesystems and insert the vgscan and vgchange commands just before it.
You may also want to edit the file /etc/rc.d/init.d/halt to deactivate the volume groups at shutdown. Insert the

vgchange -an

command near the end of this file just after the filesystems are unmounted or mounted read-only, before the comment that says Now halt or reboot.
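
The result looks something like the following sketch; the echo line stands in for whatever Caldera's rc.boot actually prints at that point:

/sbin/vgscan                        # register all volume groups
/sbin/vgchange -ay                  # activate them
echo "Mounting local filesystems"   # the existing line; the two commands above go just before it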

* Debian

If you download the Debian lvm tool package, an initscript should be installed for you.
If you are installing LVM from source, you will still need to build your own initscript:

Create a startup script in /etc/init.d/lvm containing the following:

#!/bin/sh
case "$1" in
  start)
    /sbin/vgscan
    /sbin/vgchange -ay
    ;;
  stop)
    /sbin/vgchange -an
    ;;
  restart|force-reload)
    ;;
esac
exit 0

Then execute the commands

# chmod 0755 /etc/init.d/lvm
# update-rc.d lvm start 26 S . stop 82 1 .

* Mandrake

No initscript modifications should be necessary for current versions of Mandrake.

* Redhat

For Redhat 7.0 and up, you should not need to modify any initscripts to enable LVM at boot time if LVM is built into the kernel. If LVM is built as a module, it may be necessary to modify /etc/rc.d/rc.sysinit to load the LVM module by adding modprobe lvm-mod before the section that reads:

# LVM initialization, take 2 (it could be on top of RAID)
if [ -e /proc/lvm -a -x /sbin/vgchange -a -f /etc/lvmtab ]; then
    action $"Setting up Logical Volume Management:" /sbin/vgscan && /sbin/vgchange -a y
fi

For versions of Redhat older than 7.0, it is necessary to edit the file /etc/rc.d/rc.sysinit. Look for the line that says Mount all other filesystems and insert the vgscan and vgchange commands just before it. You should be sure that your root file system is mounted read/write before you run the LVM commands.

You may also want to edit the file /etc/rc.d/init.d/halt to deactivate the volume groups at shutdown. Insert the

vgchange -an

command near the end of this file just after the filesystems are mounted read-only, before the comment that says Now halt or reboot.

* Slackware

Slackware 8.1 requires no updating of boot-time scripts in order to make LVM work.
For versions previous to Slackware 8.1, you should apply the following patch to /etc/rc.d/rc.S

cd /etc/rc.d
cp -a rc.S rc.S.old
patch -p0 < rc.S.diff

snip snip file: rc.S.diff
--- rc.S.or Tue Jul 17 18:11:20 2001
+++ rc.S Tue Jul 17 17:57:36 2001
@@ -4,6 +4,7 @@
 #
 # Mostly written by:  Patrick J. Volkerding, <volkerdi@slackware.com>
 #
+# Added LVM support <tgs@iafrica.com>
 
 PATH=/sbin:/usr/sbin:/bin:/usr/bin
 
@@ -28,19 +29,21 @@
   READWRITE=yes
 fi
 
+
 # Check the integrity of all filesystems
 if [ ! $READWRITE = yes ]; then
-  /sbin/fsck -A -a
+  /sbin/fsck -a /
+  # Check only the root fs first, but no others
   # If there was a failure, drop into single-user mode.
   if [ $? -gt 1 ] ; then
     echo
     echo
-    echo "*******************************************************"
-    echo "*** An error occurred during the file system check. ***"
-    echo "*** You will now be given a chance to log into the  ***"
-    echo "*** system in single-user mode to fix the problem.  ***"
-    echo "*** Running e2fsck -v -y <partition> might help.    ***"
-    echo "*******************************************************"
+    echo "************************************************************"
+    echo "*** An error occurred during the root file system check. ***"
+    echo "*** You will now be given a chance to log into the       ***"
+    echo "*** system in single-user mode to fix the problem.       ***"
+    echo "*** Running e2fsck -v -y <partition> might help.          ***"
+    echo "************************************************************"
     echo
     echo "Once you exit the single-user shell, the system will reboot."
     echo
@@ -82,6 +85,44 @@
     echo -n "get into your machine and start looking for the problem."
     read junk;
   fi
+  # okay / fs is clean, and mounted as rw
+  # This was an addition, limits vgscan to /proc thus
+  # speeding up the scan immensely.
+  /sbin/mount /proc
+
+  # Initialize Logical Volume Manager
+  /sbin/vgscan
+  /sbin/vgchange -ay
+
+  /sbin/fsck -A -a -R
+  # Check all the other filesystems, including the LVMs, excluding /
+  # If there was a failure, drop into single-user mode.
+  if [ $? -gt 1 ] ; then
+    echo
+    echo
+    echo "*******************************************************"
+    echo "*** An error occurred during the file system check. ***"
+    echo "*** You will now be given a chance to log into the  ***"
+    echo "*** system in single-user mode to fix the problem.  ***"
+    echo "*** Running e2fsck -v -y <partition> might help.    ***"
+    echo "*** The root filesystem is ok and mounted readwrite ***"
+    echo "*******************************************************"
+    echo
+    echo "Once you exit the single-user shell, the system will reboot."
+    echo
+
+    PS1="(Repair filesystem) #"; export PS1
+    sulogin
+
+    echo "Unmounting file systems."
+    umount -a -r
+    mount -n -o remount,ro /
+    echo "Rebooting system."
+    sleep 2
+    reboot
+  fi
+
 else
   echo "Testing filesystem status: read-write filesystem"
   if cat /etc/fstab | grep ' / ' | grep umsdos 1> /dev/null 2> /dev/null ; then
@@ -111,14 +152,16 @@
     echo -n "Press ENTER to continue."
     read junk;
   fi
+
 fi
+
 # remove /etc/mtab* so that mount will create it with a root entry
 /bin/rm -f /etc/mtab* /etc/nologin /etc/shutdownpid
 # mount file systems in fstab (and create an entry for /)
 # but not NFS or SMB because TCP/IP is not yet configured
-/sbin/mount -a -v -t nonfs,nosmbfs
+/sbin/mount -a -v -t nonfs,nosmbfs,proc
 # Clean up temporary files on the /var volume:
 /bin/rm -f /var/run/utmp /var/run/*.pid /var/log/setup/tmp/*
snip snip snip end of file

* SuSE

No changes should be necessary from 6.4 onward, as LVM is included.

Building LVM from the Source

* Make LVM library and tools

Change into the LVM directory and do a ./configure followed by make. This will make all of the libraries and programs.
If the need arises you can change some options with the configure script. Do a ./configure --help to determine which options are supported. Most of the time this will not be necessary.

* After the LVM source compiles properly, simply run make install to install the LVM library and tools onto your system.

* To remove the library and tools you just installed, run make remove. You must have the original source tree you used to install LVM to use this feature. The full build-and-install sequence is sketched below.
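
A sketch of those steps in order, assuming the LVM/1.0.8 source tree used earlier in this document:

# cd LVM/1.0.8
# ./configure        # add --with-kernel_dir=... if your kernel source is not in /usr/src/linux
# make               # build the library and tools
# make install       # install them onto the system
# make remove        # later, to uninstall (requires this same source tree)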

Transitioning from previous versions of LVM to LVM 1.0.8

Transitioning from previous versions of LVM to LVM 1.0.8 should be fairly painless. We have
come up with a method to read in PV version 1 metadata (LVM 0.9.1 Beta7 and earlier) as well as PV version 2 metadata (LVM 0.9.1 Beta8 and LVM 1.0).

Warning: New PVs initialized with LVM 1.0.8 are created with the PV version 1 on-disk
structure. This means that LVM 0.9.1 Beta8 and LVM 1.0 cannot read or use PVs created with
1.0.8.

Upgrading to LVM 1.0.8 with a non-LVM root partition

There are just a few simple steps to transition this setup, but it is still recommended that you backup your data before you try it. You have been warned.

1. Build LVM kernel and modules

2. Build the LVM user tools

3. Setup your init scripts

4. Boot into the new kernel

Make sure your boot-loader is set up to load the new LVM-enhanced kernel and, if you are using LVM modules, put an insmod lvm-mod into your startup script OR extend /etc/modules.conf (formerly /etc/conf.modules) by adding

alias block-major-58 lvm-mod
alias char-major-109 lvm-mod

to enable modprobe to load the LVM module (don’t forget to enable kmod).

Reboot and enjoy.

Upgrading to LVM 1.0.8 with an LVM root partition and initrd

This is relatively straightforward if you follow the steps carefully. It is recommended you have a good backup and a suitable rescue disk handy just in case.

The normal way of running an LVM root file system is to have a single non-LVM partition called /boot which contains the kernel and initial RAM disk needed to start the system. The system I upgraded was as follows:

# df

Filesystem 1k-blocks Used Available Use% Mounted on
/dev/rootvg/root 253871 93384 147380 39% /
/dev/hda1 17534 12944 3685 78% /boot
/dev/rootvg/home 4128448 4568 3914168 0% /home
/dev/rootvg/usr 1032088 332716 646944 34% /usr
/dev/rootvg/var 253871 31760 209004 13% /var

/boot contains the old kernel and an initial RAM disk as well as the LILO boot files and the following entry in /etc/lilo.conf

# ls /boot

System.map lost+found vmlinux-2.2.16lvm
map module-info boot.0300
boot.b os2_d.b chain.b
initrd.gz

# tail /etc/lilo.conf

image=/boot/vmlinux-2.2.16lvm
label=lvm08
read-only
root=/dev/rootvg/root
initrd=/boot/initrd.gz

append="ramdisk_size=8192"

1. Build LVM kernel and modules
2. Build the LVM user tools
3. Rename the existing initrd.gz
# mv /boot/initrd.gz /boot/initrd08.gz
4. Edit /etc/lilo.conf

Make the existing boot entry point to the renamed file. You will need to reboot using this if something goes wrong in the next reboot. The changed entry will look something like this:
image=/boot/vmlinux-2.2.16lvm
label=lvm08
read-only
root=/dev/rootvg/root
initrd=/boot/initrd08.gz
append="ramdisk_size=8192"

5. Run lvmcreate_initrd to create a new initial RAM disk

# lvmcreate_initrd 2.4.9

6. Don't forget to put the new kernel version in there so that it picks up the correct modules.

7. Add a new entry into /etc/lilo.conf

This new entry is to boot the new kernel with its new initrd.

image=/boot/vmlinux-2.4.9lvm
label=lvm10
read-only
root=/dev/rootvg/root
initrd=/boot/initrd.gz
append="ramdisk_size=8192"

8. Re-run lilo

This will install the new boot block.

# /sbin/lilo

9. Reboot

When you get the LILO prompt select the new entry name (in this example lvm10) and
your system should boot into Linux using the new LVM version.

If the new kernel does not boot, then simply boot the old one and try to fix the problem. It may be that the new kernel does not have all the correct device drivers built into it, or that they are not available in the initrd. Remember that all device drivers (apart from LVM) needed to access the root device should be compiled into the kernel and not as modules. If you need to do any LVM manipulation when booted back into the old version, then
simply recompile the old tools and install them with

# make install

If you do this, don’t forget to install the new tools when you reboot into the new LVM
version.

Initializing disks or disk partitions

Before you can use a disk or disk partition as a physical volume you will have to initialize it:

For entire disks:
Run pvcreate on the disk:

# pvcreate /dev/hdb

This creates a volume group descriptor at the start of the disk.
If you get an error that LVM can’t initialize a disk with a partition table on it, first make sure that the disk you are operating on is the correct one. If you are very sure that it is, run the following:

# dd if=/dev/zero of=/dev/diskname bs=1k count=1
# blockdev --rereadpt /dev/diskname

For partitions:

When using LVM 1 on PCs with DOS partitions, set the partition type to 0x8e using fdisk or some other similar program. This step is unnecessary on PPC systems or when using LVM 2.

Run pvcreate on the partition:
# pvcreate /dev/hdb1

This creates a volume group descriptor at the start of the /dev/hdb1 partition.

Creating a volume group

Use the vgcreate program:

# vgcreate my_volume_group /dev/hda1 /dev/hdb1

NOTE: If you are using LVM 1 with devfs it is essential to use the full devfs name of the device rather than the symlinked name in /dev. So the above would be:

# vgcreate my_volume_group /dev/ide/host0/bus0/target0/lun0/part1 \
/dev/ide/host0/bus0/target1/lun0/part1

LVM 2 does not have this restriction.

You can also specify the extent size with the -s switch if the default of 32MB is not suitable for you. In addition, you can put some limits on the number of physical or logical volumes the volume group can have.
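
For example, to create the same volume group with 8MB extents rather than the default (a sketch; the options for capping the number of PVs and LVs are listed in the vgcreate man page):

# vgcreate -s 8M my_volume_group /dev/hda1 /dev/hdb1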

Activating a volume group

After rebooting the system or running vgchange -an, you will not be able to access your VGs and LVs. To reactivate the volume group, run:

# vgchange -a y my_volume_group

Removing a volume group

Make sure that no logical volumes are present in the volume group; see a later section for how to do this.
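
One quick way to check is lvscan, which lists every logical volume along with the path containing its volume group name (a sketch; the output format differs slightly between LVM 1 and LVM 2):

# lvscan | grep my_volume_group     # should print nothing if the VG contains no LVs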

Deactivate the volume group:

# vgchange -a n my_volume_group

Now you actually remove the volume group:

# vgremove my_volume_group

Adding physical volumes to a volume group

Use vgextend to add an initialized physical volume to an existing volume group.

# vgextend my_volume_group /dev/hdc1
^^^^^^^^^ new physical volume

Removing physical volumes from a volume group

Make sure that the physical volume isn't used by any logical volumes by using the pvdisplay command:

# pvdisplay /dev/hda1

Physical volume
PV Name /dev/hda1
VG Name myvg
PV Size 1.95 GB / NOT usable 4 MB [LVM: 122 KB]
PV# 1
PV Status available
Allocatable yes (but full)
Cur LV 1
PE Size (KByte) 4096
Total PE 499
Free PE 0
Allocated PE 499
PV UUID Sd44tK-9IRw-SrMC-MOkn-76iP-iftz-OVSen7

If the physical volume is still used you will have to migrate the data to another physical volume using pvmove.

Then use vgreduce to remove the physical volume:

# vgreduce my_volume_group /dev/hda1

Creating a logical volume

To create a 1500MB linear LV named testlv and its block device special /dev/testvg/testlv:

# lvcreate -L1500 -ntestlv testvg

To create a 100-extent logical volume with 2 stripes and a stripe size of 4 KB:

# lvcreate -i2 -I4 -l100 -nanothertestlv testvg

If you want to create an LV that uses the entire VG, use vgdisplay to find the Total PE size,then use that when running lvcreate.

# vgdisplay testvg | grep "Total PE"

Total PE 10230

# lvcreate -l 10230 testvg -n mylv

This will create an LV called mylv filling the testvg VG.
If you want the logical volume to be allocated from a specific physical volume in the volume group, specify the PV or PVs at the end of the lvcreate command line.

# lvcreate -L 1500 -ntestlv testvg /dev/sdg

Removing a logical volume

A logical volume must be closed before it can be removed:

# umount /dev/myvg/homevol
# lvremove /dev/myvg/homevol

lvremove do you really want to remove /dev/myvg/homevol? [y/n]: y
lvremove doing automatic backup of volume group myvg
lvremove logical volume /dev/myvg/homevol successfully removed

Extending a logical volume

To extend a logical volume you simply tell the lvextend command how much you want to increase the size. You can specify how much to grow the volume, or how large you want it to grow to:

# lvextend -L12G /dev/myvg/homevol
lvextend extending logical volume /dev/myvg/homevol to 12 GB
lvextend doing automatic backup of volume group myvg
lvextend logical volume /dev/myvg/homevol successfully extended

will extend /dev/myvg/homevol to 12 Gigabytes.

# lvextend -L+1G /dev/myvg/homevol
lvextend extending logical volume /dev/myvg/homevol to 13 GB
lvextend doing automatic backup of volume group myvg

lvextend logical volume /dev/myvg/homevol successfully extended

will add another gigabyte to /dev/myvg/homevol.
After you have extended the logical volume it is necessary to increase the file system size to match. How you do this depends on the file system you are using.
By default, most file system resizing tools will increase the size of the file system to be the size of the underlying logical volume, so you don't need to worry about specifying the same size for each of the two commands.

1. ext2/ext3

Unless you have patched your kernel with the ext2online patch, it is necessary to unmount the file system before resizing it. (It seems that the online resizing patch is rather dangerous, so use it at your own risk.)

# umount /dev/myvg/homevol
# resize2fs /dev/myvg/homevol
# mount /dev/myvg/homevol /home

If you don't have e2fsprogs 1.19 or later, you can download the ext2resize command from ext2resize.sourceforge.net and use that:

# umount /dev/myvg/homevol
# ext2resize /dev/myvg/homevol
# mount /dev/myvg/homevol /home

For ext2 there is an easier way. LVM 1 ships with a utility called e2fsadm which does the lvextend and resize2fs for you.

# e2fsadm -L+1G /dev/myvg/homevol

is equivalent to the two commands:

# lvextend -L+1G /dev/myvg/homevol
# resize2fs /dev/myvg/homevol

2. reiserfs

Reiserfs file systems can be resized when mounted or unmounted as you prefer:

Online:
# resize_reiserfs -f /dev/myvg/homevol

Offline:

# umount /dev/myvg/homevol
# resize_reiserfs /dev/myvg/homevol
# mount -treiserfs /dev/myvg/homevol /home

3. xfs

XFS file systems must be mounted to be resized and the mount-point is specified rather than the device name.

# xfs_growfs /home

4. jfs

Just like XFS the JFS file system must be mounted to be resized and the mount-point is specified rather than the device name. You need at least Version 1.0.21 of the jfs-utils to do this.

# mount -o remount,resize /home

Reducing a logical volume

Logical volumes can be reduced in size as well as increased. However, it is very important to remember to reduce the size of the file system (or whatever is residing in the volume) before shrinking the volume itself; otherwise you risk losing data.

1. ext2

If you are using LVM 1 with ext2 as the file system then you can use the e2fsadm command mentioned earlier to take care of both the file system and volume resizing as
follows:

# umount /home
# e2fsadm -L-1G /dev/myvg/homevol
# mount /home

If you prefer to do this manually you must know the new size of the volume in file system blocks (524288 blocks in the example below; with a 4KB block size that corresponds to 2GB) and use the following commands:

# umount /home
# resize2fs /dev/myvg/homevol 524288
# lvreduce -L-1G /dev/myvg/homevol
# mount /home

2. reiserfs

Reiserfs seems to prefer to be unmounted when shrinking:

# umount /home
# resize_reiserfs -s-1G /dev/myvg/homevol
# lvreduce -L-1G /dev/myvg/homevol
# mount -treiserfs /dev/myvg/homevol /home

3. xfs

There is no way to shrink XFS file systems.

4. jfs

There is no way to shrink JFS file systems.

Disk partitioning

* Multiple partitions on the same disk

LVM allows you to create PVs (physical volumes) out of almost any block device so, for example, the following are all valid commands and will work quite happily in an LVM
environment:

# pvcreate /dev/sda1
# pvcreate /dev/sdf
# pvcreate /dev/hda8
# pvcreate /dev/hda6
# pvcreate /dev/md1

In a normal production system it is recommended that only one PV exists on a single real disk, for the following reasons:

Administrative convenience

It's easier to keep track of the hardware in a system if each real disk only appears once.
This becomes particularly true if a disk fails.

To avoid striping performance problems

LVM can’t tell that two PVs are on the same physical disk, so if you create a striped LV then the stripes could be on different partitions on the same disk resulting in a decrease in performance rather than an increase.

However, it may be desirable to put more than one PV on a disk in some situations:

Migration of existing system to LVM
On a system with few disks, it may be necessary to move data around between partitions to do the conversion.

Splitting one big disk between Volume Groups

If you have a very large disk and want to have more than one volume group for administrative purposes then it is necessary to partition the drive into more than one area. If you do have a disk with more than one partition and both of those partitions are in the same volume group, take care to specify which partitions are to be included in a logical volume when creating striped volumes.
The recommended method of partitioning a disk is to create a single partition that covers the whole disk. This avoids any nasty accidents with whole disk drive device nodes and prevents the kernel warning about unknown partition types at boot-up.

Sun disk labels

You need to be especially careful on SPARC systems where the disks have Sun disk labels on them. The normal layout for a Sun disk label is for the first partition to start at block zero of the disk, thus the first partition also covers the area containing the disk label itself. This works fine for ext2 filesystems (and is essential for booting using SILO) but such partitions should not be used for LVM. This is because LVM starts writing at the very start of the device and will overwrite the disklabel. If you want to use a disk with a Sun disklabel with LVM, make sure that the partition you are going to use starts at cylinder 1 or higher.

Setting up LVM on three SCSI disks

The setup has three SCSI disks that will be put into a logical volume using LVM. The disks are at /dev/sda, /dev/sdb, and /dev/sdc.

Preparing the disks

Before you can use a disk in a volume group you will have to prepare it:

Run pvcreate on the disks

# pvcreate /dev/sda
# pvcreate /dev/sdb
# pvcreate /dev/sdc

This creates a volume group descriptor area (VGDA) at the start of the disks.

Setup a Volume Group

1. Create a volume group

# vgcreate my_volume_group /dev/sda /dev/sdb /dev/sdc

2. Run vgdisplay to verify volume group

# vgdisplay

Volume Group
VG Name my_volume_group
VG Access read/write
VG Status available/resizable
VG # 1
MAX LV 256
Cur LV 0
Open LV 0
MAX LV Size 255.99 GB
Max PV 256
Cur PV 3
Act PV 3
VG Size 1.45 GB
PE Size 4 MB
Total PE 372
Alloc PE / Size 0 / 0
Free PE / Size 372/ 1.45 GB
VG UUID nP2PY5-5TOS-hLx0-FDu0-2a6N-f37x-0BME0Y

3. The most important things to verify are that the first three items are correct and that the VG Size item is the proper size for the amount of space in all three of your disks.

Creating the Logical Volume

If the volume group looks correct, it is time to create a logical volume on top of the volume group.
You can make the logical volume any size you like. (It is similar to a partition on a non-LVM setup.) For this example, we will create just a single logical volume of size 1GB on the volume group. We will not use striping because it is not currently possible to add a disk to a stripe set after the logical volume is created.

# lvcreate -L1G -nmy_logical_volume my_volume_group

lvcreate doing automatic backup of my_volume_group
lvcreate logical volume /dev/my_volume_group/my_logical_volume successfully created

Create the File System

Create an ext2 file system on the logical volume

# mke2fs /dev/my_volume_group/my_logical_volume

mke2fs 1.19, 13-Jul-2000 for EXT2 FS 0.5b, 95/08/09

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

131072 inodes, 262144 blocks

13107 blocks (5.00%) reserved for the superuser

First data block=0

9 block groups

32768 blocks per group, 32768 fragments per group

16384 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

Test the File System

Mount the logical volume and check to make sure everything looks correct

# mount /dev/my_volume_group/my_logical_volume /mnt
# df

Filesystem 1k-blocks Used Available Use% Mounted on

/dev/hda1 1311552 628824 616104 51% /

/dev/my_volume_group/my_logical_volume

1040132 20 987276 0% /mnt

If everything worked properly, you should now have a logical volume with an ext2 file system mounted at /mnt.

Setting up LVM on three SCSI disks with striping

For this recipe, the setup has three SCSI disks that will be put into a logical volume using LVM.

The disks are at /dev/sda, /dev/sdb, and /dev/sdc.

Preparing the disk partitions

Before you can use a disk in a volume group you will have to prepare it:

Run pvcreate on the disks:

# pvcreate /dev/sda
# pvcreate /dev/sdb
# pvcreate /dev/sdc

This creates a volume group descriptor area (VGDA) at the start of the disks.

Setup a Volume Group

1. Create a volume group

# vgcreate my_volume_group /dev/sda /dev/sdb /dev/sdc

2. Run vgdisplay to verify volume group

# vgdisplay
Volume Group

VG Name my_volume_group
VG Access read/write
VG Status available/resizable
VG # 1
MAX LV 256
Cur LV 0
Open LV 0
MAX LV Size 255.99 GB
Max PV 256
Cur PV 3
Act PV 3
VG Size 1.45 GB
PE Size 4 MB
Total PE 372
Alloc PE/Size 0 / 0
Free PE / Size 372/ 1.45 GB
VG UUID nP2PY5-5TOS-hLx0-FDu0-2a6N-f37x-0BME0Y

3. The most important things to verify are that the first three items are correct and that the VG Size item is the proper size for the amount of space in all three of your disks.

Creating the Logical Volume

If the volume group looks correct, it is time to create a logical volume on top of the volume group.
You can make the logical volume any size you like (up to the size of the VG you are creating it on; it is similar to a partition on a non-LVM setup). For this example we will create just a single logical volume of size 1GB on the volume group. The logical volume will be a striped set using a 4KB stripe size. This should increase the performance of the logical volume.

# lvcreate -i3 -I4 -L1G -nmy_logical_volume my_volume_group
lvcreate rounding 1048576 KB to stripe boundary size 1056768 KB / 258 PE
lvcreate doing automatic backup of my_volume_group
lvcreate logical volume /dev/my_volume_group/my_logical_volume successfully created

Create the File System

Create an ext2 file system on the logical volume

# mke2fs /dev/my_volume_group/my_logical_volume

mke2fs 1.19, 13-Jul-2000 for EXT2 FS 0.5b, 95/08/09

Filesystem label=

OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
132192 inodes, 264192 blocks
13209 blocks (5.00%) reserved for the super user
First data block=0
9 block groups
32768 blocks per group, 32768 fragments per group
14688 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

Test the File System

Mount the file system on the logical volume

# mount /dev/my_volume_group/my_logical_volume /mnt

and check to make sure everything looks correct

# df
Filesystem 1k-blocks Used Available Use% Mounted on

/dev/hda1 1311552 628824 616104 51% /

/dev/my_volume_group/my_logical_volume

1040132 20 987276 0% /mnt

If everything worked properly, you should now have a logical volume mounted at /mnt.

Add a new disk to a multi-disk SCSI system

Current situation

A data centre machine has 6 disks attached as follows:

# pvscan
pvscan ACTIVE PV /dev/sda of VG dev [1.95 GB / 0 free]
pvscan ACTIVE PV /dev/sdb of VG sales [1.95 GB / 0 free]
pvscan ACTIVE PV /dev/sdc of VG ops [1.95 GB / 44 MB free]
pvscan ACTIVE PV /dev/sdd of VG dev [1.95 GB / 0 free]
pvscan ACTIVE PV /dev/sde1 of VG ops [996 MB / 52 MB free]
pvscan ACTIVE PV /dev/sde2 of VG sales [996 MB / 944 MB free]
pvscan ACTIVE PV /dev/sdf1 of VG ops [996 MB / 0 free]
pvscan ACTIVE PV /dev/sdf2 of VG dev [996 MB / 72 MB free]
pvscan total: 8 [11.72 GB] / in use: 8 [11.72 GB] / in no VG: 0 [0]

# df
Filesystem 1k-blocks Used Available Use % Mounted on

/dev/dev/cvs 1342492 516468 757828 41% /mnt/dev/cvs

/dev/dev/users 2064208 2060036 4172 100% /mnt/dev/users

/dev/dev/build 1548144 1023041 525103 66% /mnt/dev/build

/dev/ops/databases 2890692 2302417 588275 79% /mnt/ops/databases

/dev/sales/users 2064208 871214 1192994 42% /mnt/sales/users

/dev/ops/batch 1032088 897122 134966 86% /mnt/ops/batch

As you can see the dev and ops groups are getting full so a new disk is purchased and
added to the system. It becomes /dev/sdg.

Prepare the disk partitions

The new disk is to be shared equally between ops and dev so it is partitioned into two physical volumes /dev/sdg1 and /dev/sdg2 :

# fdisk /dev/sdg

The device contains neither a valid DOS partition table, nor Sun or SGI disklabel. Building a new DOS disklabel. Changes will remain in memory only until you decide to write them. After that, of course, the previous content won't be recoverable.

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1000, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1000, default 1000): 500

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (501-1000, default 501):
Using default value 501
Last cylinder or +size or +sizeM or +sizeK (501-1000, default 1000):
Using default value 1000

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Unknown)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Unknown)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: If you have created or modified any DOS 6.x partitions,
please see the fdisk manual page for additional information.

Next, physical volumes are created on these partitions:

# pvcreate /dev/sdg1
pvcreate physical volume /dev/sdg1 successfully created

# pvcreate /dev/sdg2
pvcreate physical volume /dev/sdg2 successfully created

Add the new disks to the volume groups

The volumes are then added to the dev and ops volume groups:

# vgextend ops /dev/sdg1
vgextend INFO: maximum logical volume size is 255.99 Gigabyte
vgextend doing automatic backup of volume group ops
vgextend volume group ops successfully extended

# vgextend dev /dev/sdg2
vgextend INFO: maximum logical volume size is 255.99 Gigabyte
vgextend doing automatic backup of volume group dev
vgextend volume group dev successfully extended

# pvscan
pvscan reading all physical volumes (this may take a while)
pvscan ACTIVE PV /dev/sda of VG dev [1.95 GB / 0 free]
pvscan ACTIVE PV /dev/sdb of VG sales [1.95 GB / 0 free]
pvscan ACTIVE PV /dev/sdc of VG ops [1.95 GB / 44 MB free]
pvscan ACTIVE PV /dev/sdd of VG dev [1.95 GB / 0 free]
pvscan ACTIVE PV /dev/sde1 of VG ops [996 MB / 52 MB free]
pvscan ACTIVE PV /dev/sde2 of VG sales [996 MB / 944 MB free]
pvscan ACTIVE PV /dev/sdf1 of VG ops [996 MB / 0 free]
pvscan ACTIVE PV /dev/sdf2 of VG dev [996 MB / 72 MB free]
pvscan ACTIVE PV /dev/sdg1 of VG ops [996 MB / 996 MB free]
pvscan ACTIVE PV /dev/sdg2 of VG dev [996 MB / 996 MB free]
pvscan total: 10 [13.67 GB] / in use: 10 [13.67 GB] / in no VG: 0 [0]

Extend the file systems

The next thing to do is to extend the file systems so that the users can make use of the extra space.
There are tools to allow online-resizing of ext2 file systems but here we take the safe route and unmount the two file systems before resizing them:
# umount /mnt/ops/batch
# umount /mnt/dev/users

We then use the e2fsadm command to resize the logical volume and the ext2 file system in one operation. We are using ext2resize instead of resize2fs (which is the default command for e2fsadm), so we define the environment variable E2FSADM_RESIZE_CMD to tell e2fsadm to use that command.

# export E2FSADM_RESIZE_CMD=ext2resize

# e2fsadm /dev/ops/batch -L+500M
e2fsck 1.18, 11-Nov-1999 for EXT2 FS 0.5b, 95/08/09
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/ops/batch: 11/131072 files (0.0% non-contiguous), 4127/262144 blocks
lvextend extending logical volume /dev/ops/batch to 1.49 GB
lvextend doing automatic backup of volume group ops
lvextend logical volume /dev/ops/batch successfully extended

ext2resize v1.1.15 – 2000/08/08 for EXT2FS 0.5b
e2fsadm ext2fs in logical volume /dev/ops/batch successfully extended to 1.49 GB

# e2fsadm /dev/dev/users -L+900M

e2fsck 1.18, 11-Nov-1999 for EXT2 FS 0.5b, 95/08/09
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/dev/users: 12/262144 files (0.0% non-contiguous), 275245/524288 blocks
lvextend extending logical volume /dev/dev/users to 2.88 GB
lvextend doing automatic backup of volume group dev
lvextend logical volume /dev/dev/users successfully extended

ext2resize v1.1.15 – 2000/08/08 for EXT2FS 0.5b
e2fsadm ext2fs in logical volume /dev/dev/users successfully extended to 2.88 GB

Remount the extended volumes

We can now remount the file systems and see that there is plenty of space.

# mount /dev/ops/batch
# mount /dev/dev/users
# df

Filesystem 1k-blocks Used Available Use% Mounted on

/dev/dev/cvs 1342492 516468 757828 41% /mnt/dev/cvs

/dev/dev/users 2969360 2060036 909324 69% /mnt/dev/users

/dev/dev/build 1548144 1023041 525103 66% /mnt/dev/build

/dev/ops/databases 2890692 2302417 588275 79% /mnt/ops/databases

/dev/sales/users 2064208 871214 1192994 42% /mnt/sales/users

/dev/ops/batch 1535856 897122 638734 58% /mnt/ops/batch

Taking a Backup Using Snapshots

Following on from the previous example we now want to use the extra space in the ops volume group to make a database backup every evening. To ensure that the data that goes onto the tape is consistent we use an LVM snapshot logical volume.

This type of volume is a read-only copy of another volume that contains all the data that was in the volume at the time the snapshot was created. This means we can back up that volume without having to worry about data being changed while the backup is going on, and we don’t have to take the database volume offline while the backup is taking place.

Create the snapshot volume

There is a little over 500 megabytes of free space in the ops volume group, so we will use all of it to allocate space for the snapshot logical volume. A snapshot volume can be as large or as small as you like, but it must be large enough to hold all the changes that are likely to happen to the original volume during the lifetime of the snapshot. Here, allowing for 500 megabytes of changes to the database volume should be plenty.

# lvcreate -L592M -s -n dbbackup /dev/ops/databases

lvcreate WARNING: the snapshot must be disabled if it gets full
lvcreate INFO: using default snapshot chunk size of 64 KB for /dev/ops/dbbackup
lvcreate doing automatic backup of ops
lvcreate logical volume /dev/ops/dbbackup successfully created

If the original volume holds an XFS file system, the file system should be frozen while the snapshot is created (and unfrozen again with xfs_freeze -u afterwards):

# xfs_freeze -f /mnt/point; lvcreate -L592M -s -n dbbackup /dev/ops/databases;

Mount the snapshot volume

We can now create a mount-point and mount the volume

# mkdir /mnt/ops/dbbackup
# mount /dev/ops/dbbackup /mnt/ops/dbbackup
mount: block device /dev/ops/dbbackup is write-protected, mounting read-only

If you are using XFS as the filesystem you will need to add the nouuid option to the mount
command:

# mount /dev/ops/dbbackup /mnt/ops/dbbackup -onouuid,ro

Do the backup

I assume you will have a more sophisticated backup strategy than this!

# tar -cf /dev/rmt0 /mnt/ops/dbbackup

Remove the snapshot

When the backup has finished you can unmount the volume and remove it from the system. You should remove snapshot volumes when you have finished with them, because they take a copy of all data written to the original volume and this can hurt performance.

# umount /mnt/ops/dbbackup
# lvremove /dev/ops/dbbackup
lvremove do you really want to remove /dev/ops/dbbackup? [y/n]: y
lvremove doing automatic backup of volume group ops
lvremove logical volume /dev/ops/dbbackup successfully removed

Distributing Old Extents to Existing Disks in Volume Group

If you have enough free extents on the other disks in the volume group, you have it easy. Simply run:

# pvmove /dev/hdb

pvmove moving physical extents in active volume group dev
pvmove WARNING: moving of active logical volumes may cause data loss!
pvmove do you want to continue? [y/n] y
pvmove 249 extents of physical volume /dev/hdb successfully moved

This will move the allocated physical extents from /dev/hdb onto the rest of the disks in the volume group.
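If you want to confirm beforehand that the remaining physical volumes have enough free extents, vgdisplay reports the free-extent count for the volume group; for example (dev is the volume group from this example, and the grep is just a convenience):

# vgdisplay dev | grep Free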

Remove the unused disk

We can now remove the old IDE disk from the volume group.

# vgreduce dev /dev/hdb

vgreduce doing automatic backup of volume group dev
vgreduce volume group dev successfully reduced by physical volume:
vgreduce /dev/hdb

The drive can now be either physically removed when the machine is next powered down or
reallocated to other users.

Distributing Old Extents to a New Replacement Disk

If you do not have enough free physical extents to distribute the old physical extents to, you will have to add a disk to the volume group and move the extents to it.

1. Prepare the disk

First, you need to pvcreate the new disk to make it available to LVM. In this recipe, we show that you don’t need to partition a disk to be able to use it.

# pvcreate /dev/sdf
pvcreate physical volume /dev/sdf successfully created

2. Add it to the volume group

As developers use a lot of disk space, this is a good volume group to add it to.

# vgextend dev /dev/sdf
vgextend INFO: maximum logical volume size is 255.99 Gigabyte
vgextend doing automatic backup of volume group dev
vgextend volume group dev successfully extended

3. Move the data

Next, we move the data from the old disk onto the new one. Note that it is not necessary to unmount the file system before doing this, although it is *highly* recommended that you take a full backup before attempting the operation in case a power outage or some other problem interrupts it. The pvmove command can take a considerable amount of time to complete, and it also exacts a performance hit on the two volumes, so although it isn't strictly necessary, it is advisable to do this when the volumes are not too busy.

# pvmove /dev/hdb /dev/sdf
pvmove moving physical extents in active volume group dev
pvmove WARNING: moving of active logical volumes may cause data loss!
pvmove do you want to continue? [y/n] y
pvmove 249 extents of physical volume /dev/hdb successfully moved
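pvmove prints very little while it works. If you are using the LVM 2 tools you can ask for periodic progress reports instead; this is a hedged example (the -i interval option is an LVM 2 feature and the 10-second interval is arbitrary):

# pvmove -i 10 /dev/hdb /dev/sdf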

The drive can now be either physically removed when the machine is next powered down or reallocated to some other users.

Moving a volume group to another system

It is quite easy to move a whole volume group to another system if, for example, a user department acquires a new server. To do this we use the vgexport and vgimport commands.

Unmount the file system

First, make sure that no users are accessing files on the active volume, then unmount it:

# umount /mnt/design/users

Mark the volume group inactive

Marking the volume group inactive removes it from the kernel and prevents any further activity on it.

# vgchange -an design

vgchange volume group design successfully deactivated

Export the volume group

It is now necessary to export the volume group. This prevents it from being accessed on the old host system and prepares it to be removed.

# vgexport design
vgexport volume group design successfully exported

When the machine is next shut down, the disk can be unplugged and then connected to its new machine.

Import the volume group

When plugged into the new system, the disk becomes /dev/sdb, so an initial pvscan shows:

# pvscan
pvscan reading all physical volumes (this may take a while)
pvscan inactive PV /dev/sdb1 is in EXPORTED VG design [996 MB / 996 MB free]
pvscan inactive PV /dev/sdb2 is in EXPORTED VG design [996 MB / 244 MB free]
pvscan total: 2 [1.95 GB] / in use: 2 [1.95 GB] / in no VG: 0 [0]

We can now import the volume group (which also activates it) and mount the file system.

If you are importing on an LVM 2 system, run:
# vgimport design
Volume group design successfully imported

If you are importing on an LVM 1 system, add the PVs that need to be imported:
# vgimport design /dev/sdb1 /dev/sdb2
vgimport doing automatic backup of volume group design
vgimport volume group design successfully imported and activated

Activate the volume group

You must activate the volume group before you can access it.

# vgchange -ay design

Mount the file system

# mkdir -p /mnt/design/users
# mount /dev/design/users /mnt/design/users

The file system is now available for use.

Splitting a volume group

There is a new group of users, design, to add to the system. One way of dealing with this is to create a new volume group to hold their data. There are no new disks, but there is plenty of free space on the existing disks that can be reallocated.

Determine free space

# pvscan
pvscan reading all physical volumes (this may take a while)
pvscan ACTIVE PV /dev/sda of VG dev [1.95 GB / 0 free]
pvscan ACTIVE PV /dev/sdb of VG sales [1.95 GB / 1.27 GB free]
pvscan ACTIVE PV /dev/sdc of VG ops [1.95 GB / 564 MB free]
pvscan ACTIVE PV /dev/sdd of VG dev [1.95 GB / 0 free]
pvscan ACTIVE PV /dev/sde of VG ops [1.95 GB / 1.9 GB free]
pvscan ACTIVE PV /dev/sdf of VG dev [1.95 GB / 1.33 GB free]
pvscan ACTIVE PV /dev/sdg1 of VG ops [996 MB / 432 MB free]
pvscan ACTIVE PV /dev/sdg2 of VG dev [996 MB / 632 MB free]
pvscan total: 8 [13.67 GB] / in use: 8 [13.67 GB] / in no VG: 0 [0]

We decide to reallocate /dev/sdg1 and /dev/sdg2 to design, so first we have to move their physical extents into the free areas of the other volumes (in this case /dev/sdf for volume group dev and /dev/sde for volume group ops).

Move data off the disks to be used

Some space is still used on the chosen volumes, so it is necessary to move that used space onto some of the others. Move all the used physical extents from /dev/sdg1 to /dev/sde and from /dev/sdg2 to /dev/sdf.

# pvmove /dev/sdg1 /dev/sde
pvmove moving physical extents in active volume group ops
pvmove WARNING: moving of active logical volumes may cause data loss!
pvmove do you want to continue? [y/n] y
pvmove doing automatic backup of volume group ops
pvmove 141 extents of physical volume /dev/sdg1 successfully moved

# pvmove /dev/sdg2 /dev/sdf
pvmove moving physical extents in active volume group dev
pvmove WARNING: moving of active logical volumes may cause data loss!
pvmove do you want to continue? [y/n] y
pvmove doing automatic backup of volume group dev
pvmove 91 extents of physical volume /dev/sdg2 successfully moved

Create the new volume group

Now, split /dev/sdg2 from dev and add it into a new group called design. It is possible to do this using vgreduce and vgcreate, but the vgsplit command combines the two.

# vgsplit dev design /dev/sdg2
vgsplit doing automatic backup of volume group dev
vgsplit doing automatic backup of volume group design
vgsplit volume group dev successfully split into dev and design
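For reference, the rough two-command equivalent mentioned above would be the following (a sketch only; vgsplit performs the operation in a single step):

# vgreduce dev /dev/sdg2
# vgcreate design /dev/sdg2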

Remove remaining volume

Next, remove /dev/sdg1 from ops and add it into design.

# vgreduce ops /dev/sdg1
vgreduce doing automatic backup of volume group ops
vgreduce volume group ops successfully reduced by physical volume:
vgreduce /dev/sdg1

# vgextend design /dev/sdg1
vgextend INFO: maximum logical volume size is 255.99 Gigabyte
vgextend doing automatic backup of volume group design
vgextend volume group design successfully extended

Create new logical volume

Now create a logical volume. Rather than allocate all of the available space, leave some spare in case it is needed elsewhere.

# lvcreate -L750M -n users design
lvcreate rounding up size to physical extent boundary 752 MB
lvcreate doing automatic backup of design
lvcreate logical volume /dev/design/users successfully created

Make a file system on the volume

# mke2fs /dev/design/users

mke2fs 1.18, 11-Nov-1999 for EXT2 FS 0.5b, 95/08/09
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
96384 inodes, 192512 blocks
9625 blocks (5.00%) reserved for the super user
First data block=0
6 block groups
32768 blocks per group, 32768 fragments per group
16064 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

Mount the new volume

# mkdir -p /mnt/design/users
# mount /dev/design/users /mnt/design/users/

It's also a good idea to add an entry for this file system in your /etc/fstab file as follows:

/dev/design/users /mnt/design/users ext2 defaults 1 2

Converting a root filesystem to LVM 1

In this example the whole system was installed in a single root partition with the exception of /boot. The system had a 2 gig disk partitioned as:

/dev/hda1 /boot
/dev/hda2 swap
/dev/hda3 /

The / partition covered all of the disk not used by /boot and swap. An important prerequisite of this procedure is that the root partition is less than half full (so that a copy of it can be created in a logical volume). If this is not the case then a second disk drive should be used; the procedure in that case is similar, but there is no need to shrink the existing root partition, and /dev/hda4 should be replaced with (e.g.) /dev/hdb1 in the examples.
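A quick way to confirm how full the root partition currently is (purely a convenience check):

# df -h /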

To do this it is easiest to use GNU parted, which allows you to grow and shrink partitions that contain filesystems. It is possible to use resize2fs and fdisk instead, but GNU parted makes the job much less prone to error. It may be included in your distribution; if not, you can download it from ftp://ftp.gnu.org/pub/gnu/parted.

Once you have parted on your system AND YOU HAVE BACKED THE SYSTEM UP:

Boot single user

Boot into single-user mode (type linux S at the LILO prompt). This is important. Booting single-user ensures that the root filesystem is mounted read-only and no programs are accessing the disk.

Run Parted

Run parted to shrink the root partition so that there is room on the disk for a complete copy of it in a logical volume. In this example a 1.8 gig partition is shrunk to 1 gigabyte. The p command displays the sizes and names of the partitions on the disk:

# parted /dev/hda
(parted) p

Now resize the partition:

(parted) resize 3 145 999

The first number here is the partition number (hda3), the second is the same starting position that hda3 currently has. Do not change this. The last number should make the partition around half the size it currently is.

Create a new partition

(parted) mkpart primary ext2 1000 1999

This makes a new partition to hold the initial LVM 1 data. It should start just beyond the newly shrunk hda3 and finish at the end of the disk.

Quit parted

(parted) q

Reboot the system.

Verify kernel config options

Make sure that the kernel you are currently running works with LVM 1 and has
CONFIG_BLK_DEV_RAM and CONFIG_BLK_DEV_INITRD set in the config file.
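One way to verify this is to grep the kernel configuration; the path below is an assumption and varies by distribution (/usr/src/linux/.config is another common location):

# grep -E 'CONFIG_BLK_DEV_RAM|CONFIG_BLK_DEV_INITRD' /boot/config-$(uname -r)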

Adjust partition type

Change the partition type on the newly created partition from Linux to LVM (8e). Parted doesn't understand LVM 1 partitions, so this has to be done using fdisk.

# fdisk /dev/hda
Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): 8e
Changed system type of partition 4 to 8e (Unknown)
Command (m for help): w

Set up LVM 1 for the new scheme

* Initialize LVM 1 (vgscan)

# vgscan

* Make the new partition into a PV

# pvcreate /dev/hda4

* Create a new volume group

# vgcreate vg /dev/hda4

* Create a logical volume to hold the new root.

# lvcreate -L250M -n root vg

Create the Filesystem

Make a filesystem in the logical volume and copy the root files onto it.

# mke2fs /dev/vg/root
# mount /dev/vg/root /mnt/
# find / -xdev | cpio -pvmd /mnt

Update /etc/fstab

Edit /mnt/etc/fstab on the new root so that / is mounted on /dev/vg/root. For example:

/dev/hda3 / ext2 defaults 1 1

becomes:

/dev/vg/root / ext2 defaults 1 1

Create an LVM 1 initial RAM disk

# lvmcreate_initrd

Make sure you note the name that lvmcreate_initrd calls the initrd image. It should be in /boot.
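If you missed the name in the command output, listing /boot by modification time will show the freshly created image (a simple convenience check, nothing LVM-specific):

# ls -lt /boot | head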

Update /etc/lilo.conf

Add an entry in /etc/lilo.conf for LVM 1. This should look similar to the following:

image = /boot/KERNEL_IMAGE_NAME
label = lvm
root = /dev/vg/root
initrd = /boot/INITRD_IMAGE_NAME
ramdisk = 8192

Where KERNEL_IMAGE_NAME is the name of your LVM 1 enabled kernel, and
INITRD_IMAGE_NAME is the name of the initrd image created by lvmcreate_initrd. The ramdisk
line may need to be increased if you have a large LVM 1 configuration, but 8192 should suffice for most users. The default ramdisk size is 4096. If in doubt, check the output from the lvmcreate_initrd command for the line that says:

lvmcreate_initrd making loopback file (6189 kB)

and make the ramdisk the size given in brackets.

You should copy this new lilo.conf onto /etc in the new root fs as well.

# cp /etc/lilo.conf /mnt/etc/

Run LILO to write the new boot sector

# lilo

Reboot. At the LILO prompt, type lvm. The system should reboot into Linux using the newly created logical volume.

If that worked then you should make lvm the default LILO boot destination by adding the line

default=lvm

in the first section of /etc/lilo.conf

If it did not work, reboot normally and try to diagnose the problem. It could be a typing error in lilo.conf, or LVM 1 not being available in the initial RAM disk or its kernel. Examine the messages produced at boot time carefully.

Add remainder of disk

Add the rest of the disk into LVM 1. When you are happy with this setup you can add the old root partition to LVM 1 and spread the volume group out over the whole disk.
First, set the partition type to 8e (LVM):

# fdisk /dev/hda

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Unknown)
Command (m for help): w

Convert it into a PV and add it to the volume group:

# pvcreate /dev/hda3
# vgextend vg /dev/hda3
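The root logical volume can then be grown into the newly added space. A hedged sketch, using e2fsadm as in the earlier examples (the size is illustrative, and note that an ext2 filesystem cannot be resized while it is mounted, so the root filesystem would normally be resized from a rescue or alternate boot environment):

# e2fsadm /dev/vg/root -L+500M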

Restoring the VG UUIDs using uuid_fixer

If you've upgraded from a previous version of LVM to the early 0.9 or 0.9.1 releases and vgscan reports no volume groups found, this is one way to fix it.

* Download the UUID fixer program from the contributor directory at Sistina. It is located at ftp://ftp.sistina.com/pub/LVM/contrib/uuid_fixer-0.3-IOP10.tar.gz

* Extract uuid_fixer-0.3-IOP10.tar.gz
# tar zxf uuid_fixer-0.3-IOP10.tar.gz

* cd to uuid_fixer
# cd uuid_fixer

* You have one of two options at this point:

1. Use the prebuilt binary (it is built for the i386 architecture).

Make sure you list all the PVs in the VG you are restoring, and follow the prompts:
# ./uuid_fixer <LIST OF ALL PVS IN VG TO BE RESTORED>

2. Build the uuid_fixer program from source

Edit the Makefile with your favorite editor, and make sure LVMDIR points to your
LVM source.

Then run make.
# make

Now run uuid_fixer. Make sure you list all the PVs in the VG you are restoring, and
follow the prompts.

# ./uuid_fixer <LIST OF ALL PVS IN VG TO BE RESTORED>

* Deactivate any active Volume Groups (optional)

# vgchange -an

* Run vgscan

# vgscan

* Reactivate Volume Groups

# vgchange -ay

Sharing LVM volumes

LVM is not cluster aware

Be very careful doing this: LVM is not currently cluster-aware, and it is very easy to lose all your data. If you have a fibre-channel or shared-SCSI environment where more than one machine has physical access to a set of disks, you can use LVM to divide those disks up into logical volumes. If you want to share data, you should really be looking at GFS or another cluster filesystem.

The key thing to remember when sharing volumes is that all the LVM administration must be done on one node only, and that all other nodes must have LVM shut down before anything is changed on the admin node. Then, when the changes have been made, it is necessary to run vgscan on the other nodes before reloading the volume groups. Also, unless you are running a cluster-aware filesystem (such as GFS) or application on the volume, only one node can mount each filesystem. It is up to you, as system administrator, to enforce this; LVM will not stop you from corrupting your data.

The startup sequence of each node is the same as for a single-node setup with

vgscan
vgchange -ay

in the startup scripts.
If you need to make any changes to the LVM metadata (regardless of whether it affects volumes mounted on other nodes) you must go through the following sequence. In the steps below, the admin node is any arbitrarily chosen node in the cluster.

1. Other nodes: close all logical volumes (umount the filesystems) and deactivate the volume groups with vgchange -an.
2. Admin node: make the changes (e.g. lvextend).
3. Other nodes: run vgscan and then vgchange -ay to reactivate the volume groups.

The volume groups should remain active on the admin node throughout.

You do not need to, nor should you, unload the VGs on the admin node, so this can be the node with the highest uptime requirement.
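As a concrete sketch of that sequence, using the ops volume group from the earlier examples (the mount point, volume group name, and size are illustrative):

On every node except the admin node, release the volume group first:

# umount /mnt/ops/databases
# vgchange -an ops

On the admin node only, make the metadata change, for example:

# lvextend -L+1G /dev/ops/databases

Then, back on the other nodes, re-read the metadata and reactivate:

# vgscan
# vgchange -ay ops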

Article Authored by Anoop K Baby

The author, Anoop, is a Sr. Systems Engineer (Team Lead) with SupportPRO. Anoop specializes in Windows and Linux server administration. SupportPRO offers 24x7 technical support services to web hosting companies and service providers.
