Quick tips:
  • To see the details of volume groups, use: vgdisplay
  • To see the details of logical volumes, use: lvdisplay
  • To create new volumes, use: lvcreate
  • To resize existing volumes, use: lvresize
  • Use "--help" to see quick help for a command, e.g.: lvresize --help
  • It is easy to add extra space to an existing file system - no downtime required.
  • It is much more difficult to shrink the file system - downtime IS required.
  • Do not attempt to shrink a volume unless you really know what you are doing.
  • More details here: http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager
Before you begin:
  • Run "df -h /" to make sure you have some free space available on your root volume. If your root volume is 100% full, you must clean it up by removing some files before you proceed.
  • Run "vgdisplay" and note which volume groups you actually have.
  • Run "lvdisplay" and note which logical volumes exist on your system.
  • Adjust the examples below to match your volume and volume group names; a sample pre-flight check follows this list.
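For example, assuming a volume group named VolGroup00 (substitute the names from your own vgdisplay output), a quick pre-flight check might look like this:

df -h /
vgdisplay VolGroup00 | grep -i free
lvdisplay | grep "LV Name"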

Example 1: add 10GB to "/" (root) partition


Add 10GB to "root" volume:
 
lvresize -L +10G /dev/VolGroup00/root

Resize the filesystem that resides on that volume:

resize2fs /dev/VolGroup00/root

Note: resize2fs may take a while to expand the file system on a large volume, so you may want to open a second terminal and monitor the progress with the following command:

watch df -h
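
As an alternative, many versions of lvresize support a -r (--resizefs) option that grows the volume and the file system in a single step. This is a sketch, assuming your LVM version provides that option:

lvresize -r -L +10G /dev/VolGroup00/root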
 
 
Example 2: set swap to 16GB

Disable all swaps:
swapoff -a
 
Set the swap volume size to 16GB:
lvresize -L 16G /dev/VolGroup00/swap

Re-initialize the swap area to use the new volume size:
mkswap /dev/VolGroup00/swap

Enable all swaps:
swapon -a
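
To confirm that the resized swap space is active, check the swap summary (exact figures will vary):
swapon -s
free -m

Note: if your /etc/fstab references the swap area by UUID rather than by device path, keep in mind that mkswap assigns a new UUID, so the fstab entry may need to be updated.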
 
 
Example 3: create a new volume "data" of 5GB and mount it as "/data"

Create the mount point:

mkdir /data

Create the volume:

lvcreate -L 5G -n data VolGroup00

Create the filesystem:

mkfs.ext3 /dev/VolGroup00/data

Add the following line to your /etc/fstab:

/dev/VolGroup00/data        /data        ext3    defaults    0    2

Mount the volume:

mount /data
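
Verify that the new volume is mounted and has the expected size (exact numbers will differ on your system):

df -h /data
mount | grep /data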


Example 4: reduce the size of the existing volume "test"

IMPORTANT:
1. Reducing a volume's size CANNOT be performed while the file system is mounted and may take a LONG time.
2. You will need to boot from a Rescue CD in order to reduce the size of a volume that holds the root file system.
3. Improper reduction of a volume's size WILL DESTROY YOUR DATA.

Make sure you have a good backup and plenty of downtime scheduled, as it may take a while to shrink a filesystem.

You have been warned: proceed at your own risk. 

First, make sure the file system has enough free space so it can be reduced:

root@server:~# df -h /test/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-test
                      5.0G  139M  4.6G   3% /test

Note that even an empty file system will use some space: 139M in our case.
Proceed to unmount the file system.

root@server:~# umount /test

Run a filesystem consistency check. This may take a while.

root@server:~# fsck -f /dev/VolGroup00/test
e2fsck 1.41.11 (14-Mar-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/VolGroup00-test: 11/327680 files (0.0% non-contiguous), 55935/1310720 blocks

Find out the file system block size:

root@server:~# tune2fs -l /dev/VolGroup00/test | grep 'Block size'
Block size:               4096

Double check the minimum file system size in blocks:

root@server:~# resize2fs -P /dev/VolGroup00/test 
resize2fs 1.41.11 (14-Mar-2010)
Estimated minimum size of the filesystem: 34477

In this example, the minimum logical volume size required is: 34477 * 4096 = 141217792 bytes (roughly 135 MiB, or about 141 MB).
At this point you could calculate exactly what your file system's final size should be, but proceed with caution; it is much safer to shrink the file system to its minimum size instead.
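
The same arithmetic can be done in the shell; the block count (34477) and block size (4096) below are the example values from above, so substitute your own:

echo $(( 34477 * 4096 ))                       # minimum size in bytes (141217792)
echo $(( (34477 * 4096 + 1048575) / 1048576 )) # the same value in MiB, rounded up (135)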

Depending on the current file system size, usage and layout, this may take a LONG time.

root@server:~# resize2fs -M /dev/VolGroup00/test
resize2fs 1.41.11 (14-Mar-2010)
Resizing the filesystem on /dev/VolGroup00/test to 34477 (4k) blocks.
The filesystem on /dev/VolGroup00/test is now 34477 blocks long.

MAKE SURE THAT YOUR PLANNED NEW VOLUME SIZE IS GREATER THAN THE SIZE OF THE FILESYSTEM. 
Keep in mind that LVM allocates space to volumes on Physical Extent (PE) granularity, so you may not be able to get the exact size you want.

root@server:~# vgdisplay
--- Volume group ---
VG Name               VolGroup00
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  44
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                20
Open LV               20
Max PV                0
Cur PV                1
Act PV                1
VG Size               930.50 GiB
PE Size               4.00 MiB
Total PE              238207
Alloc PE / Size       212966 / 831.90 GiB
Free  PE / Size       25241 / 98.60 GiB
VG UUID               LazRIZ-d8rU-2bwB-uI7o-6Amt-BH0w-oFxNh

In our example, the PE size is 4 MiB, so we will have to round the planned volume size up to the next multiple of 4 MiB.
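For instance, a hypothetical target of 142 MiB would be rounded up to 144 MiB (36 extents of 4 MiB each); the 200M target used below is already an exact multiple of 4 MiB, so no rounding is needed here.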

Just to be on the safe side, we strongly recommend adding some extra "padding"; in this example we will shrink the volume down to 200M, which is greater than BOTH the 139M used as reported by df and the 141M calculated from the block count above.

root@server:~# lvresize -L 200M /dev/VolGroup00/test
WARNING: Reducing active logical volume to 200.00 MiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce test? [y/n]: y
Reducing logical volume test to 200.00 MiB
Logical volume test successfully resized
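
Before growing the file system back, you may want to double-check the new volume size (output not shown here; it will vary):

root@server:~# lvdisplay /dev/VolGroup00/test | grep "LV Size"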

Now, grow the file system to use all of the space available in the logical volume:

root@server:~# resize2fs /dev/VolGroup00/test
resize2fs 1.41.11 (14-Mar-2010)
Resizing the filesystem on /dev/VolGroup00/test to 51200 (4k) blocks.
The filesystem on /dev/VolGroup00/test is now 51200 blocks long.

Mount the file system and check the new size:

root@server:~# mount /dev/VolGroup00/test /test
root@server:~# df -h /test
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-test
                      196M  131M   56M  71% /test
