I. Adding space on the existing ESXi virtual-machine disk (reboot required)

Section 1: Add a partition

1. Log in to the vCenter console and increase the virtual machine's disk size.


2. The guest OS still cannot see the added capacity; reboot, then create a partition on the newly added space:

fdisk /dev/sda
# Use the n command to create a primary (p) partition, then use t to change its ID to 8e (Linux LVM).

Command (m for help): n

Command action

     e     extended

     p     primary partition (1-4)

p

Partition number (1-4): 3

First cylinder (5222-6527, default 5222): 

Using default value 5222

Last cylinder, +cylinders or +size{K,M,G} (5222-6527, default 6527): 

Using default value 6527



Command (m for help): t

Partition number (1-4): 3

Hex code (type L to list codes): 8e

Changed system type of partition 3 to 8e (Linux LVM)



Command (m for help): p


Disk /dev/sda: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000ef931

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        5222    41430016   8e  Linux LVM
/dev/sda3            5222        6527    10485087+  8e  Linux LVM

Finally, enter w to write the table and exit.

Note: normally the server has to be rebooted at this point, otherwise the new sda3 cannot be used.
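For repeatable setups, the interactive session above can be driven non-interactively by piping the same keystrokes into fdisk. This is only a sketch: the DISK default and the APPLY guard are illustrative, and feeding these keys to a real disk rewrites its partition table, so the script just prints the plan unless explicitly armed.

```shell
#!/bin/sh
# Keystrokes for the fdisk session above: new primary partition 3
# (accept the default start/end cylinders), set type 8e (Linux LVM),
# then write the table.
DISK=${DISK:-/dev/sda}
keys=$(printf 'n\np\n3\n\n\nt\n3\n8e\nw\n')

if [ "${APPLY:-0}" = 1 ]; then
    # Destructive: only run against the intended disk, as root.
    printf '%s\n' "$keys" | fdisk "$DISK"
else
    printf 'Would feed to fdisk %s:\n%s\n' "$DISK" "$keys"
fi
```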

# How to avoid the reboot

My system is CentOS 6.2, where partprobe had no effect; the partition table has to be re-read with kpartx/partx instead.

After adding a partition: partx -a /dev/sda (or kpartx -a /dev/sda)

After deleting partition N: partx -d --nr N /dev/sda (note: --nr is an option of partx, not kpartx; the kpartx equivalent is kpartx -d)
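The fallback chain can be wrapped in a small helper. A sketch, where the run wrapper and DRY_RUN guard are illustrative; on a real host run it as root with DRY_RUN=0.

```shell
#!/bin/sh
# Re-read a disk's partition table without rebooting, trying
# partprobe first and falling back to partx, then kpartx
# (partprobe is unreliable on CentOS 6.2, per the note above).
DISK=${DISK:-/dev/sda}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run partprobe "$DISK" || run partx -a "$DISK" || run kpartx -a "$DISK"
```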

Section 2: Extend the LVM

1. Format /dev/sda3 (strictly speaking this mkfs is unnecessary, since pvcreate in the next step overwrites the filesystem anyway):

[root@nginx ~]# mkfs.ext4 /dev/sda3

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

655360 inodes, 2621271 blocks

131063 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=2684354560

80 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks: 

  32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632



Writing inode tables: done                                                        

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done



This filesystem will be automatically checked every 38 mounts or

180 days, whichever comes first.    Use tune2fs -c or -i to override.

2. Create the PV

[root@nginx ~]# pvcreate /dev/sda3

    Writing physical volume data to disk "/dev/sda3"

    Physical volume "/dev/sda3" successfully created

# List the PVs

[root@nginx ~]# pvdisplay 

    --- Physical volume ---
    PV Name                             /dev/sda2
    VG Name                             vg_nginx
    PV Size                             39.51 GiB / not usable 3.00 MiB
    Allocatable                     yes (but full)
    PE Size                             4.00 MiB
    Total PE                            10114
    Free PE                             0
    Allocated PE                    10114
    PV UUID                             Z7S451-CiWS-l66s-6hiU-j5YY-35Wq-KcijFH

    
    --- Physical volume ---
    PV Name                             /dev/sda3
    VG Name                             vg_nginx
    PV Size                             10.00 GiB / not usable 3.34 MiB
    Allocatable                     yes (but full)
    PE Size                             4.00 MiB
    Total PE                            2559
    Free PE                             0
    Allocated PE                    2559
    PV UUID                             aQivcj-noPv-Mpy8-Dlmw-1fBT-DqhL-W1eyOe

# Show the VG

[root@nginx ~]# vgdisplay 

    --- Volume group ---
    VG Name                             vg_nginx
    System ID                         
    Format                                lvm2
    Metadata Areas                1
    Metadata Sequence No    3
    VG Access                         read/write
    VG Status                         resizable
    MAX LV                                0
    Cur LV                                2
    Open LV                             2
    Max PV                                0
    Cur PV                                1
    Act PV                                1
    VG Size                             39.51 GiB
    PE Size                             4.00 MiB
    Total PE                            10114
    Alloc PE / Size             10114 / 39.51 GiB
    Free    PE / Size             0 / 0     
    VG UUID                             Yah1Wu-pdas-Lag1-ygDT-x9ck-XkZf-4EbwHb

3. Add the newly created PV to the corresponding VG


[root@nginx ~]# vgextend vg_nginx /dev/sda3

    Volume group "vg_nginx" successfully extended

Tip: an LV can be created with a percentage, e.g. lvcreate -l 100%FREE -n wenzi dige (wenzi is the LV name, dige the VG name).

# Confirm the VG was extended

[root@nginx ~]# vgdisplay 

    --- Volume group ---
    VG Name                             vg_nginx
    System ID                         
    Format                                lvm2
    Metadata Areas                2
    Metadata Sequence No    4
    VG Access                         read/write
    VG Status                         resizable
    MAX LV                                0
    Cur LV                                2
    Open LV                             2
    Max PV                                0
    Cur PV                                2
    Act PV                                2
    VG Size                             49.50 GiB
    PE Size                             4.00 MiB
    Total PE                            12673
    Alloc PE / Size             10114 / 39.51 GiB
    Free    PE / Size             2559 / 10.00 GiB
    VG UUID                             Yah1Wu-pdas-Lag1-ygDT-x9ck-XkZf-4EbwHb

# Show the LV status

[root@nginx ~]# lvdisplay 

    --- Logical volume ---
    LV Name                                /dev/vg_nginx/lv_root
    VG Name                                vg_nginx
    LV UUID                                3EG5sY-NPMC-CKIy-MDWE-0ln7-GgtZ-TYNJWk
    LV Write Access                read/write
    LV Status                            available
    # open                                 1
    LV Size                                37.54 GiB
    Current LE                         9610
    Segments                             1
    Allocation                         inherit
    Read ahead sectors         auto
    - currently set to         256
    Block device                     253:0


    --- Logical volume ---
    LV Name                                /dev/vg_nginx/lv_swap
    VG Name                                vg_nginx
    LV UUID                                J50VXw-rWdY-ZBw7-lRnh-xB8s-NBdv-wGnr0B
    LV Write Access                read/write
    LV Status                            available
    # open                                 1
    LV Size                                1.97 GiB
    Current LE                         504
    Segments                             1
    Allocation                         inherit
    Read ahead sectors         auto
    - currently set to         256

    Block device                     253:1

4. Grow /dev/vg_nginx/lv_root

[root@nginx ~]# lvextend -l +2559 /dev/vg_nginx/lv_root
# To allocate all remaining free space instead: lvextend -l +100%FREE /dev/mapper/dige-wenzi
    Extending logical volume lv_root to 47.54 GiB

    Logical volume lv_root successfully resized

# Or equivalently: lvextend -L +10G /dev/vg_nginx/lv_root
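A quick sanity check of the extent arithmetic above: lv_root had 9610 LEs (37.54 GiB) and the VG had 2559 free PEs of 4 MiB each, so after lvextend -l +2559 the LV should report 12169 LEs and about 47.54 GiB, which matches the final lvdisplay output below.

```shell
# Extent math for the lvextend above (PE size = 4 MiB).
old_le=9610
add_pe=2559
new_le=$((old_le + add_pe))
new_gib=$(awk -v le="$new_le" 'BEGIN { printf "%.2f", le * 4 / 1024 }')
echo "$new_le LEs = $new_gib GiB"   # prints: 12169 LEs = 47.54 GiB
```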

# Check the mounted filesystem sizes

[root@nginx ~]# df -hl

Filesystem                        Size    Used Avail Use% Mounted on
/dev/mapper/vg_nginx-lv_root
                                             37G    2.0G     34G     6% /
tmpfs                                 499M         0    499M     0% /dev/shm
/dev/sda1                         485M     51M    409M    12% /boot


5. As you can see, the filesystem has not grown yet; resize2fs still has to be run.

(resize2fs works for non-XFS filesystems; for XFS use xfs_growfs /dev/mapper/vg_nginx-lv_root instead.)

[root@nginx ~]# resize2fs /dev/mapper/vg_nginx-lv_root

resize2fs 1.41.12 (17-May-2010)

Filesystem at /dev/mapper/vg_nginx-lv_root is mounted on /; on-line resizing required

old desc_blocks = 3, new_desc_blocks = 3

Performing an on-line resize of /dev/mapper/vg_nginx-lv_root to 12461056 (4k) blocks.

The filesystem on /dev/mapper/vg_nginx-lv_root is now 12461056 blocks long.
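The block count reported by resize2fs is consistent with the new LV size: 12461056 blocks of 4 KiB each is exactly the 47.54 GiB shown by lvdisplay afterwards.

```shell
# Convert 12461056 4-KiB blocks to GiB (1 GiB = 262144 such blocks).
blocks=12461056
gib=$(awk -v b="$blocks" 'BEGIN { printf "%.2f", b / 262144 }')
echo "$gib GiB"   # prints: 47.54 GiB
```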

# Check again: the new space is there
[root@nginx ~]# df -hl
Filesystem                        Size    Used Avail Use% Mounted on
/dev/mapper/vg_nginx-lv_root
                                             47G    2.0G     43G     5% /
tmpfs                                 499M         0    499M     0% /dev/shm
/dev/sda1                         485M     51M    409M    12% /boot









# Final LV state:
[root@nginx ~]# lvdisplay 

    --- Logical volume ---
    LV Name                                /dev/vg_nginx/lv_root
    VG Name                                vg_nginx
    LV UUID                                3EG5sY-NPMC-CKIy-MDWE-0ln7-GgtZ-TYNJWk
    LV Write Access                read/write
    LV Status                            available
    # open                                 1
    LV Size                                47.54 GiB
    Current LE                         12169
    Segments                             2
    Allocation                         inherit
    Read ahead sectors         auto
    - currently set to         256
    Block device                     253:0

     
    --- Logical volume ---
    LV Name                                /dev/vg_nginx/lv_swap
    VG Name                                vg_nginx
    LV UUID                                J50VXw-rWdY-ZBw7-lRnh-xB8s-NBdv-wGnr0B
    LV Write Access                read/write
    LV Status                            available
    # open                                 1
    LV Size                                1.97 GiB
    Current LE                         504
    Segments                             1
    Allocation                         inherit
    Read ahead sectors         auto
    - currently set to         256
    Block device                     253:1




II. Adding a new Linux LVM disk online under ESXi (no reboot required)

(The difference from Part I: here a brand-new virtual disk is added to the ESXi VM, instead of growing the existing disk.)


Disk requirements are often hard to pin down when a Linux system is first deployed. To keep deployments extensible, here are my notes on adding a disk to a VMware ESXi VM online, without restarting the guest.

Step # 1: Add a new hard disk to the running virtual machine (screenshots omitted).

Step # 2: Make the Linux guest detect the new disk

export TMOUT=0    # disable the shell's idle-logout timer during the operation

echo "- - -" > /sys/class/scsi_host/host0/scan

If fdisk -l still does not show the disk, try replacing host0 with host1 or host2.
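Rather than guessing host0/host1/host2, every SCSI host can be rescanned in a loop. A sketch with a hypothetical rescan_hosts helper; on a real system call it as root against /sys/class/scsi_host.

```shell
#!/bin/sh
# Write "- - -" (wildcard channel/target/LUN) into every host's scan
# file so the kernel probes all SCSI hosts for new devices.
rescan_hosts() {
    base=$1
    for scan in "$base"/host*/scan; do
        [ -e "$scan" ] || continue
        echo "- - -" > "$scan"
        echo "rescanned $(basename "$(dirname "$scan")")"
    done
}
# Real usage: rescan_hosts /sys/class/scsi_host
```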


The new disk appears as /dev/sdb; check the /var/log/messages log for the detection messages (screenshot omitted).

echo "scsi add-single-device 0 0 2 0" > /proc/scsi/scsi

fdisk -l

cat /proc/scsi/scsi

The remaining steps (screenshots omitted; captions translated):

- Check the current SCSI state (cat /proc/scsi/scsi).
- Partition the disk as /dev/sdb1.
- Format /dev/sdb1 and create a new mount directory; if the space is being added to an existing directory, no /etc/fstab entry is needed.
- Change the partition ID to 8e (Linux LVM), save, and exit; no reboot is required.
- Create the physical volume (pvcreate).
- Extend the existing VG with vgextend.
- Extend the logical volume with lvresize.
- Finally, resize the filesystem so the new capacity becomes visible.


http://centilinux.blog.51cto.com/1454781/1017980

III. Online expansion of a Linux filesystem (a worked example)

 

When reposting, please keep the original source at the top: EMC Chinese Support Forum

Introduction

 

Can a Linux filesystem be expanded online? - Yes

Must the filesystem be remounted to expand it? - No

This article uses the LVM tools to demonstrate online expansion of a Linux filesystem: no server reboot, no remounting, and no changes to the running applications.

More information

Test environment

OS: Red Hat Enterprise Linux Server release 6.0 (Santiago), 64-bit

Filesystem: ext4

Tools: e2fsprogs-1.41.12-3.el6.x86_64

 

Logical Volume Manager (LVM) concepts

LVM concepts have already been covered in detail elsewhere and are not repeated here. For background, see:

AIX Logical Volume Manager (LVM) concepts: volume groups, physical/logical volumes, and partitions

Linux LVM features explained, with worked examples

 

LVM logical architecture

The layering (diagram omitted): physical volumes are grouped into a volume group, logical volumes are carved out of the volume group, and filesystems sit on top of the logical volumes.

 

Expansion walkthrough

From the architecture above, expanding an LVM-based Linux filesystem takes three main steps:

1. Extend the volume group;

2. Extend the logical volume;

3. Resize the filesystem online.

Note: perform the operations strictly in the order 1, 2, 3.
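The three steps can be sketched as commands. Here test_vg, lvol0, and /dev/loop1 are the names this section uses, but the exact flags are assumptions, so the script prints each step instead of executing it unless DRY_RUN=0.

```shell
#!/bin/sh
# Dry-run outline of the three-step LVM expansion, in the required order.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run pvcreate /dev/loop1                    # prep: make the new device a PV
run vgextend test_vg /dev/loop1            # step 1: extend the volume group
run lvextend -L +100G /dev/test_vg/lvol0   # step 2: extend the logical volume
run resize2fs /dev/test_vg/lvol0           # step 3: resize ext4 online
```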

 


Scenario: add 100 GB of disk space to the /media filesystem online (screenshot omitted).



1. Extend the volume group

1.1 Identify the volume group that needs to be extended.

 

1.2 Turn /dev/loop1 into a physical volume (screenshot omitted).

             

1.3 Extend the test_vg volume group (screenshot omitted).

 

             

2. Extend the logical volume (screenshot omitted)

          

 

 

3. Resize the filesystem online (screenshot omitted)

          

 

Troubleshooting:

1. resize2fs fails with "resize2fs: Operation not permitted While trying to add group #6656", and /var/log/messages shows: "Jul 30 15:37:53 localhost kernel: EXT4-fs warning (device dm-2): ext4_group_add: No reserved GDT blocks, can't resize"

This happens when the filesystem's reserved journal space is too small (note that the kernel message itself points at missing reserved GDT blocks for online growth); check the journal with dumpe2fs /dev/test_vg/lvol0 | grep -i Journal. (The journal size is computed automatically by e2fsprogs from the filesystem size, but it can also be specified manually; a larger journal is generally better for filesystem performance, and ext4's maximum journal size is 400 MB.)

Workaround:

Delete the existing journal and create a new one.

Steps:

$ e2fsck -C 0 /dev/os/test

e2fsck 1.40.2 (12-Jul-2007)

/dev/os/test: clean, 11/524288 files, 24822/524288 blocks

 

$ tune2fs -O ^has_journal /dev/os/test    # remove the existing journal

tune2fs 1.40.2 (12-Jul-2007)

 

$ tune2fs -j /dev/os/test    # create a new journal automatically

tune2fs 1.40.2 (12-Jul-2007)

Creating journal inode: done

This filesystem will be automatically checked every 33 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

 

$ e2fsck -C 0 /dev/os/test

e2fsck 1.40.2 (12-Jul-2007)

/dev/os/test: clean, 11/524288 files, 24822/524288 blocks

 

Note: this procedure requires the filesystem to be unmounted; follow the steps above strictly in order.

If you need to do this on a production system, be sure to test and take backups first.

References:

 

- http://h30499.www3.hp.com/t5/System-Administration/Online-resize2fs-Operation-not-permitted-While-trying-to-add/td-p/4680934

- https://bugzilla.redhat.com/show_bug.cgi?id=160612


https://community.emc.com/docs/DOC-18041