Install a fresh Linux system, choosing the automatic default partitioning, and add three extra hard disks.
After installation, run setup and set the IP address to 192.168.0.100.
Use a script to create ten users, huangnan1 through huangnan10, all in the group huangnan. Edit the script as follows:
vim useradd.sh
#!/bin/bash
  groupadd huangnan

  for username in huangnan1 huangnan2 huangnan3 huangnan4 huangnan5 huangnan6 huangnan7 huangnan8 huangnan9 huangnan10
  do
      useradd -g huangnan $username
      echo "123456" | passwd --stdin $username
  done
  :wq!
  sh useradd.sh    (press Enter; the ten users are created)
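The ten usernames follow a strict pattern, so instead of listing them by hand they can be generated with seq. A minimal sketch, printed as a dry run so it needs no root privileges (the huangnan prefix and group come from the script above):

```shell
#!/bin/sh
# Generate huangnan1..huangnan10 with seq and print the useradd
# commands the script above would run (dry run -- no root needed).
# Drop the echo to execute them for real.
for i in $(seq 1 10); do
    username="huangnan${i}"
    echo "useradd -g huangnan ${username}"
done
```

The same loop shape works for the passwd step as well.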
  


Partition the three added disks and build a RAID5 array from them.
Run fdisk -l to list the disks; the three new ones are /dev/sdb, /dev/sdc and /dev/sdd. fdisk takes no -n/-p/-w flags on the command line; partition each disk interactively:
fdisk /dev/sdb    (n for a new partition, p for primary, accept the defaults, w to write)
fdisk /dev/sdc    (same keystrokes)
fdisk /dev/sdd    (same keystrokes)
Run partprobe to force the kernel to reread the partition tables. There is no need to run mkfs.ext3 on the raw disks here: the filesystem will be created on the assembled array (/dev/md1) later.
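The interactive keystrokes can also be fed to fdisk from a script. This is a dry-run sketch that only prints the commands; the exact prompt sequence varies between fdisk versions, so verify it interactively before piping input for real:

```shell
#!/bin/sh
# fdisk reads keystrokes from stdin: n = new partition, p = primary,
# 1 = partition number, two blank lines = default first/last sector,
# w = write table. Print the piped commands instead of running them.
keys='n\np\n1\n\n\nw\n'
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    printf '%s\n' "printf '${keys}' | fdisk ${disk}"
done
```

Remove the outer printf wrapper to execute the piped fdisk commands on a real system (as root).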
    

Create the RAID5 array. The --auto=yes option makes mdadm create the device node /dev/md1 automatically; the level is 5, with two partitions as active data devices and one as a hot spare:
mdadm --create --auto=yes /dev/md1 --level=5 --raid-devices=2 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1
Then check it:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active raid5 sdc1[1] sdd1[2](S) sdb1[0]
      15727488 blocks level 5, 64k chunk, algorithm 2 [2/2] [UU]
Or inspect the RAID device in detail:
[root@localhost ~]# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Fri Dec 28 15:19:24 2012
     Raid Level : raid5
     Array Size : 15727488 (15.00 GiB 16.10 GB)
  Used Dev Size : 15727488 (15.00 GiB 16.10 GB)
   Raid Devices : 2
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent
Once the array is built successfully, format it:
mkfs.ext3 /dev/md1
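The [UU] field in the /proc/mdstat output above means both active members are up; a failed member would show as [_U] or [U_]. A small sketch of checking this from a script. It reads a file argument so it can be tried on sample text; point it at /proc/mdstat on the real system:

```shell
#!/bin/sh
# Return success if the mdstat-style text in file $1 reports all
# array members up ([UU]); a degraded array shows [_U] or [U_].
md_healthy() {
    grep -q '\[UU\]' "$1"
}

# Real-world usage (assumes a running array):
#   md_healthy /proc/mdstat && echo "array OK"
```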


Then create a raid5 mount-point directory under /mnt/: mkdir /mnt/raid5
And mount /dev/md1 on it: mount /dev/md1 /mnt/raid5
[root@localhost ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      18156292   2621124  14598004  16% /
/dev/sda1               101086     12290     83577  13% /boot
tmpfs                   517352         0    517352   0% /dev/shm
/dev/md1              15480688    169608  14524708   2% /mnt/raid5
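For a scripted check that the array mounted and how full it is, the Use% column can be pulled out of df output with awk. A sketch, reading df-style text on stdin (mount-point name is whatever you pass in):

```shell
#!/bin/sh
# Print the Use% figure (without the % sign) for the given mount
# point from df-style output on stdin; prints nothing if the
# mount point is absent.
usage_for_mount() {
    awk -v mp="$1" '$NF == mp { gsub(/%/, "", $(NF-1)); print $(NF-1) }'
}

# Usage on a live system:
#   df | usage_for_mount /mnt/raid5
```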


Next, set the array to assemble automatically: edit the /etc/mdadm.conf configuration file with vim and add the array line (the keyword is uppercase ARRAY):
ARRAY /dev/md1 UUID=d514ef53:72a92c22:a661cd3b:64942bff
Then edit the mount table: vim /etc/fstab
Change the entry for the array from
/dev/md1               /mnt/raid5               ext3    defaults         0 0
to enable quotas:
/dev/md1               /mnt/raid5               ext3    defaults,usrquota,grpquota 0 0
Remount so the user and group quota options take effect:
mount -o remount,usrquota,grpquota /mnt/raid5
After the quota files are created with quotacheck (next step), listing the mount point shows:
[root@localhost raid5]# ls
aquota.group  aquota.user   lost+found
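The quota mount options are easy to mistype (userquota instead of usrquota, or a period instead of a comma), so a quick sanity check of the fstab line helps. A sketch that inspects the options field of a single fstab entry passed as a string:

```shell
#!/bin/sh
# Succeed only if the fourth (options) field of an fstab line
# contains both usrquota and grpquota.
has_quota_opts() {
    opts=$(echo "$1" | awk '{ print $4 }')
    echo "$opts" | grep -q usrquota && echo "$opts" | grep -q grpquota
}

# Usage against the real file:
#   grep '/mnt/raid5' /etc/fstab | while read line; do
#       has_quota_opts "$line" || echo "quota options missing"
#   done
```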


Create the quota database files:
[root@localhost ~]# quotacheck -avug
quotacheck: Scanning /dev/md1 [/mnt/raid5] done
quotacheck: Checked 3 directories and 5 files
Then turn quotas on:
[root@localhost ~]# quotaon   -auvg
/dev/md1 [/mnt/raid5]: group quotas turned on
/dev/md1 [/mnt/raid5]: user quotas turned on

Next, edit the quota for one user:
edquota -u huangnan1
Disk quotas for user huangnan1 (uid 500):
  Filesystem                   blocks       soft       hard     inodes     soft     hard
  /dev/md1                          0       3000       5000          0        0        0

Then copy the same limits to the remaining nine users:
[root@localhost ~]# edquota -p huangnan1  -u huangnan2
[root@localhost ~]# edquota -p huangnan1  -u huangnan3
[root@localhost ~]# edquota -p huangnan1  -u huangnan4
[root@localhost ~]# edquota -p huangnan1  -u huangnan5
[root@localhost ~]# edquota -p huangnan1  -u huangnan6
[root@localhost ~]# edquota -p huangnan1  -u huangnan7
[root@localhost ~]# edquota -p huangnan1  -u huangnan8
[root@localhost ~]# edquota -p huangnan1  -u huangnan9
[root@localhost ~]# edquota -p huangnan1  -u huangnan10
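The nine edquota -p copies differ only in the target username, so a loop does the same job. A dry-run sketch that prints the commands (edquota itself needs root and the quota-enabled mount):

```shell
#!/bin/sh
# Copy huangnan1's quota to huangnan2..huangnan10 in one loop;
# printed as a dry run, drop the echo to execute.
for i in $(seq 2 10); do
    echo "edquota -p huangnan1 -u huangnan${i}"
done
```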

Set up a symbolic link

ln -s /home /mnt/raid5/
cd /mnt/raid5/
[root@localhost raid5]# ls
aquota.group  aquota.user  home  lost+found
As long as an extra home entry appears, it worked: the /home directory from the root filesystem is now linked into the raid5 mount.

[root@localhost raid5]# ll
total 32
-rw------- 1 root root  6144 12-28 18:17 aquota.group
-rw------- 1 root root  7168 12-28 18:25 aquota.user
lrwxrwxrwx 1 root root     5 12-28 17:01 home -> /home
drwx------ 2 root root 16384 12-28 15:52 lost+found
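The link target shown in the listing (home -> /home) can be verified from a script with readlink rather than by eyeballing ll output. A small sketch:

```shell
#!/bin/sh
# Succeed only if symlink $1 points exactly at target $2;
# readlink prints the stored link target.
link_points_to() {
    [ "$(readlink "$1")" = "$2" ]
}

# Usage for the link created above:
#   link_points_to /mnt/raid5/home /home && echo "link OK"
```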


Configure the syslog service
vim /etc/sysconfig/syslog
# Options to syslogd
# -m 0 disables 'MARK' messages.
# -r enables logging from remote machines
# -x disables DNS lookups on messages recieved with -r
# See syslogd(8) for more details
SYSLOGD_OPTIONS="-m 0 -r"
# Options to klogd
# -2 prints all kernel oops messages twice; once for klogd to decode, and
#    once for processing with 'ksymoops'
# -x disables all klogd processing of oops messages entirely
# See klogd(8) for more details
Just add -r to SYSLOGD_OPTIONS so it reads SYSLOGD_OPTIONS="-m 0 -r".
Save and exit, then restart the syslog service:
service syslog restart
Shutting down kernel logger:                               [  OK  ]
Shutting down system logger:                               [  OK  ]
Starting system logger:                                    [  OK  ]
Starting kernel logger:                                    [  OK  ]
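The one-character edit to SYSLOGD_OPTIONS can also be scripted with sed instead of done by hand in vim. A sketch that takes the file path as an argument, so it can be tried safely on a copy before pointing it at /etc/sysconfig/syslog:

```shell
#!/bin/sh
# Rewrite SYSLOGD_OPTIONS="-m 0" to SYSLOGD_OPTIONS="-m 0 -r"
# in place in the file given as $1 (assumes the stock default
# value; GNU sed -i).
enable_remote_logging() {
    sed -i 's/^SYSLOGD_OPTIONS="-m 0"/SYSLOGD_OPTIONS="-m 0 -r"/' "$1"
}

# Real usage (as root, then restart syslog):
#   enable_remote_logging /etc/sysconfig/syslog
```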


On another machine, the client, configure the following.
First edit the syslog service's configuration file:
vim /etc/syslog.conf
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                 /dev/console
*.*                                                     @192.168.18.111
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none                /var/log/messages

(This should be 192.168.0.100, but a separately configured IP was used in the end to test the setup's reliability.)
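Every message the *.* rule forwards over UDP starts with a priority field <PRI>, computed as facility * 8 + severity (per the BSD syslog convention, RFC 3164). Knowing this helps when inspecting raw packets arriving at the server. A sketch of the arithmetic; the function name is ours, not part of any tool:

```shell
#!/bin/sh
# syslog priority = facility * 8 + severity; e.g. facility user (1)
# at severity info (6) yields PRI 14, so the packet begins "<14>".
syslog_pri() {
    echo $(( $1 * 8 + $2 ))
}

syslog_pri 1 6    # user.info -> prints 14
```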


Then on the server, run tail to check whether the client's log messages are reaching the server (client IP: 192.168.18.150):
[root@localhost ~]# tail -f /var/log/messages
Dec 28 17:09:37 localhost NET[15329]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Dec 28 17:13:41 localhost avahi-daemon[4043]: Invalid query packet.
Dec 28 17:14:21 localhost last message repeated 7 times
Dec 28 17:17:26 localhost kernel: Kernel logging (proc) stopped.
Dec 28 17:17:26 localhost kernel: Kernel log daemon terminating.
Dec 28 17:17:27 localhost exiting on signal 15
Dec 28 17:17:27 localhost syslogd 1.4.1: restart (remote reception).
Dec 28 17:17:27 localhost kernel: klogd 1.4.1, log source = /proc/kmsg started.
Dec 28 17:43:50 192.168.18.150 syslogd 1.4.1: restart (remote reception).
Dec 28 17:43:50 192.168.18.150 kernel: klogd 1.4.1, log source = /proc/kmsg started.


Switch to an ordinary user:
[root@localhost ~]# su - huangnan1
Then give ordinary users write permission:
[root@localhost ~]# chmod o+w /mnt/
[root@localhost ~]# ll -d /mnt/raid5
drwxr-xr-x 3 root root 4096 12-28 18:17 /mnt/raid5

[huangnan1@localhost ~]$ dd if=/dev/zero  of=huangnan1 bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 3.58825 seconds, 87.7 MB/s
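The dd above reports 314572800 bytes, which is exactly bs * count (1 MiB * 300). Comparing a file's byte count against that product is a simple way to confirm whether a write completed in full or was cut short, e.g. by the quota's hard block limit. A small-scale sketch using a temporary file:

```shell
#!/bin/sh
# Write bs*count bytes with dd, then verify the file size matches.
# A quota-capped write would come up short of the expected size.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=10 2>/dev/null
expected=$((1024 * 10))
actual=$(wc -c < "$f")
echo "$actual"        # 10240
[ "$actual" -eq "$expected" ] && echo "write completed in full"
rm -f "$f"
```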