Today's post adds the storage-related deployment, namely Block Storage and Object Storage. To keep things simple while still seeing results, each is deployed on a single node: one node for Block and one node for Object.


 

Readers may notice that this diagram differs slightly from the previous two posts: the two storage nodes are different. That is fine. The reason for the change is that I no longer have time to keep investing in this project and have to move on to another, more urgent one. Well, plans never keep up with changes... enough rambling.

 

Deploying cinder.

Steps prefixed with cN are performed on the controller node; steps prefixed with ccN are performed on the cinder (Block Storage) node.

c1. Prepare the database

mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'openstack';
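Optionally (not part of the original steps), you can confirm the grants work before moving on. This assumes the controller is reachable as node0 and the password is openstack, as set above:

# Quick sanity check: the cinder DB user should be able to log in and see its database.
mysql -u cinder -popenstack -h node0 -e "SHOW DATABASES;"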

 

c2. Create the service entities and endpoints

source admin-openrc.sh

openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin

openstack service create --name cinder   --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

openstack endpoint create --region RegionOne volume public http://node0:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://node0:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://node0:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 public http://node0:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://node0:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://node0:8776/v2/%\(tenant_id\)s
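Optionally (an extra check, not part of the original steps), confirm that the services and endpoints were registered; both volume and volumev2 should show up:

# List the registered services and endpoints.
openstack service list
openstack endpoint list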

 

c3. Install the packages

yum install openstack-cinder python-cinderclient

 

c4. Configure /etc/cinder/cinder.conf. Only the settings below need to be changed; the rest of the file can keep its default values.

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.1.100
verbose = True

[database]
connection = mysql://cinder:openstack@node0/cinder

[oslo_messaging_rabbit]
rabbit_host = node0
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = openstack

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
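Editing the file by hand is exactly what I did; purely as a convenience sketch (not part of the original steps), the same settings could be applied non-interactively with crudini if that tool happens to be installed, for example:

# Hypothetical helper: apply a few of the settings above with crudini.
crudini --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
crudini --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
crudini --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.1.100
crudini --set /etc/cinder/cinder.conf database connection mysql://cinder:openstack@node0/cinder
crudini --set /etc/cinder/cinder.conf keystone_authtoken username cinder
crudini --set /etc/cinder/cinder.conf keystone_authtoken password openstack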

 

c5. Sync the database

su -s /bin/sh -c "cinder-manage db sync" cinder



1 [root@node0 opt]# su -s /bin/sh -c "cinder-manage db sync" cinder
  2 No handlers could be found for logger "oslo_config.cfg"
  3 /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:241: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  4   exception.NotSupportedWarning
  5 2016-02-24 09:26:25.731 1199 INFO migrate.versioning.api [-] 0 -> 1... 
  6 2016-02-24 09:26:27.005 1199 INFO migrate.versioning.api [-] done
  7 2016-02-24 09:26:27.005 1199 INFO migrate.versioning.api [-] 1 -> 2... 
  8 2016-02-24 09:26:27.338 1199 INFO migrate.versioning.api [-] done
  9 2016-02-24 09:26:27.339 1199 INFO migrate.versioning.api [-] 2 -> 3... 
 10 2016-02-24 09:26:27.396 1199 INFO migrate.versioning.api [-] done
 11 2016-02-24 09:26:27.397 1199 INFO migrate.versioning.api [-] 3 -> 4... 
 12 2016-02-24 09:26:27.731 1199 INFO migrate.versioning.api [-] done
 13 2016-02-24 09:26:27.731 1199 INFO migrate.versioning.api [-] 4 -> 5... 
 14 2016-02-24 09:26:27.814 1199 INFO migrate.versioning.api [-] done
 15 2016-02-24 09:26:27.814 1199 INFO migrate.versioning.api [-] 5 -> 6... 
 16 2016-02-24 09:26:27.889 1199 INFO migrate.versioning.api [-] done
 17 2016-02-24 09:26:27.889 1199 INFO migrate.versioning.api [-] 6 -> 7... 
 18 2016-02-24 09:26:27.964 1199 INFO migrate.versioning.api [-] done
 19 2016-02-24 09:26:27.964 1199 INFO migrate.versioning.api [-] 7 -> 8... 
 20 2016-02-24 09:26:28.014 1199 INFO migrate.versioning.api [-] done
 21 2016-02-24 09:26:28.014 1199 INFO migrate.versioning.api [-] 8 -> 9... 
 22 2016-02-24 09:26:28.072 1199 INFO migrate.versioning.api [-] done
 23 2016-02-24 09:26:28.073 1199 INFO migrate.versioning.api [-] 9 -> 10... 
 24 2016-02-24 09:26:28.123 1199 INFO migrate.versioning.api [-] done
 25 2016-02-24 09:26:28.124 1199 INFO migrate.versioning.api [-] 10 -> 11... 
 26 2016-02-24 09:26:28.214 1199 INFO migrate.versioning.api [-] done
 27 2016-02-24 09:26:28.214 1199 INFO migrate.versioning.api [-] 11 -> 12... 
 28 2016-02-24 09:26:28.297 1199 INFO migrate.versioning.api [-] done
 29 2016-02-24 09:26:28.298 1199 INFO migrate.versioning.api [-] 12 -> 13... 
 30 2016-02-24 09:26:28.381 1199 INFO migrate.versioning.api [-] done
 31 2016-02-24 09:26:28.381 1199 INFO migrate.versioning.api [-] 13 -> 14... 
 32 2016-02-24 09:26:28.465 1199 INFO migrate.versioning.api [-] done
 33 2016-02-24 09:26:28.465 1199 INFO migrate.versioning.api [-] 14 -> 15... 
 34 2016-02-24 09:26:28.489 1199 INFO migrate.versioning.api [-] done
 35 2016-02-24 09:26:28.489 1199 INFO migrate.versioning.api [-] 15 -> 16... 
 36 2016-02-24 09:26:28.548 1199 INFO migrate.versioning.api [-] done
 37 2016-02-24 09:26:28.548 1199 INFO migrate.versioning.api [-] 16 -> 17... 
 38 2016-02-24 09:26:28.807 1199 INFO migrate.versioning.api [-] done
 39 2016-02-24 09:26:28.807 1199 INFO migrate.versioning.api [-] 17 -> 18... 
 40 2016-02-24 09:26:28.991 1199 INFO migrate.versioning.api [-] done
 41 2016-02-24 09:26:28.992 1199 INFO migrate.versioning.api [-] 18 -> 19... 
 42 2016-02-24 09:26:29.074 1199 INFO migrate.versioning.api [-] done
 43 2016-02-24 09:26:29.074 1199 INFO migrate.versioning.api [-] 19 -> 20... 
 44 2016-02-24 09:26:29.132 1199 INFO migrate.versioning.api [-] done
 45 2016-02-24 09:26:29.133 1199 INFO migrate.versioning.api [-] 20 -> 21... 
 46 2016-02-24 09:26:29.183 1199 INFO migrate.versioning.api [-] done
 47 2016-02-24 09:26:29.183 1199 INFO migrate.versioning.api [-] 21 -> 22... 
 48 2016-02-24 09:26:29.257 1199 INFO migrate.versioning.api [-] done
 49 2016-02-24 09:26:29.257 1199 INFO migrate.versioning.api [-] 22 -> 23... 
 50 2016-02-24 09:26:29.349 1199 INFO migrate.versioning.api [-] done
 51 2016-02-24 09:26:29.349 1199 INFO migrate.versioning.api [-] 23 -> 24... 
 52 2016-02-24 09:26:29.649 1199 INFO migrate.versioning.api [-] done
 53 2016-02-24 09:26:29.649 1199 INFO migrate.versioning.api [-] 24 -> 25... 
 54 2016-02-24 09:26:30.158 1199 INFO migrate.versioning.api [-] done
 55 2016-02-24 09:26:30.158 1199 INFO migrate.versioning.api [-] 25 -> 26... 
 56 2016-02-24 09:26:30.183 1199 INFO migrate.versioning.api [-] done
 57 2016-02-24 09:26:30.184 1199 INFO migrate.versioning.api [-] 26 -> 27... 
 58 2016-02-24 09:26:30.191 1199 INFO migrate.versioning.api [-] done
 59 2016-02-24 09:26:30.192 1199 INFO migrate.versioning.api [-] 27 -> 28... 
 60 2016-02-24 09:26:30.200 1199 INFO migrate.versioning.api [-] done
 61 2016-02-24 09:26:30.200 1199 INFO migrate.versioning.api [-] 28 -> 29... 
 62 2016-02-24 09:26:30.208 1199 INFO migrate.versioning.api [-] done
 63 2016-02-24 09:26:30.208 1199 INFO migrate.versioning.api [-] 29 -> 30... 
 64 2016-02-24 09:26:30.216 1199 INFO migrate.versioning.api [-] done
 65 2016-02-24 09:26:30.217 1199 INFO migrate.versioning.api [-] 30 -> 31... 
 66 2016-02-24 09:26:30.233 1199 INFO migrate.versioning.api [-] done
 67 2016-02-24 09:26:30.233 1199 INFO migrate.versioning.api [-] 31 -> 32... 
 68 2016-02-24 09:26:30.342 1199 INFO migrate.versioning.api [-] done
 69 2016-02-24 09:26:30.342 1199 INFO migrate.versioning.api [-] 32 -> 33... 
 70 /usr/lib64/python2.7/site-packages/sqlalchemy/sql/schema.py:2922: SAWarning: Table 'encryption' specifies columns 'volume_type_id' as primary_key=True, not matching locally specified columns 'encryption_id'; setting the current primary key columns to 'encryption_id'. This warning may become an exception in a future release
 71   ", ".join("'%s'" % c.name for c in self.columns)
 72 2016-02-24 09:26:30.600 1199 INFO migrate.versioning.api [-] done
 73 2016-02-24 09:26:30.600 1199 INFO migrate.versioning.api [-] 33 -> 34... 
 74 2016-02-24 09:26:30.675 1199 INFO migrate.versioning.api [-] done
 75 2016-02-24 09:26:30.675 1199 INFO migrate.versioning.api [-] 34 -> 35... 
 76 2016-02-24 09:26:30.759 1199 INFO migrate.versioning.api [-] done
 77 2016-02-24 09:26:30.759 1199 INFO migrate.versioning.api [-] 35 -> 36... 
 78 2016-02-24 09:26:30.860 1199 INFO migrate.versioning.api [-] done
 79 2016-02-24 09:26:30.860 1199 INFO migrate.versioning.api [-] 36 -> 37... 
 80 2016-02-24 09:26:30.942 1199 INFO migrate.versioning.api [-] done
 81 2016-02-24 09:26:30.943 1199 INFO migrate.versioning.api [-] 37 -> 38... 
 82 2016-02-24 09:26:31.059 1199 INFO migrate.versioning.api [-] done
 83 2016-02-24 09:26:31.059 1199 INFO migrate.versioning.api [-] 38 -> 39... 
 84 2016-02-24 09:26:31.134 1199 INFO migrate.versioning.api [-] done
 85 2016-02-24 09:26:31.134 1199 INFO migrate.versioning.api [-] 39 -> 40... 
 86 2016-02-24 09:26:31.502 1199 INFO migrate.versioning.api [-] done
 87 2016-02-24 09:26:31.502 1199 INFO migrate.versioning.api [-] 40 -> 41... 
 88 2016-02-24 09:26:31.577 1199 INFO migrate.versioning.api [-] done
 89 2016-02-24 09:26:31.577 1199 INFO migrate.versioning.api [-] 41 -> 42... 
 90 2016-02-24 09:26:31.586 1199 INFO migrate.versioning.api [-] done
 91 2016-02-24 09:26:31.586 1199 INFO migrate.versioning.api [-] 42 -> 43... 
 92 2016-02-24 09:26:31.594 1199 INFO migrate.versioning.api [-] done
 93 2016-02-24 09:26:31.594 1199 INFO migrate.versioning.api [-] 43 -> 44... 
 94 2016-02-24 09:26:31.602 1199 INFO migrate.versioning.api [-] done
 95 2016-02-24 09:26:31.602 1199 INFO migrate.versioning.api [-] 44 -> 45... 
 96 2016-02-24 09:26:31.610 1199 INFO migrate.versioning.api [-] done
 97 2016-02-24 09:26:31.611 1199 INFO migrate.versioning.api [-] 45 -> 46... 
 98 2016-02-24 09:26:31.619 1199 INFO migrate.versioning.api [-] done
 99 2016-02-24 09:26:31.619 1199 INFO migrate.versioning.api [-] 46 -> 47... 
100 2016-02-24 09:26:31.643 1199 INFO migrate.versioning.api [-] done
101 2016-02-24 09:26:31.644 1199 INFO migrate.versioning.api [-] 47 -> 48... 
102 2016-02-24 09:26:31.719 1199 INFO migrate.versioning.api [-] done
103 2016-02-24 09:26:31.719 1199 INFO migrate.versioning.api [-] 48 -> 49... 
104 2016-02-24 09:26:31.852 1199 INFO migrate.versioning.api [-] done
105 2016-02-24 09:26:31.853 1199 INFO migrate.versioning.api [-] 49 -> 50... 
106 2016-02-24 09:26:31.936 1199 INFO migrate.versioning.api [-] done
107 2016-02-24 09:26:31.936 1199 INFO migrate.versioning.api [-] 50 -> 51... 
108 2016-02-24 09:26:32.019 1199 INFO migrate.versioning.api [-] done
109 2016-02-24 09:26:32.020 1199 INFO migrate.versioning.api [-] 51 -> 52... 
110 2016-02-24 09:26:32.120 1199 INFO migrate.versioning.api [-] done
111 2016-02-24 09:26:32.120 1199 INFO migrate.versioning.api [-] 52 -> 53... 
112 2016-02-24 09:26:32.378 1199 INFO migrate.versioning.api [-] done
113 2016-02-24 09:26:32.378 1199 INFO migrate.versioning.api [-] 53 -> 54... 
114 2016-02-24 09:26:32.470 1199 INFO migrate.versioning.api [-] done
115 2016-02-24 09:26:32.470 1199 INFO migrate.versioning.api [-] 54 -> 55... 
116 2016-02-24 09:26:32.662 1199 INFO migrate.versioning.api [-] done
117 2016-02-24 09:26:32.662 1199 INFO migrate.versioning.api [-] 55 -> 56... 
118 2016-02-24 09:26:32.670 1199 INFO migrate.versioning.api [-] done
119 2016-02-24 09:26:32.670 1199 INFO migrate.versioning.api [-] 56 -> 57... 
120 2016-02-24 09:26:32.678 1199 INFO migrate.versioning.api [-] done
121 2016-02-24 09:26:32.678 1199 INFO migrate.versioning.api [-] 57 -> 58... 
122 2016-02-24 09:26:32.686 1199 INFO migrate.versioning.api [-] done
123 2016-02-24 09:26:32.686 1199 INFO migrate.versioning.api [-] 58 -> 59... 
124 2016-02-24 09:26:32.695 1199 INFO migrate.versioning.api [-] done
125 2016-02-24 09:26:32.695 1199 INFO migrate.versioning.api [-] 59 -> 60... 
126 2016-02-24 09:26:32.703 1199 INFO migrate.versioning.api [-] done
127 [root@node0 opt]#
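After the migrations finish, a quick optional check (not in the original steps) is to confirm that tables were actually created in the cinder database, using the credentials from step c1:

# Spot-check that "cinder-manage db sync" created tables in the cinder database.
mysql -u cinder -popenstack -h node0 cinder -e "SHOW TABLES;" | head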


 

c6. Configure nova: /etc/nova/nova.conf

[cinder]
os_region_name = RegionOne

 

c7. Restart the Compute API service (this takes quite a while)

systemctl restart openstack-nova-api.service



1 /var/log/message:
  2 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.869 18958 INFO oslo_service.service [-] Child 18979 exited with status 0
  3 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.870 18958 INFO oslo_service.service [-] Child 18986 killed by signal 15
  4 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.871 18958 INFO oslo_service.service [-] Child 18990 exited with status 0
  5 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.872 18958 INFO oslo_service.service [-] Child 18977 exited with status 0
  6 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.880 18958 INFO oslo_service.service [-] Child 19019 exited with status 0
  7 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.882 18958 INFO oslo_service.service [-] Child 19023 killed by signal 15
  8 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.883 18958 INFO oslo_service.service [-] Child 19025 exited with status 0
  9 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.885 18958 INFO oslo_service.service [-] Child 18984 exited with status 0
 10 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.886 18958 INFO oslo_service.service [-] Child 19018 exited with status 0
 11 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.886 18958 INFO oslo_service.service [-] Child 19024 exited with status 0
 12 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.887 18958 INFO oslo_service.service [-] Child 18983 exited with status 0
 13 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.890 18958 INFO oslo_service.service [-] Child 19010 killed by signal 15
 14 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.891 18958 INFO oslo_service.service [-] Child 19017 killed by signal 15
 15 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.894 18958 INFO oslo_service.service [-] Child 18980 exited with status 0
 16 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.896 18958 INFO oslo_service.service [-] Child 19021 exited with status 0
 17 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.896 18958 INFO oslo_service.service [-] Child 18978 exited with status 0
 18 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.897 18958 INFO oslo_service.service [-] Child 18992 exited with status 0
 19 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.898 18958 INFO oslo_service.service [-] Child 19026 killed by signal 15
 20 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.899 18958 INFO oslo_service.service [-] Child 18987 exited with status 0
 21 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.901 18958 INFO oslo_service.service [-] Child 18991 killed by signal 15
 22 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.901 18958 INFO oslo_service.service [-] Child 19016 killed by signal 15
 23 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.902 18958 INFO oslo_service.service [-] Child 19012 exited with status 0
 24 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.902 18958 INFO oslo_service.service [-] Child 19013 exited with status 0
 25 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.904 18958 INFO oslo_service.service [-] Child 18982 killed by signal 15
 26 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.904 18958 INFO oslo_service.service [-] Child 19015 killed by signal 15
 27 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.905 18958 INFO oslo_service.service [-] Child 19020 killed by signal 15
 28 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.906 18958 INFO oslo_service.service [-] Child 18988 killed by signal 15
 29 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.909 18958 INFO oslo_service.service [-] Child 18981 exited with status 0
 30 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.910 18958 INFO oslo_service.service [-] Child 19022 killed by signal 15
 31 Feb 24 09:39:06 node0 nova-api: 2016-02-24 09:39:06.911 18958 INFO oslo_service.service [-] Child 19011 killed by signal 15
 32 Feb 24 09:39:07 node0 dnsmasq-dhcp[21792]: DHCPDISCOVER(ns-3bf4d3fc-7e) 90:b1:1c:2e:01:b4 no address available
 33 Feb 24 09:39:15 node0 dnsmasq-dhcp[21792]: DHCPDISCOVER(ns-3bf4d3fc-7e) 90:b1:1c:2e:01:b4 no address available
 34 Feb 24 09:39:25 node0 dnsmasq-dhcp[21792]: DHCPDISCOVER(ns-3bf4d3fc-7e) 90:b1:1c:2e:01:b4 no address available
 35 Feb 24 09:39:37 node0 dnsmasq-dhcp[21792]: DHCPDISCOVER(ns-3bf4d3fc-7e) 90:b1:1c:2e:01:b4 no address available
 36 Feb 24 09:39:43 node0 dnsmasq-dhcp[21792]: DHCPDISCOVER(ns-3bf4d3fc-7e) 90:b1:1c:2e:01:b4 no address available
 37 Feb 24 09:39:45 node0 dnsmasq-dhcp[21792]: DHCPDISCOVER(ns-3bf4d3fc-7e) 90:b1:1c:2e:01:b4 no address available
 38 Feb 24 09:39:52 node0 dnsmasq-dhcp[21792]: DHCPDISCOVER(ns-3bf4d3fc-7e) 90:b1:1c:2e:01:b4 no address available
 39 Feb 24 09:39:59 node0 dnsmasq-dhcp[21792]: DHCPDISCOVER(ns-3bf4d3fc-7e) 90:b1:1c:2e:01:b4 no address available
 40 Feb 24 09:40:01 node0 systemd: Started Session 3527 of user root.
 41 Feb 24 09:40:01 node0 systemd: Starting Session 3527 of user root.
 42 Feb 24 09:40:11 node0 dnsmasq-dhcp[21792]: DHCPDISCOVER(ns-3bf4d3fc-7e) 90:b1:1c:2e:01:b4 no address available
 43 Feb 24 09:40:25 node0 dnsmasq-dhcp[21792]: DHCPDISCOVER(ns-3bf4d3fc-7e) 90:b1:1c:2e:01:b4 no address available
 44 Feb 24 09:40:33 node0 nova-scheduler: 2016-02-24 09:40:33.386 9375 INFO nova.scheduler.host_manager [req-57515c80-9016-43ab-ad68-73f9dc626f5e - - - - -] Successfully synced instances from host 'node1'.
 45 Feb 24 09:40:36 node0 systemd: openstack-nova-api.service stop-sigterm timed out. Killing.
 46 Feb 24 09:40:36 node0 systemd: openstack-nova-api.service: main process exited, code=killed, status=9/KILL
 47 Feb 24 09:40:36 node0 systemd: Unit openstack-nova-api.service entered failed state.
 48 Feb 24 09:40:36 node0 systemd: openstack-nova-api.service failed.
 49 Feb 24 09:40:36 node0 systemd: Starting OpenStack Nova API Server...
 50 Feb 24 09:40:39 node0 nova-api: No handlers could be found for logger "oslo_config.cfg"
 51 Feb 24 09:40:39 node0 nova-api: 2016-02-24 09:40:39.872 2039 INFO oslo_service.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
 52 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.157 2039 INFO nova.api.openstack [-] Loaded extensions: ['extensions', 'flavors', 'image-metadata', 'image-size', 'images', 'ips', 'limits', 'os-access-ips', 'os-admin-actions', 'os-admin-password', 'os-agents', 'os-aggregates', 'os-assisted-volume-snapshots', 'os-attach-interfaces', 'os-availability-zone', 'os-baremetal-nodes', 'os-block-device-mapping', 'os-cells', 'os-certificates', 'os-cloudpipe', 'os-config-drive', 'os-console-auth-tokens', 'os-console-output', 'os-consoles', 'os-create-backup', 'os-deferred-delete', 'os-disk-config', 'os-evacuate', 'os-extended-availability-zone', 'os-extended-server-attributes', 'os-extended-status', 'os-extended-volumes', 'os-fixed-ips', 'os-flavor-access', 'os-flavor-extra-specs', 'os-flavor-manage', 'os-flavor-rxtx', 'os-floating-ip-dns', 'os-floating-ip-pools', 'os-floating-ips', 'os-floating-ips-bulk', 'os-fping', 'os-hide-server-addresses', 'os-hosts', 'os-hypervisors', 'os-instance-actions', 'os-instance-usage-audit-log', 'os-keypairs', 'os-lock-server', 'os-migrate-server', 'os-migrations', 'os-multinic', 'os-multiple-create', 'os-networks', 'os-networks-associate', 'os-pause-server', 'os-personality', 'os-preserve-ephemeral-rebuild', 'os-quota-class-sets', 'os-quota-sets', 'os-remote-consoles', 'os-rescue', 'os-scheduler-hints', 'os-security-group-default-rules', 'os-security-groups', 'os-server-diagnostics', 'os-server-external-events', 'os-server-groups', 'os-server-password', 'os-server-usage', 'os-services', 'os-shelve', 'os-simple-tenant-usage', 'os-suspend-server', 'os-tenant-networks', 'os-used-limits', 'os-user-data', 'os-virtual-interfaces', 'os-volumes', 'server-metadata', 'servers', 'versions']
 53 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.161 2039 WARNING oslo_config.cfg [-] Option "username" from group "keystone_authtoken" is deprecated. Use option "user-name" from group "keystone_authtoken".
 54 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.317 2039 INFO nova.api.openstack [-] Loaded extensions: ['extensions', 'flavors', 'image-metadata', 'image-size', 'images', 'ips', 'limits', 'os-access-ips', 'os-admin-actions', 'os-admin-password', 'os-agents', 'os-aggregates', 'os-assisted-volume-snapshots', 'os-attach-interfaces', 'os-availability-zone', 'os-baremetal-nodes', 'os-block-device-mapping', 'os-cells', 'os-certificates', 'os-cloudpipe', 'os-config-drive', 'os-console-auth-tokens', 'os-console-output', 'os-consoles', 'os-create-backup', 'os-deferred-delete', 'os-disk-config', 'os-evacuate', 'os-extended-availability-zone', 'os-extended-server-attributes', 'os-extended-status', 'os-extended-volumes', 'os-fixed-ips', 'os-flavor-access', 'os-flavor-extra-specs', 'os-flavor-manage', 'os-flavor-rxtx', 'os-floating-ip-dns', 'os-floating-ip-pools', 'os-floating-ips', 'os-floating-ips-bulk', 'os-fping', 'os-hide-server-addresses', 'os-hosts', 'os-hypervisors', 'os-instance-actions', 'os-instance-usage-audit-log', 'os-keypairs', 'os-lock-server', 'os-migrate-server', 'os-migrations', 'os-multinic', 'os-multiple-create', 'os-networks', 'os-networks-associate', 'os-pause-server', 'os-personality', 'os-preserve-ephemeral-rebuild', 'os-quota-class-sets', 'os-quota-sets', 'os-remote-consoles', 'os-rescue', 'os-scheduler-hints', 'os-security-group-default-rules', 'os-security-groups', 'os-server-diagnostics', 'os-server-external-events', 'os-server-groups', 'os-server-password', 'os-server-usage', 'os-services', 'os-shelve', 'os-simple-tenant-usage', 'os-suspend-server', 'os-tenant-networks', 'os-used-limits', 'os-user-data', 'os-virtual-interfaces', 'os-volumes', 'server-metadata', 'servers', 'versions']
 55 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.479 2039 INFO nova.wsgi [-] osapi_compute listening on 0.0.0.0:8774
 56 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.479 2039 INFO oslo_service.service [-] Starting 16 workers
 57 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.482 2039 INFO oslo_service.service [-] Started child 2050
 58 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.485 2050 INFO nova.osapi_compute.wsgi.server [-] (2050) wsgi starting up on http://0.0.0.0:8774/
 59 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.486 2039 INFO oslo_service.service [-] Started child 2051
 60 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.489 2039 INFO oslo_service.service [-] Started child 2052
 61 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.490 2051 INFO nova.osapi_compute.wsgi.server [-] (2051) wsgi starting up on http://0.0.0.0:8774/
 62 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.492 2052 INFO nova.osapi_compute.wsgi.server [-] (2052) wsgi starting up on http://0.0.0.0:8774/
 63 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.493 2039 INFO oslo_service.service [-] Started child 2053
 64 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.495 2053 INFO nova.osapi_compute.wsgi.server [-] (2053) wsgi starting up on http://0.0.0.0:8774/
 65 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.496 2039 INFO oslo_service.service [-] Started child 2054
 66 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.500 2039 INFO oslo_service.service [-] Started child 2055
 67 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.501 2054 INFO nova.osapi_compute.wsgi.server [-] (2054) wsgi starting up on http://0.0.0.0:8774/
 68 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.502 2055 INFO nova.osapi_compute.wsgi.server [-] (2055) wsgi starting up on http://0.0.0.0:8774/
 69 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.503 2039 INFO oslo_service.service [-] Started child 2056
 70 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.507 2039 INFO oslo_service.service [-] Started child 2057
 71 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.507 2056 INFO nova.osapi_compute.wsgi.server [-] (2056) wsgi starting up on http://0.0.0.0:8774/
 72 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.509 2057 INFO nova.osapi_compute.wsgi.server [-] (2057) wsgi starting up on http://0.0.0.0:8774/
 73 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.510 2039 INFO oslo_service.service [-] Started child 2058
 74 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.513 2058 INFO nova.osapi_compute.wsgi.server [-] (2058) wsgi starting up on http://0.0.0.0:8774/
 75 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.514 2039 INFO oslo_service.service [-] Started child 2059
 76 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.518 2039 INFO oslo_service.service [-] Started child 2060
 77 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.518 2059 INFO nova.osapi_compute.wsgi.server [-] (2059) wsgi starting up on http://0.0.0.0:8774/
 78 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.521 2039 INFO oslo_service.service [-] Started child 2061
 79 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.522 2060 INFO nova.osapi_compute.wsgi.server [-] (2060) wsgi starting up on http://0.0.0.0:8774/
 80 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.524 2061 INFO nova.osapi_compute.wsgi.server [-] (2061) wsgi starting up on http://0.0.0.0:8774/
 81 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.525 2039 INFO oslo_service.service [-] Started child 2062
 82 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.528 2039 INFO oslo_service.service [-] Started child 2063
 83 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.528 2062 INFO nova.osapi_compute.wsgi.server [-] (2062) wsgi starting up on http://0.0.0.0:8774/
 84 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.531 2063 INFO nova.osapi_compute.wsgi.server [-] (2063) wsgi starting up on http://0.0.0.0:8774/
 85 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.532 2039 INFO oslo_service.service [-] Started child 2064
 86 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.534 2064 INFO nova.osapi_compute.wsgi.server [-] (2064) wsgi starting up on http://0.0.0.0:8774/
 87 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.535 2039 INFO oslo_service.service [-] Started child 2065
 88 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.538 2039 INFO nova.network.driver [-] Loading network driver 'nova.network.linux_net'
 89 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.539 2065 INFO nova.osapi_compute.wsgi.server [-] (2065) wsgi starting up on http://0.0.0.0:8774/
 90 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.766 2039 INFO nova.wsgi [-] metadata listening on 0.0.0.0:8775
 91 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.767 2039 INFO oslo_service.service [-] Starting 16 workers
 92 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.772 2039 INFO oslo_service.service [-] Started child 2072
 93 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.775 2072 INFO nova.metadata.wsgi.server [-] (2072) wsgi starting up on http://0.0.0.0:8775/
 94 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.777 2039 INFO oslo_service.service [-] Started child 2073
 95 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.780 2073 INFO nova.metadata.wsgi.server [-] (2073) wsgi starting up on http://0.0.0.0:8775/
 96 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.782 2039 INFO oslo_service.service [-] Started child 2074
 97 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.785 2074 INFO nova.metadata.wsgi.server [-] (2074) wsgi starting up on http://0.0.0.0:8775/
 98 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.787 2039 INFO oslo_service.service [-] Started child 2075
 99 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.789 2075 INFO nova.metadata.wsgi.server [-] (2075) wsgi starting up on http://0.0.0.0:8775/
100 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.792 2039 INFO oslo_service.service [-] Started child 2076
101 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.794 2076 INFO nova.metadata.wsgi.server [-] (2076) wsgi starting up on http://0.0.0.0:8775/
102 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.797 2039 INFO oslo_service.service [-] Started child 2077
103 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.799 2077 INFO nova.metadata.wsgi.server [-] (2077) wsgi starting up on http://0.0.0.0:8775/
104 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.801 2039 INFO oslo_service.service [-] Started child 2078
105 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.804 2078 INFO nova.metadata.wsgi.server [-] (2078) wsgi starting up on http://0.0.0.0:8775/
106 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.806 2039 INFO oslo_service.service [-] Started child 2079
107 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.808 2079 INFO nova.metadata.wsgi.server [-] (2079) wsgi starting up on http://0.0.0.0:8775/
108 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.810 2039 INFO oslo_service.service [-] Started child 2080
109 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.813 2080 INFO nova.metadata.wsgi.server [-] (2080) wsgi starting up on http://0.0.0.0:8775/
110 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.815 2039 INFO oslo_service.service [-] Started child 2081
111 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.818 2081 INFO nova.metadata.wsgi.server [-] (2081) wsgi starting up on http://0.0.0.0:8775/
112 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.820 2039 INFO oslo_service.service [-] Started child 2082
113 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.823 2082 INFO nova.metadata.wsgi.server [-] (2082) wsgi starting up on http://0.0.0.0:8775/
114 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.825 2039 INFO oslo_service.service [-] Started child 2083
115 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.828 2083 INFO nova.metadata.wsgi.server [-] (2083) wsgi starting up on http://0.0.0.0:8775/
116 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.830 2039 INFO oslo_service.service [-] Started child 2084
117 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.833 2084 INFO nova.metadata.wsgi.server [-] (2084) wsgi starting up on http://0.0.0.0:8775/
118 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.835 2039 INFO oslo_service.service [-] Started child 2085
119 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.837 2085 INFO nova.metadata.wsgi.server [-] (2085) wsgi starting up on http://0.0.0.0:8775/
120 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.840 2039 INFO oslo_service.service [-] Started child 2086
121 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.842 2086 INFO nova.metadata.wsgi.server [-] (2086) wsgi starting up on http://0.0.0.0:8775/
122 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.844 2039 INFO oslo_service.service [-] Started child 2087
123 Feb 24 09:40:40 node0 systemd: Started OpenStack Nova API Server.
124 Feb 24 09:40:40 node0 nova-api: 2016-02-24 09:40:40.847 2087 INFO nova.metadata.wsgi.server [-] (2087) wsgi starting up on http://0.0.0.0:8775/


 

c8. Start the cinder services

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
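Optionally (not in the original steps), confirm both units came up cleanly:

# Both controller-side cinder services should report "active (running)".
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service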

 

Now let's deploy cinder on the Block Storage node.

cc1. Install the utility packages

yum install lvm2

systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

 

cc2. Create the LVM physical volume and volume group

pvcreate /dev/sdb

vgcreate cinder-volumes /dev/sdb
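To confirm the physical volume and volume group exist before moving on (an optional check, not in the original steps):

# The cinder-volumes volume group should be listed and backed by /dev/sdb.
pvdisplay /dev/sdb
vgs cinder-volumes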

 

cc3. Configure /etc/lvm/lvm.conf

If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "a/sdb/", "r/.*/"]

Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "r/.*/"]

My configuration is shown below (a typo crept in here; the resulting error is discussed later):

        # Example
        # Accept every block device:
        # filter = [ "a|.*/|" ]
        # Reject the cdrom drive:
        # filter = [ "r|/dev/cdrom|" ]
        # Work with just loopback devices, e.g. for testing:
        # filter = [ "a|loop|", "r|.*|" ]
        # Accept all loop devices and ide drives except hdc:
        # filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
        # Use anchors to be very specific:
        # filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
        #
        # This configuration option has an automatic default value.
        # filter = [ "a|.*/|" ]
        filter = [ "a/sda/", "a/sdb/", "r/.*/"]
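After editing the filter, it is worth running a quick LVM command on the storage node (an optional check, not part of the original steps). If the filter syntax is broken, the error shows up here immediately rather than later inside cinder-volume; the second command below is the same one cinder's LVM driver runs at startup.

# /dev/sdb must still be visible under the new filter, and cinder-volumes must resolve.
pvs
vgs --noheadings -o name cinder-volumes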

 

cc4. Install the packages

yum install openstack-cinder targetcli python-oslo-policy

 

cc5. Configure /etc/cinder/cinder.conf

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.1.120
enabled_backends = lvm
glance_host = node0
verbose = True

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[keystone_authtoken]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = openstack

[oslo_messaging_rabbit]
rabbit_host = node0
rabbit_userid = openstack
rabbit_password = openstack

[database]
connection = mysql://cinder:openstack@node0/cinder

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

 

cc6. Start the services

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
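An optional quick check (not in the original steps): make sure both units are active and that the LVM driver initialized without errors; the log file is the same one inspected later in this post.

# Check the services and peek at the volume log for driver initialization errors.
systemctl status openstack-cinder-volume.service target.service
tail -n 20 /var/log/cinder/volume.log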

 

cc7. Verify the services

source admin-openrc.sh

[root@node0 opt]# cinder service-list
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host   | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |   node0   | nova | enabled |   up  | 2016-02-24T05:45:09.000000 |        -        |
|  cinder-volume   | node2@lvm | nova | enabled |  down |             -              |        -        |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
[root@node0 opt]# cinder-manage service list
No handlers could be found for logger "oslo_config.cfg"
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:241: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Binary           Host                                 Zone             Status     State Updated At
cinder-scheduler node0                                nova             enabled    :-)   2016-02-24 05:56:49
cinder-volume    node2@lvm                            nova             enabled    XXX   None
[root@node0 opt]#

A problem shows up: node2 above is not working properly and is in the down state. My first suspicion was NTP, so I checked whether node2 and node0 were in sync, and they were, so that was not the problem!

[root@node2 ~]# ntpdate node0
 1 Mar 09:14:21 ntpdate[26469]: adjust time server 192.168.1.100 offset -0.000001 sec

Could it be a configuration problem then? Check the log at /var/log/cinder/volume.log:

2016-02-24 13:33:30.563 3928 INFO cinder.service [-] Starting cinder-volume node (version 7.0.1)
2016-02-24 13:33:30.565 3928 INFO cinder.volume.manager [req-d78b5d83-26f9-45d1-96a8-52c422c294e3 - - - - -] Starting volume driver LVMVolumeDriver (3.0.0)
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager [req-d78b5d83-26f9-45d1-96a8-52c422c294e3 - - - - -] Failed to initialize driver.
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager Traceback (most recent call last):
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 368, in init_host
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager     self.driver.check_for_setup_error()
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager     return f(*args, **kwargs)
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 269, in check_for_setup_error
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager     lvm_conf=lvm_conf_file)
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 86, in __init__
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager     if self._vg_exists() is False:
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 123, in _vg_exists
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager     run_as_root=True)
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 155, in execute
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager     return processutils.execute(*cmd, **kwargs)
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 275, in execute
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager     cmd=sanitized_cmd)
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager ProcessExecutionError: Unexpected error while running command.
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name cinder-volumes
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager Exit code: 5
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager Stdout: u''
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager Stderr: u'  Invalid filter pattern "i"a/sda/",".\n'
2016-02-24 13:33:30.690 3928 ERROR cinder.volume.manager
2016-02-24 13:33:30.756 3928 INFO oslo.messaging._drivers.impl_rabbit [req-8941ff35-6d20-4e5e-97c6-79fdbbfb508d - - - - -] Connecting to AMQP server on node0:5672
2016-02-24 13:33:30.780 3928 INFO oslo.messaging._drivers.impl_rabbit [req-8941ff35-6d20-4e5e-97c6-79fdbbfb508d - - - - -] Connected to AMQP server on node0:5672
2016-02-24 13:33:40.800 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:33:50.805 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:34:00.809 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:34:10.819 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:34:20.824 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:34:30.824 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:34:31.796 3928 WARNING cinder.volume.manager [req-41b39517-4c6e-4e91-a594-488f3d25e68e - - - - -] Update driver status failed: (config name lvm) is uninitialized.
2016-02-24 13:34:40.833 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:34:50.838 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:35:00.838 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:35:10.848 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:35:20.852 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:35:30.852 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:35:31.797 3928 WARNING cinder.volume.manager [req-aacbae0b-cce8-4bc9-9c6a-2e3098f34c25 - - - - -] Update driver status failed: (config name lvm) is uninitialized.
2016-02-24 13:35:40.861 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:35:50.866 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:36:00.866 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-02-24 13:36:10.876 3928 ERROR cinder.service [-] Manager for service cinder-volume node2@lvm is reporting problems, not sending heartbeat. Service will appear "down".

Sure enough, the problem was in the configuration: a typo had slipped into the filter line, an extra "i" character. Correct it and restart the cinder-volume service, as sketched below.
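The exact commands are not spelled out in the original post, so treat this as a sketch: the devices filter in /etc/lvm/lvm.conf on the storage node should read exactly as intended, and cinder-volume then needs a restart so the driver re-initializes.

# In /etc/lvm/lvm.conf (devices section) the line should be exactly:
#   filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
# Then restart the volume service on the storage node:
systemctl restart openstack-cinder-volume.service

With the filter fixed and cinder-volume restarted, verify again: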

[root@node0 opt]# cinder service-list
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host   | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |   node0   | nova | enabled |   up  | 2016-02-24T06:04:49.000000 |        -        |
|  cinder-volume   | node2@lvm | nova | enabled |   up  | 2016-02-24T06:04:51.000000 |        -        |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
[root@node0 opt]#
[root@node0 opt]#
[root@node0 opt]# cinder-manage service list
No handlers could be found for logger "oslo_config.cfg"
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:241: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Binary           Host                                 Zone             Status     State Updated At
cinder-scheduler node0                                nova             enabled    :-)   2016-02-24 06:04:49
cinder-volume    node2@lvm                            nova             enabled    :-)   2016-02-24 06:04:51

Everything is up and running normally. OK, the cinder deployment is done!

 

Finally, deploy swift. Like cinder, the swift deployment has two parts: the controller node and the object node. In terms of software components, swift consists of the proxy-server plus the account, container, and object servers. The proxy-server can in principle run on any node; in my environment it runs on the controller node, and the other three run on the object node.

Below, steps prefixed with sN are performed on the controller node and steps prefixed with ssN on the object node.

s1. Create the service (note: installing the proxy-server does not require creating a database)

source admin-openrc.sh

openstack user create --domain default --password-prompt swift
openstack role add --project service --user swift admin

openstack service create --name swift --description "OpenStack Object Storage" object-store

openstack endpoint create --region RegionOne object-store public http://node0:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region RegionOne object-store internal http://node0:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region RegionOne object-store admin http://node0:8080/v1/AUTH_%\(tenant_id\)s

 

s2. Install the packages

yum install openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached
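The proxy caches auth tokens in memcached. On my controller memcached is already running from the earlier keystone setup; if it is not running on yours, enable and start it (an extra step I am assuming here, not one from the original post):

# Only needed if memcached is not already running on the controller.
systemctl enable memcached.service
systemctl start memcached.service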

 

s3. Configure /etc/swift/proxy-server.conf

First download the pristine sample configuration file into the target directory, then modify it.

curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/liberty



1 [DEFAULT]
  2 bind_port = 8080
  3 swift_dir = /etc/swift
  4 user = swift
  5 
  6 # Enables exposing configuration settings via HTTP GET /info.
  7 # expose_info = true
  8 
  9 # Key to use for admin calls that are HMAC signed.  Default is empty,
 10 # which will disable admin calls to /info.
 11 # admin_key = secret_admin_key
 12 #
 13 # Allows the ability to withhold sections from showing up in the public calls
 14 # to /info.  You can withhold subsections by separating the dict level with a
 15 # ".".  The following would cause the sections 'container_quotas' and 'tempurl'
 16 # to not be listed, and the key max_failed_deletes would be removed from
 17 # bulk_delete.  Default value is 'swift.valid_api_versions' which allows all
 18 # registered features to be listed via HTTP GET /info except
 19 # swift.valid_api_versions information
 20 # disallowed_sections = swift.valid_api_versions, container_quotas, tempurl
 21 
 22 # Use an integer to override the number of pre-forked processes that will
 23 # accept connections.  Should default to the number of effective cpu
 24 # cores in the system.  It's worth noting that individual workers will
 25 # use many eventlet co-routines to service multiple concurrent requests.
 26 # workers = auto
 27 #
 28 # Maximum concurrent requests per worker
 29 # max_clients = 1024
 30 #
 31 # Set the following two lines to enable SSL. This is for testing only.
 32 # cert_file = /etc/swift/proxy.crt
 33 # key_file = /etc/swift/proxy.key
 34 #
 35 # expiring_objects_container_divisor = 86400
 36 # expiring_objects_account_name = expiring_objects
 37 #
 38 # You can specify default log routing here if you want:
 39 # log_name = swift
 40 # log_facility = LOG_LOCAL0
 41 # log_level = INFO
 42 # log_headers = false
 43 # log_address = /dev/log
 44 # The following caps the length of log lines to the value given; no limit if
 45 # set to 0, the default.
 46 # log_max_line_length = 0
 47 #
 48 # This optional suffix (default is empty) that would be appended to the swift transaction
 49 # id allows one to easily figure out from which cluster that X-Trans-Id belongs to.
 50 # This is very useful when one is managing more than one swift cluster.
 51 # trans_id_suffix =
 52 #
 53 # comma separated list of functions to call to setup custom log handlers.
 54 # functions get passed: conf, name, log_to_console, log_route, fmt, logger,
 55 # adapted_logger
 56 # log_custom_handlers =
 57 #
 58 # If set, log_udp_host will override log_address
 59 # log_udp_host =
 60 # log_udp_port = 514
 61 #
 62 # You can enable StatsD logging here:
 63 # log_statsd_host = localhost
 64 # log_statsd_port = 8125
 65 # log_statsd_default_sample_rate = 1.0
 66 # log_statsd_sample_rate_factor = 1.0
 67 # log_statsd_metric_prefix =
 68 #
 69 # Use a comma separated list of full url (http://foo.bar:1234,https://foo.bar)
 70 # cors_allow_origin =
 71 # strict_cors_mode = True
 72 #
 73 # client_timeout = 60
 74 # eventlet_debug = false
 75 
 76 [pipeline:main]
 77 pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
 78 
 79 [app:proxy-server]
 80 use = egg:swift#proxy
 81 # You can override the default log routing for this app here:
 82 # set log_name = proxy-server
 83 # set log_facility = LOG_LOCAL0
 84 # set log_level = INFO
 85 # set log_address = /dev/log
 86 #
 87 # log_handoffs = true
 88 # recheck_account_existence = 60
 89 # recheck_container_existence = 60
 90 # object_chunk_size = 65536
 91 # client_chunk_size = 65536
 92 #
 93 # How long the proxy server will wait on responses from the a/c/o servers.
 94 # node_timeout = 10
 95 #
 96 # How long the proxy server will wait for an initial response and to read a
 97 # chunk of data from the object servers while serving GET / HEAD requests.
 98 # Timeouts from these requests can be recovered from so setting this to
 99 # something lower than node_timeout would provide quicker error recovery
100 # while allowing for a longer timeout for non-recoverable requests (PUTs).
101 # Defaults to node_timeout, should be overriden if node_timeout is set to a
102 # high number to prevent client timeouts from firing before the proxy server
103 # has a chance to retry.
104 # recoverable_node_timeout = node_timeout
105 #
106 # conn_timeout = 0.5
107 #
108 # How long to wait for requests to finish after a quorum has been established.
109 # post_quorum_timeout = 0.5
110 #
111 # How long without an error before a node's error count is reset. This will
112 # also be how long before a node is reenabled after suppression is triggered.
113 # error_suppression_interval = 60
114 #
115 # How many errors can accumulate before a node is temporarily ignored.
116 # error_suppression_limit = 10
117 #
118 # If set to 'true' any authorized user may create and delete accounts; if
119 # 'false' no one, even authorized, can.
120 # allow_account_management = false
121 #
122 # Set object_post_as_copy = false to turn on fast posts where only the metadata
123 # changes are stored anew and the original data file is kept in place. This
124 # makes for quicker posts; but since the container metadata isn't updated in
125 # this mode, features like container sync won't be able to sync posts.
126 # object_post_as_copy = true
127 #
128 # If set to 'true' authorized accounts that do not yet exist within the Swift
129 # cluster will be automatically created.
130 account_autocreate = true
131 #
132 # If set to a positive value, trying to create a container when the account
133 # already has at least this maximum containers will result in a 403 Forbidden.
134 # Note: This is a soft limit, meaning a user might exceed the cap for
135 # recheck_account_existence before the 403s kick in.
136 # max_containers_per_account = 0
137 #
138 # This is a comma separated list of account hashes that ignore the
139 # max_containers_per_account cap.
140 # max_containers_whitelist =
141 #
142 # Comma separated list of Host headers to which the proxy will deny requests.
143 # deny_host_headers =
144 #
145 # Prefix used when automatically creating accounts.
146 # auto_create_account_prefix = .
147 #
148 # Depth of the proxy put queue.
149 # put_queue_depth = 10
150 #
151 # Storage nodes can be chosen at random (shuffle), by using timing
152 # measurements (timing), or by using an explicit match (affinity).
153 # Using timing measurements may allow for lower overall latency, while
154 # using affinity allows for finer control. In both the timing and
155 # affinity cases, equally-sorting nodes are still randomly chosen to
156 # spread load.
157 # The valid values for sorting_method are "affinity", "shuffle", and "timing".
158 # sorting_method = shuffle
159 #
160 # If the "timing" sorting_method is used, the timings will only be valid for
161 # the number of seconds configured by timing_expiry.
162 # timing_expiry = 300
163 #
164 # The maximum time (seconds) that a large object connection is allowed to last.
165 # max_large_object_get_time = 86400
166 #
167 # Set to the number of nodes to contact for a normal request. You can use
168 # '* replicas' at the end to have it use the number given times the number of
169 # replicas for the ring being used for the request.
170 # request_node_count = 2 * replicas
171 #
172 # Which backend servers to prefer on reads. Format is r<N> for region
173 # N or r<N>z<M> for region N, zone M. The value after the equals is
174 # the priority; lower numbers are higher priority.
175 #
176 # Example: first read from region 1 zone 1, then region 1 zone 2, then
177 # anything in region 2, then everything else:
178 # read_affinity = r1z1=100, r1z2=200, r2=300
179 # Default is empty, meaning no preference.
180 # read_affinity =
181 #
182 # Which backend servers to prefer on writes. Format is r<N> for region
183 # N or r<N>z<M> for region N, zone M. If this is set, then when
184 # handling an object PUT request, some number (see setting
185 # write_affinity_node_count) of local backend servers will be tried
186 # before any nonlocal ones.
187 #
188 # Example: try to write to regions 1 and 2 before writing to any other
189 # nodes:
190 # write_affinity = r1, r2
191 # Default is empty, meaning no preference.
192 # write_affinity =
193 #
194 # The number of local (as governed by the write_affinity setting)
195 # nodes to attempt to contact first, before any non-local ones. You
196 # can use '* replicas' at the end to have it use the number given
197 # times the number of replicas for the ring being used for the
198 # request.
199 # write_affinity_node_count = 2 * replicas
200 #
201 # These are the headers whose values will only be shown to swift_owners. The
202 # exact definition of a swift_owner is up to the auth system in use, but
203 # usually indicates administrative responsibilities.
204 # swift_owner_headers = x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, x-account-access-control
205 
206 [filter:tempauth]
207 use = egg:swift#tempauth
208 # You can override the default log routing for this filter here:
209 # set log_name = tempauth
210 # set log_facility = LOG_LOCAL0
211 # set log_level = INFO
212 # set log_headers = false
213 # set log_address = /dev/log
214 #
215 # The reseller prefix will verify a token begins with this prefix before even
216 # attempting to validate it. Also, with authorization, only Swift storage
217 # accounts with this prefix will be authorized by this middleware. Useful if
218 # multiple auth systems are in use for one Swift cluster.
219 # The reseller_prefix may contain a comma separated list of items. The first
220 # item is used for the token as mentioned above. If second and subsequent
221 # items exist, the middleware will handle authorization for an account with
222 # that prefix. For example, for prefixes "AUTH, SERVICE", a path of
223 # /v1/SERVICE_account is handled the same as /v1/AUTH_account. If an empty
224 # (blank) reseller prefix is required, it must be first in the list. Two
225 # single quote characters indicates an empty (blank) reseller prefix.
226 # reseller_prefix = AUTH
227 
228 #
229 # The require_group parameter names a group that must be presented by
230 # either X-Auth-Token or X-Service-Token. Usually this parameter is
231 # used only with multiple reseller prefixes (e.g., SERVICE_require_group=blah).
232 # By default, no group is needed. Do not use .admin.
233 # require_group =
234 
235 # The auth prefix will cause requests beginning with this prefix to be routed
236 # to the auth subsystem, for granting tokens, etc.
237 # auth_prefix = /auth/
238 # token_life = 86400
239 #
240 # This allows middleware higher in the WSGI pipeline to override auth
241 # processing, useful for middleware such as tempurl and formpost. If you know
242 # you're not going to use such middleware and you want a bit of extra security,
243 # you can set this to false.
244 # allow_overrides = true
245 #
246 # This specifies what scheme to return with storage urls:
247 # http, https, or default (chooses based on what the server is running as)
248 # This can be useful with an SSL load balancer in front of a non-SSL server.
249 # storage_url_scheme = default
250 #
251 # Lastly, you need to list all the accounts/users you want here. The format is:
252 #   user_<account>_<user> = <key> [group] [group] [...] [storage_url]
253 # or if you want underscores in <account> or <user>, you can base64 encode them
254 # (with no equal signs) and use this format:
255 #   user64_<account_b64>_<user_b64> = <key> [group] [group] [...] [storage_url]
256 # There are special groups of:
257 #   .reseller_admin = can do anything to any account for this auth
258 #   .admin = can do anything within the account
259 # If neither of these groups are specified, the user can only access containers
260 # that have been explicitly allowed for them by a .admin or .reseller_admin.
261 # The trailing optional storage_url allows you to specify an alternate url to
262 # hand back to the user upon authentication. If not specified, this defaults to
263 # $HOST/v1/<reseller_prefix>_<account> where $HOST will do its best to resolve
264 # to what the requester would need to use to reach this host.
265 # Here are example entries, required for running the tests:
266 user_admin_admin = admin .admin .reseller_admin
267 user_test_tester = testing .admin
268 user_test2_tester2 = testing2 .admin
269 user_test_tester3 = testing3
270 user_test5_tester5 = testing5 service
271 
272 # To enable Keystone authentication you need to have the auth token
273 # middleware first to be configured. Here is an example below, please
274 # refer to the keystone's documentation for details about the
275 # different settings.
276 #
277 # You'll need to have as well the keystoneauth middleware enabled
278 # and have it in your main pipeline so instead of having tempauth in
279 # there you can change it to: authtoken keystoneauth
280 #
281 [filter:authtoken]
282 # paste.filter_factory = keystonemiddleware.auth_token:filter_factory
283 # identity_uri = http://keystonehost:35357/
284 # auth_uri = http://keystonehost:5000/
285 # admin_tenant_name = service
286 # admin_user = swift
287 # admin_password = password
288 #
289 # delay_auth_decision defaults to False, but leaving it as false will
290 # prevent other auth systems, staticweb, tempurl, formpost, and ACLs from
291 # working. This value must be explicitly set to True.
292 # delay_auth_decision = False
293 #
294 # cache = swift.cache
295 # include_service_catalog = False
296 #
297 paste.filter_factory = keystonemiddleware.auth_token:filter_factory
298 auth_uri = http://node0:5000
299 auth_url = http://node0:35357
300 auth_plugin = password
301 project_domain_id = default
302 user_domain_id = default
303 project_name = service
304 username = swift
305 password = openstack
306 delay_auth_decision = true
307 auth_protocol = http
308 
309 [filter:keystoneauth]
310 use = egg:swift#keystoneauth
311 # The reseller_prefix option lists account namespaces that this middleware is
312 # responsible for. The prefix is placed before the Keystone project id.
313 # For example, for project 12345678, and prefix AUTH, the account is
314 # named AUTH_12345678 (i.e., path is /v1/AUTH_12345678/...).
315 # Several prefixes are allowed by specifying a comma-separated list
316 # as in: "reseller_prefix = AUTH, SERVICE". The empty string indicates a
317 # single blank/empty prefix. If an empty prefix is required in a list of
318 # prefixes, a value of '' (two single quote characters) indicates a
319 # blank/empty prefix. Except for the blank/empty prefix, an underscore ('_')
320 # character is appended to the value unless already present.
321 # reseller_prefix = AUTH
322 #
323 # The user must have at least one role named by operator_roles on a
324 # project in order to create, delete and modify containers and objects
325 # and to set and read privileged headers such as ACLs.
326 # If there are several reseller prefix items, you can prefix the
327 # parameter so it applies only to those accounts (for example
328 # the parameter SERVICE_operator_roles applies to the /v1/SERVICE_<project>
329 # path). If you omit the prefix, the option applies to all reseller
330 # prefix items. For the blank/empty prefix, prefix with '' (do not put
331 # underscore after the two single quote characters).
332 operator_roles = admin, user
333 #
334 # The reseller admin role has the ability to create and delete accounts
335 # reseller_admin_role = ResellerAdmin
336 #
337 # This allows middleware higher in the WSGI pipeline to override auth
338 # processing, useful for middleware such as tempurl and formpost. If you know
339 # you're not going to use such middleware and you want a bit of extra security,
340 # you can set this to false.
341 # allow_overrides = true
342 #
343 # If is_admin is true, a user whose username is the same as the project name
344 # and who has any role on the project will have access rights elevated to be
345 # the same as if the user had an operator role. Note that the condition
346 # compares names rather than UUIDs. This option is deprecated.
347 # is_admin = false
348 #
349 # If the service_roles parameter is present, an X-Service-Token must be
350 # present in the request that when validated, grants at least one role listed
351 # in the parameter. The X-Service-Token may be scoped to any project.
352 # If there are several reseller prefix items, you can prefix the
353 # parameter so it applies only to those accounts (for example
354 # the parameter SERVICE_service_roles applies to the /v1/SERVICE_<project>
355 # path). If you omit the prefix, the option applies to all reseller
356 # prefix items. For the blank/empty prefix, prefix with '' (do not put
357 # underscore after the two single quote characters).
358 # By default, no service_roles are required.
359 # service_roles =
360 #
361 # For backwards compatibility, keystoneauth will match names in cross-tenant
362 # access control lists (ACLs) when both the requesting user and the tenant
363 # are in the default domain i.e the domain to which existing tenants are
364 # migrated. The default_domain_id value configured here should be the same as
365 # the value used during migration of tenants to keystone domains.
366 # default_domain_id = default
367 #
368 # For a new installation, or an installation in which keystone projects may
369 # move between domains, you should disable backwards compatible name matching
370 # in ACLs by setting allow_names_in_acls to false:
371 # allow_names_in_acls = true
372 
373 [filter:healthcheck]
374 use = egg:swift#healthcheck
375 # An optional filesystem path, which if present, will cause the healthcheck
376 # URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
377 # This facility may be used to temporarily remove a Swift node from a load
378 # balancer pool during maintenance or upgrade (remove the file to allow the
379 # node back into the load balancer pool).
380 # disable_path =
381 
382 [filter:cache]
383 use = egg:swift#memcache
384 # You can override the default log routing for this filter here:
385 # set log_name = cache
386 # set log_facility = LOG_LOCAL0
387 # set log_level = INFO
388 # set log_headers = false
389 # set log_address = /dev/log
390 #
391 # If not set here, the value for memcache_servers will be read from
392 # memcache.conf (see memcache.conf-sample) or lacking that file, it will
393 # default to the value below. You can specify multiple servers separated with
394 # commas, as in: 10.1.2.3:11211,10.1.2.4:11211
395 memcache_servers = 127.0.0.1:11211
396 #
397 # Sets how memcache values are serialized and deserialized:
398 # 0 = older, insecure pickle serialization
399 # 1 = json serialization but pickles can still be read (still insecure)
400 # 2 = json serialization only (secure and the default)
401 # If not set here, the value for memcache_serialization_support will be read
402 # from /etc/swift/memcache.conf (see memcache.conf-sample).
403 # To avoid an instant full cache flush, existing installations should
404 # upgrade with 0, then set to 1 and reload, then after some time (24 hours)
405 # set to 2 and reload.
406 # In the future, the ability to use pickle serialization will be removed.
407 # memcache_serialization_support = 2
408 #
409 # Sets the maximum number of connections to each memcached server per worker
410 # memcache_max_connections = 2
411 #
412 # More options documented in memcache.conf-sample
413 
414 [filter:ratelimit]
415 use = egg:swift#ratelimit
416 # You can override the default log routing for this filter here:
417 # set log_name = ratelimit
418 # set log_facility = LOG_LOCAL0
419 # set log_level = INFO
420 # set log_headers = false
421 # set log_address = /dev/log
422 #
423 # clock_accuracy should represent how accurate the proxy servers' system clocks
424 # are with each other. 1000 means that all the proxies' clock are accurate to
425 # each other within 1 millisecond.  No ratelimit should be higher than the
426 # clock accuracy.
427 # clock_accuracy = 1000
428 #
429 # max_sleep_time_seconds = 60
430 #
431 # log_sleep_time_seconds of 0 means disabled
432 # log_sleep_time_seconds = 0
433 #
434 # allows for slow rates (e.g. running up to 5 sec's behind) to catch up.
435 # rate_buffer_seconds = 5
436 #
437 # account_ratelimit of 0 means disabled
438 # account_ratelimit = 0
439 
440 # DEPRECATED- these will continue to work but will be replaced
441 # by the X-Account-Sysmeta-Global-Write-Ratelimit flag.
442 # Please see ratelimiting docs for details.
443 # these are comma separated lists of account names
444 # account_whitelist = a,b
445 # account_blacklist = c,d
446 
447 # with container_limit_x = r
448 # for containers of size x limit write requests per second to r.  The container
449 # rate will be linearly interpolated from the values given. With the values
450 # below, a container of size 5 will get a rate of 75.
451 # container_ratelimit_0 = 100
452 # container_ratelimit_10 = 50
453 # container_ratelimit_50 = 20
454 
455 # Similarly to the above container-level write limits, the following will limit
456 # container GET (listing) requests.
457 # container_listing_ratelimit_0 = 100
458 # container_listing_ratelimit_10 = 50
459 # container_listing_ratelimit_50 = 20
460 
461 [filter:domain_remap]
462 use = egg:swift#domain_remap
463 # You can override the default log routing for this filter here:
464 # set log_name = domain_remap
465 # set log_facility = LOG_LOCAL0
466 # set log_level = INFO
467 # set log_headers = false
468 # set log_address = /dev/log
469 #
470 # storage_domain = example.com
471 # path_root = v1
472 
473 # Browsers can convert a host header to lowercase, so check that reseller
474 # prefix on the account is the correct case. This is done by comparing the
475 # items in the reseller_prefixes config option to the found prefix. If they
476 # match except for case, the item from reseller_prefixes will be used
477 # instead of the found reseller prefix. When none match, the default reseller
478 # prefix is used. When no default reseller prefix is configured, any request
479 # with an account prefix not in that list will be ignored by this middleware.
480 # reseller_prefixes = AUTH
481 # default_reseller_prefix =
482 
483 [filter:catch_errors]
484 use = egg:swift#catch_errors
485 # You can override the default log routing for this filter here:
486 # set log_name = catch_errors
487 # set log_facility = LOG_LOCAL0
488 # set log_level = INFO
489 # set log_headers = false
490 # set log_address = /dev/log
491 
492 [filter:cname_lookup]
493 # Note: this middleware requires python-dnspython
494 use = egg:swift#cname_lookup
495 # You can override the default log routing for this filter here:
496 # set log_name = cname_lookup
497 # set log_facility = LOG_LOCAL0
498 # set log_level = INFO
499 # set log_headers = false
500 # set log_address = /dev/log
501 #
502 # Specify the storage_domain that match your cloud, multiple domains
503 # can be specified separated by a comma
504 # storage_domain = example.com
505 #
506 # lookup_depth = 1
507 
508 # Note: Put staticweb just after your auth filter(s) in the pipeline
509 [filter:staticweb]
510 use = egg:swift#staticweb
511 
512 # Note: Put tempurl before dlo, slo and your auth filter(s) in the pipeline
513 [filter:tempurl]
514 use = egg:swift#tempurl
515 # The methods allowed with Temp URLs.
516 # methods = GET HEAD PUT POST DELETE
517 #
518 # The headers to remove from incoming requests. Simply a whitespace delimited
519 # list of header names and names can optionally end with '*' to indicate a
520 # prefix match. incoming_allow_headers is a list of exceptions to these
521 # removals.
522 # incoming_remove_headers = x-timestamp
523 #
524 # The headers allowed as exceptions to incoming_remove_headers. Simply a
525 # whitespace delimited list of header names and names can optionally end with
526 # '*' to indicate a prefix match.
527 # incoming_allow_headers =
528 #
529 # The headers to remove from outgoing responses. Simply a whitespace delimited
530 # list of header names and names can optionally end with '*' to indicate a
531 # prefix match. outgoing_allow_headers is a list of exceptions to these
532 # removals.
533 # outgoing_remove_headers = x-object-meta-*
534 #
535 # The headers allowed as exceptions to outgoing_remove_headers. Simply a
536 # whitespace delimited list of header names and names can optionally end with
537 # '*' to indicate a prefix match.
538 # outgoing_allow_headers = x-object-meta-public-*
539 
540 # Note: Put formpost just before your auth filter(s) in the pipeline
541 [filter:formpost]
542 use = egg:swift#formpost
543 
544 # Note: Just needs to be placed before the proxy-server in the pipeline.
545 [filter:name_check]
546 use = egg:swift#name_check
547 # forbidden_chars = '"`<>
548 # maximum_length = 255
549 # forbidden_regexp = /\./|/\.\./|/\.$|/\.\.$
550 
551 [filter:list-endpoints]
552 use = egg:swift#list_endpoints
553 # list_endpoints_path = /endpoints/
554 
555 [filter:proxy-logging]
556 use = egg:swift#proxy_logging
557 # If not set, logging directives from [DEFAULT] without "access_" will be used
558 # access_log_name = swift
559 # access_log_facility = LOG_LOCAL0
560 # access_log_level = INFO
561 # access_log_address = /dev/log
562 #
563 # If set, access_log_udp_host will override access_log_address
564 # access_log_udp_host =
565 # access_log_udp_port = 514
566 #
567 # You can use log_statsd_* from [DEFAULT] or override them here:
568 # access_log_statsd_host = localhost
569 # access_log_statsd_port = 8125
570 # access_log_statsd_default_sample_rate = 1.0
571 # access_log_statsd_sample_rate_factor = 1.0
572 # access_log_statsd_metric_prefix =
573 # access_log_headers = false
574 #
575 # If access_log_headers is True and access_log_headers_only is set only
576 # these headers are logged. Multiple headers can be defined as comma separated
577 # list like this: access_log_headers_only = Host, X-Object-Meta-Mtime
578 # access_log_headers_only =
579 #
580 # By default, the X-Auth-Token is logged. To obscure the value,
581 # set reveal_sensitive_prefix to the number of characters to log.
582 # For example, if set to 12, only the first 12 characters of the
583 # token appear in the log. An unauthorized access of the log file
584 # won't allow unauthorized usage of the token. However, the first
585 # 12 or so characters is unique enough that you can trace/debug
586 # token usage. Set to 0 to suppress the token completely (replaced
587 # by '...' in the log).
588 # Note: reveal_sensitive_prefix will not affect the value
589 # logged with access_log_headers=True.
590 # reveal_sensitive_prefix = 16
591 #
592 # What HTTP methods are allowed for StatsD logging (comma-sep); request methods
593 # not in this list will have "BAD_METHOD" for the <verb> portion of the metric.
594 # log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS
595 #
596 # Note: The double proxy-logging in the pipeline is not a mistake. The
597 # left-most proxy-logging is there to log requests that were handled in
598 # middleware and never made it through to the right-most middleware (and
599 # proxy server). Double logging is prevented for normal requests. See
600 # proxy-logging docs.
601 
602 # Note: Put before both ratelimit and auth in the pipeline.
603 [filter:bulk]
604 use = egg:swift#bulk
605 # max_containers_per_extraction = 10000
606 # max_failed_extractions = 1000
607 # max_deletes_per_request = 10000
608 # max_failed_deletes = 1000
609 
610 # In order to keep a connection active during a potentially long bulk request,
611 # Swift may return whitespace prepended to the actual response body. This
612 # whitespace will be yielded no more than every yield_frequency seconds.
613 # yield_frequency = 10
614 
615 # Note: The following parameter is used during a bulk delete of objects and
616 # their container. This would frequently fail because it is very likely
617 # that all replicated objects have not been deleted by the time the middleware got a
618 # successful response. It can be configured the number of retries. And the
619 # number of seconds to wait between each retry will be 1.5**retry
620 
621 # delete_container_retry_count = 0
622 
623 # Note: Put after auth and staticweb in the pipeline.
624 [filter:slo]
625 use = egg:swift#slo
626 # max_manifest_segments = 1000
627 # max_manifest_size = 2097152
628 # min_segment_size = 1048576
629 # Start rate-limiting SLO segment serving after the Nth segment of a
630 # segmented object.
631 # rate_limit_after_segment = 10
632 #
633 # Once segment rate-limiting kicks in for an object, limit segments served
634 # to N per second. 0 means no rate-limiting.
635 # rate_limit_segments_per_sec = 0
636 #
637 # Time limit on GET requests (seconds)
638 # max_get_time = 86400
639 
640 # Note: Put after auth and staticweb in the pipeline.
641 # If you don't put it in the pipeline, it will be inserted for you.
642 [filter:dlo]
643 use = egg:swift#dlo
644 # Start rate-limiting DLO segment serving after the Nth segment of a
645 # segmented object.
646 # rate_limit_after_segment = 10
647 #
648 # Once segment rate-limiting kicks in for an object, limit segments served
649 # to N per second. 0 means no rate-limiting.
650 # rate_limit_segments_per_sec = 1
651 #
652 # Time limit on GET requests (seconds)
653 # max_get_time = 86400
654 
655 # Note: Put after auth in the pipeline.
656 [filter:container-quotas]
657 use = egg:swift#container_quotas
658 
659 # Note: Put after auth in the pipeline.
660 [filter:account-quotas]
661 use = egg:swift#account_quotas
662 
663 [filter:gatekeeper]
664 use = egg:swift#gatekeeper
665 # You can override the default log routing for this filter here:
666 # set log_name = gatekeeper
667 # set log_facility = LOG_LOCAL0
668 # set log_level = INFO
669 # set log_headers = false
670 # set log_address = /dev/log
671 
672 [filter:container_sync]
673 use = egg:swift#container_sync
674 # Set this to false if you want to disallow any full url values to be set for
675 # any new X-Container-Sync-To headers. This will keep any new full urls from
676 # coming in, but won't change any existing values already in the cluster.
677 # Updating those will have to be done manually, as knowing what the true realm
678 # endpoint should be cannot always be guessed.
679 # allow_full_urls = true
680 # Set this to specify this clusters //realm/cluster as "current" in /info
681 # current = //REALM/CLUSTER
682 
683 # Note: Put it at the beginning of the pipeline to profile all middleware. But
684 # it is safer to put this after catch_errors, gatekeeper and healthcheck.
685 [filter:xprofile]
686 use = egg:swift#xprofile
687 # This option enable you to switch profilers which should inherit from python
688 # standard profiler. Currently the supported value can be 'cProfile',
689 # 'eventlet.green.profile' etc.
690 # profile_module = eventlet.green.profile
691 #
692 # This prefix will be used to combine process ID and timestamp to name the
693 # profile data file.  Make sure the executing user has permission to write
694 # into this path (missing path segments will be created, if necessary).
695 # If you enable profiling in more than one type of daemon, you must override
696 # it with an unique value like: /var/log/swift/profile/proxy.profile
697 # log_filename_prefix = /tmp/log/swift/profile/default.profile
698 #
699 # the profile data will be dumped to local disk based on above naming rule
700 # in this interval.
701 # dump_interval = 5.0
702 #
703 # Be careful, this option will enable profiler to dump data into the file with
704 # time stamp which means there will be lots of files piled up in the directory.
705 # dump_timestamp = false
706 #
707 # This is the path of the URL to access the mini web UI.
708 # path = /__profile__
709 #
710 # Clear the data when the wsgi server shutdown.
711 # flush_at_shutdown = false
712 #
713 # Clear the data when the wsgi server shutdown.
714 # flush_at_shutdown = false
715 #
716 # unwind the iterator of applications
717 # unwind = false
718 
719 # Note: Put after slo, dlo in the pipeline.
720 # If you don't put it in the pipeline, it will be inserted automatically.
721 [filter:versioned_writes]
722 use = egg:swift#versioned_writes
723 # Enables using versioned writes middleware and exposing configuration
724 # settings via HTTP GET /info.
725 # WARNING: Setting this option bypasses the "allow_versions" option
726 # in the container configuration file, which will be eventually
727 # deprecated. See documentation for more details.
728 # allow_versioned_writes = false


 

Next, configure the swift storage node:

ss1. Install the prerequisite packages and prepare the storage device.

1 yum install xfsprogs rsync
2 
3 mkfs.xfs /dev/sdb
4 mkdir -p /srv/node/sdb

Here is the disk layout on my swift node:

1 [root@node3 opt]# fdisk -l
 2 
 3 Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
 4 Units = sectors of 1 * 512 = 512 bytes
 5 Sector size (logical/physical): 512 bytes / 512 bytes
 6 I/O size (minimum/optimal): 512 bytes / 512 bytes
 7 Disk label type: dos
 8 Disk identifier: 0x0005b206
 9 
10    Device Boot      Start         End      Blocks   Id  System
11 /dev/sda1   *        2048     1026047      512000   83  Linux
12 /dev/sda2         1026048   976773119   487873536   8e  Linux LVM
13 
14 Disk /dev/sdb: 500.1 GB, 500107862016 bytes, 976773168 sectors
15 Units = sectors of 1 * 512 = 512 bytes
16 Sector size (logical/physical): 512 bytes / 512 bytes
17 I/O size (minimum/optimal): 512 bytes / 512 bytes
18 
19 
20 Disk /dev/mapper/centos00-swap: 16.9 GB, 16919822336 bytes, 33046528 sectors
21 Units = sectors of 1 * 512 = 512 bytes
22 Sector size (logical/physical): 512 bytes / 512 bytes
23 I/O size (minimum/optimal): 512 bytes / 512 bytes
24 
25 
26 Disk /dev/mapper/centos00-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
27 Units = sectors of 1 * 512 = 512 bytes
28 Sector size (logical/physical): 512 bytes / 512 bytes
29 I/O size (minimum/optimal): 512 bytes / 512 bytes
30 
31 
32 Disk /dev/mapper/centos00-home: 429.0 GB, 428972441600 bytes, 837836800 sectors
33 Units = sectors of 1 * 512 = 512 bytes
34 Sector size (logical/physical): 512 bytes / 512 bytes
35 I/O size (minimum/optimal): 512 bytes / 512 bytes

 

ss2. Configure /etc/fstab

Append the following line to the end of the file:

1 /dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

 

ss3. Mount the disk

1 mount /srv/node/sdb
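
A quick way to confirm both the mount and the fstab entry is sketched below; `mount -a` re-reads /etc/fstab, so it would also surface any typo in the line added in ss2:

1 # re-read /etc/fstab and report any errors in the new entry
2 mount -a
3 # confirm the device is mounted where swift expects it
4 df -h /srv/node/sdb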

 

ss4. Configure /etc/rsyncd.conf by appending the following:

1 uid = swift
 2 gid = swift
 3 log file = /var/log/rsyncd.log
 4 pid file = /var/run/rsyncd.pid
 5 address = 192.168.1.130
 6 
 7 [account]
 8 max connections = 2
 9 path = /srv/node/
10 read only = false
11 lock file = /var/lock/account.lock
12 
13 [container]
14 max connections = 2
15 path = /srv/node/
16 read only = false
17 lock file = /var/lock/container.lock
18 
19 [object]
20 max connections = 2
21 path = /srv/node/
22 read only = false
23 lock file = /var/lock/object.lock

 

ss5. Enable and start the rsync service

1 systemctl enable rsyncd.service
2 systemctl start rsyncd.service
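
To check that the rsync daemon is up and exporting the three modules defined above, you can list them from any host that can reach 192.168.1.130 (a sketch; expect account, container and object in the output):

1 # list the modules exported by the rsync daemon
2 rsync rsync://192.168.1.130/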

 

ss6. Install the storage service packages

1 yum install openstack-swift-account openstack-swift-container openstack-swift-object

 

ss7. Configure the account, container, and object servers

1 curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/liberty
2 curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/liberty
3 curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/liberty

/etc/swift/account-server.conf (leave the rest of the file at its default values):

1 [DEFAULT]
 2 bind_ip = 192.168.1.130
 3 bind_port = 6002
 4 user = swift
 5 swift_dir = /etc/swift
 6 devices = /srv/node
 7 mount_check = true
 8 
 9 [pipeline:main]
10 pipeline = healthcheck recon account-server
11 
12 [filter:recon]
13 use = egg:swift#recon
14 recon_cache_path = /var/cache/swift

/etc/swift/container-server.conf (leave the rest of the file at its default values):

1 [DEFAULT]
 2 bind_ip = 192.168.1.130
 3 bind_port = 6001
 4 user = swift
 5 swift_dir = /etc/swift
 6 devices = /srv/node
 7 mount_check = true
 8 
 9 [pipeline:main]
10 pipeline = healthcheck recon container-server
11 
12 [filter:recon]
13 use = egg:swift#recon
14 recon_cache_path = /var/cache/swift

/etc/swift/object-server.conf (leave the rest of the file at its default values):

1 [DEFAULT]
 2 bind_ip = 192.168.1.130
 3 bind_port = 6000
 4 user = swift
 5 swift_dir = /etc/swift
 6 devices = /srv/node
 7 mount_check = true
 8 
 9 [pipeline:main]
10 pipeline = healthcheck recon object-server
11 
12 [filter:recon]
13 use = egg:swift#recon
14 recon_cache_path = /var/cache/swift
15 recon_lock_path = /var/lock

 

ss8. Fix directory ownership

1 chown -R swift:swift /srv/node
2 
3 mkdir -p /var/cache/swift
4 chown -R root:swift /var/cache/swift

 

The following operations are back on the controller node:

s4. Create the account ring. First cd into the /etc/swift directory.

1 swift-ring-builder account.builder create 10 3 1
2 
3 swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.1.130 --port 6002 --device sdb --weight 100
4 
5 swift-ring-builder account.builder rebalance
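
For reference, the three numbers passed to `create` are the partition power (2^10 = 1024 partitions), the replica count (3), and the minimum number of hours before a partition can be moved again (1). With only a single device in the ring, all three replicas necessarily land on that one device, which is fine for this single-node test but provides no real redundancy. The builder file can be inspected at any time:

1 # print the partition power, replica count, and the devices in the ring
2 swift-ring-builder account.builder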

 

s5. Create the container ring, again from the /etc/swift directory.

1 swift-ring-builder container.builder create 10 3 1
2 
3 swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.130 --port 6001 --device sdb --weight 100
4 
5 swift-ring-builder container.builder rebalance

 

s6. Create the object ring, again from the /etc/swift directory.

1 swift-ring-builder object.builder create 10 3 1
2 
3 swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.130 --port 6000 --device sdb --weight 100
4 
5 swift-ring-builder object.builder rebalance
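
Once all three rings are rebalanced, a short loop can confirm that each builder contains the device and that a matching .ring.gz file was produced (a sketch, run from /etc/swift):

1 for b in account container object; do
2     echo "== $b =="
3     swift-ring-builder ${b}.builder    # summary of the builder
4     ls -l ${b}.ring.gz                 # the ring file produced by rebalance
5 done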

 

s7. Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any additional node running the proxy service (see the scp sketch below).
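
In this layout that means pushing the three files from the controller to node3. A minimal sketch, assuming root ssh access from node0 to node3:

1 # run from /etc/swift on the controller
2 scp account.ring.gz container.ring.gz object.ring.gz node3:/etc/swift/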

 

s8. Configure /etc/swift/swift.conf, then copy the finished file to node3 (and to any other node running object storage services or the proxy-server).

1 curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/liberty

/etc/swift/swift.conf

1 [swift-hash]
2 swift_hash_path_suffix = shihuc
3 swift_hash_path_prefix = openstack
4 
5 [storage-policy:0]
6 name = Policy-0
7 default = yes
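
Distributing the finished file is again just a copy; a sketch assuming the same root ssh access to node3:

1 scp /etc/swift/swift.conf node3:/etc/swift/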

 

s9. Fix the ownership of /etc/swift

1 chown -R root:swift /etc/swift

 

s10. Enable and start the proxy service (on the controller node and on any other node running the proxy-server)

1 systemctl enable openstack-swift-proxy.service memcached.service
2 systemctl start openstack-swift-proxy.service memcached.service

 

s11. Enable and start the account, container, and object services (on node3, the object storage node)

1 systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service   openstack-swift-account-reaper.service openstack-swift-account-replicator.service
2 systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
3 systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
4 systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
5 systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
6 systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
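
With this many units it is easy to miss one that failed to start; a short loop over the service names reports which ones are actually active (a sketch, run on node3):

1 # confirm every account/container/object service started above is active
2 for s in account account-auditor account-reaper account-replicator container container-auditor container-replicator container-updater object object-auditor object-replicator object-updater; do
3     printf '%-45s %s\n' "openstack-swift-$s" "$(systemctl is-active openstack-swift-$s.service)"
4 done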

 

Finally, verification. First add the identity API version environment variable to the rc files (on the controller node), then check the service status with swift stat:

1 echo "export OS_AUTH_VERSION=3"  | tee -a admin-openrc.sh demo-openrc.sh
1 [root@node0 opt]# swift stat
 2                         Account: AUTH_c6669377868c438f8a81cc234f85338f
 3                      Containers: 1
 4                         Objects: 0
 5                           Bytes: 0
 6 Containers in policy "policy-0": 1
 7    Objects in policy "policy-0": 0
 8      Bytes in policy "policy-0": 0
 9     X-Account-Project-Domain-Id: default
10                     X-Timestamp: 1456454203.32398
11                      X-Trans-Id: txaee8904dd48f484bb7534-0056d4f908
12                    Content-Type: text/plain; charset=utf-8
13                   Accept-Ranges: bytes

Output like the above means the deployment is basically working. Next, try uploading a file and listing containers; if you see output like the following, everything is functioning correctly.

1 [root@node0 opt]# swift upload C1 admin-openrc.sh 
2 admin-openrc.sh
3 [root@node0 opt]# 
4 [root@node0 opt]# swift list
5 C1
6 [root@node0 opt]# swift list C1
7 admin-openrc.sh
8 [root@node0 opt]#
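
As a final sanity check, the object can be pulled back out of the store and compared with the original. The target path /tmp/admin-openrc.sh.fetched below is just an arbitrary name chosen so the local copy in /opt is not overwritten:

1 # fetch the stored copy under a different name
2 swift download C1 admin-openrc.sh --output /tmp/admin-openrc.sh.fetched
3 # no diff output means the round trip preserved the file
4 diff /tmp/admin-openrc.sh.fetched admin-openrc.sh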

 

A few problems I ran into while deploying swift are worth recording here.

problem1:

1 [root@node0 opt]# swift stat -v
2 /usr/lib/python2.7/site-packages/keystoneclient/service_catalog.py:196: UserWarning: Providing attr without filter_value to get_urls() is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. Either both should be provided or neither should be provided.
3   'Providing attr without filter_value to get_urls() is '
4 Account HEAD failed: http://node0:8080/v1/AUTH_c6669377868c438f8a81cc234f85338f 503 Service Unavailable

For this 503 error, checking /var/log/messages on the controller node showed:

1 Feb 26 10:02:30 localhost swift-account-server: Traceback (most recent call last):
 2 Feb 26 10:02:30 localhost swift-account-server: File "/usr/bin/swift-account-server", line 23, in <module>
 3 Feb 26 10:02:30 localhost swift-account-server: sys.exit(run_wsgi(conf_file, 'account-server', **options))
 4 Feb 26 10:02:30 localhost swift-account-server: File "/usr/lib/python2.7/site-packages/swift/common/wsgi.py", line 888, in run_wsgi
 5 Feb 26 10:02:30 localhost swift-account-server: loadapp(conf_path, global_conf=global_conf)
 6 Feb 26 10:02:30 localhost swift-account-server: File "/usr/lib/python2.7/site-packages/swift/common/wsgi.py", line 384, in loadapp
 7 Feb 26 10:02:30 localhost swift-account-server: ctx = loadcontext(loadwsgi.APP, conf_file, global_conf=global_conf)
 8 Feb 26 10:02:30 localhost swift-account-server: File "/usr/lib/python2.7/site-packages/swift/common/wsgi.py", line 368, in loadcontext
 9 Feb 26 10:02:30 localhost swift-account-server: global_conf=global_conf)
10 Feb 26 10:02:30 localhost swift-account-server: File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
11 Feb 26 10:02:30 localhost swift-account-server: global_conf=global_conf)
12 Feb 26 10:02:30 localhost swift-account-server: File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig
13 Feb 26 10:02:30 localhost swift-account-server: return loader.get_context(object_type, name, global_conf)
14 Feb 26 10:02:30 localhost swift-account-server: File "/usr/lib/python2.7/site-packages/swift/common/wsgi.py", line 64, in get_context
15 Feb 26 10:02:30 localhost swift-account-server: object_type, name=name, global_conf=global_conf)
16 Feb 26 10:02:30 localhost swift-account-server: File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 450, in get_context
17 Feb 26 10:02:30 localhost swift-account-server: global_additions=global_additions)
18 Feb 26 10:02:30 localhost swift-account-server: File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 562, in _pipeline_app_context
19 Feb 26 10:02:30 localhost swift-account-server: for name in pipeline[:-1]]
20 Feb 26 10:02:30 localhost swift-account-server: File "/usr/lib/python2.7/site-packages/swift/common/wsgi.py", line 64, in get_context
21 Feb 26 10:02:30 localhost swift-account-server: object_type, name=name, global_conf=global_conf)
22 Feb 26 10:02:30 localhost swift-account-server: File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 408, in get_context
23 Feb 26 10:02:30 localhost swift-account-server: object_type, name=name)
24 Feb 26 10:02:30 localhost swift-account-server: File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 587, in find_config_section
25 Feb 26 10:02:30 localhost swift-account-server: self.filename))
26 Feb 26 10:02:30 localhost swift-account-server: LookupError: No section 'healthcheck' (prefixed by 'filter') found in config /etc/swift/account-server.conf
27 Feb 26 10:02:30 localhost systemd: openstack-swift-account.service: main process exited, code=exited, status=1/FAILURE
28 Feb 26 10:02:30 localhost systemd: Unit openstack-swift-account.service entered failed state.
29 Feb 26 10:02:30 localhost systemd: openstack-swift-account.service failed.

The traceback shows that there is no healthcheck section, which means the account-server, container-server, and object-server configuration files need to be checked. It turned out that the way I had originally fetched these three files was broken: I had downloaded them with wget -qO target source, did not notice that this command was wrong, and the resulting files really did lack the healthcheck section. After re-downloading and reconfiguring the files as described in the official guide and restarting the account, container, and object services, systemd-cgls showed all of the swift-related services running, confirming that the configuration was now correct. Verify again:
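
A quick way to catch this class of problem before restarting anything is to grep the three server configs for the filter sections named in their pipelines; the sketch below simply checks that the healthcheck and recon sections exist in each file:

1 # each file should report a [filter:healthcheck] and a [filter:recon] section
2 grep -nE 'filter:(healthcheck|recon)' /etc/swift/account-server.conf /etc/swift/container-server.conf /etc/swift/object-server.conf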

1 [root@node0 swift]# swift stat -v
 2 /usr/lib/python2.7/site-packages/keystoneclient/service_catalog.py:196: UserWarning: Providing attr without filter_value to get_urls() is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. Either both should be provided or neither should be provided.
 3   'Providing attr without filter_value to get_urls() is '
 4      StorageURL: http://node0:8080/v1/AUTH_c6669377868c438f8a81cc234f85338f
 5      Auth Token: 98bd7931a5834f6dba424dcab9a14d3a
 6         Account: AUTH_c6669377868c438f8a81cc234f85338f
 7      Containers: 0
 8         Objects: 0
 9           Bytes: 0
10 X-Put-Timestamp: 1456453671.08880
11     X-Timestamp: 1456453671.08880
12      X-Trans-Id: tx952ca1fe02b94b35b11d4-0056cfb826
13    Content-Type: text/plain; charset=utf-8
14 [root@node0 swift]#

 

problem2:

1 /usr/lib/python2.7/site-packages/keystoneclient/service_catalog.py:196: UserWarning: Providing attr without filter_value to get_urls() is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. Either both should be provided or neither should be provided.
2   'Providing attr without filter_value to get_urls() is '

The fix for this warning, which appears after running swift commands, is simple: give the swift client the region information. I added export OS_REGION_NAME=RegionOne to admin-openrc.sh, ran source admin-openrc.sh, and after that the swift commands no longer printed this warning.
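
In practice that is just one more line appended to the rc file, mirroring the earlier OS_AUTH_VERSION command (appending it to demo-openrc.sh as well is optional):

1 echo "export OS_REGION_NAME=RegionOne" | tee -a admin-openrc.sh demo-openrc.sh
2 source admin-openrc.sh
3 swift stat    # should now run without the get_urls() warning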

 

To wrap up, here is a screenshot of the swift-related view in the dashboard.

[Screenshot: swift object store view in the OpenStack dashboard]