Controller node

Components:

1. neutron-server (port 9696) API: receives and responds to external network management requests
2. neutron-linuxbridge-agent: creates the bridge NICs
3. neutron-dhcp-agent: allocates IP addresses
4. neutron-metadata-agent: works with the nova metadata API to support instance customization
5. neutron-l3-agent: implements layer-3 (network layer) features such as VXLAN

1. Create the database and grant privileges

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    ->     IDENTIFIED BY 'Fq9atARCZtjEbqu3XMh8';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    ->     IDENTIFIED BY 'Fq9atARCZtjEbqu3XMh8';
Query OK, 0 rows affected (0.00 sec)

2. Create the user in Keystone and assign the role

If you shut down, rebooted, or otherwise interrupted the session in the meantime, remember to reload the environment variables:

source admin-openrc

[root@controller01 ~]# openstack user create --domain default --password N3Tt3A80q2NqpADFNwTV neutron
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | 8cba1e7341c14ab993124909c705919a |
| enabled   | True                             |
| id        | 3cc4436aa3cb4af4bf88af2ce3494703 |
| name      | neutron                          |
+-----------+----------------------------------+
[root@controller01 ~]# openstack role add --project service --user neutron admin

# This command produces no output.

3. Create the service and register the API endpoints in Keystone

[root@controller01 ~]# openstack service create --name neutron \
>   --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 99a04c8b3b654d8f8a92ad4566d25a36 |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+

[root@controller01 ~]# openstack endpoint create --region RegionOne \
>   network public http://controller01:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | fc583a1b99234c139ff3089347d196c3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 99a04c8b3b654d8f8a92ad4566d25a36 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller01:9696         |
+--------------+----------------------------------+

[root@controller01 ~]# openstack endpoint create --region RegionOne \
>   network internal http://controller01:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 54dab9489bdd409c9669060a45ef57da |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 99a04c8b3b654d8f8a92ad4566d25a36 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller01:9696         |
+--------------+----------------------------------+

[root@controller01 ~]# openstack endpoint create --region RegionOne \
>   network admin http://controller01:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | cc48baaa147d431eada1c2411343b911 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 99a04c8b3b654d8f8a92ad4566d25a36 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller01:9696         |
+--------------+----------------------------------+

4. Install the service packages

According to the official guide, once the service API is registered, the Neutron installation forks into two options:

  • Provider networks: layer-2 networking only
  • Self-service (private) networks: layer-3 networking; provider networks must be configured first before self-service networks can be used.

We use provider networks here.

5. Edit the configuration files

Neutron controller node

  • Provider network installation and configuration:

Four configuration files need to be modified: the service component, the Modular Layer 2 (ML2) plug-in, the Linux bridge agent, and the DHCP agent.

yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables

  • ebtables: like iptables, a tool for configuring network packet filtering on Linux. Why "configuring"? Because these tools only define the rules; the kernel does the actual enforcement. The filtering itself is implemented in the kernel; the tools merely specify the filtering rules.

Edit the /etc/neutron/neutron.conf file and complete the following actions:

In the [database] section, configure database access:

[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

Replace NEUTRON_DBPASS with the password you chose for the database.

In the [DEFAULT] section, enable the ML2 plug-in and disable additional plug-ins:

[DEFAULT]
...
core_plugin = ml2    # with ML2 enabled here, the ML2 plug-in must be configured in a later step
service_plugins =    # a plain layer-2 setup needs no service plug-ins; only layer-3 networking does

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:

[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:

[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[nova]
...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

PS: The Networking service was originally split out of Nova, so the two components are tightly coupled: neutron needs Nova settings, Nova needs neutron settings, and later we will come back to nova.conf to configure neutron.
For example:

  1. Before deleting a network, Neutron must ask Nova whether any users are still using that network.
  2. Before deleting an instance, Nova must notify Neutron to delete the ports attached to that host.

Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
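The neutron.conf edits above lend themselves to scripting. Below is a minimal sketch using Python's configparser; the local file name, the passwords, and the helper `apply_options` are placeholders for illustration, not an official tool:

```python
import configparser

# The neutron.conf options discussed in this section (passwords are placeholders).
OPTIONS = {
    "DEFAULT": {
        "core_plugin": "ml2",
        "service_plugins": "",
        "rpc_backend": "rabbit",
        "auth_strategy": "keystone",
        "notify_nova_on_port_status_changes": "True",
        "notify_nova_on_port_data_changes": "True",
    },
    "database": {
        "connection": "mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron",
    },
    "oslo_messaging_rabbit": {
        "rabbit_host": "controller",
        "rabbit_userid": "openstack",
        "rabbit_password": "RABBIT_PASS",
    },
    "oslo_concurrency": {"lock_path": "/var/lib/neutron/tmp"},
}

def apply_options(path, options):
    cfg = configparser.ConfigParser()
    cfg.read(path)  # keep whatever is already in the file
    for section, kv in options.items():
        if section != "DEFAULT" and not cfg.has_section(section):
            cfg.add_section(section)
        for key, value in kv.items():
            cfg.set(section, key, value)
    with open(path, "w") as f:
        cfg.write(f)

apply_options("neutron.conf", OPTIONS)  # use /etc/neutron/neutron.conf on a real node
```

On a real node you would point this at /etc/neutron/neutron.conf and substitute the actual passwords.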

Configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 virtual networking infrastructure for instances.

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:

In the [ml2] section, enable flat and VLAN networks:

[ml2]
...
type_drivers = flat,vlan

1. flat is a flat network: the host and the instances we create share the same network; think of it as bridging.
2. vlan is a layer-2 network type that requires switch support, including VLAN configuration on the switch; it cannot be simulated in a VMware virtual machine environment.
3. A vlan setup can create multiple networks; the flat type can create only one network, effectively a bridge.

In the [ml2] section, disable self-service (private) networks:

[ml2]
...
tenant_network_types =

Tenant network types: this environment uses layer-2 networking only, so this self-service option is left empty.

In the [ml2] section, enable the Linux bridge mechanism:

[ml2]
...
mechanism_drivers = linuxbridge

  • Mechanism drivers: enable linuxbridge; the other common mechanism is openvswitch, OVS for short.
  • This choice determines which two configuration files must be adjusted next, and what goes into them.

In the [ml2] section, enable the port security extension driver:

[ml2]
...
extension_drivers = port_security

Equivalent to security-group configuration in a public cloud.

In the [ml2_type_flat] section, configure the provider virtual network as a flat network:

[ml2_type_flat]
...
flat_networks = provider

This names the flat network provider, but it does not yet specify which NIC to bridge through.

In the [securitygroup] section, enable ipset to make security group rules more efficient:

[securitygroup]
...
enable_ipset = True

# implemented with ipset, which aggregates iptables rules
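Taken together, the [ml2] settings above can be sanity-checked before anything is restarted. A small illustrative sketch; the inlined fragment mirrors this section, and `check_ml2` is a made-up helper, not part of Neutron:

```python
import configparser

# The ml2_conf.ini fragments discussed above, inlined for illustration.
ML2_CONF = """
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = True
"""

def check_ml2(text):
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    problems = []
    if "flat" not in cfg.get("ml2", "type_drivers").split(","):
        problems.append("flat type driver missing")
    if cfg.get("ml2", "tenant_network_types").strip():
        problems.append("tenant networks should be empty for a provider-only setup")
    if "linuxbridge" not in cfg.get("ml2", "mechanism_drivers"):
        problems.append("linuxbridge mechanism missing")
    return problems

print(check_ml2(ML2_CONF))  # → [] when the file matches this guide
```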

Configure the Linux bridge agent

  • The Linux bridge agent builds layer-2 virtual networking infrastructure for instances and handles security group rules.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

  • Replace PROVIDER_INTERFACE_NAME with the name of the machine's physical NIC; for example, my host's IP is 192.168.137.11 and the corresponding NIC is eth0.
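The provider:PROVIDER_INTERFACE_NAME value maps a physical network name to an interface. A rough imitation of how such a value breaks down (illustrative only, not Neutron's actual parser):

```python
# Split "physnet:interface" pairs the way the mapping option is structured;
# parse_mappings is an illustrative helper, not Neutron code.
def parse_mappings(raw):
    mappings = {}
    for pair in raw.split(","):
        physnet, _, interface = pair.strip().partition(":")
        if not interface:
            raise ValueError("expected <physical_network>:<interface>: %r" % pair)
        mappings[physnet] = interface
    return mappings

print(parse_mappings("provider:eth0"))  # → {'provider': 'eth0'}
```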

In the [vxlan] section, disable VXLAN overlay networks:

[vxlan]
enable_vxlan = False

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

As before, neutron.agent.linux.iptables_firewall.IptablesFirewallDriver is a Python import path.

Configure the DHCP agent

  • The DHCP agent provides DHCP services for virtual networks.

Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:

In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can reach the metadata service over the network:

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

Configure the metadata agent

The metadata agent provides configuration information to instances, such as credentials for accessing them.

Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:

In the [DEFAULT] section, configure the metadata host and the shared secret:

[DEFAULT]
...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

  • metadata_proxy_shared_secret is not decided yet; its value must match the corresponding setting in nova.conf.

Replace METADATA_SECRET with the secret you choose for the metadata proxy.
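Because this secret has to be mirrored in nova.conf, the symmetry is easy to check. A sketch, with both fragments inlined as stand-ins for the real /etc/neutron/metadata_agent.ini and /etc/nova/nova.conf; `secrets_match` is an illustrative helper:

```python
import configparser

# Stand-ins for the two real config files (METADATA_SECRET is a placeholder).
METADATA_AGENT = """
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
"""

NOVA_CONF = """
[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
"""

def secrets_match(agent_text, nova_text):
    a = configparser.ConfigParser()
    a.read_string(agent_text)
    n = configparser.ConfigParser()
    n.read_string(nova_text)
    return (a.get("DEFAULT", "metadata_proxy_shared_secret")
            == n.get("neutron", "metadata_proxy_shared_secret"))

print(secrets_match(METADATA_AGENT, NOVA_CONF))  # → True
```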

Configure the Compute service on the controller node to use the Networking service (the supplementary neutron settings)

Edit the /etc/nova/nova.conf file and complete the following actions:

In the [neutron] section, configure access parameters, enable the metadata proxy, and set the secret:

[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET

1. metadata_proxy_shared_secret here must match the secret configured for the Neutron metadata agent.
2. Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
3. Replace METADATA_SECRET with the secret you chose for the metadata proxy.

6. Populate the database

The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If the link does not exist, create it with the following command:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Because there are two major families of core network-virtualization plug-ins (ml2 and vmware), and database synchronization only reads /etc/neutron/plugin.ini, whichever plug-in you use must be symlinked to the plugin.ini file.

[root@controller01 ml2]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller01 ml2]# ls !$
ls /etc/neutron/plugin.ini
lrwxrwxrwx 1 root root 37 Nov 24 18:00 /etc/neutron/plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
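What the init scripts expect can be mimicked in miniature; a throwaway temp directory stands in for /etc/neutron here:

```python
import os
import tempfile

# Recreate the plugin.ini -> ml2_conf.ini link in a temp dir for illustration;
# on a real controller the paths live under /etc/neutron.
root = tempfile.mkdtemp()
target = os.path.join(root, "ml2_conf.ini")
link = os.path.join(root, "plugin.ini")
open(target, "w").close()  # stand-in for the real ML2 config file
os.symlink(target, link)

print(os.path.islink(link), os.readlink(link) == target)  # → True True
```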

Populate the database:

[root@controller01 ml2]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
>   --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
No handlers could be found for logger "oslo_config.cfg"
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> kilo, kilo_initial
INFO  [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py
INFO  [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151, neutrodb_ipam
INFO  [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf, Initial operations in support of address scopes
INFO  [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee, Flavor framework
INFO  [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f, network_rbac
INFO  [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773, quota_usage
INFO  [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592, subnetpool hash
INFO  [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7, add order to dnsnameservers
INFO  [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79, address scope support in subnetpool
INFO  [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, qos db changes
INFO  [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, quota_reservations
INFO  [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, Add dns_name to Port
INFO  [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d, Add availability zone
INFO  [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a, add is_default to subnetpool
INFO  [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25, Add standard attribute table
INFO  [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee, Add network availability zone
INFO  [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9, Add router availability zone
INFO  [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4, Add ip_version to AddressScope
INFO  [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664, Add tables and attributes to support external DNS integration
INFO  [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5, add_unique_ha_router_agent_port_bindings
INFO  [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f, Auto Allocated Topology - aka Get-Me-A-Network
INFO  [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821, add dynamic routing model data
INFO  [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4, add_bgp_dragent_model_data
INFO  [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81, rbac_qos_policy
INFO  [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6, Add resource_versions row to agent table
INFO  [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532, tag support
INFO  [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f, add_timestamp_to_base_resources
INFO  [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a, Add desc to standard attr table
INFO  [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99, Initial no-op Liberty contract rule.
INFO  [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada, network_rbac
INFO  [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016, Drop legacy OVS and LB plugin tables
INFO  [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3, Metaplugin removal
INFO  [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d, Add missing foreign keys
INFO  [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d, add geneve ml2 type driver
INFO  [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297, Drop cisco monolithic tables
INFO  [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c, Drop embrane plugin table
INFO  [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39, standardattributes migration
INFO  [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b, DVR sheduling refactoring
INFO  [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050, Drop NEC plugin tables
INFO  [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9, rbac_qos_policy
INFO  [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada, network_rbac_external
INFO  [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc, standard_desc
  OK

If the output ends with OK, the synchronization succeeded.

Note: database population occurs later for Networking, because the script requires complete server and plug-in configuration files.

Check that the tables were generated correctly:

[root@controller01 ml2]# mysql -uroot -phl044sdvwTT1LZ7Oa4wp neutron -e "show  tables;"
+-----------------------------------------+
| Tables_in_neutron                       |
+-----------------------------------------+
| address_scopes                          |
| agents                                  |
| alembic_version                         |
| allowedaddresspairs                     |
...
......

7. Start the services

Restart nova-api:

  • This step matters, because Nova's configuration was changed.

systemctl restart openstack-nova-api.service

Enable the services to start at boot:

systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

Start the services:

systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

Check that the services started correctly:

[root@controller01 ~]# source admin-openrc
[root@controller01 ~]# neutron agent-list
+--------------------------------------+--------------------+--------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host         | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+--------------+-------------------+-------+----------------+---------------------------+
| 15e5fe0b-11d6-4b2b-bb8d-e4f049d3d02f | DHCP agent         | controller01 | nova              | :-)   | True           | neutron-dhcp-agent        |
| a80942fe-0468-4055-aa01-8a06dd2c3ce0 | Metadata agent     | controller01 |                   | :-)   | True           | neutron-metadata-agent    |
| efcb5c70-df27-422c-b69f-b2853d86a2e3 | Linux bridge agent | controller01 |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+--------------+-------------------+-------+----------------+---------------------------+

This output indicates the services are running. Newly started services can take one or two minutes before their agents show up. Neutron marks a healthy agent with :-); an agent that failed to start shows xxx. The health check runs periodically, not continuously, so if a service dies between checks it will keep showing as healthy until the next check runs.
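The alive-column convention described above can be scripted. A toy parser over neutron agent-list style rows; the sample rows are adapted from this guide, with one agent deliberately marked dead for illustration:

```python
# Sample rows imitating `neutron agent-list` output (columns: id, agent_type,
# host, availability_zone, alive, admin_state_up, binary).
SAMPLE = """\
| 15e5fe0b | DHCP agent         | controller01 | nova | :-) | True | neutron-dhcp-agent |
| a80942fe | Metadata agent     | controller01 |      | :-) | True | neutron-metadata-agent |
| efcb5c70 | Linux bridge agent | controller01 |      | xxx | True | neutron-linuxbridge-agent |
"""

def dead_agents(table):
    dead = []
    for line in table.splitlines():
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) >= 7 and cells[4] != ":-)":
            dead.append(cells[6])
    return dead

print(dead_agents(SAMPLE))  # → ['neutron-linuxbridge-agent']
```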

Neutron compute node

1. Install the services

yum install openstack-neutron-linuxbridge ebtables ipset

  1. ipset: batch control for iptables rules
  2. openstack-neutron-linuxbridge: creates the bridge NICs for instances

The two services every compute node needs:

  1. nova-compute — creates instances
  2. neutron-linuxbridge — creates the network

2. Edit the configuration files

Configure neutron

Edit the /etc/neutron/neutron.conf file and complete the following actions:

In the [database] section, comment out all connection options, because compute nodes do not access the database directly.

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:

[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp

Configure the provider network

Because the controller node is configured with provider networking, the compute node must be configured for the provider network as well.

1) Configure the Linux bridge agent

  • The Linux bridge agent builds layer-2 virtual networking infrastructure for instances and handles security group rules.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

This setting ties the provider network to a NIC on this node. NIC names can differ from node to node even for the same segment, so pick the interface that sits on the same segment as the one used on the controller. For example, if the controller's 10.0.0.x segment is on eth0, but on the compute node that segment is on ens32, then the compute node should set ens32 here.
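The per-node interface choice described above can be expressed as a small lookup; the addresses and interface names below are the hypothetical ones from the example, and `interface_for` is an illustrative helper:

```python
import ipaddress

# Address lines imitating `ip -o -4 addr` output on each node (hypothetical).
CONTROLLER = ["eth0 10.0.0.11/24", "eth1 172.16.1.11/24"]
COMPUTE    = ["ens32 10.0.0.31/24", "ens33 172.16.1.31/24"]

def interface_for(segment, addr_lines):
    net = ipaddress.ip_network(segment)
    for line in addr_lines:
        name, cidr = line.split()
        if ipaddress.ip_interface(cidr).ip in net:
            return name
    return None

print(interface_for("10.0.0.0/24", CONTROLLER))  # → eth0
print(interface_for("10.0.0.0/24", COMPUTE))     # → ens32
```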

In the [vxlan] section, disable VXLAN overlay networks:

[vxlan]
enable_vxlan = False

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

2) Configure the Networking service for the compute node

Edit the /etc/nova/nova.conf file and complete the following actions:

In the [neutron] section, configure access parameters:

[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

3. Start the services

Because Nova's configuration was changed, restart the Compute service first:

[root@computer01 ~]# systemctl restart openstack-nova-compute.service

Start the neutron agent:

[root@computer01 ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@computer01 ~]# systemctl start neutron-linuxbridge-agent.service
[root@computer01 ~]# systemctl status neutron-linuxbridge-agent
● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-11-25 11:00:00 CST; 15s ago
  Process: 3054 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
Main PID: 3062 (neutron-linuxbr)
    Tasks: 1
   CGroup: /system.slice/neutron-linuxbridge-agent.service
           └─3062 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neut...
Nov 25 11:00:00 computer01 neutron-enable-bridge-firewall.sh[3054]: net.bridge.bridge-nf-call-arptables = 1
Nov 25 11:00:00 computer01 neutron-enable-bridge-firewall.sh[3054]: net.bridge.bridge-nf-call-iptables = 1
Nov 25 11:00:00 computer01 neutron-enable-bridge-firewall.sh[3054]: net.bridge.bridge-nf-call-ip6tables = 1
Nov 25 11:00:00 computer01 systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
Nov 25 11:00:00 computer01 neutron-linuxbridge-agent[3062]: Guru mediation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be ...e reports.
Nov 25 11:00:00 computer01 neutron-linuxbridge-agent[3062]: Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
Nov 25 11:00:01 computer01 neutron-linuxbridge-agent[3062]: Option "notification_driver" from group "DEFAULT" is deprecated. Use option "driver" from group "oslo_messaging_notifications".
Nov 25 11:00:01 computer01 sudo[3081]:  neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
Nov 25 11:00:01 computer01 neutron-linuxbridge-agent[3062]: /usr/lib/python2.7/site-packages/pkg_resources/__init__.py:187: RuntimeWarning: You have iterated over the result of pkg_res...
Nov 25 11:00:01 computer01 neutron-linuxbridge-agent[3062]: stacklevel=1,
Hint: Some lines were ellipsized, use -l to show in full.

Verify neutron

Back on the controller node:

[root@controller01 ~]# neutron agent-list
+--------------------------------------+--------------------+--------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host         | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+--------------+-------------------+-------+----------------+---------------------------+
| 15e5fe0b-11d6-4b2b-bb8d-e4f049d3d02f | DHCP agent         | controller01 | nova              | :-)   | True           | neutron-dhcp-agent        |
| 25fcf6e4-444b-43dc-a341-c7789cd167d1 | Linux bridge agent | computer01   |                   | :-)   | True           | neutron-linuxbridge-agent |
| a80942fe-0468-4055-aa01-8a06dd2c3ce0 | Metadata agent     | controller01 |                   | :-)   | True           | neutron-metadata-agent    |
| efcb5c70-df27-422c-b69f-b2853d86a2e3 | Linux bridge agent | controller01 |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+--------------+-------------------+-------+----------------+---------------------------+
  • Notice that the host column now includes an entry for computer01.