Early OpenStack releases had no Neutron: networking was provided by nova-network, and Neutron only appeared in later releases.


The project was originally called Quantum, but was renamed Neutron because of a trademark conflict with another company.



OpenStack Networking

Network: in a physical environment we connect computers with switches or hubs to form a network. In the Neutron world, a network likewise connects multiple cloud instances.
Subnet: in a physical environment a network can be divided into multiple logical subnets. In Neutron, a subnet likewise belongs to a network.
Port: in a physical environment every network or subnet exposes many ports, such as switch ports, for computers to plug into. In Neutron, a port belongs to a subnet, and each instance's NIC is bound to a port.
Router: in a physical environment, traffic between different networks or logical subnets has to pass through a router. In Neutron, a router serves the same purpose: it connects different networks or subnets.
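The containment chain (network → subnet → port/IP) can be made concrete with Python's standard ipaddress module. This is only an illustrative sketch of the bookkeeping Neutron's IP allocation performs, not Neutron's actual API:

```python
import ipaddress

# A Neutron "network" groups one or more subnets; here is one subnet.
subnet = ipaddress.ip_network("192.168.1.0/24")

# The gateway (by default the first host address) is reserved; the
# remaining host addresses form the pool that ports draw from.
gateway = next(subnet.hosts())                      # 192.168.1.1
pool = [ip for ip in subnet.hosts() if ip != gateway]

# Plugging an instance NIC into the network means binding a port,
# which takes one address from the pool.
port_ip = pool[0]
print(gateway, port_ip, len(pool))                  # 192.168.1.1 192.168.1.2 253
```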

 

Neutron components



Whether the back end is Linux bridge or OVS, it has to talk to the database, and the database-access code is identical in both cases.



That duplication is why ML2 was created: the Linux bridge and OVS drivers, both open source, now sit underneath ML2.



ML2 also supports other commercial plug-ins.
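The idea can be sketched as a small dispatch layer: one plugin owns the shared database logic, and a configurable list of mechanism drivers is invoked in turn. This is an illustrative sketch with made-up class names, not Neutron's real code:

```python
class LinuxBridgeDriver:
    """Stand-in for the open-source linuxbridge mechanism driver."""
    def create_network(self, net_id):
        return f"linuxbridge:{net_id}"

class OVSDriver:
    """Stand-in for the open-source openvswitch mechanism driver."""
    def create_network(self, net_id):
        return f"ovs:{net_id}"

class ML2Plugin:
    """Shared DB handling lives here; drivers only do the wiring."""
    def __init__(self, mechanism_drivers):
        self.mechanism_drivers = mechanism_drivers

    def create_network(self, net_id):
        # Imagine a single DB write here, shared by every driver,
        # then a callback into each enabled mechanism driver.
        return [d.create_network(net_id) for d in self.mechanism_drivers]

# mechanism_drivers = linuxbridge,openvswitch
plugin = ML2Plugin([LinuxBridgeDriver(), OVSDriver()])
print(plugin.create_network("net-1"))   # ['linuxbridge:net-1', 'ovs:net-1']
```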



The DHCP agent allocates IP addresses.



The L3 agent provides layer-3 networking, i.e. routing.



LBaaS provides load balancing.



When the host machines and the virtual machines share one network, it is called a single flat network; the official documentation calls this a provider network.

Drawbacks of a single flat network:

The single network is a bottleneck and lacks scalability.

There is no proper multi-tenant isolation.



Network overview

Networking can be configured as either provider (public) networks or self-service (private) networks; deploy the Networking service using one of these two architectures.
Provider networks: the simplest possible architecture, supporting only instances attached to public (external) networks. There are no private networks, routers, or floating IP addresses. Only admin or other privileged users can manage provider networks.
Self-service networks: adds layer-3 services on top of the provider architecture, so instances can also attach to private networks.

This walkthrough uses provider (public) networks.

Install and configure Neutron on the controller node

1. Install the packages on the controller node

[root@linux-node1 ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables


2. Controller node configuration: database

Edit /etc/neutron/neutron.conf and complete the following.
In the [database] section, configure database access:

[database]
...
connection = mysql+pymysql://neutron:neutron@192.168.1.2/neutron


After changing the database connection in neutron.conf you do not need to sync the database right away; there is more to configure first.

3. Controller node configuration: Keystone

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
...
auth_strategy = keystone

In the [keystone_authtoken] section, add the following parameters:

[keystone_authtoken]
auth_uri = http://192.168.1.2:5000
auth_url = http://192.168.1.2:35357
memcached_servers = 192.168.1.2:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

4. Controller node configuration: RabbitMQ

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure the RabbitMQ message queue connection:

[DEFAULT]
...
rpc_backend = rabbit

In the [oslo_messaging_rabbit] section:

[oslo_messaging_rabbit]
...
rabbit_host = 192.168.1.2
rabbit_userid = openstack
rabbit_password = openstack


5. Controller node configuration: Neutron core

In the [DEFAULT] section, enable the ML2 plug-in and disable additional plug-ins (leaving the value empty after the equals sign disables all other plug-ins):

[DEFAULT]
...
core_plugin = ml2
service_plugins =


6. Controller node configuration: Nova integration

In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes.

Uncomment these two lines; they make Neutron notify Nova whenever a port's status or data changes:

[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True


In the [nova] section (the Neutron configuration file has one), configure:

[nova]
auth_url = http://192.168.1.2:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova


7. Controller node configuration: lock path

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp


8. Check the main configuration file on the controller node

The controller node's main Neutron configuration is now complete:

[root@linux-node1 ~]# grep -n '^[a-Z]' /etc/neutron/neutron.conf     
 2:auth_strategy = keystone     
 3:core_plugin = ml2     
 4:service_plugins =     
 5:notify_nova_on_port_status_changes = true     
 6:notify_nova_on_port_data_changes = true     
 515:rpc_backend = rabbit     
 658:connection = mysql+pymysql://neutron:neutron@192.168.1.2/neutron     
 767:auth_uri = http://192.168.1.2:5000     
 768:auth_url = http://192.168.1.2:35357     
 769:memcached_servers = 192.168.1.2:11211     
 770:auth_type = password     
 771:project_domain_name = default     
 772:user_domain_name = default     
 773:project_name = service     
 774:username = neutron     
 775:password = neutron     
 944:auth_url = http://192.168.1.2:35357     
 945:auth_type = password     
 946:project_domain_name = default     
 947:user_domain_name = default     
 948:region_name = RegionOne     
 949:project_name = service     
 950:username = nova     
 951:password = nova     
 1050:lock_path = /var/lib/neutron/tmp     
 1069:rabbit_host = 192.168.1.2     
 1070:rabbit_userid = openstack     
 1071:rabbit_password = openstack     
 1224:rabbit_port = 5672

9. Configure the Modular Layer 2 (ML2) plug-in on the controller node

ML2 is the layer-2 configuration; the ML2 plug-in uses the Linux bridge mechanism to build layer-2 virtual networking infrastructure for instances.

Edit /etc/neutron/plugins/ml2/ml2_conf.ini and complete the following.
In the [ml2] section, enable flat and VLAN networks:

vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,gre,vxlan,geneve
tenant_network_types =



In the [ml2] section, enable the Linux bridge mechanism.

This tells Neutron which mechanism drivers to use when creating networks; here it is linuxbridge:

[ml2]
...
mechanism_drivers = linuxbridge

The value is a list, so you can name several drivers, for example adding openvswitch:

mechanism_drivers = linuxbridge,openvswitch

In the [ml2] section, enable the port security extension driver:

[ml2]
...
extension_drivers = port_security

In the [ml2_type_flat] section, configure the public virtual network as a flat network. The official documentation uses provider as the label; here we use public:

[ml2_type_flat]
...
flat_networks = public

In the [securitygroup] section, enable ipset to make security group rules more efficient:

[securitygroup]
...
enable_ipset = True


10. Check the ML2 configuration file on the controller node

The ML2 changes on the controller node are now complete:

[root@linux-node1 ~]# grep -n '^[a-Z]' /etc/neutron/plugins/ml2/ml2_conf.ini       
 107:type_drivers = flat,vlan,gre,vxlan,geneve       
 112:tenant_network_types =       
 116:mechanism_drivers = linuxbridge,openvswitch       
 121:extension_drivers = port_security       
 153:flat_networks = public       
 215:enable_ipset = true

11. Configure the Linux bridge agent on the controller node

The Linux bridge agent builds layer-2 virtual networking infrastructure for instances and handles security group rules.
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following.
In the [linux_bridge] section, map the public virtual network to the public physical network interface (replace PUBLIC_INTERFACE_NAME with the name of the underlying physical interface, here ens33):

[linux_bridge]
physical_interface_mappings = public:ens33

In the [vxlan] section, disable VXLAN overlay networks:

[vxlan]
enable_vxlan = False

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Check which settings were changed:

[root@linux-node1 ~]# grep -n '^[a-Z]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini       
 128:physical_interface_mappings = public:ens33       
 156:enable_security_group = true       
 157:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver       
 165:enable_vxlan = false

12. Configure the DHCP agent on the controller node

Edit /etc/neutron/dhcp_agent.ini and complete the following.

In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so instances on the public network can reach the metadata service over the network:

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
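Why enable_isolated_metadata matters: on a network without a router, instances still need to reach the metadata service at 169.254.169.254, so the DHCP port serves it and dnsmasq advertises a host route there using DHCP option 121 (classless static routes, RFC 3442). A sketch of that option's wire encoding, where the DHCP-port address 192.168.1.100 is a made-up example:

```python
import ipaddress

def encode_option_121(routes):
    """Encode (destination_cidr, router) pairs per RFC 3442."""
    out = bytearray()
    for dest, router in routes:
        net = ipaddress.ip_network(dest)
        plen = net.prefixlen
        significant = (plen + 7) // 8            # only meaningful octets
        out.append(plen)
        out += net.network_address.packed[:significant]
        out += ipaddress.ip_address(router).packed
    return bytes(out)

# Host route to the metadata IP via the subnet's DHCP port (hypothetical IP).
blob = encode_option_121([("169.254.169.254/32", "192.168.1.100")])
print(blob.hex())   # 20a9fea9fec0a80164
```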


Check which settings were changed:

[root@linux-node1 ~]# grep -n '^[a-Z]' /etc/neutron/dhcp_agent.ini
 2:interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
 3:dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
 4:enable_isolated_metadata = true

The first line configures the underlying interface driver; the second selects dnsmasq, a small open-source DHCP server; the third enables isolated metadata.

13. Configure the metadata agent on the controller node

Edit /etc/neutron/metadata_agent.ini and complete the following.

In the [DEFAULT] section, configure the metadata host and the shared secret:

[DEFAULT]
...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

Replace METADATA_SECRET with a secret of your choosing; below, shi is the custom shared secret. The same secret must also be configured on the Nova side, and the two values must match.

[root@linux-node1 ~]# grep -n '^[a-Z]'  /etc/neutron/metadata_agent.ini       
 2:nova_metadata_ip = 192.168.1.2       
 3:metadata_proxy_shared_secret = shi
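What the shared secret is for: the metadata agent signs each proxied request with an HMAC-SHA256 of the instance ID keyed by this secret (sent in the X-Instance-ID-Signature header), and Nova recomputes the signature with its own copy to verify the request came from Neutron. A minimal sketch, where the instance ID is made up:

```python
import hashlib
import hmac

secret = b"shi"                             # metadata_proxy_shared_secret
instance_id = b"3f8d3c2a-example-instance"  # hypothetical instance UUID

# The metadata agent computes this and puts it in the
# X-Instance-ID-Signature header of the proxied request.
signature = hmac.new(secret, instance_id, hashlib.sha256).hexdigest()

# Nova recomputes it from its own copy of the secret; the request is
# accepted only if both sides derive the same value.
assert hmac.compare_digest(
    signature,
    hmac.new(secret, instance_id, hashlib.sha256).hexdigest(),
)

# A mismatched secret on either side breaks verification.
assert signature != hmac.new(b"wrong", instance_id, hashlib.sha256).hexdigest()
```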

14. Configure Neutron in Nova on the controller node

The settings below point Nova at Neutron's Keystone authentication endpoints; 9696 is the neutron-server port.
Edit /etc/nova/nova.conf and complete the following.
In the [neutron] section, configure the access parameters, and enable the metadata proxy and set its secret:


[neutron]
url = http://192.168.1.2:9696
auth_url = http://192.168.1.2:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

Then uncomment and set the following:

service_metadata_proxy = True
metadata_proxy_shared_secret = shi


15. Create the plug-in symlink on the controller node

The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If the link does not exist, create it:

[root@linux-node1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini



16. Sync the database on the controller node

[root@linux-node1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron       
 No handlers could be found for logger "oslo_config.cfg"       
 INFO  [alembic.runtime.migration] Context impl MySQLImpl.       
 INFO  [alembic.runtime.migration] Will assume non-transactional DDL.       
   Running upgrade for neutron ...       
 INFO  [alembic.runtime.migration] Context impl MySQLImpl.       
 INFO  [alembic.runtime.migration] Will assume non-transactional DDL.       
 INFO  [alembic.runtime.migration] Running upgrade  -> kilo, kilo_initial       
 INFO  [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py       
 INFO  [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151, neutrodb_ipam       
 INFO  [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf, Initial operations in support of address scopes       
 INFO  [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee, Flavor framework       
 INFO  [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f, network_rbac       
 INFO  [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773, quota_usage       
 INFO  [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592, subnetpool hash       
 INFO  [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7, add order to dnsnameservers       
 INFO  [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79, address scope support in subnetpool       
 INFO  [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, qos db changes       
 INFO  [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, quota_reservations       
 INFO  [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, Add dns_name to Port       
 INFO  [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d, Add availability zone       
 INFO  [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a, add is_default to subnetpool       
 INFO  [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25, Add standard attribute table       
 INFO  [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee, Add network availability zone       
 INFO  [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9, Add router availability zone       
 INFO  [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4, Add ip_version to AddressScope       
 INFO  [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664, Add tables and attributes to support external DNS integration       
 INFO  [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5, add_unique_ha_router_agent_port_bindings       
 INFO  [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f, Auto Allocated Topology - aka Get-Me-A-Network       
 INFO  [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821, add dynamic routing model data       
 INFO  [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4, add_bgp_dragent_model_data       
 INFO  [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81, rbac_qos_policy       
 INFO  [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6, Add resource_versions row to agent table       
 INFO  [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532, tag support       
 INFO  [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f, add_timestamp_to_base_resources       
 INFO  [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a, Add desc to standard attr table       
 INFO  [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99, Initial no-op Liberty contract rule.       
 INFO  [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada, network_rbac       
 INFO  [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016, Drop legacy OVS and LB plugin tables       
 INFO  [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3, Metaplugin removal       
 INFO  [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d, Add missing foreign keys       
 INFO  [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d, add geneve ml2 type driver       
 INFO  [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297, Drop cisco monolithic tables       
 INFO  [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c, Drop embrane plugin table       
 INFO  [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39, standardattributes migration       
 INFO  [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b, DVR sheduling refactoring       
 INFO  [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050, Drop NEC plugin tables       
 INFO  [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9, rbac_qos_policy       
 INFO  [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada, network_rbac_external       
 INFO  [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc, standard_desc       
   OK


17. Restart Nova and start the Neutron services on the controller node

Restart the nova-api service on the controller node:

[root@linux-node1 ~]# systemctl restart openstack-nova-api.service
Start the following Neutron services and enable them at boot:
 [root@linux-node1 ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service       
 Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.       
 Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.serv       
 Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.       
 Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.       
[root@linux-node1 ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

The official documentation also mentions the following; we do not need it here, so skip it (the original marks it with strikethrough).

For networking option 2, you would also enable the layer-3 service and set it to start at boot:

# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service



Check the listening sockets; port 9696 (neutron-server) is now present:

[root@linux-node1 ~]# netstat -nltp       
 Active Internet connections (only servers)       
 Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name           
 tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      1874/mysqld                
 tcp        0      0 0.0.0.0:11211           0.0.0.0:*               LISTEN      910/memcached              
 tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      4019/python2               
 tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      912/httpd                  
 tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd                  
 tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      997/sshd                   
 tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      898/beam                   
 tcp        0      0 0.0.0.0:35357           0.0.0.0:*               LISTEN      912/httpd                  
 tcp        0      0 0.0.0.0:9696            0.0.0.0:*               LISTEN      6659/python2               
 tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN      4569/python2               
 tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      6592/python2               
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      6592/python2               
 tcp        0      0 0.0.0.0:9191            0.0.0.0:*               LISTEN      4020/python2               
 tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      898/beam                   
 tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      912/httpd                  
 tcp6       0      0 :::22                   :::*                    LISTEN      997/sshd                   
 tcp6       0      0 :::5672                 :::*                    LISTEN      898/beam


18. Create the service entity and register endpoints on the controller node

Create the service and register its endpoints in Keystone.

Create the neutron service entity:
 [root@linux-node1 ~]# source admin-openstack.sh       
 [root@linux-node1 ~]# openstack service create --name neutron  --description "OpenStack Networking" network       
 +-------------+----------------------------------+       
 | Field       | Value                            |       
 +-------------+----------------------------------+       
 | description | OpenStack Networking             |       
 | enabled     | True                             |       
 | id          | c868d49bb40e4d679c2dca6f3f0d1663 |       
 | name        | neutron                          |       
 | type        | network                          |       
 +-------------+----------------------------------+       
       

Create the Networking service API endpoints.

Create the public endpoint:


[root@linux-node1 ~]# openstack endpoint create --region RegionOne network public http://192.168.1.2:9696       
 +--------------+----------------------------------+       
 | Field        | Value                            |       
 +--------------+----------------------------------+       
 | enabled      | True                             |       
 | id           | a0e23a10c72b43d69195c14361a1d0d3 |       
 | interface    | public                           |       
 | region       | RegionOne                        |       
 | region_id    | RegionOne                        |       
 | service_id   | c868d49bb40e4d679c2dca6f3f0d1663 |       
 | service_name | neutron                          |       
 | service_type | network                          |       
 | url          | http://192.168.1.2:9696          |       
 +--------------+----------------------------------+       

Create the internal endpoint:
 [root@linux-node1 ~]# openstack endpoint create --region RegionOne  network internal http://192.168.1.2:9696       
 +--------------+----------------------------------+       
 | Field        | Value                            |       
 +--------------+----------------------------------+       
 | enabled      | True                             |       
 | id           | 3caa998f60f34ebdae00fa1a843f7dc8 |       
 | interface    | internal                         |       
 | region       | RegionOne                        |       
 | region_id    | RegionOne                        |       
 | service_id   | c868d49bb40e4d679c2dca6f3f0d1663 |       
 | service_name | neutron                          |       
 | service_type | network                          |       
 | url          | http://192.168.1.2:9696          |       
 +--------------+----------------------------------+

Create the admin endpoint:

[root@linux-node1 ~]# openstack endpoint create --region RegionOne  network admin http://192.168.1.2:9696    
 +--------------+----------------------------------+    
 | Field        | Value                            |    
 +--------------+----------------------------------+    
 | enabled      | True                             |    
 | id           | 5c56915f077447d09c06905bcfb7069a |    
 | interface    | admin                            |    
 | region       | RegionOne                        |    
 | region_id    | RegionOne                        |    
 | service_id   | c868d49bb40e4d679c2dca6f3f0d1663 |    
 | service_name | neutron                          |    
 | service_type | network                          |    
 | url          | http://192.168.1.2:9696          |    
 +--------------+----------------------------------+


Verify: the three agents below should be listed, and the smiley face in the alive column means they are up:


[root@linux-node1 ~]# neutron agent-list    
 +--------------------------------+--------------------+---------------------+-------------------+-------+----------------+---------------------------+    
 | id                             | agent_type         | host                | availability_zone | alive | admin_state_up | binary                    |    
 +--------------------------------+--------------------+---------------------+-------------------+-------+----------------+---------------------------+    
 | 8beadbbc-                      | Linux bridge agent | linux-node1.shi.com |                   | :-)   | True           | neutron-linuxbridge-agent |    
 | d8f0-4a07-8036-eb4d0bf4b563    |                    |                     |                   |       |                |                           |    
 | 9a486841-382c-                 | DHCP agent         | linux-node1.shi.com | nova              | :-)   | True           | neutron-dhcp-agent        |    
 | 49f9-a1d5-7c8dd752ec5d         |                    |                     |                   |       |                |                           |    
 | ead97fe2-90be-4516-a120-8c21ad | Metadata agent     | linux-node1.shi.com |                   | :-)   | True           | neutron-metadata-agent    |    
 | 6b10f9                         |                    |                     |                   |       |                |                           |    
 +--------------------------------+--------------------+---------------------+-------------------+-------+----------------+---------------------------+

Install and configure Neutron on the compute node

In early releases nova-compute talked to the database directly, which meant that if any compute node was compromised, the entire database was at risk. nova-conductor was introduced later to sit in the middle and broker that access.

1. Install the packages

[root@linux-node2 ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y

The compute node needs two files changed: one for the common components and one for the networking options.

Common components
The common component configuration covers the authentication mechanism, message queue, and plug-in:
/etc/neutron/neutron.conf

Networking options
Configure the Linux bridge agent:
/etc/neutron/plugins/ml2/linuxbridge_agent.ini

Reference documentation:
https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-compute-install-option1.html

Because the compute node's Neutron configuration is similar to the controller's, copy the controller's file over and adjust it:

[root@linux-node1 ~]# scp -p /etc/neutron/neutron.conf 192.168.1.3:/etc/neutron/

2. Adjust the configuration on the compute node

Remove the MySQL settings by commenting out the connection line:

[database]
#connection =

Delete the following [nova] section settings:

[nova]
auth_url = http://192.168.1.2:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

Comment out these four lines:
#notify_nova_on_port_status_changes = true
#notify_nova_on_port_data_changes = true
#core_plugin = ml2
#service_plugins =

3. Check the result
 [root@linux-node2 neutron]# grep -n '^[a-Z]' /etc/neutron/neutron.conf     
 2:auth_strategy = keystone     
 515:rpc_backend = rabbit     
 767:auth_uri = http://192.168.1.2:5000     
 768:auth_url = http://192.168.1.2:35357     
 769:memcached_servers = 192.168.1.2:11211     
 770:auth_type = password     
 771:project_domain_name = default     
 772:user_domain_name = default     
 773:project_name = service     
 774:username = neutron     
 775:password = neutron     
 1042:lock_path = /var/lib/neutron/tmp     
 1061:rabbit_host = 192.168.1.2     
 1062:rabbit_userid = openstack     
 1063:rabbit_password = openstack     
 1216:rabbit_port = 5672


4. Update the main Nova configuration on the compute node

Edit /etc/nova/nova.conf and complete the following.
In the [neutron] section, configure the access parameters:

[neutron]
...
url = http://192.168.1.2:9696
auth_url = http://192.168.1.2:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron


5. Configure the Linux bridge agent on the compute node

(1) The Linux bridge agent builds layer-2 virtual networking infrastructure for instances and handles security group rules.
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following.
In the [linux_bridge] section, map the public virtual network to the public physical network interface:

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

Replace PROVIDER_INTERFACE_NAME with the name of the underlying physical interface.

(2) In the [vxlan] section, disable VXLAN overlay networks:

[vxlan]
enable_vxlan = False

(3) In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Since all three settings are identical to the controller node's, simply copy the controller's file over:
[root@linux-node1 ~]# scp -r /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.1.3:/etc/neutron/plugins/ml2/

6. Check the linuxbridge_agent configuration file on the compute node

[root@linux-node2 neutron]# grep -n '^[a-Z]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini       
 128:physical_interface_mappings = public:ens33       
 156:enable_security_group = true       
 157:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver       
 165:enable_vxlan = false

7. Restart Nova and start the Neutron service

Because the main Nova configuration file changed, restart the Nova service.

Also start the Neutron agent and enable it at boot:

[root@linux-node2 neutron]# systemctl restart openstack-nova-compute.service       
 [root@linux-node2 neutron]# systemctl enable neutron-linuxbridge-agent.service       
 Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.       
 [root@linux-node2 neutron]# systemctl start neutron-linuxbridge-agent.service


8. Verify from the controller node

An additional Linux bridge agent for the compute node now appears:


[root@linux-node1 ~]# source admin-openstack.sh       
 [root@linux-node1 ~]# neutron agent-list       
 +--------------------------------+--------------------+---------------------+-------------------+-------+----------------+---------------------------+       
 | id                             | agent_type         | host                | availability_zone | alive | admin_state_up | binary                    |       
 +--------------------------------+--------------------+---------------------+-------------------+-------+----------------+---------------------------+       
 | 0ebb213b-4933-4a34-be61-2aeeb4 | DHCP agent         | linux-node1.shi.com | nova              | :-)   | True           | neutron-dhcp-agent        |       
 | 6574a6                         |                    |                     |                   |       |                |                           |       
 | 4677fa97-6569-4ab1-a3db-       | Linux bridge agent | linux-node1.shi.com |                   | :-)   | True           | neutron-linuxbridge-agent |       
 | 71d5736b40fb                   |                    |                     |                   |       |                |                           |       
 | 509da84b-                      | Linux bridge agent | linux-node2.shi.com |                   | :-)   | True           | neutron-linuxbridge-agent |       
 | 8bd3-4be0-9688-94d45225c3c0    |                    |                     |                   |       |                |                           |       
 | 5ec0f2c1-3dd3-40ba-            | Metadata agent     | linux-node1.shi.com |                   | :-)   | True           | neutron-metadata-agent    |       
 | a42e-e53313864087              |                    |                     |                   |       |                |                           |       
 +--------------------------------+--------------------+---------------------+-------------------+-------+----------------+---------------------------+


The mapping below can be thought of as giving the NIC an alias that identifies its purpose.

The interface name in the configuration must match the actual physical NIC on the host (here ens33); in other words, keep the configuration file in sync with the real interface name.


[root@linux-node2 ~]# grep physical_interface_mappings /etc/neutron/plugins/ml2/linuxbridge_agent.ini       
 physical_interface_mappings = public:ens33
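The value of physical_interface_mappings is a comma-separated list of label:interface pairs, which the agent turns into a lookup table. A rough sketch of that parsing, illustrative only and not Neutron's implementation:

```python
def parse_mappings(value):
    """Turn 'public:ens33,private:ens34' into {'public': 'ens33', ...}."""
    mappings = {}
    for pair in value.split(","):
        label, interface = pair.split(":", 1)
        mappings[label.strip()] = interface.strip()
    return mappings

# The label ('public') must match flat_networks in ml2_conf.ini;
# the interface ('ens33') must be a real NIC on that host.
print(parse_mappings("public:ens33"))   # {'public': 'ens33'}
```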