AJ NOURI

June 9, 2015

Here is a step-by-step example of how to run the FOSS Quagga routing suite in a container and use it with GNS3 topologies.

In a container, Quagga runs as a process with its own root filesystem and dependencies, without the overhead of an entire virtual machine.


This lab does not require prior knowledge of Docker or pipework (both are hidden behind bash scripts); nevertheless, I recommend watching this short introduction: What is Docker? - YouTube, and looking at the references at the end of the post.



Lab Requirements

1. Docker (easy to install)

docker -v

Docker version 1.6.2, build 7c8fca2


2. pipework, a simple yet powerful bash script for advanced Docker networking

sudo bash -c "curl https://raw.githubusercontent.com/jpetazzo/pipework/master/pipework > /usr/local/bin/pipework"

Make the script executable:

sudo chmod +x /usr/local/bin/pipework

Syntax:

pipework <hostinterface> [-i containerinterface] <guest> <ipaddr>/<subnet>[@default_gateway] [macaddr][@vlan]

pipework <hostinterface> [-i containerinterface] <guest> dhcp [macaddr][@vlan]

pipework --wait [-i containerinterface]


lxterminal is used to open the quagga terminal in a new window, so you can get back to the main one and run the subsequent commands.

You can install it with: sudo apt-get install lxterminal



Download the scripts

mkdir ~/GNS3/projects/quagga-gns3

cd  ~/GNS3/projects/quagga-gns3

sudo git clone https://github.com/AJNOURI/Quagga_docker_gns3

cd Quagga_docker_gns3


Build the quagga Docker image (Dockerfile)

sudo docker build -t quagga .
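
Once the build completes, you can confirm the image is available; a "quagga" repository should appear in the list:

sudo docker images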



Manage quagga container

This script allows you to run/stop/delete/attach a container, given its name and the previously built image tag, and to configure bridge interfaces to be bound to GNS3.

./startquagga.sh {quagga_image_tag} {container_name}

Example:

./startquagga.sh quagga quagga1

The script will open a new terminal window for the quagga container.

Answer "y" to continue to container networking as follows:



Quagga networking...

Continue? [Yy] [Nn]y

Host bridge interface => br2

Container: interface (a new one) connected to host bridge => eth1

Container: interface IP => 192.168.12.1

Container: interface IP mask => 24

Container: interface next-hop IP (GNS3) => 192.168.12.100

command: >> sudo pipework br2 -i eth1 a33b02f03cbf 192.168.12.1/24@192.168.12.100 << successfully executed.

Would you like to continue with network configuration? [Yy] [Nn]  y

Host bridge interface => br3

Container: interface (a new one) connected to host bridge => eth2

Container: interface IP => 192.168.13.1

Container: interface IP mask => 24

Container: interface next-hop IP (GNS3) => 192.168.13.100

command: >> sudo pipework br3 -i eth2 a33b02f03cbf 192.168.13.1/24@192.168.13.100 << successfully executed.

Would you like to continue with network configuration? [Yy] [Nn]  y

Host bridge interface => br4

Container: interface (a new one) connected to host bridge => eth3

Container: interface IP => 192.168.14.1

Container: interface IP mask => 24

Container: interface next-hop IP (GNS3) => 192.168.14.100

command: >> sudo pipework br4 -i eth3 a33b02f03cbf 192.168.14.1/24@192.168.14.100 << successfully executed.

Would you like to continue with network configuration? [Yy] [Nn]  n

ajn:~/GNS3/projects/quagga-gns3/Quagga_docker_gns3$
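
If you want to double-check on the host that the bridges were created (names follow the session above, assuming the standard iproute2 tools are installed):

ip link show br2
ip link show br3
ip link show br4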



Note:

  • Closing the running quagga container terminal will stop the container.
  • To push it to the background (daemon mode), press CTRL+P then CTRL+Q from the quagga terminal; to re-attach later, see the example after this list.
  • A stopped container will lose its pipework networking configuration, so when restarting a stopped container (from the startquagga.sh script), redo the networking settings.
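
To re-attach to a container that was pushed to the background, use docker attach with the container name (quagga1 in this example):

sudo docker attach quagga1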



GNS3 topology


Configure 3 devices, each connected to a bridge interface. Make sure the bridge interfaces were created in the previous step (./startquagga.sh).


Here is an example of a topology connected to the previously created bridge interfaces through a cloud:


Configure the other router devices accordingly (IP addresses and OSPF):

Router connected to br2 (192.168.12.100):

router ospf 234

int e0/0
  ip addr 192.168.12.100 255.255.255.0
  ip ospf 234 area 0

int loo1
  ip addr 20.0.0.2 255.255.255.0
  ip ospf 234 area 0
  ip ospf network point-to-point


Router connected to br3 (192.168.13.100):

router ospf 234

int e0/0
  ip addr 192.168.13.100 255.255.255.0
  ip ospf 234 area 0

int loo1
  ip addr 30.0.0.3 255.255.255.0
  ip ospf 234 area 0
  ip ospf network point-to-point


Router connected to br4 (192.168.14.100):

router ospf 234

int e0/0
  ip addr 192.168.14.100 255.255.255.0
  ip ospf 234 area 0

int loo1
  ip addr 40.0.0.4 255.255.255.0
  ip ospf 234 area 0
  ip ospf network point-to-point


Check basic connectivity between quagga container and GNS3 topology through bridge interfaces

Because the quagga container IP addresses are already configured, you should be able to ping them from the Cisco devices.
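
For example, from the router connected to br2 (the one addressed 192.168.12.100):

ping 192.168.12.1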


From the quagga container terminal, ping the next hops (the IOU router interfaces):
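
For example, using the next-hop addresses entered during the pipework step:

ping 192.168.12.100
ping 192.168.13.100
ping 192.168.14.100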


Check basic connectivity between docker host and quagga container through docker0 interface


From the quagga container terminal, check the default interface IP:
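
For example (assuming the iproute2 tools are present in the image; the address itself is assigned by Docker, typically from 172.17.0.0/16):

ip addr show eth0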

The Docker daemon manages the container through the docker0 interface.




SSH to quagga container

Though we can configure Quagga from the already-open terminal, for demo purposes let's do it through an SSH session.

The base image I am using for quagga is built specifically to run multi-process Docker containers (with sshd enabled during the build).

We will use public-key authentication for SSH.


Copy your host public key to the current directory:

cp /home/ajn/.ssh/id_rsa.pub id_rsa.pub

Or you can generate a dedicated key pair in the current directory for this purpose:

$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa): ./id_rsa     

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in ./id_rsa.

Your public key has been saved in ./id_rsa.pub.

The key fingerprint is:

b7:94:14:94:1e:ac:9a:47:f8:94:3f:b5:3f:b8:d8:34 ajn@gns3-iouvm

The key's randomart image is:

+--[ RSA 2048]----+

|         oo.     |

|          +.     |

|       . +..     |

|      . =....    |

|       *S.+. .   |

|      o ooo..    |

|       .  ..Eo   |

|           +..o  |

|          . o. . |

+-----------------+

You will find id_rsa (private key) and id_rsa.pub (public key) in the current directory.


From the quagga console, enable sshd:

/usr/sbin/sshd



From your host ssh to the container eth0 (default interface)
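
For example, assuming the container's docker0-side address is 172.17.0.2 and the image accepts root logins with the key generated above (adjust both to your setup):

ssh -i ./id_rsa root@172.17.0.2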



Quagga CLI through SSH session

Start the Quagga daemons:

/etc/init.d/quagga start


Connect to the unified (all routing daemons) CLI

vtysh



Example of Quagga configuration: OSPF

conf t

router ospf

net 192.168.12.0/24 area 0

net 192.168.13.0/24 area 0

net 192.168.14.0/24 area 0
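
Once the IOU routers are configured with matching OSPF parameters, you can verify adjacencies and learned routes from vtysh (output depends on your topology):

show ip ospf neighbor
show ip route ospf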


Reachability check

From the quagga container:
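
For example, once OSPF has converged, the loopbacks advertised by the IOU routers should be reachable from the container:

ping 20.0.0.2
ping 30.0.0.3
ping 40.0.0.4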



From IOU3 to IOU2 and IOU4 through the quagga router container:
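
For example, from IOU3 (a sketch; adjust the source interface to your addressing):

ping 20.0.0.2 source Loopback1
ping 40.0.0.4 source Loopback1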





Persistent quagga configuration (even through container deletion)

During the image build, the quagga configuration directory is mapped to a host volume. This volume remains available even if the container itself is stopped or deleted.

It can be used, for example, to back up the quagga configuration.


From the Docker host:

sudo ./confdir quagga1

/var/lib/docker/vfs/dir/45eefa1a46bd3b092ca5b5a819c8e4fbac4a9bb749f8bedc4e573a63f42864de


sudo ls /var/lib/docker/vfs/dir/45eefa1a46bd3b092ca5b5a819c8e4fbac4a9bb749f8bedc4e573a63f42864de

babeld.conf  bgpd.conf daemons  debian.conf  isisd.conf  ospf6d.conf  ospfd.conf  ripd.conf  ripngd.conf  vtysh.conf  zebra.conf
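
For example, to back up the configuration directory of quagga1 (assuming, as shown above, that ./confdir prints the volume path):

sudo cp -a "$(sudo ./confdir quagga1)" ~/quagga1-config-backup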




GUI Monitoring and management (optional)

The script downloads and launches a “cadvisor” container for monitoring and a “Seagull” container for graphically managing the created containers:


sudo ./qmonitor.sh

The first time it runs, it will download both images and start the containers:
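
You can confirm that both containers are up with:

sudo docker ps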


For “cadvisor”, browse to:

127.0.0.1:8080

For “Seagull”, browse to:

127.0.0.1:10086


Look at quagga performance:


CPU = 0.006%

RAM = 22.27 MB

Disk space = 82 KB







The startquagga.sh script leverages the following docker commands (a manual end-to-end example follows the list):

  • sudo docker images : list the built images
  • sudo docker ps -a : list all containers
  • sudo docker run -t -i --privileged=true --name {container_name} {image_ID} /bin/bash : spawn a container from a given built image tag and assign it a name
  • pipework {bridge} -i {interface} {running_container_ID} {IP/MASK}@{gateway} : create a new interface in the running container, bind it to a host bridge interface, and assign it an IP/mask and a gateway (the last configured gateway takes precedence)
  • sudo docker stop {running_container_ID} : stop a running container
  • sudo docker rm {stopped_container_ID} : delete a stopped container
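
Put together, a manual run without the helper script might look like this (container name, bridge names, and addresses taken from the example above):

sudo docker build -t quagga .
sudo docker run -t -i --privileged=true --name quagga1 quagga /bin/bash
# then, from another host terminal, wire the running container to the GNS3 bridges:
sudo pipework br2 -i eth1 quagga1 192.168.12.1/24@192.168.12.100
sudo pipework br3 -i eth2 quagga1 192.168.13.1/24@192.168.13.100
sudo pipework br4 -i eth3 quagga1 192.168.14.1/24@192.168.14.100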




Some Docker/pipework references: