overview
building a virtualized network topology from the most basic elements is as simple as starting a virtual machine with the necessary number of virtualized interfaces and interconnecting these interfaces to other virtual machines or physical interfaces.
while there are tools1 which will nicely automate the creation of topologies and handle the lifecycle of the VMs, all of these are effectively placing a nice wrapper around the following process.
virtual machine preparation
in order to optimize the virtual images it's preferable to convert the disk images into qcow2 images, which can utilize an overlay filesystem to save disk space and simplify router image setup.
convert vmdk images to qcow2 images
qemu-img convert -O qcow2 router-image.vmdk router-image.qcow2
individual router disk images should use an overlay file system utilizing the base image which has been converted to qcow2 format.
qemu-img create -b router-image.qcow2 -f qcow2 r1-image.qcow2
where r1-image.qcow2 is the image to be used for the virtualized router, backed by the base image router-image.qcow2.
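if several routers are going to be based on the same base image, a small loop along the following lines (the r1-r3 names are purely illustrative) will stamp out the per-router overlays:
# create one overlay image per router from the shared qcow2 base image
# (r1-r3 are illustrative router names; adjust to match your topology)
for r in r1 r2 r3; do
  qemu-img create -b router-image.qcow2 -f qcow2 ${r}-image.qcow2
done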
vm startup
after the overlay file systems for the virtual routers have been created, the virtual machines need to be started with the appropriate virtualization parameters as well as the necessary interfaces to interact in the virtualized topology.
the following describes the KVM invocation for a cisco IOS-XRv router image as well as for a vEOS image and the accompanying interfaces.
these are routers R1 and R2 in the sample foolab topology.
NB: each of these is a single command; as such, line continuation characters (\) are used. the interspersed # comment lines are annotations only and will break the continuation if pasted verbatim, so strip them before running these commands.
the following creates an IOS-XRv guest VM with 6 interfaces: 3 management interfaces and 3 data plane or “front panel” interfaces. this is an oddity of the XRv-9K image that is being run. (just roll with it.)
each interface is given a unique MAC address. i prefer to encode the router id of the topology into the MAC address to make configuration and troubleshooting easier. this is admittedly a personal preference.
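to illustrate the idea (the octet layout here is arbitrary and just a sketch of my convention, not a requirement), a MAC embedding the router id and interface index can be generated along these lines:
# sketch: embed the router id and the interface index in the MAC address
# the octet positions used here are a personal convention, nothing more
router_id=1
for intf in 1 2 3; do
  printf '00:01:%02x:ff:66:%02x\n' "${router_id}" "${intf}"
done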
veos sample
the following is a sample invocation for a vEOS image on KVM. there are a few different components that go into this.
we must use the -cdrom parameter to point to the appropriate Aboot image. this also requires that we boot to the cdrom first, which will do the appropriate thing when it comes to booting the system. see below for the relevant parameters.
sudo qemu-system-x86_64 \
# hard drive file - created through the overlay filesystem process above
-hda ${HOME}/topo/images/r2-veos-01.qcow2 \
# pointer to the Aboot iso image
-cdrom ${HOME}/topo/images/Aboot-veos-serial-8.0.0.iso \
-pidfile ${HOME}/topo/pids/r2.pid \
-boot d \
-m 8G \
-enable-kvm \
-display none \
-rtc base=utc \
-name veos:r2 \
-daemonize \
-serial telnet::9102,server,nowait \
-netdev tap,id=r2h1,ifname=sulrichR2Mgmt1,script=no,downscript=no \
-device virtio-net-pci,romfile=,netdev=r2h1,id=r2h1,mac=00:02:00:ff:66:01 \
-netdev tap,id=r2e1,ifname=sulrichR2Eth1,script=no,downscript=no \
-netdev tap,id=r2e2,ifname=sulrichR2Eth2,script=no,downscript=no \
-netdev tap,id=r2e3,ifname=sulrichR2Eth3,script=no,downscript=no \
-device e1000,netdev=r2e1,id=r2e1,mac=00:01:02:ff:66:01 \
-device e1000,netdev=r2e2,id=r2e2,mac=00:01:02:ff:66:02 \
-device e1000,netdev=r2e3,id=r2e3,mac=00:01:02:ff:66:03 &
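since the guest is daemonized, the serial console bound above (port 9102) is the way to watch Aboot and the vEOS boot process, and the pidfile makes teardown simple. for example:
# attach to the vEOS serial console (port 9102 per the -serial parameter above)
telnet localhost 9102
# tear the guest down later using the pidfile written at startup
sudo kill $(cat ${HOME}/topo/pids/r2.pid)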
IOS-XRv sample
sudo qemu-system-x86_64 \
# misc. BIOS params for this VM
-smbios type=1,manufacturer="cisco",product="Cisco IOS XRv 9000" \
# virtual hard drive - pointer to the overlay file system created previously
-drive file=${HOME}/topo/images/r1-iosxrv.qcow2,if=virtio,media=disk,index=1 \
# where the process ID of the VM should be put, makes killing these easy
-pidfile ${HOME}/topo/pids/r1.pid \
-cpu host \
# RAM to allocate to the VM
-m 8G \
# more VM CPU parameters
-smp cores=4,threads=1,sockets=1 \
-enable-kvm \
-daemonize \
-display none \
-rtc base=utc \
# name of the guest VM, also used for VNC server
-name IOS-XRv-9000:r1 \
# binds the serial console to localhost port 9101 - telnet to this to watch the
# guest come online and for initial configuration
-serial telnet::9101,server,nowait \
# network interface configurations
-netdev tap,id=r1h1,ifname=sulrichR1Lx1,script=no,downscript=no \
-netdev tap,id=r1h2,ifname=sulrichR1Lx2,script=no,downscript=no \
-netdev tap,id=r1h3,ifname=sulrichR1Lx3,script=no,downscript=no \
-device virtio-net-pci,romfile=,netdev=r1h1,id=r1h1,mac=00:01:00:ff:66:01 \
-device virtio-net-pci,romfile=,netdev=r1h2,id=r1h2,mac=00:01:00:ff:66:02 \
-device virtio-net-pci,romfile=,netdev=r1h3,id=r1h3,mac=00:01:00:ff:66:03 \
-netdev tap,id=r1g0,ifname=sulrichR1Xr0,script=no,downscript=no \
-netdev tap,id=r1g1,ifname=sulrichR1Xr1,script=no,downscript=no \
-netdev tap,id=r1g2,ifname=sulrichR1Xr2,script=no,downscript=no \
-device virtio-net-pci,romfile=,netdev=r1g0,id=r1g0,mac=00:01:01:ff:66:00 \
-device virtio-net-pci,romfile=,netdev=r1g1,id=r1g1,mac=00:01:01:ff:66:01 \
-device virtio-net-pci,romfile=,netdev=r1g2,id=r1g2,mac=00:01:01:ff:66:02 &
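as with the vEOS guest, the serial console (port 9101 here) is where you'll watch the XRv-9K come online; it's also worth confirming that the tap interfaces named via the ifname= parameters were actually created on the host. for example:
# attach to the IOS-XRv serial console for boot/initial configuration
telnet localhost 9101
# confirm the tap interfaces from the ifname= parameters exist on the host
ip -br link show | grep sulrichR1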
guest VM network configurations
linux tap interfaces must be used as you're going to be passing raw ethernet frames between guest VMs in the topology. the qemu-system-x86_64 command will automatically create the interfaces based on the ifname parameter that you provide. in the following example ifname=sulrichR1Lx1 results in a tap interface which can be interrogated using the standard iproute2 tools. (see sample output below)
in the following example we're not running an interface start-up script or an interface down script; these are set to no. we specify the id= parameter to associate the netdev with the corresponding -device parameter, where we specify the mac address and the remaining interface parameters.
note: at this time the interface is not actually associated with a peer interface such that traffic is going to pass; we're simply creating the VM interfaces and giving them usable handles.
...
-netdev tap,id=r1h1,ifname=sulrichR1Lx1,script=no,downscript=no \
-device virtio-net-pci,romfile=,netdev=r1h1,id=r1h1,mac=00:01:00:ff:66:01 \
...
sample stats output
-d for detailed output, -s -s for detailed stats output (more than one -s provides additional detail.)
sulrich@dork-server2:~$ ip -d -s -s link show dev sulrichR1Lx1
50: sulrichR1Lx1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 9e:4c:f2:92:0b:f0 brd ff:ff:ff:ff:ff:ff promiscuity 1
tun
openvswitch_slave addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
RX: bytes packets errors dropped overrun mcast
17447 145 0 0 0 0
RX errors: length crc frame fifo missed
0 0 0 0 0
TX: bytes packets errors dropped carrier collsns
55791 679 0 0 0 0
TX errors: aborted fifo window heartbeat transns
0 0 0 0 0
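once traffic should be flowing, refreshing these counters is a quick way to confirm that frames are actually hitting a given tap interface (assuming watch is available on the host). for example:
# refresh the interface counters every second
watch -n1 ip -s link show dev sulrichR1Lx1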
interface plumbing
dork-server2 is configured to use openvswitch as its default virtual switching layer, in lieu of linux bridges. this makes a few things easier, but might be a bit different if you're accustomed to brctl command operations.
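for orientation, the rough ovs-vsctl equivalents of the usual brctl show workflow look something like the following (mgmt0 is the bridge created in the next section):
# list the OVS bridges on the host (roughly brctl show territory)
sudo ovs-vsctl list-br
# dump the full bridge/port layout
sudo ovs-vsctl show
# list the ports attached to a specific bridge
sudo ovs-vsctl list-ports mgmt0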
in the following example we're creating a bridge named mgmt0 on the host system and adding guest VM interfaces to this bridge domain. the ovs-vsctl command is used to create vswitch instances/bridges and assign the interfaces into the bridge.
OVS bridges present to the linux host OS as interfaces which can be manipulated through the use of the iproute2 / ip commands.
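for example, once the mgmt0 bridge from the following section exists, it can be inspected like any other host interface:
# an OVS bridge shows up as a normal interface on the host
ip -br link show dev mgmt0
ip -br address show dev mgmt0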
management network access
i prefer to create a management network to which all of the guest VMs' management interfaces are attached; this is typically the first interface on the guest VM. the following demonstrates the creation of a management network such that the VMs can go through their initial boot process and have reachability to the host OS for configuration, etc.
# management bridge (mgmt0 - 10.0.66.254/24)
sudo ovs-vsctl add-br mgmt0
# add the management interfaces of the guest VMs to the mgmt0 bridge
sudo ovs-vsctl add-port mgmt0 sulrichR1Lx1
sudo ovs-vsctl add-port mgmt0 sulrichR2Mgmt1
...
# bring the interfaces up
sudo ip link set sulrichR1Lx1 up
sudo ip link set sulrichR2Mgmt1 up
...
# add an ip address to the mgmt0 interface to enable us to access the management
# interfaces of the guest VMs when they come online.
sudo ip address add 10.0.66.254/24 dev mgmt0
sudo ip link set mgmt0 up
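once a guest has booted and been configured with an address in the management subnet (10.0.66.1/24 on R1's management interface is assumed here purely for illustration), reachability from the host is easy to confirm:
# assumes R1's management interface was configured as 10.0.66.1/24 inside the guest
ping -c 3 10.0.66.1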
data-plane (front-panel) interface configuration
creation of separate bridges for interconnecting the guest VMs is a simple matter of repeating this process for each interconnection.
here's the interconnection between R1-Xr0 (aka g0/0/0) and R2-Eth1 in the foolab sample topology.
# net100: r1-Xr0 - r2-Eth1
sudo ovs-vsctl add-br net100
sudo ip link set sulrichR1Xr0 up
sudo ip link set sulrichR2Eth1 up
sudo ovs-vsctl add-port net100 sulrichR1Xr0
sudo ovs-vsctl add-port net100 sulrichR2Eth1
sudo ip link set net100 up
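a quick sanity check that both guest interfaces actually landed in the bridge:
# confirm the ports attached to the net100 bridge
sudo ovs-vsctl list-ports net100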
1. examples of topology building tools (http://www.eve-ng.net/, https://www.gns3.com/, http://virl.cisco.com)