This vQFX platform is supposed to be used with Vagrant and ships with a few nice fabrics that spin up on the go. But there are a few problems with that, which may not be instantly obvious if you plan on using it for testing or QA in your NetOps department:
- This Vagrant configuration relies on, and only supports, virtualization via VirtualBox
  - So far, no plans have been announced to switch from VirtualBox to another out-of-the-box solution, so you'll have to hack this up a bit. libvirt and KVM/QEMU make this relatively simple, but you have to get familiar with their tools (again).
- vQFX comes as two "VMs": a Routing Engine (RE) and a Packet Forwarding Engine (PFE)
  - This mirrors how the hardware is actually set up and working on a real bare-metal switch or router (same story for vMX). The ASICs are supplied in the form of shim kernel modules, which makes it possible to unit-test more advanced features or do functional testing before deploying on real (ideally lab) hardware.
The following shell input describes what I did to get my vQFX QCOW2 images up and running with QEMU (if you have downloaded other images, this may not work; VMDK might, others may not). You might want to further extend the virtual network (see virsh help network). Or, like myself, integrate with Docker and GitLab Runner processes to automate CI/CD pipelines for linting, testing, QA on virtualized "hw" and deployment to your real datacenter fabric.
Various testing tools have been around for a while. I found the ansible-lint and ansible-runner images to be helpful, as well as the SAST-IaC and Secret-Detection CI templates provided by GitLab. Juniper has an account worth checking out on Docker Hub, if you're into that: juniper/pyez-ansible and juniper/jsnapy may help you test your infrastructure or Ansible code quality. Tools like PyEZ or Batfish enable in-depth network simulation and testing. You might want to take a look at this article if you're here for continuous delivery in NetOps/NOC environments: https://www.linkedin.com/pulse/using-gitlab-runners-network-pipelines-jorge-romero/
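To give you an idea of how that looks in practice, here's roughly how the juniper/pyez-ansible image can run a playbook against the virtual fabric; the mount point, inventory and playbook names below are placeholders of mine, so adapt them to your repo layout:
# run an Ansible playbook from the Juniper-provided image (names are placeholders)
docker run -it --rm -v "$PWD":/playbooks juniper/pyez-ansible ansible-playbook -i inventory site.yml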
Here comes a lot of input (at your own risk; PLEASE have mercy and don't let a junior handle the entire setup unsupervised):
qemu-img convert -f qcow2 vqfx-20.2R1.10-re-qemu.qcow2 -O raw vqfx-20.2R1.10-re-qemu.raw
qemu-img convert -f qcow2 vqfx-20.2R1-2019010209-pfe-qemu.qcow -O raw vqfx-20.2R1-2019010209-pfe-qemu.raw
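Optionally sanity-check the conversions before touching libvirt; qemu-img info only reads the image headers:
# confirm format and virtual size of the converted images
qemu-img info vqfx-20.2R1.10-re-qemu.raw
qemu-img info vqfx-20.2R1-2019010209-pfe-qemu.raw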
virsh net-define /etc/libvirt/qemu/networks/dataplane.xml
virsh net-start dataplane
virsh net-autostart dataplane
# files attached to this gist
virsh net-define /etc/libvirt/qemu/networks/qfx-int.xml
virsh net-start qfx-int
virsh net-autostart qfx-int
# files attached to this gist
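If you don't have the gist attachments at hand, a minimal stand-in for these definitions could look like this (create the file before running the net-define calls above; the bridge names and the choice of an isolated network are my assumptions, not the contents of the actual attached files):
# [assumption] minimal stand-in for dataplane.xml; qfx-int.xml follows the
# same pattern (presumably with name qfx-int and bridge virbr2)
cat > /etc/libvirt/qemu/networks/dataplane.xml <<'EOF'
<network>
  <name>dataplane</name>
  <bridge name='virbr1' stp='off' delay='0'/>
</network>
EOF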
virt-install \
--name re-qfx10k-xxx \
--memory 1024 \
--vcpus=1 \
--import \
--disk /home/admaz/vqfx-20.2R1.10-re-qemu.raw,bus=ide,format=raw \
--network network=default,portgroup=mgmt,model=e1000 \
--network bridge=virbr2,model=e1000 \
--network network=default,portgroup=mgmt,model=e1000 \
--network bridge=virbr1,model=e1000 \
--graphics none
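virt-install with --import boots the guest and drops you on its serial console (that's what --graphics none is for). For the record: as far as I can tell the four NICs map, in order, to fxp0/management, the internal RE-PFE link, a reserved interface and the first revenue port; treat that mapping as my assumption, not gospel.
# reattach to the RE's serial console later on (escape with Ctrl+])
virsh console re-qfx10k-xxx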
virt-install \
--name pfe-qfx10k-xxx \
--memory 2048 \
--vcpus=1 \
--import \
--disk "/home/admaz/vqfx10k-pfe-20160609-2.raw",bus=ide,format=raw,size=2 \
--network network=default,portgroup=mgmt,model=e1000 \
--network bridge=virbr2,model=e1000 \
--graphics none
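Before wiring up autostart, it doesn't hurt to confirm both domains actually made it into libvirt:
# both the RE and PFE domains should show up here
virsh list --all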
virsh autostart re-qfx10k-xxx
virsh autostart pfe-qfx10k-xxx
virsh --connect qemu:///system start re-qfx10k-xxx
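Note that virsh autostart only takes effect when libvirtd itself (re)starts, so the PFE wants an explicit start too. Once both are up, the RE and PFE sync over the internal link, which in my experience takes a few minutes:
# start the PFE as well (skip if virt-install already left it running)
virsh --connect qemu:///system start pfe-qfx10k-xxx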
# [OPTIONAL] if you want to be able to connect from a docker container to your KVM/qemu VM:
docker network create --driver=macvlan --subnet=192.168.0.0/16 -o parent=virbr1 virt2docker
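To smoke-test the macvlan wiring, attach a throwaway container to it; the --ip below is a placeholder I picked from the subnet, so substitute addresses from your actual fabric:
# throwaway container on the macvlan network; ping a VM/RE address from inside
docker run --rm -it --network virt2docker --ip 192.168.100.10 alpine sh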
Feel free to leave questions. You can reach me at [email protected].