VMware Tanzu Kubernetes Grid and VMware Telco Cloud Automation
Welcome! This is a series of quick guides to setting up a dev/test/lab environment with Tanzu Kubernetes Grid and/or VMware Telco Cloud Automation.
This isn't a replacement for the official documentation but rather is a curated, streamlined set of "how tos" from several locations based on my experiences.
At least one VDS Portgroup for a combined Management & Data network.
Setting up Linux jumpbox Docker CE with HTTP or SOCKS proxy
For the Docker daemon, create a systemd configuration file /etc/systemd/system/docker.service.d/http-proxy.conf to set the proxy environment variables (an example follows the notes below)
change 192.168.1.5:8118 to your proxy host/port
change http to whatever protocol your proxy uses, e.g. socks5 or socks5h (remote DNS resolution over SOCKS)
172.17.0.0/16, 172.18.0.0/16, localhost, 127.0.0.1, and cluster.local are mandatory in NO_PROXY to cover internal Docker networks and/or Kubernetes kind cluster networks
change harbor.bigco.lab to the DNS you want for Harbor, if installing Harbor
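Putting the notes above together, a minimal http-proxy.conf might look like this (the proxy address, protocol, and Harbor FQDN are the example values from above; substitute your own):
[Service]
Environment="HTTP_PROXY=http://192.168.1.5:8118"
Environment="HTTPS_PROXY=http://192.168.1.5:8118"
Environment="NO_PROXY=localhost,127.0.0.1,172.17.0.0/16,172.18.0.0/16,cluster.local,harbor.bigco.lab"
After editing the file, reload systemd and restart Docker so the daemon picks up the new environment:
sudo systemctl daemon-reload
sudo systemctl restart docker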
listen-address=192.168.1.5 (replace with your jumpbox network address)
listen-address=127.0.0.1 (separate line)
dhcp-range=192.168.1.50,192.168.1.180,12h (change to the range of DHCP addresses you want dnsmasq to manage)
If you ever need to reserve a static IP, add:
dhcp-host=00:50:56:ab:6d:db,192.168.1.57 (MAC address, IP address)
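Putting these lines together, the relevant /etc/dnsmasq.conf fragment might look like this (addresses are the examples from above):
listen-address=192.168.1.5
listen-address=127.0.0.1
dhcp-range=192.168.1.50,192.168.1.180,12h
dhcp-host=00:50:56:ab:6d:db,192.168.1.57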
Restart dnsmasq after every config change
sudo systemctl restart dnsmasq
This doesn't set up dnsmasq for DNS; the Airgap guide will do that.
Tanzu Kubernetes Grid is now ready for cluster creation!
Now we'll set up NSX ALB (Avi) so that any clusters we create will get HA Service Type Load Balancers.
Installing NSX Advanced Load Balancer (Avi)
Tanzu includes the "Essentials" version of NSX ALB, which allows only Layer 4 (TCP/UDP) Kubernetes Service Type Load Balancers. This is an active/passive, VRRP-like HA load balancer (a minimal example Service follows this list).
There's also the "Basic" version of Avi if you are an NSX-T customer, which additionally allows for Ingress controllers (Layer 7 HTTP, virtual host routing, on top of a TCP Service Type Load Balancer) implemented and managed by Avi.
Full NSX ALB Enterprise supports multi-cloud, multi-cluster controllers, BGP ECMP scale-out load balancers, multi-cluster GSLB, etc.
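For context, what Avi serves on the Kubernetes side is a plain Service of type LoadBalancer; a minimal example (names are illustrative) looks like:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer   # Avi allocates and advertises the VIP for this Service
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080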
The vApp properties should be fairly self-explanatory management network settings
The Avi controller management IP should be a static IP, either via a DHCP reservation or by putting the network information in the vApp properties
Leave the key field in the template empty.
Setup the Avi Controller
Conceptual Explanation
Avi will create on-demand VMs, called Service Engines, to serve up traffic. By default this is an "N+M" buffered HA, which is like a more sophisticated version of an Active/Passive VRRP setup. Avi will expose a Virtual IP address for each Service Type Load Balancer using Layer 2 ARP/GARP, and perform some novel Layer 2 load balancing tricks among its service engine members to distribute traffic.
Avi can also be set up for BGP ECMP scale-out load balancing, though that's not discussed here.
Each Avi service engine is "two armed", i.e. it has a Management NIC and a Data NIC. These can be on the same network if you want. One is for the Avi Controller to configure the service engines, the other is for data traffic to be served.
Avi can rely on DHCP or its own static IPAM. The common practice is to use DHCP for the Management network and static IPAM for the data network (SEs and their VIPs).
In this guide we'll just assume one network for everything, and we'll carve out a non-DHCP managed range for the Avi data network VIPs & SEs.
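For example, on the 192.168.1.0/24 network used earlier in this guide, you might leave 192.168.1.50-192.168.1.180 to dnsmasq DHCP and reserve something like 192.168.1.200-192.168.1.250 (outside the DHCP range) as the static pool Avi uses for Service Engine data NICs and VIPs.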
Step by Step Guide (This can all be automated via the API too; a rough API sketch follows these steps)
Open a browser to the controller IP
Configure a password for the admin account
Set DNS resolvers and NTP information, along with the backup passphrase, then -> Next
For System IP Address Management, select DHCP. This assumes that dynamically created VMs for service engines will be assigned management IPs via a DHCP server on their subnet.
For Virtual Service Placement Settings leave both boxes unchecked, then -> Next
Select a distributed virtual switch for the Management network (this should be the same network as the Controller OVA), select DHCP, and then -> Next
For Support Multiple Tenants, Select No
In the main controller UI, navigate to Applications > Templates > Profiles > IPAM/DNS Profiles, then -> Create, select IPAM Profile
Enter an arbitrary name for the IPAM profile. The Type should be Avi Vantage IPAM. Leave Allocate IP in VRF unchecked.
Click Add Usable Network, select Default-Cloud, and for Usable Network select the Management VDS portgroup
Click Save
In the main controller UI, navigate to Infrastructure > Networks, and configure your Portgroup to have a Static IP block for your Load Balancer VIPs and/or Service Engine IPs.
In the main controller UI, navigate to Infrastructure > Clouds, select Default-Cloud, and from the drop down select the IPAM profile we created in steps 13-16.
Finally, we need to create a TLS cert for the Avi controller for a trust-relationship with the TKG management cluster.
In the main controller UI, select Templates > Security > SSL/TLS Certificates, then -> Create and select Controller Certificate
Enter the same name in the Name and Common Name boxes. Select Self-Signed. For Subject Alternative Name, enter the IP address of the Avi controller VM, then Save
Select the certificate in the list and click the Export icon so we can import this CA self-signed cert into TKG later.
In the main controller UI, select Administration > Settings > Access Settings, click the edit icon in System Access Settings
Delete the existing SSL/TLS certificates. Use the SSL/TLS Certificate drop down menu to add the newly created custom certificate.
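As mentioned above, all of these steps can also be driven through the Avi REST API instead of the UI. A rough sketch using curl (the controller IP, password, and API version header are placeholders; adjust for your deployment):
CONTROLLER=192.168.1.20
# log in and save the session cookies
curl -sk -c cookies.txt -H "Content-Type: application/json" -d '{"username":"admin","password":"REPLACE_ME"}' https://$CONTROLLER/login
# read back the cloud configuration (e.g. to verify the IPAM profile is attached to Default-Cloud)
curl -sk -b cookies.txt -H "X-Avi-Version: 20.1.1" https://$CONTROLLER/api/cloud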
Next steps!
If running airgapped (no Internet access except from the jumpbox), follow the "Setting up Tanzu Kubernetes Grid for Airgap" guide
If using Tanzu standalone, follow the "Creating Tanzu Kubernetes Clusters Standalone" guide
If using TCA, keep reading in order!
First, follow the "Installing Telco Cloud Automation" guide
(Then, if airgapped) "Setting up Telco Cloud Automation for Airgap"
Then, follow the "Creating Tanzu Kubernetes Clusters with TCA" guide
Telco Cloud Automation (TCA) provides full infrastructure automation, i.e.
Starting with raw ESXi hardware, it will install/configure most of an SDDC: vCenter, vRealize Orchestrator, vRealize Log Insight, vSAN, NSX, the TCA control planes, and Tanzu Kubernetes management or workload clusters aligned to the Telco Cloud Platform 5G Edition Reference Architecture
Automated lifecycle management (upgrades) for Tanzu Kubernetes clusters, with support for the rest of the SDDC coming in a future release
TCA provides enhanced Tanzu Kubernetes Grid (TKG) for Telco features beyond the standard TKG release
Node pools on different vSphere clusters and different VM sizes
Remote worker nodes (i.e. on remote/edge ESXi nodes)
VM Anti-Affinity rules
CSI NFS provisioner and client
Multus CNI (which is also in the forthcoming TKG 1.4 GA)
Multiple vNICs, including SR-IOV vNICs, on the worker nodes (which TKG 1.4 GA doesn't yet do)
TCA provides for installing CNF software and customizing both the OS and VMs for that software
Assumes that most CNF deployers / overall network managers aren't Kubernetes experts, so it provides a GUI for tracking the status/health and instantiation of CNFs
Also dramatically simplifies the CI/CD required to manage an entire global rollout of CNFs across domains & clusters. CI/CD like Concourse is great and necessary, but it should be the glue that orchestrates the orchestrators rather than the platform that does everything itself (which never works).
Most CNFs are just Helm charts wrapped with extra files/metadata in a ZIP file called a CSAR (Cloud Service Archive). The main metadata is a TOSCA YAML file, which describes the Helm chart and any VM/OS customizations. TOSCA and CSAR are an OASIS standard that was adopted by ETSI for VNF packaging and is now being repurposed for CNFs & Kubernetes. You can't really do this kind of node spec/customization solely with Helm/K8s beyond some DaemonSet hackery; this standard feels cleaner, though it likely isn't the endgame in this space. See https://www.etsi.org/deliver/etsi_gs/NFV-SOL/001_099/004/02.05.01_60/gs_nfv-sol004v020501p.pdf
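To make the packaging concrete, a rough sketch of what a CNF CSAR might contain (directory and file names here are illustrative, not taken from any particular CNF):
mycnf.csar (a ZIP archive)
  TOSCA-Metadata/TOSCA.meta          <- points at the entry TOSCA definitions file
  Definitions/mycnf.yaml             <- TOSCA descriptor: references the Helm chart plus any VM/OS customizations
  Artifacts/charts/mycnf-1.0.0.tgz   <- the wrapped Helm chart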
Some VM customization examples:
NUMA alignment
CPU static scheduling
SR-IOV NIC assignment
Some OS configuration examples
Realtime kernel
Extra packages (e.g. PCI drivers)
SR-IOV OS configuration
DPDK OS configuration
Requirements
TCA ships as a single OVA deployed in either Manager or Control Plane mode. The Manager is the GUI and API. A Control Plane is paired with each vCenter you want to manage.
TCA requires vRealize Orchestrator (vRO) though this can be a shared instance across many vCenters
For license, click Activate Later. We'll need an HTTP Proxy to activate the license at some point in the Configuration tab.
Select the location of the Control Plane VM on the map, click continue
Enter an arbitrary system name, click continue
Enter the config details of the vSphere cloud you want to connect to.
a. vCenter Server URL, username & password
b. (optional) NSX Manager URL, username & password
c. SSO Server URL (usually just the vCenter URL again)
d. vRealize Orchestrator URL (leave blank for now until you deploy it; note that with vRO 8.x this is port 443)
From here we can create Tanzu management clusters & Tanzu clusters under "CaaS Administration". If this is greyed out, we need to activate the license key. This will be discussed later.
Upload it to vCenter as an OVF template; the vApp properties should be mostly self-explanatory network & admin settings. Be sure to use the FQDN of the DNS record you created.
Boot the appliance, and wait a few minutes for it to initialize
Log in to the appliance at https://vro_fqdn/vco with the "root" username and the password configured in the vApp properties. Validate it's up.
Untar the installer with tar zxvf harbor-offline-installer-v2.3.0.tgz in the directory where you want it
Generate a TLS cert. Here's how to do a self-CA-signed one with OpenSSL.
openssl genrsa -out ca.key 4096 to generate the CA key
openssl req -x509 -new -nodes -sha512 -days 3650 -subj "/C=CA/L=Toronto/O=bigco/OU=lab/CN=harborCA" -key ca.key -out ca.crt to generate the CA cert, change the values to your preferences
openssl genrsa -out harbor.bigco.lab.key 4096 to generate the server key
openssl req -sha512 -new -subj "/C=CA/L=Toronto/O=bigco/OU=Lab/CN=harbor.bigco.lab" -key harbor.bigco.lab.key -out harbor.bigco.lab.csr to generate the server CSR; change the CN to your desired FQDN for Harbor
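The CSR then needs to be signed by the CA created above. This step isn't shown in the commands above; a typical OpenSSL continuation (adjust the FQDN to match your Harbor DNS record) looks like:
printf 'subjectAltName = DNS:harbor.bigco.lab\nextendedKeyUsage = serverAuth\n' > v3.ext
openssl x509 -req -sha512 -days 3650 -extfile v3.ext -CA ca.crt -CAkey ca.key -CAcreateserial -in harbor.bigco.lab.csr -out harbor.bigco.lab.crt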