• T0 / VRF Gateway
  • Workload / Namespace CIDR
  • Ingress CIDR
  • Egress CIDR (if NAT mode is enabled)
  • NAT mode
  • Namespace network: This is the network that vSphere Pods attach to. It is carved out of the namespace network specified in the supervisor activation workflow.
  • AVI data segment: This is a non-routable subnet from the CGNAT range with DHCP enabled, and it is where the AVI SE data interfaces attach. For each vSphere namespace that is created, a dedicated interface is consumed on the SEs of the supervisor's SE Group. Depending on the HA mode and the VS scale factor, enough SEs (and/or interfaces) need to be available in the SE Group to cater for the growing number of vSphere namespaces. For example, an SE Group with 2 SEs and a VS scale factor of 2 can support up to 9 vSphere namespaces: each SE has 10 interfaces, one of which is already consumed for SE management, leaving 9 data interfaces per SE, and each namespace consumes one interface on both SEs. Additional namespaces require more SEs to be spun up in the SE Group.
  • TKG Service cluster network: This is the network that TKG Service clusters attach to. One segment is created for each TKG Service cluster, and this network is carved out of the namespace network specified in the supervisor activation workflow.
  • AVI data network: As discussed previously, AVI SEs work in one-arm mode under the NSX cloud connector. One SE interface is consumed for each T1 gateway in the cloud connector, and this interface attaches to the respective data network under that T1 gateway. DHCP is enabled on this data network (through segment DHCP in NSX).
  • VIP network: Because we are inheriting the network settings from the supervisor, we should see the same IPAM network being used for the vSphere namespace.
  • L4 VIPs (services of type LoadBalancer) of all TKG Service clusters (from all vSphere namespaces) land in the default “admin” tenant in AVI – No multi-tenancy
  • One SE Group is used per vSphere supervisor, and this SE Group is shared across all vSphere namespaces and TKG Service clusters – No data plane isolation
  • Ingress resources on the TKG Service clusters need to be handled by a different ingress controller (Contour, a manual AKO install, etc.) – Manageability overhead
  • Edit the values.yaml file and update the settings below as per your requirements (a sample values.yaml sketch follows this list):
    • clusterName: unique name for the AKO cluster
    • layer7Only: This needs to be set to “false” to allow the service cluster AKO to handle L4 requests.
    • serviceType: Defines the AKO ingress service type. NodePortLocal mode is recommended if the TKG Service cluster CNI is Antrea.
    • L4 defaultLBController: This needs to be set to “false” so that the service cluster AKO handles L4 requests using the load balancer class (see the Service example after this list).
    • serviceEngineGroupName: A custom SE Group name if a dedicated SE Group needs to be assigned to the TKG Service cluster (if data plane isolation is a requirement).
    • tenantName: A custom tenant name if all L4 and L7 VIPs need to be steered to a specific AVI tenant (if AVI multitenancy is a requirement)
    • nsxtT1LR: The T1 LR ID that determines the VRF context under which the VIPs will be created. Ideally, this is the T1 gateway created by NCP for the vSphere namespace where the TKG Service cluster is hosted.
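
A minimal sketch of what these overrides could look like in values.yaml is shown below. The key paths assume the layout of the AKO Helm chart at the time of writing and may differ between AKO versions; the cluster name, SE Group, tenant and T1 path are placeholder values for illustration.

```yaml
# Sketch of AKO values.yaml overrides for a TKG Service cluster (placeholder values)
AKOSettings:
  clusterName: tkc-cluster-01          # unique name identifying this cluster in AVI
  layer7Only: false                    # let this AKO instance handle L4 as well as L7

L7Settings:
  serviceType: NodePortLocal           # recommended when the cluster CNI is Antrea

L4Settings:
  defaultLBController: false           # only handle LoadBalancer Services that set the AVI loadBalancerClass

ControllerSettings:
  serviceEngineGroupName: seg-tkc-01   # dedicated SE Group if data plane isolation is required
  tenantName: tenant-tkc-01            # custom AVI tenant if multi-tenancy is required

NetworkSettings:
  nsxtT1LR: /infra/tier-1s/<t1-id>     # T1 gateway created by NCP for the vSphere namespace
```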
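
With defaultLBController set to false, the service cluster AKO only picks up LoadBalancer Services that explicitly request the AVI load balancer class, leaving other LoadBalancer Services to the default provider. The sketch below illustrates this; the class name ako.vmware.com/avi-lb is the one documented for AKO (verify it against your AKO version), and the application name and ports are placeholders.

```yaml
# Hypothetical LoadBalancer Service steered to the in-cluster AKO via loadBalancerClass
apiVersion: v1
kind: Service
metadata:
  name: web-frontend                        # placeholder application name
  namespace: default
spec:
  type: LoadBalancer
  loadBalancerClass: ako.vmware.com/avi-lb  # hands this Service to AKO, which creates the L4 VS in AVI
  selector:
    app: web-frontend
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```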
