• This architecture has three vSphere clusters, each mapped to a vSphere zone. Each vSphere zone is a failure domain. A maximum of three zones is currently supported, and all three zones / vSphere clusters together form one supervisor.
  • The three vSphere zones are configured on the same NSX overlay transport zone.
  • Supervisor control plane VMs and TKG service cluster VMs (control plane and worker node pools) are distributed across the vSphere zones for high availability.
  • The edge cluster for the T0 gateway will be deployed on a shared vSphere management/edge cluster or on a dedicated vSphere edge cluster.
  • A dedicated SE Group will be created for the vSphere supervisor. The service engines within the SE Group will be deployed across the zones with zone awareness (available from AVI 31.1.1 onwards).
  • Optionally, another dedicated SE Group per vSphere supervisor can be used to host the system DNS for L7 Ingress / GSLB / AMKO use cases.
  • The three vSphere clusters (one per zone) in this design are:
  • VxDC01-C01 (Cluster 1)
  • VxDC01-C02 (Cluster 2)
  • VxDC01-C03 (Cluster 3)
  • N+M mode with 2 SEs and 1 buffer: This allows the virtual services to be placed across two SEs, with a buffer (hot-spare) SE that can take over in case of an AZ failure. The SEs will be distributed across the three zones.
  • A/A mode with 2 or 3 SEs: This allows the virtual services to be placed across 2 or 3 SEs depending on the VS scale factor. The SEs will be distributed across the three zones.
  • Use a template SE Group in the NSX cloud connector with the placement scope set to “vCenter” instead of “Availability Zones”.
  • Run the supervisor activation workflow and wait for it to succeed.
  • At this point the service engines are deployed without zone awareness and may even land on the same vSphere cluster.
  • Disable the supervisor control plane VIPs in AVI.
  • Manually delete the deployed service engines from the AVI controller.
  • Edit the cloned SE Group and change the placement scope to “Availability Zones”.
  • Re-enable the supervisor control plane VIPs in AVI.
  • New service engines will now be deployed with zone awareness in vCenter.
  • When a TKG service cluster is deployed, the control plane nodes will be deployed across the three vSphere zones.
  • TKG service cluster worker nodes can be deployed across vSphere zones by defining a “failureDomain” for each node pool. Each “failureDomain” maps to a vSphere zone, and hence each node pool will be deployed to a specific vSphere zone, as shown in the spec below:
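A minimal sketch of such a spec is shown below, using the TanzuKubernetesCluster v1alpha3 API, which exposes `failureDomain` per node pool. The cluster name, namespace, VM class, storage class, TKR version, and zone names are all placeholders and would need to match your environment:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkc-zonal            # placeholder cluster name
  namespace: tkg-namespace   # placeholder vSphere namespace
spec:
  topology:
    controlPlane:
      replicas: 3            # spread across the three vSphere zones
      vmClass: best-effort-medium        # placeholder VM class
      storageClass: zonal-storage-policy # placeholder zonal storage policy
      tkr:
        reference:
          name: v1.26.5---vmware.2-tkg.1 # placeholder TKR version
    nodePools:
    - name: pool-zone-1
      replicas: 2
      vmClass: best-effort-medium
      storageClass: zonal-storage-policy
      failureDomain: zone-1  # placeholder; maps to the zone backed by VxDC01-C01
    - name: pool-zone-2
      replicas: 2
      vmClass: best-effort-medium
      storageClass: zonal-storage-policy
      failureDomain: zone-2  # placeholder; maps to the zone backed by VxDC01-C02
    - name: pool-zone-3
      replicas: 2
      vmClass: best-effort-medium
      storageClass: zonal-storage-policy
      failureDomain: zone-3  # placeholder; maps to the zone backed by VxDC01-C03
```

With this layout, scaling a node pool keeps its workers pinned to that pool's vSphere zone, so a zone failure takes out at most one pool.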
