• Workload / Namespace CIDR
  • Ingress CIDR
  • Egress CIDR (if NAT mode is enabled)
  • T0/VRF gateway
  • NAT mode
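Before filling in these fields it is worth sanity-checking that the chosen ranges do not collide. A minimal sketch using Python's `ipaddress` module; the CIDR values below are illustrative assumptions, not the article's actual plan:

```python
import ipaddress

# Illustrative CIDR plan only (these values are assumptions):
workload = ipaddress.ip_network("10.245.0.0/16")   # Workload / Namespace CIDR
ingress  = ipaddress.ip_network("10.246.0.0/24")   # Ingress CIDR (VIPs)
egress   = ipaddress.ip_network("10.247.0.0/24")   # Egress CIDR, only consumed in NAT mode

# None of the three ranges may overlap one another
pairs = [(workload, ingress), (workload, egress), (ingress, egress)]
assert not any(a.overlaps(b) for a, b in pairs)
print("CIDR plan is overlap-free")
```

The same check is worth repeating whenever a namespace overrides the supervisor defaults with its own ranges.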
  • Namespace network: This is the network the vSphere Pods attach to. It is carved out of the namespace network specified in the vSphere namespace creation workflow (10.245.0.0/16), which is distinct from the supervisor workload network (10.244.0.0/20).
  • AVI data segment: This is a non-routable subnet from the CGNAT range with DHCP enabled. The AVI SE data interfaces attach to it. For each vSphere namespace that is created, a dedicated interface is consumed on the SEs in the supervisor's SE Group.
  • TKG Service cluster network: This is where TKG Service clusters attach. One segment is created per TKG Service cluster, carved out of the namespace network (10.245.0.0/16) specified in the vSphere namespace creation workflow. Note that for this article we haven't deployed any TKG Service clusters, as there are no networking changes to highlight there.
  • AVI data network: This is where the AVI SE data interfaces for this vSphere namespace attach. DHCP is enabled on this network through segment DHCP in NSX.
  • VIP network: Because we are overriding the supervisor's network settings with a custom Ingress network, we should see a new IPAM network created and added to the NSX cloud connector. That leaves two IPAM networks:
  • one for the supervisor and all vSphere namespaces inheriting network settings from the supervisor
  • the other for vSphere namespaces with a custom Ingress network
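The carving described above is straightforward to illustrate with Python's `ipaddress` module. The /28 segment size below is an assumption chosen for illustration; the actual prefix length NSX allocates per segment depends on the supervisor configuration:

```python
import ipaddress

namespace_net = ipaddress.ip_network("10.245.0.0/16")        # namespace network from the workflow
supervisor_workload = ipaddress.ip_network("10.244.0.0/20")  # supervisor workload network

# Each Pod / TKG service cluster segment is carved out of the namespace
# network; the /28 size here is an illustrative assumption.
segments = list(namespace_net.subnets(new_prefix=28))

print(segments[0])        # first carved segment: 10.245.0.0/28
print(len(segments))      # how many /28s fit in a /16: 4096

# The namespace network and the supervisor workload network are distinct ranges
print(namespace_net.overlaps(supervisor_workload))  # False
```

This also makes it easy to see why the namespace network (10.245.0.0/16) and the supervisor workload network (10.244.0.0/20) never hand out conflicting addresses: they are disjoint ranges.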