
Welcome to Part 5 of the blog series on vSphere Supervisor with NSX and AVI. This is a short article on the second vSphere namespace topology, where we override the supervisor network settings with a custom Ingress and Egress network, and walk through how the NSX and AVI objects change from what was implemented in Part 4.
If you missed the previous article, I strongly recommend checking it out before proceeding with this chapter.
Part 4: vSphere namespace with network inheritance
https://vxplanet.com/2025/05/15/vsphere-supervisor-networking-with-nsx-and-avi-part-4-vsphere-namespace-with-network-inheritance/
Let’s get started:
vSphere namespace with Custom Ingress and Egress Network
In this vSphere namespace topology, we will override the supervisor network settings with a custom Ingress and Egress network for the namespace. As such, the below settings are overridden when the vSphere namespace is created:
- Workload / Namespace CIDR
- Ingress CIDR
- Egress CIDR (if NAT mode is enabled)
The below supervisor network settings are inherited:
- T0/VRF gateway
- NAT mode
Let’s log in to vCenter Workload Management and create a vSphere namespace for the staging workloads, making sure the “Override supervisor network settings” flag is checked and that the fields for the Namespace network, Ingress and Egress CIDRs are updated with custom values.

After basic namespace configuration (permissions, storage class, VM Class assignments etc.), we should have the vSphere namespace up and running.
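If you prefer to verify (or script) this step, the vCenter REST API exposes supervisor namespaces under /api/vcenter/namespaces/instances. Below is a minimal Python sketch using the requests library; the vCenter FQDN, credentials and the namespace name "ns-staging" are placeholders for this lab, so adjust them to your environment.

```python
# Minimal sketch: inspect a vSphere namespace via the vCenter REST API.
# vCenter FQDN, credentials and the namespace name are placeholders for this lab.
import json
import requests

VCENTER = "vcenter.example.com"           # placeholder vCenter FQDN
USERNAME = "administrator@vsphere.local"  # placeholder credentials
PASSWORD = "changeme"
NAMESPACE = "ns-staging"                  # example name of the namespace created above

# Create an API session; the returned token goes into the vmware-api-session-id header
resp = requests.post(f"https://{VCENTER}/api/session",
                     auth=(USERNAME, PASSWORD), verify=False)
resp.raise_for_status()
headers = {"vmware-api-session-id": resp.json()}

# Fetch the namespace details; the response includes the config status and the
# network settings the creation workflow applied for this namespace
info = requests.get(f"https://{VCENTER}/api/vcenter/namespaces/instances/{NAMESPACE}",
                    headers=headers, verify=False)
info.raise_for_status()
print(json.dumps(info.json(), indent=2))
```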

Reviewing NSX objects
Now let’s review the NSX objects that are created by the workflow.
As discussed in Part 1 – Architecture and Topologies, a dedicated T1 gateway is created for the vSphere namespace and connected upstream to the provider T0 gateway (the same T0 gateway used by the vSphere supervisor, because we inherited the T0 gateway for this vSphere namespace).

Two default segments are created, both attached to the namespace T1 gateway. We also get an additional dedicated segment for each TKG service cluster that is created under the vSphere namespace, all connected to the same namespace T1 gateway.
- Namespace network: This is where the vSphere Pods attach. This network is carved out of the namespace network specified in the vSphere namespace creation workflow (10.245.0.0/16), which is different from the supervisor workload network (10.244.0.0/20).
- AVI data segment: This is a non-routable subnet in the CGNAT range with DHCP enabled. This is where the AVI SE data interfaces attach. A dedicated interface on the SEs from the supervisor's SE Group is consumed for each vSphere namespace that is created.
- TKG Service cluster network: This is where TKG service clusters attach. One segment is created for each TKG service cluster. This network is carved out of the namespace network (10.245.0.0/16) specified in the vSphere namespace creation workflow. Note that for this article, we haven’t deployed any TKG service clusters as we don’t have any networking changes to highlight.
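To see these objects outside the UI, the NSX Policy API can list the Tier-1 gateways and segments created by the workflow. The sketch below is a minimal example assuming basic authentication against NSX Manager; the hostname, credentials and the name filter are placeholders, and the filter simply matches object names containing the namespace name.

```python
# Minimal sketch: list the namespace Tier-1 gateway and its segments via the NSX Policy API.
# NSX Manager hostname, credentials and the name filter are placeholders for this lab.
import requests

NSX = "nsxmanager.example.com"
AUTH = ("admin", "changeme")
FILTER = "staging"   # substring expected in the namespace object names (example)

def get(path):
    r = requests.get(f"https://{NSX}{path}", auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json().get("results", [])

# Tier-1 gateways - one dedicated gateway is expected per vSphere namespace
for t1 in get("/policy/api/v1/infra/tier-1s"):
    if FILTER in t1["display_name"]:
        print("Tier-1:", t1["display_name"])

# Segments - namespace network, AVI data segment and any TKG service cluster segments
for seg in get("/policy/api/v1/infra/segments"):
    if FILTER in seg["display_name"]:
        print("Segment:", seg["display_name"],
              [s.get("gateway_address") for s in seg.get("subnets", [])])
```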

Since we created the vSphere namespace with NAT mode enabled, an SNAT rule is created under the namespace T1 gateway for outbound communication. The SNAT IP is taken from the Egress CIDR that we specified during the namespace creation workflow. Note that east-west communication between the supervisor and vSphere namespaces always happens without SNAT (we should see No-SNAT rules under the T1 gateways of the supervisor and the vSphere namespaces).
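A quick way to confirm the SNAT and No-SNAT rules is to query the NAT section of the namespace T1 gateway through the NSX Policy API. This is a small sketch under the same assumptions as the previous snippet (placeholder NSX Manager and credentials); the Tier-1 ID is whatever the workflow generated for your namespace.

```python
# Minimal sketch: list NAT rules on the namespace Tier-1 gateway (NSX Policy API).
# Hostname, credentials and the Tier-1 ID are placeholders - take the ID from the earlier listing.
import requests

NSX = "nsxmanager.example.com"
AUTH = ("admin", "changeme")
TIER1_ID = "<namespace-t1-id>"   # e.g. the ID of the Tier-1 found earlier

url = f"https://{NSX}/policy/api/v1/infra/tier-1s/{TIER1_ID}/nat/USER/nat-rules"
rules = requests.get(url, auth=AUTH, verify=False).json().get("results", [])

for rule in rules:
    # Expect an SNAT rule using an IP from the Egress CIDR, plus NO_SNAT rules
    # covering east-west traffic between the supervisor and namespace networks.
    print(rule.get("action"), rule.get("source_network"),
          "->", rule.get("translated_network"))
```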

Because we advertised these custom Ingress and Egress networks from the T0 gateway with route aggregation, we see the summary routes on the physical ToR fabric.

Reviewing AVI objects
Now let’s review the AVI objects that are created by the workflow.
The dedicated T1 gateway created for the vSphere namespace and the respective AVI data segment are added by the workflow as data networks to the AVI cloud connector.

We will see two networks:
- AVI data network: This is where the AVI SE data interface for this vSphere namespace attaches. DHCP is enabled on this data network (through segment DHCP in NSX).
- VIP network: Because we are overriding the network settings from the supervisor with a custom Ingress network, we should see a new IPAM network created and added to the NSX cloud connector.
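These networks can also be listed from the AVI Controller API. The sketch below logs in to the controller and prints the network objects it knows about; the controller address, credentials and the X-Avi-Version header value are placeholders and should match your environment.

```python
# Minimal sketch: list network objects known to the AVI Controller.
# Controller address, credentials and API version header are placeholders.
import requests

CTRL = "avi-controller.example.com"
sess = requests.Session()
sess.verify = False

# Session-based login to the AVI Controller
sess.post(f"https://{CTRL}/login",
          json={"username": "admin", "password": "changeme"}).raise_for_status()
sess.headers.update({"X-Avi-Version": "22.1.3"})   # example version string

# Expect the AVI data network (DHCP enabled) and the new VIP/IPAM network
# created for the custom Ingress CIDR of this namespace
for net in sess.get(f"https://{CTRL}/api/network").json().get("results", []):
    print(net["name"], "dhcp_enabled:", net.get("dhcp_enabled"))
```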

In the IPAM profile, we should see two networks:
- one for the supervisor and all vSphere namespaces inheriting network settings from the supervisor
- and the other for vSphere namespaces with a custom Ingress network

The namespace T1 gateway will be mapped as a VRF Context in AVI as per the NSX cloud connector design.
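A short sketch along the same lines (placeholder controller address, credentials and API version) lists the IPAM profiles and the VRF contexts on the controller, which should reflect the two-network split and the T1-to-VRF mapping described above.

```python
# Minimal sketch: review the IPAM profile and VRF contexts on the AVI Controller.
# Controller address, credentials and API version header are placeholders.
import requests

CTRL = "avi-controller.example.com"
sess = requests.Session()
sess.verify = False
sess.post(f"https://{CTRL}/login",
          json={"username": "admin", "password": "changeme"}).raise_for_status()
sess.headers.update({"X-Avi-Version": "22.1.3"})   # example version string

# IPAM profiles - the usable networks should include both the supervisor VIP network
# and the custom Ingress network of this namespace
for prof in sess.get(f"https://{CTRL}/api/ipamdnsproviderprofile").json().get("results", []):
    print("IPAM profile:", prof["name"])

# VRF contexts - one per namespace T1 gateway, as per the NSX cloud connector design
for vrf in sess.get(f"https://{CTRL}/api/vrfcontext").json().get("results", []):
    print("VRF context:", vrf["name"])
```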

If we have any TKG service clusters deployed, we should see control plane VIPs and application VIPs (L4 / L7 Ingress) created from the custom Ingress network.
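When service clusters do exist, a quick check like the one below (same placeholder controller session as in the earlier snippets) lists the VsVip objects and their allocated addresses, which should fall inside the custom Ingress CIDR.

```python
# Minimal sketch: list VIP addresses allocated on the AVI Controller.
# Controller address, credentials and API version header are placeholders.
import requests

CTRL = "avi-controller.example.com"
sess = requests.Session()
sess.verify = False
sess.post(f"https://{CTRL}/login",
          json={"username": "admin", "password": "changeme"}).raise_for_status()
sess.headers.update({"X-Avi-Version": "22.1.3"})   # example version string

# VsVip objects hold the VIPs for control plane and L4/L7 Ingress virtual services;
# the addresses should come from the custom Ingress network
for vsvip in sess.get(f"https://{CTRL}/api/vsvip").json().get("results", []):
    addrs = [v.get("ip_address", {}).get("addr") for v in vsvip.get("vip", [])]
    print(vsvip["name"], addrs)
```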
TKG Service Clusters and AKO
The guidelines for setting up upstream AKO (service cluster AKO) for L4 / L7 Ingress are similar to what we discussed in Part 4. Please check out the same section at the end of Part 4 for more information.
Now let’s move on to Part 6, where we discuss vSphere namespaces with a dedicated T0 gateway, along with routing considerations for vSphere namespaces to communicate with the supervisor.
Stay tuned!!!
I hope the article was informative. Thanks for reading!

Continue reading? Here are the other chapters of this series:
Part 1: Architecture and Topologies
https://vxplanet.com/2025/04/16/vsphere-supervisor-networking-with-nsx-and-avi-part-1-architecture-and-topologies/
Part 2: Environment Build and Walkthrough
https://vxplanet.com/2025/04/17/vsphere-supervisor-networking-with-nsx-and-avi-part-2-environment-build-and-walkthrough/
Part 3: AVI onboarding and Supervisor activation
https://vxplanet.com/2025/04/24/vsphere-supervisor-networking-with-nsx-and-avi-part-3-avi-onboarding-and-supervisor-activation/
Part 4: vSphere namespace with network inheritance
https://vxplanet.com/2025/05/15/vsphere-supervisor-networking-with-nsx-and-avi-part-4-vsphere-namespace-with-network-inheritance/
