Cabling the network

Review the following information to understand how to cable the ThinkAgile VX appliances to the network.

Logical network design for deployment

Figure 1 shows the logical network architecture for the various components in the vSAN cluster deployment.
Note: Figure 3 shows details about physical cabling.

The VX Deployer Appliance is a virtual machine that runs on the VMware vSphere ESXi hypervisor. In the diagram, the Management ESXi host is a designated system on which various management appliances run, including Lenovo XClarity and the vCenter Server Appliance (VCSA).

On a preloaded ThinkAgile VX appliance, the VX Deployer virtual appliance comes preinstalled. In this case, the Deployer runs on one of the VX appliances, and the cluster deployment is performed from there.

Figure 1. Logical network design - cluster cabling perspective. From the cluster cabling perspective, the system on which the VX Deployer runs must be cabled to both the ESXi management and XCC management networks, as shown in this diagram.
Graphic showing a logical view of the networking

Figure 2 shows the logical network architecture from the cluster operations perspective:
  • Each VX server has dedicated connections through the onboard 10 Gbps Ethernet ports for in-band management (ESXi management, vCenter, and so on).

  • The XClarity Controller (XCC) interfaces have dedicated connections for out-of-band management access.

  • The VX Deployer virtual appliance needs access to the ESXi management and XCC management networks through the virtual switches. Therefore, the respective port groups must be configured on those switches.

Figure 2. Logical network architecture for cluster deployment operations
Graphic showing a logical view of the networking

Physical network cabling

Figure 3 shows how to physically cable the ThinkAgile VX Appliances to the network.
Note: In Figure 3, the respective network VLAN IDs shown are examples only. You can define your own VLAN IDs on the switches for the different traffic types.
Important: The ThinkAgile VX Deployer automatically assigns internal IP addresses to the VMK adapter for the vSAN network. Therefore, attempting to deploy multiple vSAN clusters on the same network as the ThinkAgile VX Deployer might lead to IP address conflicts. Make sure that each vSAN cluster is isolated in its own VLAN to avoid any issues with IP address overlap.
Figure 3. Physical network cabling for VX cluster deployment
Graphic showing the network cabling
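Given the caution above about IP address overlap, one quick pre-deployment check is to confirm that the vSAN subnets planned for different clusters do not overlap. The sketch below uses Python's standard ipaddress module; the cluster names and subnet values are hypothetical placeholders, not values assigned by the Deployer.

```python
import ipaddress

# Hypothetical vSAN subnets for two planned clusters.
cluster_vsan_nets = {
    "cluster-a": ipaddress.ip_network("192.168.10.0/24"),
    "cluster-b": ipaddress.ip_network("192.168.10.0/24"),  # same subnet -> conflict
}

def find_overlaps(nets):
    """Return pairs of cluster names whose vSAN subnets overlap."""
    names = sorted(nets)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if nets[a].overlaps(nets[b])
    ]

print(find_overlaps(cluster_vsan_nets))  # [('cluster-a', 'cluster-b')]
```

Any pair reported by the check should be moved onto separate VLANs with distinct subnets before deployment.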

Table 1. Network cabling diagram
In-band management network - carries the following traffic:
  • Communication with ESXi hosts

  • Communication between the vCenter Server Appliance and the ESXi hosts

  • vSAN storage traffic

  • vMotion (virtual machine migration) traffic

  • iSCSI storage traffic (if present)

  Connections:
  • Port 0 on NIC to 10 Gbps Data Switch #1 (required)

  • Port 1 on NIC to 10 Gbps Data Switch #2 (required)

  • Port 2 on NIC to 10 Gbps Data Switch #1 (optional)

  • Port 3 on NIC to 10 Gbps Data Switch #2 (optional)

Out-of-band management network - used for:
  • Initial server discovery on the network via SLP (Service Location Protocol)

  • Server power control

  • LED management

  • Inventory

  • Events and alerts

  • BMC logs

  • Firmware updates

  • OS provisioning via remote media mount

  Connection:
  • BMC network connector to 1 Gbps Management Switch (required)

Data or user network:
  • 10 Gbps Data Switches #1 and #2 to the external network (required)
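Before starting a deployment, it can be useful to verify that the XCC interfaces on the out-of-band management network are reachable from the system where the Deployer runs. The sketch below is a generic TCP reachability check against the XCC HTTPS interface on port 443; the addresses shown are hypothetical placeholders, not part of any Lenovo tooling.

```python
import socket

def reachable(host, port=443, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    Port 443 is the XCC HTTPS interface; adjust the port if your
    environment uses a different service for the check.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical XCC addresses on the out-of-band network):
# xcc_hosts = ["10.0.10.11", "10.0.10.12", "10.0.10.13"]
# unreachable = [h for h in xcc_hosts if not reachable(h)]
```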
Note:
  • Out-of-band network:

    • The out-of-band management network does not need to be on a dedicated physical network. It can be included as part of a larger management network.

    • The ThinkAgile VX Deployer and Lenovo XClarity Integrator (LXCI) must be able to access this network to communicate with the XCC modules.

    • During the initial cluster deployment and subsequent operations, the XCC interfaces should be accessible over this network to the Deployer utility as well as to management software such as Lenovo XClarity Integrator (LXCI) and Lenovo XClarity Administrator (LXCA).

  • Network redundancy:

    • Active-standby redundancy mode:

      When only two ports (ports 0 and 1) are connected to the two top-of-rack switches, you can configure the redundancy mode as active-standby. If the primary connection or the primary switch fails, the connection fails over to the standby port.

    • Active-active redundancy mode:

      When four ports (ports 0 to 3) are connected to the two top-of-rack switches, you can configure the redundancy mode as active-active. If one connection fails, the other connections remain active, and loads are balanced across the ports.

    • Optionally, some switches might also support the virtual link aggregation (vLAG) protocol or an equivalent, which connects the two top-of-rack switches via dedicated links and makes them appear as a single logical switch to downstream hosts. In this case, the two connections from the host to the switches can be configured as active-active links, providing load balancing across the ports as well as 20 Gbps of aggregate bandwidth.
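The two teaming behaviors described above can be modeled in a short sketch. This is an illustrative model only; the uplink names and the hashing scheme are assumptions for demonstration, not the actual ESXi NIC-teaming implementation.

```python
def select_uplink(active, standby, failed, vm_port_id):
    """Pick an uplink for a VM's virtual port (illustrative model).

    active/standby: ordered lists of uplink names; failed: set of down uplinks.
    Active-active: spread virtual ports across the healthy active uplinks.
    Active-standby: fall back to a standby uplink when no active uplink is up.
    """
    healthy = [u for u in active if u not in failed]
    if healthy:
        return healthy[vm_port_id % len(healthy)]
    for u in standby:
        if u not in failed:
            return u
    return None  # no remaining path to the network

# Active-standby with two ports: traffic fails over when port 0 goes down.
print(select_uplink(["port0"], ["port1"], {"port0"}, vm_port_id=7))  # port1

# Active-active with four ports: a failure only shrinks the load-balancing pool.
print(select_uplink(["port0", "port1", "port2", "port3"], [], {"port2"}, 5))  # port3
```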

Distributed vSwitches

The VX Deployer creates distributed vSwitches when it installs the VX/vSAN cluster.

The distributed vSwitches essentially form a logical switch that spans all hosts in the cluster. The physical ports on each host become logical uplink ports on the distributed vSwitch. Unlike a standard vSwitch, distributed vSwitches provide advanced configuration options, such as traffic policies, link aggregation (LACP), and traffic shaping.

The number of distributed switches that are created is determined by the number of physical ports on each host that are connected to the top-of-rack switches.

Figure 4 shows the logical design of the distributed vSwitches that will be created by the VX Deployer.

Figure 4. vSAN distributed vSwitch configuration
Graphic showing the port configuration on the data switches
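The relationship between connected ports and the distributed vSwitches that get created can be sketched as below. Note that the exact split of traffic types across switches shown here is an assumption for illustration only; the actual layout is determined by the VX Deployer, as shown in Figure 4.

```python
# Illustrative only: the actual mapping used by the VX Deployer may differ.
# Assumption for this sketch: with two connected ports, all traffic types
# share one distributed vSwitch; with four ports, vSAN gets its own.

def plan_dvswitches(connected_ports):
    """Return a hypothetical distributed-vSwitch layout for a host."""
    if connected_ports == 2:
        return {"DVS-1": ["ESXi mgmt", "vMotion", "vSAN", "VM network"]}
    if connected_ports == 4:
        return {
            "DVS-1": ["ESXi mgmt", "vMotion", "VM network"],
            "DVS-2": ["vSAN"],
        }
    raise ValueError("expected 2 or 4 connected 10 Gbps ports per host")

print(len(plan_dvswitches(4)))  # 2 distributed vSwitches
```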