VMware Cloud Foundation on Dell EMC VxRail Guide
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© Copyright 2019-2022 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents
9 Certificate Management
View Certificate Information
Configure VMware Cloud Foundation to Use Microsoft CA-Signed Certificates
Prepare Your Microsoft Certificate Authority to Enable SDDC Manager to Manage Certificates
Install Microsoft Certificate Authority Roles
Configure the Microsoft Certificate Authority for Basic Authentication
Create and Add a Microsoft Certificate Authority Template
Assign Certificate Management Privileges to the SDDC Manager Service Account
Configure a Microsoft Certificate Authority in SDDC Manager
Install Microsoft CA-Signed Certificates using SDDC Manager
Configure VMware Cloud Foundation to Use OpenSSL CA-Signed Certificates
Configure OpenSSL-signed Certificates in SDDC Manager
Install OpenSSL-signed Certificates using SDDC Manager
Install Third-Party CA-Signed Certificates
Remove Old or Unused Certificates from SDDC Manager
Configure Certificates for a Shared Single Sign-On Domain
10 License Management
Add a License Key
Edit License Description
Delete License Key
12 Storage Management
vSAN Storage with VMware Cloud Foundation
Fibre Channel Storage with VMware Cloud Foundation
Sharing Remote Datastores with HCI Mesh for VI Workload Domains
Deploy Clustered Workspace ONE Access Instance Using vRealize Suite Lifecycle Manager
Configure an Anti-Affinity Rule and a Virtual Machine Group for the Clustered Workspace ONE Access Instance
Configure NTP on the Clustered Workspace ONE Access Instance
Configure Identity Source for the Clustered Workspace ONE Access Instance
Add the Clustered Workspace ONE Access Cluster Nodes as Identity Provider Connectors
Assign Roles to Active Directory Groups for the Clustered Workspace ONE Access Instance
Assign Roles to Active Directory Groups for vRealize Suite Lifecycle Manager
Shut Down the VxRail Manager Virtual Machine in the Management Domain
Shut Down the vSphere Cluster Services Virtual Machines
Shut Down the vCenter Server Instance in the Management Domain
Shut Down vSAN and the ESXi Hosts in the Management Domain or for vSphere with Tanzu
Starting Up VMware Cloud Foundation
Start the Management Domain
Start the vSphere and vSAN Components for the Management Domain
Start the vCenter Server Instance in the Management Domain
Start the vSphere Cluster Services
Start the VxRail Manager Virtual Machine
Start the SDDC Manager Virtual Machine
Start the NSX Manager Virtual Machines
Start the NSX Edge Nodes
Start the vRealize Suite Lifecycle Manager Virtual Machine
Start the Clustered Workspace ONE Access Virtual Machines
Start a Virtual Infrastructure Workload Domain
Start the vCenter Server Instance for a VxRail Virtual Infrastructure Workload Domain
Start ESXi hosts, vSAN and VxRail Manager in a Virtual Infrastructure Workload Domain
Start the NSX Manager Virtual Machines
Start the NSX Edge Nodes
Start a Virtual Infrastructure Workload Domain with vSphere with Tanzu
Start the vSphere and vSAN Components for the Management Domain
Start the vCenter Server Instance for a Virtual Infrastructure Workload Domain
Start the vSphere Cluster Services
Start the VxRail Manager Virtual Machine
Start the NSX Manager Virtual Machines
Start the NSX Edge Nodes
1 About VMware Cloud Foundation on Dell EMC VxRail
The VMware Cloud Foundation on Dell EMC VxRail Guide provides information on managing the integration of VMware Cloud Foundation and Dell EMC VxRail. Because this product integrates VMware Cloud Foundation with Dell EMC VxRail, the expected results are obtained only when configuration is performed on both products. This guide covers the VMware Cloud Foundation workflows. For the configuration that must be performed on Dell EMC VxRail, this guide provides links to the Dell EMC VxRail documentation.
Intended Audience
The VMware Cloud Foundation on Dell EMC VxRail Guide is intended for the system
administrators of the VxRail environments who want to adopt VMware Cloud Foundation. The
information in this document is written for experienced data center system administrators who are
familiar with:
n IP networks
Additionally, you should be familiar with these software products, software components, and their
features:
Related Publications
The Planning and Preparation Workbook provides detailed information about the software, tools,
and external services that are required to deploy VMware Cloud Foundation on Dell EMC VxRail.
The VMware Cloud Foundation on Dell EMC VxRail Release Notes provide information about each release, including:
n Resolved issues
n Known issues
The VMware Cloud Foundation on Dell EMC VxRail API Reference Guide provides information
about using the API.
2 VMware Cloud Foundation on Dell EMC VxRail
VMware Cloud Foundation on Dell EMC VxRail enables VMware Cloud Foundation on top of the Dell EMC VxRail platform.
An administrator of a VMware Cloud Foundation on Dell EMC VxRail system performs tasks such
as:
n Manage certificates.
n Troubleshoot issues and prevent problems across the physical and virtual infrastructure.
3 Prepare a VxRail Environment for Cloud Builder Appliance Deployment
Before you can deploy the VMware Cloud Builder Appliance on the VxRail cluster, you must
complete the following tasks.
Procedure
For detailed information about how to image the VxRail management nodes, contact Dell EMC
Support.
n The discovery of the VxRail Nodes occurs. All the nodes that were imaged are detected.
n vCenter
n vSAN
n VxRail Manager
4 Deploy VMware Cloud Builder Appliance
The VMware Cloud Builder appliance is a VM that you use to deploy and configure the management domain and transfer inventory and control to SDDC Manager. During the deployment process, the VMware Cloud Builder appliance validates the network information you provide in the deployment parameter workbook, such as DNS, network (VLANs, IPs, MTUs), and credentials.
This procedure describes deploying the VMware Cloud Builder appliance to the cluster that was
created during the VxRail first run.
Prerequisites
CPU: 4 vCPUs
Memory: 4 GB
Storage: 150 GB
The VMware Cloud Builder appliance must be on the same management network as the hosts to
be used. It must also be able to access all required external services, such as DNS and NTP.
Procedure
3 In the navigator, select the cluster that was created during the VxRail first run.
6 Browse to the VMware Cloud Builder appliance OVA, select it, and click Open.
7 Click Next.
8 Enter a name for the virtual machine, select a target location, and click Next.
9 Select the cluster you created during the VxRail first run and click Next.
12 On the Select Storage page, select the storage for the VMware Cloud Builder appliance and
click Next.
13 On the Select networks dialog box, select the management network and click Next.
14 On the Customize template page, enter the following information for the VMware Cloud
Builder appliance and click Next:
Admin Username: The admin user name cannot be one of the following pre-defined user names: root, bin, daemon, messagebus, systemd-bus-proxy, systemd-journal-gateway, systemd-journal-remote, systemd-journal-upload, systemd-network, systemd-resolve, systemd-timesync, nobody, sshd, named, rpc, tftp, ntp, smmsp, or cassandra.
Admin Password/Admin Password confirm: The admin password must be a minimum of 8 characters and include at least one uppercase, one lowercase, one digit, and one special character.
Root password/Root password confirm: The root password must be a minimum of 8 characters and include at least one uppercase, one lowercase, one digit, and one special character.
Hostname: Enter the hostname for the VMware Cloud Builder appliance.
Network 1 IP Address: Enter the IP address for the VMware Cloud Builder appliance.
Default Gateway: Enter the default gateway for the VMware Cloud Builder appliance.
DNS Servers: IP address of the primary and secondary DNS servers (comma separated). Do not specify more than two servers.
DNS Domain Search Paths: Comma separated. For example, vsphere.local, sf.vsphere.local.
Note Make sure your passwords meet the requirements specified above before clicking
Finish or your deployment will not succeed.
16 After the VMware Cloud Builder appliance is deployed, SSH in to the VM with the admin
credentials provided in step 14.
18 Verify that the VMware Cloud Builder appliance has access to the required external services,
such as DNS and NTP by performing forward and reverse DNS lookups for each host and the
specified NTP servers.
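For example, a quick spot check of name resolution from an SSH session on the appliance (a sketch; the host name and IP shown are placeholders taken from the sample environment later in this guide, and it assumes the nslookup utility is available):

nslookup esxi-1.vrack.vsphere.local    # forward lookup: host name to IP
nslookup 172.16.11.101                 # reverse lookup: IP back to host name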
5 Deploy the Management Domain Using VMware Cloud Builder
The VMware Cloud Foundation deployment process is referred to as bring-up. You specify
deployment information specific to your environment such as networks, hosts, license keys, and
other information in the deployment parameter workbook and upload the file to the VMware
Cloud Builder appliance to initiate bring-up.
During bring-up, the management domain is created on the ESXi hosts specified in the
deployment parameter workbook. The VMware Cloud Foundation software components are
automatically deployed, configured, and licensed using the information provided.
The following procedures describe how to perform bring-up of the management domain using
the deployment parameter workbook. You can also perform bring-up using a custom JSON
specification. See the VMware Cloud Foundation API Reference Guide for more information.
Externalizing the vCenter Server that gets created during the VxRail first run is automated as part
of the bring-up process.
The deployment parameter workbook is downloaded from the VMware Cloud Builder appliance
and the completed workbook is uploaded back to the VM. The deployment parameter workbook
can be reused to deploy multiple VMware Cloud Foundation instances of the same version.
Procedure
1 In a web browser, log in to the VMware Cloud Builder appliance administration interface:
https://Cloud_Builder_VM_FQDN.
2 Enter the admin credentials you provided when you deployed the VMware Cloud Builder
appliance and then click Log In.
3 On the End-User License Agreement page, select the I Agree to the End User License
Agreement check box and click Next.
4 On the Select Platform page, select VMware Cloud Foundation on VxRail and click Next.
5 On the Review Prerequisites page, review the checklist to ensure the requirements are met,
and click Next.
If there are any gaps, ensure they are fixed before proceeding to avoid issues during the
bring-up process. You can download or print the prerequisite list for reference.
6 On the Prepare Configuration page, in the Download Workbook step, click Download.
7 Complete the deployment parameter workbook. See About the Deployment Parameter
Workbook.
The fields in yellow contain sample values that you should replace with the information for your environment. If a cell turns red, the required information is missing or input validation has failed.
Important The deployment parameter workbook is not able to fully validate all inputs due to
formula limitations of Microsoft Excel. Some validation issues may not be reported until you upload
the deployment parameter workbook to the VMware Cloud Builder appliance.
Note Do not copy and paste content between cells in the deployment parameter workbook, since
this may cause issues.
VxRail Prerequisites
n The VxRail first run is completed and vCenter Server and VxRail Manager VMs are deployed.
n The vCenter Server version matches the build listed in the Cloud Foundation Bill of Materials
(BOM). See the VMware Cloud Foundation Release Notes for the BOM.
Credentials Worksheet
The Credentials worksheet details the accounts and initial passwords for the VMware Cloud Foundation components. You must provide input for each yellow box. A red cell may indicate that validation of the password length has failed.
Input Required
Update the Default Password field for each user (including the automation user in the last row).
Passwords can be different per user or common across multiple users. The tables below provide
details on password requirements.
Password Requirements
VxRail Manager service account (mystic): Standard. The service account password must be different than the VxRail Manager root account password.
ESXi Host root account: This is the password which you configured on the hosts during ESXi installation.
NSX-T user interface and default CLI admin account:
1 Length 12-127 characters
2 Must include:
n mix of uppercase and lowercase letters
n a number
n a special character, such as @ ! # $ % ^ or ?
n at least five different characters
3 Must not include: * { } [ ] ( ) / \ ' " ` ~ , ; : . < >
For the Management Network, vMotion Network, and vSAN Network, enter the VLAN ID, a portgroup name, the CIDR notation for the network, the gateway IP for the network, and the MTU for the network. The VLAN ID can be between 0 and 4094. The MTU can be between 1500 and 9000.
Note Enter 0 for the management VLAN if you imaged the servers with VIA. VLAN 0 means the management network is untagged.
System vSphere Distributed Switch - Name: Enter the name of the vDS to use for overlay traffic.
System vSphere Distributed Switch - vmnics to be used for overlay traffic: Enter the vmnics to use for overlay traffic.
Secondary vSphere Distributed Switch - Name: Enter a name for the secondary vSphere Distributed Switch (vDS).
Secondary vSphere Distributed Switch - vmnics: Enter the vmnics to assign to the secondary vDS. For example: vmnic4, vmnic5
Secondary vSphere Distributed Switch - MTU Size: Enter the MTU size for the secondary vDS. Default value is 9000.
Enter host names and IP addresses for each of the four ESXi hosts.
1 In a web browser, log in to the ESXi host using the VMware Host Client.
2 In the navigation pane, click Manage and click the Services tab.
4 Connect to the VMware Cloud Builder appliance using an SSH client such as Putty.
5 Enter the admin credentials you provided when you deployed the VMware Cloud Builder
appliance.
6 Retrieve the ESXi SSH fingerprints by entering the following command replacing hostname
with the FQDN of the first ESXi host:
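The exact command is not reproduced in this extract; one common way to read a host's SSH key fingerprint (an illustrative sketch, not necessarily the command the guide intends) is:

ssh-keygen -lf <(ssh-keyscan -t rsa hostname 2> /dev/null)    # prints the RSA host key fingerprint for hostname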
7 In the VMware Host Client, select the TSM-SSH service for the ESXi host and click Stop.
9 Retrieve the vCenter Server SSH fingerprint by entering the following command replacing
hostname with the FQDN of your vCenter Server:
10 Retrieve the vCenter Server SSL thumbprint by entering the following command replacing
hostname with the FQDN of your vCenter Server:
openssl s_client -connect hostname:443 < /dev/null 2> /dev/null | openssl x509 -sha256 -fingerprint -noout -in /dev/stdin
11 Retrieve the VxRail Manager SSH fingerprint by entering the following command replacing
hostname with the FQDN of your VxRail Manager:
12 Retrieve the VxRail Manager SSL thumbprint by entering the following command replacing
hostname with the FQDN of your VxRail Manager:
openssl s_client -connect hostname:443 < /dev/null 2> /dev/null | openssl x509 -sha256 -fingerprint -noout -in /dev/stdin
Caution For L3 aware or stretch clusters, DHCP is required for Host Overlay Network TEP IP
assignment.
For the management domain and VI workload domains with uniform L2 clusters, you can choose
to use static IP addresses instead. Make sure the IP range includes enough IP addresses for the
number of hosts that will use the static IP Pool. The number of IP addresses required depends
on the number of pNICs on the ESXi hosts that are used for the vSphere Distributed Switch that
handles host overlay networking. For example, a host with four pNICs that uses two pNICs for host
overlay traffic requires two IP addresses in the static IP pool.
Caution If you use static IP addresses for the management domain Host Overlay Network TEPs,
you cannot stretch clusters in the management domain or any VI workload domains.
VLAN ID: Enter a VLAN ID for the NSX-T host overlay network. The VLAN ID can be between 0 and 4094.
Configure NSX-T Host Overlay Using a Static IP Pool: Select No to use DHCP.

VLAN ID: Enter a VLAN ID for the NSX-T host overlay network. The VLAN ID can be between 0 and 4094.
Configure NSX-T Host Overlay Using a Static IP Pool: Select Yes to use a static IP pool.
CIDR Notation: Enter CIDR notation for the NSX-T Host Overlay network.
Gateway: Enter the gateway IP address for the NSX-T Host Overlay network.
NSX-T Host Overlay Start IP: Enter the first IP address to include in the static IP pool.
NSX-T Host Overlay End IP: Enter the last IP address to include in the static IP pool.
Note If you have only one DNS server, enter n/a in the secondary DNS server cell.
Note If you have only one NTP server, enter n/a in the secondary NTP server cell.
DNS Zone Name: Enter the root domain name for your SDDC management components.
Note VMware Cloud Foundation expects all components to be part of the same DNS zone.
Enable Customer Experience Improvement Program ("CEIP"): Select an option to activate or deactivate CEIP across vSphere, NSX-T, and vSAN during bring-up.
Enable FIPS Security Mode on SDDC Manager: Select an option to activate or deactivate FIPS security mode during bring-up. VMware Cloud Foundation supports Federal Information Processing Standard (FIPS) 140-2. FIPS 140-2 is a U.S. and Canadian government standard that specifies security requirements for cryptographic modules. When you enable FIPS compliance, VMware Cloud Foundation enables FIPS cipher suites and components are deployed with FIPS enabled. To learn more about support for FIPS 140-2 in VMware products, see https://www.vmware.com/security/certifications/fips.html.
Note This option is only available for new VMware Cloud Foundation installations and the setting you apply during bring-up will be used for future upgrades. You cannot change the FIPS security mode setting after bring-up.
In the License Keys section, update the red fields with your license keys. Ensure the license
key matches the product listed in each row and that the license key is valid for the version of
the product listed in the VMware Cloud Foundation BOM. The license key audit during bring-up
validates both the format of the key entered and the validity of the key.
During the bring-up process, you can provide the following license keys:
n ESXi
n vSAN
n vCenter Server
n SDDC Manager
Note The ESXi license key is the only mandatory key. If the other license keys are left blank, then
VMware Cloud Builder applies a temporary OEM license for vSAN, vCenter Server, and NSX-T
Data Center.
Important If you do not enter license keys for these products, you will not be able to create or
expand VI workload domains.
This section of the deployment parameter workbook contains sample configuration information,
but you can update them with names that meet your naming standards.
Note All host names entries within the deployment parameter workbook expect the short name.
VMware Cloud Builder takes the host name and the DNS zone provided to calculate the FQDN
value and performs validation prior to starting the deployment. The specified host names and IP
addresses must be resolvable using the DNS servers provided, both forward (hostname to IP) and
reverse (IP to hostname), otherwise the bring-up process will fail.
vCenter Server: Enter a host name for the vCenter Server and the IP address for the vCenter Server that is part of the management VLAN.
Note Enhanced vMotion Compatibility (EVC) is automatically enabled on the VxRail management cluster.
Select the architecture model you plan to use. If you choose Consolidated, specify the names for
the vSphere resource pools. You do not need to specify resource pool names if you are using
the standard architecture model. See Introducing VMware Cloud Foundation for more information
about these architecture models.
Resource Pool SDDC Management: Specify the vSphere resource pool name for management VMs.
Resource Pool SDDC Edge: Specify the vSphere resource pool name for NSX-T VMs.
Resource Pool User Edge: Specify the vSphere resource pool name for user deployed NSX-T VMs in a consolidated architecture.
Resource Pool User VM: Specify the vSphere resource pool name for user deployed workload VMs.
vSAN Datastore Name: Enter the vSAN datastore name for your management components.
NSX-T Management Cluster VIP: Enter the host name and IP address for the NSX Manager VIP. The host name can match your naming standards but must be registered in DNS with both forward and reverse resolution matching the specified IP.
NSX-T Virtual Appliance Node #1: Enter the host name and IP address for the first node in the NSX Manager cluster.
NSX-T Virtual Appliance Node #2: Enter the host name and IP address for the second node in the NSX Manager cluster.
NSX-T Virtual Appliance Node #3: Enter the host name and IP address for the third node in the NSX Manager cluster.
NSX-T Virtual Appliance Size: Select the size for the NSX Manager virtual appliances. The default is medium.
SDDC Manager Hostname: Enter a host name for the SDDC Manager VM.
SDDC Manager IP Address: Enter an IP address for the SDDC Manager VM.
Cloud Foundation Management Domain Name: Enter a name for the management domain. This name will appear in Inventory > Workload Domains in the SDDC Manager UI.
Procedure
1 On the Prepare Configuration page, in the Download Workbook step click Next.
2 On the Prepare Configuration page, in the Complete Workbook step, click Next.
3 On the Prepare Configuration page, in the Upload File step, click Select File. Navigate to
your completed deployment parameters workbook and click Open.
4 After the file is uploaded, click Next to begin validation of the uploaded file. You can download
or print the validation list.
To access the bring-up log file, SSH to the VMware Cloud Builder appliance as admin and
open the /opt/vmware/bringup/logs/vcf-bringup-debug.log file.
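To follow bring-up progress in real time from that SSH session (a typical approach, not prescribed by this guide), you can tail the log:

tail -f /opt/vmware/bringup/logs/vcf-bringup-debug.log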
If there is an error during the validation and the Next button is grayed out, you can either
make corrections to the environment or edit the deployment parameter workbook and upload
it again. Then click Retry to perform the validation again.
If any warnings are displayed and you want to proceed, click Acknowledge and then click
Next.
During the bring-up process, the vCenter Server, NSX-T Data Center and SDDC Manager
appliances are deployed and the management domain is created. The status of the bring-up
tasks is displayed in the UI.
After bring-up is completed, a green bar is displayed indicating that bring-up was successful.
A link to the SDDC Manager UI is also displayed. If there are errors during bring-up, see
Chapter 6 Troubleshooting VMware Cloud Foundation Deployment.
6 Click Download to download a detailed deployment report. This report includes information
on assigned IP addresses and networks that were configured in your environment.
8 In the SDDC Deployment Completed dialog box, click Launch SDDC Manager.
The VMware Cloud Builder appliance includes the VMware Imaging Appliance service, which
you can use to install ESXi on additional servers after bring-up is complete. You can delete the
VMware Cloud Builder appliance to reclaim its resources or keep it available for future server
imaging.
What to do next
If you have multiple instances of SDDC Manager that are joined to the same Single Sign-On (SSO)
domain, you must take steps to ensure that certificates are installed correctly. See Configure
Certificates for a Shared Single Sign-On Domain.
6 Troubleshooting VMware Cloud Foundation Deployment
During the deployment stage of VMware Cloud Foundation you can use log files and the
Supportability and Serviceability (SoS) Tool to help with troubleshooting.
Note After a successful bring-up, you should only run the SoS Utility on the SDDC Manager
appliance. See Supportability and Serviceability (SoS) Tool in the VMware Cloud Foundation
Administration Guide.
The SoS Utility is not a debug tool, but it does provide health check operations that can facilitate
debugging a failed deployment.
To run the SoS Utility in VMware Cloud Builder, SSH in to the VMware Cloud Builder appliance using the admin administrative account, then enter su to switch to the root user. Navigate to the /opt/vmware/sddc-support directory and type ./sos followed by the options required for your desired operation.
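For example (a sketch based on the options described below, run as root after switching with su):

cd /opt/vmware/sddc-support
./sos                # default log collection
./sos --rvc-logs     # collect only the Ruby vSphere Console logs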
--force: Allows SoS operations from the VMware Cloud Builder appliance after bring-up.
Note In most cases, you should not use this option. Once bring-up is complete, you can run the SoS Utility directly from the SDDC Manager appliance.
--skip-known-host-check: Skips the SSL thumbprint check for hosts in the known hosts file.
--no-clean-old-logs: Use this option to prevent the tool from removing any output from a previous collection run. By default, before writing the output to the directory, the tool deletes the prior run's output files that might be present. If you want to retain the older output files, specify this option.
--rvc-logs: Collects logs from the Ruby vSphere Console (RVC) only. RVC is an interface for ESXi and vCenter.
Note If the Bash shell is not enabled in vCenter, RVC log collection will be skipped.
Note RVC logs are not collected by default with ./sos log collection.
--jsongenerator-input JSONGENERATORINPUT: Specify the path to the input file to be used by the JSON generator utility. For example: /tmp/vcf-ems-deployment-parameter.xlsx.
--jsongenerator-design JSONGENERATORDESIGN: Use vcf-vxrail for VMware Cloud Foundation on Dell EMC VxRail.
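A possible invocation of the JSON generator from the same directory (a sketch; it assumes a --jsongenerator switch that selects this operation, which is not shown in this extract):

./sos --jsongenerator --jsongenerator-input /tmp/vcf-ems-deployment-parameter.xlsx --jsongenerator-design vcf-vxrail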
Note The health check options are primarily designed to run on the SDDC Manager appliance.
Running them on the VMware Cloud Builder appliance requires the --force parameter, which
instructs the SoS Utility to identify the SDDC Manager appliance deployed by VMware Cloud
Builder during the bring-up process, and then execute the health check remotely. For example:
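A representative invocation, matching the NTP health check whose sample output appears later in this chapter, might be:

./sos --ntp-health --force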
--certificate-health: Verifies that the component certificates are valid (within the expiry date).
--general-health: Verifies ESXi entries across all sources, checks the Postgres DB operational status for hosts, checks ESXi for error dumps, and gets NSX Manager and cluster status.
--ntp-health: Verifies whether the time on the components is synchronized with the NTP server in the VMware Cloud Builder appliance.
--services-health: Performs a services health check to confirm whether services are running.
--run-vsan-checks: Runs proactive vSAN tests to verify the ability to create VMs within the vSAN disks.
Sample Output
The following text is a sample output from an --ntp-health operation.
User passed --force flag, Running SOS from Cloud Builder VM, although Bringup is completed
and SDDC Manager is available. Please expect failures with SoS operations.
Health Check : /var/log/vmware/vcf/sddc-support/healthcheck-2020-02-11-23-03-53-24681
Health Check log : /var/log/vmware/vcf/sddc-support/healthcheck-2020-02-11-23-03-53-24681/
sos.log
SDDC Manager : sddc-manager.vrack.vsphere.local
NTP : GREEN
+-----+-----------------------------------------+------------+-------+
| SL# | Area | Title | State |
+-----+-----------------------------------------+------------+-------+
| 1 | ESXi : esxi-1.vrack.vsphere.local | ESX Time | GREEN |
| 2 | ESXi : esxi-2.vrack.vsphere.local | ESX Time | GREEN |
| 3 | ESXi : esxi-3.vrack.vsphere.local | ESX Time | GREEN |
| 4 | ESXi : esxi-4.vrack.vsphere.local | ESX Time | GREEN |
| 5 | vCenter : vcenter-1.vrack.vsphere.local | NTP Status | GREEN |
+-----+-----------------------------------------+------------+-------+
Legend:
The following text is sample output from a --vm-screenshots log collection operation.
User passed --force flag, Running SOS from Cloud Builder VM, although Bringup is completed
and SDDC Manager is available. Please expect failures with SoS operations.
Logs : /var/log/vmware/vcf/sddc-support/sos-2018-08-24-10-50-20-8013
Log file : /var/log/vmware/vcf/sddc-support/sos-2018-08-24-10-50-20-8013/sos.log
Log Collection completed successfully for : [VMS_SCREENSHOT]
VMware Cloud Builder has a number of components that are used during the bring-up process. Each component generates a log file that can be used for troubleshooting. The components and their purpose are:
n JsonGenerator: Used to convert the deployment parameter workbook into the required
configuration file (JSON) that is used by the Bringup Validation Service and Bringup Service.
n Bringup Service: Used to perform the validation of the configuration file (JSON), the ESXi hosts
and infrastructure where VMware Cloud Foundation will be deployed, and to perform the
deployment and configuration of the management domain components and the first cluster.
n Supportability and Serviceability (SoS) Utility: A command line utility for troubleshooting
deployment issues.
vcf-bringup-debug.log: /var/log/vmware/vcf/bringup/
rest-api-debug.log: /var/log/vmware/vcf/bringup/
7 Getting Started with SDDC Manager
You use SDDC Manager to perform administration tasks on your VMware Cloud Foundation
instance. The SDDC Manager UI provides an integrated view of the physical and virtual
infrastructure and centralized access to manage the physical and logical resources.
You work with the SDDC Manager UI by loading it in a web browser. For the list of supported
browsers and versions, see the Release Notes.
Prerequisites
To log in, you need the SDDC Manager IP address or FQDN and the password for the single sign-on user (for example, administrator@vsphere.local). You added this information to the deployment parameter workbook before bring-up.
Procedure
n https://FQDN where FQDN is the fully-qualified domain name of the SDDC Manager
appliance.
2 Log in to the SDDC Manager UI with vCenter Server Single Sign-On user credentials.
Results
You are logged in to SDDC Manager UI and the Dashboard page appears in the web browser.
You use the navigation bar to move between the main areas of the user interface.
Navigation Bar
The navigation bar is available on the left side of the interface and provides a hierarchy for
navigating to the corresponding pages.
Procedure
1 In the SDDC Manager UI, click the logged-in account name in the upper right corner.
8 Configuring Customer Experience Improvement Program
VMware Cloud Foundation participates in the VMware Customer Experience Improvement
Program (CEIP). You can choose to activate or deactivate CEIP for your VMware Cloud
Foundation instance.
The Customer Experience Improvement Program provides VMware with information that allows
VMware to improve its products and services, to fix problems, and to advise you on how best to
deploy and use our products. As part of the CEIP, VMware collects technical information about
your organization’s use of the VMware products and services regularly in association with your
organization’s VMware license keys. This information does not personally identify any individual.
For additional information regarding the CEIP, refer to the Trust & Assurance Center at http://www.vmware.com/trustvmware/ceip.html.
You can activate or deactivate CEIP across all the components deployed in VMware Cloud Foundation by using the following methods:
n When you log into SDDC Manager for the first time, a pop-up window appears. The Join the
VMware Customer Experience Program option is selected by default. Deselect this option if
you do not want to join CEIP. Click Apply.
n You can activate or deactivate CEIP from the Administration tab in the SDDC Manager UI.
Procedure
2 To activate CEIP, select the Join the VMware Customer Experience Improvement Program
option.
3 To deactivate CEIP, deselect the Join the VMware Customer Experience Improvement
Program option.
9 Certificate Management
You can manage certificates for all user interface and API endpoints in a VMware Cloud
Foundation instance, including integrating a certificate authority, generating and submitting
certificate signing requests (CSR) to a certificate authority, and downloading and installing
certificates.
n vCenter Server
n NSX Manager
n SDDC Manager
n VxRail Manager
Note Use vRealize Suite Lifecycle Manager to manage certificates for the other vRealize Suite
components.
It is recommended that you replace all certificates after completing the deployment of the VMware
Cloud Foundation management domain. After you create a new VI workload domain, you can
replace certificates for the appropriate components as needed.
Procedure
2 On the Workload Domains page, from the table, in the domain column click the domain you
want to view.
This tab lists the certificates for each resource type associated with the workload domain. It
displays the following details:
n Resource type
n Resource hostname
n Valid From
n Valid Until
4 To view certificate details, expand the resource next to the Resource Type column.
Complete the following tasks to manage Microsoft CA-Signed certificates using SDDC Manager.
You use SDDC Manager to generate certificate signing requests (CSRs) and request signed certificates from the Microsoft Certificate Authority. SDDC Manager is then used to install the signed certificates on the SDDC components it manages. To achieve this, the Microsoft Certificate Authority must be configured to enable integration with SDDC Manager.
Note When connecting SDDC Manager to Microsoft Active Directory Certificate Services, ensure
that Web Enrollment role is installed on the same machine where the Certificate Authority role
is installed. SDDC Manager can't request and sign certificates automatically if the two roles
(Certificate Authority and Web Enrollment roles) are installed on different machines.
Procedure
1 Log in to the Microsoft Certificate Authority server by using a Remote Desktop Protocol (RDP)
client.
Password: ad_admin_password
b From the Dashboard, click Add roles and features to start the Add Roles and Features
wizard.
f On the Select server roles page, under Active Directory Certificate Services, select
Certification Authority and Certification Authority Web Enrollment and click Next.
Procedure
1 Log in to the Active Directory server by using a Remote Desktop Protocol (RDP) client.
Password: ad_admin_password
b From the Dashboard, click Add roles and features to start the Add Roles and Features
wizard.
f On the Select server roles page, under Web Server (IIS) > Web Server > Security, select
Basic Authentication and click Next.
3 Configure the certificate service template and CertSrv web site, for basic authentication.
a Click Start > Run, enter Inetmgr.exe and click OK to open the Internet Information
Services Application Server Manager.
b Navigate to your_server > Sites > Default Web Site > CertSrv.
f In the Actions pane, under Manage Website, click Restart for the changes to take effect.
components. After you create the template, you add it to the certificate templates of the Microsoft
Certificate Authority.
Procedure
1 Log in to the Active Directory server by using a Remote Desktop Protocol (RDP) client.
Password: ad_admin_password
3 In the Certificate Template Console window, under Template Display Name, right-click Web
Server and select Duplicate Template.
4 In the Properties of New Template dialog box, click the Compatibility tab and configure the
following values.
Setting Value
5 In the Properties of New Template dialog box, click the General tab and enter a name for
example, VMware in the Template display name text box.
6 In the Properties of New Template dialog box, click the Extensions tab and configure the
following.
d Click the Enable this extension check box and click OK.
f Click the Signature is proof of origin (nonrepudiation) check box, leave the defaults for all
other options and click OK.
7 In the Properties of New Template dialog box, click the Subject Name tab, ensure that the
Supply in the request option is selected, and click OK to save the template.
8 Add the new template to the certificate templates of the Microsoft CA.
b In the Certification Authority window, expand the left pane, right-click Certificate
Templates, and select New > Certificate Template to Issue.
c In the Enable Certificate Templates dialog box, select VMware, and click OK.
Prerequisites
n Create a user account in Active Directory with Domain Users membership. For example, svc-vcf-ca.
Procedure
1 Log in to the Microsoft Certificate Authority server by using a Remote Desktop Protocol (RDP)
client.
Password: ad_admin_password
2 Configure least privilege access for a user account on the Microsoft Certificate Authority.
e In the Permissions for .... section, configure the following permissions and click OK.
Read: Deselected
Manage CA: Deselected
3 Configure least privilege access for the user account on the Microsoft Certificate Authority
Template.
e In the Permissions for .... section, configure the following permissions and click OK.
Read: Selected
Write: Deselected
Enroll: Selected
Autoenroll: Deselected
Prerequisites
n Verify connectivity between SDDC Manager and the Microsoft Certificate Authority Server. See
VMware Ports and Protocols.
n Verify that the Microsoft Certificate Authority Server has the correct roles installed on the
same machine where the Certificate Authority role is installed. See Install Microsoft Certificate
Authority Roles.
n Verify the Microsoft Certificate Authority Server has been configured for basic authentication.
See Configure the Microsoft Certificate Authority for Basic Authentication.
n Verify a valid certificate template has been configured on the Microsoft Certificate Authority.
See Create and Add a Microsoft Certificate Authority Template.
n Verify least privileged user account has been configured on the Microsoft Certificate Authority
Server and Template. See Assign Certificate Management Privileges to the SDDC Manager
Service Account.
n Verify that time is synchronized between the Microsoft Certificate Authority and the SDDC
Manager appliance. Each system can be configured with a different timezone, but it is
recommended that they receive their time from the same NTP source.
Procedure
CA Server URL: Specify the URL for the issuing certificate authority. This address must begin with https:// and end with certsrv. For example, https://ca.rainpole.io/certsrv.
Template Name: Enter the issuing certificate template name. You must create this template in Microsoft Certificate Authority. For example, VMware.
Procedure
2 On the Workload Domains page, from the table, in the domain column click the workload
domain you want to view.
a From the table, select the check box for the resource type for which you want to generate
a CSR.
Key Size: Select the key size (2048 bit, 3072 bit, or 4096 bit) from the drop-down menu.
Organization Name: Type the name under which your company is known. The listed organization must be the legal registrant of the domain name in the certificate request.
State: Type the full name (do not abbreviate) of the state, province, region, or territory where your company is legally registered.
d (Optional) On the Subject Alternative Name dialog, enter the subject alternative name(s)
and click Next.
a From the table, select the check box for the resource type for which you want to generate
a signed certificate for.
c In the Generate Certificates dialog box, from the Select Certificate Authority drop-down
menu, select Microsoft.
a From the table, select the check box for the resource type for which you want to install a
signed certificate.
Complete the following tasks to be able to manage OpenSSL-signed certificates issued by SDDC
Manager.
Procedure
State: Enter the full name (do not abbreviate) of the state, province, region, or territory where your company is legally registered.
Procedure
2 On the Workload Domains page, from the table, in the domain column click the workload
domain you want to view.
a From the table, select the check box for the resource type for which you want to generate
a CSR.
Key Size: Select the key size (2048 bit, 3072 bit, or 4096 bit) from the drop-down menu.
Organization Name: Type the name under which your company is known. The listed organization must be the legal registrant of the domain name in the certificate request.
State: Type the full name (do not abbreviate) of the state, province, region, or territory where your company is legally registered.
d (Optional) On the Subject Alternative Name dialog, enter the subject alternative name(s)
and click Next.
You can enter multiple values separated by comma (,), semicolon (;), or space ( ). For
NSX-T, you can enter the subject alternative name for each node along with the Virtual IP
(primary) node.
a From the table, select the check box for the resource type for which you want to generate
a signed certificate.
c In the Generate Certificates dialog box, from the Select Certificate Authority drop-down
menu, select OpenSSL.
a From the table, select the check box for the resource type for which you want to install a
signed certificate.
Prerequisites
Uploading CA-Signed certificates from a Third Party Certificate Authority requires that you collect
the relevant certificate files in the correct format and then create a single .tar.gz file with the
contents. It's important that you create the correct directory structure within the .tar.gz file as
follows:
n The name of the top-level directory must exactly match the name of the workload domain as it
appears in the list on the Inventory > Workload Domains. For example, sfo-m01.
n The PEM-encoded root CA certificate chain file (must be named rootca.crt) must reside inside this top-level directory. The rootca.crt chain file contains a root certificate authority and can have any number of intermediate certificates.
For example:
-----BEGIN CERTIFICATE-----
<Intermediate1 certificate content>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<Intermediate2 certificate content>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<Root certificate content>
-----END CERTIFICATE-----
In the above example, there are two intermediate certificates, intermediate1 and
intermediate2, and a root certificate. Intermediate1 must use the certificate issued by
intermediate2 and intermediate2 must use the certificate issued by Root CA.
n The root CA certificate chain file, intermediate certificates, and root certificate must contain
the Basic Constraints field with value CA:TRUE.
n This directory must contain one sub-directory for each component resource for which you
want to replace the certificates.
n Each sub-directory must exactly match the resource hostname of a corresponding component
as it appears in the Resource Hostname column in the Inventory > Workload Domains >
Security tab.
n Each sub-directory must contain the corresponding .csr file, whose name must exactly
match the resource as it appears in the Resource Hostname column in the Inventory >
Workload Domains > Security tab.
n Each sub-directory must contain a corresponding .crt file, whose name must exactly match
the resource as it appears in the Resource Hostname column in the Inventory > Workload
Domains > Security tab. The content of the .crt files must end with a newline character.
n Server certificate (NSXT_FQDN.crt) must contain the Basic Constraints field with value
CA:FALSE.
n If the NSX-T certificate contains HTTP or HTTPS based CRL Distribution Point it must be
reachable from the server.
n The extended key usage (EKU) of the generated certificate must contain the EKU of the
CSR generated.
Note All resource and hostname values can be found in the list on the Inventory > Workload
Domains > Security tab.
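As an illustration, an archive prepared for the sfo-m01 example domain above, containing one NSX Manager and one vCenter Server resource, might be laid out as follows (the host names are hypothetical placeholders):

sfo-m01/
  rootca.crt
  sfo-m01-nsx01.rainpole.io/
    sfo-m01-nsx01.rainpole.io.csr
    sfo-m01-nsx01.rainpole.io.crt
  sfo-m01-vc01.rainpole.io/
    sfo-m01-vc01.rainpole.io.csr
    sfo-m01-vc01.rainpole.io.crt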
Procedure
2 On the Workload Domains page, from the table, in the domain column click the workload
domain you want to view.
a From the table, select the check box for the resource type for which you want to generate
a CSR.
Key Size: Select the key size (2048 bit, 3072 bit, or 4096 bit) from the drop-down menu.
Organization Name: Type the name under which your company is known. The listed organization must be the legal registrant of the domain name in the certificate request.
State: Type the full name (do not abbreviate) of the state, province, region, or territory where your company is legally registered.
d (Optional) On the Subject Alternative Name dialog, enter the subject alternative name(s)
and click Next.
You can enter multiple values separated by comma (,), semicolon (;), or space ( ). For
NSX-T, you can enter the subject alternative name for each node along with the Virtual IP
(primary) node.
Note Wildcard subject alternative names, such as *.example.com, are not recommended.
5 Download and save the CSR files to the directory by clicking Download CSR.
a Verify that the different .csr files have successfully generated and are allocated in the
required directory structure.
b Request signed certificates from a Third-party Certificate authority for each .csr.
c Verify that the newly acquired .crt files are correctly named and allocated in the required
directory structure.
d Create a new .tar.gz file of the directory structure ready for upload to SDDC Manager. For
example: <domain name>.tar.gz.
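On a Linux host, the archive can be created with a standard tar command (a sketch; sfo-m01 stands in for your workload domain name):

tar -czf sfo-m01.tar.gz sfo-m01/    # run from the parent directory of the domain folder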
8 In the Upload and Install Certificates dialog box, click Browse to locate and select the newly
created <domain name>.tar.gz file and click Open.
9 Click Upload.
10 If the upload is successful, click Install Certificate. The Security tab displays a status of
Certificate Installation is in progress.
Procedure
Password: vcf_password
3 Using the sddcmanager-ssl-util.sh script, retrieve a list of the names of the certificates in the trust store.
4 Using the name of the certificate, delete the old or unused certificate.
5 (Optional) Clean out root certificates in VMware Endpoint Certificate Store from the Platform
Services Controller node.
By default, each vCenter Server that you deploy uses VMCA-signed certificates. VMware
recommends that you replace the default VMCA-signed certificates for each management domain
vCenter Server, across all SDDC Manager instances, with certificates signed by the same external
Certificate Authority (CA). After you deploy a new VI workload domain in any of the SDDC
Manager instances, install a certificate in the VI workload domain vCenter Server that is signed
by the same external CA as the management domain vCenter Servers.
If you plan to use the default VMCA-signed certificates for each vCenter Server across all SDDC
Manager instances, you must take the following steps every time an additional vCenter Server
Appliance is introduced to the SSO domain by any SDDC Manager instance:
n Import the VMCA machine certificate for the new vCenter Server Appliance into the trust store
of all other SDDC Manager instances participating in that SSO domain.
n You deploy a new SDDC Manager instance that shares the same SSO domain as an existing
SDDC Manager instance.
n You deploy a new VI workload domain in any of the SDDC Manager instances that share an
SSO domain.
Procedure
1 Get the certificate for the new management or VI workload domain vCenter Server.
a SSH to the new vCenter Server Appliance using the root user account.
b Enter Shell.
c Retrieve the certificate from the VMware Certificate Store (VECS) and send it to an output
file.
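A typical way to export the machine SSL certificate with the vecs-cli utility (a sketch; the output path is an arbitrary choice) is:

/usr/lib/vmware-vmafd/bin/vecs-cli entry getcert --store MACHINE_SSL_CERT --alias __MACHINE_CERT --output /tmp/<new-vcenter>.cer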
2 Copy the certificate (<new-vcenter>.cer) to a computer that has access to the SDDC
Manager instance(s) to which you want to import the certificate.
3 Import the certificate to the trust store of the SDDC Manager instance(s).
b SSH in to the SDDC Manager appliance using the vcf user account.
trustedKey=$(cat /etc/vmware/vcf/commonsvcs/trusted_certificates.key)
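The remaining import commands are not shown in this extract. One way to complete the import (a sketch that assumes the SDDC Manager trust store is the Java keystore at /etc/vmware/vcf/commonsvcs/trusted_certificates.store, protected by the key read above) is:

keytool -importcert -alias <new-vcenter> -file <new-vcenter>.cer -keystore /etc/vmware/vcf/commonsvcs/trusted_certificates.store -storepass $trustedKey -noprompt    # keystore path is an assumption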
4 Restart all SDDC Manager services on each SDDC Manager instance to which you imported a
trusted certificate.
10 License Management
When deploying management components, VMware Cloud Foundation requires access to valid
license keys. You add license keys to the SDDC Manager inventory so that they can be consumed
at deployment time, but they are not synchronized between SDDC Manager and the underlying
components.
In the deployment parameter workbook that you completed before bring-up, you entered license
keys for the following components:
n VMware vSphere
n VMware vSAN
After bring-up, these license keys appear in the Licensing screen of the SDDC Manager UI.
You must have adequate license units available before you create a VI workload domain, add a
host to a vSphere cluster, or add a vSphere cluster to a workload domain. Add license keys as
appropriate before you begin any of these tasks.
Procedure
6 Click Add.
What to do next
If you want to replace an existing license with a newly added license, you must add and assign
the new license in the management UI (for example, vSphere Client or NSX Manager) of the
component whose license you are replacing.
Procedure
2 Click the vertical ellipsis (three dots) next to the license key and click Edit Description.
3 On the Edit License Key Description dialog, edit the description and click Save.
Procedure
2 Click the vertical ellipsis (three dots) next to the license key you want to delete and click
Remove.
Results
11 ESXi Lockdown Mode
You can activate or deactivate normal lockdown mode in VMware Cloud Foundation to increase
the security of your ESXi hosts.
To activate or deactivate normal lockdown mode in VMware Cloud Foundation, you must perform
operations through the vCenter Server. For information on how to activate or deactivate normal
lockdown mode, see "Lockdown Mode" in vSphere Security at https://docs.vmware.com/en/VMware-vSphere/index.html.
You can activate normal lockdown mode on a host after the host is added to a workload domain.
VMware Cloud Foundation creates service accounts that can be used to access the hosts. Service
accounts are added to the Exception Users list during the bring-up or host commissioning. You
can rotate the passwords for the service accounts using the password management functionality in
the SDDC Manager UI.
12 Storage Management
To create and manage a workload domain, VMware Cloud Foundation requires at least one
shared storage type for all ESXi hosts within a cluster. This initial shared storage type, known
as principal storage, is configured during VxRail first run. Additional shared storage, known as
supplemental storage, can be added using the vSphere Client after a cluster has been created.
Although the management domain requires vSAN as its principal storage, vSAN is not required for
VI workload domains or vSphere clusters.
For a VI workload domain, the initial storage type can be one of the following:
n vSAN
This initial shared storage type is known as principal storage. Principal storage is configured
during the VxRail first run. Once created, the principal storage type for a cluster cannot be
changed. However, a VI workload domain can include multiple clusters with unique principal
storage types.
Additional shared storage types can be added to a cluster in the management domain or a VI
workload domain after it has been created. The additional supported shared storage options
include:
n vSAN
Additional shared storage types are known as supplemental storage. All supplemental storage
must be listed in the VMware Compatibility Guide. Supplemental storage can be manually added
or removed after a cluster has been created using the vSphere Client. Multiple supplemental
storage types can be presented to a cluster in the management domain or any VI workload
domain.
vSAN is typically used as principal storage; however, it can be used as supplemental storage in a cluster when HCI Mesh is implemented.
Storage Type: Supplemental
Consolidated Workload Domain: No
Management Domain: No
VI Workload Domain: Yes
n A minimum of three ESXi hosts that meet the vSAN hardware, cluster, software, networking
and license requirements. For information, see the vSAN Planning and Deployment Guide.
n Perform a VxRail first run specifying the vSAN configuration settings. For information on the
VxRail first run, contact Dell EMC Support.
In some instances SDDC Manager may be unable to automatically mark the host disks as capacity.
Follow the Mark Flash Devices as Capacity Using ESXCLI procedure in the vSAN Planning and
Deployment Guide.
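The referenced procedure relies on esxcli; as an illustrative sketch (the device identifier below is a placeholder), tagging a flash device as a capacity device looks like this:

esxcli vsan storage tag add -d naa.XXXXXXXXXXXXXXXX -t capacityFlash    # mark the named flash device for vSAN capacity use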
n To use vSAN as principal storage for a new cluster, perform the VxRail first run and then add
the VxRail cluster. See Add a VxRail Cluster to a Workload Domain Using the SDDC Manager
UI.
Fibre Channel can only be used as supplemental storage for the management domain and consolidated workload domains; however, it can be used as principal storage for VI workload domains.
Storage Type: Principal
Consolidated Workload Domain: No
Management Domain: No
VI Workload Domain: Yes
n Perform a VxRail first run specifying the VMFS on FC configuration settings. For information
on the VxRail first run, contact Dell EMC Support.
n To use Fibre Channel as principal storage for a new cluster, perform the VxRail first run and
then add the VxRail cluster. See Add a VxRail Cluster to a Workload Domain Using the SDDC
Manager UI
n To use Fibre Channel as supplemental storage, follow the datastore creation procedures in the
vSphere Storage Guide.
VMware Cloud Foundation supports sharing remote datastores with HCI Mesh for VI workload
domains.
You can create an HCI Mesh by mounting remote vSAN datastores on vSAN clusters and enabling
data sharing from vCenter Server. It can take up to 5 minutes for the mounted remote vSAN
datastores to appear.
It is recommended that you do not mount or configure remote vSAN datastores for vSAN clusters
in the management domain.
For more information on sharing remote datastores with HCI Mesh, see "Sharing Remote
Datastores with HCI Mesh" in Administering VMware vSAN 7.0 at https://docs.vmware.com/en/
VMware-vSphere/index.html.
Note After enabling HCI Mesh by mounting remote vSAN datastores, you can migrate VMs from
the local datastore to a remote datastore. Since each cluster has its own VxRail Manager VM, you
should not migrate VxRail Manager VMs to a remote datastore.
13 Workload Domain Management
Workload domains are logical units that carve up the compute, network, and storage resources
of the VMware Cloud Foundation system. The logical units are groups of ESXi hosts managed by
vCenter Server instances with specific characteristics for redundancy and VMware best practices.
The first workload domain, referred to as the management domain, is created during bring-up.
The VMware Cloud Foundation software stack is deployed within the management domain.
Additional infrastructure virtual machines which provide common services, such as backup or
security appliances, can also be deployed in the management domain.
n VMware vSAN
n Using the Workflow Optimization Script to Create a VxRail VI Workload Domain or Add a
VxRail Cluster
You must be careful when adding virtual machines to the management domain. You do not want
to consume excessive resources that would obstruct standard management operations. Excess
capacity consumption can prevent successful virtual machine failovers in the event of a host failure
or maintenance action.
You can add capacity to the management domain by adding one or more hosts. To expand the
management domain, see Expand a Workload Domain.
Procedure
2 In the workload domains table, click the name of the management domain.
5 Create a new virtual machine within the correct resource pool (Resource Pool User VM).
Note Do not move any of the VMware Cloud Foundation management virtual machines out of
the resource pools they were placed in during bring-up.
n Deploys a vCenter Server Appliance for the new VI workload domain within the management
domain. By using a separate vCenter Server instance per VI workload domain, software
updates can be applied without impacting other VI workload domains. It also allows for each
VI workload domain to have additional isolation as needed.
n For the first VI workload domain, the workflow deploys a cluster of three NSX Managers in the
management domain and configures a virtual IP (VIP) address for the NSX Manager cluster.
The workflow also configures an anti-affinity rule between the NSX Manager VMs to prevent
them from being on the same host for high availability. Subsequent VI workload domains can
share an existing NSX Manager cluster or deploy a new one.
n By default, VI workload domains do not include any NSX Edge clusters and are isolated. To
provide north-south routing and network services, add one or more NSX Edge clusters to a VI
workload domain. See Chapter 14 NSX Edge Cluster Management .
Note You can only perform one VI workload domain operation at a time. For example, while
you are deploying a new VI workload domain, you cannot add a cluster to any other VI workload
domain.
n If you plan to use DHCP for the NSX host overlay network, a DHCP server must be configured
on the NSX host overlay VLAN for the VI workload domain. When NSX-T Data Center
creates NSX Edge tunnel endpoints (TEPs) for the VI workload domain, they are assigned
IP addresses from the DHCP server.
Note If you do not plan to use DHCP, you can use a static IP pool for the NSX host overlay
network. The static IP pool is created or selected as part of VI workload domain creation.
n If the management domain in your environment has been upgraded to a version different from
the original installed version, you must download a VI workload domain install bundle for the
current version before you can create a VI workload domain.
n Decide on a name for your VI workload domain. Each VI workload domain must have a unique
name. It is good practice to include the region and site information in the name because
resource object names (such as host and vCenter names) are generated based on the VI
workload domain name. The name can be three to 20 characters long and can contain any
combination of the following:
n Numbers
Note Spaces are not allowed in any of the names you specify when creating a VI workload
domain.
Although the individual VMware Cloud Foundation components support different password
requirements, you must set passwords following a common set of requirements across all
components (a sample local check appears below):
n Minimum length: 12
n Maximum length: 16
n At least one lowercase letter, one uppercase letter, a number, and one of the following
special characters: ! @ # $ ^ *
n Must not be a dictionary word
n Must not be a palindrome
n Verify that you have the completed Planning and Preparation Workbook with the VI workload
domain deployment option included.
n The IP addresses and Fully Qualified Domain Names (FQDNs) for the vCenter Server and NSX
Manager instances must be resolvable by DNS.
n You must have valid license keys for the following products:
n vSAN
n vSphere
Because vSphere and vSAN licenses are per CPU, ensure that you have sufficient licenses
for the ESXi hosts to be used for the VI workload domain. See Chapter 10 License
Management.
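The following is a minimal PowerShell sketch of a local check against the common password requirements listed above. The dictionary-word check is illustrative only and assumes you supply your own word list; it is not a substitute for the validation that each component performs.

function Test-VcfPassword {
    param(
        [string]$Password,
        [string[]]$DictionaryWords = @()   # supply your own word list; illustrative only
    )

    # Reverse the password to test for palindromes.
    $chars = $Password.ToCharArray()
    [array]::Reverse($chars)
    $reversed = -join $chars

    return (
        $Password.Length -ge 12 -and
        $Password.Length -le 16 -and
        $Password -cmatch '[a-z]' -and          # at least one lowercase letter
        $Password -cmatch '[A-Z]' -and          # at least one uppercase letter
        $Password -match '[0-9]' -and           # at least one number
        $Password -match '[!@#$^*]' -and        # at least one allowed special character
        $Password -ne $reversed -and            # not a palindrome
        -not ($DictionaryWords -contains $Password.ToLower())   # not a dictionary word
    )
}

Test-VcfPassword -Password 'VMw@re1!VMw@re1!'   # returns True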
When you use the product UIs, you complete some of the steps in the SDDC Manager UI and
some of the steps in the VxRail Manager UI:
n Add the primary VxRail cluster to the VI workload domain (SDDC Manager UI)
The following documentation describes the process of creating a workload domain using the
product UIs.
Alternatively, you can use the Workflow Optimization script to perform all of the steps to create a
VI workload domain in one place. See Create a VxRail VI Workload Domain Using the Workflow
Optimization Script.
Procedure
1 In the navigation pane, click + Workload Domain and then select VI-VxRail Virtual
Infrastructure Setup.
The name must contain between 3 and 20 characters. It is a good practice to include location
information in the name as resource object names (such as host and vCenter names) are
generated on the basis of the VI workload domain name.
3 Type a name for the organization that requested or will use the virtual infrastructure, such as
Finance and click Next.
4 On the Compute page of the wizard, enter the vCenter Server DNS name.
6 Type and re-type the vCenter Server root password and click Next.
8 On the Validation page, wait until all of the inputs have been successfully validated and then
click Finish.
If validation is unsuccessful, you cannot proceed. Use the Back button to modify your settings
and try again.
What to do next
Add the primary VxRail cluster to the workload domain. The status of the VI workload domain
creation task will be Activating until you do so. See Add the Primary VxRail Cluster to a VI
Workload Domain Using the SDDC Manager UI.
There are two ways to add the primary VxRail cluster to a workload domain, depending on your
use case.
n If you have a single system vSphere Distributed Switch (vDS) used for both system and overlay
traffic, use the SDDC Manager UI.
n If you have two system vSphere Distributed Switches, one used for system traffic and one used
for overlay traffic, use the MultiDvsAutomator script.
n If you have one or two system vSphere Distributed Switches for system traffic and a separate
vDS for overlay traffic, use the MultiDvsAutomator script.
Add the Primary VxRail Cluster to a VI Workload Domain Using the SDDC Manager UI
You can add the primary VxRail cluster to a VI workload domain using the SDDC Manager UI.
Prerequisites
n Create a local user in vCenter Server. This is required for the VxRail first run.
n Image the VI workload domain nodes. For information on imaging the nodes, refer to Dell
EMC VxRail documentation.
n Perform a VxRail first run of the VI workload domain nodes using the vCenter Server for
that workload domain. For information on VxRail first run, refer to the Dell EMC VxRail
documentation.
Procedure
2 In the workload domains table, click the vertical ellipsis (three dots) next to the VI workload
domain in the Activating state and click Add VxRail Cluster.
3 On the Discovered Clusters page, select a VxRail cluster and click Next.
4 On the Discovered Hosts page, enter the SSH password for the discovered hosts and click
Next.
5 On the VxRail Manager page, enter the Admin and Root user names and passwords.
6 On the Thumbprint Verification page, confirm the SSH thumbprints for VxRail Manager and the
ESXi hosts.
7 The Networking page displays all the networking details for the cluster.
For the first VI workload domain, you must create an NSX Manager cluster.
b If you are reusing an existing NSX Manager cluster, select the cluster and click Next.
The networking information for the selected cluster displays and cannot be edited.
c If you are creating a new NSX Manager cluster, enter the VLAN ID for the NSX-T host
overlay (host TEP) network.
Note You can only use a static IP pool for the management domain and VI workload
domains with uniform L2 clusters. For L3 aware or stretch clusters, DHCP is required for
Host Overlay Network TEP IP assignment.
Option Description
DHCP With this option VMware Cloud Foundation uses DHCP for the Host
Overlay Network TEPs.
A DHCP server must be configured on the NSX-T host overlay (Host TEP)
VLAN. When NSX creates TEPs for the VI workload domain, they are
assigned IP addresses from the DHCP server.
Static IP Pool With this option VMware Cloud Foundation uses a static IP pool for the
Host Overlay Network TEPs. You can re-use an existing IP pool or create
a new one.
To create a new static IP Pool provide the following information:
n Pool Name
n Description
n CIDR
n IP Range
n Gateway IP
Make sure the IP range includes enough IP addresses for the number of
hosts that will use the static IP Pool. The number of IP addresses required
depends on the number of pNICs on the ESXi hosts that are used for
the vSphere Distributed Switch that handles host overlay networking. For
example, a host with four pNICs that uses two pNICs for host overlay
traffic requires two IP addresses in the static IP pool.
Note You cannot stretch a cluster that uses static IP addresses for the
NSX-T Host Overlay Network TEPs.
f Click Next.
8 Enter the license keys for NSX-T Data Center and VMware vSAN and click Next.
10 On the Validation page, wait until all of the inputs have been successfully validated.
If validation is unsuccessful, you cannot proceed. Use the Back button to modify your settings
and try again.
11 Click Finish.
What to do next
If you have multiple instances of SDDC Manager that are joined to the same Single Sign-On (SSO)
domain, you must take steps to ensure that certificates are installed correctly. See Configure
Certificates for a Shared Single Sign-On Domain.
Add the Primary VxRail Cluster to a VI Workload Domain Using the MultiDvsAutomator Script
If you have a single system vSphere Distributed Switch (vDS) used for both system and overlay
traffic, you can use the SDDC Manager UI to add the primary VxRail cluster. Otherwise, you can
add the primary VxRail cluster to a VI workload domain using the MultiDvsAutomator Script.
Use the MultiDvsAutomator script to add the primary VxRail cluster if:
n You have two system vSphere Distributed Switches. One is used for system traffic and one is
used for overlay traffic.
n Or, you have one or two system vSphere Distributed switches for system traffic and a separate
vDS for overlay traffic.
Prerequisites
n Create a local user in vCenter Server. This is required for the VxRail first run.
n Image the VI workload domain nodes. For information on imaging the nodes, refer to Dell
EMC VxRail documentation.
n Perform a VxRail first run of the VI workload domain nodes using the vCenter Server for
that workload domain. For information on VxRail first run, refer to the Dell EMC VxRail
documentation.
Procedure
1 Using SSH, log in to the SDDC Manager VM with the user name vcf and the password you
specified in the deployment parameter sheet.
5 When prompted, select a workload domain to which you want to import the cluster.
6 Select a cluster from the list of clusters that are ready to be imported.
8 Choose the vSphere Distributed Switch (vDS) to use for overlay traffic.
2 Select a portgroup on the vDS. The vmnics mapped to the selected port group are
used to configure overlay traffic.
11 Select the IP allocation method for the Host Overlay Network TEPs.
Option Description
DHCP With this option VMware Cloud Foundation uses DHCP for the Host Overlay
Network TEPs.
A DHCP server must be configured on the NSX-T host overlay (Host TEP)
VLAN. When NSX creates TEPs for the VI workload domain, they are
assigned IP addresses from the DHCP server.
Static IP Pool With this option VMware Cloud Foundation uses a static IP pool for the Host
Overlay Network TEPs. You can re-use an existing IP pool or create a new
one.
To create a new static IP Pool provide the following information:
n Pool Name
n Description
n CIDR
n IP Range
n Gateway IP
Make sure the IP range includes enough IP addresses for the number of
hosts that will use the static IP Pool. The number of IP addresses required
depends on the number of pNICs on the ESXi hosts that are used for
the vSphere Distributed Switch that handles host overlay networking. For
example, a host with four pNICs that uses two pNICs for host overlay traffic
requires two IP addresses in the static IP pool.
Note You cannot stretch a cluster that uses static IP addresses for the NSX-T
Host Overlay Network TEPs.
12 Enter and confirm the VxRail Manager root and admin passwords.
13 Confirm the SSH thumbprints for VxRail Manager and the ESXi hosts.
14 Select the license keys for VMware vSAN and NSX-T Data Center.
16 When validation succeeds, press Enter to import the primary VxRail cluster.
What to do next
If you have multiple instances of SDDC Manager that are joined to the same Single Sign-On (SSO)
domain, you must take steps to ensure that certificates are installed correctly. See Configure
Certificates for a Shared Single Sign-On Domain.
n VMware Cloud Foundation supports a single remote cluster per VMware Cloud Foundation
instance.
n A VI workload domain can include local clusters or a remote cluster, but not both.
The prerequisites for deploying a VI workload domain with a remote cluster are:
n Ensure that you meet the general prerequisites for deploying a VI workload domain. See
Prerequisites for a Workload Domain.
n Dedicated WAN connectivity is required between central site and VMware Cloud Foundation
Remote Clusters site.
n Primary and secondary active WAN links are recommended for connectivity from the central
site to the VMware Cloud Foundation Remote Clusters site. Without redundant WAN links, two
failure states, WAN link failure or NSX Edge node failure, can result in unrecoverable VMs and
application failure at the VMware Cloud Foundation Remote Clusters site.
n Minimum bandwidth of 10 Mbps and maximum latency of 50 ms are required between the central
VMware Cloud Foundation instance and the VMware Cloud Foundation Remote Clusters site (a
latency spot check example follows this list).
n The network at the VMware Cloud Foundation Remote Clusters site must be able to reach the
management network at the central site.
n DNS and NTP servers must be available locally at, or reachable from, the VMware Cloud
Foundation Remote Clusters site.
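The following is a minimal PowerShell 7 sketch of a latency spot check toward the remote site. The target FQDN is a placeholder, and the check does not validate the bandwidth requirement.

# Minimal sketch, assuming PowerShell 7 and a placeholder target at the remote site.
$remoteSite = "remote-site-gateway.example.com"

$averageLatency = (Test-Connection -TargetName $remoteSite -Count 10 |
    Measure-Object -Property Latency -Average).Average

Write-Output "Average round-trip latency to ${remoteSite}: $averageLatency ms (requirement: 50 ms or less)"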
For information on enabling Workload Management (vSphere with Tanzu) on a cluster deployed at
a remote site, see Chapter 16 Workload Management .
Deleting a VI workload domain also removes the components associated with the VI workload
domain from the management domain. This includes the vCenter Server instance and the NSX
Manager cluster instances.
Note If the NSX Manager cluster is shared with any other VI workload domains, it will not be
deleted.
Caution Deleting a workload domain is an irreversible operation. All clusters and virtual machines
within the VI workload domain are deleted and the underlying datastores are destroyed.
It can take up to 20 minutes for a VI workload domain to be deleted. During this process, you
cannot perform any operations on workload domains.
Prerequisites
n If remote vSAN datastores are mounted on a cluster in the VI workload domain, then the
VI workload domain cannot be deleted. To delete such VI workload domains, you must
first migrate any virtual machines from the remote datastore to the local datastore and then
unmount the remote vSAN datastores from vCenter Server.
n If you require access after deleting a VI workload domain, back up the data. The datastores on
the VI workload domain are destroyed when it is deleted.
n Migrate the virtual machines that you want to keep to another workload domain using cross
vCenter vMotion (see the example after this list).
n Delete any workload virtual machines created outside VMware Cloud Foundation before
deleting the VI workload domain.
n Delete any NSX Edge clusters hosted on the VI workload domain. See KB 78635.
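The following is a minimal PowerCLI sketch of migrating a single virtual machine to another workload domain with Cross vCenter vMotion. All object names are placeholders; confirm target networking and storage before migrating production virtual machines.

# Minimal sketch, assuming PowerCLI and placeholder vCenter Server, host, datastore,
# and port group names in the source and target workload domains.
$sourceVC = Connect-VIServer -Server "sfo-w01-vc01.example.com"
$targetVC = Connect-VIServer -Server "sfo-w02-vc01.example.com"

$vm           = Get-VM -Name "app-vm01" -Server $sourceVC
$targetHost   = Get-VMHost -Name "esxi01.example.com" -Server $targetVC
$targetStore  = Get-Datastore -Name "sfo-w02-cl01-ds-vsan01" -Server $targetVC
$targetPortGr = Get-VDPortgroup -Name "app-network" -Server $targetVC

# Cross vCenter vMotion: move compute, storage, and network in one operation.
Move-VM -VM $vm -Destination $targetHost -Datastore $targetStore `
    -NetworkAdapter (Get-NetworkAdapter -VM $vm) -PortGroup $targetPortGr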
Procedure
2 Click the vertical ellipsis (three dots) next to the VI workload domain you want to delete and
click Delete Domain.
3 On the Delete Workload Domain dialog box, click Delete Workload Domain.
A message indicating that the VI workload domain is being deleted appears. When the
removal process is complete, the VI workload domain is removed from the domains table.
Procedure
2 In the workload domains table, click the name of the workload domain.
The workload domain details page displays CPU, memory, and storage allocated to the
workload domain. The tabs on the page display additional information as described in the
table below.
Services SDDC software stack components deployed for the workload domain's virtual environment
and their IP addresses. Click a component name to navigate to that aspect of the virtual
environment. For example, click vCenter Server to reach the vSphere Client for that workload
domain.
All the capabilities of a VMware SDDC are available to you in the VI workload domain's
environment, such as creating, provisioning, and deploying virtual machines, configuring the
software-defined networking features, and so on.
Hosts Names, IP addresses, status, associated clusters, and capacity utilization of the hosts in the
workload domain and the network pool they are associated with.
Clusters Names of the clusters, number of hosts in the clusters, and their capacity utilization.
Edge Clusters Names of the NSX Edge clusters, NSX Edge nodes, and their status.
Security Default certificates for the VMware Cloud Foundation components. For more information, see
Chapter 9 Certificate Management.
When you use the product UIs, you complete some of the steps in the SDDC Manager UI and
some of the steps in the VxRail Manager UI:
n After imaging the workload domain nodes, perform the VxRail first run (VxRail Manager UI)
n Add the VxRail cluster to the workload domain (SDDC Manager UI)
The following documentation describes the process of expanding a workload domain using the
product UIs.
Alternatively, you can use the Workflow Optimization script to perform all of the steps to expand a
workload domain in one place. See Add a VxRail Cluster Using the Workflow Optimization Script.
There are two ways to add a new VxRail cluster to a workload domain, depending on your use
case.
n If you have a single system vSphere Distributed Switch (vDS) used for both system and overlay
traffic, use the SDDC Manager UI.
n If you have two system vSphere Distributed Switches, one used for system traffic and one used
for overlay traffic, use the MultiDvsAutomator script.
n If you have one or two system vSphere Distributed Switches for system traffic and a separate
vDS for overlay traffic, use the MultiDvsAutomator script.
Use the SDDC Manager UI to add a VxRail cluster if you have a single system vSphere Distributed
Switch (vDS) used for both system and overlay traffic.
Prerequisites
n Create a local user in vCenter Server as this is an external server deployed by VMware Cloud
Foundation. This is required for the VxRail first run.
n Image the workload domain nodes. For information on imaging the nodes, refer to Dell EMC
VxRail documentation.
n Perform a VxRail first run of the workload domain nodes using the vCenter Server for
that workload domain. For information on VxRail first run, refer to the Dell EMC VxRail
documentation.
Procedure
1 In the navigation pane, click Inventory > Workload Domains. The Workload Domains page
displays information for all workload domains.
2 In the workload domains table, hover your mouse in the VxRail workload domain row.
A set of three dots appears on the left of the workload domain name.
4 On the Discovered Clusters page, review the VxRail cluster discovered in vCenter Server and click Next.
5 On the Discovered Hosts page, enter the SSH password for the discovered hosts and click
Next.
6 On the VxRail Manager page, enter the Admin and Root user names and passwords.
7 On the Thumbprint Verification page, confirm the SSH thumbprints for VxRail Manager and the
ESXi hosts.
8 On the Networking page, enter the NSX-T host overlay (Host TEP) VLAN of the management
domain.
9 Select the IP allocation method, provide the required information, and click Next.
Note You can only use a static IP pool for the management domain and VI workload domains
with uniform L2 clusters. For L3 aware or stretch clusters, DHCP is required for Host Overlay
Network TEP IP assignment.
Option Description
DHCP With this option VMware Cloud Foundation uses DHCP for the Host Overlay
Network TEPs.
Static IP Pool With this option VMware Cloud Foundation uses a static IP pool for the Host
Overlay Network TEPs. You can re-use an existing IP pool or create a new
one.
To create a new static IP Pool provide the following information:
n Pool Name
n Description
n CIDR
n IP Range
n Gateway IP
Make sure the IP range includes enough IP addresses for the number of
hosts that will use the static IP Pool. The number of IP addresses required
depends on the number of pNICs on the ESXi hosts that are used for
the vSphere Distributed Switch that handles host overlay networking. For
example, a host with four pNICs that uses two pNICs for host overlay traffic
requires two IP addresses in the static IP pool.
Note You cannot stretch a cluster that uses static IP addresses for the NSX-T
Host Overlay Network.
10 Enter the license keys for NSX-T Data Center and VMware vSAN. Click Next.
12 On the Validation page, wait until all of the inputs have been successfully validated.
If validation is unsuccessful, you cannot proceed. Use the Back button to modify your settings
and try again.
13 Click Finish.
Use the MultiDvsAutomator script to add a VxRail cluster if:
n You have two system vSphere Distributed Switches and want to use one of them for overlay
traffic.
n Or, you have one or two system vSphere Distributed Switches for system traffic and want to
use a separate vDS for overlay traffic.
Prerequisites
n Create a local user in vCenter Server as this is an external server deployed by VMware Cloud
Foundation. This is required for the VxRail first run.
n Image the workload domain nodes. For information on imaging the nodes, refer to Dell EMC
VxRail documentation.
n Perform a VxRail first run of the workload domain nodes using the vCenter Server for
that workload domain. For information on VxRail first run, refer to the Dell EMC VxRail
documentation.
Procedure
1 Using SSH, log in to the SDDC Manager VM with the user name vcf and the password you
specified in the deployment parameter sheet.
5 When prompted, select a workload domain to which you want to import the cluster.
6 Select a cluster from the list of clusters that are ready to be imported.
8 Choose the vSphere Distributed Switch (vDS) to use for overlay traffic.
2 Select a portgroup on the vDS. The vmnics mapped to the selected port group are
used to configure overlay traffic.
10 Select the IP allocation method for the Host Overlay Network TEPs.
Option Description
DHCP With this option VMware Cloud Foundation uses DHCP for the Host Overlay
Network TEPs.
A DHCP server must be configured on the NSX-T host overlay (Host TEP)
VLAN. When NSX creates TEPs for the VI workload domain, they are
assigned IP addresses from the DHCP server.
Static IP Pool With this option VMware Cloud Foundation uses a static IP pool for the Host
Overlay Network TEPs. You can re-use an existing IP pool or create a new
one.
To create a new static IP Pool provide the following information:
n Pool Name
n Description
n CIDR
n IP Range
n Gateway IP
Make sure the IP range includes enough IP addresses for the number of
hosts that will use the static IP Pool. The number of IP addresses required
depends on the number of pNICs on the ESXi hosts that are used for
the vSphere Distributed Switch that handles host overlay networking. For
example, a host with four pNICs that uses two pNICs for host overlay traffic
requires two IP addresses in the static IP pool.
Note You cannot stretch a cluster that uses static IP addresses for the NSX-T
Host Overlay Network TEPs.
11 Enter and confirm the VxRail Manager root and admin passwords.
12 Confirm the SSH thumbprints for VxRail Manager and the ESXi hosts.
13 Select the license keys for VMware vSAN and NSX-T Data Center.
The process of expanding the VxRail cluster for a workload domain involves three steps:
2 Discover and add new node to the cluster using the VxRail Manager plugin for vCenter Server.
See the Dell EMC documentation.
3 Add the host to the VMware Cloud Foundation domain cluster. The next section provides
more details about this task.
If the vSphere cluster hosts an NSX-T Edge cluster, you can only add new hosts with the same
management, uplink, host TEP, and Edge TEP networks (L2 uniform) as the existing hosts.
If the cluster to which you are adding hosts uses a static IP pool for the Host Overlay Network
TEPs, that pool must include enough IP addresses for the hosts you are adding. The number of
IP addresses required depends on the number of pNICs on the ESXi hosts that are used for the
vSphere Distributed Switch that handles host overlay networking. For example, a host with four
pNICs that uses two pNICs for host overlay traffic requires two IP addresses in the static IP pool.
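As a quick illustration of this sizing rule, the following sketch calculates the minimum number of static IP pool addresses needed for the hosts you plan to add; the values are examples only.

# Minimal sketch: size the Host Overlay Network static IP pool. Example values only.
$hostCount           = 4   # ESXi hosts that will use the static IP pool
$overlayPnicsPerHost = 2   # pNICs per host used by the vDS that handles host overlay traffic

$requiredTepAddresses = $hostCount * $overlayPnicsPerHost
Write-Output "The IP Range must include at least $requiredTepAddresses addresses."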
Procedure
2 In the workload domains table, click the name of the workload domain that you want to
expand.
4 Click the name of the cluster where you want to add a host.
This option only appears if the vSphere cluster hosts an NSX-T Edge cluster.
Option Description
L2 Uniform Select if all hosts you are adding to the vSphere cluster have the same
management, uplink, host TEP, and Edge TEP networks as the existing hosts
in the vSphere cluster.
L2 non-uniform and L3 You cannot proceed if any of the hosts you are adding to the vSphere
cluster have different networks than the existing hosts in the vSphere cluster.
VMware Cloud Foundation does not support adding hosts to L2 non-uniform
and L3 vSphere clusters that host an NSX-T Edge cluster.
7 On the Discovered Hosts page, enter the SSH password for the host and click Add.
8 On the Thumbprint Verification page, confirm the SSH thumbprints for the ESXi hosts.
9 On the Validation page, wait until all of the inputs have been successfully validated.
If validation is unsuccessful, you cannot proceed. Use the Back button to modify your settings
and try again.
10 Click Finish.
When a host is removed, the vSAN members are reduced. Ensure that you have enough hosts
remaining to facilitate the configured vSAN availability. Failure to do so might result in the
datastore being marked as read-only or in data loss.
Prerequisites
Use the vSphere Client to make sure that there are no critical alarms on the cluster from which you
want to remove the host.
Procedure
2 In the workload domains table, click the name of the workload domain that you want to
modify.
4 Click the name of the cluster from which you want to remove a host.
The details page for the cluster appears with a message indicating that the host is being
removed. When the removal process is complete, the host is removed from the hosts table
and deleted from vCenter Server.
You cannot delete the last cluster in a workload domain. Instead, delete the workload domain.
Prerequisites
n If vSAN remote datastores are mounted on the cluster, the cluster cannot be deleted. To
delete such clusters, you must first migrate any VMs from the remote datastore to the local
datastore and then unmount the vSAN remote datastores from vCenter Server.
n Delete any workload VMs created outside of VMware Cloud Foundation before deleting the
cluster.
n Migrate or back up the VMs and data on the datastore associated with the cluster to another
location.
n Delete the NSX Edge clusters hosted on the VxRail cluster or shrink the NSX Edge cluster
by deleting Edge nodes hosted on the VxRail cluster. You cannot delete Edge nodes if doing
so would result in an Edge cluster with fewer than two Edge nodes. For information about
deleting an NSX Edge cluster, see KB 78635.
Procedure
The Workload Domains page displays information for all workload domains.
2 Click the name of the workload domain that contains the cluster you want to delete.
3 Click the Clusters tab to view the clusters in the workload domain.
5 Click the three dots next to the cluster name and click Delete VxRail Cluster.
6 Click Delete Cluster to confirm that you want to delete the cluster.
The details page for the workload domain appears with a message indicating that the cluster is
being deleted. When the removal process is complete, the cluster is removed from the clusters
table.
Use the script to avoid jumping back and forth between the SDDC Manager UI and the VxRail
Manager UI to complete these tasks.
The Workflow Optimization script uses the VMware Cloud Foundation on Dell EMC VxRail API to
perform all of the steps to create a VI workload domain in one place. See Create a Domain with
Workflow Optimization for more information about the API.
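For orientation, the following is a minimal sketch of calling the same public API directly from PowerShell: it requests a token and lists existing workload domains, while the full VxRail domain-creation specification is described in the API documentation referenced above. The FQDN and credentials are placeholders.

# Minimal sketch, assuming the SDDC Manager /v1/tokens and /v1/domains endpoints.
# Add -SkipCertificateCheck (PowerShell 7) to each call if SDDC Manager uses a self-signed certificate.
$sddcManager = "sfo-vcf01.sfo.rainpole.io"    # placeholder FQDN
$body = @{ username = "administrator@vsphere.local"; password = "VMw@re1!" } | ConvertTo-Json

$token = (Invoke-RestMethod -Method POST -Uri "https://$sddcManager/v1/tokens" `
    -ContentType "application/json" -Body $body).accessToken

$headers = @{ Authorization = "Bearer $token" }

# List existing workload domains. Domain creation with workflow optimization is a POST
# to /v1/domains with a VxRail workload domain specification (see the referenced API topic).
$domains = Invoke-RestMethod -Method GET -Uri "https://$sddcManager/v1/domains" -Headers $headers
$domains.elements | Format-Table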
Prerequisites
In addition to the standard Prerequisites for a Workload Domain, using the Workflow Optimization
script requires the following:
Procedure
3 Using SSH, log in to the SDDC Manager VM with the user name vcf and the password you
specified in the deployment parameter sheet.
The Workflow Optimization script uses the VMware Cloud Foundation on Dell EMC VxRail API to
perform all of the steps to add a VxRail cluster in one place. See Create a Cluster with Workflow
Optimization for more information about the API.
Prerequisites
n Image the workload domain nodes. For information on imaging the nodes, refer to Dell EMC
VxRail documentation.
n The IP addresses and Fully Qualified Domain Names (FQDNs) for the ESXi hosts, VxRail
Manager, and NSX Manager instances must be resolvable by DNS.
n If you are using DHCP for the NSX Host Overlay Network, a DHCP server must be configured
on the NSX Host Overlay VLAN of the management domain. When NSX-T Data Center creates
TEPs for the VI workload domain, they are assigned IP addresses from the DHCP server.
Procedure
3 Using SSH, log in to the SDDC Manager VM with the user name vcf and the password you
specified in the deployment parameter sheet.
Prerequisites
n The VxRail Manager static IP, 192.168.10.200, must be reachable and the UI must be available.
Procedure
2 Select Network.
3 From the Servers drop-down menu, select /rest/vxm - VxRail Manager Server.
7 Click Execute.
What to do next
Update the VxRail Manager certificate. See Update the VxRail Manager Certificate.
Prerequisites
Procedure
1 Using SSH, log in to VxRail Manager VM using the management IP address, with the user
name mystic and default mystic password.
2 Type su to switch to the root account and enter the default root password.
./generate_ssl.sh VxRail-Manager-FQDN
Procedure
2 Click the vertical ellipsis (three dots) in the Domain row for the workload domain you want to
rename and click Rename Domain.
3 Enter a new name for the workload domain and click Rename.
Procedure
The cluster detail page appears. The tabs on the page display additional information as
described in the table below.
Summary Organization, vSAN storage parameters, and overlay networking VLAN ID.
Hosts Summary details about each host in the vSphere cluster. You can click a name in the FQDN
column to access the host summary page.
What to do next
You can add or remove a host, or access the vSphere Client from this page.
Rename a Cluster
You can use the vSphere Client to rename a cluster managed by SDDC Manager. The SDDC
Manager UI is updated with the new name.
Prerequisites
n Do not rename a cluster that belongs to a failed VI workload domain workflow, cluster workflow,
or host workflow. If you rename a cluster that belongs to a failed workflow, restarting the failed
workflow is not supported.
Procedure
6 In the vSphere Client, right-click the cluster and then click Rename.
Note It takes up to two minutes for the new name to appear on the SDDC Manager UI.
14 NSX Edge Cluster Management
You can deploy NSX Edge clusters with 2-tier routing to provide north-south routing and network
services in the management domain and VI workload domains.
An NSX Edge cluster is a logical grouping of NSX Edge nodes run on a vSphere cluster. NSX-T
Data Center supports a 2-tier routing model.
By default, workload domains do not include any NSX Edge clusters and workloads are isolated,
unless VLAN-backed networks are configured in vCenter Server. Add one or more NSX Edge
clusters to a workload domain to provide software-defined routing and network services.
Note You must create an NSX Edge cluster on the default management vSphere cluster in order
to deploy vRealize Suite products.
You can add multiple NSX Edge clusters to the management or VI workload domains for scalability
and resiliency. VMware Cloud Foundation supports creating a maximum of 32 Edge clusters per
NSX Manager cluster and 16 Edge clusters per vSphere cluster for Edge clusters deployed through
SDDC Manager or the VMware Cloud Foundation API. For scaling beyond these limits, you can
deploy additional NSX Edge clusters through NSX Manager and scale up to the NSX-T Data
Center supported maximums. For VMware Cloud Foundation configuration maximums, refer
to the VMware Configuration Maximums website.
Note Unless explicitly stated in this matrix, VMware Cloud Foundation supports the configuration
maximums of the underlying products. Refer to the individual product configuration maximums as
appropriate.
The north-south routing and network services provided by an NSX Edge cluster created for a
workload domain are shared with all other workload domains that use the same NSX Manager
cluster.
n Verify that separate VLANs and subnets are available for the NSX host overlay VLAN and NSX
Edge overlay VLAN. You cannot use DHCP for the NSX Edge overlay VLAN.
n Verify that the NSX host overlay VLAN and NSX Edge overlay VLAN are routed to each other.
n For dynamic routing, set up two Border Gateway Protocol (BGP) peers on Top of Rack (ToR)
switches with an interface IP, BGP autonomous system number (ASN), and BGP password.
n Reserve a BGP ASN to use for the NSX Edge cluster’s Tier-0 gateway.
n Verify that DNS entries for the NSX Edge nodes are populated in the customer-managed DNS
server.
n The vSphere cluster hosting an NSX Edge cluster must include hosts with identical
management, uplink, host TEP, and NSX Edge overlay TEP networks (L2 uniform).
n You cannot deploy an NSX Edge cluster on a vSphere cluster that is stretched. You can stretch
an L2 uniform vSphere cluster that hosts an NSX Edge cluster.
n The management network and management network gateway for the NSX Edge nodes must
be reachable from the NSX host overlay and NSX Edge overlay VLANs.
SDDC Manager does not enforce rack failure resiliency for NSX Edge clusters. Make sure that the
number of NSX Edge nodes that you add to an NSX Edge cluster, and the vSphere clusters to
which you deploy the NSX Edge nodes, are sufficient to provide NSX Edge routing services in case
of rack failure.
After you create an NSX Edge cluster, you can use SDDC Manager to expand or shrink it by
adding or deleting NSX Edge nodes.
This procedure describes how to use SDDC Manager to create an NSX Edge cluster with NSX
Edge node virtual appliances. If you have latency intensive applications in your environment, you
can deploy NSX Edge nodes on bare-metal servers. See Deployment of VMware NSX-T Edge
Nodes on Bare-Metal Hardware for VMware Cloud Foundation 4.0.x.
Prerequisites
Procedure
2 In the Workload Domains page, click a domain name in the Domain column.
5 Enter the configuration settings for the NSX Edge cluster and click Next.
Setting Description
Edge Cluster Name Enter a name for the NSX Edge cluster.
MTU Enter the MTU for the NSX Edge cluster. The MTU can be 1600-9000.
ASN Enter an autonomous system number (ASN) for the NSX Edge cluster.
Edge Cluster Profile Type Select Default or, if your environment requires specific Bidirectional
Forwarding Detection (BFD) configuration, select Custom.
Edge Cluster Profile Name Enter an NSX Edge cluster profile name. (Custom Edge cluster profile only)
BFD Allowed Hop Enter the number of multi-hop Bidirectional Forwarding Detection (BFD)
sessions allowed for the profile. (Custom Edge cluster profile only)
BFD Declare Dead Multiple Enter the number of times the BFD packet is not received before the
session is flagged as down. (Custom Edge cluster profile only)
Setting Description
BFD Probe Interval (milliseconds) BFD is a detection protocol used to identify forwarding path failures.
Enter a number to set the interval timing for BFD to detect a forwarding path failure. (Custom Edge
cluster profile only)
Standby Relocation Threshold (minutes) Enter a standby relocation threshold in minutes. (Custom
Edge cluster profile only)
Edge Root Password Enter and confirm the password to be assigned to the root account of the
NSX Edge appliance.
Edge Admin Password Enter and confirm the password to be assigned to the admin account of the
NSX Edge appliance.
Edge Audit Password Enter and confirm the password to be assigned to the audit account of the
NSX Edge appliance.
The passwords must meet the following requirements:
n At least 12 characters
n No dictionary words
n No palindromes
Setting Description
Edge Form Factor n Small: 4 GB memory, 2 vCPU, 200 GB disk space. The NSX Edge Small
VM appliance size is suitable for lab and proof-of-concept deployments.
n Medium: 8 GB memory, 4 vCPU, 200 GB disk space. The NSX Edge
Medium appliance size is suitable for production environments with load
balancing.
n Large: 32 GB memory, 8 vCPU, 200 GB disk space. The NSX Edge
Large appliance size is suitable for production environments with load
balancing.
n XLarge: 64 GB memory, 16 vCPU, 200 GB disk space. The NSX Edge
Extra Large appliance size is suitable for production environments with
load balancing.
Tier-0 Service High Availability In the active-active mode, traffic is load balanced across all members. In
active-standby mode, all traffic is processed by an elected active member. If
the active member fails, another member is elected to be active.
Workload Management requires Active-Active.
Some services are only supported in Active-Standby: NAT, load balancing,
stateful firewall, and VPN. If you select Active-Standby, use exactly two NSX
Edge nodes in the NSX Edge cluster.
Tier-0 Routing Type Select Static or EBGP to determine the route distribution mechanism for the
tier-0 gateway. If you select Static, you must manually configure the required
static routes in NSX Manager. If you select EBGP, VMware Cloud Foundation
configures eBGP settings to allow dynamic route distribution.
7 Enter the configuration settings for the first NSX Edge node and click Add Edge Node.
Setting Description
Edge Node Name (FQDN) Enter the FQDN for the NSX Edge node. Each node must have a unique
FQDN.
Management IP (CIDR) Enter the management IP for the NSX Edge node in CIDR format. Each node
must have a unique management IP.
Management Gateway Enter the IP address for the management network gateway.
Edge TEP 1 IP (CIDR) Enter the CIDR for the first NSX Edge TEP. Each node must have a unique
Edge TEP 1 IP.
Setting Description
Edge TEP 2 IP (CIDR) Enter the CIDR for the second NSX Edge TEP. Each node must have a unique
Edge TEP 2 IP. The Edge TEP 2 IP must be different than the Edge TEP 1 IP.
Edge TEP Gateway Enter the IP address for the NSX Edge TEP gateway.
Edge TEP VLAN Enter the NSX Edge TEP VLAN ID.
Cluster Type Select L2 Uniform if all hosts in the vSphere cluster have identical
management, uplink, host TEP, and Edge TEP networks.
Select L2 non-uniform and L3 if any of the hosts in the vSphere cluster have
different networks.
First NSX VDS Uplink Click Advanced Cluster Settings to map the first NSX Edge node uplink
network interface to a physical NIC on the host, by specifying the ESXi uplink.
The default is uplink1.
When you create an NSX Edge cluster, SDDC Manager creates two trunked
VLAN port groups. The information you enter here determines the active
uplink on the first VLAN port group. If you enter uplink3, then uplink3 is the
active uplink and the uplink you specify for the second NSX VDS uplink is the
standby uplink.
The uplink must be prepared for overlay use.
Second NSX VDS Uplink Click Advanced Cluster Settings to map the second NSX Edge node uplink
network interface to a physical NIC on the host, by specifying the ESXi uplink.
The default is uplink2.
When you create an NSX Edge cluster, SDDC Manager creates two trunked
VLAN port groups. The information you enter here determines the active
uplink on the second VLAN port group. If you enter uplink4, then uplink4 is
the active uplink and the uplink you specify for the first NSX VDS uplink is the
standby uplink.
The uplink must be prepared for overlay use.
First Tier-0 Uplink VLAN Enter the VLAN ID for the first uplink.
This is a link from the NSX Edge node to the first uplink network.
First Tier-0 Uplink Interface IP (CIDR) Enter the CIDR for the first uplink. Each node must have
unique uplink interface IPs.
Peer IP (CIDR) Enter the CIDR for the first uplink peer. (EBGP only)
Peer ASN Enter the ASN for the first uplink peer. (EBGP only)
BGP Peer Password Enter and confirm the BGP password. (EBGP only).
Second Tier-0 Uplink VLAN Enter the VLAN ID for the second uplink.
This is a link from the NSX Edge node to the second uplink network.
Second Tier-0 Uplink Interface IP (CIDR) Enter the CIDR for the second uplink. Each node must
have unique uplink interface IPs. The second uplink interface IP must be different than the first
uplink interface IP.
Peer IP (CIDR) Enter the CIDR for the second uplink peer. (EBGP only)
Setting Description
ASN Peer Enter the ASN for the second uplink peer. (EBGP only)
BGP Peer Password Enter and confirm the BGP password. (EBGP only).
8 Click Add More Edge Nodes to enter configuration settings for additional NSX Edge nodes.
A minimum of two NSX Edge nodes is required. NSX Edge cluster creation allows up to 8 NSX
Edge nodes if the Tier-0 Service High Availability is Active-Active and two NSX Edge nodes
per NSX Edge cluster if the Tier-0 Service High Availability is Active-Standby.
9 When you are done adding NSX Edge nodes, click Next.
11 If validation fails, use the Back button to edit your settings and try again.
To edit or delete any of the NSX Edge nodes, click the three vertical dots next to an NSX Edge
node in the table and select an option from the menu.
Example
The following example shows a scenario with sample data. You can use the example to guide you
in creating NSX Edge clusters in your environment.
Figure: Example topology for an NSX Edge cluster with two Edge VMs, an active/active Tier-0
gateway (ASN 65005) using ECMP uplinks over VLANs to the physical network, and a Tier-1
gateway connected to segments for workload VMs.
What to do next
In NSX Manager, you can create segments connected to the NSX Edge cluster's tier-1 gateway.
You can connect workload virtual machines to these segments to provide north-south and east-
west connectivity.
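The following is a minimal sketch of creating such a segment through the NSX-T Policy API instead of the NSX Manager UI. The NSX Manager FQDN, credentials, tier-1 gateway path, transport zone path, and subnet are placeholders for your environment.

# Minimal sketch, assuming the NSX-T Policy API segment endpoint and placeholder values.
# Add -SkipCertificateCheck (PowerShell 7) if NSX Manager uses a self-signed certificate.
$nsxManager = "sfo-w01-nsx01.example.com"
$basicAuth  = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("admin:VMw@re1!VMw@re1!"))
$headers    = @{ Authorization = "Basic $basicAuth" }

$segment = @{
    display_name        = "sfo-w01-seg01"
    connectivity_path   = "/infra/tier-1s/sfo-w01-ec01-t1-gw01"    # placeholder tier-1 gateway path
    transport_zone_path = "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-id>"
    subnets             = @(@{ gateway_address = "192.168.100.1/24" })
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method PATCH -Uri "https://$nsxManager/policy/api/v1/infra/segments/sfo-w01-seg01" `
    -Headers $headers -ContentType "application/json" -Body $segment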
You might want to add NSX Edge nodes to an NSX Edge cluster in the following cases:
n When the Tier-0 Service High Availability is Active-Standby and you require more than two
NSX Edge nodes for services.
n When the Tier-0 Service High Availability is Active-Active and you require more than 8 NSX
Edge nodes for services.
n When you add Supervisor Clusters to a Workload Management workload domain and need to
support additional tier-1 gateways and services.
The available configuration settings for a new NSX Edge node vary based on:
n The Tier-0 Service High Availability setting (Active-Active or Active-Standby) of the NSX Edge
cluster.
n The Tier-0 Routing Type setting (static or EBGP) of the NSX Edge cluster.
n Whether the new NSX Edge node is going to be hosted on the same vSphere cluster as the
existing NSX Edge nodes (in-cluster) or on a different vSphere cluster (cross-cluster).
Prerequisites
n Verify that separate VLANs and subnets are available for the NSX host overlay VLAN and NSX
Edge overlay VLAN. You cannot use DHCP for the NSX Edge overlay VLAN.
n Verify that the NSX host overlay VLAN and NSX Edge overlay VLAN are routed to each other.
n For dynamic routing, set up two Border Gateway Protocol (BGP) peers on Top of Rack (ToR)
switches with an interface IP, BGP autonomous system number (ASN), and BGP password.
n Reserve a BGP ASN to use for the NSX Edge cluster’s Tier-0 gateway.
n Verify that DNS entries for the NSX Edge nodes are populated in the customer-managed DNS
server.
n The vSphere cluster hosting the NSX Edge nodes must include hosts with identical
management, uplink, host TEP, and NSX Edge overlay TEP networks (L2 uniform).
n The vSphere cluster hosting the NSX Edge nodes must have the same pNIC speed for NSX-
enabled VDS uplinks chosen for Edge overlay.
n All NSX Edge nodes in an NSX Edge cluster must use the same set of NSX-enabled VDS
uplinks. These uplinks must be prepared for overlay use.
n The NSX Edge cluster must be hosted on one or more vSphere clusters from the same
workload domain.
Procedure
2 In the Workload Domains page, click a domain name in the Domain column.
4 Click the vertical ellipsis menu for the Edge Cluster you want to expand and select Expand
Edge Cluster.
6 Enter and confirm the passwords for the NSX Edge cluster.
8 Enter the configuration settings for the new NSX Edge node and click Add Edge Node.
Setting Description
Edge Node Name (FQDN) Enter the FQDN for the NSX Edge node. Each node must have a unique
FQDN.
Management IP (CIDR) Enter the management IP for the NSX Edge node in CIDR format. Each node
must have a unique management IP.
Management Gateway Enter the IP address for the management network gateway.
Edge TEP 1 IP (CIDR) Enter the CIDR for the first NSX Edge TEP. Each node must have a unique
Edge TEP 1 IP.
Edge TEP 2 IP (CIDR) Enter the CIDR for the second NSX Edge TEP. Each node must have a unique
Edge TEP 2 IP. The Edge TEP 2 IP must be different than the Edge TEP 1 IP.
Edge TEP Gateway Enter the IP address for the NSX Edge TEP gateway.
Edge TEP VLAN Enter the NSX Edge TEP VLAN ID.
Cluster Type Select L2 Uniform if all hosts in the vSphere cluster have identical
management, uplink, host TEP, and Edge TEP networks.
Select L2 non-uniform and L3 if any of the hosts in the vSphere cluster have
different networks.
Setting Description
First NSX VDS Uplink Specify an ESXi uplink to map the first NSX Edge node uplink network
interface to a physical NIC on the host. The default is uplink1.
The information you enter here determines the active uplink on the first
VLAN port group used by the NSX Edge node. If you enter uplink3, then
uplink3 is the active uplink and the uplink you specify for the second NSX
VDS uplink is the standby uplink.
(cross-cluster only)
Note For in-cluster NSX Edge cluster expansion, new NSX Edge nodes use
the same NSX VDS uplinks as the other Edge nodes hosted on the vSphere
cluster.
Second NSX VDS Uplink Specify an ESXi uplink to map the second NSX Edge node uplink network
interface to a physical NIC on the host. The default is uplink2.
The information you enter here determines the active uplink on the second
VLAN port group used by the NSX Edge node. If you enter uplink4, then
uplink4 is the active uplink and the uplink you specify for the first NSX VDS
uplink is the standby uplink.
(cross-cluster only)
Note For in-cluster NSX Edge cluster expansion, new NSX Edge nodes use
the same NSX VDS uplinks as the other Edge nodes hosted on the vSphere
cluster.
Add Tier-0 Uplinks Optional. Click Add Tier-0 Uplinks to add tier-0 uplinks.
(Active-Active only)
First Tier-0 Uplink VLAN Enter the VLAN ID for the first uplink.
This is a link from the NSX Edge node to the first uplink network.
(Active-Active only)
First Tier-0 Uplink Interface IP (CIDR) Enter the CIDR for the first uplink. Each node must have
unique uplink interface IPs.
(Active-Active only)
Peer IP (CIDR) Enter the CIDR for the first uplink peer.
(EBGP only)
Peer ASN Enter the ASN for the first uplink peer.
(EBGP only)
Second Tier-0 Uplink VLAN Enter the VLAN ID for the second uplink.
This is a link from the NSX Edge node to the second uplink network.
(Active-Active only)
Second Tier-0 Uplink Interface IP (CIDR) Enter the CIDR for the second uplink. Each node must
have unique uplink interface IPs. The second uplink interface IP must be different than the first
uplink interface IP.
(Active-Active only)
Peer IP (CIDR) Enter the CIDR for the second uplink peer.
(EBGP only)
Setting Description
ASN Peer Enter the ASN for the second uplink peer.
(EBGP only)
9 Click Add More Edge Nodes to enter configuration settings for additional NSX Edge nodes.
n For an NSX Edge cluster with a Tier-0 Service High Availability setting of Active-Active, up
to 8 of the NSX Edge nodes can have uplink interfaces.
n For an NSX Edge cluster with a Tier-0 Service High Availability setting of Active-Standby,
up to 2 of the NSX Edge nodes can have uplink interfaces.
10 When you are done adding NSX Edge nodes, click Next.
12 If validation fails, use the Back button to edit your settings and try again.
To edit or delete any of the NSX Edge nodes, click the three vertical dots next to an NSX Edge
node in the table and select an option from the menu.
13 If validation succeeds, click Finish to add the NSX Edge node(s) to the NSX Edge cluster.
Prerequisites
n The NSX Edge cluster must be available in the SDDC Manager inventory and must be Active.
n The NSX Edge node must be available in the SDDC Manager inventory.
n The NSX Edge cluster must be hosted on one or more vSphere clusters from the same
workload domain.
n The NSX Edge cluster must contain more than two NSX Edge nodes.
n If the NSX Edge cluster was deployed with a Tier-0 Service High Availability of Active-Active,
the NSX Edge cluster must contain two or more NSX Edge nodes with two or more Tier-0
routers (SR component) after the NSX Edge nodes are removed.
n If the selected NSX Edge cluster was deployed with a Tier-0 Service High Availability of
Active-Standby, you cannot remove NSX Edge nodes that are the active or standby node for the
Tier-0 router.
Procedure
2 In the Workload Domains page, click a domain name in the Domain column.
4 Click the vertical ellipsis menu for the Edge Cluster you want to shrink and select Shrink Edge
Cluster.
7 If validation fails, use the Back button to edit your settings and try again.
Note You cannot remove the active and standby Edge nodes of a Tier-1 router at the same
time. You can remove one and then remove the other after the first operation is complete.
8 If validation succeeds, click Finish to remove the NSX Edge node(s) from the NSX Edge
cluster.
You can create overlay-backed NSX segments or VLAN-backed NSX segments. Both options
create two NSX segments (Region-A and X-Region) on the NSX Edge cluster deployed in the
default management vSphere cluster. Those NSX segments are used when you deploy the
vRealize Suite products. Region-A segments are local instance NSX segments and X-Region
segments are cross-instance NSX segments.
Important You cannot create AVNs if the NSX-T Data Center for the management domain is part
of an NSX Federation.
In an overlay-backed segment, layer 2 traffic between two VMs that are on different hosts but
attached to the same overlay segment is carried by a tunnel between the hosts. NSX-T
Data Center instantiates and maintains this IP tunnel without the need for any segment-specific
configuration in the physical infrastructure. As a result, the virtual network infrastructure is
decoupled from the physical network infrastructure. That is, you can create segments dynamically
without any configuration of the physical network infrastructure.
This procedure describes creating overlay-backed NSX segments. If you want to create VLAN-
backed NSX segments instead, see Deploy VLAN-Backed NSX Segments.
Prerequisites
Create an NSX Edge cluster for Application Virtual Networks, using the recommended settings, in
the default management vSphere cluster. See Deploy an NSX Edge Cluster.
Procedure
6 Enter information for each of the NSX segments (Region-A and X-Region):
Option Description
Name Enter a name for the NSX segment. For example, Mgmt-RegionA01.
If validation does not succeed, verify and update the information you entered for the NSX
segments and click Validate Settings again.
This procedure describes creating VLAN-backed NSX segments. If you want to create overlay-
backed NSX segments instead, see Deploy Overlay-Backed NSX Segments.
Prerequisites
Create an NSX Edge cluster for Application Virtual Networks, using the recommended settings, in
the default management vSphere cluster. See Deploy an NSX Edge Cluster.
Procedure
6 Enter information for each of the NSX segments (Region-A and X-Region):
Option Description
Name Enter a name for the NSX segment. For example, Mgmt-RegionA01.
If validation does not succeed, verify and update the information you entered for the NSX
segments and click Validate Settings again.
Example
When enabled on a vSphere cluster, vSphere with Tanzu provides the capability to run Kubernetes
workloads directly on ESXi hosts and to create upstream Kubernetes clusters within dedicated
resource pools. vSphere with Tanzu can also be enabled on the management domain default
cluster.
You validate the underlying infrastructure for vSphere with Tanzu from the SDDC Manager UI and
then complete the deployment in the vSphere Client. The SDDC Manager UI refers to the vSphere
with Tanzu functionality as Kubernetes - Workload Management.
For more information on vSphere with Tanzu, see What Is vSphere with Tanzu?.
You can create a Subscribed Content Library using the vSphere Client or using PowerShell.
Procedure
a In a web browser, log in to the workload domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
d On the Name and location page, configure the settings and click Next.
Setting Value
Name Kubernetes
e On the Configure content library page, select Subscribed content library, configure the
settings and click Next.
Setting Value
f In the Kubernetes - Unable to verify authenticity dialog box, click Yes to accept the SSL
certificate thumbprint.
g On the Add Storage page, select your vSAN datastore, and click Next.
h On the Ready to Complete page, review the settings and click Finish.
a Open a PowerShell console and define variables for the inputs by entering the following
commands:
# SDDC Manager appliance and credentials used to identify the target workload domain.
$sddcManagerFqdn = "sfo-vcf01.sfo.rainpole.io"
$sddcManagerUsername = "[email protected]"
$sddcManagerPassword = "VMw@re1!"
# Target VI workload domain, the subscribed content library URL and name, and the vSAN datastore that backs the library.
$wldName = "sfo-w01"
$contentLibraryUrl = "https://wp-content.vmware.com/v2/latest/lib.json"
$contentLibraryName = "Kubernetes"
$wldDatastoreName = "sfo-w01-cl01-ds-vsan01"
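The variables above only capture the inputs. As a possible continuation, the following sketch creates the subscribed content library with PowerCLI. It assumes PowerCLI 12 or later (which provides New-ContentLibrary) and connects directly to the workload domain vCenter Server; $wldVcenterFqdn and its credentials are assumed additional inputs rather than values resolved through SDDC Manager, so verify the cmdlet parameters against your PowerCLI version.

# Assumed additional inputs: the workload domain vCenter Server FQDN and credentials.
$wldVcenterFqdn = "sfo-w01-vc01.sfo.rainpole.io"
$wldVcenterCreds = Get-Credential

# Connect to the workload domain vCenter Server.
Connect-VIServer -Server $wldVcenterFqdn -Credential $wldVcenterCreds | Out-Null

# Create the subscribed content library on the vSAN datastore defined earlier.
$datastore = Get-Datastore -Name $wldDatastoreName
New-ContentLibrary -Name $contentLibraryName -Datastore $datastore -SubscriptionUrl $contentLibraryUrl

Disconnect-VIServer -Server $wldVcenterFqdn -Confirm:$false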
Prerequisites
n A Workload Management-ready NSX Edge cluster must be deployed on the workload
domain.
You must select Workload Management on the Use Case page of the Add Edge Cluster
wizard. See step 6 in Deploy an NSX Edge Cluster.
n All hosts in the vSphere cluster for which you enable Workload Management must have a
vSphere with Tanzu license.
n Workload Management requires a vSphere cluster with a minimum of three ESXi hosts.
Procedure
3 Review the Workload Management prerequisites, click Select All, and click Begin.
4 Select the workload domain associated with the vSphere cluster where you want to enable
Workload Management.
The Workload Domain drop-down menu displays all Workload Management ready workload
domains, including the management domain.
vSphere clusters in the selected workload domain that are compatible with Workload
Management are displayed in the Compatible section. Incompatible clusters are displayed in
the Incompatible section, along with the reason for the incompatibility. If you want to get an
incompatible cluster to a usable state, you can exit the Workload Management deployment
wizard while you resolve the issue.
5 From the list of compatible clusters on the workload domain, select the cluster where you want
to enable Workload Management and click Next.
6 On the Validation page, wait for validation to complete successfully and click Next.
n vCenter Server validation (vCenter Server credentials, vSphere cluster object, and version)
7 On the Review page, review your selections and click Complete in vSphere.
What to do next
Follow the deployment wizard within the vSphere Client to complete the Workload Management
deployment and configuration steps.
Prerequisites
You must have added the vSphere with Tanzu license key to the Cloud Foundation license
inventory. See Add a License Key.
Procedure
2 Click the dots to the left of the cluster for which you want to update the license and click
Update Workload Management license.
After the license update processing is completed, the Workload Management page is
displayed. The task panel displays the licensing task and its status.
vRealize Suite Lifecycle Manager in VMware Cloud Foundation mode introduces the following
features:
n Automatic load balancer configuration. Load balancer preparation and configuration are no
longer a prerequisite when you use vRealize Suite Lifecycle Manager to deploy or perform a
cluster expansion on Workspace ONE Access, vRealize Operations, or vRealize Automation.
Load balancer preparation and configuration take place as part of the deploy or expand
operation.
n Cluster deployment for a new environment. You can deploy vRealize Log Insight, vRealize
Operations, or vRealize Automation in clusters. You can deploy Workspace ONE Access either
as a cluster or a single node. If you deploy Workspace ONE Access as a single node, you can
expand it to a cluster later.
n Consistent Bill Of Materials (BOM). vRealize Suite Lifecycle Manager in VMware Cloud
Foundation mode only displays product versions that are compatible with VMware Cloud
Foundation to ensure product interoperability.
n Inventory synchronization between vRealize Suite Lifecycle Manager and SDDC Manager.
vRealize Suite Lifecycle Manager can detect changes made to vRealize Suite products and
update its inventory through inventory synchronization. When VMware Cloud Foundation
mode is enabled in vRealize Suite Lifecycle Manager, inventory synchronization in vRealize
Suite Lifecycle Manager also updates SDDC Manager’s inventory to get in sync with the
current state of the system.
n Product versions. You can only access the versions for the selected vRealize products that are
specifically supported by VMware Cloud Foundation itself.
n Resource pool and advanced properties. The resources in the Resource Pools under the
Infrastructure Details are blocked by the vRealize Suite Lifecycle Manager UI, so that the
VMware Cloud Foundation topology does not change. Similarly, the Advanced Properties
are also blocked for all products except for Remote Collectors. vRealize Suite Lifecycle
Manager also auto-populates infrastructure and network properties by calling VMware Cloud
Foundation deployment API.
n Watermark.
By default, VMware Cloud Foundation uses NSX-T Data Center to create NSX segments and
deploys vRealize Suite Lifecycle Manager and the vRealize Suite products to these NSX segments.
Starting with VMware Cloud Foundation 4.3, NSX segments are no longer configured during the
management domain bring-up process, but instead are configured using the SDDC Manager UI.
The new process offers the choice of using either overlay-backed or VLAN-backed segments. See
Chapter 15 Deploying Application Virtual Networks.
When vRealize Suite Lifecycle Manager runs in VMware Cloud Foundation mode, the integration ensures
awareness between the two components. You launch the deployment of vRealize Suite products
from the SDDC Manager UI and are redirected to the vRealize Suite Lifecycle Manager UI where
you complete the deployment process.
Prerequisites
n Download the VMware Software Install Bundle for vRealize Suite Lifecycle Manager from the
VMware Depot to the local bundle repository. See Download VMware Cloud Foundation on
Dell EMC VxRail Bundles.
n Allocate an IP address for the vRealize Suite Lifecycle Manager virtual appliance on the cross-
instance NSX segment and prepare both forward (A) and reverse (PTR) DNS records.
n Allocate an IP address for the NSX-T Data Center standalone Tier-1 Gateway on the cross-
instance NSX segment. This address is used for the service interface of the standalone NSX-T
Data Center Tier 1 Gateway created during the deployment. The Tier 1 Gateway is used for
load-balancing of specific vRealize Suite products and Workspace ONE Access.
n Verify the Prerequisite Checklist sheet in the Planning and Preparation Workbook.
Procedure
2 Click Deploy.
4 On the Network Settings page, review the settings and click Next.
5 On the Virtual Appliance Settings page, enter the settings and click Next.
Setting Description
Virtual Appliance: FQDN The FQDN for the vRealize Suite Lifecycle Manager
virtual appliance.
NSX-T Tier 1 Gateway: IP Address A free IP Address within the cross-instance virtual
network segment.
System Administrator Create and confirm the password for the vRealize
Suite Lifecycle Manager administrator account,
vcfadmin@local. The password created is the credential
that allows SDDC Manager to connect to vRealize Suite
Lifecycle Manager.
SSH Root Account Create and confirm a password for the vRealize Suite
Lifecycle Manager virtual appliance root account.
6 On the Review Summary page, review the installation configuration settings and click Finish.
The vRealize Suite page displays the following message: Deployment in progress.
If the deployment fails, this page displays a deployment status of Deployment failed. In this
case, you can click Restart Task or Rollback.
7 (Optional) To view details about the individual deployment tasks, in the Tasks panel at the
bottom, click each task.
Procedure
2 On the Workload Domain page, from the table, in the domain column click the management
domain.
4 From the table, select the check box for the vrslcm resource type, and click Generate CSRs.
5 On the Details page, enter the following settings and click Next.
Settings Description
Key Size Select the key size (2048 bit, 3072 bit, or 4096 bit) from
the drop-down menu.
Organization Name Type the name under which your company is known.
The listed organization must be the legal registrant of
the domain name in the certificate request.
State Type the full name (do not abbreviate) of the state,
province, region, or territory where your company is
legally registered.
6 On the Subject Alternative Name page, leave the default SAN and click Next.
8 After the successful return of the operation, click Generate signed certificates.
9 In the Generate Certificates dialog box, from the Select Certificate Authority drop-down
menu, select Microsoft.
You add the cross-instance data center, and the associated management domain vCenter Server
for the deployment of the global components, such as the clustered Workspace ONE Access.
Procedure
1 In a web browser, log in to vRealize Suite Lifecycle Manager with the vcfadmin@local user by
using the user interface (https://<vrslcm_fqdn>).
4 Click Add datacenter, enter the values for the global data center, and click Save.
Setting Value
5 Add the management domain vCenter Server to the global data center.
a On the Datacenters page, expand the global data center and click Add vCenter.
b Enter the management domain vCenter Server information and click Validate.
Setting Value
7 In the navigation pane, click Requests and verify that the state of the vCenter data collection
request is Completed.
Prerequisites
n Download the installation binary directly from vRealize Suite Lifecycle Manager. See
"Configure Product Binaries" in the vRealize Suite Lifecycle Manager Installation, Upgrade,
and Management Guide for the version of vRealize Suite Lifecycle Manager listed in the
VMware Cloud Foundation BOM.
n Allocate 5 IP addresses from the cross-instance NSX segment and prepare both forward (A)
and reverse (PTR) DNS records.
n An IP address for the embedded Postgres database for the Workspace ONE Access instance.
n An IP address for the NSX-T Data Center external load balancer virtual server for the
clustered Workspace ONE Access instance.
n Verify the Prerequisite Checklist sheet in the Planning and Preparation Workbook.
n Download the CertGenVVS tool and generate the signed certificate for the clustered
Workspace ONE Access instance. See KB 85527.
Procedure
1 In a web browser, log in to vRealize Suite Lifecycle Manager with the vcfadmin@local user by
using the user interface (https://<vrslcm_fqdn>).
5 On the Import certificate page, configure the settings and click Import.
Setting Value
Select Certificate file Click Browse file, navigate to the clustered Workspace
ONE Access certificate PEM file, and click Open.
You add the following passwords for the corresponding local administrative accounts.
For the Password description setting, the values are:
n Value for Global Environment Administrator: vRealize Suite Lifecycle Manager global environment administrator password
n Value for Local Administrator: Clustered Workspace ONE Access administrator
n Value for Local Configuration Administrator: Clustered Workspace ONE Access configuration administrator
Procedure
1 In a web browser, log in to vRealize Suite Lifecycle Manager with the vcfadmin@local user by
using the user interface (https://<vrslcm_fqdn>).
5 On the Add password page, configure the settings and click Add.
Procedure
1 In a web browser, log in to vRealize Suite Lifecycle Manager with the vcfadmin@local user by
using the user interface (https://<vrslcm_fqdn>).
4 On the Create environment page, configure the settings and click Next.
Setting Value
5 On the Select product page, select the check box for VMware Identity Manager, configure
these values, and click Next.
Setting Value
6 On the Accept license agreements page, scroll to the bottom and accept the license
agreement, and then click Next.
7 On the Certificate page, from the Select certificate drop-down menu, select the Clustered
Workspace One Certificate, and click Next.
8 On the Infrastructure page, verify and accept the default settings, and click Next.
9 On the Network page, verify and accept the default settings, and click Next.
10 On the Products page, configure the deployment properties of clustered Workspace ONE
Access and click Next.
For each of the three nodes (vidm-primary, vidm-secondary-1, and vidm-secondary-2), configure the following settings:
Setting Value
VM Name Enter a VM name for the node.
FQDN Enter the FQDN for the node.
IP address Enter the IP address for the node.
d For each node, click advanced configuration and click Select Root Password.
12 On the Manual validations page, select the I took care of the manual steps above and am
ready to proceed check box and click Run precheck.
13 Review the validation report, remediate any errors, and click Re-run precheck.
14 Wait for all prechecks to complete with Passed messages and click Next.
15 On the Summary page, review the configuration details. To back up the deployment
configuration, click Export configuration.
17 Monitor the steps of the deployment graph until all stages become Completed.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
2 In the Hosts and Clusters inventory, expand the management domain vCenter Server and data
center.
4 Create the anti-affinity rule for the clustered Workspace ONE Access virtual machines.
Setting Value
Name <management-domain-name>-anti-affinity-rule-wsa
n vidm-secondary-1_VM
n vidm-secondary-2_VM
5 Create a virtual machine group for the clustered Workspace ONE Access nodes.
Setting Value
Type VM Group
n vidm-secondary-1_VM
n vidm-secondary-2_VM
You configure the time synchronization for all nodes in the clustered Workspace ONE Access
instance.
Role FQDN
Node 1 vidm-primary_VM
Node 2 vidm-secondary-1_VM
Node 3 vidm-secondary-2_VM
Procedure
1 In a web browser, log in to the Workspace ONE Access instance with the admin user by using
the appliance configuration interface (https://<wsa_node_fqdn>:8443/cfg/login).
Setting Description
4 Repeat this procedure for the remaining clustered Workspace ONE Access nodes.
Procedure
1 In a web browser, log in to the System Domain of the clustered Workspace ONE Access
instance with the configadmin user by using the administration interface (https://
<wsa_cluster_fqdn>/admin).
3 Click the Directories tab, and from the Add directory drop-down menu, select Add Active
Directory over LDAP/IWA.
4 On the Add directory page, configure the following settings, click Test connection and click
Save and next.
Setting Value
This Directory requires all connections to use STARTTLS (Optional) If you want to secure communication between Workspace ONE Access and Active Directory, select this option and paste the Root CA certificate in the SSL Certificate box.
Bind user password Enter the password for the Bind user. For example: svc-wsa-ad_password.
5 On the Select the domains page, review the domain name and click Next.
6 On the Map user attributes page, review the attribute mappings and click Next.
7 On the Select the groups (users) you want to sync page, enter the distinguished name for
the folder containing your groups (for example, OU=Security Groups,DC=sfo,DC=rainpole,DC=io)
and click Select.
8 For each Group DN you want to include, select the group for the clustered Workspace ONE
Access instance to use for each of the roles, click Save, and then click Next.
Directory Admin
ReadOnly Admin
Content Admin
Content Developers
9 On the Select the Users you would like to sync page, enter the distinguished name for the
folder containing your users (for example, OU=Users,DC=sfo,DC=rainpole,DC=io) and click Next.
10 On the Review page, click Edit, from the Sync frequency drop-down menu, select Every 15
minutes, and click Save.
Procedure
1 In a web browser, log in to the System Domain of the clustered Workspace ONE Access
instance with the configadmin user by using the administration interface (https://
<wsa_cluster_fqdn>/admin).
5 On the WorkspaceIDP__1 details page, under Connector(s) from the Add a connector drop-
down menu, select vidm-secondary-1_VM, configure the settings, and click Add connector.
Setting Value
Connector vidm-secondary-1_VM
Bind to AD Checked
7 In the IdP Hostname text box, enter the FQDN of the NSX-T Data Center load balancer virtual
server for Workspace ONE Access cluster.
8 Click Save.
You assign the following administrator roles to the corresponding user groups.
Procedure
b In the Users / User Groups search box, enter the name of the Active Directory group you
want to assign the role to, select the group, and click Save.
c Repeat this step to configure the Directory Admin and the ReadOnly Admin roles.
You assign the following administrative roles to corresponding Active Directory groups.
vRealize Suite Lifecycle Manager Role Example Active Directory Group Name
Procedure
1 In a web browser, log in to vRealize Suite Lifecycle Manager with the vcfadmin@local user by
using the user interface (https://<vrslcm_fqdn>).
3 In the navigation pane, click User management and click Add user / group.
4 On the Select users / groups page, in the search box, enter the name of the group you want
to assign the role to, select the Active Directory group, and click Next.
5 On the Select roles page, select the VCF Role role, and click Next.
7 Repeat this procedure to assign roles to the Content Release Manager and Content
Developer user groups.
Important If you plan to deploy vRealize Suite components, you must deploy Application Virtual
Networks before you configure NSX Federation. See Chapter 15 Deploying Application Virtual
Networks.
n Password Management for NSX Global Manager Cluster in VMware Cloud Foundation
n Backup and Restore of NSX Global Manager Cluster in VMware Cloud Foundation
Global Manager: a system similar to NSX Manager that federates multiple Local Managers.
Local Manager: an NSX Manager system in charge of network and security services for a VMware
Cloud Foundation instance.
Cross-instance: the object spans more than one instance. You do not directly configure the span of
a segment. A segment has the same span as the gateway it is attached to.
Tunnel End Point (TEP): the IP address of a transport node (Edge node or Host) used for Geneve
encapsulation within an instance.
Remote Tunnel End Points (RTEP): the IP address of a transport node (Edge node only) used for
Geneve encapsulation across instances.
standalone tier-1 gateway Configured in the Local Manager and used for services such as the Load Balancer. Managed by the Local Manager. Span: single VMware Cloud Foundation instance.
local-instance tier-1 gateway Configured in the Global Manager at a single location, this is a global tier-1 gateway used for segments that exist within a single VMware Cloud Foundation instance. Managed by the Global Manager. Span: single VMware Cloud Foundation instance.
cross-instance tier-1 gateway Configured in the Global Manager, this is a global tier-1 gateway used for segments that exist across multiple VMware Cloud Foundation instances. Managed by the Global Manager. Span: multiple VMware Cloud Foundation instances.
Some tasks described in this section are to be performed on the first NSX Data Center instance
while others need to be performed on each NSX Data Center instance that is being federated. See
the table below for more information.
Enable high availability for NSX Federation Control Plane on one additional instance:
1 Creating a Global Manager Cluster in VMware Cloud Foundation
2 Replacing Global Manager Cluster Certificates in VMware Cloud Foundation
Each additional instance:
1 Prepare Local Manager for NSX Federation in VMware Cloud Foundation
2 Add Location to Global Manager
3 Stretching Segments between VMware Cloud Foundation Instances:
a Delete Existing Tier-0 Gateways in Additional Instances
b Connect Additional VCF Instances to Cross-Instance Tier-0 Gateway
c Connect Local Tier-1 Gateway to Cross-Instance Tier-0 Gateway
d Add Additional Instance as Locations to the Cross-Instance Tier-1 Gateway
Procedure
3 Create Anti-Affinity Rule for Global Manager Cluster in VMware Cloud Foundation
Create an anti-affinity rule to ensure that the Global Manager nodes run on different ESXi
hosts. If an ESXi host is unavailable, the Global Manager nodes on the other hosts continue to
provide support for the NSX management and control planes.
Procedure
1 Download the NSX-T Data Center OVF file from the VMware download portal.
5 Select Local file, click Upload files, and navigate to the OVA file.
6 Click Next.
7 Enter a name and a location for the NSX Manager VM, and click Next.
The name you enter appears in the vSphere and vCenter Server inventory.
8 On the Select a compute resource page, select the compute resource on which to deploy the
NSX Manager appliance and click Next.
9 Review and verify the OVF template details and click Next.
If you are configuring NSX Federation on the management domain, select Medium. For a VI
workload domain, select Large. The Description panel on the right side of the wizard shows
the details of the selected configuration.
n Click Next.
13 Select the management network as the destination network and click Next.
The following steps are all located in the Customize Template section of the Deploy OVF
Template wizard.
14 In the Application section, enter the system root, CLI admin, and audit passwords for the NSX
Manager. The root and admin credentials are mandatory fields.
n At least 12 characters
16 In the Network Properties section, enter the hostname of the NSX Manager.
Note The host name must be a valid domain name. Ensure that each part of the host name
(domain/subdomain) that is separated by a dot starts with an alphabetic character.
18 Enter the default gateway, management network IPv4, and management network netmask.
19 In the DNS section, enter the DNS Server list and Domain Search list.
21 Verify that your custom OVF template specifications are accurate and click Finish to initiate
the deployment.
Right-click the Global Manager VM and, from the Actions menu, select Power > Power on.
Procedure
2 Run the following command to retrieve the Global Manager cluster ID.
4 Run the following command to retrieve the thumbprint of the Global Manager API certificate.
6 Log in to the second Global Manager node and run the following command to join this node to
the cluster:
where cluster_ID is the value from step 3 and certificate_thumbprint is the value from step 5.
c Verify that the Cluster status is green and that the cluster node is Available.
Results
The cluster formation and stabilization may take up to 15 minutes. Run the get cluster status
command to view the cluster status.
Procedure
1 In a web browser, log in to the management domain or VI workload domain vCenter Server at
https://vcenter_server_fqdn/ui.
4 Select the Global Manager cluster and click the Configure tab.
Option Description
Members Click Add, select the three Global Manager nodes, and
click OK.
7 Click OK and then click OK again in the Create VM/Host rule dialog box.
Procedure
3 Click Set Virtual IP and enter the VIP address for the cluster. Ensure that the VIP is part of the
same subnet as the other management nodes.
4 Click Save.
From a browser, log in to the Global Manager using the virtual IP address assigned to the
cluster at https://gm_vip_fqdn/.
Procedure
1 In a web browser, log in to the Local Manager cluster for the management domain or VI workload
domain at https://lm_vip_fqdn/.
a In the navigation pane, select IP Address Pools and click Add IP address pool.
b Enter a name.
d In the Set Subnets dialog box, click Add subnet > IP Ranges.
g Click Save.
c Under Global Fabric Settings, click Edit for Remote Tunnel Endpoint.
Procedure
1 In a web browser, log in to Global Manager cluster for the management or VI workload domain
at https://gm_vip_fqdn/.
3 Click Make Active and enter a name for the active Global Manager.
4 Click Save.
Procedure
c Run the command below to retrieve the Local Manager cluster VIP thumbprint.
b Select System > Location Manager and click Add On-Prem Location.
c In the Add New Location dialog box, enter the location details.
Option Description
Username and Password Provide the admin user's credentials for the NSX
Manager at the location.
d Click Save
a On the Location Manager page, in the Locations section, click Networking under the
location you are adding then click Configure.
b On the Configure Edge Nodes for Stretch Networking page, click Select All.
c In the Remote Tunnel Endpoint Configuration pane, enter the following details.
Option Value
d Click Save.
a Select the Global Manager context from the drop-down menu.
d Verify that you have a recent backup and click Proceed to import.
e In the Preparing for import dialog box, click Next and then click Import.
Local Manager objects imported into the Global Manager are owned by the Global Manager
and appear in the Local Manager with a GM icon. You can modify these objects only from the
Global Manager.
Procedure
1 In a web browser, log in to Global Manager for the management or VI workload domain at
https://gm_vip_fqdn/.
Tier-1 Gateway Name Enter a name for the new tier-1 gateway.
5 Click Save.
b Enable all available sources, click Save, and click Close editing.
Procedure
4 On the Segments tab, click the vertical ellipsis for the cross-instance_nsx_segment and click
Edit.
5 Change the Connected Gateway from instance_tier1 to cross-instance_tier1, click Save, and
then click Close editing.
Procedure
b On the Tier-0 Gateways tab, click the vertical ellipsis for the
additional_instance_tier1_gateway and click Edit.
c Under Linked Tier-0 gateway, click the X to disconnect the
additional_instance_tier0_gateway, click Save, and click Close editing.
5 On the Tier-0 Gateway page, click the vertical ellipsis for the
additional_instance_tier0_gateway and click Delete.
6 Click Delete.
Procedure
c On the Tier-0 Gateway page, click the vertical ellipsis for the cross-instance_tier0
gateway and click Edit.
Setting Value
Edge Cluster Enter the Edge cluster name of instance being added.
e Click Save.
c Enter a name for the interface and select the instance location.
d Set the type to External and enter the IP address for the interface.
e Select the segment that the interface is connected to and the Edge node corresponding to
the instance.
You can enable BFD if the network supports it and is configured for BFD.
b Enter the IP address for the neighbor and select the instance location.
a Expand Route Re-Distribution and next to the location you are adding, click Set.
d In the Set route redistribution dialog box, select all listed sources and click Apply.
e Click Add to finish editing the default route redistribution and click Apply.
f Click Save
Procedure
4 On the Tier-1 Gateway page, click the vertical ellipsis menu for the
this_instance_tier1_gateway and click Edit.
5 Change the Connected Gateway to cross_instance_tier0_gateway and click Save.
7 Under Locations, delete all locations except the location of the instance you are working with.
Procedure
4 On the Tier-1 Gateway page, click the vertical ellipsis for the cross-instance_tier1 gateway and
click Edit.
Setting Value
Edge Cluster Select the NSX Edge cluster of this instance
6 Click Save.
Procedure
c Run the command below to retrieve the Global Manager cluster thumbprint.
d Enter the location name, FQDN, username and password, and the SHA-256 thumbprint
that you retrieved earlier.
Prerequisites
Procedure
c In the Import CA Certificate dialog box, enter a name for the root CA certificate.
d For Certificate Contents, select the root CA certificate you created in step 2c and click
Import.
3 Import certificates for the Global Manager nodes and the load balanced virtual server address.
c In the Certificate Contents, browse to the previously created certificate file with the
extension chain.pem and select the file.
d In the Private Key, browse to the previously created private key with the extension .key,
select the file, and click Import.
Procedure
4 Replace the default certificate on the first Global Manager node with the CA-signed certificate.
a Start the Postman application in your web browser and log in.
Setting Value
Setting Value
Key Content-Type
e In the request pane at the top, send the following HTTP request.
Setting Value
After the Global Manager sends a response, a 200 OK status is displayed on the Body tab.
c Right-click the node and select Actions > Power > Restart guest OS.
Table 18-1. URLs for Replacing the Global Manager Node Certificates
gm_node2_fqdn https://gm_node2_fqdn/api/v1/node/services/http?
action=apply_certificate&certificate_id=gm_vip_fqdn_certificat
e_ID
gm_node3_fqdn https://gm_node3_fqdn/api/v1/node/services/http?
action=apply_certificate&certificate_id=gm_fqdn_certificate_ID
gm_vip_fqdn https://gm_vip_fqdn/api/v1/cluster/api-certificate?
action=set_cluster_certificate&certificate_id=gm_vip_fqdn_cert
ificate_ID
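If you prefer scripting to Postman, the same requests can be sent with PowerShell. The sketch below applies an already-imported certificate to a node by using the URL pattern from Table 18-1; it assumes PowerShell 7 and admin credentials for the Global Manager, and the node FQDN and certificate ID are placeholders copied from the table.

# Placeholder values from Table 18-1; substitute your node FQDN and certificate ID.
$gmNodeFqdn = "gm_node2_fqdn"
$certificateId = "gm_vip_fqdn_certificate_ID"
$credential = Get-Credential    # NSX Global Manager admin credentials

# A POST with no body applies the certificate to the node HTTP service.
Invoke-RestMethod -Method Post `
    -Uri "https://$gmNodeFqdn/api/v1/node/services/http?action=apply_certificate&certificate_id=$certificateId" `
    -Authentication Basic -Credential $credential -SkipCertificateCheck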
Procedure
3 Replace the default certificate for the second Global Manager node with the CA-signed
certificate by using the first Global Manager node as a source.
a Start the Postman application in your web browser and log in.
Setting Value
Key Content-Type
Setting Value
After the NSX Manager appliance responds, the Body tab displays a 200 OK status.
4 To upload the CA-signed certificate on the third Global Manager node, repeat steps 2 to step 4
with appropriate values.
c Right-click the second and third Global Manager nodes and click Actions > Power >
Restart guest OS.
b For each node, navigate to System > Global Manager Appliances > View Details and
confirm that the status is REPO_SYNC = SUCCESS.
a Start the Postman application in your web browser and log in.
Setting Value
Setting Value
Key Content-Type
Setting Value
After the NSX Global Manager sends a response, a 200 OK status is displayed on the Body tab.
Procedure
c Run the command to retrieve the SHA-256 thumbprint of the virtual IP for the NSX
Manager cluster certificate.
c Under Locations, select the Local Manager instance, and click Networking.
d Click Edit Settings and update NSX Local Manager Certificate Thumbprint.
f Wait for the Sync Status to display success and verify that all Local Manager nodes appear.
4 Under Locations, update the Local Manager certificate thumbprint for all the instances.
Procedure
a Start the Postman application in your web browser and log in.
Setting Value
Setting Value
Key Content-Type
d In the request pane at the top, send the following HTTP request.
Setting Value
a Log in to the Global Manager node by using a Secure Shell (SSH) client.
passwd admin
<enter admin password> <confirm admin password>
passwd audit
<enter audit password> <confirm audit password>
passwd root
<enter root password> <confirm root password>
a In the request pane at the top in Postman, send the following HTTP request.
Setting Value
You use the lookup_list to retrieve the NSX Local Manager passwords from SDDC Manager.
Procedure
c In the Local Manager instance, click Actions > Edit Settings and update admin password.
e Wait for the sync status to show success and verify that all Local Manager nodes appear.
3 Under Location Manager, update Local Manager passwords for all instances.
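As an alternative to an interactive lookup, the SDDC Manager public API also exposes the stored credentials. The following sketch is illustrative only; it assumes PowerShell 7, an SSO account with API access, and that the /v1/tokens and /v1/credentials endpoints of your VMware Cloud Foundation release behave as shown.

# Placeholder SDDC Manager FQDN; credentials are prompted for.
$sddcManagerFqdn = "sfo-vcf01.sfo.rainpole.io"
$apiCredential = Get-Credential    # SDDC Manager SSO administrator

# Request an access token from SDDC Manager.
$tokenBody = @{
    username = $apiCredential.UserName
    password = $apiCredential.GetNetworkCredential().Password
} | ConvertTo-Json
$token = Invoke-RestMethod -Method Post -Uri "https://$sddcManagerFqdn/v1/tokens" `
    -ContentType "application/json" -Body $tokenBody -SkipCertificateCheck

# List stored credentials and keep only the NSX Manager entries.
$headers = @{ Authorization = "Bearer $($token.accessToken)" }
$credentials = Invoke-RestMethod -Method Get -Uri "https://$sddcManagerFqdn/v1/credentials" `
    -Headers $headers -SkipCertificateCheck
$credentials.elements | Where-Object { $_.resource.resourceType -eq "NSXT_MANAGER" } |
    Select-Object username, @{ n = "resourceName"; e = { $_.resource.resourceName } }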
The Global Manager cluster stores the configured state of the segments. If the Global Manager
appliances become unavailable, the network traffic in the data plane is intact but you can make no
configuration changes.
Procedure
6 The protocol text box is already filled in. SFTP is the only supported protocol.
7 In the Directory Path text box, enter the absolute directory path where the backups will be
stored.
8 Enter the user name and password required to log in to the backup file server.
The first time you configure a file server, you must provide a password. Subsequently, if you
reconfigure the file server, and the server IP or FQDN, port, and user name are the same, you
do not need to enter the password again.
9 Leave the SSH Fingerprint blank and accept the fingerprint provided by the server after you
click Save in a later step.
10 Enter a passphrase.
Note You will need this passphrase to restore a backup. If you forget the passphrase, you
cannot restore any backups.
You can schedule recurring backups or trigger backups for configuration changes.
b Click Weekly and set the days and time of the backup, or click Interval and set the interval
between backups.
c Enabling the Detect NSX configuration change option will trigger an unscheduled full
configuration backup when it detects any runtime or non-configuration related changes, or
any change in user configuration. For Global Manager, this setting triggers backup if any
changes in the database are detected, such as the addition or removal of a Local Manager
or Tier-0 gateway or DFW policy.
d You can specify a time interval for detecting database configuration changes. The valid
range is 5 minutes to 1,440 minutes (24 hours). This option can potentially generate a large
number of backups. Use it with caution.
e Click Save.
What to do next
After you configure a backup file server, you can click Backup Now to manually start a backup
at any time. Automatic backups run as scheduled. You see a progress bar of your in-progress
backup.
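You can also read the backup configuration back over the API to confirm what was saved. The minimal sketch below assumes PowerShell 7, Global Manager admin credentials, and the /api/v1/cluster/backups/config endpoint; treat it as an illustration rather than a documented step.

# Placeholder Global Manager VIP FQDN and admin credentials.
$gmVipFqdn = "gm_vip_fqdn"
$credential = Get-Credential

# Read back the configured backup settings (schedule, SFTP server, directory path).
$backupConfig = Invoke-RestMethod -Method Get `
    -Uri "https://$gmVipFqdn/api/v1/cluster/backups/config" `
    -Authentication Basic -Credential $credential -SkipCertificateCheck
$backupConfig | ConvertTo-Json -Depth 5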
Do not change the configuration of the NSX Global Manager cluster while the restore process is in
progress.
Prerequisites
n Verify that you have the login credentials for the backup file server.
n Verify that you have the SSH fingerprint of the backup file server. Only SHA256 hashed
ECDSA (256 bit) host key is accepted as a fingerprint.
Procedure
1 If any nodes in the appliance cluster that you are restoring are online, power them off.
n If the backup listing for the backup you are restoring contains an IP address, you must
deploy the new Global Manager node with the same IP address. Do not configure the node
to publish its FQDN.
n If the backup listing for the backup you are restoring contains an FQDN, you must
configure the new appliance node with this FQDN and publish the FQDN. Only lowercase
FQDN is supported for backup and restore.
4 Make the Global Manager active. You can restore a backup only on an active Global Manager.
c On the Location Manager page, click Make Active, enter a name for the Global Manager,
and click Save.
5 On the main navigation bar, click System > Backup & Restore and then click Edit.
9 In the Destination Directory text box, enter the absolute directory path where the backups are
stored.
10 Enter the passphrase that was used to encrypt the backup data.
11 Leave the SSH Fingerprint blank and accept the fingerprint provided by the server after you
click Save in a later step.
14 After the restored manager node is up and functional, deploy additional nodes to form an NSX
Global Manager cluster.
The default management cluster must be stretched before a workload domain cluster can be
stretched. This ensures that the NSX control plane and management VMs (vCenter, NSX, SDDC
Manager) remain accessible if the stretched cluster in the second availability zone goes down.
n If a cluster uses static IP addresses for the NSX-T Host Overlay Network TEPs.
n Planned maintenance
You can perform a planned maintenance on an availability zone without any downtime and
then migrate the applications after the maintenance is completed.
n Automated recovery
Stretching a cluster automatically initiates VM restart and recovery, and has a low recovery
time for the majority of unplanned failures.
n Disaster avoidance
With a stretched cluster, you can prevent service outages before an impending disaster.
This release of VMware Cloud Foundation does not support deleting or unstretching a cluster.
Availability Zones
An availability zone is a collection of infrastructure components. Each availability zone runs on its
own physically distinct, independent infrastructure, and is engineered to be highly reliable. Each
zone should have independent power, cooling, network, and security.
Additionally, these zones should be physically separate so that disasters affect only one zone. The
physical distance between availability zones is short enough to offer low, single-digit latency (less
than 5 ms) and large bandwidth (10 Gbps) between the zones.
Availability zones can either be two distinct data centers in a metro distance, or two safety or fire
sectors (data halls) in the same large-scale data center.
Regions
Regions are in two distinct locations - for example, region A can be in San Francisco and region
B in Los Angeles (LAX). The distance between regions can be rather large. The latency between
regions must be less than 150 ms.
Note The management network VLAN can be the same for the management domain and VI
workload domains, although the table below shows an example where these VLANs are different
(1611 vs 1631).
Note If a VLAN is stretched between AZ1 and AZ2, then the data center needs to provide
appropriate routing and failover of the gateway for that network.
Component Requirement
Layer 3 gateway availability For VLANs that are stretched between availability zones,
configure a data center-provided method, for example,
VRRP or HSRP, to fail over the Layer 3 gateway between
availability zones.
DHCP availability For VLANs that are stretched between availability zones,
provide high availability for the DHCP server so that a
failover operation of a single availability zone will not
impact DHCP availability.
BGP routing Each availability zone data center must have its own
Autonomous System Number (ASN).
Ingress and egress traffic n For VLANs that are stretched between availability
zones, traffic flows in and out of a single zone. Local
egress is not supported.
n For VLANs that are not stretched between availability
zones, traffic flows in and out of the zone where the
VLAN is located.
n For NSX-T virtual network segments that are stretched
between regions, traffic flows in and out of a single
availability zone. Local egress is not supported.
You deploy the vSAN witness host using an appliance instead of using a dedicated physical ESXi
host as a witness host. The witness host does not run virtual machines and must run the same
version of ESXi as the ESXi hosts in the stretched cluster. It must also meet latency and Round Trip
Time (RTT) requirements.
See the Physical Network Requirements for Multiple Availability Zone table within VxRail Stretched
Cluster Requirements.
Prerequisites
Procedure
5 On the Select an OVF template page, select Local file, click Upload files, browse to the
location of the vSAN witness host OVA file, and click Next.
6 On the Select a name and folder page, enter a name for the virtual machine and click Next.
8 On the Review details page, review the settings and click Next.
9 On the License agreements page, accept the license agreement and click Next.
12 On the Select networks page, select a portgroup for the witness and management network,
and click Next.
13 On the Customize template page, enter the root password for the witness and click Next.
14 On the Ready to complete page, click Finish and wait for the process to complete.
a In the inventory panel, navigate to vCenter Server > Datacenter > Cluster.
b Right-click the vSAN witness host and from the Actions menu, select Power > Power on.
Procedure
1 In the inventory panel of the vCenter Server Client, select vCenter Server > Datacenter.
a Right-click the vSAN witness host and click Open remote console.
c Select Set static IPv4 address and network configuration and press the Space bar.
d Enter IPv4 Address, Subnet Mask and Default Gateway and press Enter.
f Select Use the following DNS Server address and hostname and press the Space bar.
g Enter Primary DNS Server, Alternate DNS Server and Hostname and press Enter.
Procedure
1 Use the vSphere Client to log in to the vCenter Server containing the cluster that you want to
stretch.
Important You must add the vSAN Witness Host to the datacenter. Do not add it to a folder.
4 Enter the Fully Qualified Domain Name (FQDN) of the vSAN Witness Host and click Next.
Procedure
1 In the inventory panel of the vCenter Server Client, select vCenter Server > Datacenter.
2 Select the vSAN witness host and click the Configure tab.
a In the System section, click Time configuration and click the Edit button.
Setting Value
Procedure
1 In the inventory panel of the vCenter Server Client, select vCenter Server > Datacenter.
2 Select the vSAN witness host and click the Configure tab.
3 Remove the dedicated witness traffic VMkernel adapter on the vSAN Witness host.
b Select the kernel adapter vmk1 with witnessPg as Network label and click Remove.
4 Remove the virtual machine network port group on the vSAN witness host.
c Click the vertical ellipsis and from the drop-down menu, select Remove.
f In the VM Network pane, click the vertical ellipsis and from the drop-down menu, select
Remove.
5 Enable witness traffic on the VMkernel adapter for the management network of the vSAN
witness host.
a On the VMkernel adapters page, select the vmk0 adapter and click Edit.
b In the vmk0 - edit settings dialog box, click Port properties, select the vSAN check box,
and click OK.
This example use case has two availability zones in two buildings in an office campus - AZ1 and
AZ2. Each availability zone has its own power supply and network. The management domain is on
AZ1 and contains the default cluster, SDDC-Cluster1. This cluster contains four ESXi hosts.
vSAN network VLAN ID=1623
MTU=9000
Network=172.16.23.0
netmask 255.255.255.0
gateway 172.16.23.253
IP range=172.16.23.11 - 172.16.23.59
vMotion network VLAN ID=1622
MTU=9000
Network=172.16.22.0
netmask 255.255.255.0
gateway 172.16.22.253
IP range=172.16.22.11 - 172.16.22.59
There are four ESXi hosts in AZ2 that are not in the VMware Cloud Foundation inventory yet.
We will stretch the default cluster SDDC-Cluster1 in the management domain from AZ1 to AZ2.
Figure: The management cluster is stretched across AZ1 (Host 1 through Host 4) and AZ2 (Host 5 through Host 8), with the stretched networks spanning both availability zones. vSAN traffic uses L3 routing between the AZ1 and AZ2 hosts and between the AZ1/AZ2 hosts and the witness; vMotion traffic uses L3 routing between the AZ1 and AZ2 hosts. AZ1 networks: vMotion VLAN 1612 (172.16.12.0/24, GW 172.16.12.253), NSX-T Host Overlay VLAN 1614 (172.16.14.0/24, GW 172.16.14.253), vSAN VLAN 1613 (172.16.13.0/24, GW 172.16.13.253). AZ2 networks: vMotion VLAN 1622 (172.16.22.0/24, GW 172.16.22.253), NSX-T Host Overlay VLAN 1624 (172.16.24.0/24, GW 172.16.24.253), vSAN VLAN 1623 (172.16.23.0/24, GW 172.16.23.253).
To stretch a cluster for VMware Cloud Foundation on Dell EMC VxRail, perform the following
steps:
Prerequisites
n Verify that you have completed the Planning and Preparation Workbook with the
management domain or VI workload domain deployment option included.
n Verify that your environment meets the requirements listed in the Prerequisite Checklist sheet
in the Planning and Preparation Workbook.
n Ensure that you have enough hosts such that there is an equal number of hosts on each
availability zone. This is to ensure that there are sufficient resources in case an availability zone
goes down completely.
n Deploy and configure a vSAN witness host. See Deploy and Configure vSAN Witness Host.
n If you are stretching a cluster in a VI workload domain, the default management vSphere
cluster must have been stretched.
n Download initiate_stretch_cluster_vxrail.py.
Important You cannot deploy an NSX Edge cluster on a vSphere cluster that is stretched. If
you plan to deploy an NSX Edge cluster, you must do so before you execute the stretch cluster
workflow.
n If a cluster uses static IP addresses for the NSX-T Host Overlay Network TEPs
Procedure
2 Using SSH, log in to the SDDC Manager appliance with the user name vcf and the password
you specified in the deployment parameter workbook.
3 Run the script with -h option for details about the script options.
python initiate_stretch_cluster_vxrail.py -h
4 Run the following command to prepare the cluster to be stretched. The command creates
affinity rules for the VMs to run on the preferred site:
Enter the SSO user name and password when prompted to do so.
Once the workflow is triggered, track the task status in the SDDC Manager UI. If the task fails,
debug and fix the issue and retry the task from the SDDC Manager UI. Do not run the script
again.
5 Use the VxRail vCenter plug-in to add the additional hosts in Availability Zone 2 to the cluster
by performing the VxRail Manager cluster expansion workflow.
n vSAN gateway IP for the preferred (primary) and non-preferred (secondary) site
n vSAN CIDR for the preferred (primary) and non-preferred (secondary) site
Once the workflow is triggered, the task is tracked in the SDDC Manager UI. If the task fails,
debug and fix the issue and retry from SDDC Manager UI. Do not run the script again.
8 Monitor the progress of the AZ2 hosts being added to the cluster.
9 Validate that stretched cluster operations are working correctly by logging in to the vSphere
Web Client.
1 On the home page, click Host and Clusters and then select the stretched cluster.
3 Click Retest.
1 On the home page, click Policies and Profiles > VM Storage Policies > vSAN Default
Storage Policies.
2 Select the policy associated with the vCenter Server for the stretched cluster and click
Check Compliance.
3 Click VM Compliance and check the Compliance Status column for each VM.
Procedure
1 In a web browser, log in to NSX Manager for the management or workload domain to be
stretched at https://nsx_manager_fqdn/login.jsp?local=true.
4 Select the gateway and from the ellipsis menu, click Edit.
a Expand the Routing section and in the IP prefix list section, click Set.
b In the Set IP prefix list dialog box, click Add IP prefix list.
c Enter Any as the prefix name and under Prefixes, click Set.
d In the Set prefixes dialog box, click Add Prefix and configure the following settings.
Setting Value
Network any
Action Permit
6 Repeat step 5 to create the default route IP prefix set with the following configuration.
Setting Value
Network 0.0.0.0/0
Action Permit
Procedure
3 Select the gateway, and from the ellipsis menu, click Edit.
a Expand the Routing section and in the Route maps section, click Set.
b In the Set route maps dialog box, click Add route map.
e On the Set match criteria dialog box, click Add match criteria and configure the following
settings.
Local Preference 80 90
5 Repeat step 4 to create a route map for outgoing traffic from availability zone 2 with the
following configuration.
Setting Value
Type IP Prefix
Members Any
Action Permit
You configure two BGP neighbors with route filters for the uplink interfaces in availability zone 2.
Hold downtime 12 12
Table 19-5. Route Filters for BGP Neighbors for Availability Zone 2
Maximum Routes - -
Procedure
3 Select the gateway and from the ellipsis menu, click Edit.
b In the Set BGP neighbors dialog box, click Add BGP neighbor and configure the following
settings.
Setting Value
IP address ip_bgp_neighbor1
BFD Deactivated
Remote AS asn_bgp_neighbor1
Hold downtime 12
Password bgp_password
d In the Set route filter dialog box, click Add route filter and configure the following settings.
Setting Value
Enabled Activated
In Filter rm-in-az2
Maximum Routes -
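UI steps like these ultimately resolve to NSX-T Policy API calls. Purely as an illustration, the sketch below creates a minimal BGP neighbor on a tier-0 gateway; the Local Manager FQDN, tier-0 and locale-services IDs, and the neighbor address and ASN placeholders are all assumptions, and the hold downtime, password, and route filter settings from the tables above would still need to be added through the UI or additional API properties.

# Assumed placeholder identifiers for the Local Manager and tier-0 gateway.
$nsxManager = "lm_vip_fqdn"
$credential = Get-Credential
$tier0Id = "<tier0_gateway_id>"
$localeServicesId = "<locale_services_id>"
$neighborId = "az2-bgp-neighbor-1"

# Minimal BGP neighbor definition: neighbor address and remote AS only.
$body = @{
    neighbor_address = "ip_bgp_neighbor1"
    remote_as_num    = "asn_bgp_neighbor1"
} | ConvertTo-Json

Invoke-RestMethod -Method Patch `
    -Uri "https://$nsxManager/policy/api/v1/infra/tier-0s/$tier0Id/locale-services/$localeServicesId/bgp/neighbors/$neighborId" `
    -Authentication Basic -Credential $credential `
    -ContentType "application/json" -Body $body -SkipCertificateCheck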
By default, when you stretch a cluster, the vSAN-tagged VMkernel adapter is used to carry traffic
destined for the vSAN witness host. With witness traffic separation, you can use a separately
tagged VMkernel adapter instead of extending the vSAN data network to the witness host. This
feature allows for a more flexible network configuration by allowing for separate networks for
node-to-node and node-to-witness communication.
Prerequisites
You must have a stretched cluster before you can configure it for witness traffic separation.
Procedure
3 Right-click the vSphere distributed switch for the cluster and select Distributed Port Group >
New Distributed Port Group.
4 Enter a name for the port group for the first availability zone and click Next.
9 On the Teaming and failover page, modify the failover order of the uplinks to match the
existing failover order of the management traffic and click Next.
12 On the Ready to Complete page, review your selections and click Finish.
Procedure
1 In a web browser, log in to the first ESXi host in the stretched cluster using the VMware Host
Client.
2 In the navigation pane, click Manage and click the Services tab.
4 Open an SSH connection to the first ESXi host in the stretched cluster.
5 Log in as root.
8 In the VMware Host Client, select the TSM-SSH service for the ESXi host and click Stop.
9 Repeat these steps for each ESXi host in the stretched cluster.
Procedure
3 Right-click the witness distributed port group for the first availability zone, for example,
AZ1_WTS_PG, and select Add VMkernel Adapters.
4 Click + Attached Hosts, select the availability zone 1 hosts from the list, and click OK.
5 Click Next.
7 Select Use static IPv4 settings and enter the IP addresses and the subnet mask to use for the
witness traffic separation network.
8 Click Next.
10 Repeat these steps for the witness distributed port group for the second availability zone.
Procedure
3 For each host in the stretched cluster, click Configure > Networking > VMkernel adapters to
determine which VMkernel adapter to use for witness traffic. For example, vmk5.
4 In a web browser, log in to the first ESXi host in the stretched cluster using the VMware Host
Client.
5 In the navigation pane, click Manage and click the Services tab.
For example:
10 Verify that the ESXi host can access the witness host:
Replace <vmkernel_adapter> with the VMkernel adapter configured for witness traffic, for
example vmk5. Replace <witness_host_ip_address> with the witness host IP address.
11 In the VMware Host Client, select the TSM-SSH service for the ESXi host and click Stop.
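If you prefer to avoid opening SSH sessions on every host, the same esxcli namespace is typically reachable through PowerCLI. The sketch below is an assumption-laden illustration: it presumes that vmk5 is the VMkernel adapter identified for witness traffic, that the vCenter Server FQDN and cluster name placeholders are replaced with your own, and that your ESXi release supports tagging a VMkernel adapter with the witness traffic type through esxcli vsan network ip add.

# Placeholder values; adjust to your environment.
$vcenterFqdn = "<workload_domain_vcenter_fqdn>"
$clusterName = "<stretched_cluster_name>"
$witnessVmk = "vmk5"    # VMkernel adapter chosen for witness traffic

Connect-VIServer -Server $vcenterFqdn -Credential (Get-Credential) | Out-Null

foreach ($esxiHost in Get-Cluster -Name $clusterName | Get-VMHost) {
    # Get-EsxCli -V2 wraps esxcli; this mirrors: esxcli vsan network ip add -i vmk5 -T witness
    $esxcli = Get-EsxCli -VMHost $esxiHost -V2
    $esxcli.vsan.network.ip.add.Invoke(@{ interfacename = $witnessVmk; traffictype = "witness" })
}

Disconnect-VIServer -Server $vcenterFqdn -Confirm:$false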
Prerequisites
Procedure
1 Use the VxRail vCenter plug-in to add the additional hosts in availability zone 1 or availability
zone 2 to the cluster by performing the VxRail Manager cluster expansion workflow.
2 Log in to SDDC Manager and run the script to trigger the workflow to import the newly added
hosts in the SDDC Manager inventory.
In the script, provide the root credentials for each host and specify which fault domain the host
should be added to.
3 Using SSH, log in to the SDDC Manager VM with the username vcf and the password you
specified in the deployment parameter workbook.
n vSAN gateway IP for the preferred (primary) and non-preferred (secondary) site
n vSAN CIDR for the preferred (primary) and non-preferred (secondary) site
6 Once the workflow is triggered, track the task status in the SDDC Manager UI.
If the task fails, debug and fix the issue and retry from SDDC Manager UI. Do not run the script
again.
What to do next
If you add hosts to a stretched cluster configured for witness traffic separation, perform the
following tasks for the added hosts:
Prerequisites
Procedure
Results
You use the built-in monitoring capabilities for these typical scenarios.
Scenario Examples
Are the systems online? A host or other component shows a failed or unhealthy status.
Why did a storage drive fail? Hardware-centric views spanning inventory, configuration, usage, and event history to provide for diagnosis and resolution.
Is the infrastructure meeting tenant service level agreements (SLAs)? Analysis of system and device-level metrics to identify causes and resolutions.
At what future time will the systems get overloaded? Trend analysis of detailed system and device-level metrics, with summarized periodic reporting.
What person performed which action and when? History of secured user actions, with periodic reporting. Workflow task history of actions performed in the system.
In addition to the most recent tasks, you can view and search for all tasks by clicking View All
Tasks at the bottom of the Recent Tasks widget. This opens the Tasks panel.
Note For more information about controlling the widgets that appear on the Dashboard page of
SDDC Manager UI, see Tour of the SDDC Manager User Interface.
n Search tasks by clicking the filter icon in the Task column header and entering a search string.
n Filter tasks by status by clicking the filter icon in Status column. Select by category All, Failed,
Successful, Running, or Pending.
Note Each category also displays the number of tasks with that status.
n Clear all filters by clicking Reset Filter at the top of the Tasks panel.
Note You can also sort the table by the contents of the Status and Last Occurrence columns.
n If a task is in a Failed state, you can also attempt to restart it by clicking Restart Task.
n If a task is in a Failed state, click on the icon next to the Failed status to view a detailed report
on the cause.
Note You can filter subtasks in the same way you filter tasks.
Note You can also sort the table by the contents of the Status and Last Occurrence columns.
sddc-manager-ui-activity.log /var/log/vmware/vcf/sddc-manager-ui-app
domainmanager-activity.log /var/log/vmware/vcf/domainmanager
operationsmanager-activity.log /var/log/vmware/vcf/operationsmanager
lcm-activity.log /var/log/vmware/vcf/lcm
vcf-commonsvcs-activity.log /var/log/vmware/vcf/commonsvcs
{
"timestamp":"", "username":"", "clientIP":"", "userAgent":"", "api":"", "httpMethod":"",
"httpStatus" :"", "operation" :"", "remoteIP" :""
}
n username: The username of the system from which the API request is triggered. For example:
"[email protected]".
n timestamp: Date and time of the operation performed in the UTC format "YYYY-MM-
DD'T'HH:MM:SS.SSSXXX". For example: "2022-01-19T16:59:01.9192".
n client IP: The IP address of the user’s system. For example: "10.0.0.253".
n userAgent: The user’s system information such as the web browser name, web browser
version, operating system name, and operating system architecture type. For example:
"Mozilla/5.0 (Windows NT 6.3; Win 64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/97.0.4692.71 Safari/537.36".
n api: The API invoked to perform the operation. For example: "/domainmanager/v1/vra/
domains".
n httpStatus: The response code received after invoking the API. For example: 200.
n operation: The operation or activity that was performed. For example: "Gets vRealize
Automation integration status for workload domains".
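Because each activity log entry is a JSON object, the logs are straightforward to post-process. The following sketch assumes you have copied an activity log such as domainmanager-activity.log to a workstation with PowerShell and that each entry is written as a single JSON line (adjust the parsing if entries span multiple lines); the file path is a placeholder.

# Placeholder path to a copied activity log file.
$logFile = "C:\logs\domainmanager-activity.log"

# Parse each JSON line and keep only failed API calls (HTTP status 400 and above).
$entries = Get-Content -Path $logFile | ForEach-Object { $_ | ConvertFrom-Json }
$failed = $entries | Where-Object { [int]$_.httpStatus -ge 400 }

# Summarize who called which API, when, and with what result.
$failed | Select-Object timestamp, username, clientIP, httpMethod, api, httpStatus, operation |
    Format-Table -AutoSize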
The log history is stored for 30 days. The maximum file size of the log retention file is set to 100
MB.
Log Analysis
You can perform log aggregation and analysis by integrating vRealize Log Insight with VMware
Cloud Foundation. For more information, see Implementation of Intelligent Logging and Analytics
for VMware Cloud Foundation.
When you initially deploy VMware Cloud Foundation, you complete the deployment parameter
workbook to provide the system with the information required for bring-up. This includes up to
two DNS servers and up to two NTP servers. You can reconfigure these settings at a later date,
using the SDDC Manager UI.
SDDC Manager uses DNS servers to provide name resolution for the components in the system.
When you update the DNS server configuration, SDDC Manager performs DNS configuration
updates for the following components:
n SDDC Manager
n vCenter Servers
n ESXi hosts
n NSX Managers
n vRealize Operations
n vRealize Automation
n VxRail Manager
If the update fails, SDDC Manager rolls back the DNS settings for the failed component. Fix the
underlying issue and retry the update starting with the failed component.
Note There is no rollback for vRealize Suite Lifecycle Manager. Check the logs, resolve any
issues, and retry the update.
Updating the DNS server configuration can take some time to complete, depending on the size of
your environment. Schedule DNS updates at a time that minimizes the impact to the system users.
Prerequisites
n Verify that both forward and reverse DNS resolution are functional for each VMware Cloud
Foundation component using the updated DNS server information.
n Verify that the new DNS server is reachable from each of the VMware Cloud Foundation
components.
n Verify all VMware Cloud Foundation components are reachable from SDDC Manager.
n Verify that all VMware Cloud Foundation components are in an Active state.
Procedure
c Expand the Edit DNS configuration section, update the Primary DNS server and
Alternative DNS server, and click Save.
SDDC Manager uses NTP servers to synchronize time between the components in the system.
You must have at least one NTP server. When you update the NTP server configuration, SDDC
Manager performs NTP configuration updates for the following components:
n SDDC Manager
n vCenter Servers
n ESXi hosts
n NSX Managers
n vRealize Operations
n vRealize Automation
n VxRail Manager
If the update fails, SDDC Manager rolls back the NTP settings for the failed component. Fix the
underlying issue and retry the update starting with the failed component.
Note There is no rollback for the vRealize Suite Lifecycle Manager. Check the logs, resolve any
issues, and retry the update.
Updating the NTP server configuration can take some time to complete, depending on the size of
your environment. Schedule NTP updates at a time that minimizes the impact to the system users.
Prerequisites
n Verify the new NTP server is reachable from the VMware Cloud Foundation components.
n Verify the time skew between the new NTP servers and the VMware Cloud Foundation
components is less than 5 minutes.
n Verify all VMware Cloud Foundation components are reachable from SDDC Manager.
Procedure
c Expand the Edit NTP configuration section, update the NTP server, and click Save.
To run the SoS utility, SSH in to the SDDC Manager appliance using the vcf user account. For
basic operations, run the sos command with the options required for your desired operation (see the example below).
To list the available command options, use the --help long option or the -h short option.
Note You can specify options in the conventional GNU/POSIX syntax, using -- for the long option
and - for the short option.
For privileged operations, enter su to switch to the root user, navigate to the /opt/vmware/sddc-support
directory, and type ./sos followed by the options required for your desired operation.
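For example, a minimal sketch that combines these two invocation styles with options documented later in this section (adapt the options to your desired operation):
sudo /opt/vmware/sddc-support/sos --get-vcf-summary
su
cd /opt/vmware/sddc-support
./sos --get-vcf-tasks-summary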
For information about collecting log files using the SoS utility, see Collect Logs for Your VMware
Cloud Foundation System.
Option Description
--short Display detailed health results only for failures and warnings.
--domain-name DOMAINNAME Specify the name of the workload domain on which to perform the SoS operation.
To run the operation on all workload domains, specify --domain-name ALL.
Note If you omit the --domain-name flag and workload domain name, the SoS
operation is performed only on the management domain.
--clusternames CLUSTERNAMES Specify the vSphere cluster names associated with a workload domain for which you
want to collect ESXi and Workload Management (WCP) logs.
Enter a comma-separated list of vSphere clusters. For example, --clusternames cluster1,cluster2.
Note If you specify --domain-name ALL, the --clusternames option is ignored.
--skip-known-host-check Skips the SSL thumbprint check for hosts in the known hosts list.
--include-free-hosts Collect logs for free ESXi hosts, in addition to in-use ESXi hosts.
Option Description
--get-vcf-summary Returns information about your VMware Cloud Foundation system, including
CEIP, workload domains, vSphere clusters, ESXi hosts, licensing, network pools, SDDC
Manager, and VCF services.
--get-vcf-tasks-summary Returns information about VMware Cloud Foundation tasks, including the time the task
was created and the status of the task.
--get-vcf-services-summary Returns information about SDDC Manager uptime and when VMware Cloud Foundation
services (for example, LCM) started and stopped.
./sos --option-name
Note For Fix-It-Up options, if you do not specify a workload domain, the command affects only
the management domain.
Option Description
n To enable SSH on ESXi nodes in all workload domains, include the flag --domain-name ALL.
n To deactivate SSH on ESXi nodes in all workload domains, include the flag --domain-name ALL.
n To enable SSH on vCenter Servers in all workload domains, include the flag --domain-name ALL.
--enable-lockdown-esxi Applies lockdown mode on ESXi nodes in the specified workload domains.
n To enable lockdown on ESXi nodes in a specific workload domain, include the flag --domain-name DOMAINNAME.
n To enable lockdown on ESXi nodes in all workload domains, include the flag --domain-name ALL.
--disable-lockdown-esxi Deactivates lockdown mode on ESXi nodes in the specified workload domains.
n To deactivate lockdown on ESXi nodes in a specific workload domain, include the flag --domain-name DOMAINNAME.
n To deactivate lockdown on ESXi nodes in all workload domains, include the flag --domain-name ALL.
--ondemand-service Include this flag to execute commands on all ESXi hosts in a workload domain.
--ondemand-service JSON file path Include this flag to execute commands in the JSON format on all ESXi hosts in a
workload domain. For example, /opt/vmware/sddc-support/<JSON file name>
A green status indicates that the health is normal, yellow provides a warning that attention might
be required, and red (critical) indicates that the component needs immediate attention.
Option Description
--connectivity-health Performs connectivity checks and validations for SDDC resources (NSX Managers, ESXi
hosts, vCenter Servers, and so on). This check performs a ping status check, SSH
connectivity status check, and API connectivity check for SDDC resources.
--services-health Performs a services health check to confirm whether services within the SDDC
Manager (like Lifecycle Management Server) and vCenter Server are running.
--compute-health Performs a compute health check, including ESXi host licenses, disk storage, disk
partitions, and health status.
--storage-health Performs a check on the vSAN disk health of the ESXi hosts and vSphere clusters.
Can be combined with --run-vsan-checks, as shown in the example after this table.
--run-vsan-checks This option cannot be run on its own and must be combined with --health-check or
--storage-health.
Runs a VM creation test to verify the vSAN cluster health. Running the test creates a
virtual machine on each host in the vSAN cluster. The test creates a VM and deletes
it. If the VM creation and deletion tasks are successful, you can assume that the vSAN cluster
components are working as expected and the cluster is functional.
Note You must not conduct the proactive test in a production environment as it
creates network traffic and impacts the vSAN workload.
--ntp-health Verifies whether the time on the components is synchronized with the NTP server in
the SDDC Manager appliance. It also verifies that the hardware and software time
stamps of ESXi hosts are within 5 minutes of the SDDC Manager appliance.
--general-health Checks ESXi for error dumps and gets NSX Manager and cluster status.
--certificate-health Verifies that the component certificates are valid (within the expiry date).
--get-inventory-info Returns inventory details for the VMware Cloud Foundation components, such as
vCenter Server, NSX-T Data Center, SDDC Manager, and ESXi hosts. Optionally, add
the flag --domain-name ALL to return details for all workload domains.
--password-health Returns the status of all current passwords, such as Last Changed Date, Expiry Date,
and so on.
--hardware-compatibility-report Validates ESXi hosts and vSAN devices and exports the compatibility report.
--json-output-dir JSONDIR Outputs the results of any health check as a JSON file to the specified directory,
JSONDIR.
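For example, a hedged sketch combining the storage health check with the vSAN VM creation test (run it only outside production, as noted above):
./sos --storage-health --run-vsan-checks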
./sos --password-health
n Check the DNS health for the workload domain named sfo-w01:
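The command itself is not shown above; a hedged sketch, assuming the SoS utility provides a --dns-health option (verify it with ./sos --help before use):
./sos --dns-health --domain-name sfo-w01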
Use these options when retrieving support logs from your environment's various components.
n If you run the SoS utility from SDDC Manager without specifying any component-specific
options, the SoS tool collects SDDC Manager, API, and VMware Cloud Foundation summary
logs. To collect all logs, use the --collect-all-logs option.
n If you run the SoS utility from Cloud Builder without specifying any component-specific
options, the SoS tool collects SDDC Manager, API, and Cloud Builder logs.
n To collect logs for a specific component, run the utility with the appropriate options.
In particular, the --domain-name option is important: if omitted, the SoS operation is
performed only on the management domain. See SoS Utility Options.
After running the SoS utility, you can examine the resulting logs to troubleshoot issues, or provide
them to VMware Technical Support if requested. VMware Technical Support might request these logs
to help resolve technical issues when you have submitted a support request. The diagnostic
information collected using the SoS utility includes logs for the various VMware software
components and software products deployed in your VMware Cloud Foundation environment.
Option Description
--sddc-manager-logs Collects logs from the SDDC Manager only. sddc<timestamp>.tgz contains logs from the
SDDC Manager file system's etc, tmp, usr, and var partitions.
--psc-logs Collects logs from the Platform Services Controller instances only.
--nsx-logs Collects logs from the NSX Manager and NSX Edge instances only.
--no-clean-old-logs Use this option to prevent the utility from removing any output from a previous collection run.
By default, before writing the output to the directory, the utility deletes the prior run's output
files that might be present. If you want to retain the older output files, specify this option.
--api-logs Collects output from REST endpoints for SDDC Manager inventory and LCM.
--rvc-logs Collects logs from the Ruby vSphere Console (RVC) only. RVC is an interface for ESXi and
vCenter.
Note If the Bash shell is not enabled in vCenter Server, RVC log collection will be skipped.
Note RVC logs are not collected by default with ./sos log collection. You must enable RVC
to collect RVC logs.
--collect-all-logs Collects logs for all components, except Workload Management and system debug logs. By
default, logs are collected for the management domain components.
To collect logs for all workload domains, specify --domain-name ALL.
To collect logs for a specific workload domain, specify --domain-name domain_name.
--domain-name DOMAINNAME Specify the name of the workload domain on which the SoS operation is to be performed.
To run the operation on all domains, specify --domain-name ALL.
Note If you omit the --domain-name flag and domain name, the SoS operation is performed
only on the management domain.
Procedure
1 Using SSH, log in to the SDDC Manager appliance as the vcf user.
2 To collect the logs, run the SoS utility without specifying any component-specific options.
sudo /opt/vmware/sddc-support/sos
Note By default, before writing the output to the directory, the utility deletes the prior run's
output files that might be present. If you want to retain the older output files, specify the
--no-clean-old-logs option.
If you do not specify the --log-dir option, the utility writes the output to the
/var/log/vmware/vcf/sddc-support directory in the SDDC Manager appliance.
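For example, a sketch that collects all logs for every workload domain, retains output from previous runs, and writes the results to a custom directory (the /tmp/sos-logs path is only an example):
sudo /opt/vmware/sddc-support/sos --collect-all-logs --domain-name ALL --no-clean-old-logs --log-dir /tmp/sos-logs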
Results
The utility collects the log files from the various software components in all of the racks and
writes the output to the directory named in the --log-dir option. Inside that directory, the utility
generates output in a specific directory structure.
Example
What to do next
The SoS utility writes the component log files into an output directory structure within the file
system of the SDDC Manager instance in which the command is initiated, for example:
Log Collection completed successfully for : [HEALTH-CHECK, SDDC-MANAGER, NSX_MANAGER, API-LOGS, ESX,
VMS_SCREENSHOT, VCENTER-SERVER, VCF-SUMMARY]
File Description
esx-FQDN.tgz Diagnostic information from running the vm-support command on the ESXi host.
An example file is esx-esxi-1.vrack.vsphere.local.tgz.
SmartInfo-FQDN.txt S.M.A.R.T. status of the ESXi host's hard drive (Self-Monitoring, Analysis, and Reporting Technology).
An example file is SmartInfo-esxi-1.vrack.vsphere.local.txt.
vsan-health-FQDN.txt vSAN cluster health information from running the standard command python /usr/lib/vmware/vsan/bin/vsan-health-status.pyc on the ESXi host.
An example file is vsan-health-esxi-1.vrack.vsphere.local.txt.
The number of files in this directory depends on the number of NSX Manager and NSX Edge
instances that are deployed in the rack. In a given rack, each management domain has a cluster
of three NSX Managers. The first VI workload domain has an additional cluster of three NSX
Managers. Subsequent VI workload domains can deploy their own NSX Manager cluster, or use
the same cluster as an existing VI workload domain. NSX Edge instances are optional.
File Description
VMware-NSX-Manager-tech-support-nsxmanagerIPaddr.tar.gz Standard NSX Manager compressed support bundle, generated using the NSX API POST https://nsxmanagerIPaddr/api/1.0/appliance-management/techsupportlogs/NSX, where nsxmanagerIPaddr is the IP address of the NSX Manager instance.
An example is VMware-NSX-Manager-tech-support-10.0.0.8.tar.gz.
VMware-NSX-Edge-tech-support-nsxmanagerIPaddr-edgeId.tgz Standard NSX Edge support bundle, generated using the NSX API to query the NSX Edge support logs: GET https://nsxmanagerIPaddr/api/4.0/edges/edgeId/techsupportlogs, where nsxmanagerIPaddr is the IP address of the NSX Manager instance and edgeId identifies the NSX Edge instance.
Note This information is only collected if NSX Edges are deployed.
An example is VMware-NSX-Edge-tech-support-10.0.0.7-edge-1.log.gz.
vc Directory Contents
In each rack-specific directory, the vc directory contains the diagnostic information files collected
for the vCenter Server instances deployed in that rack.
The number of files in this directory depends on the number of vCenter Server instances that are
deployed in the rack. In a given rack, each management domain has one vCenter Server instance,
and any VI workload domains in the rack each have one vCenter Server instance.
File Description
vc-vcsaFQDN-vm- Standard vCenter Server support bundle downloaded from the vCenter Server Appliance
support.tgz instance having a fully qualified domain name vcsaFQDN. The support bundle is obtained from
the instance using the standard vc-support.sh command.
You provided a password for the superuser account (user name vcf) in the deployment parameter
workbook before bring-up. After VMware Cloud Foundation is deployed, you can log in with
the superuser credentials and then add vCenter Server or AD users or groups to VMware Cloud
Foundation. Authentication to the SDDC Manager UI uses the VMware vCenter Single Sign-On
authentication service that is installed during the bring-up process for your VMware Cloud
Foundation system.
Users and groups can be assigned roles to determine what tasks they can perform from the UI and
API.
In addition to user accounts, VMware Cloud Foundation includes the following accounts:
n Automation accounts for accessing VMware Cloud Foundation APIs. You can use these
accounts in automation scripts.
n Local account for accessing VMware Cloud Foundation APIs when vCenter Server is down.
For a VMware Cloud Foundation 4.1 deployment, you can specify the local account password
in the deployment parameter workbook. If you upgraded to VMware Cloud Foundation 4.1,
you configure the local account through VMware Cloud Foundation API.
n Service accounts are automatically created by VMware Cloud Foundation for inter-product
interaction. These are for system use only.
Prerequisites
Only a user with the ADMIN role can perform this task.
Procedure
3 Select one or more users or groups by clicking the check box next to the user or group.
You can either search for a user or group by name, or filter by user type or domain.
Role Description
ADMIN This role has access to all the functionality of the UI and API.
VIEWER This role can only view the SDDC Manager. User management and password
management are hidden from this role.
Prerequisites
Only a user with the ADMIN role can perform this task.
Procedure
2 Click the vertical ellipsis (three dots) next to a user or group name and click Remove.
3 Click Delete.
Procedure
For more information about roles, see Chapter 23 User and Group Management.
You can also download the response by clicking the download icon to the right of
LocalUser (admin@local).
4 If the local account is not configured, perform the following tasks to configure the local
account:
n Minimum length: 12
n At least one lowercase letter, one uppercase letter, a number, and one of the following
special characters: ! % @ $ ^ # ? *
Note You must remember the password that you created because it cannot be
retrieved. Local account passwords are used in password rotation.
Procedure
For more about roles, see Chapter 23 User and Group Management.
4 Create a service account with the ADMIN role and get the service account's API key.
[
{
"name": "service_account",
"type": "SERVICE",
"role":
{
"id": "317cb292-802f-ca6a-e57e-3ac2b707fe34"
}
}
]
c Click Execute.
c Click TokenCreationSpec.
{
"apiKey": "qsfqnYgyxXQ892Jk90HXyuEMgE3SgfTS"
}
e Click Execute.
f In the Response, click TokenPair and RefreshToken and save the access and refresh
tokens.
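Outside the Developer Center, a hedged equivalent with curl, assuming the SDDC Manager token endpoint /v1/tokens and substituting your own SDDC Manager FQDN and API key:
curl -k -X POST https://<sddc_manager_fqdn>/v1/tokens -H "Content-Type: application/json" -d '{"apiKey": "<service_account_api_key>"}'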
You can update or rotate the password for the root and mystic users of the VxRail Manager and
the root user of ESXi hosts using the SDDC Manager UI. To update or rotate the passwords for
other users refer to the Dell EMC VxRail documentation.
To provide optimal security and proactively prevent any passwords from expiring, you should
rotate passwords every 80 days.
n Rotate Passwords
n Remediate Passwords
Rotate Passwords
As a security measure, you can rotate passwords for the logical and physical accounts on all
racks in your system. The process of password rotation generates randomized passwords for
the selected accounts. You can rotate passwords manually or set up auto-rotation for accounts
managed by SDDC Manager. By default, auto-rotation is enabled for vCenter Server.
n VxRail Manager
n ESXi
n vCenter Server
n NSX Manager
n vRealize Operations
n vRealize Automation
n 20 characters in length
n At least one uppercase letter, a number, and one of the following special characters: ! @ # $
^ *
If you changed the vCenter Server password length using the vSphere Client or the ESXi password
length using the VMware Host Client, rotating the password for those components from SDDC
Manager generates a password that complies with the password length that you specified.
To update the SDDC Manager root, super user, and API passwords, see Updating SDDC Manager
Passwords.
Prerequisites
n Verify that there are no currently failed workflows in SDDC Manager. To check for failed
workflows, click Dashboard in the navigation pane and expand the Tasks pane at the bottom
of the page.
n Verify that no active workflows are running or are scheduled to run during the brief time
period that the password rotation process is running. It is recommended that you schedule
password rotation for a time when you expect to have no running workflows.
n Only a user with the ADMIN role can perform this task.
Procedure
1 In the navigation pane, click Administration > Security > Password Management.
The Password Management page displays a table of the credentials that SDDC Manager is
able to manage. For each account it lists username, FQDN of the component it belongs
to, workload domain, last modified date, and rotation schedule and next rotation date if
applicable.
You can click the filter icon next to the table header and filter the results by a string value. For
example, click the icon next to User Name and enter admin to display only domains with that
user name value.
2 Select the account for which you want to rotate passwords from the Component drop-down
menu. For example, ESXI.
3 Select one or more accounts and click one of the following operations.
n Rotate Now
n Schedule Rotation
You can set the password rotation interval (30 days, 60 days, or 90 days). You can also
deactivate the schedule.
A message appears at the top of the page showing the progress of the operation. The Tasks
panel also shows detailed status for the password rotation operation. To view sub-tasks, click
the task name. As each of these tasks is run, the status is updated. If the task fails, you can
click Retry.
Results
n Minimum length: 12
n Maximum length: 20
n At least one lowercase letter, one uppercase letter, a number, and one of the following special
characters: ! @ # $ ^ *
n Must not be a dictionary word
n Must not be a palindrome
Prerequisites
n Verify that there are no currently failed workflows in your VMware Cloud Foundation system.
To check for failed workflows, click Dashboard in the navigation pane and expand the Tasks
pane at the bottom of the page.
n Verify that no active workflows are running or are scheduled to run during the manual
password update.
n Only a user with the ADMIN role can perform this task. For more information about roles, see
Chapter 23 User and Group Management.
Procedure
1 From the navigation pane, select Administration > Security > Password Management.
The Password Management page displays a table with detailed information about all domains,
including their account, credential type, FQDN, IP address, and user name. This table is
dynamic. Each column can be sorted.
You can click the filter icon next to the table header and filter the results by a string value. For
example, click the filter icon next to User Name and enter admin to display only domains with
that user name value.
2 Select the component that includes the account for which you want to update the password
from the drop-down menu.
3 Select the account whose password you want to update, click the vertical ellipsis (three dots),
and click Update.
The Update Password dialog box appears. This dialog box also displays the account name,
account type, credential type, and user name, in case you must confirm you have selected the
correct account.
5 Click Update.
A message appears at the top of the page showing the progress of the operation. The Tasks
panel also shows detailed status of the password update operation. To view sub-tasks, click
the task name.
Results
Remediate Passwords
When an error occurs, for example after a password expires, you must manually reset the
password in the component. After you reset the password in a component, you must remediate
the password in SDDC Manager to update the password in the SDDC Manager database and the
dependent Cloud Foundation workflows.
To resolve any errors that might have occurred during password rotation or update, you must
use password remediation. Password remediation syncs the password of the account stored in the
SDDC Manager with the updated password in the component.
Note You can remediate the password for only one account at a time.
Although the individual VMware Cloud Foundation components support different password
requirements, you must set passwords following a common set of requirements across all
components. For information on updating passwords manually, see Manually Update Passwords.
Prerequisites
n Verify that your VMware Cloud Foundation system contains no failed workflows. To check for failed
workflows, click Dashboard in the navigation pane and expand the Tasks pane at the bottom
of the page.
n Verify that no workflows are running or are scheduled to run while you remediate the
password.
n Only a user with the ADMIN role can perform this task. For more information about roles, see
Chapter 23 User and Group Management.
Procedure
1 From the navigation pane, select Administration > Security > Password Management.
The Password Management page displays a table with detailed information about all domains,
including their component, credential type, FQDN, IP address, and user name. This table is
dynamic. Each column can be sorted.
You can click the filter icon next to the table header and filter the results by a string value. For
example, click the filter icon next to User Name and enter admin to display only domains with
that user name value.
2 Select the component that includes the account for which you want to remediate a password
from the drop-down menu.
3 Select the account whose password you want to remediate, click the vertical ellipsis (three
dots), and click Remediate.
The Remediate Password dialog box appears. This dialog box displays the entity name,
account type, credential type, and user name, in case you must confirm you have selected
the correct account.
4 Enter and confirm the password that was set manually on the component.
5 Click Remediate.
A message appears at the top of the page showing the progress of the operation. The Task
panel also shows detailed status of the password remediation operation. To view subtasks, you
can click the task name.
Results
Prerequisites
Only a user with the ADMIN role can perform this task.
Procedure
1 SSH in to the SDDC Manager appliance using the vcf user account.
Note Although the password management CLI commands are located in /usr/bin, you can
run them from any directory.
lookup_passwords
You must enter the user name and password for a user with the ADMIN role.
4 (Optional) Save the command output to a secure location with encryption so that you can
access it later and use it to log in to the accounts as needed.
Procedure
Option Description
Results
n At least 12 characters
Procedure
For more information about roles, see Chapter 23 User and Group Management.
6 In the Value box, type the new and old passwords and click Execute.
A response of Status: 204, No Content indicates that the password was successfully
updated.
n Must include:
n a number
n *{}[]()/\'"`~,;:.<>
Procedure
1 In a web browser, log in to the management domain vCenter Server using the vSphere Client
(https://<vcenter_server_fqdn>/ui).
2 In the VMs and Templates inventory, expand the management domain vCenter Server and the
management virtual machines folder.
3 Right-click the SDDC Manager virtual machine, and select Open Remote Console.
4 Click within the console window and press Enter on the Login menu item.
5 Type root as the user name and enter the current password for the root user.
7 When prompted for a new password, enter a different password than the previous one and
click Enter.
You can back up and restore SDDC Manager with an image-based or a file-based solution. File-based
backup is recommended for customers who are comfortable with configuring backups
using APIs and are not using composable servers.
For a file-based backup of SDDC Manager VM, the state of the VM is exported to a file that
is stored in a domain different than the one where the product is running. You can configure
a backup schedule for the SDDC Manager VM and enable task-based (state-change driven)
backups. When task-based backups are enabled, a backup is triggered after each SDDC Manager
task (such as workload domain and host operations or password rotation).
You can also define a backup retention policy to comply with your company's retention policy. For
more information, see the VMware Cloud Foundation on Dell EMC VxRail API Reference Guide.
By default, NSX Manager file-based backups are taken on the SFTP server that is built into SDDC
Manager. It is recommended that you configure an external SFTP server as a backup location for
the following reasons:
n An external SFTP server is a prerequisite for restoring SDDC Manager file-based backups.
n Using an external SFTP server provides better protection against failures because it decouples
NSX backups from SDDC Manager backups.
This section of the documentation provides instructions on backing up and restoring SDDC
Manager, and on configuring the built-in automation of NSX backups. For information on backing
up and restoring a full-stack SDDC, see VMware Validated Design Backup and Restore.
n File-Based Restore for SDDC Manager, vCenter Server, and NSX-T Data Center
Procedure
2 On the Backup page, click the Site Settings tab and then click Register External.
To obtain and verify the SSH fingerprint of the target system, connect to the SDDC Manager
appliance over SSH and run the following command:
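The command itself is not reproduced here; one possible approach using standard OpenSSH tools (an assumption, not the documented command; replace <sftp_server_fqdn> with your backup target) is:
ssh-keygen -lf <(ssh-keyscan -t rsa <sftp_server_fqdn> 2>/dev/null)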
Setting Value
Port 22
Backup Directory The directory on the SFTP server where backups are
saved.
For example: /backups/.
4 In the Confirm your changes to backup settings dialog box, click Confirm.
To ensure that all management components are backed up correctly, you must create a series of
backup jobs that capture the state of a set of related components at a common point in time. With
some components, simultaneous backups of the component nodes ensure that you can restore
the component to a state where the nodes are logically consistent with each other and eliminate the
necessity for further logical integrity remediation of the component.
Note
n You must monitor the space utilization on the SFTP server to ensure that you have sufficient
storage space to accommodate all backups taken within the retention period.
n Do not make any changes to the /opt/vmware/vcf directory on the SDDC Manager VM. If
this directory contains any large files, backups may fail.
Prerequisites
Verify that you have an SFTP server on the network to serve as a target of the file-based backups.
Only a user with the Admin role can perform this task.
Procedure
4 On the Backup Schedule page, enter the settings and click Save.
Setting Value
Results
The status and the start time of the backup are displayed in the UI. You have set the SDDC
Manager backup schedule to run daily at 04:02 AM and after each change of state.
Procedure
4 In the Create backup schedule dialog box, enter these values and click Create.
Setting Value
Results
Prerequisites
n In the vSphere Client, for each vSphere cluster that is managed by the vCenter Server, note
the current vSphere DRS Automation Level setting and then change the setting to Manual.
After the vCenter Server upgrade is complete, you can change the vSphere DRS Automation
Level setting back to its original value. See KB 87631 for information about using VMware
PowerCLI to change the vSphere DRS Automation Level.
Procedure
4 If you already have a backup schedule set up, select Use backup location and user name from
backup schedule and click Start.
5 If you do not already have a backup schedule, enter the following information and click Start.
Setting Value
What to do next
In order to restore vCenter Server, you will need the VMware vCenter Server Appliance ISO file
that matches the version you backed up.
n Identify the required vCenter Server version. In the vCenter Server Management Interface,
click Summary in the left navigation pane to see the vCenter Server version and build number.
n Download the VMware vCenter Server Appliance ISO file for that version from VMware
Customer Connect.
You can use the exported file to create multiple copies of the vSphere Distributed Switch
configuration on an existing deployment, or overwrite the settings of existing vSphere Distributed
Switch instances and port groups.
You must back up the configuration of a vSphere Distributed Switch immediately after each change
in the configuration of that switch.
Procedure
4 Expand the Management Networks folder, right-click the distributed switch, and select
Settings > Export configuration.
5 In the Export configuration dialog box, select Distributed switch and all port groups.
6 In the Description text box enter the date and time of export, and click OK.
7 Copy the backup zip file to a secure location from where you can retrieve the file and use it if a
failure of the appliance occurs.
Use this guidance as appropriate based on the exact nature of the failure encountered within
your environment. Sometimes, you can recover localized logical failures by restoring individual
components. In more severe cases, such as a complete and irretrievable hardware failure,
to restore the operational status of your SDDC, you must perform a complex set of manual
deployments and restore sequences. In failure scenarios where there is a risk of data loss, where
data loss has already occurred, or where a catastrophic failure is involved, contact VMware Support to
review your recovery plan before taking any steps to remediate the situation.
Prerequisites
n Verify that you have a valid file-based backup of the failed SDDC Manager instance.
To be valid, the backup must be of the same version as the version of the SDDC Manager
appliance on which you plan to restore the instance.
n SFTP Server IP
n Encryption Password
Procedure
What to do next
The backup file contains sensitive data about your VMware Cloud Foundation instance, including
passwords in plain text. As a best practice, you must control access to the decrypted files and
securely delete them after you complete the restore operation.
Prerequisites
Verify that your host machine with access to the SDDC has OpenSSL installed.
Note The procedures have been written based on the host machine being a Linux-based
operating system.
Procedure
1 Identify the backup file for the restore and download it from the SFTP server to your host
machine.
2 On your host machine, open a terminal and run the following command to extract the content
of the backup file.
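The extraction command is not shown here; a minimal sketch, assuming the backup is encrypted with AES-256-CBC and using the encryption password that you configured for backups (replace <backup_file> with the downloaded file name):
openssl enc -d -aes-256-cbc -md sha256 -in <backup_file>.tar.gz | tar -xz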
4 In the extracted folder, locate and open the metadata.json file in a text editor.
6 In a web browser, paste the URL and download the OVA file.
7 In the extracted folder, locate and view the contents of the security_password_vault.json
file.
8 Locate the entityType BACKUP value and record the backup password.
Procedure
1 In a web browser, log in to management domain vCenter Server by using the vSphere Client
(https://<vcenter_server_fqdn>/ui).
5 On the Select an OVF template page, select Local file, click Upload files, browse to the
location of the SDDC Manager OVA file, click Open, and click Next.
6 On the Select a name and folder page, in the Virtual machine name text box, enter a virtual
machine name, and click Next.
8 On the Review details page, review the settings and click Next.
9 On the License agreements page, accept the license agreement and click Next.
10 On the Select storage page, select the vSAN datastore and click Next.
The datastore must match the vsan_datastore value in the metadata.json file that you
downloaded during the preparation for the restore.
11 On the Select networks page, from the Destination network drop-down menu, select the
management network distributed port group and click Next.
The distributed port group must match the port_group value in the metadata.json file that
you downloaded during the preparation for the restore.
12 On the Customize template page, enter the following values and click Next.
Setting Description
Enter root user password You can use the original root user password or a new
password.
Enter login (vcf) user password You can use the original vcf user password or a new
password.
Enter basic auth user password You can use the original admin user password or a new
password.
Enter backup (backup) user password The backup password that you saved during the
preparation for the restore. This password can be
changed later if desired.
Enter Local user password You can use the original Local user password or a new
password.
Domain search path The domain search path(s) for the appliance.
13 On the Ready to complete page, click Finish and wait for the process to complete.
14 When the SDDC Manager appliance deployment completes, expand the management folder.
15 Right-click the SDDC Manager appliance and select Snapshots > Take Snapshot.
16 Right-click the SDDC Manager appliance, select Power > Power On.
17 On the host machine, copy the encrypted backup file to the /tmp folder on the newly deployed
SDDC Manager appliance by running the following command. When prompted, enter the
vcf_user_password.
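A sketch of this copy operation using scp (replace the file name and FQDN with your own values):
scp <backup_file>.tar.gz vcf@<sddc_manager_fqdn>:/tmp/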
18 On the host machine, obtain the authentication token from the SDDC Manager appliance in
order to be able to execute the restore process by running the following command:
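A hedged sketch of this request, assuming the SDDC Manager API endpoint /v1/tokens and the admin@local account:
curl -k -X POST https://<sddc_manager_fqdn>/v1/tokens -H "Content-Type: application/json" -d '{"username": "admin@local", "password": "<admin_local_password>"}'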
19 On the host machine with access to the SDDC Manager, open a terminal and run the command
to start the restore process.
21 Monitor the restore task by using the following command until the status becomes
Successful.
What to do next
Refresh the SSH keys that are stored in the SDDC Manager inventory. See VMware Cloud
Foundation SDDC Manager Recovery Scripts (79004).
Procedure
Prerequisites
n Verify that you have a valid file-based backup of the failed vCenter Server instance.
To be valid, the backup must be of the version of the vCenter Server Appliance on which you
plan to restore the instance.
n SFTP Server IP
n Encryption Password
Procedure
Prerequisites
Because the Management domain vCenter Server might be unavailable to authenticate the login,
you use the SDDC Manager API via the shell to retrieve this information.
Procedure
3 For each vCenter Server instance, record the values of these settings.
Setting Value
version version_number-build_number
4 Verify that the vCenter Server version retrieved from SDDC Manager is the same as the version
associated with the backup file that you plan to restore.
If you plan to restore the management domain vCenter Server, you must also retrieve the
credentials of a healthy management domain ESXi host.
Before you can query the SDDC Manager API, you must obtain an API access token by using the
admin@local account.
Prerequisites
Note If SDDC Manager is not operational, you can retrieve the required vCenter Server root,
vCenter Single Sign-On administrator, and ESXi root credentials from the file-based backup of
SDDC Manager. See Prepare for Restoring SDDC Manager.
Procedure
1 Log in to your host machine with access to the SDDC and open a terminal.
a Run the command to obtain an access token by using the admin@local credentials.
a Run the following command to retrieve the vCenter Server root credentials.
Setting Value
username root
password vcenter_server_root_password
a Run the following command to retrieve the vCenter Single Sign-On administrator
credentials.
Setting Value
username [email protected]
password vsphere_admin_password
5 If you plan to restore the management domain vCenter Server, retrieve the credentials for a
healthy management domain ESXi host.
a Run the following command to retrieve the credentials for a management domain ESXi
host.
username root
password esxi_root_password
If you are restoring both the management domain vCenter Server and a VI workload domain
vCenter Server, you must restore the management domain vCenter Server before restoring the VI
workload domain vCenter Server.
You deploy a new vCenter Server appliance and perform a file-based restore. If you are restoring
the management domain vCenter Server, you deploy the new appliance on a healthy ESXi host
in the management domain vSAN cluster. If you are restoring the VI workload domain vCenter
Server, you deploy the new appliance on the management domain vCenter Server.
Prerequisites
n Download the vCenter Server ISO file for the version of the failed instance. See Retrieve the
vCenter Server Deployment Details.
n If you are recovering the VI workload domain vCenter Server, verify that the management
vCenter Server is available.
Procedure
1 Mount the vCenter Server ISO image to your host machine with access to the SDDC and run
the UI installer for your operating system.
2 Click Restore.
b On the End user license agreement page, select the I accept the terms of the license
agreement check box and click Next.
c On the Enter backup details page, enter these values and click Next.
Password vsphere-service-account-password
d On the Review backup information page, review the backup details, record the vCenter
Server configuration information, and click Next.
You use the vCenter Server configuration information at a later step to determine the
deployment size for the new vCenter Server appliance.
e On the vCenter Server deployment target page, enter the values by using the information
that you retrieved during the preparation for the restore, and click Next.
ESXi host or vCenter Server name The FQDN of the first ESXi host (when restoring the management domain vCenter Server) or the FQDN of the management vCenter Server (when restoring a VI workload domain vCenter Server).
f In the Certificate warning dialog box, click Yes to accept the host certificate.
g On the Set up a target vCenter Server VM page, enter the values by using the information
that you retrieved during the preparation for the restore, and click Next.
Setting Value
h On the Select deployment size page, select the deployment size that corresponds with the
vCenter Server configuration information from Step 3.d and click Next.
Refer to the vSphere documentation to map the CPU count recorded in Step 3.d to a vCenter
Server configuration size.
i On the Select datastore page, select these values, and click Next.
Setting Value
j On the Configure network settings page, enter the values by using the information that
you retrieved during the preparation for the restore, and click Next.
Setting Value
IP version IPV4
IP assignment static
k On the Ready to complete stage 1 page, review the restore settings and click Finish.
b On the Backup details page, in the Encryption password text box, enter the encryption
password of the SFTP server and click Next.
c On the Single Sign-On configuration page, enter these values and click Next.
Setting Value
d On the Ready to complete page, review the restore details and click Finish.
What to do next
Refresh the SSH keys that are stored in the SDDC Manager inventory. See VMware Cloud
Foundation SDDC Manager Recovery Scripts (79004).
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
4 Right-click the appliance of the restored vCenter Server instance and select Move to folder.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
2 In the inventory, click the management domain vCenter Server inventory, click the Summary
tab, and verify that there are no unexpected vCenter Server alerts.
3 Click the Linked vCenter Server systems tab and verify that the list contains all other vCenter
Server instances in the vCenter Single Sign-On domain.
4 Log in to the recovered vCenter Server instance by using a Secure Shell (SSH) client.
cd /usr/lib/vmware-vmdir/bin
a Run the command to list the current replication partners of the vCenter Server instance
with the current replication status between the nodes.
b Verify that for each partner, the vdcrepadmin command output contains Host
available: Yes, Status available: Yes, and Partner is 0 changes behind.
c Because resyncing might take some time, if you observe significant differences, wait
five minutes and repeat this step.
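A hedged sketch of the replication status command from the directory above, assuming the vdcrepadmin syntax and substituting your vCenter Single Sign-On administrator password:
./vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w <vsphere_admin_password>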
You can validate the state of the restored vCenter Server instance by using the Supportability and
Serviceability tool (SoS) and the SDDC Manager patch/upgrade precheck function.
Procedure
a Click the workload domain name and click the Updates/Patches tab.
b Click Precheck.
c Click View status to review the precheck result for the vCenter Server instance and verify
that the status is Succeeded.
This procedure restores only the vSphere Distributed Switch configuration of a vCenter Server
instance.
The restore operation changes the settings on the vSphere Distributed Switch back to the settings
saved in the configuration file. The operation overwrites the current settings of the vSphere
Distributed Switch and its port groups. The operation does not delete existing port groups that
are not a part of the configuration file.
The vSphere Distributed Switch configuration is part of the vCenter Server backup. If you want to
restore the entire vCenter Server instance, see Restore vCenter Server.
Procedure
1 In a web browser, log in to the vCenter Server by using the vSphere Client (https://
<vcenter_server_fqdn>/ui).
4 Expand the Management networks folder, right-click the distributed switch and select
Settings > Restore configuration.
5 On the Restore switch configuration page, click Browse, navigate to the location of the
configuration file for the distributed switch, and click Open.
6 Select the Restore distributed switch and all port groups radio-button and click Next.
7 On the Ready to complete page, review the changes and click Finish.
9 Review the switch configuration to verify that it is as you expect after the restore.
Prerequisites
n Verify that you have a valid file-based backup of the failed NSX Manager instance.
n SFTP Server IP
n Encryption Password
Procedure
5 Update or Recreate the VM Anti-Affinity Rule for the NSX Manager Cluster Nodes
During the NSX Manager bring-up process, SDDC Manager creates a VM anti-affinity rule
to prevent the VMs of the NSX Manager cluster from running on the same ESXi host. If you
redeployed all NSX Manager cluster nodes, you must recreate this rule. If you redeployed
one or two nodes of the cluster, you must add the new VMs to the existing rule.
Procedure
2 Retrieve the Credentials for Restoring NSX Manager from SDDC Manager
Before restoring a failed NSX Manager instance, you must retrieve the NSX Manager root and
admin credentials from the SDDC Manager inventory.
Procedure
4 Under Current versions, in the NSX panel, locate and record the NSX upgrade coordinator
value.
5 Verify that the NSX-T Data Center version retrieved from SDDC Manager is the same as the
version associated with the backup file that you plan to restore.
Retrieve the Credentials for Restoring NSX Manager from SDDC Manager
Before restoring a failed NSX Manager instance, you must retrieve the NSX Manager root and
admin credentials from the SDDC Manager inventory.
Before you can query the SDDC Manager API, you must obtain an API access token by using an
API service account.
Procedure
1 Log in to your host machine with access to the SDDC and open a terminal.
a Run the command to obtain an access token by using the admin@local account
credentials.
a Run the command to retrieve the NSX Manager root and admin credentials.
The command returns the NSX Manager root and admin credentials.
b Record the NSX Manager root and admin credentials for the instance you are restoring.
Important This procedure is not applicable in use cases when there are operational NSX Manager
cluster nodes.
n If two of the three NSX Manager nodes in the NSX Manager cluster are in a failed state,
you begin the restore process by deactivating the cluster. See Deactivate the NSX Manager
Cluster.
n If only one of the three NSX Manager nodes in the NSX Manager cluster is in a failed state,
you directly restore the failed node to the cluster. See Restore an NSX Manager Node to an
Existing NSX Manager Cluster.
Procedure
2 Restore the First Node in a Failed NSX Manager Cluster from a File-Based Backup
You restore the file-based backup of the first NSX Manager cluster node to the newly
deployed NSX Manager instance.
Prerequisites
n Download the NSX Manager OVA file for the version of the failed NSX Manager cluster. See
Retrieve the NSX Manager Version from SDDC Manager.
n Verify that the backup file that you plan to restore is associated with the version of the failed
NSX Manager cluster.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
5 On the Select an OVF template page, select Local file, click Upload files, navigate to the
location of the NSX Manager OVA file, click Open, and click Next.
6 On the Select a name and folder page, enter the VM name and click Next.
9 On the Configuration page, select the appropriate size and click Next.
For the management domain, select Medium and for workload domains, select Large unless
you changed these defaults during deployment.
10 On the Select storage page, select the vSAN datastore, and click Next.
11 On the Select networks page, from the Destination network drop-down menu, select the
management network distributed port group, and click Next.
12 On the Customize template page, enter these values and click Next.
Default IPv4 gateway Enter the default gateway for the appliance.
Management network IPv4 address Enter the IP Address for the appliance.
Management network netmask Enter the subnet mask for the appliance.
DNS server list Enter the DNS servers for the appliance.
NTP server list Enter the NTP server for the appliance.
13 On the Ready to complete page, review the deployment details and click Finish.
Restore the First Node in a Failed NSX Manager Cluster from a File-Based Backup
You restore the file-based backup of the first NSX Manager cluster node to the newly deployed
NSX Manager instance.
Procedure
1 In a web browser, log in to the NSX Manager node for the domain by using the user interface
(https://<nsx_manager_node_fqdn>/login.jsp?local=true)
3 In the left navigation pane, under Lifecycle management, click Backup and restore.
5 In the Backup configuration dialog box, enter these values, and click Save.
Setting Value
Protocol SFTP
Port 22
Password service_account_password
6 Under Backup history, select the target backup, and click Restore.
7 During the restore, when prompted, reject adding NSX Manager nodes by clicking I
understand and Resume.
Results
A progress bar displays the status of the restore operation with the current step of the process.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
3 In the inventory expand vCenter Server > Datacenter > NSX Folder.
4 Click the VM name of the newly deployed first NSX Manager cluster node, click Launch Web
Console, and log in by using administrator credentials.
Setting Value
Password nsx-t_admin_password
Important This procedure is not applicable in use cases when there are two operational NSX
Manager cluster nodes.
If only one of the three NSX Manager nodes in the NSX Manager cluster is in a failed state, after
you prepared for the restore, you directly restore the failed node to the cluster. See Restore an
NSX Manager Node to an Existing NSX Manager Cluster.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
3 In the inventory expand vCenter Server > Datacenter > NSX Folder.
4 Click the VM of the operational NSX Manager node in the cluster, click Launch Web Console,
and log in by using administrator credentials.
Setting Value
Password nsx-t_admin_password
deactivate cluster
6 On the Are you sure you want to remove all other nodes from this cluster? (yes/no) prompt,
enter yes.
What to do next
Power off and delete the two failed NSX Manager nodes from inventory.
Procedure
1 Detach the Failed NSX Manager Node from the NSX Manager Cluster
Before you recover a failed NSX Manager node, you must detach the failed node from the
NSX Manager cluster.
3 Join the New NSX Manager Node to the NSX Manager Cluster
You join the newly deployed NSX Manager node to the cluster by using the virtual machine
web console from the vSphere Client.
Detach the Failed NSX Manager Node from the NSX Manager Cluster
Before you recover a failed NSX Manager node, you must detach the failed node from the NSX
Manager cluster.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
3 In the inventory expand vCenter Server > Datacenter > NSX Folder.
4 Click the VM of an operational NSX Manager node in the cluster, click Launch Web Console,
and log in by using administrator credentials.
Setting Value
Password nsx-t_admin_password
6 Run the command to detach the failed node from the cluster.
7 When the detaching process finishes, run the command to view the cluster status.
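A hedged sketch of these NSX CLI commands (the node UUID is a placeholder obtained from the cluster status output):
get cluster status
detach node <failed_node_uuid>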
Prerequisites
Download the NSX Manager OVA file for the version of the failed NSX Manager instance. See
Retrieve the NSX Manager Version from SDDC Manager.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
5 On the Select an OVF template page, select Local file, click Upload files, navigate to the
location of the NSX Manager OVA file, click Open, and click Next.
6 On the Select a name and folder page, in the Virtual machine name text box, enter the VM name
of the failed node, and click Next.
10 On the Select storage page, select the vSAN datastore, and click Next.
11 On the Select networks page, from the Destination network drop-down menu, select the
management network distributed port group, and click Next.
12 On the Customize template page, enter these values and click Next.
Setting Value
Hostname failed_node_FQDN
Default IPv4 gateway Enter the default gateway for the appliance.
Management network netmask Enter the subnet mask for the appliance.
DNS server list Enter the DNS servers for the appliance.
NTP servers list Enter the NTP servers for the appliance.
13 On the Ready to complete page, review the deployment details and click Finish.
Join the New NSX Manager Node to the NSX Manager Cluster
You join the newly deployed NSX Manager node to the cluster by using the virtual machine web
console from the vSphere Client.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
3 In the inventory expand vCenter Server > Datacenter > NSX Folder.
4 Click the VM of an operational NSX Manager node in the cluster, click Launch web console,
and log in by using administrator credentials.
Setting Value
Password nsx-t_admin_password
8 In the vSphere Client, click the VM of the newly deployed NSX Manager node, click Launch
Web console, and log in by using administrator credentials.
Setting Value
Password nsx-t_admin_password
9 Run the command to join the new NSX Manager node to the cluster.
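A hedged sketch of the join syntax, assuming the NSX CLI join command and using placeholders for the operational node IP address, cluster ID, admin password, and API thumbprint retrieved from the operational node:
join <operational_node_ip> cluster-id <cluster_id> username admin password <admin_password> thumbprint <api_thumbprint>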
To view the state of the NSX Manager cluster, you log in to the NSX Manager for the particular
domain.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user interface
(https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
4 Verify that the Cluster status is green and Stable and that each cluster node is Available.
In the following steps, replace <node_FQDN> with the FQDN of the new NSX Manager node.
Procedure
https://<node_FQDN>/login.jsp?local=true
2 Generate a certificate signing request (CSR) for the new NSX Manager node.
a Click System > Certificates > CSRs > Generate CSR and select Generate CSR.
Option Description
Common Name Enter the fully qualified domain name (FQDN) of the
node.
For example, nsx-wld-3.vrack.vsphere.local.
Key Size Set the key bits size of the encryption algorithm.
For example, 2048.
c Click Save.
3 Select the CSR then click Actions and select Download CSR PEM.
4 Rename the downloaded file to <node_FQDN>.csr and upload it to the root directory on the
management domain vCenter Server.
5 SSH to the management domain vCenter Server as the root user and run the following
command:
bash shell
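The signing command itself is not shown; a hedged sketch using the VMCA root certificate and key paths that are typically present on the vCenter Server Appliance (verify the paths and validity period for your environment):
openssl x509 -req -in <node_FQDN>.csr -CA /var/lib/vmware/vmca/root.cer -CAkey /var/lib/vmware/vmca/privatekey.pem -CAcreateserial -out <node_FQDN>.crt -days 1825 -sha256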
Signature ok
subject=/L=PA/ST=CA/C=US/OU=VMware Engineering/O=VMware/CN=nsx-wld-3.vrack.vsphere.local
Getting CA Private Key
8 Download the <node_FQDN>.crt file from the vCenter Server root directory.
https://<node_FQDN>/login.jsp?local=true
c Select the CSR for the new node, click Actions, and select Import Certificate for CSR.
b Locate and copy the ID of the certificate for the new node.
c From a system that has the curl command and has access to the NSX Manager nodes (for
example, vCenter Server or SDDC Manager), run the following command to install the
CA-signed certificate on the new NSX Manager node.
Replace <nsx_admin_password> with the admin password for the NSX Manager node.
Replace <certificate_id> with the certificate ID from step 10b.
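A hedged sketch of this call, assuming the NSX-T node API action for applying a certificate (verify the endpoint against your NSX-T API documentation before use):
curl -k -u admin:<nsx_admin_password> -X POST "https://<node_FQDN>/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate_id>"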
11 In the SDDC Manager UI, replace the NSX Manager certificates with trusted CA-signed
certificates from a Certificate Authority (CA). See Chapter 9 Certificate Management.
What to do next
Important If assigning the certificate fails because the certificate revocation list (CRL) verification
fails, see https://kb.vmware.com/kb/78794. If you deactivate the CRL checking to assign the
certificate, after assigning the certificate, you must re-enable the CRL checking.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
3 In the inventory expand vCenter Server > Datacenter > NSX Folder.
4 Right-click the new NSX Manager VM and select Guest OS > Restart.
To view the system status of the NSX Manager cluster, you log in to the NSX Manager for the
particular domain.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user interface
(https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
4 If the host transport nodes are in a Pending state, run Configure NSX on these nodes to
refresh the UI.
What to do next
Refresh the SSH keys that are stored in the SDDC Manager inventory. See VMware Cloud
Foundation SDDC Manager Recovery Scripts (79004).
Update or Recreate the VM Anti-Affinity Rule for the NSX Manager Cluster
Nodes
During the NSX Manager bring-up process, SDDC Manager creates a VM anti-affinity rule to
prevent the VMs of the NSX Manager cluster from running on the same ESXi host. If you
redeployed all NSX Manager cluster nodes, you must recreate this rule. If you redeployed one
or two nodes of the cluster, you must add the new VMs to the existing rule.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
n If you redeployed one or two nodes of the cluster, add the new VMs to the existing rule.
b Click Add VM/Host rule member, select the new NSX Manager cluster nodes, and
click Add.
n If you redeployed all NSX Manager cluster nodes, click Add VM/Host rule, enter these
values to create the rule, and click OK.
Setting Value
Procedure
a Run the command to view the details about the VMware Cloud Foundation system.
3 Run the command to collect the log files from the restore of the NSX Manager cluster.
What to do next
Refresh the SSH keys that are stored in the SDDC Manager inventory. See VMware Cloud
Foundation SDDC Manager Recovery Scripts (79004).
Procedure
2 Replace the Failed NSX Edge Node with a Temporary NSX Edge Node
You deploy a temporary NSX Edge node in the domain, add it to the NSX Edge cluster, and
then delete the failed NSX Edge node.
3 Replace the Temporary NSX Edge Node with the Redeployed NSX Edge Node
After you replaced and deleted the failed NSX Edge node, to return the NSX Edge cluster to its original state, you redeploy the failed node, add it to the NSX Edge cluster, and delete the temporary NSX Edge node.
Procedure
1 Retrieve the NSX Edge Node Deployment Details from NSX Manager Cluster
Before restoring a failed NSX Edge node, you must retrieve its deployment details from the
NSX Manager cluster.
Retrieve the NSX Edge Node Deployment Details from NSX Manager Cluster
Before restoring a failed NSX Edge node, you must retrieve its deployment details from the NSX
Manager cluster.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user interface
(https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
10 Click the name of the NSX Edge node that you plan to replace and record the following values.
n Name
n Management IP
n Transport Zones
n Edge Cluster
n Uplink Profile
n IP Assignment
Procedure
1 In the SDDC Manager user interface, from the navigation pane click Developer center.
4 In the resourceName text box, enter the FQDN of the failed NSX Edge node, and click
Execute.
You use the SDDC Manager user interface to retrieve the ID of the vSphere cluster for the
workload domain.
Procedure
1 In the SDDC Manager user interface, from the navigation pane click Developer center.
3 Expand APIs for managing clusters, click GET /v1/clusters, and click Execute.
5 Record the ID of the cluster for the workload domain cluster ID.
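If you prefer calling the SDDC Manager public API directly instead of the Developer Center, a minimal sketch using curl and jq (this assumes token-based authentication against the /v1/tokens and /v1/clusters endpoints; substitute your own SSO credentials and SDDC Manager FQDN):

# Request an API access token from SDDC Manager
TOKEN=$(curl -sk -X POST "https://<sddc_manager_fqdn>/v1/tokens" \
  -H "Content-Type: application/json" \
  -d '{"username": "administrator@vsphere.local", "password": "<vsphere_sso_password>"}' | jq -r '.accessToken')

# List clusters and locate the ID of the workload domain cluster
curl -sk -H "Authorization: Bearer $TOKEN" "https://<sddc_manager_fqdn>/v1/clusters" | jq '.elements[] | {id, name}'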
Replace the Failed NSX Edge Node with a Temporary NSX Edge Node
You deploy a temporary NSX Edge node in the domain, add it to the NSX Edge cluster, and then
delete the failed NSX Edge node.
Procedure
2 Replace the Failed NSX Edge Node with the Temporary NSX Edge Node
You add the temporary NSX Edge node to the NSX Edge cluster by replacing the failed NSX
Edge node.
3 Delete the Failed NSX Edge Node from the NSX Manager Cluster
After replacing the failed NSX Edge node with the temporary NSX Edge node in the NSX
Edge cluster, you delete the failed node.
Prerequisites
Allocate the FQDN and IP address for the temporary NSX Edge node for the domain of the failed
node.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user interface
(https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
6 On the Name and description page, enter these values and click Next.
Setting Value
7 On the Credentials page, enter these values and the passwords recorded in the earlier steps
and then click Next.
Setting Value
Setting Value
8 On the Configure deployment page, select the following and click Next.
Setting Value
9 On the Configure node settings page, enter these values and click Next.
IP Assignment: Static
10 On the Configure NSX page, enter these values which are already recorded and click Finish.
Teaming policy switch mapping: Enter the values for Uplink1 and Uplink2.
Replace the Failed NSX Edge Node with the Temporary NSX Edge Node
You add the temporary NSX Edge node to the NSX Edge cluster by replacing the failed NSX Edge
node.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user interface
(https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
7 From the Replace drop-down menu, select the failed edge node, from the with drop-down menu, select the temporary edge node, and then click Save.
Delete the Failed NSX Edge Node from the NSX Manager Cluster
After replacing the failed NSX Edge node with the temporary NSX Edge node in the NSX Edge
cluster, you delete the failed node.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user interface
(https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
5 Select the check-box for the failed NSX Edge node and click Delete.
You validate the state of the temporary NSX Edge node and the second NSX Edge node in the
cluster.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user interface
(https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
Node status: Up
Replace the Temporary NSX Edge Node with the Redeployed NSX Edge Node
After you replaced and deleted the failed NSX Edge node, to return the NSX Edge cluster to its original state, you redeploy the failed node, add it to the NSX Edge cluster, and delete the temporary NSX Edge node.
Procedure
2 Replace the Temporary NSX Edge Node with the Redeployed NSX Edge Node
After deploying the new NSX Edge node with the same configuration as the failed NSX Edge node, you replace the temporary NSX Edge node with the redeployed failed node in the NSX Edge cluster.
4 Update or Recreate the VM Anti-Affinity Rule for the NSX Edge Cluster Nodes
During the NSX Edge deployment process, SDDC Manager creates a VM anti-affinity rule
to prevent the nodes of the NSX Edge cluster from running on the same ESXi host. If you
redeployed the two NSX Edge cluster nodes, you must recreate this rule. If you redeployed
one node of the cluster, you must add the new VM to the existing rule.
To return the NSX Edge cluster to the original state, you must use the FQDN and IP address of
the failed NSX Edge node that you deleted. This procedure ensures that the inventory in SDDC
Manager is accurate.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user interface
(https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
6 On the Name and description page, enter these values and click Next.
Setting Value
7 On the Credentials page, enter these values which are recorded earlier and click Next.
Setting Value
8 On the Configure deployment page, select these values and click Next.
Setting Value
9 On the Configure Node Settings page, enter these values and click Next.
IP assignment: Static
10 On the Configure NSX page, enter these values which are recorded earlier and click Finish.
Teaming policy switch mapping: Enter the values for Uplink1 and Uplink2.
Replace the Temporary NSX Edge Node with the Redeployed NSX Edge Node
After deploying the new NSX Edge node with the same configuration as the failed NSX Edge node, you replace the temporary NSX Edge node with the redeployed failed node in the NSX Edge cluster.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user interface
(https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
7 From the Replace drop-down menu, select the temporary node, from the with drop-down menu, select the new node, and then click Save.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user interface
(https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
3 In the left pane, under Configuration, click Fabric > Nodes.
5 Select the check-box for the temporary NSX Edge node and click Delete.
Update or Recreate the VM Anti-Affinity Rule for the NSX Edge Cluster Nodes
During the NSX Edge deployment process, SDDC Manager creates a VM anti-affinity rule to
prevent the nodes of the NSX Edge cluster from running on the same ESXi host. If you redeployed
the two NSX Edge cluster nodes, you must recreate this rule. If you redeployed one node of the
cluster, you must add the new VM to the existing rule.
Procedure
1 In a web browser, log in to the domain vCenter Server by using the vSphere Client (https://
<vcenter_server_fqdn>/ui).
n If you redeployed one of the nodes in the NSX Edge cluster, add the new VM to the
existing rule.
b Click Add VM/Host rule member, select the new NSX Edge cluster node, and click
Add.
n If you redeployed the two nodes in the NSX Edge cluster, click Add VM/Host rule, enter
these values to create the rule, and click OK.
Setting Value
You validate the state of the redeployed NSX Edge node and the second NSX Edge node in the
cluster.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user interface
(https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
Node status: Up
vSphere Storage APIs - Data Protection compatible backup software connects to the vCenter Server instances in the management domain to perform backups. In the event of a failure, the backup software connects to the vCenter Server instances in the management domain to restore the VMs. If the management domain is lost, the vCenter Server instances are no longer available and must be restored first. Choosing backup software that supports direct restore to an ESXi host allows you to restore the vCenter Server instances.
Connect your backup solution with the management domain vCenter Server and configure it. To
reduce the backup time and storage cost, use incremental backups in addition to the full ones.
Quiesced backups are enabled for vRealize Suite Lifecycle Manager and Workspace ONE Access.
n VxRail Partner Bundle: You can download the Dell EMC VxRail partner bundle to update the
VxRail appliance.
n Patch Update Bundle: A patch update bundle contains bits to update the appropriate Cloud
Foundation software components in your management domain or VI workload domain. In
most cases, a patch update bundle must be applied to the management domain before it can
be applied to VI workload domains.
n Cumulative Update Bundle: With a cumulative update bundle, you can directly update the
appropriate software in your workload domain to the version contained in the cumulative
bundle rather than applying sequential updates to reach the target version.
n Install Bundle: If you have updated the management domain in your environment, you can
download an install bundle with updated software bits for VI workload domains and vRealize
Suite Lifecycle Manager.
If SDDC Manager does not have direct internet connectivity, you can either use a proxy server to
access the depot, or download install and upgrade bundles manually using the Bundle Transfer
Utility.
To download an async patch bundle, you must use the Async Patch Tool. For more information,
see the Async Patch Tool documentation.
When upgrade bundles are available for your environment, a message is displayed in the SDDC
Manager UI. Available install bundles are displayed on the Bundle Management page and on the
Updates/Patches tab for each workload domain.
When you download bundles, SDDC Manager verifies that the file size and checksum of the
downloaded bundles match the expected values.
Prerequisites
In order to download bundles from the SDDC Manager UI, you must be connected to the VMware
Customer Connect and Dell EMC repositories.
2 Click Authenticate.
Automatic polling of the manifest for bundles by SDDC Manager is enabled by default. If you have
previously edited the application-prod.properties file on the SDDC Manager appliance to
download upgrade bundles in an offline mode, you must edit it again before downloading bundles
from SDDC Manager. Follow the steps below:
1 Using SSH, log in to the SDDC Manager appliance as the vcf user.
4 Set lcm.core.enableManifestPolling=true.
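A minimal sketch of the edit, assuming the properties file sits at its usual location under /opt/vmware/vcf/lcm/lcm-app/conf/ on the SDDC Manager appliance (verify the path on your appliance):

# Open the LCM properties file (assumed path) and enable manifest polling
vi /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
# Ensure the file contains:
#   lcm.core.enableManifestPolling=true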
Procedure
The Bundles page displays the bundles available for download. The Bundle Details section
displays the bundle version and release date.
If the bundle can be applied right away, the Bundle Details column displays the workload
domains to which the bundle needs to be applied and the Availability column displays
Available. If another bundle needs to be applied before a particular bundle, the Availability
column displays Future.
The Bundle Details section displays the bundle version, release date, and additional details
about the bundle.
n Click Schedule Download to set the date and time for the bundle download.
Results
The Download Status section displays the date and time at which the bundle download has been
scheduled. When the download begins, the status bar displays the download progress.
Procedure
1 Using SSH, log in to the SDDC Manager appliance with the user name vcf and password you
specified in the deployment parameter sheet.
lcm.depot.adapter.proxyEnabled=true
lcm.depot.adapter.proxyHost=proxy IP address
lcm.depot.adapter.proxyPort=proxy port
7 Restart the LCM server by typing the following command in the console window:
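The command is not reproduced here; on the SDDC Manager appliance the LCM service is typically restarted with systemctl (an assumption, confirm the service name on your appliance):

systemctl restart lcm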
When you download bundles, the Bundle Transfer Utility verifies that the file size and checksum of
the downloaded bundles match the expected values.
Prerequisites
n A Windows or Linux computer with internet connectivity for downloading the bundles.
n A Windows or Linux computer with access to the SDDC Manager appliance for uploading the
bundles.
n To upload the manifest file from a Windows computer, you must have OpenSSL installed and
configured.
n Configure TCP keepalive in your SSH client to prevent socket connection timeouts when using
the Bundle Transfer Utility for long-running operations.
Note The Bundle Transfer Utility is the only supported method for downloading bundles. Do not
use third-party tools or other methods to download bundles.
Procedure
a Log in to VMware Customer Connect and browse to the Download VMware Cloud
Foundation page.
b In the Select Version field, select the version to which you are upgrading.
2 Extract lcm-tools-prod.tar.gz.
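For example, on the download host (a minimal, standard extraction command):

tar -xzf lcm-tools-prod.tar.gz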
3 Navigate to lcm-tools-prod/bin/ and confirm that you have execute permission on all
folders.
4 Copy the Bundle Transfer Utility to a computer with access to the SDDC Manager appliance
and then copy the Bundle Transfer Utility to the SDDC Manager appliance.
a SSH in to the SDDC Manager appliance using the vcf user account.
mkdir /opt/vmware/vcf/lcm/lcm-tools
d Copy the bundle transfer utility file (lcm-tools-prod.tar.gz) that you downloaded in
step 1 to the /opt/vmware/vcf/lcm/lcm-tools directory.
cd /opt/vmware/vcf/lcm/
chown vcf_lcm:vcf -R lcm-tools
chmod 750 -R lcm-tools
This is a structured metadata file that contains information about the VMware Cloud
Foundation product versions included in the release Bill of Materials.
6 Copy the manifest file and lcm-tools-prod directory to a computer with access to the SDDC
Manager appliance.
Use your vSphere SSO credentials for the --sddcMgrUser credentials in the command.
absolute-path-output-dir: Path to the directory where the bundle files should be downloaded. This directory must have 777 permissions. If you do not specify the download directory, bundles are downloaded to the default directory with 777 permissions.
depotUser: VMware Customer Connect email address. You will be prompted to enter the depot user password. If there are any special characters in the password, specify the password within single quotes.
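A representative download invocation, combining the parameters described above (option names can vary between Bundle Transfer Utility versions, so treat this as a sketch and confirm with ./lcm-bundle-transfer-util --help; the --sddcMgrUser credentials mentioned earlier may also be required):

# Download available bundles to the output directory (assumed option names)
./lcm-bundle-transfer-util --download \
  --outputDirectory /absolute-path-output-dir \
  --depotUser user@example.com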
After you enter your VMware Customer Connect and Dell EMC depot passwords, the utility asks Do you want to download vRealize bundles?. Enter Y or N.
The utility displays a list of the available bundles based on the current and target versions of
VMware Cloud Foundation.
n all
n install
n patch
You can also enter a comma-separated list of bundle names to download specific bundles. For
example: bundle-38371, bundle-38378.
Download progress for each bundle is displayed. Wait until all bundles are downloaded.
11 If you downloaded bundles for VMware Cloud Foundation and its components, copy the entire
output directory to a computer with access to the SDDC Manager appliance, and then copy it
to the SDDC Manager appliance.
For example:
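(A representative command, assuming the bundles were downloaded to a local upgrade-bundles directory.)

# Copy the downloaded bundle directory to the SDDC Manager appliance as the vcf user
scp -r ./upgrade-bundles vcf@<sddc_manager_fqdn>:/nfs/vmware/vcf/nfs-mount/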
The scp command in the example above copies the output directory (upgrade-bundles) to
the /nfs/vmware/vcf/nfs-mount/ directory on the SDDC Manager appliance.
12 In the SDDC Manager appliance, upload the bundle directory to the internal LCM repository.
where absolute-path-bundle-dir is the directory to which the bundle files have been uploaded, or /nfs/vmware/vcf/nfs-mount/upgrade-bundles as shown in the previous step.
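The upload command itself is not reproduced here; a sketch using the utility's assumed --upload and --bundleDirectory options (confirm with ./lcm-bundle-transfer-util --help for your version):

./lcm-bundle-transfer-util --upload --bundleDirectory /nfs/vmware/vcf/nfs-mount/upgrade-bundles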
The utility uploads the bundles and displays upload status for each bundle. Wait for all bundles
to be uploaded before proceeding with an upgrade.
Procedure
u In the navigation pane, click Repository > Bundle Management > Download History.
All downloaded bundles are displayed. Click View Details to see bundle metadata details.
You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.4/4.4.1 from
VMware Cloud Foundation 4.4, 4.3.1, 4.3, 4.2.1, 4.2, 4.1.0.1, or 4.1. If your environment is at a
version earlier than 4.1, you must upgrade the management domain and all VI workload domains
to VMware Cloud Foundation 4.1 and then upgrade to VMware Cloud Foundation 4.4/4.4.1.
Your environment may contain workload domains at different VMware Cloud Foundation releases.
After upgrading to VMware Cloud Foundation 4.4/4.4.1, you can view the versions in your
environment and the associated component versions in that release by navigating to Lifecycle
Management > Release Versions. Note that the management domain and VI workload domains
must be upgraded to the same release version. For example, suppose your environment is at
VMware Cloud Foundation 4.2. If you are upgrading to VMware Cloud Foundation 4.4, the
management domain and VI workload domains must be upgraded to this release.
Upgrades are applied on a workload domain basis. The management domain contains the core
infrastructure, so you must upgrade the management domain before upgrading the other VI
workload domains. You must upgrade all required components to keep your system in an
optimum state.
n Upgrade the Management Domain for VMware Cloud Foundation on Dell EMC VxRail
You must upgrade the management domain before upgrading VI workload domains in your
environment. In order to upgrade to VMware Cloud Foundation 4.4/4.4.1, the management
domain must be at VMware Cloud Foundation 4.1 or higher. If your environment is at a
version lower than 4.1, you must upgrade the management domain to 4.1 and then upgrade
to 4.4/4.4.1.
n Upgrade a VI Workload Domain for VMware Cloud Foundation on Dell EMC VxRail
The management domain in your environment must be upgraded before you upgrade
VI workload domains. In order to upgrade to VMware Cloud Foundation 4.4/4.4.1, all VI
domains must be at VMware Cloud Foundation 4.1 or higher. If any VI workload domain is at
a version lower than 4.1, you must upgrade it to 4.1 and then upgrade to 4.4/4.4.1.
n Upgrade NSX-T Data Center for VMware Cloud Foundation in a Federated Environment
When NSX Federation is configured between two VMware Cloud Foundation instances,
SDDC Manager does not manage the lifecycle of the NSX Global Managers. To upgrade the
NSX Global Managers, you must first follow the standard lifecycle of each VMware Cloud
Foundation instance using SDDC Manager, and then manually upgrade the NSX Global
Managers for each instance.
n Take a backup of the SDDC Manager appliance. This is required since the SDDC Manager
appliance will be rebooted during the update.
n Do not run any domain operations while an update is in progress. Domain operations are
creating a new VI domain, adding hosts to a cluster or adding a cluster to a workload domain,
and removing clusters or hosts from a workload domain.
n Download the relevant bundles. See Download VMware Cloud Foundation on Dell EMC VxRail
Bundles.
n If you applied an async patch to your current VMware Cloud Foundation instance, you must use
the Async Patch Tool to upgrade to a later version of VMware Cloud Foundation. For example,
if you applied an async vCenter Server patch to a VMware Cloud Foundation 4.3.1 instance,
you must use the Async Patch Tool to upgrade to VMware Cloud Foundation 4.4. See the
Async Patch Tool documentation.
n Ensure that there are no failed workflows in your system and none of the VMware Cloud
Foundation resources are in activating or error state. If any of these conditions are true,
contact VMware Support before starting the upgrade.
n Confirm that the passwords for all VMware Cloud Foundation components are valid. An
expired password can cause an upgrade to fail.
n Review the VMware Cloud Foundation on Dell EMC Release Notes for known issues related to
upgrades.
The components in the management domain must be upgraded in the following order:
2 vRealize Suite Lifecycle Manager, vRealize Suite products, and Workspace ONE Access (if
applicable).
c vRealize Operations
d vRealize Automation
Starting with VMware Cloud Foundation 4.4 and vRealize Suite Lifecycle Manager 8.6.2,
upgrade and deployment of the vRealize Suite products is managed by vRealize Suite Lifecycle
Manager. You can upgrade vRealize Suite products as new versions become available in your
vRealize Suite Lifecycle Manager. vRealize Suite Lifecycle Manager will only allow upgrades
to compatible and supported versions of vRealize Suite products. See “Upgrading vRealize
Suite Lifecycle Manager and vRealize Suite Products” in the vRealize Suite Lifecycle Manager
Installation, Upgrade, and Management Guide for your version of vRealize Suite Lifecycle
Manager.
If you already have vRealize Suite Lifecycle Manager 8.6.2, you can upgrade vRealize Suite Lifecycle Manager to a supported version using the vRealize Suite Lifecycle Manager UI. See
the VMware Interoperability Matrix for information about which versions are supported with
your version of VMware Cloud Foundation.
If you have an earlier version of vRealize Suite Lifecycle Manager, use the process below to
upgrade to vRealize Suite Lifecycle Manager 8.6.2 and then use the vRealize Suite Lifecycle
Manager UI to upgrade to later supported versions.
Once vRealize Suite Lifecycle Manager is at version 8.6.2 or later, use the vRealize
Suite Lifecycle Manager UI to upgrade vRealize Log Insight, vRealize Operations, vRealize
Automation, and Workspace ONE Access.
4 vCenter Server.
The upgrade process is similar for all components. Information that is unique to a component is
described in the following table.
SDDC Manager and VMware Cloud Foundation services: The VMware Cloud Foundation software bundle to be applied depends on the current version of your environment.
If you are upgrading from VMware Cloud Foundation 4.4, 4.3.1,
4.3, 4.2.1, 4.2, or 4.1.0.1, you must apply the following
bundles to the management domain:
n The VMware Cloud Foundation bundle upgrades SDDC
Manager, LCM, and VMware Cloud Foundation services.
n The Configuration Drift bundle applies configuration
drift on software components.
If you are upgrading from VMware Cloud Foundation 4.1,
you apply the VMware Cloud Foundation Update bundle,
which upgrades SDDC Manager, LCM, and VMware Cloud
Foundation services, and also applies the configuration
drift.
NSX-T Data Center: Upgrading NSX-T Data Center involves the following components:
n Upgrade Coordinator
n NSX Edge clusters (if deployed)
n Host clusters
n NSX Manager cluster
The upgrade wizard provides some flexibility when
upgrading NSX-T Data Center for workload domains. By
default, the process upgrades all NSX Edge clusters in
parallel, and then all host clusters in parallel. Parallel
upgrades reduce the overall time required to upgrade your
environment. You can also choose to upgrade NSX Edge
clusters and host clusters sequentially. The ability to select
clusters allows for multiple upgrade windows and does not
require all clusters to be available at a given time.
The NSX Manager cluster is upgraded only if the Upgrade
all host clusters setting is enabled on the NSX-T Host
Clusters tab. New features introduced in the upgrade are
not configurable until the NSX Manager cluster is upgraded.
n If you have a single cluster in your environment, enable
the Upgrade all host clusters setting.
n If you have multiple host clusters and choose to
upgrade only some of them, you must go through the
NSX-T upgrade wizard again until all host clusters have
been upgraded. When selecting the final set of clusters
to be upgraded, you must enable the Upgrade all host
clusters setting so that NSX Manager is upgraded.
n If you upgraded all host clusters without enabling the Upgrade all host clusters setting, run through the NSX-T upgrade wizard again to upgrade NSX Manager.
vCenter Server: If the upgrade fails, resolve the issue and retry the failed task. If you cannot resolve the issue, restore vCenter Server using the file-based backup. See Restore vCenter Server.
Once the upgrade successfully completes, use the vSphere
Client to change the vSphere DRS Automation Level setting
back to the original value for each vSphere cluster that
is managed by the vCenter Server. See KB 87631 for
information about using VMware PowerCLI to change the
vSphere DRS Automation Level.
Procedure
Click View Status to see the update status for each component and the tests performed.
Expand a test by clicking the arrow next to it to see further details.
If any of the tests fail, fix the issue and click Retry Precheck.
The precheck results are displayed below the Precheck button. Ensure that the precheck
results are green before proceeding. A failed precheck may cause the update to fail.
If you selected Schedule Update, select the date and time for the bundle to be applied.
4 The Update Status window displays the components that will be upgraded and the upgrade
status. Click View Update Activity to view the detailed tasks.
After the upgrade is completed, a green bar with a check mark is displayed.
What to do next
If you configured NSX Federation between two VMware Cloud Foundation instances, you must
manually upgrade the NSX Global Managers for each instance. See Upgrade NSX-T Data Center
for VMware Cloud Foundation in a Federated Environment.
2 vCenter Server.
4 Workload Management on clusters that have vSphere with Tanzu. Workload Management can
be upgraded through vCenter Server. See Working with vSphere Lifecycle Manager.
The upgrade process is similar for all components. Information that is unique to a component is
described in the following table.
NSX-T Data Center: Upgrading NSX-T Data Center involves the following components:
n Upgrade Coordinator
n NSX Edge clusters (if deployed)
n Host clusters
n NSX Manager cluster
VI workload domains can share the same NSX Manager
cluster and NSX Edge clusters. When you upgrade
these components for one VI workload domain, they are
upgraded for all VI workload domains that share the same
NSX Manager or NSX Edge cluster. You cannot perform any
operations on the VI workload domains while NSX-T Data
Center is being upgraded.
The upgrade wizard provides some flexibility when
upgrading NSX-T Data Center for workload domains. By
default, the process upgrades all NSX Edge clusters in
parallel, and then all host clusters in parallel. Parallel
upgrades reduce the overall time required to upgrade your
environment. You can also choose to upgrade NSX Edge
clusters and host clusters sequentially. The ability to select
clusters allows for multiple upgrade windows and does not
require all clusters to be available at a given time.
The NSX Manager cluster is upgraded only if the Upgrade
all host clusters setting is enabled on the NSX-T Host
Clusters tab. New features introduced in the upgrade are
not configurable until the NSX Manager cluster is upgraded.
n If you have a single cluster in your environment, enable
the Upgrade all host clusters setting.
n If you have multiple host clusters and choose to
upgrade only some of them, you must go through the
NSX-T upgrade wizard again until all host clusters have
been upgraded. When selecting the final set of clusters
to be upgraded, you must enable the Upgrade all host
clusters setting so that NSX Manager is upgraded.
n If you upgraded all host clusters without enabling the Upgrade all host clusters setting, run through the NSX-T upgrade wizard again to upgrade NSX Manager.
vCenter Server: If the upgrade fails, resolve the issue and retry the failed task. If you cannot resolve the issue, restore vCenter Server using the file-based backup. See Restore vCenter Server.
Once the upgrade successfully completes, use the vSphere
Client to change the vSphere DRS Automation Level setting
back to the original value for each vSphere cluster that
is managed by the vCenter Server. See KB 87631 for
information about using VMware PowerCLI to change the
vSphere DRS Automation Level.
Procedure
Click View Status to see the update status for each component and the tests performed.
Expand a test by clicking the arrow next to it to see further details.
If any of the tests fail, fix the issue and click Retry Precheck.
The precheck results are displayed below the Precheck button. Ensure that the precheck
results are green before proceeding. A failed precheck may cause the update to fail.
If you selected Schedule Update, select the date and time for the bundle to be applied.
4 The Update Status window displays the components that will be upgraded and the upgrade
status. Click View Update Activity to view the detailed tasks.
After the upgrade is completed, a green bar with a check mark is displayed.
What to do next
If you configured NSX Federation between two VMware Cloud Foundation instances, you must
manually upgrade the NSX Global Managers for each instance. See Upgrade NSX-T Data Center
for VMware Cloud Foundation in a Federated Environment.
Procedure
1 In a web browser, go to VMware Customer Connect and browse to the download page for the
version of NSX-T Data Center listed in the VMware Cloud Foundation Release Notes BOM.
2 Locate the NSX version Upgrade Bundle and click Read More.
3 Verify that the upgrade bundle filename extension ends with .mub.
4 Click Download Now to download the upgrade bundle to the system where you access the
NSX Global Manager UI.
The upgrade coordinator guides you through the upgrade sequence. You can track the upgrade
process and, if necessary, you can pause and resume the upgrade process from the UI.
Procedure
4 Navigate to the upgrade bundle .mub file you downloaded or paste the download URL link.
n Click Browse to navigate to the location you downloaded the upgrade bundle file.
n Paste the VMware download portal URL where the upgrade bundle .mub file is located.
5 Click Upload.
7 Read and accept the EULA terms and accept the notification to upgrade the upgrade coordinator.
8 Click Run Pre-Checks to verify that all NSX-T Data Center components are ready for upgrade.
The pre-check checks for component connectivity, version compatibility, and component
status.
Prerequisites
Before you can upgrade NSX Global Managers, you must upgrade all VMware Cloud Foundation
instances in the NSX Federation, including NSX Local Managers.
Procedure
3 Click Start to upgrade the management plane and then click Accept.
4 On the Select Upgrade Plan page, select Plan Your Upgrade and click Next.
The NSX Manager UI, API, and CLI are not accessible until the upgrade finishes and the
management plane is restarted.
Prerequisites
Download the ESXi ISO that matches the version listed in the Bill of Materials (BOM) section of the VMware Cloud Foundation Release Notes.
Procedure
d Navigate to the ESXi ISO file you downloaded and click Open.
a On the Imported ISOs tab, select the ISO file that you imported, and click New baseline.
b Enter a name for the baseline and specify the Content Type as Upgrade.
c Click Next.
d Select the ISO file you had imported and click Next.
c Select the vSAN witness host and click the Updates tab.
d Under Attached Baselines, click Attach > Attach Baseline or Baseline Group.
e Select the baseline that you had created in step 3 and click Attach.
After the compliance check is completed, the Status column for the baseline is displayed
as Non-Compliant.
5 Remediate the vSAN witness host and update the ESXi hosts that it contains.
a Right-click the vSAN witness and click Maintenance Mode > Enter Maintenance Mode.
b Click OK.
d Select the baseline that you had created in step 3 and click Remediate.
e In the End user license agreement dialog box, select the check box and click OK.
f In the Remediate dialog box, select the vSAN witness host, and click Remediate.
The remediation process might take several minutes. After the remediation is completed,
the Status column for the baseline is displayed as Compliant.
g Right-click the vSAN witness host and click Maintenance Mode > Exit Maintenance Mode.
h Click OK.
You shut down the customer workloads and the management components for the VI workload
domains before you shut down the components for the management domain.
If the VMware NSX Manager™ cluster and VMware NSX Edge™ cluster are shared with other VI
workload domains, shut down the NSX Manager and NSX Edge clusters as part of the shutdown of
the first VI workload domain.
Prerequisites
n Verify that the management virtual machines are not running on snapshots.
n If a vSphere Storage APIs for Data Protection (VADP) based backup solution is running on the
management clusters, verify that the solution is properly shut down by following the vendor
guidance.
n To reduce the startup time before you shut down the management virtual machines, migrate the VMware vCenter Server® instance for the management domain to the first VMware ESXi™ host in the default management cluster in the management domain.
n Shut Down a Virtual Infrastructure Workload Domain with vSphere with Tanzu
You shut down the components of a VI workload domain that runs containerized workloads in
VMware Cloud Foundation in a specific order to keep components operational by maintaining
the necessary infrastructure, networking, and management services as long as possible
before shutdown.
You shut down the management components for the VI workload domains before you shut down
the components for the management domain.
If the NSX Manager cluster and NSX Edge cluster are shared with other VI workload domains,
follow this general order:
1 Shut down the customer workloads in all VI workload domains that share the NSX-T Data
Center instance. Otherwise, all NSX networking services in the customer workloads will be
interrupted when you shut down NSX-T Data Center.
2 Shut down the VI workload domain that runs the shared NSX Edge nodes.
Procedure
2 In the VMs and templates inventory, expand the tree of workload domain vCenter Server and
expand data center for the workload domain.
3 Right-click an NSX Edge virtual machine for the management domain or VI workload domain
and select Power > Shut down Guest OS.
5 Repeat the steps for the remaining NSX Edge nodes for the domain.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Right-click the primary NSX manager virtual machine and select Power > Shut down Guest
OS.
5 Repeat the steps for the remaining NSX Manager virtual machines.
Shut Down vSphere Cluster Services Virtual Machines, VxRail Manager, VMware
vSAN, and ESXi Hosts
To shut down the vSphere Cluster Services (vCLS) virtual machines, VxRail Manager, VMware
vSAN, and ESXi hosts in a workload domain cluster, you use the VxRail plugin in the vSphere
Client.
Procedure
2 In the Hosts and Clusters inventory, expand the tree of the workload domain vCenter Server
and expand the data center for the workload domain.
3 Right-click a cluster, select VxRail-Shutdown, and follow the prompts to shut down the
cluster.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Locate the vCenter Server virtual machine for the VI workload domain.
4 Right-click the virtual machine and select Power > Shut down Guest OS.
You shut down the management components for the VI workload domains that run vSphere with
Tanzu and containers or that run virtualized workloads before you shut down the components for
the management domain.
If the NSX Manager cluster and NSX Edge cluster are shared with other VI workload domains,
follow this general order:
1 Shut down the customer workloads in all VI workload domains that share the NSX-T Data
Center instance. Otherwise, all NSX networking services in the customer workloads will be
interrupted when you shut down NSX-T Data Center.
2 Shut down the VI workload domain that runs the shared NSX Edge nodes.
11 VxRail Manager *
Find Out the Location of the vSphere with Tanzu Virtual Machines on the ESXi
Hosts
Before you begin shutting down a VI workload domain with vSphere with Tanzu, you get a
mapping between virtual machines in the workload domain and the ESXi hosts on which they
are deployed. You later use this mapping to log in to specific ESXi hosts and shut down specific
management virtual machines.
Procedure
Procedure
2 In the Hosts and clusters inventory, expand the tree of the VI workload domain vCenter Server
and expand the data center for the VI workload domain.
4 Copy the cluster domain ID domain-c(cluster_domain_id) from the URL of the browser.
When you navigate to a cluster in the vSphere client, the URL is similar to this one:
https://<fqdn-of-vCenter-server>/ui/app/cluster;nav=h/
urn:vmomi:ClusterComputeResource:domain-c8:eef257af-fa50-455a-af7a-6899324fabe6/summary
5 In the Host and Clusters inventory, select the vCenter Server instance for the management
domain or the VI workload domain and click the Configure tab.
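The intermediate steps, in which you edit the vCenter Server advanced settings, are abridged here. Based on the cluster domain ID that you copied in step 4, the vCLS retreat-mode property takes the following form, with the value False for shutdown (example shown for domain-c8; substitute your own cluster domain ID):

config.vcls.clusters.domain-c8.enabled = False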
If the property is not present, add it. After you add this entry, it cannot be deleted from the vSphere Client; however, keeping this entry is not an issue.
8 Click Save.
Results
The vCLS monitoring service initiates the clean-up of vCLS VMs. If vSphere DRS is activated for
the cluster, it stops working and you see an additional warning in the cluster summary. vSphere
DRS remains deactivated until vCLS is re-activated on this cluster.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Locate the vCenter Server virtual machine for the VI workload domain.
4 Right-click the virtual machine and select Power > Shut down Guest OS.
Shut Down the NSX Edge Nodes for vSphere with Tanzu
You begin shutting down the NSX-T Data Center infrastructure in a VI workload domain with
vSphere with Tanzu by shutting down the NSX Edge nodes that provide north-south traffic
connectivity between the physical data center networks and the NSX SDN networks.
Because the vCenter Server instance for the domain is already down, you shut down the NSX
Edge nodes from the ESXi hosts where they are running.
Procedure
1 Log in to the ESXi host that runs the first NSX Edge node as root by using the VMware Host
Client.
3 Right-click an NSX Edge virtual machine and select Guest OS > Shut down.
5 Repeat these steps to shut down the remaining NSX Edge nodes for the VI workload domain
with vSphere with Tanzu.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Right-click the primary NSX manager virtual machine and select Power > Shut down Guest
OS.
5 Repeat the steps for the remaining NSX Manager virtual machines.
Shut Down the VxRail Manager Virtual Machine in a VI Workload Domain with
vSphere with Tanzu
Because the vCenter Server instance for the VI workload domain is already down, you shut down
the VxRail Manager virtual machine from the ESXi host on which it is running.
Procedure
1 Using the VMware Host Client, log in as root to the ESXi host that runs the VxRail Manager
virtual machine.
3 Right-click the VxRail Manager virtual machine and select Guest OS > Shut down.
Shut Down vSAN and the ESXi Hosts in the Management Domain or for vSphere
with Tanzu
You shut down vSAN and the ESXi hosts in the management domain or in a VI workload domain
with vSphere with Tanzu by preparing the vSAN cluster for shutdown, placing each ESXi host in
maintenance mode to prevent any virtual machines being deployed to or starting up on the host,
and shutting down the host.
In a VI workload domain with vSphere with Tanzu, the vCenter Server instance for the domain
is already down. Hence, you perform the shutdown operation on the ESXi hosts by using the
VMware Host Client.
Procedure
1 For the VI workload domain with vSphere with Tanzu, enable SSH on the ESXi hosts in the
workload domain by using the SoS utility of the SDDC Manager appliance.
You enable SSH on the management ESXi hosts before you shut down SDDC Manager.
a Log in to the SDDC Manager appliance by using a Secure Shell (SSH) client as vcf.
b Switch to the root user by running the su command and entering the root password.
2 Log in to the first ESXi host for the management domain or VI workload domain cluster by
using a Secure Shell (SSH) client as root.
3 For a vSAN cluster, deactivate vSAN cluster member updates by running the command.
esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
4 Repeat Step 2 and Step 3 on the remaining hosts in the management domain or the VI
workload domain cluster.
5 On the first ESXi host per vSAN cluster, prepare the vSAN cluster for shutdown by running the
command.
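The command is not reproduced here. On recent ESXi releases, this preparation is typically performed with the vSAN reboot helper script (an assumption; confirm the supported procedure for your ESXi build):

python /usr/lib/vmware/vsan/bin/reboot_helper.py prepare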
8 Repeat Step 5 and Step 7 on the remaining hosts in the management domain or VI workload
domain cluster, proceeding to the next host after the operation on the current one is complete.
9 Shut down the ESXi hosts in the management domain or VI workload domain cluster.
a Log in to the first ESXi host for the workload domain at https://<esxi_host_fqdn>/ui as
root.
b In the navigation pane, right-click Host and, from the drop-down menu, select Shut down.
d Repeat the steps for the remaining hosts in the management domain or VI workload
domain cluster.
After you shut down the components in all VI workload domains, you begin shutting down the
management domain.
Note If your VMware Cloud Foundation instance is deployed with the consolidated architecture,
shut down any customer workloads or additional virtual machines in the management domain
before you proceed with the shutdown order of the management components.
You shut down Site Recovery Manager and vSphere Replication after you shut down the
management components that can be failed over between the VMware Cloud Foundation
instances. You also shut Site Recovery Manager and vSphere Replication down as late as possible
to have the management virtual machines protected as long as possible if a disaster event occurs.
The virtual machines in the paired VMware Cloud Foundation instance become unprotected after
you shut down Site Recovery Manager and vSphere Replication in the current VMware Cloud
Foundation instance.
You shut down vRealize Log Insight as late as possible to collect as much log data as possible for potential troubleshooting. You shut down the Workspace ONE Access instances after the management components for which they provide identity and access management services.
4 VMware vRealize Suite Lifecycle Manager™ *
11 SDDC Manager *
12 VxRail Manager *
Save the Credentials for the ESXi Hosts and vCenter Server for the Management
Domain
Before you shut down the management domain, get the credentials for the management domain
hosts and vCenter Server from SDDC Manager and save them. You need these credentials to shut
down the ESXi hosts and then to start them and vCenter Server back up. Because SDDC Manager
is down during each of these operations, you must save the credentials in advance.
To get the credentials, log in to the SDDC Manager appliance by using a Secure Shell (SSH) client
as vcf and run the lookup_passwords command.
Procedure
5 In the VMware Identity Manager section, click the horizontal ellipsis icon and select Power off.
6 In the Power off VMware Identity Manager dialog box, click Submit.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Right-click the vRealize Suite Lifecycle Manager virtual machine and select Power > Shut down
Guest OS.
Procedure
2 In the VMs and templates inventory, expand the tree of workload domain vCenter Server and
expand data center for the workload domain.
3 Right-click an NSX Edge virtual machine for the management domain or VI workload domain
and select Power > Shut down Guest OS.
5 Repeat the steps for the remaining NSX Edge nodes for the domain.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Right-click the primary NSX manager virtual machine and select Power > Shut down Guest
OS.
5 Repeat the steps for the remaining NSX Manager virtual machines.
Procedure
1 Enable SSH on the ESXi hosts in the management domain by using the SoS utility of the SDDC
Manager appliance.
When you shut down these hosts, you run commands over SSH to prepare the vSAN cluster for shutdown and to place each management host in maintenance mode. Because SDDC Manager is already down by the time you shut down the management ESXi hosts, you must enable SSH on the hosts before you shut down SDDC Manager.
a Log in to the SDDC Manager appliance by using a Secure Shell (SSH) client as vcf.
b Switch to the root user by running the su command and entering the root password.
/opt/vmware/sddc-support/sos --enable-ssh-esxi
3 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
5 Right-click the SDDC Manager virtual machine and click Power > Shut down Guest OS.
Shut Down the VxRail Manager Virtual Machine in the Management Domain
Shut down the VxRail Manager virtual machine in the management domain by using the vSphere
Client.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
4 Right-click the VxRail Manager virtual machine and click Power > Shut down Guest OS.
Procedure
2 In the Hosts and clusters inventory, expand the tree of the VI workload domain vCenter Server
and expand the data center for the VI workload domain.
4 Copy the cluster domain ID domain-c(cluster_domain_id) from the URL of the browser.
When you navigate to a cluster in the vSphere client, the URL is similar to this one:
https://<fqdn-of-vCenter-server>/ui/app/cluster;nav=h/
urn:vmomi:ClusterComputeResource:domain-c8:eef257af-fa50-455a-af7a-6899324fabe6/summary
5 In the Host and Clusters inventory, select the vCenter Server instance for the management
domain or the VI workload domain and click the Configure tab.
If the property is not present, add it. After you add this entry, it cannot be deleted from the vSphere Client; however, keeping this entry is not an issue.
8 Click Save.
Results
The vCLS monitoring service initiates the clean-up of vCLS VMs. If vSphere DRS is activated for
the cluster, it stops working and you see an additional warning in the cluster summary. vSphere
DRS remains deactivated until vCLS is re-activated on this cluster.
To shut down the management domain vCenter Server, it must be running on the first
management ESXi host in the default management cluster.
Caution Before you shut down vCenter Server, migrate any virtual machines that are running
infrastructure services like Active Directory, NTP, DNS and DHCP servers in the management
domain to the first management host by using the vSphere Client. You can shut them down from
the first ESXi host after you shut down vCenter Server.
Procedure
2 In the Hosts and clusters inventory, expand the management domain vCenter Server tree and
expand the management domain data center.
3 Set the vSphere DRS automation level of the management cluster to manual to prevent vSphere DRS from migrating the vCenter Server appliance.
a Select the default management cluster and click the Configure tab.
b In the left pane, select Services > vSphere DRS and click Edit.
c In the Edit cluster settings dialog box, click the Automation tab, and, from the drop-down
menu, in the Automation level section, select Manual.
d Click OK.
4 If the management domain vCenter Server is not running on the first ESXi host in the default
management cluster, migrate it there.
a Select the default management cluster and click the Monitor tab.
b In the left pane, under vSAN > Resyncing objects, verify that all synchronization tasks are
complete.
6 Stop vSphere HA to avoid vSphere HA initiated migrations of virtual machines after vSAN is
partitioned during the shutdown process.
b In the left pane, select Services > vSphere Availability and click the Edit button.
c In the Edit Cluster Settings dialog box, deactivate vSphere HA and click OK.
9 Right-click the management domain vCenter Server and select Guest OS > Shut down.
Shut Down vSAN and the ESXi Hosts in the Management Domain or for vSphere
with Tanzu
You shut down vSAN and the ESXi hosts in the management domain or in a VI workload domain
with vSphere with Tanzu by preparing the vSAN cluster for shutdown, placing each ESXi host in
maintenance mode to prevent any virtual machines being deployed to or starting up on the host,
and shutting down the host.
In a VI workload domain with vSphere with Tanzu, the vCenter Server instance for the domain
is already down. Hence, you perform the shutdown operation on the ESXi hosts by using the
VMware Host Client.
Procedure
1 For the VI workload domain with vSphere with Tanzu, enable SSH on the ESXi hosts in the
workload domain by using the SoS utility of the SDDC Manager appliance.
You enable SSH on the management ESXi hosts before you shut down SDDC Manager.
a Log in to the SDDC Manager appliance by using a Secure Shell (SSH) client as vcf.
b Switch to the root user by running the su command and entering the root password.
2 Log in to the first ESXi host for the management domain or VI workload domain cluster by
using a Secure Shell (SSH) client as root.
3 For a vSAN cluster, deactivate vSAN cluster member updates by running the command.
esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
4 Repeat Step 2 and Step 3 on the remaining hosts in the management domain or the VI
workload domain cluster.
5 On the first ESXi host per vSAN cluster, prepare the vSAN cluster for shutdown by running the
command.
8 Repeat Step 5 and Step 7 on the remaining hosts in the management domain or VI workload
domain cluster, proceeding to the next host after the operation on the current one is complete.
9 Shut down the ESXi hosts in the management domain or VI workload domain cluster.
a Log in to the first ESXi host for the workload domain at https://<esxi_host_fqdn>/ui as
root.
b In the navigation pane, right-click Host and, from the drop-down menu, select Shut down.
d Repeat the steps for the remaining hosts in the management domain or VI workload
domain cluster.
You start the management components for the management domain first. Then, you start the
management components for the VI workload domains and the customer workloads.
If the NSX Manager cluster and NSX Edge cluster are shared with other VI workload domains,
start the other VI workload domains first. Start up NSX Manager and NSX Edge nodes as part of
the startup of the last workload domain.
Prerequisites
n Verify that external services such as Active Directory, DNS, NTP, SMTP, and FTP or SFTP are
available.
n If a vSphere Storage APIs for Data Protection (VADP) based backup solution is deployed on
the default management cluster, verify that the solution is properly started and operational
according to the vendor guidance.
You start the management components for the management domain first. Then, you start the
management components for the VI workload domains and the customer workloads.
You start vRealize Log Insight as early as possible to collect log data that helps troubleshooting
potential issues. You also start Site Recovery Manager and vSphere Replication as early as
possible to protect the management virtual machines if a disaster event occurs.
4 VxRail Manager *
5 SDDC Manager *
Start the vSphere and vSAN Components for the Management Domain
You start the ESXi hosts by using an out-of-band management interface, such as iLO or iDRAC, to connect to the hosts and power them on. Then, restarting the vSAN cluster automatically starts vSphere Cluster Services, vCenter Server, and vSAN.
Procedure
a Log in to the first ESXi host in the workload domain by using the out-of-band management
interface.
2 Repeat the previous step to start all the remaining ESXi hosts in the workload domain.
vCenter Server is started automatically. Wait until vCenter Server is running and the vSphere
Client is available again.
a Right-click the vSAN cluster and select vSAN > Restart cluster.
The vSAN Services page on the Configure tab changes to display information about the
restart process.
5 After the cluster has restarted, check the vSAN health service and resynchronization status,
and resolve any outstanding issues.
b In the left pane, under vSAN > Resyncing objects, verify that all synchronization tasks are
complete.
c In the left pane, navigate to vSAN > Skyline health and verify the status of each vSAN
health check category.
6 If you have added the root user of the ESXi hosts to the Exception Users list for lockdown
mode during shutdown, remove the user from the list on each host.
a Select the host in the inventory and click the Configure tab.
d On the Exception Users page, from the vertical ellipsis menu in front of the root user,
select Remove User and click OK.
Note Start any virtual machines that are running infrastructure services like Active Directory,
NTP, DNS and DHCP servers in the management domain before you start vCenter Server.
Procedure
3 Right-click the management domain vCenter Server, and, from the drop-down menu, select
Power > Power on.
The startup of the virtual machine and the vSphere services takes some time to complete.
5 In the Hosts and clusters inventory, expand the management domain vCenter Server tree and
expand the management domain data center.
b In the left pane, navigate to vSAN > Skyline health and verify the status of each vSAN
health check category.
c In the left pane, navigate to vSAN > Resyncing objects and verify that all synchronization
tasks are complete.
a Select the vSAN cluster under the management domain data center and click the
Configure tab.
b In the left pane, select Services > vSphere Availability and click the Edit button.
c In the Edit Cluster Settings dialog box, enable vSphere HA and click OK.
8 Set the vSphere DRS automation level of the management cluster to automatic.
a Select the default management cluster and click the Configure tab.
b In the left pane, select Services > vSphere DRS and click Edit.
c In the Edit cluster settings dialog box, click the Automation tab, and, from the drop-down
menu, in the Automation level section, select Fully automated.
d Click OK.
Procedure
2 In the Hosts and clusters inventory, expand the tree of the VI workload domain vCenter Server
and expand the data center for the VI workload domain.
4 Copy the cluster domain ID domain-c(cluster_domain_id) from the URL of the browser.
When you navigate to a cluster in the vSphere Client, the URL is similar to this one:
https://<fqdn-of-vCenter-server>/ui/app/cluster;nav=h/
urn:vmomi:ClusterComputeResource:domain-c8:eef257af-fa50-455a-af7a-6899324fabe6/summary
5 In the Hosts and Clusters inventory, select the vCenter Server instance for the management domain or the VI workload domain and click the Configure tab.
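As during shutdown, the abridged steps edit the vCLS property for the copied cluster domain ID, this time setting it to True so that vSphere Cluster Services is re-enabled (example shown for domain-c8; substitute your own cluster domain ID):

config.vcls.clusters.domain-c8.enabled = True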
8 Click Save.
Procedure
2 In the VMs and templates inventory, expand the workload domain vCenter Server tree and
expand the workload domain data center.
3 Locate the VxRail Manager virtual machine, right-click it, and select Power > Power on.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
4 Right-click the SDDC Manager virtual machine and click Power > Power on.
a Log in to the SDDC Manager appliance by using a Secure Shell (SSH) client as vcf.
b Switch to the root user by running the su command and entering the root password.
c Disable SSH on the ESXi hosts by running the following command.
/opt/vmware/sddc-support/sos --disable-ssh-esxi
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Power on the NSX Manager nodes for the management domain or the VI workload domain.
a Right-click the primary NSX Manager node and select Power > Power on.
This operation takes several minutes to complete before the NSX Manager node is fully operational again and its user interface becomes accessible.
4 Log in to NSX Manager for the management domain or VI workload domain at https://<nsxt_manager_cluster_fqdn> as admin.
c On the Appliances page, verify that the NSX Manager cluster has a Stable status and all
NSX Manager nodes are available.
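If the NSX Manager user interface is slow to become available, you can also verify the cluster state from the command line; a minimal check, assuming SSH access to one of the NSX Manager nodes as admin, is to run the following command and confirm that the overall cluster status is STABLE, matching the Stable status shown in the user interface.
get cluster status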
Procedure
2 In the VMs and templates inventory, expand the tree of the workload domain vCenter Server and expand the data center for the workload domain.
3 Right-click an NSX Edge virtual machine from the edge cluster and select Power > Power on.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Right-click the vRealize Suite Lifecycle Manager virtual machine and select Power > Power on.
Procedure
2 Power on the Workspace ONE Access cluster and verify its status.
d In the VMware Identity Manager section, click the horizontal ellipsis icon and select Power
on.
3 Configure the domain and domain search parameters on the Workspace ONE Access
appliances.
a Log in to the first appliance of the Workspace ONE Access cluster by using a Secure Shell (SSH) client as sshuser.
c Open the /etc/resolv.conf file in a text editor.
vi /etc/resolv.conf
d Add the following entries to the end of the file and save the changes.
domain <domain_name>
search <space_separated_list_of_domains_to_search>
e Repeat this step to configure the domain and domain search parameters on the remaining
Workspace ONE Access appliances.
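For example, assuming a hypothetical environment with the domain sfo.rainpole.io and an additional rainpole.io search domain, the end of the /etc/resolv.conf file would contain entries similar to the following.
domain sfo.rainpole.io
search sfo.rainpole.io rainpole.io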
4 In the vRealize Suite Lifecycle Manager user interface, check the health of the Workspace ONE
Access cluster.
c In the VMware Identity Manager section, click the horizontal ellipsis icon and select
Trigger cluster health.
You start the management components for the management domain first. Then, you start the
management components for the VI workload domains and the customer workloads.
If the NSX Manager cluster and NSX Edge cluster are shared with other VI workload domains,
follow this general order:
2 Start the VI workload domain that runs the shared NSX Edge nodes.
3 Start the customer workloads that rely on NSX-T Data Center services.
Start the vCenter Server Instance for a VxRail Virtual Infrastructure Workload
Domain
Use the vSphere Client to power on the vCenter Server appliance for the VxRail VI workload
domain.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
4 Right-click the virtual machine of the VxRail VI workload domain vCenter Server and select
Power > Power on.
The startup of the virtual machine and the vSphere services takes some time to complete.
What to do next
Start ESXi hosts, vSAN and VxRail Manager in a Virtual Infrastructure Workload
Domain
You start the ESXi hosts by using an out-of-band management interface, such as iLO or iDRAC, to
connect to the hosts and power them on. Powering on the ESXi hosts starts VxRail Manager,
which starts vSAN and the vSphere Cluster Services (vCLS) virtual machines.
Procedure
a Log in to the first ESXi host in the VI workload domain by using the out-of-band
management interface.
2 Repeat the previous step to start all the remaining ESXi hosts in the VI workload domain.
3 Log in to the VI workload domain vCenter Server and wait until the VxRail Manager startup
for the cluster is finished.
Use the Recent Tasks pane in the cluster to monitor startup progress.
Once startup is complete, the VxRail Manager and vSphere Cluster Services (vCLS) virtual
machines in the cluster should be running.
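If IPMI over LAN is enabled on the out-of-band management interface, you can also power on the hosts from a workstation command line instead of the iDRAC or iLO web interface; a minimal sketch, assuming a hypothetical management controller address and credentials:
ipmitool -I lanplus -H idrac-esxi01.example.local -U admin -P 'idrac_password' chassis power on
Repeat the command for each host in the cluster, and then monitor the VxRail Manager startup in the vSphere Client as described in this procedure.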
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Power on the NSX Manager nodes for the management domain or the VI workload domain.
a Right-click the primary NSX Manager node and select Power > Power on.
This operation takes several minutes to complete before the NSX Manager node is fully operational again and its user interface becomes accessible.
4 Log in to NSX Manager for the management domain or VI workload domain at https://<nsxt_manager_cluster_fqdn> as admin.
c On the Appliances page, verify that the NSX Manager cluster has a Stable status and all
NSX Manager nodes are available.
Procedure
2 In the VMs and templates inventory, expand the tree of the workload domain vCenter Server and expand the data center for the workload domain.
3 Right-click an NSX Edge virtual machine from the edge cluster and select Power > Power on.
You start the management components for the management domain first. Then, you start the
management components for the VI workload domains and the customer workloads.
If the NSX Manager cluster and NSX Edge cluster are shared with other VI workload domains,
follow this general order:
2 Start the VI workload domain that runs the shared NSX Edge nodes.
3 Start the customer workloads that rely on NSX-T Data Center services.
Verify the Operational State of the VI Workload Domain with vSphere with Tanzu
After you start up the VI workload domain with vSphere with Tanzu, verify that the main functionality of the management
components is working according to the requirements. See Operational Verification of VMware
Cloud Foundation and Developer Ready Infrastructure for VMware Cloud Foundation.
Start the vSphere and vSAN Components for the Management Domain
You start the ESXi hosts by using an out-of-band management interface, such as iLO or iDRAC, to connect to the hosts and power them on. Then, restarting the vSAN cluster automatically starts vSphere Cluster Services, vCenter Server, and vSAN.
Procedure
a Log in to the first ESXi host in the workload domain by using the out-of-band management
interface.
2 Repeat the previous step to start all the remaining ESXi hosts in the workload domain.
vCenter Server is started automatically. Wait until vCenter Server is running and the vSphere
Client is available again.
a Right-click the vSAN cluster and select vSAN > Restart cluster.
The vSAN Services page on the Configure tab changes to display information about the
restart process.
5 After the cluster has restarted, check the vSAN health service and resynchronization status,
and resolve any outstanding issues.
b In the left pane, under vSAN > Resyncing objects, verify that all synchronization tasks are
complete.
c In the left pane, navigate to vSAN > Skyline health and verify the status of each vSAN
health check category.
6 If you have added the root user of the ESXi hosts to the Exception Users list for lockdown
mode during shutdown, remove the user from the list on each host.
a Select the host in the inventory and click the Configure tab.
d On the Exception Users page, from the vertical ellipsis menu in front of the root user,
select Remove User and click OK.
Start the vCenter Server Instance for a Virtual Infrastructure Workload Domain
Use the vSphere Client to power on the vCenter Server appliance for the VI workload domain, which runs in the management domain. If the VI workload domain contains a vSAN cluster, check its health status too.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
4 Right-click the virtual machine of the VI workload domain vCenter Server and select Power >
Power on.
The startup of the virtual machine and the vSphere services takes some time to complete.
6 In the Hosts and clusters inventory, expand the tree of the VI workload domain vCenter Server
and expand the data center for the VI workload domain.
a Select the vSAN cluster in the VI workload domain and click the Monitor tab.
b In the left pane, navigate to vSAN > Skyline health and verify the status of each vSAN
health check category.
c In the left pane, navigate to vSAN > Resyncing objects and verify that all synchronization
tasks are complete.
b In the left pane, select Services > vSphere Availability and click the Edit button.
c In the Edit Cluster Settings dialog box, enable vSphere HA and click OK.
9 For a VI workload domain with vSphere with Tanzu, verify that the Kubernetes services are
started.
a Log in to the VI workload domain vCenter Server by using a Secure Shell (SSH) client as
root.
b Verify the state of the Workload Control Plane (wcp) service by running the following command.
vmon-cli -s wcp
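Optionally, to confirm that the other vCenter Server services also restarted cleanly, you can list the state of all services on the appliance with the vCenter Server Appliance service manager.
service-control --status --all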
Procedure
2 In the Hosts and clusters inventory, expand the tree of the VI workload domain vCenter Server
and expand the data center for the VI workload domain.
4 Copy the cluster domain ID domain-c<cluster_domain_id> from the URL of the browser.
When you navigate to a cluster in the vSphere Client, the URL is similar to this one:
https://<fqdn-of-vCenter-server>/ui/app/cluster;nav=h/
urn:vmomi:ClusterComputeResource:domain-c8:eef257af-fa50-455a-af7a-6899324fabe6/summary
5 In the Hosts and clusters inventory, select the vCenter Server instance for the management domain or the VI workload domain and click the Configure tab.
8 Click Save.
Procedure
2 In the VMs and templates inventory, expand the workload domain vCenter Server tree and
expand the workload domain data center.
3 Locate the VxRail Manager virtual machine, right-click it, and select Power > Power on.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Power on the NSX Manager nodes for the management domain or the VI workload domain.
a Right-click the primary NSX Manager node and select Power > Power on.
This operation takes several minutes to complete before the NSX Manager node is fully operational again and its user interface becomes accessible.
4 Log in to NSX Manager for the management domain or VI workload domain at https://<nsxt_manager_cluster_fqdn> as admin.
c On the Appliances page, verify that the NSX Manager cluster has a Stable status and all
NSX Manager nodes are available.
Procedure
2 In the VMs and templates inventory, expand the tree of the workload domain vCenter Server and expand the data center for the workload domain.
3 Right-click an NSX Edge virtual machine from the edge cluster and select Power > Power on.