---
title: Install UCP on Azure
description: Learn how to install Docker Universal Control Plane in a Microsoft Azure environment.
keywords: Universal Control Plane, UCP, install, Docker EE, Azure, Kubernetes
---
| 6 | + |
| 7 | +Docker UCP closely integrates into Microsoft Azure for its Kubernetes Networking |
| 8 | +and Persistent Storage feature set. UCP deploys the Calico CNI provider, in Azure |
| 9 | +the Calico CNI leverages the Azure networking infrastructure for data path |
| 10 | +networking and the Azure IPAM for IP address management. There are |
| 11 | +infrastructure prerequisites that are required prior to UCP installation for the |
| 12 | +Calico / Azure integration. |
| 13 | + |
| 14 | +## Docker UCP Networking |
| 15 | + |
| 16 | +Docker UCP configures the Azure IPAM module for Kubernetes to allocate |
| 17 | +IP addresses to Kubernetes pods. The Azure IPAM module requires each Azure |
| 18 | +VM that's part of the Kubernetes cluster to be configured with a pool of |
| 19 | +IP addresses. |
| 20 | + |
| 21 | +You have two options for deploying the VMs for the Kubernetes cluster on Azure: |
| 22 | +- Install the cluster on Azure stand-alone virtual machines. Docker UCP provides |
| 23 | + an [automated mechanism](#configure-ip-pools-for-azure-stand-alone-vms) |
| 24 | + to configure and maintain IP pools for stand-alone Azure VMs. |
| 25 | +- Install the cluster on an Azure virtual machine scale set. Configure the |
| 26 | + IP pools by using an ARM template like |
| 27 | + [this one](#set-up-ip-configurations-on-an-azure-virtual-machine-scale-set). |
| 28 | + |
| 29 | +The steps for setting up IP address management are different in the two |
| 30 | +environments. If you're using a scale set, you set up `ipConfigurations` |
| 31 | +in an ARM template. If you're using stand-alone VMs, you set up IP pools |
| 32 | +for each VM by using a utility container that's configured to run as a |
| 33 | +global Swarm service, which Docker provides. |
| 34 | + |
## Azure Prerequisites

The following infrastructure prerequisites must be met in order
to successfully deploy Docker UCP on Azure:

- All UCP nodes (managers and workers) need to be deployed into the same
Azure Resource Group. The Azure networking components (VNets, subnets,
security groups) may be deployed in a second Azure Resource Group.
- All UCP nodes (managers and workers) need to be attached to the same
Azure subnet.
- All UCP nodes (managers and workers) need to be tagged in Azure with the
`Orchestrator` tag. The value for this tag is the Kubernetes version number
in the format `Orchestrator=Kubernetes:x.y.z`. This value may change with each
UCP release. To find the relevant version, see the UCP
[Release Notes](../../release-notes). For example, for UCP 3.0.6 the tag
would be `Orchestrator=Kubernetes:1.8.15`.
- The Azure Computer Name needs to match the node operating system's hostname.
Note that this applies to the FQDN of the host, including domain names.
- An Azure Service Principal with `Contributor` access to the Azure Resource
Group hosting the UCP nodes. If you're using a separate networking Resource
Group, the same Service Principal will need `Network Contributor` access to that
Resource Group.
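
As an illustration, the `Orchestrator` tag can be applied to an existing VM
with the Azure CLI. The resource group and VM names below are placeholders,
and the Kubernetes version must match the one listed in the release notes for
your UCP version:

```shell
# Placeholder resource group and VM names; substitute your own.
# The Kubernetes version must match your UCP release (see the release notes).
az vm update \
  --resource-group ucp-resources \
  --name ucp-manager-0 \
  --set tags.Orchestrator=Kubernetes:1.8.15
```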
The following information will be required for the installation:

- `subscriptionId` - The Azure Subscription ID in which the UCP
objects are being deployed.
- `tenantId` - The Azure Active Directory Tenant ID in which the UCP
objects are being deployed.
- `aadClientId` - The Azure Service Principal ID.
- `aadClientSecret` - The Azure Service Principal Secret Key.

### Azure Configuration File

For Docker UCP to integrate into Microsoft Azure, an Azure configuration file
must be placed on each UCP node in your cluster, at
`/etc/kubernetes/azure.json`.

See the template below. Entries that do not contain `****` should not be
changed.

```json
{
    "cloud":"AzurePublicCloud",
    "tenantId": "****",
    "subscriptionId": "****",
    "aadClientId": "****",
    "aadClientSecret": "****",
    "resourceGroup": "****",
    "location": "****",
    "subnetName": "****",
    "securityGroupName": "****",
    "vnetName": "****",
    "cloudProviderBackoff": false,
    "cloudProviderBackoffRetries": 0,
    "cloudProviderBackoffExponent": 0,
    "cloudProviderBackoffDuration": 0,
    "cloudProviderBackoffJitter": 0,
    "cloudProviderRatelimit": false,
    "cloudProviderRateLimitQPS": 0,
    "cloudProviderRateLimitBucket": 0,
    "useManagedIdentityExtension": false,
    "useInstanceMetadata": true
}
```

There are some optional values for Azure deployments:

- `"primaryAvailabilitySetName": "****",` - The worker nodes' availability set.
- `"vnetResourceGroup": "****",` - If your Azure network objects live in a
separate resource group.
- `"routeTableName": "****",` - If you have defined multiple route tables within
an Azure subnet.

More details on this configuration file can be found
[here](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure.go).
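
A malformed file may prevent the Kubernetes components from starting cleanly,
so one simple sanity check is to confirm the file parses as valid JSON on each
node, for example:

```shell
# Verify that /etc/kubernetes/azure.json is well-formed JSON.
# python3 is used here purely as a convenient JSON parser.
python3 -m json.tool /etc/kubernetes/azure.json > /dev/null \
  && echo "azure.json is valid JSON"
```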
## Considerations for IPAM Configuration

The subnet and the virtual network associated with the primary interface of
the Azure VMs need to be configured with a large enough address prefix/range.
The number of required IP addresses depends on the number of pods running
on each node and the number of nodes in the cluster.

For example, in a cluster of 256 nodes, to run a maximum of 128 pods
concurrently on a node, make sure that the address space of the subnet and the
virtual network can allocate at least 128 * 256 IP addresses, _in addition to_
initial IP allocations to VM NICs during Azure resource creation.

Accounting for IP addresses that are allocated to NICs during VM bring-up, set
the address space of the subnet and virtual network to 10.0.0.0/16. This
ensures that the network can dynamically allocate at least 32768 addresses,
plus a buffer for initial allocations for primary IP addresses.
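
The sizing arithmetic above can be sketched in shell (the node and pod counts
are the illustrative figures from the example):

```shell
# Illustrative sizing from the example: 256 nodes, up to 128 pods per node.
NODES=256
PODS_PER_NODE=128

# Minimum number of pod IPs the subnet must be able to allocate.
POD_IPS=$(( NODES * PODS_PER_NODE ))
echo "pod IPs required: ${POD_IPS}"    # 32768

# A /16 address space provides 2^(32-16) addresses, which covers the
# pod IPs plus a buffer for each VM NIC's primary IP allocation.
TOTAL=$(( 1 << (32 - 16) ))
echo "addresses in a /16: ${TOTAL}"    # 65536
```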
> Azure IPAM, UCP, and Kubernetes
>
> The Azure IPAM module queries an Azure virtual machine's metadata to obtain
> a list of IP addresses that are assigned to the virtual machine's NICs. The
> IPAM module allocates these IP addresses to Kubernetes pods. You configure the
> IP addresses as `ipConfigurations` in the NICs associated with a virtual machine
> or scale set member, so that Azure IPAM can provide them to Kubernetes when
> requested.
{: .important}

#### Additional Notes

- The `IP_COUNT` variable defines the subnet size for each node's pod IPs. This subnet size is the same for all hosts.
- The Kubernetes `pod-cidr` must match the address range of the hosts' Azure VNet.

## Configure IP pools for Azure stand-alone VMs

Follow these steps once the underlying infrastructure has been provisioned.

### Configure multiple IP addresses per VM NIC

Follow the steps below to configure multiple IP addresses per VM NIC.

1. Initialize a swarm cluster comprising the virtual machines you created
   earlier. On one of the nodes of the cluster, run:

   ```bash
   docker swarm init
   ```

2. Note the tokens for managers and workers. You can retrieve the join tokens
   at any time by running `docker swarm join-token manager` or
   `docker swarm join-token worker` on the manager node.
3. Join two other nodes on the cluster as managers (recommended for HA) by running:

   ```bash
   docker swarm join --token <manager-token> <manager-ip>:2377
   ```

4. Join the remaining nodes on the cluster as workers:

   ```bash
   docker swarm join --token <worker-token> <manager-ip>:2377
   ```

5. Create a file named `azure_ucp_admin.toml` containing the Service Principal
   credentials gathered in the prerequisites:

   ```bash
   cat > azure_ucp_admin.toml <<EOF
   AZURE_CLIENT_ID = "<Azure Service Principal ID>"
   AZURE_TENANT_ID = "<Azure Active Directory Tenant ID>"
   AZURE_SUBSCRIPTION_ID = "<Azure Subscription ID>"
   AZURE_CLIENT_SECRET = "<Azure Service Principal Secret Key>"
   EOF
   ```

6. Create a Docker Swarm secret based on the `azure_ucp_admin.toml` file.

   ```bash
   docker secret create azure_ucp_admin.toml azure_ucp_admin.toml
   ```

7. Create a global swarm service using the [docker4x/az-nic-ips](https://hub.docker.com/r/docker4x/az-nic-ips/)
   image on Docker Hub. Use the Swarm secret to prepopulate the virtual machines
   with the desired number of IP addresses per VM from the VNET pool. Set the
   number of IPs to allocate to each VM through the `IP_COUNT` environment variable.
   For example, to configure 128 IP addresses per VM, run the following command:

   ```bash
   docker service create \
     --mode=global \
     --secret=azure_ucp_admin.toml \
     --log-driver json-file \
     --log-opt max-size=1m \
     --env IP_COUNT=128 \
     --name ipallocator \
     --constraint "node.platform.os == linux" \
     docker4x/az-nic-ips
   ```
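
Once the service is running on every node, you can verify that each VM's NIC
carries the expected number of IP configurations, for example with the Azure
CLI (the resource group and NIC names here are placeholders):

```shell
# Placeholder resource group and NIC names; substitute your own.
# Prints the number of ipConfigurations on the NIC; expect it to
# match the IP_COUNT value passed to the ipallocator service.
az network nic show \
  --resource-group ucp-resources \
  --name ucp-worker-0-nic \
  --query "length(ipConfigurations)"
```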

## Set up IP configurations on an Azure virtual machine scale set

Configure IP pools for each member of the VM scale set during provisioning by
associating multiple `ipConfigurations` with the scale set's
`networkInterfaceConfigurations`. Here's an example `networkProfile`
configuration for an ARM template that configures pools of 32 IP addresses
for each VM in the VM scale set.

```json
"networkProfile": {
  "networkInterfaceConfigurations": [
    {
      "name": "[variables('nicName')]",
      "properties": {
        "ipConfigurations": [
          {
            "name": "[variables('ipConfigName1')]",
            "properties": {
              "primary": "true",
              "subnet": {
                "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
              },
              "loadBalancerBackendAddressPools": [
                {
                  "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('loadBalancerName'), '/backendAddressPools/', variables('bePoolName'))]"
                }
              ],
              "loadBalancerInboundNatPools": [
                {
                  "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('loadBalancerName'), '/inboundNatPools/', variables('natPoolName'))]"
                }
              ]
            }
          },
          {
            "name": "[variables('ipConfigName2')]",
            "properties": {
              "subnet": {
                "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
              }
            }
          },
          .
          .
          .
          {
            "name": "[variables('ipConfigName32')]",
            "properties": {
              "subnet": {
                "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
              }
            }
          }
        ],
        "primary": "true"
      }
    }
  ]
}
```
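
A scale set template containing this `networkProfile` block can be deployed
like any other ARM template, for example with the Azure CLI of that era
(`az group deployment create`; newer CLI versions use
`az deployment group create`). The file and resource group names below are
placeholders:

```shell
# Placeholder names; substitute your resource group and template file.
az group deployment create \
  --resource-group ucp-resources \
  --template-file scaleset-template.json
```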

## Install UCP

Use the following command to install UCP on the manager node.
The `--pod-cidr` option maps to the IP address range that you configured for
the subnets in the previous sections, and `--host-address` maps to the
IP address of the manager node.

```bash
docker container run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} install \
  --host-address <ucp-ip> \
  --interactive \
  --swarm-port 3376 \
  --pod-cidr <ip-address-range> \
  --cloud-provider Azure
```
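
After the installer completes, a quick sanity check from the manager node is
to list the cluster members and confirm that every node has joined:

```shell
# Run on the manager node; all nodes should report STATUS "Ready".
docker node ls
```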