
Commit 96ce9ab

Author: L-Hudson
Merge pull request docker#7353 from ollypom/dockeree-azuredocs
Added Azure Config File
2 parents 3154a93 + 22c3a4c commit 96ce9ab

4 files changed: +419 −115 lines
Lines changed: 290 additions & 0 deletions
@@ -0,0 +1,290 @@
---
title: Install UCP on Azure
description: Learn how to install Docker Universal Control Plane in a Microsoft Azure environment.
keywords: Universal Control Plane, UCP, install, Docker EE, Azure, Kubernetes
---

Docker UCP integrates closely with Microsoft Azure for its Kubernetes networking
and persistent storage feature set. UCP deploys the Calico CNI provider; on Azure,
the Calico CNI leverages the Azure networking infrastructure for data path
networking and the Azure IPAM module for IP address management. A number of
infrastructure prerequisites must be met prior to UCP installation for the
Calico / Azure integration to work.

## Docker UCP Networking

Docker UCP configures the Azure IPAM module for Kubernetes to allocate
IP addresses to Kubernetes pods. The Azure IPAM module requires each Azure
VM that's part of the Kubernetes cluster to be configured with a pool of
IP addresses.

You have two options for deploying the VMs for the Kubernetes cluster on Azure:

- Install the cluster on Azure stand-alone virtual machines. Docker UCP provides
an [automated mechanism](#configure-ip-pools-for-azure-stand-alone-vms)
to configure and maintain IP pools for stand-alone Azure VMs.
- Install the cluster on an Azure virtual machine scale set. Configure the
IP pools by using an ARM template like
[this one](#set-up-ip-configurations-on-an-azure-virtual-machine-scale-set).

The steps for setting up IP address management are different in the two
environments. If you're using a scale set, you set up `ipConfigurations`
in an ARM template. If you're using stand-alone VMs, you set up IP pools
for each VM by using a Docker-provided utility container that's configured
to run as a global Swarm service.

## Azure Prerequisites

The following infrastructure prerequisites must be met in order
to successfully deploy Docker UCP on Azure:

- All UCP Nodes (Managers and Workers) need to be deployed into the same
Azure Resource Group. The Azure Networking components (VNets, Subnets,
Security Groups) can be deployed in a second Azure Resource Group.
- All UCP Nodes (Managers and Workers) need to be attached to the same
Azure Subnet.
- All UCP Nodes (Managers and Workers) need to be tagged in Azure with the
`Orchestrator` tag. The value for this tag is the Kubernetes version number
in the format `Orchestrator=Kubernetes:x.y.z`. This value may change with each
UCP release. To find the relevant version, see the UCP
[Release Notes](../../release-notes). For example, for UCP 3.0.6 the tag
would be `Orchestrator=Kubernetes:1.8.15`.
- The Azure Computer Name needs to match the node operating system's hostname.
Note that this applies to the FQDN of the host, including domain names.
- An Azure Service Principal with `Contributor` access to the Azure Resource
Group hosting the UCP Nodes is required. If you're using a separate networking
Resource Group, the same Service Principal also needs `Network Contributor`
access to that Resource Group. An illustrative example of creating the Service
Principal and applying the `Orchestrator` tag with the Azure CLI follows this list.
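
For illustration, the following Azure CLI sketch shows one way to satisfy the
Service Principal and `Orchestrator` tag prerequisites. The resource group, VM,
and Service Principal names are placeholders, and flag syntax may vary between
Azure CLI versions.

```bash
# Create a Service Principal with Contributor access to the Resource Group
# hosting the UCP nodes (placeholder names; adapt to your environment).
az ad sp create-for-rbac \
  --name ucp-service-principal \
  --role Contributor \
  --scopes /subscriptions/<subscription-id>/resourceGroups/<ucp-resource-group>

# Tag a UCP node with the Kubernetes version used by your UCP release,
# for example UCP 3.0.6; repeat for every manager and worker VM.
az vm update \
  --resource-group <ucp-resource-group> \
  --name <ucp-node-vm> \
  --set tags.Orchestrator="Kubernetes:1.8.15"
```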

The following information will be required for the installation:

- `subscriptionId` - The Azure Subscription ID in which the UCP
objects are being deployed.
- `tenantId` - The Azure Active Directory Tenant ID in which the UCP
objects are being deployed.
- `aadClientId` - The Azure Service Principal ID.
- `aadClientSecret` - The Azure Service Principal Secret Key.
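
As an illustrative way to collect these values with the Azure CLI (assuming you
created the Service Principal as sketched above): the `appId` and `password`
fields returned by `az ad sp create-for-rbac` correspond to `aadClientId` and
`aadClientSecret`, and the subscription and tenant IDs can be read from the
current account:

```bash
# Print the Azure Subscription ID and Azure AD Tenant ID of the current account.
az account show --query id --output tsv
az account show --query tenantId --output tsv
```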

### Azure Configuration File

For Docker UCP to integrate into Microsoft Azure, an Azure configuration file
needs to be placed on each UCP node in your cluster, at
`/etc/kubernetes/azure.json`.

See the template below. Entries that do not contain `****` should not be
changed.

```json
{
    "cloud": "AzurePublicCloud",
    "tenantId": "****",
    "subscriptionId": "****",
    "aadClientId": "****",
    "aadClientSecret": "****",
    "resourceGroup": "****",
    "location": "****",
    "subnetName": "****",
    "securityGroupName": "****",
    "vnetName": "****",
    "cloudProviderBackoff": false,
    "cloudProviderBackoffRetries": 0,
    "cloudProviderBackoffExponent": 0,
    "cloudProviderBackoffDuration": 0,
    "cloudProviderBackoffJitter": 0,
    "cloudProviderRatelimit": false,
    "cloudProviderRateLimitQPS": 0,
    "cloudProviderRateLimitBucket": 0,
    "useManagedIdentityExtension": false,
    "useInstanceMetadata": true
}
```

There are some optional values for Azure deployments:

- `"primaryAvailabilitySetName": "****",` - The Worker Nodes' availability set.
- `"vnetResourceGroup": "****",` - Set this if your Azure Network objects live in a
separate resource group.
- `"routeTableName": "****",` - Set this if you have defined multiple route tables within
an Azure subnet.

More details on this configuration file can be found
[here](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure.go).
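
As a minimal sketch of distributing the file, assuming you have populated a
local `azure.json` and have SSH access to each node (the hostname and user are
placeholders):

```bash
# Copy the populated configuration file to a UCP node and move it into place.
# Repeat for every manager and worker node in the cluster.
scp azure.json <user>@<ucp-node>:/tmp/azure.json
ssh <user>@<ucp-node> "sudo mkdir -p /etc/kubernetes && sudo mv /tmp/azure.json /etc/kubernetes/azure.json"
```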

## Considerations for IPAM Configuration

The subnet and the virtual network associated with the primary interface of
the Azure VMs need to be configured with a large enough address prefix/range.
The number of required IP addresses depends on the number of pods running
on each node and the number of nodes in the cluster.

For example, in a cluster of 256 nodes, to run a maximum of 128 pods
concurrently on a node, make sure that the address space of the subnet and the
virtual network can allocate at least 128 * 256 = 32,768 IP addresses, _in addition to_
initial IP allocations to VM NICs during Azure resource creation.

Accounting for IP addresses that are allocated to NICs during VM bring-up, set
the address space of the subnet and virtual network to 10.0.0.0/16. This
ensures that the network can dynamically allocate at least 32,768 addresses,
plus a buffer for initial allocations for primary IP addresses.
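
For illustration, a virtual network and subnet with a /16 address space could
be created with the Azure CLI along these lines; the names and region are
placeholders, and flag names may differ slightly between CLI versions:

```bash
# Create a VNet and subnet with a /16 address space, leaving room for
# per-node pod IP pools as described above (placeholder names and region).
az network vnet create \
  --resource-group <ucp-resource-group> \
  --name ucp-vnet \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name ucp-subnet \
  --subnet-prefixes 10.0.0.0/16 \
  --location eastus
```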

> Azure IPAM, UCP, and Kubernetes
>
> The Azure IPAM module queries an Azure virtual machine's metadata to obtain
> a list of IP addresses that are assigned to the virtual machine's NICs. The
> IPAM module allocates these IP addresses to Kubernetes pods. You configure the
> IP addresses as `ipConfigurations` in the NICs associated with a virtual machine
> or scale set member, so that Azure IPAM can provide them to Kubernetes when
> requested.
{: .important}

### Additional Notes

- The `IP_COUNT` variable defines the subnet size for each node's pod IPs. This subnet size is the same for all hosts.
- The Kubernetes `pod-cidr` must match the Azure VNet of the hosts.
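
An illustrative way to check the VNet address range that the `pod-cidr` must
fall within (names are placeholders):

```bash
# Show the address prefixes of the VNet hosting the UCP nodes; the value
# passed to --pod-cidr at install time must sit inside this range.
az network vnet show \
  --resource-group <ucp-resource-group> \
  --name ucp-vnet \
  --query addressSpace.addressPrefixes
```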

## Configure IP pools for Azure stand-alone VMs

Follow these steps once the underlying infrastructure has been provisioned.

### Configure multiple IP addresses per VM NIC

Follow the steps below to configure multiple IP addresses per VM NIC.

1. Initialize a swarm cluster comprising the virtual machines you created
   earlier. On one of the nodes of the cluster, run:

   ```bash
   docker swarm init
   ```

2. Note the tokens for managers and workers. You may retrieve the join tokens
   at any time by running `docker swarm join-token manager` or
   `docker swarm join-token worker` on the manager node.

3. Join two other nodes on the cluster as managers (recommended for HA) by running:

   ```bash
   docker swarm join --token <manager-token> <manager-ip>:2377
   ```

4. Join the remaining nodes on the cluster as workers:

   ```bash
   docker swarm join --token <worker-token> <manager-ip>:2377
   ```

5. Create a file named "azure_ucp_admin.toml" that contains the credentials of
   the Service Principal created earlier.

   ```bash
   cat > azure_ucp_admin.toml <<EOF
   AZURE_CLIENT_ID = "<Azure Service Principal ID>"
   AZURE_TENANT_ID = "<Azure Active Directory Tenant ID>"
   AZURE_SUBSCRIPTION_ID = "<Azure Subscription ID>"
   AZURE_CLIENT_SECRET = "<Azure Service Principal Secret Key>"
   EOF
   ```

6. Create a Docker Swarm secret based on the "azure_ucp_admin.toml" file.

   ```bash
   docker secret create azure_ucp_admin.toml azure_ucp_admin.toml
   ```

7. Create a global swarm service using the [docker4x/az-nic-ips](https://hub.docker.com/r/docker4x/az-nic-ips/)
   image on Docker Hub. Use the Swarm secret to prepopulate the virtual machines
   with the desired number of IP addresses per VM from the VNET pool. Set the
   number of IPs to allocate to each VM through the `IP_COUNT` environment variable.
   For example, to configure 128 IP addresses per VM, run the following command:

   ```bash
   docker service create \
     --mode=global \
     --secret=azure_ucp_admin.toml \
     --log-driver json-file \
     --log-opt max-size=1m \
     --env IP_COUNT=128 \
     --name ipallocator \
     --constraint "node.platform.os == linux" \
     docker4x/az-nic-ips
   ```
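
As an optional sanity check (not part of the original procedure), you can
confirm that the allocator task is running on every node:

```bash
# List the ipallocator tasks; one task should be running per node.
docker service ps ipallocator
```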

## Set up IP configurations on an Azure virtual machine scale set

Configure IP pools for each member of the VM scale set during provisioning by
associating multiple `ipConfigurations` with the scale set's
`networkInterfaceConfigurations`. Here's an example `networkProfile`
configuration for an ARM template that configures pools of 32 IP addresses
for each VM in the VM scale set.

```json
"networkProfile": {
  "networkInterfaceConfigurations": [
    {
      "name": "[variables('nicName')]",
      "properties": {
        "ipConfigurations": [
          {
            "name": "[variables('ipConfigName1')]",
            "properties": {
              "primary": "true",
              "subnet": {
                "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
              },
              "loadBalancerBackendAddressPools": [
                {
                  "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('loadBalancerName'), '/backendAddressPools/', variables('bePoolName'))]"
                }
              ],
              "loadBalancerInboundNatPools": [
                {
                  "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('loadBalancerName'), '/inboundNatPools/', variables('natPoolName'))]"
                }
              ]
            }
          },
          {
            "name": "[variables('ipConfigName2')]",
            "properties": {
              "subnet": {
                "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
              }
            }
          }
          .
          .
          .
          {
            "name": "[variables('ipConfigName32')]",
            "properties": {
              "subnet": {
                "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
              }
            }
          }
        ],
        "primary": "true"
      }
    }
  ]
}
```
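
For illustration only, an ARM template containing such a `networkProfile` could
be deployed with the Azure CLI along these lines; the template and parameter
file names are placeholders:

```bash
# Deploy the scale set template that defines the per-VM ipConfigurations.
az group deployment create \
  --resource-group <ucp-resource-group> \
  --template-file vmss-template.json \
  --parameters @vmss-parameters.json
```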

## Install UCP

Use the following command to install UCP on the manager node.
The `--pod-cidr` option maps to the IP address range that you configured for
the subnets in the previous sections, and the `--host-address` maps to the
IP address of the manager node.

```bash
docker container run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} install \
  --host-address <ucp-ip> \
  --interactive \
  --swarm-port 3376 \
  --pod-cidr <ip-address-range> \
  --cloud-provider Azure
```
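
As an optional, illustrative check after the install completes, download a UCP
client bundle from the web interface and confirm that the nodes and the
`kube-system` pods (including Calico) are healthy. The commands below assume a
configured client bundle:

```bash
# With a UCP client bundle sourced, verify node and system pod health.
kubectl get nodes -o wide
kubectl -n kube-system get pods
```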

datacenter/ucp/3.0/guides/admin/install/upgrade.md

Lines changed: 13 additions & 9 deletions
@@ -19,9 +19,13 @@ in each node of the swarm to version 17.06 Enterprise Edition. Plan for the
upgrade to take place outside of business hours, to ensure there's minimal
impact to your users.

-Also, don't make changes to UCP configurations while you're upgrading it.
+Don't make changes to UCP configurations while you're upgrading.
This can lead to misconfigurations that are difficult to troubleshoot.

+> Note: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft
+> Azure, please ensure all of the Azure [prerequisites](install-on-azure.md#azure-prerequisites)
+> are met.
+
## Back up your swarm

Before starting an upgrade, make sure that your swarm is healthy. If a problem
@@ -48,7 +52,7 @@ Starting with the manager nodes, and then worker nodes:
2. Upgrade the Docker Engine to version 17.06 or higher.
3. Make sure the node is healthy.

-In your browser, navigate to the **Nodes** page in the UCP web UI,
+In your browser, navigate to the **Nodes** page in the UCP web interface,
and check that the node is healthy and is part of the swarm.

> Swarm mode
@@ -58,9 +62,9 @@ Starting with the manager nodes, and then worker nodes:

## Upgrade UCP

-You can upgrade UCP from the web UI or the CLI.
+You can upgrade UCP from the web interface or the CLI.

-### Use the UI to perform an upgrade
+### Use the web interface to perform an upgrade

When an upgrade is available for a UCP installation, a banner appears.

@@ -76,13 +80,13 @@ Select a version to upgrade to using the **Available UCP Versions** dropdown,
then click to upgrade.

Before the upgrade happens, a confirmation dialog along with important
-information regarding swarm and UI availability is displayed.
+information regarding swarm and interface availability is displayed.

![](../../images/upgrade-ucp-3.png){: .with-border}

-During the upgrade, the UI is unavailable, so wait until the upgrade is complete
-before trying to use the UI. When the upgrade completes, a notification alerts
-you that a newer version of the UI is available, and you can see the new UI
+During the upgrade, the interface is unavailable, so wait until the upgrade is complete
+before trying to use the interface. When the upgrade completes, a notification alerts
+you that a newer version of the interface is available, and you can see the new interface
after you refresh your browser.

### Use the CLI to perform an upgrade
@@ -103,7 +107,7 @@ $ docker container run --rm -it \
This runs the upgrade command in interactive mode, so that you are prompted
for any necessary configuration values.

-Once the upgrade finishes, navigate to the UCP web UI and make sure that
+Once the upgrade finishes, navigate to the UCP web interface and make sure that
all the nodes managed by UCP are healthy.

## Recommended upgrade paths
