web UI.

![](../../images/scale-your-cluster-2.png){: .with-border}

## Pause or drain nodes

Once a node is part of the cluster, you can change its role, turning a manager
node into a worker and vice versa. You can also configure the node's
availability so that it is:

* Active: the node can receive and execute tasks.
* Paused: the node continues running existing tasks, but doesn't receive new
  ones.
* Drained: the node can't receive new tasks. Existing tasks are stopped and
  replica tasks are launched in active nodes.

In the UCP web UI, browse to the **Nodes** page and select the node. In the
details pane, click **Configure** to open the **Edit Node** page.

![](../../images/scale-your-cluster-3.png){: .with-border}
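
You can make the same availability change from the CLI. A minimal sketch,
using `docker node update` with a placeholder node name taken from
`docker node ls`:

```bash
# Stop scheduling new tasks on the node and reschedule its existing
# tasks onto other active nodes.
docker node update --availability drain <nodeID or hostname>

# Pause the node: existing tasks keep running, but no new tasks arrive.
docker node update --availability pause <nodeID or hostname>

# Make the node available for new tasks again.
docker node update --availability active <nodeID or hostname>
```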

## Promote or demote a node

You can promote worker nodes to managers to make UCP fault tolerant. You can
also demote a manager node into a worker.

To promote or demote a node:

1. Navigate to the **Nodes** page and click the node that you want to promote
   or demote.
2. In the details pane, click **Configure** and select **Details** to open
   the **Edit Node** page.
3. In the **Role** section, click **Manager** or **Worker**.
4. Click **Save** and wait until the operation completes.
5. Navigate to the **Nodes** page and confirm that the node role has changed.

If you're load-balancing user requests to Docker EE across multiple manager
nodes, don't forget to remove these nodes from your load-balancing pool when
you demote them to workers.
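
You can also change a node's role from the CLI. A minimal sketch, with a
placeholder node name:

```bash
# Identify the node ID or hostname of the target node.
docker node ls

# Promote a worker to manager, or demote a manager to worker.
docker node promote <nodeID or hostname>
docker node demote <nodeID or hostname>
```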

## Remove a node from the cluster

You can remove worker nodes from the cluster at any time:

1. Navigate to the **Nodes** page and select the node.
2. In the details pane, click **Actions** and select **Remove**.
3. Click **Confirm** when you're prompted.
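
From the CLI, a worker whose status is still `Ready` must first be forced to
leave the swarm before it can be removed. A sketch of the sequence, with
placeholder names:

```bash
# On the target worker node itself (for example, over SSH), force it to
# leave the swarm. Don't run this on a manager, as that can cause loss
# of quorum.
docker swarm leave --force

# Back on a manager, once the node's status is reported as Down, remove
# it from the cluster.
docker node rm <nodeID or hostname>
```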

Since manager nodes are important to the overall health of the cluster, be
careful when removing one.

To remove a manager node:

1. Make sure all nodes in the cluster are healthy. Don't remove a manager
   node if that's not the case.
2. Demote the manager node into a worker.
3. Now you can remove that node from the cluster.
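
The equivalent CLI sequence, as a sketch with placeholder names:

```bash
# Demote the manager to a worker and wait for the operation to complete.
docker node demote <nodeID or hostname>

# On the demoted node itself, leave the swarm so that its status is
# reported as Down.
docker swarm leave --force

# Back on a manager, remove the node from the cluster.
docker node rm <nodeID or hostname>
```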

## Use the CLI to manage your nodes

You can use the Docker CLI client to manage your nodes from the CLI. To do
this, configure your Docker CLI client with a
[UCP client bundle](../../../user-access/cli.md).
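
For example, here's a sketch that assumes you've already downloaded a client
bundle from your UCP profile page; the `ucp-bundle-admin.zip` file name is
illustrative:

```bash
# Unpack the bundle and load its environment variables, which point the
# Docker CLI at UCP using the bundled certificates.
unzip ucp-bundle-admin.zip -d ucp-bundle
cd ucp-bundle
eval "$(<env.sh)"
```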

Once you do that, you can start managing your UCP nodes:

```bash
docker node ls
```
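
From there, the usual `docker node` subcommands apply. For example:

```bash
# Show details for a node, including its role and availability.
docker node inspect --pretty <nodeID or hostname>

# List the tasks running on a specific node.
docker node ps <nodeID or hostname>
```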

## Where to go next