
Commit 29c16ae

DOCS-2330 distribute replica set
1 parent: 27dc172

3 files changed: 30 additions, 76 deletions

source/administration/replica-set-architectures.txt

Lines changed: 4 additions & 13 deletions
@@ -75,8 +75,6 @@ conditions are true:
 prevents it from being a primary. Any member with a ``priority``
 value greater than ``0`` is available to be a primary.

-- A majority of the set's members operate in the main data center.
-
 .. seealso:: :doc:`/tutorial/expand-replica-set`.

 .. _replica-set-geographical-distribution:
@@ -86,10 +84,7 @@ Geographically Distributed Sets

 A geographically distributed replica set provides data recovery should
 one data center fail. These sets include at least one member in a
-secondary data center. The member has its
-:data:`~local.system.replset.members[n].priority`
-:ref:`set <replica-set-reconfiguration-usage>` to ``0`` to prevent the
-member from ever becoming primary.
+secondary data center.

 In many circumstances, these deployments consist of the following:
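The deleted lines refer to a :data:`~local.system.replset.members[n].priority` of ``0``, which is applied through a replica set reconfiguration. A minimal mongo shell sketch, assuming the secondary-data-center member sits at array index ``2``::

    cfg = rs.conf()
    // a priority of 0 makes this member ineligible to become primary
    cfg.members[2].priority = 0
    rs.reconfig(cfg)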

@@ -99,9 +94,7 @@ In many circumstances, these deployments consist of the following:
 - One :term:`secondary <secondary>` member in the primary data center.
   This member can become the primary member at any time.

-- One secondary member in a secondary data center. This member is
-  ineligible to become primary. Set its
-  :data:`local.system.replset.members[n].priority` to ``0``.
+- One secondary member in a secondary data center.

 If the primary is unavailable, the replica set will elect a new primary
 from the primary data center.
@@ -111,9 +104,7 @@ the member in the secondary center cannot independently become the
 primary.

 If the primary data center fails, you can manually recover the data
-set from the secondary data center. With appropriate :ref:`write concern
-<write-concern>` there will be no data loss and downtime can be
-minimal.
+set from the secondary data center.

 When you add a secondary data center, make sure to keep an odd number of
 members overall to prevent ties during elections for primary by
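For context on the deleted sentence: a ``majority`` :ref:`write concern <write-concern>` makes each write wait for acknowledgment from a majority of members, so an acknowledged write survives the loss of the primary data center. A minimal sketch, assuming a MongoDB 2.6+ shell and a hypothetical ``products`` collection::

    db.products.insert(
       { sku: "abc123", qty: 100 },
       { writeConcern: { w: "majority", wtimeout: 5000 } }
    )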
@@ -238,7 +229,7 @@ to ensure that they will never become primary. These members will vote in
 elections for primary but will never be eligible for election to
 primary. Consider likely failover scenarios, such as inter-site network
 partitions, and ensure there will be members eligible for election as
-primary *and* a quorum of voting members in the main facility.
+primary.

 .. note::

source/administration/replica-sets.txt

Lines changed: 1 addition & 3 deletions
@@ -1245,8 +1245,6 @@ determining whether a majority is available to hold an election.

 That means that if a primary steps down and neither side of the
 partition has a majority on its own, the set will not elect a new
-primary and the set will become read only. To avoid this situation,
-attempt to place a majority of instances in one data center with a
-minority of instances in a secondary facility.
+primary and the set will become read only.

 .. see:: :ref:`replica-set-election-internals`.
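To check which members remain reachable during a partition and in what state, ``rs.status()`` reports per-member replication states; a short sketch::

    // print each member's host and replication state (e.g. PRIMARY, SECONDARY)
    rs.status().members.forEach(function (m) {
       print(m.name + " : " + m.stateStr)
    })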

source/tutorial/deploy-geographically-distributed-replica-set.txt

Lines changed: 25 additions & 60 deletions
@@ -33,30 +33,13 @@ members in a geographically distinct facility or data center.
 Requirements
 ------------

-For a three-member replica set you need two instances in a
-primary facility (hereafter, "Site A") and one member in a secondary
-facility (hereafter, "Site B".) Site A should be the same facility or
-very close to your primary application infrastructure
-(i.e. application servers, caching layer, users, etc.)
-
-For a four-member replica set you need two members in Site A,
-two members in Site B (or one member in Site B and one member in Site
-C,) and a single :term:`arbiter` in Site A.
-
-For replica sets with additional members in the secondary facility or with
-multiple secondary facilities, the requirements are the same as above but with the
-following notes:
-
-- Ensure that a majority of the :ref:`voting members
-  <replica-set-non-voting-members>` are within Site A. This includes
-  :ref:`secondary-only members <replica-set-secondary-only-members>` and
-  :ref:`arbiters <replica-set-arbiters>` For more information on the
-  need to keep the voting majority on one site, see
-  :ref:`replica-set-elections-and-network-partitions`.
-
-- If you deploy a replica set with an uneven number of members, deploy
-  an :ref:`arbiter <replica-set-arbiters>` on Site A. The arbiter must
-  be on site A to keep the majority there.
+If possible, use an odd number of data centers, and choose a
+distribution of members that maximizes the likelihood that, even with
+the loss of a data center, the remaining replica set members can form
+a majority or, at a minimum, provide a copy of your data.
+
+If you deploy a replica set with an even number of members, add an
+:ref:`arbiter <replica-set-arbiters>`.

 For all configurations in this tutorial, deploy each replica set member
 on a separate system. Although you may deploy more than one replica set member on a
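An arbiter votes in elections but holds no data, restoring an odd number of voting members. A minimal sketch from the primary's mongo shell, assuming a hypothetical arbiter host ``mongodb-arb.example.net``::

    rs.addArb("mongodb-arb.example.net")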
@@ -87,9 +70,7 @@ features:
 - ``mongodb2.example.net``

 Configure DNS names appropriately, *or* set up your systems'
-``/etc/hosts`` file to reflect this configuration. Ensure that one
-system (e.g. ``mongodb2.example.net``) resides in Site B. Host all
-other systems in Site A.
+``/etc/hosts`` file to reflect this configuration.

 - Ensure that network traffic can pass between all members in the
   network securely and efficiently. Consider the following:
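Where DNS is unavailable, each system's ``/etc/hosts`` file can carry the name mappings mentioned above; a sketch using hypothetical documentation-range addresses::

    192.0.2.10      mongodb0.example.net
    192.0.2.11      mongodb1.example.net
    198.51.100.10   mongodb2.example.net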
@@ -197,9 +178,9 @@ To deploy a geographically distributed three-member set:
       rs.add("mongodb1.example.net")
       rs.add("mongodb2.example.net")

-#. Make sure that you have configured the member located in Site B
-   (i.e. ``mongodb2.example.net``) as a :ref:`secondary-only member
-   <replica-set-secondary-only-members>`:
+#. Optional. Configure the member eligibility for becoming primary.
+   In some cases, you may prefer that the members in one data center be
+   elected primary before the members in the other data centers.

    a. Issue the following command to determine the
       :data:`~local.system.replset.members[n]._id` value for
@@ -209,8 +190,8 @@ To deploy a geographically distributed three-member set:

       rs.conf()

-   #. In the :data:`~local.system.replset.members` array, save the
-      :data:`~local.system.replset.members[n]._id` value. The example in
+   #. In the :data:`~local.system.replset.members` array, find the array
+      index of the member to configure. The example in
       the next step assumes this value is ``2``.

    #. In the :program:`mongo` shell connected to the replica set's
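The reconfiguration this step refers to adjusts member priority; a sketch of the pattern, assuming the member at array index ``2`` should be elected only when no higher-priority member is available::

    cfg = rs.conf()
    // the default priority is 1; a lower value defers to other members
    cfg.members[2].priority = 0.5
    rs.reconfig(cfg)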
@@ -265,9 +246,7 @@ features:
 - ``mongodb3.example.net``

 Configure DNS names appropriately, *or* set up your systems'
-``/etc/hosts`` file to reflect this configuration. Ensure that one
-system (e.g. ``mongodb2.example.net``) resides in Site B. Host all
-other systems in Site A.
+``/etc/hosts`` file to reflect this configuration.

 - One host (e.g. ``mongodb3.example.net``) will be an :term:`arbiter`
   and can run on a system that is also used for an application server
@@ -400,9 +379,9 @@ To deploy a geographically distributed four-member set:

       rs.addArb("mongodb4.example.net")

-#. Make sure that you have configured each member located in Site B
-   (e.g. ``mongodb3.example.net``) as a :ref:`secondary-only member
-   <replica-set-secondary-only-members>`:
+#. Optional. Configure the member eligibility for becoming primary.
+   In some cases, you may prefer that the members in one data center be
+   elected primary before the members in the other data centers.

    a. Issue the following command to determine the
       :data:`~local.system.replset.members[n]._id` value for the member:
@@ -411,8 +390,8 @@ To deploy a geographically distributed four-member set:

       rs.conf()

-   #. In the :data:`~local.system.replset.members` array, save the
-      :data:`~local.system.replset.members[n]._id` value. The example in
+   #. In the :data:`~local.system.replset.members` array, find the array
+      index of the member to configure. The example in
       the next step assumes this value is ``2``.

    #. In the :program:`mongo` shell connected to the replica set's
@@ -449,26 +428,12 @@ To deploy a geographically distributed four-member set:
 Deploy a Distributed Set with More than Four Members
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The procedure for deploying a geographically distributed set with more
-than four members is similar to the above procedures, with the following
-differences:
+The above procedures detail the steps necessary for deploying a
+geographically redundant replica set. Larger replica set deployments
+generally follow the same steps, but have additional considerations:

 - Never deploy more than seven voting members.

-- Use the procedure for a four-member set if you have an even number of
-  members (see :ref:`replica-set-deploy-distributed-four-member`).
-  Ensure that Site A always has a majority of the members by deploying
-  the :term:`arbiter` within Site A. For six member sets, deploy at
-  least three voting members in addition to the arbiter in Site A, the
-  remaining members in alternate sites.
-
-- Use the procedure for a three-member set if you have an odd number of
-  members (see :ref:`replica-set-deploy-distributed-three-member`).
-  Ensure that Site A always has a majority of the members of the set.
-  For example, if a set has five members, deploy three members within
-  the primary facility and two members in other facilities.
-
-- If you have a majority of the members of the set *outside* of Site A
-  and the network partitions to prevent communication between sites,
-  the current primary in Site A will step down, even if none of the
-  members outside of Site A are eligible to become primary.
+- If you have an even number of members, similar to :ref:`the procedure
+  for a four-member set <replica-set-deploy-distributed-four-member>`,
+  deploy an arbiter.
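As an illustration of the odd-number guidance above, a five-member set spread across three data centers can lose any single data center and still retain a voting majority. A hypothetical layout, reusing the tutorial's example hostnames::

    rs.initiate({
       _id: "rs0",
       members: [
          { _id: 0, host: "mongodb0.example.net" },  // data center A
          { _id: 1, host: "mongodb1.example.net" },  // data center A
          { _id: 2, host: "mongodb2.example.net" },  // data center B
          { _id: 3, host: "mongodb3.example.net" },  // data center B
          { _id: 4, host: "mongodb4.example.net" }   // data center C
       ]
    })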
