@@ -33,30 +33,13 @@ members in a geographically distinct facility or data center.
Requirements
------------

- For a three-member replica set you need two instances in a
- primary facility (hereafter, "Site A") and one member in a secondary
- facility (hereafter, "Site B".) Site A should be the same facility or
- very close to your primary application infrastructure
- (i.e. application servers, caching layer, users, etc.)
-
- For a four-member replica set you need two members in Site A,
- two members in Site B (or one member in Site B and one member in Site
- C,) and a single :term:`arbiter` in Site A.
-
- For replica sets with additional members in the secondary facility or with
- multiple secondary facilities, the requirements are the same as above but with the
- following notes:
-
- - Ensure that a majority of the :ref:`voting members
- <replica-set-non-voting-members>` are within Site A. This includes
- :ref:`secondary-only members <replica-set-secondary-only-members>` and
- :ref:`arbiters <replica-set-arbiters>` For more information on the
- need to keep the voting majority on one site, see
- :ref:`replica-set-elections-and-network-partitions`.
-
- - If you deploy a replica set with an uneven number of members, deploy
- an :ref:`arbiter <replica-set-arbiters>` on Site A. The arbiter must
- be on site A to keep the majority there.
+ If possible, use an odd number of data centers, and choose a
+ distribution of members that maximizes the likelihood that even with a
+ loss of a data center, the remaining replica set members can form a
+ majority or at minimum, provide a copy of your data.
+
+ If you deploy a replica set with an uneven number of members, add an
+ :ref:`arbiter <replica-set-arbiters>`.
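For illustration, adding such an arbiter is a single command in the :program:`mongo` shell once the data-bearing members are running; this is only a sketch, and the hostname ``mongodb-arbiter.example.net`` is a placeholder rather than one of the hosts used later in this tutorial::

   // Run against the current primary. An arbiter votes in elections
   // but holds no data, so it breaks ties without adding another
   // copy of the data set.
   rs.addArb("mongodb-arbiter.example.net")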

For all configurations in this tutorial, deploy each replica set member
on a separate system. Although you may deploy more than one replica set member on a
@@ -87,9 +70,7 @@ features:
- ``mongodb2.example.net``

Configure DNS names appropriately, *or* set up your systems'
- ``/etc/hosts`` file to reflect this configuration. Ensure that one
- system (e.g. ``mongodb2.example.net``) resides in Site B. Host all
- other systems in Site A.
+ ``/etc/hosts`` file to reflect this configuration.

- Ensure that network traffic can pass between all members in the
network securely and efficiently. Consider the following:
@@ -197,9 +178,9 @@ To deploy a geographically distributed three-member set:
rs.add("mongodb1.example.net")
rs.add("mongodb2.example.net")

- #. Make sure that you have configured the member located in Site B
- (i.e. ``mongodb2.example.net``) as a :ref:`secondary-only member
- <replica-set-secondary-only-members>`:
+ #. Optional. Configure the member eligibility for becoming primary.
+ In some cases, you may prefer that the members in one data center be
+ elected primary before the members in the other data centers.
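The sub-steps that follow locate the member in the replica set configuration and then apply the change. As a compact sketch of this kind of reconfiguration, assuming the member at array index ``2`` (the value the later example uses) should be less preferred as primary, and that the shell is connected to the current primary::

   // Lower the priority of the member at array index 2 so that
   // members in the preferred data center win elections first.
   // The value 0.5 is illustrative; any lower, non-zero priority works.
   cfg = rs.conf()
   cfg.members[2].priority = 0.5
   rs.reconfig(cfg)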

a. Issue the following command to determine the
:data:`~local.system.replset.members[n]._id` value for
@@ -209,8 +190,8 @@ To deploy a geographically distributed three-member set:

rs.conf()

- #. In the :data:`~local.system.replset.members` array, save the
- :data:`~local.system.replset.members[n]._id` value. The example in
+ #. In the :data:`~local.system.replset.members` array, find the array
+ index of the member to configure. The example in
the next step assumes this value is ``2``.

#. In the :program:`mongo` shell connected to the replica set's
@@ -265,9 +246,7 @@ features:
- ``mongodb3.example.net``

Configure DNS names appropriately, *or* set up your systems'
- ``/etc/host`` file to reflect this configuration. Ensure that one
- system (e.g. ``mongodb2.example.net``) resides in Site B. Host all
- other systems in Site A.
+ ``/etc/host`` file to reflect this configuration.

- One host (e.g. ``mongodb3.example.net``) will be an :term:`arbiter`
and can run on a system that is also used for an application server
@@ -400,9 +379,9 @@ To deploy a geographically distributed four-member set:
rs.addArb("mongodb4.example.net")

- #. Make sure that you have configured each member located in Site B
- (e.g. ``mongodb3.example.net``) as a :ref:`secondary-only member
- <replica-set-secondary-only-members>`:
+ #. Optional. Configure the member eligibility for becoming primary.
+ In some cases, you may prefer that the members in one data center be
+ elected primary before the members in the other data centers.
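As in the three-member procedure, the sub-steps that follow locate the member before changing it. One hedged variation, assuming the member at array index ``2`` is kept in a remote facility purely for redundancy and should never be elected primary::

   // A priority of 0 makes the member ineligible to become primary
   // while it continues to replicate data and to vote in elections.
   cfg = rs.conf()
   cfg.members[2].priority = 0
   rs.reconfig(cfg)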

a. Issue the following command to determine the
:data:`~local.system.replset.members[n]._id` value for the member:
@@ -411,8 +390,8 @@ To deploy a geographically distributed four-member set:

rs.conf()

- #. In the :data:`~local.system.replset.members` array, save the
- :data:`~local.system.replset.members[n]._id` value. The example in
+ #. In the :data:`~local.system.replset.members` array, find the array
+ index of the member to configure. The example in
the next step assumes this value is ``2``.

#. In the :program:`mongo` shell connected to the replica set's
@@ -449,26 +428,12 @@ To deploy a geographically distributed four-member set:
Deploy a Distributed Set with More than Four Members
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- The procedure for deploying a geographically distributed set with more
- than four members is similar to the above procedures, with the following
- differences:
+ The above procedures detail the steps necessary for deploying a
+ geographically redundant replica set. Larger replica set deployments
+ generally follow the same steps, but have additional considerations:

- Never deploy more than seven voting members.

- - Use the procedure for a four-member set if you have an even number of
- members (see :ref:`replica-set-deploy-distributed-four-member`).
- Ensure that Site A always has a majority of the members by deploying
- the :term:`arbiter` within Site A. For six member sets, deploy at
- least three voting members in addition to the arbiter in Site A, the
- remaining members in alternate sites.
-
- - Use the procedure for a three-member set if you have an odd number of
- members (see :ref:`replica-set-deploy-distributed-three-member`).
- Ensure that Site A always has a majority of the members of the set.
- For example, if a set has five members, deploy three members within
- the primary facility and two members in other facilities.
-
- - If you have a majority of the members of the set *outside* of Site A
- and the network partitions to prevent communication between sites,
- the current primary in Site A will step down, even if none of the
- members outside of Site A are eligible to become primary.
+ - If you have an even number of members, similar to :ref:`the procedure
+ for a four-member set <replica-set-deploy-distributed-four-member>`,
+ deploy an arbiter.
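To make the distribution advice concrete, one possible layout for a larger, odd-numbered set is five voting members across three data centers, so that the loss of any single facility still leaves a majority of votes. The set name ``rs0`` and the hostnames below are placeholders::

   // Five voting members in three facilities: losing any one
   // data center leaves at least three of five votes available.
   rs.initiate({
     _id: "rs0",
     members: [
       { _id: 0, host: "dc1-a.example.net" },  // data center 1
       { _id: 1, host: "dc1-b.example.net" },  // data center 1
       { _id: 2, host: "dc2-a.example.net" },  // data center 2
       { _id: 3, host: "dc2-b.example.net" },  // data center 2
       { _id: 4, host: "dc3-a.example.net" }   // data center 3
     ]
   })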