Commit d2a7b15 — Initial commit
2 files changed: +220 −0

README.md
# Initial requirements

This is an example configuration which launches an *Elasticsearch Cluster* on a *Docker Swarm Cluster*. The setup consists of the following services:

- a service with the Elasticsearch coordination node role enabled, which basically acts like a load balancer
- a service with Elasticsearch master-eligible nodes
- a service with Elasticsearch data nodes responsible for CRUD operations
- a visualizer service which shows how containers are spread across the Docker Swarm Cluster

In order to run it I recommend having at least 3 VMs provisioned. You can try AWS or even VirtualBox running on your local machine. The easiest way is to set up the servers using docker-machine. In this example I am using my AWS account (you can specify `--amazonec2-access-key` and `--amazonec2-secret-key` on the command line, or use `~/.aws/credentials` with a default profile defined).

# Provisioning virtual machines

Execute the following command to provision the servers:

`docker-machine create --driver amazonec2 --amazonec2-region eu-central-1 --amazonec2-instance-type t2.medium --amazonec2-security-group my-security-group --amazonec2-open-port 9200 --amazonec2-open-port 9300 --amazonec2-open-port 2377 --amazonec2-open-port 7946 --amazonec2-open-port 4789 --amazonec2-open-port 8080 node-1`

The command should be executed at least 3 times; make sure you change the hostname (the last argument of the command) every time you execute it.

Please note that the AWS security group created here is for demonstration purposes only; you should never expose these ports to the internet.
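
The three invocations can be generated in one loop rather than typed by hand. This is a sketch that only prints the commands (pipe the output to `sh` to execute them); the region, instance type, and security-group name are the same assumptions as in the command above.

```shell
# Print the provisioning command for node-1..node-3 (dry run); pipe the
# output to `sh` to actually create the machines.
OPTS='--driver amazonec2 --amazonec2-region eu-central-1 --amazonec2-instance-type t2.medium --amazonec2-security-group my-security-group'
PORTS='--amazonec2-open-port 9200 --amazonec2-open-port 9300 --amazonec2-open-port 2377 --amazonec2-open-port 7946 --amazonec2-open-port 4789 --amazonec2-open-port 8080'
for i in 1 2 3; do
  echo "docker-machine create $OPTS $PORTS node-$i"
done
```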

# Tuning virtual machines

Before running the stack, minor changes have to be applied on each of the newly created VMs:

1. Edit `/etc/sysctl.conf` by adding `vm.max_map_count=262144`, or simply execute `docker-machine ssh node-1 sudo sysctl -w vm.max_map_count=262144` for each of the nodes.
2. Edit the file `/etc/systemd/system/docker.service.d/10-machine.conf` and modify the default ulimit, because the Elasticsearch cluster requests memory locking (see `bootstrap.memory_lock=true`); thus add the parameter `--default-ulimit memlock=-1` to `ExecStart`. You can simply execute this command on each of the nodes:

`docker-machine ssh node-1 sudo "sed -i '/ExecStart=\/usr\/bin\/dockerd/ s/$/ --default-ulimit memlock=-1/' /etc/systemd/system/docker.service.d/10-machine.conf"`

Please note that the location of that file might differ depending on your Linux distribution. This one works with Ubuntu 16.04 LTS.

After that, execute `systemctl daemon-reload` on each of the nodes and restart the Docker daemon with `systemctl restart docker`.
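
Both tuning steps, plus the daemon reload and restart, can be wrapped into one helper and run against every node. A sketch, assuming the `node-1`..`node-3` hostnames from the provisioning step:

```shell
# Apply sysctl, the memlock ulimit change, and the Docker restart to one node.
tune_node() {
  node="$1"
  docker-machine ssh "$node" sudo sysctl -w vm.max_map_count=262144
  docker-machine ssh "$node" sudo "sed -i '/ExecStart=\/usr\/bin\/dockerd/ s/$/ --default-ulimit memlock=-1/' /etc/systemd/system/docker.service.d/10-machine.conf"
  docker-machine ssh "$node" sudo systemctl daemon-reload
  docker-machine ssh "$node" sudo systemctl restart docker
}

# Usage: for node in node-1 node-2 node-3; do tune_node "$node"; done
```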

# Initiating Docker Swarm cluster

Once the VMs are created you can initiate the Docker Swarm cluster. Export the environment variables belonging to one of the nodes:

`eval $(docker-machine env node-1)`

and initiate the cluster:

`docker swarm init`

Then connect to the other VMs and add them to the cluster accordingly; for test purposes you can add only workers to the cluster. For a production environment please consider an appropriate number of manager nodes to keep the Raft database secure and to provide high availability for your Docker Swarm Cluster.
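
The join step can be scripted with docker-machine as well. A sketch (the node names are assumptions from the provisioning step, and 2377 is the Swarm management port opened earlier):

```shell
# Read the worker join token from the manager, then join each worker node.
join_workers() {
  manager="$1"; shift
  token=$(docker-machine ssh "$manager" docker swarm join-token -q worker)
  manager_ip=$(docker-machine ip "$manager")
  for node in "$@"; do
    docker-machine ssh "$node" docker swarm join --token "$token" "$manager_ip:2377"
  done
}

# Usage: join_workers node-1 node-2 node-3 node-4
```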

Make sure the cluster is working by executing the command:

# Listing the nodes comprising the Docker Swarm cluster

`docker node ls`
```
ID                          HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
iqepxq2w46nprlgm55gomf1ic * node-1     Ready    Active         Leader           18.09.0
1rruyl7x9s9x43rql0x2jibx0   node-2     Ready    Active                          18.09.0
yqoycyrs9j0cb1me7cwr77764   node-3     Ready    Active                          18.09.0
56yflx8vy47oi3upycouzpjpj   node-4     Ready    Active                          18.09.0
```

# Deploying the stack

Then deploy the stack by executing the following command:

`docker stack deploy -c docker-compose.yml elastic`

# Validating the status of the Elasticsearch cluster

After a few minutes you should be able to get the cluster state and the list of nodes comprising the Elasticsearch cluster.

`curl ${IP_ADDRESS}:9200/_cluster/health?pretty`
```json
{
  "cluster_name" : "docker-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 7,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
```
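
If you script the deployment, you may want to block until the cluster reports green rather than polling by hand. A minimal sketch, assuming `IP_ADDRESS` points at any Swarm node as above:

```shell
# Poll _cluster/health until the status is green, giving up after ~5 minutes.
wait_for_green() {
  host="$1"
  for _ in $(seq 1 30); do
    if curl -s "$host:9200/_cluster/health" | grep -q '"status" *: *"green"'; then
      echo green
      return 0
    fi
    sleep 10
  done
  echo timeout
  return 1
}

# Usage: wait_for_green ${IP_ADDRESS}
```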

# Listing the nodes comprising the Elasticsearch Cluster

`curl ${IP_ADDRESS}:9200/_cat/nodes`

```
10.0.0.22 45 56 1 0.03 0.29 0.19 m - fC8ghBT
10.0.0.11 42 55 2 0.46 0.71 0.43 m * xiw-R34
10.0.0.2  39 33 1 0.05 0.53 0.35 - - PPjPo-g
10.0.0.3  49 55 2 0.12 0.42 0.29 - - upCdP2m
10.0.0.5  39 55 2 0.46 0.71 0.43 - - DPCQgwz
10.0.0.12 36 55 2 0.12 0.42 0.29 d - ybt0Jgt
10.0.0.4  38 56 1 0.03 0.29 0.19 - - lmmnRLo
```

Due to the routing mesh built into Docker Swarm, you can choose any of the IP addresses of your nodes. Docker Swarm will route the request to an appropriate coordination node running on the virtual machines which are members of the Docker Swarm cluster.

# Listing the running stack

`docker stack ls`
```
NAME     SERVICES   ORCHESTRATOR
elastic  4          Swarm
```

# Listing the running services and their numbers of replicas

`docker service ls`
```
ID            NAME                  MODE        REPLICAS  IMAGE                            PORTS
yxnunajngch0  elastic_coordination  global      4/4       elasticsearch:6.5.2
ilh21fuh116z  elastic_data          replicated  1/1       elasticsearch:6.5.2
x7zzznodcql9  elastic_master        replicated  2/2       elasticsearch:6.5.2
lbttfnzojei4  elastic_visualizer    replicated  1/1       dockersamples/visualizer:latest  *:8080->8080/tcp
```
Please note that the coordination service has its mode set to `global`, which means we are requesting one Elasticsearch coordination instance running on each of the nodes comprising the Docker Swarm cluster. The other services define their replicas as an integer.

# Listing the tasks of a service and the nodes they run on

`docker service ps elastic_coordination`
```
ID            NAME                                            IMAGE                NODE    DESIRED STATE  CURRENT STATE          ERROR  PORTS
y6lfnbnavy7z  elastic_coordination.yqoycyrs9j0cb1me7cwr77764  elasticsearch:6.5.2  node-3  Running        Running 2 minutes ago         *:9200->9200/tcp
1f1xk71zug9z  elastic_coordination.iqepxq2w46nprlgm55gomf1ic  elasticsearch:6.5.2  node-1  Running        Running 2 minutes ago         *:9200->9200/tcp
fpu2bdmnnfl2  elastic_coordination.56yflx8vy47oi3upycouzpjpj  elasticsearch:6.5.2  node-4  Running        Running 2 minutes ago         *:9200->9200/tcp
l8lozi001l2l  elastic_coordination.1rruyl7x9s9x43rql0x2jibx0  elasticsearch:6.5.2  node-2  Running        Running 2 minutes ago         *:9200->9200/tcp
```

# Scaling the Elasticsearch cluster by adding extra data nodes

Scaling up the Elasticsearch cluster can be achieved by executing the following command:

`docker service scale elastic_data=3`

which means that we are requesting to scale up to 3 data nodes.

`docker service ls --filter name=elastic_data`
```
ID            NAME          MODE        REPLICAS  IMAGE                PORTS
ilh21fuh116z  elastic_data  replicated  3/3       elasticsearch:6.5.2
```

`docker service ps elastic_data`
```
ID            NAME            IMAGE                NODE    DESIRED STATE  CURRENT STATE           ERROR  PORTS
pd0d7xqf3bee  elastic_data.1  elasticsearch:6.5.2  node-3  Running        Running 45 minutes ago
vsp6k75b0132  elastic_data.2  elasticsearch:6.5.2  node-4  Running        Running 8 minutes ago
vmxgv4axarsc  elastic_data.3  elasticsearch:6.5.2  node-2  Running        Running 8 minutes ago
```
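
To confirm the new data nodes actually joined the Elasticsearch cluster, not just Swarm, you can count the `d` entries in `_cat/nodes`. A sketch, assuming `IP_ADDRESS` as before and the default `_cat/nodes` column order (node.role is the 8th column, as in the listing above):

```shell
# Count Elasticsearch nodes whose role column is exactly "d" (dedicated data node).
count_data_nodes() {
  curl -s "$1:9200/_cat/nodes" | awk '$8 == "d"' | wc -l
}

# Usage: count_data_nodes ${IP_ADDRESS}   # expect 3 after the scale operation
```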
docker-compose.yml

```yaml
version: "3.7"
services:
  coordination:
    image: elasticsearch:6.5.2
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "bootstrap.memory_lock=true"
      - "discovery.zen.minimum_master_nodes=2"
      - "discovery.zen.ping.unicast.hosts=master"
      - "node.master=false"
      - "node.data=false"
      - "node.ingest=false"
    networks:
      - esnet
    ports:
      - target: 9200
        published: 9200
        protocol: tcp
        mode: host
    deploy:
      endpoint_mode: dnsrr
      mode: 'global'
      resources:
        limits:
          memory: 1G
  master:
    image: elasticsearch:6.5.2
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "bootstrap.memory_lock=true"
      - "discovery.zen.minimum_master_nodes=2"
      - "discovery.zen.ping.unicast.hosts=master"
      - "node.master=true"
      - "node.data=false"
      - "node.ingest=false"
    networks:
      - esnet
    deploy:
      endpoint_mode: dnsrr
      mode: 'replicated'
      replicas: 2
      resources:
        limits:
          memory: 1G
  data:
    image: elasticsearch:6.5.2
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "bootstrap.memory_lock=true"
      - "discovery.zen.minimum_master_nodes=2"
      - "discovery.zen.ping.unicast.hosts=master"
      - "node.master=false"
      - "node.data=true"
      - "node.ingest=false"
    networks:
      - esnet
    deploy:
      endpoint_mode: dnsrr
      mode: 'replicated'
      replicas: 1
      resources:
        limits:
          memory: 1G
  visualizer:
    image: dockersamples/visualizer
    ports:
      - "8080:8080"
    stop_grace_period: 1m30s
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  esnet:
    driver: overlay
```