
Commit 34ef028 (parent: fec8b91)

Fixed bad client and data node controllers.

3 files changed, 32 insertions(+), 24 deletions(-)

README.md

Lines changed: 24 additions & 22 deletions
@@ -72,19 +72,20 @@ $ kubectl logs elasticsearch-master-i0x8d elasticsearch-master
 log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
 log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
 log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
-[2015-06-05 10:59:41,319][WARN ][common.jna ] Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out. Increase RLIMIT_MEMLOCK (ulimit).
-[2015-06-05 10:59:41,457][INFO ][node ] [Rancor] version[1.5.2], pid[1], build[62ff986/2015-04-27T09:21:06Z]
-[2015-06-05 10:59:41,457][INFO ][node ] [Rancor] initializing ...
-[2015-06-05 10:59:41,515][INFO ][plugins ] [Rancor] loaded [cloud-kubernetes], sites []
-[2015-06-05 10:59:44,602][INFO ][node ] [Rancor] initialized
-[2015-06-05 10:59:44,603][INFO ][node ] [Rancor] starting ...
-[2015-06-05 10:59:44,679][INFO ][transport ] [Rancor] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.244.74.2:9300]}
-[2015-06-05 10:59:44,724][INFO ][discovery ] [Rancor] elasticsearch-k8s/bch3o_HyR9aTAXwQLmwZrg
-[2015-06-05 10:59:55,023][INFO ][cluster.service ] [Rancor] new_master [Rancor][bch3o_HyR9aTAXwQLmwZrg][elasticsearch-master-i0x8d][inet[/10.244.74.2:9300]]{data=false, master=true}, reason: zen-disco-join (elected_as_master)
-[2015-06-05 10:59:55,035][INFO ][node ] [Rancor] started
-[2015-06-05 10:59:55,071][INFO ][gateway ] [Rancor] recovered [0] indices into cluster_state
-[2015-06-05 11:04:56,930][INFO ][cluster.service ] [Rancor] added {[Mary Zero][EXkSeIMFTpazlO9fER8o7A][elasticsearch-lb-z6vd4][inet[/10.244.74.3:9300]]{data=false, master=false},}, reason: zen-disco-receive(join from node[[Mary Zero][EXkSeIMFTpazlO9fER8o7A][elasticsearch-lb-z6vd4][inet[/10.244.74.3:9300]]{data=false, master=false}])
-[2015-06-05 11:06:32,179][INFO ][cluster.service ] [Rancor] added {[Y'Garon][pHDxPFYOQAGSyEJu1apjOg][elasticsearch-data-vqkyz][inet[/10.244.85.3:9300]]{master=false},}, reason: zen-disco-receive(join from node[[Y'Garon][pHDxPFYOQAGSyEJu1apjOg][elasticsearch-data-vqkyz][inet[/10.244.85.3:9300]]{master=false}])
+[2015-06-24 23:59:14,009][WARN ][bootstrap ] Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out. Increase RLIMIT_MEMLOCK (ulimit).
+[2015-06-24 23:59:14,108][INFO ][node ] [Red Ghost] version[1.6.0], pid[1], build[cdd3ac4/2015-06-09T13:36:34Z]
+[2015-06-24 23:59:14,111][INFO ][node ] [Red Ghost] initializing ...
+[2015-06-24 23:59:14,160][INFO ][plugins ] [Red Ghost] loaded [cloud-kubernetes], sites []
+[2015-06-24 23:59:14,219][INFO ][env ] [Red Ghost] using [1] data paths, mounts [[/data (/dev/sda9)]], net usable_space [14.3gb], net total_space [15.5gb], types [ext4]
+[2015-06-24 23:59:17,584][INFO ][node ] [Red Ghost] initialized
+[2015-06-24 23:59:17,584][INFO ][node ] [Red Ghost] starting ...
+[2015-06-24 23:59:17,762][INFO ][transport ] [Red Ghost] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.244.51.3:9300]}
+[2015-06-24 23:59:17,774][INFO ][discovery ] [Red Ghost] elasticsearch-k8s/aZnBnidATKKxQ7G2LBabng
+[2015-06-24 23:59:23,208][INFO ][cluster.service ] [Red Ghost] new_master [Red Ghost][aZnBnidATKKxQ7G2LBabng][elasticsearch-master-70p0s][inet[/10.244.51.3:9300]]{data=false, master=true}, reason: zen-disco-join (elected_as_master)
+[2015-06-24 23:59:23,217][INFO ][node ] [Red Ghost] started
+[2015-06-24 23:59:23,253][INFO ][gateway ] [Red Ghost] recovered [0] indices into cluster_state
+[2015-06-25 00:07:39,631][INFO ][cluster.service ] [Red Ghost] added {[Termagaira][GFoS_4c0Rj2R25q1y6qNGw][elasticsearch-lb-usmg3][inet[/10.244.51.4:9300]]{data=false, master=false},}, reason: zen-disco-receive(join from node[[Termagaira][GFoS_4c0Rj2R25q1y6qNGw][elasticsearch-lb-usmg3][inet[/10.244.51.4:9300]]{data=false, master=false}])
+[2015-06-25 00:08:23,421][INFO ][cluster.service ] [Red Ghost] added {[Juggernaut][m-8dg7yuTw-Bfmna3TNarA][elasticsearch-data-56u6c][inet[/10.244.51.5:9300]]{master=false},}, reason: zen-disco-receive(join from node[[Juggernaut][m-8dg7yuTw-Bfmna3TNarA][elasticsearch-data-56u6c][inet[/10.244.51.5:9300]]{master=false}])
 ```
 
 As you can see, the cluster is up and running. Easy, wasn't it?
@@ -111,27 +112,27 @@ You should see something like this:
 
 ```
 $ kubectl get service elasticsearch
-NAME            LABELS    SELECTOR                                      IP(S)           PORT(S)
-elasticsearch   <none>    component=elasticsearch,role=load-balancer    10.100.10.240   9200/TCP
+NAME            LABELS                                        SELECTOR                                      IP(S)           PORT(S)
+elasticsearch   component=elasticsearch,role=load-balancer    component=elasticsearch,role=load-balancer    10.100.235.91   9200/TCP
 ```
 
 From any host on your cluster (that's running `kube-proxy`):
 
 ```
-curl http://10.100.10.240:9200
+curl http://10.100.235.91:9200
 ```
 
 This should be what you see:
 
 ```json
 {
   "status" : 200,
-  "name" : "Mary Zero",
+  "name" : "Termagaira",
   "cluster_name" : "elasticsearch-k8s",
   "version" : {
-    "number" : "1.5.2",
-    "build_hash" : "62ff9868b4c8a0c45860bebb259e21980778ab1c",
-    "build_timestamp" : "2015-04-27T09:21:06Z",
+    "number" : "1.6.0",
+    "build_hash" : "cdd3ac4dde4f69524ec0a14de3828cb95bbb86d0",
+    "build_timestamp" : "2015-06-09T13:36:34Z",
     "build_snapshot" : false,
     "lucene_version" : "4.10.4"
   },
@@ -142,7 +143,7 @@ This should be what you see:
 Or if you want to see cluster information:
 
 ```
-curl http://10.100.10.240:9200/_cluster/health?pretty
+curl http://10.100.235.91:9200/_cluster/health?pretty
 ```
 
 This should be what you see:
@@ -159,6 +160,7 @@ This should be what you see:
   "relocating_shards" : 0,
   "initializing_shards" : 0,
   "unassigned_shards" : 0,
-  "number_of_pending_tasks" : 0
+  "number_of_pending_tasks" : 0,
+  "number_of_in_flight_fetch" : 0
 }
 ```

elasticsearch-data-controller.yaml

Lines changed: 4 additions & 1 deletion
@@ -7,7 +7,6 @@ metadata:
     component: elasticsearch
     role: data
 spec:
-  serviceAccount: elasticsearch
   replicas: 1
   selector:
     component: elasticsearch
@@ -18,9 +17,13 @@ spec:
         component: elasticsearch
         role: data
     spec:
+      serviceAccount: elasticsearch
       containers:
       - name: elasticsearch-data
         image: pires/elasticsearch:data
+        env:
+        - name: KUBERNETES_TRUST_CERT
+          value: "true"
         ports:
         - containerPort: 9300
           name: transport
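
For context, here is a sketch of how the fixed data-node controller reads as a whole after this commit. The hunks above show only fragments; the surrounding fields (apiVersion, kind, metadata.name, template) are assumed from a typical ReplicationController manifest of this era and may differ from the actual file:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-data      # assumed name, not shown in the diff
  labels:
    component: elasticsearch
    role: data
spec:
  replicas: 1
  selector:
    component: elasticsearch
    role: data
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      # the fix: serviceAccount belongs on the pod template's spec, where it
      # applies to the pods themselves; on the controller's top-level spec it
      # had no effect
      serviceAccount: elasticsearch
      containers:
      - name: elasticsearch-data
        image: pires/elasticsearch:data
        env:
        # presumably lets the cloud-kubernetes plugin trust the API server's
        # certificate when discovering peer nodes
        - name: KUBERNETES_TRUST_CERT
          value: "true"
        ports:
        - containerPort: 9300
          name: transport
```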

elasticsearch-lb-controller.yaml

Lines changed: 4 additions & 1 deletion
@@ -7,7 +7,6 @@ metadata:
     component: elasticsearch
     role: load-balancer
 spec:
-  serviceAccount: elasticsearch
   replicas: 1
   selector:
     component: elasticsearch
@@ -18,9 +17,13 @@ spec:
         component: elasticsearch
         role: load-balancer
     spec:
+      serviceAccount: elasticsearch
       containers:
       - name: elasticsearch-lb
         image: pires/elasticsearch:lb
+        env:
+        - name: KUBERNETES_TRUST_CERT
+          value: "true"
 ports:
 - containerPort: 9200
   name: http
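
The load-balancer controller receives the identical fix. To roll both changes out to a running cluster, the two controllers would be recreated along these lines (a sketch; the replication-controller names `elasticsearch-data` and `elasticsearch-lb` are assumed from the pod labels above):

```
# delete the old controllers and their pods, then create the fixed ones
kubectl delete rc elasticsearch-data elasticsearch-lb
kubectl create -f elasticsearch-data-controller.yaml
kubectl create -f elasticsearch-lb-controller.yaml

# watch the replacement pods come up
kubectl get pods -l component=elasticsearch
```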
