This repository is the CloudCoreo stack for kubernetes node clusters.
This stack will add a scalable, highly available, self-healing kubernetes node cluster based on the CloudCoreo leader election cluster here.
Kubernetes allows you to manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops. The architecture requires both a master cluster and a node cluster. This repository provides only the node cluster and expects a master cluster, available here.
The node cluster is quite interesting in the way it works with the master cluster, and a bit of work is necessary to get routing working. Each node must have its own route entry in the route tables for the VPC in which it is contained. As a user of this cluster, you must specify the node service cidr range, but you must ALSO specify the cidr block size which will be used to subdivide that range amongst the nodes.
For instance:
Let's assume you set the KUBE_NODE_SERVICE_IP_CIDRS variable to 10.234.0.0/20.
Your job is to decide the maximum number of containers you want to run simultaneously on each node.
For this, let's decide on 62 as the maximum. Great! That just happens to mean you put in a value for KUBE_NODE_SERVICE_IP_CIDRS_SUBDIVIDER that gets you 64 addresses (62 usable, one for the broadcast address and one for the network address). That value is 26.
KUBE_NODE_SERVICE_IP_CIDRS = 10.234.0.0/20
KUBE_NODE_SERVICE_IP_CIDRS_SUBDIVIDER = 26
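As a quick sanity check on that arithmetic (an illustrative sketch, not part of the stack itself): a /26 prefix leaves 6 host bits, which yields 64 addresses per block, 62 of them usable.

```ruby
host_bits = 32 - 26          # host bits remaining in a /26
addresses = 2**host_bits     # => 64 addresses per block
usable    = addresses - 2    # => 62 (network and broadcast addresses reserved)
```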
So what happens now?
As nodes come up they create a table of usable values based on the two variables above. In our case there are 64 possible cidrs:
10.234.0.0/26
10.234.0.64/26
10.234.0.128/26
10.234.0.192/26
10.234.1.0/26
10.234.1.64/26
...
...
10.234.15.0/26
10.234.15.64/26
10.234.15.128/26
10.234.15.192/26
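The same table can be generated with a few lines of Ruby. This is only an illustrative sketch of the subdivision logic (the variable names mirror the stack's settings, but the script itself is not part of the stack):

```ruby
require 'ipaddr'

parent    = IPAddr.new('10.234.0.0/20')  # KUBE_NODE_SERVICE_IP_CIDRS
subdivide = 26                           # KUBE_NODE_SERVICE_IP_CIDRS_SUBDIVIDER

step  = 2**(32 - subdivide)              # 64 addresses per /26 block
count = 2**(subdivide - 20)              # 64 possible /26 blocks inside the /20
base  = parent.to_i                      # 10.234.0.0 as an integer

table = (0...count).map do |i|
  addr   = base + i * step
  octets = [24, 16, 8, 0].map { |s| (addr >> s) & 255 }.join('.')
  "#{octets}/#{subdivide}"
end

puts table.first(3)  # 10.234.0.0/26, 10.234.0.64/26, 10.234.0.128/26
puts table.last      # 10.234.15.192/26
```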
Each node will check the existing kubernetes nodes via the kubectl command and find an unused network block. It will then insert itself into the proper routing tables. The 'used' network blocks are determined by the labels set on the nodes. A rough sketch of that selection is shown below.
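Continuing the sketch above (again illustrative rather than the stack's actual boot script; the label key, environment variables, and cidr encoding are made up for the example), a node could pick a free block and publish its route roughly like this:

```ruby
node_name      = `hostname`.strip
route_table_id = ENV['ROUTE_TABLE_ID']   # placeholder: however the stack passes it in
instance_id    = ENV['INSTANCE_ID']      # placeholder

# 'table' is the list of candidate cidrs built above; blocks already claimed by
# other nodes show up as labels (label values cannot contain '/', hence the dash)
labels = `kubectl get nodes --show-labels`
free   = table.find { |cidr| !labels.include?(cidr.sub('/', '-')) }
abort 'no unused node cidr left in the range' unless free

# claim the block with a label, then publish a route for it in the VPC route table
system('kubectl', 'label', 'nodes', node_name, "node-cidr=#{free.sub('/', '-')}")
system('aws', 'ec2', 'create-route',
       '--route-table-id', route_table_id,
       '--destination-cidr-block', free,
       '--instance-id', instance_id)
```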
- description: the ami to launch for the cluster - default is Amazon Linux AMI 2015.03 (HVM), SSD Volume Type
- description: kubernetes version
- default: 1.1.4
- description: the name of the VPC
- default: kube-dev
- description: the cloudcoreo defined vpc to add this cluster to
- default: 10.1.0.0/16
- description: the cloudcoreo name of the private vpc subnets. eg private-us-west-2c
- default: kube-dev-private-us-west-1
- description: the private subnet in which the cluster should be added
- default: dev-private-route
- description: the zone in which the internal elb dns entry should be maintained
- default: dev.aws.lcloud.com
- description: Node IP CIDR block - NOTE - This MUST be different from the cidr specified for the kubernetes master service cidrs
- default: 10.2.1.0/24
- description: the cidr block size used to subdivide the node service range amongst the nodes
- default: 26
- description: the name of the master cluster - this will become your dns record too
- default: kube-master
- description: the name of the node cluster - this will become your dns record too
- default: kube-node
- description: a tcp port the ELB will check every so often - this defines health and ASG termination
- default: 10250
- description: ports to allow traffic on directly to the instances
- default: 1..65535
- description: cidrs that are allowed to access the instances directly
- default: 10.0.0.0/8
- description: the image size to launch
- default: t2.small
- description: the minimum number of instances to launch
- default: 3
- description: the maximum number of instances to launch
- default: 6
- description: the time in seconds to allow for an instance to boot before checking health
- default: 600
- description: the time in seconds between rolling instances during an upgrade
- default: 300
- description: the timezone the servers should come up in
- default: America/Los_Angeles
- description: the ssh key to associate with the instance(s) - blank will disable ssh
- default: cloudops
- description: kube proxy log file
- default: /var/log/kube-proxy.log
- description: kubelet log file
- default: /var/log/kube-kublet.log
- description: ports to pass through the elb to the kubernetes node cluster instances
- default:
[
  {
      :elb_protocol => 'tcp',
      :elb_port => 10250,
      :to_protocol => 'tcp',
      :to_port => 10250
  }
]
- description: if you have more than one VPC with the same CIDR, and it is not under CloudCoreo control, we need a way to find it. Enter some unique tags that exist on the VPC you want us to find. ['env=production','Name=prod-vpc']
- description: if you have more than one route table or set of route tables, and they are not under CloudCoreo control, we need a way to find them. Enter some unique tags that exist on the route tables you want us to find. i.e. ['Name=my-private-routetable','env=dev']
- description: Usually the private-routetable association is enough for us to find the subnets you need, but if you have more than one subnet, we may need a way to find them. Unique tags are a great way; enter them here. i.e. ['Name=my-private-subnet']
- description: leave this blank - we are using ELB for health checks only
- description: leave this blank - we are using ELB for health checks only
- Container Management
- Kubernetes
- High Availability
- Master
- Cluster
- Servers


