Setup Production-Ready Kubernetes on Bare Metal with Kubespray
Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS and Kubernetes cluster configuration management tasks.
Prerequisites
Hardware
- 5 nodes (virtual or physical machines), each with:
  - Memory: 8 GB
  - CPU: 4 cores
  - Hard disk: 120 GB available
Software
Kubernetes nodes
- Ubuntu 18.04
- Python
- SSH Server
- Privileged user
Kubespray machine
- Ansible 2.7.8+ (not 2.8.x)
- Jinja 2.9+
Node networking requirements
- Internet access to download Docker images and install software
- IPv4 forwarding must be enabled
- To avoid issues during deployment, disable the firewall (a short sketch for both points follows this list)
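On Ubuntu 18.04, a minimal sketch for the last two points; ufw is assumed as the firewall here, so adapt this if you run something else:
sudo sysctl -w net.ipv4.ip_forward=1                        # enable IPv4 forwarding immediately
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf  # persist it across reboots
sudo ufw disable                                             # turn off the default Ubuntu firewall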
Preparing the nodes
Run the following instructions on all nodes.
Install Python
Ansible needs Python installed on all the machines. Ubuntu 18.04 already ships with Python 3, so you only need to create a symbolic link.
sudo ln -s /usr/bin/python3 /usr/bin/python
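A quick sanity check that the link points at Python 3 (the exact version depends on your Ubuntu patch level):
python --version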
Disable Swap
sudo swapoff -a
sudo sed -i '/ swap /d' /etc/fstab
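You can confirm swap is fully disabled; swapon --show should print nothing and free -h should report 0B of swap:
swapon --show
free -h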
Setup SSH using key-based authentication
Generate an RSA key pair by executing this command on the Ansible controller machine.
ssh-keygen -t rsa
Copy over the public key to all nodes.
ssh-copy-id ubuntu@<node-ip-address>
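Verify that key-based login works before continuing; ubuntu is the example remote user used throughout this guide, so substitute your own user and node address:
ssh ubuntu@<node-ip-address> hostname
It should print the node's hostname without asking for a password.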
Setup Ansible Controller machine
Setup kubespray
Clone the official repository.
git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray
Install dependencies from requirements.txt
sudo pip install -r requirements.txt
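It is worth confirming that the installed versions satisfy the constraints listed earlier (Ansible 2.7.x but not 2.8.x, Jinja 2.9+):
ansible --version
python -c 'import jinja2; print(jinja2.__version__)'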
Set Remote User for Ansible
Add the following section to the ansible.cfg file.
vim ansible.cfg
...
[defaults]
...
remote_user=ubuntu
Create Inventory
cp -rfp inventory/sample inventory/prod
where prod is the custom configuration name. Replace it with whatever name you would like to assign to the current cluster.
Create the inventory using the inventory generator.
declare -a IPS=(10.211.55.42 10.211.55.43 10.211.55.44 10.211.55.45 10.211.55.46)
CONFIG_FILE=inventory/prod/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
Once it runs, you will see an inventory file that looks like the one below:
all:
  hosts:
    node1:
      ansible_host: 10.211.55.42
      ip: 10.211.55.42
      access_ip: 10.211.55.42
    node2:
      ansible_host: 10.211.55.43
      ip: 10.211.55.43
      access_ip: 10.211.55.43
    node3:
      ansible_host: 10.211.55.44
      ip: 10.211.55.44
      access_ip: 10.211.55.44
    node4:
      ansible_host: 10.211.55.45
      ip: 10.211.55.45
      access_ip: 10.211.55.45
    node5:
      ansible_host: 10.211.55.46
      ip: 10.211.55.46
      access_ip: 10.211.55.46
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
        node4:
        node5:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
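Before running any playbooks, it is worth checking that Ansible can reach every node over SSH with the remote_user configured earlier (this assumes the key pair generated above is in its default location):
ansible -i inventory/prod/hosts.yml all -m ping
Each node should answer with pong.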
Tuning Kubernetes nodes and setting up haproxy for the Kubernetes masters
Download the role for tuning the Kubernetes nodes.
cd roles/
git clone https://github.com/lapee79/ansible-role-ubuntu-tuning-for-k8s.git
mv ansible-role-ubuntu-tuning-for-k8s ubuntu-tuning-for-k8s
cd ..
Create a playbook for tuning.
cat << EOF | tee nodes-tuning.yml
---
- hosts: all
  become: true
  roles:
    - name: ubuntu-tuning-for-k8s
      tags:
        - ubuntu
        - kubernetes
EOF
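Before running it against the nodes, you can optionally validate the playbook; --syntax-check only parses it and makes no changes:
ansible-playbook -i inventory/prod/hosts.yml nodes-tuning.yml --syntax-check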
Run nodes-tuning playbook
ansible-playbook -i inventory/prod/hosts.yml nodes-tuning.yml
Download the role for setting up haproxy for the Kubernetes masters.
cd roles/
git clone https://github.com/lapee79/ansible-role-haproxy-for-k8s-masters.git
mv ansible-role-haproxy-for-k8s-masters haproxy-for-k8s-masters
Set the IP address and port for haproxy in group_vars/all/all.yml in the inventory directory.
cd ..
vim inventory/prod/group_vars/all/all.yml
...
## External LB example config
## apiserver_loadbalancer_domain_name: "elb.some.domain"
apiserver_loadbalancer_domain_name: "10.211.55.101"
loadbalancer_apiserver:
  address: 10.211.55.101
  port: 443
Edit the inventory file to configure haproxy
all:
  hosts:
    k8s-master-01:
      ansible_host: 10.211.55.42
      ip: 10.211.55.42
      access_ip: 10.211.55.42
    k8s-master-02:
      ansible_host: 10.211.55.43
      ip: 10.211.55.43
      access_ip: 10.211.55.43
    k8s-master-03:
      ansible_host: 10.211.55.44
      ip: 10.211.55.44
      access_ip: 10.211.55.44
    k8s-worker-01:
      ansible_host: 10.211.55.45
      ip: 10.211.55.45
      access_ip: 10.211.55.45
    k8s-worker-02:
      ansible_host: 10.211.55.46
      ip: 10.211.55.46
      access_ip: 10.211.55.46
  children:
    kube-master:
      hosts:
        k8s-master-01:
          vrrp_instance_state: MASTER
          vrrp_instance_priority: 101
        k8s-master-02:
          vrrp_instance_state: BACKUP
          vrrp_instance_priority: 100
        k8s-master-03:
          vrrp_instance_state: BACKUP
          vrrp_instance_priority: 99
      vars:
        vrrp_interface: enp0s5
        vrrp_instance_virtual_router_id: 51
    kube-node:
      hosts:
        k8s-worker-01:
        k8s-worker-02:
    etcd:
      hosts:
        k8s-master-01:
        k8s-master-02:
        k8s-master-03:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
Create a playbook for Kubernetes masters’ haproxy.
cat << EOF | tee setup-haproxy-for-k8s-masters.yml
---
- hosts: kube-master
  become: true
  roles:
    - name: haproxy-for-k8s-masters
      tags:
        - haproxy
        - keepalived
        - kubernetes
EOF
Run setup-haproxy-for-k8s-masters playbook
ansible-playbook -i inventory/prod/hosts.yml setup-haproxy-for-k8s-masters.yml
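Assuming the role installs haproxy and keepalived as systemd services (check the role's README for details), you can confirm they are running on the masters and that the virtual IP configured above answers:
ansible -i inventory/prod/hosts.yml kube-master -b -m shell -a 'systemctl is-active haproxy keepalived'
ping -c 3 10.211.55.101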
Customize Kubernetes cluster configurations
There are config files in your inventory directory’s group_vars. You may want to modify some configurations.
vim inventory/prod/group_vars/k8s-cluster/k8s-cluster.yml
...
# Choose network plugin (cilium, calico, contiv, weave or flannel. Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico
...
# configure arp_ignore and arp_announce to avoid answering ARP queries from kube-ipvs0 interface
# must be set to true for MetalLB to work
kube_proxy_strict_arp: true
...
# Make a copy of kubeconfig on the host that runs Ansible in {{ inventory_dir }}/artifacts
kubeconfig_localhost: true
# Download kubectl onto the host that runs Ansible in {{ bin_dir }}
# kubectl_localhost: false
vim inventory/prod/group_vars/k8s-cluster/addons.yml
...
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: false
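The same addons.yml exposes other optional components; for example, Helm and metrics-server can be toggled in the same way (the variable names below exist in recent kubespray releases, but verify them against your copy of addons.yml):
# Helm deployment
helm_enabled: true
# Metrics Server deployment
metrics_server_enabled: true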
Set up Kubernetes cluster with kubespray
Run the ansible-playbook to bootstrap your Kubernetes cluster:
ansible-playbook -i inventory/prod/hosts.yml cluster.yml -b -v \
--private-key=~/.ssh/private_key
Verify Kubernetes cluster
You should verify the Kubernetes cluster using these commands.
kubectl --kubeconfig=inventory/prod/artifacts/admin.conf cluster-info
kubectl --kubeconfig=inventory/prod/artifacts/admin.conf get nodes
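If you prefer not to pass --kubeconfig on every call, you can copy the generated admin.conf to kubectl's default location (adjust or skip this if you already manage ~/.kube/config):
mkdir -p ~/.kube
cp inventory/prod/artifacts/admin.conf ~/.kube/config
kubectl get nodes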
Scale Kubernetes cluster
Adding nodes
- Add the new worker node to your inventory in the appropriate group (an example is shown after the command below)
- Run the ansible-playbook command, replacing cluster.yml with scale.yml:
ansible-playbook -i inventory/prod/hosts.yml scale.yml -b -v \
--private-key=~/.ssh/private_key
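As an illustration, a hypothetical third worker (the name k8s-worker-03 and address 10.211.55.47 are made-up values) would be added to both the hosts section and the kube-node group before running scale.yml:
all:
  hosts:
    ...
    k8s-worker-03:
      ansible_host: 10.211.55.47
      ip: 10.211.55.47
      access_ip: 10.211.55.47
  children:
    ...
    kube-node:
      hosts:
        ...
        k8s-worker-03: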
Remove nodes
You may want to remove master, worker, or etcd nodes from your existing cluster. This can be done by running the remove-node.yml playbook. First, all specified nodes are drained, then Kubernetes services are stopped and certificates are deleted on them, and finally kubectl is used to delete these nodes from the cluster. This can be combined with the add-node function.
Use --extra-vars "node=nodename,nodename2" to select the node(s) you want to delete, as in the example below.
ansible-playbook -i inventory/prod/hosts.yml remove-node.yml -b -v \
--private-key=~/.ssh/private_key \
--extra-vars "node=nodename,nodename2"
Upgrading Kubernetes
Note
If you have deprecated API objects and plan to upgrade to a Kubernetes version newer than 1.16, you can keep serving the deprecated APIs.
https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
vim roles/kubernetes/node/defaults/main.yaml
...
# Uncomment if you need to enable deprecated runtimes
kube_api_runtime_config:
  - apps/v1beta1=true
  - apps/v1beta2=true
  - extensions/v1beta1/daemonsets=true
  - extensions/v1beta1/deployments=true
  - extensions/v1beta1/replicasets=true
  - extensions/v1beta1/networkpolicies=true
  - extensions/v1beta1/podsecuritypolicies=true
Graceful upgrade
Kubespray supports cordoning, draining, and uncordoning of nodes when performing a cluster upgrade. There is a separate playbook used for this purpose. It is important to note that upgrade-cluster.yml can only be used for upgrading an existing cluster; that means there must be at least one kube-master already deployed.
ansible-playbook upgrade-cluster.yml -b -i inventory/prod/hosts.yml -e kube_version=v1.16.3
After a successful upgrade, the Server Version should be updated:
kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T23:42:50Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
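If you enabled the deprecated API runtime config shown earlier, you can also confirm those API groups are still served after the upgrade:
kubectl --kubeconfig=inventory/prod/artifacts/admin.conf api-versions | grep v1beta1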