Status of the project: Stable

The kubeadm-playbook Ansible project's code is on GitHub

Quick explanation

https://medium.com/@re.search.it.eng/batteries-included-kubernetes-for-everyone-bccf9b8558dd

What is it:

For 3 years we have been gathering best-practice guidelines and growing this project for Kubernetes cluster installation + addons. It glues together: pure kubeadm, official Helm charts for various addons, and fine-tunings from docs and best practices.

All based purely on kubeadm and official Helm charts.
It tries to bring together most (if not all) of the steps to get from a freshly installed Linux machine to a working k8s cluster.
Its vision is to find and integrate the best tools out there (while following the KISS principle).

Why

Going beyond minikube and making your own (usually on-prem) k8s cluster (with the usual addons installed) is still too hard or needlessly complex. Kubeadm is so capable now that complex projects don't make sense.
We felt that the missing piece was handling the steps before and after the cluster installation itself, to get an initial (but reasonable) platform up.

What makes it different:

What is planned

  1. Authentication via LDAP (KeyCloak is planned); integrate it in the dashboard, grafana, etc.
  2. Move from heapster to metrics server (once it is stable)
  3. Logging stack (e.g. EFK - currently the helm charts are not fully stable) (PRs are welcome :)

Since when

Started years back. Battle tested on CentOS/RHEL 7.2 through 7.6 and Ubuntu 16.04, 18.04, 19.10, 20.04 (both with overlay2 and automatic docker_setup).
Actively used on a daily basis and tested with k8s versions 1.7 through 1.19.

Targets/pros&cons

Kubeadm drastically simplifies the installation, so for BYO setups (VMs, desktops, bare metal), complex projects like kubespray/kops are no longer required. Major difference from other projects: it uses kubeadm for all activities, and Kubernetes runs in containers.
The project is for those who want to create & recreate k8s clusters using the official method (kubeadm), with all production features:

This project aims to get a fully working environment in a matter of minutes on any hardware: bare metal, VMs (vSphere, VirtualBox), etc.

What it does not do:

PROS:

CONS/future versions:

Prerequisites:

This playbook will:

NOTE: It does support http_proxy configuration cases. Simply set your proxy in group_vars/all (a sketch follows below).
This has been tested with RHEL & CentOS 7.3-7.6, Ubuntu 16.04, and Kubernetes v1.6.1 - v1.13.4.
In general, keep the kube* tools at the same minor version as the desired k8s cluster (e.g. to install k8s v1.7 one must also use kubeadm 1.7 - a kubeadm limitation).
FYI, newer kube* tools usually support a cluster 1 minor version older (e.g. kube[adm/ctl/let] 1.8.* accepts a kubernetes cluster 1.7.*).
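
A minimal sketch of such a proxy override (the file name, variable names and proxy endpoint below are hypothetical - check group_vars/all of your checkout for the real ones):

## group_vars/all/proxy.yaml (hypothetical)
http_proxy: "http://proxy.corp.example.com:3128"
https_proxy: "http://proxy.corp.example.com:3128"
no_proxy: "127.0.0.1,localhost,.corp.example.com"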

If for any reason anyone needs to relax RBAC, they can run: kubectl create -f https://github.com/ReSearchITEng/kubeadm-playbook/blob/master/allow-all-all-rbac.yml

How To Use:

Use the right release/branch

Use the release/branch that fits your k8s version needs. While master may have additional features, it's as tested as the releases.
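
For example, to pin your clone (see the next section) to a release instead of master (release naming varies, so list the tags/branches first):

git tag --list && git branch -r   # pick the release matching your target k8s version
git checkout <release-tag-or-branch>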

Full cluster (re)installation (reset + install)

git clone https://github.com/ReSearchITEng/kubeadm-playbook.git
cd kubeadm-playbook/
cp hosts.example hosts
vi hosts <add hosts>
# Set vars in group_vars
vi group_vars/all/* <modify vars as needed>
ansible-playbook -i hosts site.yml [--skip-tags "docker,prepull_images,kubelet"] [-f1]

If there are any issues, you may want to run only some of the steps by choosing the appropriate tags to run. Read site.yml. Here are also some explanations of the important steps:
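
For instance, a sketch of re-running a single phase (the authoritative tag names are the ones defined in site.yml; post_deploy is described there as the overlay network + charts phase):

ansible-playbook -i hosts site.yml --tags post_deploy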

Add nodes:

Add nodes in 2 steps: reset node + install node:
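
A hypothetical sketch of those 2 steps (the host name is a placeholder, and the tag names are assumed from the task list in site.yml; the primary-master stays in the play so the join can be coordinated):

ansible-playbook -i hosts site.yml --tags reset   --limit "node07.example.com"
ansible-playbook -i hosts site.yml --tags install --limit "primary-master,node07.example.com"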

To remove a specific node (it drains the node and afterwards runs kubeadm reset on it, etc.)
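
Illustratively, the removal boils down to a standard drain + reset (the playbook automates this; the node name is a placeholder):

kubectl drain node07.example.com --ignore-daemonsets --delete-local-data
kubectl delete node node07.example.com
# and on the node itself:
kubeadm reset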

Other activities possible:

There are other operations possible against the cluster; look at site.yml and decide. A few more examples of useful tags:
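
For instance, the cluster_sanity tag (also used in the dashboard check section below) re-runs just the sanity checks:

ansible-playbook -i hosts site.yml --tags cluster_sanity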

Playbooks

site.yml -> holds all tasks, including reset, install, post_deploy (overlay network, charts install), sanity;
This way, site.yml should be used for the full install of the cluster (where all steps are required). One may use site.yml for maintenance too, but then always use the tags for the desired actions (on top of keeping only the primary-master and the machines targeted by those actions).

The playbooks below are subsets of site.yml:

Simplified config

group_vars/all/temp.yaml

containerd:
  ## if FS used by containerd is zfs (e.g. Ubuntu 20.04+)
  snapshotter: zfs
  ## if using the playbook to also install docker/containerd, the storage driver needs to be overridden too. It still assumes /var/lib/docker and /var/lib/containerd/io.containerd.snapshotter.v1.zfs are on ZFS.
  storage_driver: zfs

## Override version
KUBERNETES_VERSION_CUSTOM: "1.26.0"
CORP_DNS_DOMAIN: "example.com"
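
These overrides take effect on the next run, e.g.:

ansible-playbook -i hosts site.yml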

Check the installation of dashboard

The output should have already presented the required info (or run again: ansible-playbook -i hosts site.yml --tags cluster_sanity). The Dashboard is set up on the master host and, additionally (if configured), at something like: http://dashboard.cloud.corp.example.com (depending on the selected domain entry, and provided the wildcard DNS *.k8s.cloud.corp.example.com was properly set up to point to the master machine's public IP).

e.g. curl -SLk 'http://k8s-master.example.com/#!/overview?namespace=_all' | grep browsehappy

The Dashboard also listens on the primary hostname, port 443 (or similar, if the ingress helm params were changed).
E.g., if your primary-master is vm01.com, browse: https://vm01.com:443/
Note: The http version (http://vm01.com:80/) will ask for a token.
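
A hypothetical sketch for fetching such a token (the exact secret name depends on the dashboard's service account - adjust the grep pattern accordingly):

kubectl -n kube-system get secret | grep dashboard-token | awk '{print $1}' | xargs kubectl -n kube-system describe secret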

For testing the persistent volume setup, one may use/tune the files in the demo folder.

kubectl exec -it demo-pod -- bash -c "echo Hello TEST >> /usr/share/nginx/html/index.html "

and check the http://pv.cloud.corp.example.com page.
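
To verify the same from the shell (assuming the demo ingress host above resolves to the cluster):

curl -s http://pv.cloud.corp.example.com | grep 'Hello TEST'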

Load balancing

For LB, one may also want to check:

DEMO:

Installation demo of k8s 1.16 on Ubuntu 18.04 (single machine): kubeadm ansible playbook install demo - asciinema video.

Vagrant

To use Vagrant on one or multiple machines with a bridged interface (public_network and ports accessible), all machines must have the bridged interface as their 1st interface (so k8s processes will bind to it automatically). For this, use the script: vagrant_bridged_demo.sh.

Steps to start Vagrant deployment:

  1. edit ./Vagrantfile and set the desired number of machines, sizing, etc.
  2. run:
    ./vagrant_bridged_demo.sh --full [ --bridged_adapter <desired host interface|auto>  ] # bridged_adapter defaults to ip route | grep default | head -1 
    

    After preparations (edit group_vars/all, etc.), run the ansible installation normally.

Using vagrant while keeping NAT as the 1st interface (usually with only one machine) was not tested, and the Vagrantfile may require some changes. There was no focus on this option as it's more complicated to use afterwards: one must export the ports manually to access ingresses like the dashboard from the browser, and it usually does not support more than one machine.

kubeadm-ha

Starting with 1.14/1.15, kubeadm supports multi-master (aka HA) setups easily (out of the box), so no special setup is needed. (Our playbook supports master HA also for older v1.11-v1.13, thanks to projects like: https://github.com/mbert/kubeadm2ha, https://github.com/sv01a/ansible-kubeadm-ha-cluster and/or github.com/cookeem/kubeadm-ha.)

How does it compare to other projects:

Kubeadm -> the official k8s installer

With kubeadm-playbook we focus only on kubeadm. Pros:

Cons:

Other k8s installers

Similar projects for k8s installs on physical/Vagrant/VMs (BYO, on-premises) that you may want to check; all of the below, as opposed to this project, do not use kubeadm

Bonus goodies:

other_tools/ holds scripts like a k8s CLI setup which easily installs kubectx, krew, kubeval, etc.
The docs folder holds info on how to secure the cluster using operators in an elegant manner (along with Aqua's set of security tests).

PRs are accepted and welcome.

PS: work inspired from: @sjenning - and the master HA part from @mbert. PRs & suggestions from: @carlosedp - Thanks.

Our story: https://medium.com/@re.search.it.eng/batteries-included-kubernetes-for-everyone-bccf9b8558dd

License: Public Domain