For a full list of variables you can configure, please read Configurable Parameters in Kubespray.
By default, Kubespray sets up Kubernetes with two replicas of CoreDNS, which cannot both be scheduled on a single-node cluster. While this doesn't break anything or cause any critical errors, it can create noise in your logs, so we recommend the following:
In your `inventory/<CLUSTER_NAME>/hosts.yaml`, add the `dns_min_replicas` variable like so:

```yaml
k8s-cluster:
  vars:
    dns_min_replicas: 1
```
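After the playbook finishes, you can confirm that only one CoreDNS replica is running (this assumes the deployment keeps Kubespray's default name, `coredns`):

```sh
# Expect READY 1/1 on a single-node cluster
kubectl -n kube-system get deployment coredns
```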
In `inventory/<CLUSTER_NAME>/group_vars/k8s-cluster/k8s-cluster.yml`, set:

```yaml
kube_version: v1.18.10
```
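Once the cluster is up, a quick sanity check is to list the nodes; the VERSION column should match `kube_version`:

```sh
kubectl get nodes
```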
In your `inventory/<CLUSTER_NAME>/hosts.yaml`, add the `docker_version` and `calico_version` variables like so:

```yaml
k8s-cluster:
  vars:
    docker_version: latest
    calico_version: "v3.16.4"
```
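To sanity-check the pinned versions after deployment (assuming Calico's DaemonSet keeps its default name, `calico-node`):

```sh
# On the node: the Docker engine version Kubespray installed
docker version --format '{{.Server.Version}}'

# The calico-node image tag should match calico_version
kubectl -n kube-system get daemonset calico-node \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```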
Pass `-e 'slate_enable_ingress=false'` to your `ansible-playbook` command in SLATE Cluster Registration.
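For illustration, the flag is appended to the registration command like any other extra variable; the playbook path below is a placeholder, not the actual SLATE command:

```sh
# Hypothetical invocation -- substitute the real playbook and inventory
# from the SLATE Cluster Registration instructions
ansible-playbook -i inventory/<CLUSTER_NAME>/hosts.yaml <REGISTRATION_PLAYBOOK>.yml \
  -e 'slate_enable_ingress=false'
```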
In `inventory/<CLUSTER_NAME>/group_vars/k8s-cluster/addons.yml`, change:

```yaml
cert_manager_enabled: false
```

to

```yaml
cert_manager_enabled: true
```
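After re-running Kubespray, you can confirm the addon deployed (Kubespray installs cert-manager into the `cert-manager` namespace by default):

```sh
kubectl -n cert-manager get pods
```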
If your box has multiple NICs, you will want to specify which NIC Calico uses for BGP peering with other nodes; otherwise Calico may wedge itself on cluster reboots. For more information, read IP autodetection methods.
In `inventory/<CLUSTER_NAME>/group_vars/k8s-cluster/k8s-net-calico.yml`, set:

```yaml
calico_ip_auto_method: "cidr={{ ip }}/32" # Defaults to the address specified in `ip:` in hosts.yaml
```
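To verify which autodetection method Calico actually received, you can read the `IP_AUTODETECTION_METHOD` environment variable off the `calico-node` DaemonSet (the name and container index assume Kubespray's defaults):

```sh
kubectl -n kube-system get daemonset calico-node \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="IP_AUTODETECTION_METHOD")].value}'
```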
Kubespray defaults to configuring `kube-proxy` with IPVS instead of iptables, as it is more performant. If you would like to use iptables instead, in `inventory/<CLUSTER_NAME>/group_vars/k8s-cluster/k8s-cluster.yml` change:

```yaml
kube_proxy_mode: ipvs
```

to

```yaml
kube_proxy_mode: iptables
```
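Since Kubespray deploys the cluster with kubeadm, the active mode can usually be read from the generated kube-proxy ConfigMap:

```sh
# Expect `mode: iptables` after the change
kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'
```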
These setups are quite complex and we recommend pinging #installation on the SLATE Slack before proceeding. Both of these require disabling MetalLB.
In your `inventory/<CLUSTER_NAME>/hosts.yaml`, add the `access_ip` variable so your node block(s) look like the following:

```yaml
hosts:
  node1:
    access_ip: <ACCESS_IP>    # the public IP bound one-to-one to _this_ host
    ansible_host: <ACCESS_IP> # the IP on which the host is accessible via SSH
    ip: <HOST_IP>             # the internal IP of the host to bind Kubernetes cluster services to
```
You will have to manually port-forward every service running on your cluster in order for it to be publicly accessible. To start, port 6443 must be port-forwarded for `kubectl` access.
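How you forward ports depends entirely on your NAT device; as a sketch, on a Linux gateway doing iptables-based NAT it might look like this (the interface name and addresses are placeholders):

```sh
# Forward TCP 6443 arriving on the public interface to the node's internal IP
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 6443 \
  -j DNAT --to-destination <HOST_IP>:6443
# Masquerade the forwarded traffic so replies return through the gateway
iptables -t nat -A POSTROUTING -p tcp -d <HOST_IP> --dport 6443 -j MASQUERADE
```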
In your `inventory/<CLUSTER_NAME>/hosts.yaml`, add the `supplementary_addresses_in_ssl_keys` variable like so:

```yaml
k8s-cluster:
  vars:
    supplementary_addresses_in_ssl_keys: ['<NAT_IP>']
```
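You can later confirm the NAT address made it into the API server's certificate by inspecting its SANs:

```sh
# <NAT_IP> should appear under X509v3 Subject Alternative Name
echo | openssl s_client -connect <NAT_IP>:6443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
```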
Then add the following flag to your SLATE registration playbook command: `-e 'cluster_access_ip=<NAT_IP>:6443'`.
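As a quick end-to-end check that the forwarded, certificate-listed address works (assuming your kubeconfig credentials are already in place):

```sh
kubectl --server=https://<NAT_IP>:6443 get nodes
```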