Developer Quick Start
Welcome to the Sylva tutorial! This guide will walk you through the basics of deploying a management cluster using Sylva, tailored for developers and testers.
This guide describes possible setups using two different CAPI providers:
- CAPD: the CAPI Docker provider, which runs cluster nodes as Docker containers.
- CAPM3: the CAPI Metal3 provider, used here with libvirt and sushy-emulator to mimic baremetal servers that are provisioned with Metal3 and Ironic, the same way real baremetal servers are provisioned. In this setup, the bootstrap, management and workload clusters are all installed on a single node. For a single-node management cluster plus a single-node workload cluster, one VM or baremetal node is enough, but it must have a good amount of resources; a quick host check is sketched just below. For a deep dive on emulated baremetal, follow the README in the libvirt-metal project repository.
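For the CAPM3 path, the libvirt VMs that emulate baremetal servers typically need hardware virtualization support on the node (or nested virtualization if that node is itself a VM). A minimal check on a Linux host:
# Counts CPUs exposing VMX (Intel) or SVM (AMD) virtualization extensions;
# 0 means KVM-accelerated VMs cannot run on this host.
grep -Ec '(vmx|svm)' /proc/cpuinfo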
Prerequisites
Before you begin, ensure your system meets the following requirements:
VM Minimal Requirements
- CAPD:
  - Flavor: Any Linux distribution
  - Disk size: 100Gi
  - Memory: 32Gi
  - CPU: 8 vCPUs
- CAPM3:
  - Flavor: Any Linux distribution
  - Disk size: 128Gi
  - Memory: 64Gi
  - CPU: 16 vCPUs
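To quickly compare the VM you are working on against these figures, standard Linux tools are enough, for example:
nproc        # number of vCPUs
free -g      # total memory in GiB
df -h /      # free disk space on the root filesystem (adjust the path if your data lives elsewhere)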
With this setup, you can use the rke2-capm3-virt and kubeadm-capm3-virt templates to create a single-node management cluster with a single-node workload cluster. You will have to patch those templates to reduce them to single-node clusters, for example:
cluster:
  ...
  control_plane_replicas: 1
  machine_deployments:
    md0:
      replicas: 0
Refer to base-capm3-virt for more details on the defaults.
Alternatively, if resources allow, a fully highly available cluster can also be created on a single VM/node.
Emulated baremetal is also leveraged in Sylva CI to deploy and upgrade fully highly available clusters, with 4 nodes for the management cluster and 4 nodes for the workload cluster. For that setup, 256GiB of memory and 64 vCPUs are recommended.
Common Setup
- Docker: Install Docker by following the Docker Installation Guide.
- pip: Install pip following the instructions on pip's official site.
- PyYAML: Install PyYAML either from your Linux distribution's package manager or via pip. Visit PyYAML for more details.
- Yamllint: Install Yamllint through your distribution's package manager or pip. More information can be found on Yamllint's website.
- yq: Install yq; installation steps can be found on the official GitHub page (illustrative install commands for these tools follow this list).
- Optional: Set up proxies as per the troubleshooting guide if necessary.
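As an illustration, on a Debian/Ubuntu host the tools listed above could be installed roughly as follows; the package names and the yq download URL are assumptions to adapt to your distribution and to the upstream release pages:
# Illustrative only: adjust package names for your distribution
sudo apt-get update
sudo apt-get install -y docker.io python3-pip
pip install pyyaml yamllint
# yq ships as a single binary on its GitHub releases page
sudo curl -fsSL -o /usr/local/bin/yq \
  https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
sudo chmod +x /usr/local/bin/yq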
Clone sylva-core
Clone the sylva-core repository to get started with your setup:
git clone https://gitlab.com/sylva-projects/sylva-core.git
cd sylva-core
Prepare your deployment values
Copy the default environment values to a new directory and modify them as per your requirements:
For CAPD:
cp -r environment-values/kubeadm-capd/ environment-values/my-kubeadm-capd
vim environment-values/my-kubeadm-capd/values.yaml
Set up the Cluster Virtual IP (CAPD)
For the CAPD setup, configure the virtual IP for your cluster by checking and setting up the Docker network used by kind:
# Check if Docker network "kind" exists and create if it doesn't
if ! docker network inspect kind > /dev/null 2>&1; then
  echo "Docker network 'kind' doesn't exist. Creating the network..."
  docker network create kind
fi
# Export Docker network "kind" address
KIND_PREFIX=$(docker network inspect kind -f '{{ (index .IPAM.Config 0).Subnet }}')
CLUSTER_IP=$(echo $KIND_PREFIX | awk -F"." '{print $1"."$2"."$3".100"}')
echo $CLUSTER_IP
yq -i ".cluster_virtual_ip = \"$CLUSTER_IP\"" environment-values/my-kubeadm-capd/values.yaml
Verify that the virtual cluster IP has been set correctly:
yq e ".cluster_virtual_ip" environment-values/my-kubeadm-capd/values.yaml
For CAPM3:
cp -r environment-values/rke2-capm3-virt/ environment-values/my-rke2-capm3-virt
vim environment-values/my-rke2-capm3-virt/values.yaml
Optional: Proxies Setup for Your Management Cluster
Configure proxy settings for your management cluster if needed:
proxies:
  http_proxy: "your_http_proxy"
  https_proxy: "your_https_proxy"
  no_proxy: "your_no_proxy_list"
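Separately from these cluster values, the shell in which you run the bootstrap script may itself need proxy variables exported so that images and Git repositories can be fetched; a minimal sketch with placeholder values (see the troubleshooting guide mentioned in the prerequisites for details):
export http_proxy="your_http_proxy"
export https_proxy="your_https_proxy"
export no_proxy="your_no_proxy_list"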
Optional: Docker Hub Registry Mirrors Setup
Set up Docker Hub registry mirrors to avoid rate limits on image pulls:
# Configure containerd registry mirrors as per the official containerd documentation
registry_mirrors:
  hosts_config:
    docker.io:
      - mirror_url: "http://your.mirror/docker"
- If needed, refer to the related containerd documentation.
- See charts/sylva-units/values.yaml for a more detailed example.
Deploy
With your values configured, proceed to deploy using the bootstrap script, pointing it at the environment-values directory you prepared (environment-values/my-kubeadm-capd for CAPD, or environment-values/my-rke2-capm3-virt for CAPM3):
./bootstrap.sh environment-values/my-kubeadm-capd
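The bootstrap takes a while; from a second terminal you can follow the reconciliation of the Sylva units with the Flux CLI. A minimal sketch, assuming your kubeconfig points at the cluster being deployed:
# Watch Flux kustomizations and Helm releases converge across all namespaces
flux get kustomizations --all-namespaces
flux get helmreleases --all-namespaces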
After deployment
Adding a Workload Cluster
To add a workload cluster, copy the environment values, set the workload cluster virtual IP, and apply them:
cp -r environment-values/workload-clusters/kubeadm-capd/ environment-values/workload-clusters/my-workload-kubeadm-capd/
KIND_PREFIX=$(docker network inspect kind -f '{{ (index .IPAM.Config 0).Subnet }}')
WORKLOAD_CLUSTER_IP=$(echo $KIND_PREFIX | awk -F"." '{print $1"."$2"."$3".200"}') # use .200 (or any other address that Docker would not assign) from the local kind subnet
echo $WORKLOAD_CLUSTER_IP
yq -i ".cluster_virtual_ip = \"$WORKLOAD_CLUSTER_IP\"" environment-values/workload-clusters/my-workload-kubeadm-capd/values.yaml
./apply-workload-cluster.sh environment-values/workload-clusters/my-workload-kubeadm-capd
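Once the script has run, you can check that the new workload cluster object exists and becomes ready; a quick sketch, assuming your kubeconfig points at the management cluster (the cluster and its namespace are both named after the environment-values directory, as in the removal steps below):
kubectl get clusters.cluster.x-k8s.io -n my-workload-kubeadm-capd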
Removing a Workload Cluster
To remove a workload cluster, suspend all operations and delete the cluster:
export WORKLOAD_CLUSTER=my-workload-kubeadm-capd
flux suspend -n $WORKLOAD_CLUSTER --all kustomization
flux suspend -n $WORKLOAD_CLUSTER --all helmrelease
kubectl delete -n $WORKLOAD_CLUSTER clusters.cluster.x-k8s.io $WORKLOAD_CLUSTER
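# Cluster deletion is asynchronous; optionally wait for the Cluster object to be gone
# before removing the namespace (a sketch using kubectl wait):
kubectl wait --for=delete -n $WORKLOAD_CLUSTER clusters.cluster.x-k8s.io/$WORKLOAD_CLUSTER --timeout=15m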
kubectl delete namespace $WORKLOAD_CLUSTER
For further assistance, refer to the detailed documentation or the community support channels.