Developer Quick Start

Welcome to the Sylva tutorial! This guide will walk you through the basics of deploying a management cluster using Sylva, tailored for developers and testers.

Here we describe two possible setups, each using a different CAPI provider:

  • CAPD: the CAPI Docker provider.
  • CAPM3: the CAPI Metal3 provider. It uses libvirt and sushy-emulator to mimic a bare-metal server that can be provisioned with Metal3 using Ironic, the same way real bare-metal servers are provisioned. In this setup the bootstrap, management, and workload clusters are all installed on a single node. For a single-node management cluster plus a single-node workload cluster, one VM or bare-metal node is enough, but it must have a generous amount of resources. For a deep dive on emulated bare metal, follow the README in the libvirt-metal project repository.

Pre-requisites

Before you begin, ensure your system meets the following requirements (a quick way to check them from the shell is shown after the list):

VM Minimal Requirements

  • Flavor: any Linux distribution
  • Disk: 100 Gi
  • Memory: 32 Gi
  • CPU: 8 vCPUs
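
If you are unsure whether your VM is large enough, the sketch below is one way to check with standard Linux tools (the root filesystem is assumed to hold the deployment; adjust paths for your layout):

# Quick resource sanity check (standard Linux tools, root filesystem assumed)
nproc                                          # should report at least 8 CPUs
free -g | awk '/^Mem:/ {print $2 " GiB RAM"}'  # should be around 32 or more
df -h /                                        # around 100 Gi should be available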

Common Setup

  • Docker: Install Docker from Docker Installation Guide.
  • pip: Install pip following the instructions on pip's official site.
  • PyYAML: Install PyYAML either from your Linux distribution's package manager or via pip. Visit PyYAML for more details.
  • Yamllint: Install Yamllint through your distribution's package manager or pip. More information can be found on Yamllint's website.
  • yq: Install yq; installation steps can be found on the official GitHub page (example install commands are sketched after this list).
  • Optional: Set up proxies as per the troubleshooting guide if necessary.
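
As an illustration, on a typical Linux host the Python tooling and yq could be installed roughly as follows; the package names and the yq download URL are assumptions, so follow the linked guides for your distribution:

# Example only: install pip-based tools (adjust for your distribution)
python3 -m pip install --user pyyaml yamllint

# Example only: fetch a yq release binary; verify the current instructions on yq's GitHub page
sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
sudo chmod +x /usr/local/bin/yq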

Clone sylva-core

Clone the sylva-core repository to get started with your setup:

git clone https://gitlab.com/sylva-projects/sylva-core.git
cd sylva-core

Prepare your deployment values

Copy the default environment values to a new directory and modify them as per your requirements:

cp -r environment-values/kubeadm-capd/ environment-values/my-kubeadm-capd
vim environment-values/my-kubeadm-capd/values.yaml
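
Before editing, it can help to review the defaults that ship with the example environment, for instance:

# Inspect the default values before customizing them
yq e '.' environment-values/my-kubeadm-capd/values.yaml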

Setup the Cluster Virtual IP

Configure the virtual IP for your cluster by checking and setting up the Docker network:

# Check if the Docker network "kind" exists and create it if it doesn't
if ! docker network inspect kind > /dev/null 2>&1; then
  echo "Docker network 'kind' doesn't exist. Creating the network..."
  docker network create kind
fi

# Export Docker network "kind" address
KIND_PREFIX=$(docker network inspect kind -f '{{ (index .IPAM.Config 0).Subnet }}')
CLUSTER_IP=$(echo $KIND_PREFIX | awk -F"." '{print $1"."$2"."$3".100"}')
echo $CLUSTER_IP
yq -i ".cluster_virtual_ip = \"$CLUSTER_IP\"" environment-values/my-kubeadm-capd/values.yaml

Verify that the virtual cluster IP has been set correctly:

yq e ".cluster_virtual_ip" environment-values/my-kubeadm-capd/values.yaml

Optional: Proxies Setup for Your Management Cluster

Configure proxy settings for your management cluster if needed:

proxies:
  http_proxy: "your_http_proxy"
  https_proxy: "your_https_proxy"
  no_proxy: "your_no_proxy_list"

Optional: Docker Hub Registry Mirrors Setup

Set up Docker Hub registry mirrors to avoid rate limits on image pulls:

# Configure containerd registry mirrors as per the official containerd documentation
registry_mirrors:
  hosts_config:
    docker.io:
      - mirror_url: "http://your.mirror/docker"
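
Optionally, you can check that the mirror endpoint responds before relying on it. The command below reuses the placeholder URL from the example above and assumes the mirror exposes the standard registry API path /v2/; the exact path depends on how your mirror is set up:

# Print the HTTP status returned by the mirror's registry API endpoint
curl -sS -o /dev/null -w '%{http_code}\n' http://your.mirror/docker/v2/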

Deploy

With your values configured, deploy using the bootstrap script:

./bootstrap.sh environment-values/my-kubeadm-capd
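
Bootstrapping takes a while. Since Sylva components are reconciled by Flux, one way to follow progress is with the Flux CLI, assuming your kubeconfig points at the bootstrap or management cluster:

# Watch Flux reconciliation status across all namespaces
flux get kustomizations -A
flux get helmreleases -A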

After deployment

Adding a Workload Cluster

To add a workload cluster, copy the environment values, set the workload cluster virtual IP, and apply them:

cp -r environment-values/workload-clusters/kubeadm-capd/ environment-values/workload-clusters/my-workload-kubeadm-capd/

KIND_PREFIX=$(docker network inspect kind -f '{{ (index .IPAM.Config 0).Subnet }}')
WORKLOAD_CLUSTER_IP=$(echo $KIND_PREFIX | awk -F"." '{print $1"."$2"."$3".200"}') # use .200 (or any other address that Docker would not assign) from the local kind subnet
echo $WORKLOAD_CLUSTER_IP
yq -i ".cluster_virtual_ip = \"$WORKLOAD_CLUSTER_IP\"" environment-values/workload-clusters/my-workload-kubeadm-capd/values.yaml

./apply-workload-cluster.sh environment-values/workload-clusters/my-workload-kubeadm-capd
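
You can then check that the workload cluster is coming up; as the removal commands below show, its Cluster API resources live in a namespace named after the workload cluster:

# Check the Cluster API objects for the new workload cluster
kubectl get clusters.cluster.x-k8s.io -n my-workload-kubeadm-capd
kubectl get machines.cluster.x-k8s.io -n my-workload-kubeadm-capd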

Removing a Workload Cluster

To remove a workload cluster, suspend all operations and delete the cluster:

export WORKLOAD_CLUSTER=my-workload-kubeadm-capd
flux suspend -n $WORKLOAD_CLUSTER --all kustomization
flux suspend -n $WORKLOAD_CLUSTER --all helmrelease
kubectl delete -n $WORKLOAD_CLUSTER clusters.cluster.x-k8s.io $WORKLOAD_CLUSTER
kubectl delete namespace $WORKLOAD_CLUSTER

For further assistance, refer to the detailed documentation or the community support channels.