
Kubernetes Management

Kubernetes has established itself as the de facto standard for container orchestration, enabling the automation of deploying, scaling, and managing containerized applications at scale.

Our Kubernetes service in the new Cloud Services portal offers you a powerful and managed Kubernetes environment. This means we take care of the complex setup, maintenance, and management of the Kubernetes Control Plane (Master Nodes), allowing you to fully concentrate on the development and operation of your applications.

This service is seamlessly integrated into our portal and utilizes the robust infrastructure of OpenStack and Gardener for the provisioning of the Worker Nodes. Furthermore, benefit from easy integration with other portal services such as our S3-compatible object storage and the central backup service.

The service is aimed at developers, DevOps teams, and system administrators who are seeking a scalable, reliable, and easy-to-manage platform for their containerized workloads.

This documentation guides you through all necessary steps: from the prerequisites and the creation and configuration of your first cluster to daily administration and scaling. We explain the specific features and options available to you in our Cloud Portal.

Prerequisites

The following prerequisites must be met before you can create a Kubernetes cluster in the Cloud Services Portal.

  • A valid account in the Cloud Services Portal.
  • Required permissions/roles in the portal to create Kubernetes clusters.
  • Sufficient quota (vCPU/vRAM/Storage) for creation.
  • Optionally, local tools for working with the cluster (kubectl, k9s ...); a quick check is shown below.
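If you plan to access the cluster from your own machine, a quick check that the tools are installed might look like this:

# Print the locally installed client versions (no cluster connection required)
kubectl version --client
k9s version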

Getting Started

Overview

We offer you two options for creating your Kubernetes cluster in our Cloud Services Portal:

  • As a new project
  • In an existing project

Creating a Cluster

To create a cluster, a project with sufficient quota is mandatory. The step-by-step guide assumes that the wizard creates a standalone project; for an existing project, the project name and size selection is simply omitted.

Step by Step

Setup Cluster

The wizard is straightforward to use, and the form fields for creating a cluster are largely self-explanatory.

  • Cluster Name -> Must not be longer than 10 characters.

  • Purpose

    • testing -> No monitoring, no HA of the Kubernetes Controller components.
    • production -> Monitoring and HA of the Controller components.
  • Image -> Here you can define the desired Worker Image version. Changing this later is not supported.

  • Kube-Version -> Here you can define the required Kubernetes version. This can only be upgraded to a higher version later, not downgraded.

  • Flavor -> Here you select the OpenStack flavor used for the Worker Nodes.

  • Zones -> Here the distribution of Worker Nodes across our different AZs is defined.

  • Worker Count -> Here you can define the minimum and maximum number of Workers. If load peaks are detected, the service automatically provisions additional Workers for the cluster within these limits.

Note

Please note that this process can take up to 10 minutes depending on the number of workers.

Once the cluster is created, it appears in the list:

Cluster Healthy

Accessing the Cluster

Once the Kubernetes cluster is created and in a healthy status, you can generate both an admin configuration and a viewer configuration.

Arrow pointing to Download-Kubeconfig

Kube-admin-Form

Warning

Please note that these access credentials are only valid for 24 hours for security reasons, regardless of the number of days specified in the field!

Using the Kubernetes Config with Kubectl

After downloading, you will receive a YAML file that looks like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1akNDQWs2Z0F3SUJBZ0lRSlRoRUhBWjlrMzJMZDlnQWg5bFhGREFOQmdrcWhraUc5dzBCQVFzRkFEQU4KTVFzd0NRWURWUVFERXdKallUQWVGdzB5TlRBME1UUXhNRFExTVRsYUZ3MHpOVEEwTVRReE1EUTJNVGxhTUEweApDekFKQmdOVkJBTVRBbU5oTUlJQm9qQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FZOEFNSUlCaWdLQ0FZRUE0aEU1CmpQUHZFNDFiUnNuUHV5a0J2WkthR3ovTWJKa3Ewb0liQzlVVlVYTW1EV0hqeWpjeHdLK0haZ2NSNG1VY0VseXQKVVk1dSt1UGF1SlpFQjk3a3hCRTY0U2h4WFdDK0VReDJzNHZBdFhjVkxadFdyTGJ4cjNrVnY4ZjUxY280SWpoOApJOGZmVk51cDVvbmpia0ZqWTQwQjg2c0gzeUg3OUoyWVEyWC9zUTZrajhDVEVxUFRRZGcvNGZ3Ymh5WUpRQVpqCjQwbHZWTTVCcndPVm1KMDEzZkkyNVlmME15dDJuUjRlMGo2TVlVZEhxclFScjh2bStXc1BzbHN0M1YzdE02YWcKQ0p3NU40VXlrSDZRUVNRcmdLS0ZmNmMrbVJNT0VIWS9WZGVack5SWWtIQmVZY2x0OVIxTkMzK3hjK2ZaM3ZyeAp5ZVFsS3NzV0x1Ly8yTElVWVlCbE5FWDhadzhuRW9Id3Q5OEZkQTN4ZHRHdXJkcGJ6MHFNZVVRK1VtblZPWTRLCldpTlR6WHkvVlpLVFFmWkFER1dJdVZxUFAxVnh0Z2I5d2tscjhRd28xVFljV25aMFRML3JLa3gvemNYK1pGbW8KbXVseUg2OTl6ZnZpdFBkWWdLT0c5MVl0Q2grM1F2bmlxbEFIUy9BZjRYL2F4Z2dsbWYwZURUaXVXMFhwQWdNQgpBQUdqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lCcGpBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXCkJCU3hIYm9seTV4NFJXbS9aeW1ibENndllrL3d4ekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBWUVBS0FIL0RBRlUKZ0Jibm5zV3JRVWppMVYxajVWNW5TWHM3K2JCT2MxSVdqT3VjSDVhc3Y3OGlHaC9WV21pblFwS0ZPNTAwWFREMgp3T29ubVN1dE9sLzFpa0E1M29Mb25TdU9JWG5vdXYzVFc2SUo1WExsT3k4QU1FY0FrQy9KN2NrdS9CZUJ1QTMvClJ2eU51U1RpN2p5S1l4WnhXZDRzYnNESkZ1Qi9ycFBhSmtEcTZYRTFCTWJOeUd2UTduZDJucXhINDBCaWtiZ2MKOEpuTjFRb2N3b25WSFR2YWtkZGtUa1E2emgraDlMcUJBdy9UR0pvdmJRM1o1bFJJUDBqdGFXVnVrNlovK3k2RApIZWdDZkMraWlOOTVpalVrY25nMkZMVE9wSm5BdHloRnNtUlFKSjM3cGZhcWxrMXNPeHoyUUIxaGpMOHRveHNTCmJwbkZURW5tclQwc29uQjhUZXgzSzlrT2FrcGVuQXZ3cG1OQXhCd2RodHpyRFdUSnAzUmx4S2F0NENLR2U0R0kKVjFrSEVkczN6RjFJK0VjOW5obXVxcEViNEdXbzNsVTJMVllwSnBscnd3QnhFbHU1TzN4b0IxM1hTME96TittTgpmdWpNemZNakkzazBpODd5cVFlRm5kcVQ4ZjJhVFFieVovYWNLdVNGTDRzV0xQTWtmTDFoQWVBOAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://api.kc-6iizv1z.ew5cmpb.gardener.ewcs.ch
  name: garden-8586188f435346b4af787015734ef963--kc-6iizv1z-external
  • The name corresponds to the Cluster Name + Role.

On your Client

You can now either copy this YAML file to /home/[user]/.kube/config

Or define an export that references this YAML:

export KUBECONFIG=/tmp/kubeconfig-kc-6iizv1z-admin.yml
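Whichever variant you choose, it is a good idea to restrict the file permissions of the kubeconfig and to confirm which context kubectl will use; a minimal sketch, using the example file name from above:

# Make the credentials readable only by your own user
chmod 600 /tmp/kubeconfig-kc-6iizv1z-admin.yml

# Show the context kubectl is currently pointing at
kubectl config current-context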

Example

Now you can easily query the cluster using the command-line tool kubectl:

root@kube-trial:~/# kubectl get nodes
NAME                                                       STATUS   ROLES    AGE   VERSION
shoot--ew5cmpb--kc-6iizv1z-default-worker-z1-5c69c-mpnzr   Ready    <none>   84m   v1.31.3

In this case, the cluster has only one Worker Node.
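If you need more detail about your Workers, for example the internal IP, OS image, and kubelet version, the wide output and the node description can help:

# Additional columns: internal/external IP, OS image, kernel and kubelet version
kubectl get nodes -o wide

# Full details for a single node (name taken from the output above)
kubectl describe node shoot--ew5cmpb--kc-6iizv1z-default-worker-z1-5c69c-mpnzr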

Editing the Cluster

You can access the Update Form via the dropdown menu:

Cluster-Update

The following options are available to you here:

  • Update the Kubernetes Version
  • Update the Purpose from Testing to Production

Note

Depending on the history, the cluster might already be in production status. Please note that reverting to testing status is not technically possible.

The same applies to the Kubernetes version of the cluster; if you have already selected the latest version, no newer one will be offered. We will always strive to provide you with the latest versions of Kubernetes.

Warning

Please note that changes to the Purpose or Version trigger a major cluster operation and thus require time.
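If you want to follow such an operation from the command line, two generic kubectl calls can help, for example:

# Show the client version and the server (control plane) version currently running
kubectl version

# Watch the Worker Nodes while they are replaced during the operation
kubectl get nodes -w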

Worker Nodes

Your Workers can be managed via the Worker-Groups dropdown.

worker-group

  • Existing Worker-Groups can only have their count edited.
  • If you want a distribution across multiple AZs, you must create a new Worker-Group and optionally delete the old one.
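To see how your existing Workers are currently spread across the availability zones, you can display the zone as an extra column (assuming your nodes carry the well-known topology.kubernetes.io/zone label, which is the Kubernetes standard for cloud-provisioned nodes):

kubectl get nodes -L topology.kubernetes.io/zone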

Creating a New Worker-Group

Clicking the "Create worker group" button opens the following form for you to define a new Worker-Group.

worker-group-form

  • Here, similar to the initial form, you can define additional Worker-Groups for your Kubernetes cluster.

Note

Here too, the project's quota must be observed; it cannot be exceeded, and attempting to do so leads to an error when creating the Worker Nodes. In that case, please check the Notifications tab.

Worker Groups in Kubectl

root@kube-trial:~/.kube# kubectl get nodes
NAME                                                       STATUS   ROLES    AGE    VERSION
shoot--ew5cmpb--kc-6iizv1z-default-worker-z1-5c69c-mpnzr   Ready    <none>   127m   v1.31.3
shoot--ew5cmpb--kc-6iizv1z-test-z1-5cc9c-wb2zf             Ready    <none>   2m2s   v1.31.3
  • The Worker-Groups are distinguished by their names.
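Besides the node name, Gardener-managed Workers usually also carry a pool label; assuming that label is present on your nodes, you can show the group of each Worker as an extra column:

# Show which worker pool (Worker-Group) each node belongs to
kubectl get nodes -L worker.gardener.cloud/pool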

Deleting a Worker-Group

You can delete existing Worker-Groups down to a single remaining group, so that the cluster stays functional.

Warning

Depending on your deployments and their worker-node affinity rules, it may not be possible to delete a particular Worker-Group, as doing so would compromise the integrity of your deployment.

Deleting a group is initiated via the "Delete it" button.

Note

Please note that this process also requires time, as the Nodes need to be "drained" and Pods must be moved.
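Before deleting a group, it can be useful to check which Pods are currently running on its nodes; a minimal sketch using the node name from the example above:

# List all Pods scheduled on a specific node, across all namespaces
kubectl get pods -A -o wide --field-selector spec.nodeName=shoot--ew5cmpb--kc-6iizv1z-test-z1-5cc9c-wb2zf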

Deleting the Cluster

Deleting the cluster is just as simple as deleting Worker-Groups; the entire cluster can be deleted via the dropdown on the Cluster Management page.

Prerequisites for deleting a cluster:

Warning

Clusters on which deployments are still running cannot be deleted!
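To check from the command line whether workloads are still running before you attempt the deletion, something like the following can help (whether system workloads in namespaces such as kube-system block the deletion depends on the portal's checks):

# Overview of the most common workload types across all namespaces
kubectl get deployments,statefulsets,daemonsets -A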