
Setting Up a Development Kubernetes Cluster

Set up a single-node Kubernetes Cluster (minikube) as well as other tools (Kubectl, Helm, KEDA, etc.) on your local machine.

April 05, 2024


By: Abhilaksh Singh Reen

Table of Contents

Docker

Minikube

Kubectl

Verify Kubectl is connected to Minikube

Adding a Test Deployment

Helm

Testing Helm

Metrics Server

K6

KEDA

Conclusion

In this blog post, we'll set up a local Kubernetes development environment. We'll install the following tools to get it up and running:

1) Docker: for building our Docker images and for running Minikube.

2) Minikube: a single-node Kubernetes cluster. Since we only have one node (our local machine) to work with, we'll run the K8s cluster using Minikube.

3) Kubectl: a command-line tool that communicates with the kube-apiserver running inside the cluster, allowing us to easily manage the cluster's resources.

4) Helm: a tool for easily deploying complicated sets of resources to a K8s cluster.

5) Metrics Server: the Kubernetes Metrics API does not come preinstalled in a Minikube cluster, so we'll install it ourselves.

6) K6: a load-testing utility made by Grafana Labs that lets us hit our application with thousands of simultaneous requests from our local machine in a predictable way.

7) KEDA (Kubernetes Event-Driven Autoscaling): an autoscaler that can scale a particular resource in our cluster based on the number of events waiting to be processed.

Docker

The first four tools, namely Docker, Minikube, Kubectl, and Helm, are relatively simple to install: just follow the official installation instructions, with little need for debugging.

To install Docker, we can follow the instructions at the official installation page.

Make sure to also follow the post-installation steps to be able to use Docker without the magic word sudo.
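
On Linux, these post-installation steps boil down to creating a docker group and adding your user to it (commands from the official post-installation guide; log out and back in for the group change to take effect):

sudo groupadd docker
sudo usermod -aG docker $USER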

Minikube

Setting up Minikube is straightforward and the installation instructions can be found on the Minikube Start Page.
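
For reference, on a Linux x86-64 machine the install and first start look roughly like this (adjust for your OS and driver; the start page lists the exact commands for each platform):

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start --driver=docker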

Kubectl

Quite similar to Minikube, Kubectl can be easily installed following the instructions available here.
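
As a rough sketch, on Linux x86-64 the official instructions boil down to:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client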

Verify Kubectl is connected to Minikube

Run the following command to get the current Kubernetes Cluster that Kubectl is set up to use:

kubectl config current-context

You should see the output minikube.

In case Kubectl is configured to use another context, we can change it to minikube by running:

kubectl config use-context minikube

The output should say that the context has switched to minikube.

Adding a Test Deployment

Let's test our Minikube Cluster by deploying an Nginx image.

Inside your working directory, create a file called nginx.yaml.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

We've defined a deployment that uses the nginx Docker image and a service that we can use to access our deployment's pods.

Create the deployment and the service:

kubectl apply --filename nginx.yaml

Run the following command to get the pods in the cluster:

kubectl get pods

You should see a pod whose name starts with nginx-deployment.

Let's port-forward port 5000 of our local machine to port 80 of this pod. The pod name's random suffix will be different on your machine, so substitute the name from your kubectl get pods output.

kubectl port-forward pod/nginx-deployment-7c5ddbdf54-pjfzq 5000:80

Now, head to localhost:5000 in your web browser; you should see the "Welcome to nginx!" page.

You can also run the following command in a terminal to retrieve the HTML output.

curl -i localhost:5000
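
You can also port-forward through the service instead of the pod, which saves you from looking up the generated pod name:

kubectl port-forward service/nginx-service 5000:80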

Great, we've successfully deployed an application on our new K8s Cluster. Since we won't be needing this application anymore, let's clean up the deployment and the service.

kubectl delete --filename nginx.yaml

Run the following command to make sure that the deployment and the service have indeed been deleted.

kubectl get all

Helm

Just like Docker, Minikube, and Kubectl, Helm is simple to install. Head to the Helm Installation page and follow the instructions.
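
On Linux and macOS, the installer script from that page is usually the quickest route:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh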

Testing Helm

Let's install Redis using a Helm Chart provided by Bitnami.

It is generally a good practice to separate your K8s resources into namespaces. We'll create a new namespace for our test.

kubectl create namespace helm-test

Add the Bitnami Helm Charts repo:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

Install the bitnami/redis Helm Chart in the created namespace.

helm upgrade --install --namespace helm-test redis-test bitnami/redis --set auth.enabled=false

Let's see what was created:

kubectl --namespace helm-test get all

You should see a Redis "master" pod and one or more "replica" pods.

Once again, we'll clean up the resources we created for testing.

kubectl --namespace helm-test delete all --all
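
Note that this deletes the resources, but Helm still tracks the release. If you want Helm's bookkeeping cleaned up as well, uninstall the release through Helm and drop the namespace:

helm --namespace helm-test uninstall redis-test
kubectl delete namespace helm-test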

Metrics Server

We can try running a command that requires the Metrics Server:

kubectl top pod

You should see an output saying error: Metrics API not available.

The Metrics Server is developed in the kubernetes-sigs/metrics-server GitHub repository, and its releases ship a components.yaml file that installs the required components.

kubectl apply --filename https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

If you now run

kubectl top pod

you will see that the Metrics API is still not available.

Let's get all the resources in the kube-system namespace:

kubectl --namespace kube-system get all

Here, we see that the metrics-server pod is in the Running state but it's not ready.

We can describe this pod (again, the name's suffix will differ on your machine):

kubectl --namespace kube-system describe pod/metrics-server-6d94bc8694-8kct4

and we see an error message that says Readiness probe failed: HTTP probe failed with statuscode: 500. On Minikube, this usually means the Metrics Server cannot scrape metrics from the kubelet, because the kubelet serves a self-signed certificate that the Metrics Server refuses to trust.

We can fix this with a simple edit in the metrics-server deployment.

kubectl --namespace kube-system edit deploy metrics-server

Add the following to the container spec, just above the image line. The --kubelet-insecure-tls flag tells the Metrics Server to accept the kubelet's self-signed certificate:

        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP

Save the file and run

watch -n 1 kubectl --namespace kube-system get all

You should see the metrics-server pod come to the ready state.

Now we can read resource usage, deploy scalers that rely on the Metrics API, and so on.

kubectl top pod

K6

Installing K6 is pretty simple: just follow the instructions given here.

Let's create a simple test to make sure everything is working as expected. Create a new file called test-single.js.

import http from "k6/http";
import { check } from "k6";

// Fetch the first todo and verify the response.
export async function getFirstTodo() {
  const response = http.get("https://jsonplaceholder.typicode.com/todos/1");

  // Assertions on the raw HTTP response.
  check(response, {
    "status code": (res) => res.status === 200,
    "content type JSON": (res) => res.headers["Content-Type"] === "application/json; charset=utf-8",
  });

  const responseData = response.json();

  // Assertion on the parsed JSON body.
  check(responseData, {
    "data.id": (resData) => resData.id === 1,
  });
}

export const options = {
  scenarios: {
    getFirstTodo: {
      exec: "getFirstTodo", // the exported function this scenario runs
      executor: "per-vu-iterations", // each VU runs a fixed number of iterations
      vus: 1, // a single virtual user
      iterations: 1, // running the function once
    },
  },
};

Now, run this test:

k6 run test-single.js

You should see an output saying that all checks have passed.

KEDA

KEDA ships with its own resources (custom resource definitions, an operator, and a metrics API server) that have to be created before we can use it for autoscaling. You can apply the following manifest to create them.

kubectl apply --filename https://github.com/kedacore/keda/releases/download/v2.10.1/keda-2.10.1-core.yaml

If we list the namespaces ...

kubectl get namespaces

... we should see that a new namespace called keda has been created.

Let's check what's in this namespace:

kubectl --namespace keda get all

You should see pods and services for keda-metrics-apiserver and keda-operator.
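
We won't deploy a scaler in this post, but to give a feel for how KEDA is used, here is a minimal sketch of a ScaledObject; the my-consumer deployment, the Redis address, and the tasks list are all hypothetical stand-ins:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-consumer-scaler
spec:
  scaleTargetRef:
    name: my-consumer              # hypothetical deployment to scale
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: redis
      metadata:
        address: redis-master.default.svc.cluster.local:6379  # hypothetical Redis service
        listName: tasks            # hypothetical list holding pending events
        listLength: "10"           # target number of pending items per replica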

Conclusion

We've managed to set up 7 tools on our local machine that allow us to easily develop and test applications meant to run on a Kubernetes Cluster.

See you next time :)
