
We’ve reached the final chapter. We’ve learned how to create all the individual YAML files for a Deployment, Service, ConfigMap, Secret, Ingress, and PVC. For a real application, these can easily add up to 5, 10, or more separate files.

This introduces a new set of problems:

  • How do you manage all these files as a single, cohesive application?
  • How do you handle deploying to different environments (dev, staging, prod) where settings like replica counts, domain names, and resource limits need to change, without copying and pasting everything?
  • How do you easily install, upgrade, and delete the entire application stack with a single command?

The answer to these challenges is Helm. Helm is the de facto standard package manager for Kubernetes. Think of it like apt/yum for Linux, npm for Node.js, or pip for Python. It’s a tool that allows you to package all your related Kubernetes resources together and manage them as a single unit.

Helm is built around four core concepts:

  1. Chart: A Helm Chart is the packaging format. It’s a collection of files in a specific directory structure that describes a related set of Kubernetes resources. This is the “package” you create for your application. The key files and directories are:
    • Chart.yaml: A file containing metadata about the chart, like its name and version.
    • templates/: A directory containing all your Kubernetes YAML files (e.g., deployment.yaml, service.yaml).
    • values.yaml: A file containing the default configuration values for your chart.
  2. Templating: The YAML files inside a chart’s templates/ directory are not static. They are processed by a templating engine. Instead of hardcoding a value like replicas: 3, you would write replicas: {{ .Values.replicaCount }}. This placeholder is filled in with a value from your values.yaml file. This is incredibly powerful because it allows you to write a single set of YAML templates and reuse them for all your environments (see the sketch after this list).
  3. Values: The values.yaml file provides the default values for all the placeholders in your templates. You can override these values when you install the chart, either by providing a custom values file or by setting them directly on the command line. For example, you could have a prod-values.yaml file that sets replicaCount: 10 and a dev-values.yaml that sets it to 1.
  4. Release: A Release is an instance of a chart running in your Kubernetes cluster. If you install the same chart twice with different configurations, you create two separate Releases.
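
To make templating concrete, here is a minimal sketch of how a template excerpt and a values file fit together. The trimmed Deployment below is illustrative, not the exact boilerplate Helm generates:

    # templates/deployment.yaml (excerpt) -- placeholders instead of hardcoded values
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}-web
    spec:
      replicas: {{ .Values.replicaCount }}
      template:
        spec:
          containers:
            - name: web
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

    # values.yaml -- the defaults that fill those placeholders
    replicaCount: 3
    image:
      repository: nginx
      tag: "1.21.6"

At install time you can override any of these defaults, e.g. helm install my-release ./my-chart -f prod-values.yaml --set replicaCount=10; values passed with --set take precedence over values files.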


Let’s put these concepts into practice.

  1. Create a New Chart: Helm provides a command to scaffold a new chart with all the necessary files and directories.

    helm create my-webapp-chart

    Look inside the newly created my-webapp-chart directory. You’ll see the Chart.yaml, values.yaml, and a templates directory with boilerplate YAML for a deployment, service, ingress, and more.
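
    The layout looks roughly like this (the exact boilerplate files vary by Helm version):

    my-webapp-chart/
    ├── Chart.yaml          # chart metadata: name, version, description
    ├── values.yaml         # default configuration values
    ├── charts/             # sub-chart dependencies (empty for now)
    └── templates/          # templated Kubernetes manifests
        ├── deployment.yaml
        ├── service.yaml
        ├── ingress.yaml
        ├── _helpers.tpl    # reusable template snippets
        └── NOTES.txt       # usage notes printed after install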

  2. Customize the Chart’s Values: Let’s edit the my-webapp-chart/values.yaml file to match our Nginx application. Find these keys and change their values:

    # Inside values.yaml
    replicaCount: 3 # Let's start with 3 replicas

    image:
      repository: nginx # Change from the default to nginx
      tag: "1.21.6" # Use the version we had before
      pullPolicy: IfNotPresent

    service:
      type: NodePort # The default is ClusterIP; let's use NodePort for easy access
      port: 80

    Take a quick look at my-webapp-chart/templates/deployment.yaml. You will see placeholders like replicas: {{ .Values.replicaCount }}, which correspond directly to the keys in values.yaml.
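
    You can also render the templates locally and inspect the final YAML before installing anything, which is a quick way to confirm your values are substituted correctly:

    helm template my-first-release ./my-webapp-chart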

  3. Install the Chart: Now, instead of using kubectl apply on multiple files, we use one command to install the entire application stack. We’ll name our release my-first-release.

    helm install my-first-release ./my-webapp-chart

    Helm will print out useful information, including how to access your application. You can see the status of your release with helm list. You can also verify that the Kubernetes objects were created with kubectl get deployment,service.
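
    For example (helm status, which re-prints the install notes, is also worth knowing):

    helm list                       # releases in the current namespace
    helm status my-first-release    # re-print the release notes and status
    kubectl get deployment,service  # the underlying Kubernetes objects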

  4. Upgrade the Release: Let’s say we want to scale our application up to 5 replicas. With Helm, this is an “upgrade”.

    helm upgrade my-first-release ./my-webapp-chart --set replicaCount=5

    Helm will intelligently figure out what changed and apply only those changes to the cluster. Run kubectl get pods and you will see Kubernetes scaling your deployment up to 5 Pods. This same process is used to update the image.tag when you have a new version of your application.
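
    For example, rolling out a new application version is just another upgrade (the 1.22.0 tag here is illustrative), and helm rollback my-first-release can return you to the previous revision if something goes wrong:

    helm upgrade my-first-release ./my-webapp-chart --set image.tag=1.22.0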

  5. Uninstall the Release: When it’s time to clean up, you don’t need to delete each object individually. You just uninstall the release.

    helm uninstall my-first-release

    This one command removes the Deployment, Service, and every other resource associated with that release, leaving your cluster clean.

Putting It All Together: Your Deployment Workflow


Congratulations! You’ve gone from zero to a fully deployable application. Here is what your workflow looks like with the knowledge you now have:

  1. Develop & Containerize: Write your application code and create a Dockerfile to package it into a container image. Build and push that image to a registry like Docker Hub or Google Container Registry.
  2. Chart Your Application: Create a Helm chart for your app using helm create. Modify the default values.yaml and templates to match your application’s needs (image name, ports, config, storage requirements, etc.).
  3. Install to Dev: Install your chart into a development namespace in your Kubernetes cluster using helm install. You can use a dev-values.yaml file to set a low replica count and other dev-specific settings.
  4. Test and Iterate: As you develop new versions of your code, you push a new image and run helm upgrade <release-name> ./my-webapp-chart --set image.tag=1.1.0 to roll out the update.
  5. Promote to Production: When you’re ready to go live, you install the exact same chart into your production namespace, but this time you provide a prod-values.yaml file to override the settings for a production environment (replicaCount: 50, ingress.hostname: my-prod-app.com, etc.):

    helm install prod-release ./my-webapp-chart -f prod-values.yaml
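
A minimal prod-values.yaml for that last step might look like the sketch below. replicaCount and ingress.hostname come from the workflow above; ingress.enabled follows the default chart scaffold, and the numbers and hostname are illustrative:

    # prod-values.yaml -- only the values that differ from the defaults
    replicaCount: 50
    ingress:
      enabled: true
      hostname: my-prod-app.com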

By learning Pods, Deployments, Services, ConfigMaps, PersistentVolumes, and Helm, you now have the foundational knowledge required to deploy, manage, and scale real-world applications on Kubernetes.


You’ve built a solid foundation; the next step is to add depth and realism to it. Instead of jumping to obscure or advanced topics, the best path is to learn the concepts that make your applications truly robust, secure, and ready for production environments.

Here are the next best things to learn, focusing on deepening your understanding of core concepts.

1. Deepen Your Core Knowledge: Making Your Apps Production-Ready


These topics build directly on what you already know about Pods and Deployments.

a) Probes: Liveness, Readiness, and Startup


You know that Kubernetes can restart a Pod if it crashes. But what if your application is running but stuck in a deadlock, unable to serve traffic? Kubernetes won’t know it’s unhealthy.

What it is:
Probes are health checks that you configure for your Pods. Kubernetes periodically runs these checks and acts based on the results.

  • Liveness Probe: Checks if your application is still alive. If it fails, Kubernetes kills and restarts the Pod (great for recovering from deadlocks).
  • Readiness Probe: Checks if the app is ready to serve traffic. If it fails, the Pod is removed from the Service endpoints (essential for zero-downtime deployments).
  • Startup Probe: Used for slow-starting apps; disables liveness/readiness probes until the app is fully started.
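
As a sketch, this is what liveness and readiness probes look like on a container spec. The /healthz and /ready paths are illustrative; your app must actually serve them:

    # Inside a Pod/Deployment spec
    containers:
      - name: web
        image: nginx:1.21.6
        livenessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 10   # give the app time before the first check
          periodSeconds: 10         # re-check every 10 seconds
        readinessProbe:
          httpGet:
            path: /ready
            port: 80
          periodSeconds: 5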

Why it’s next: This is arguably the most important next step. Without probes, you don’t have true self-healing or reliable zero-downtime deployments.

b) Resource Management: Requests and Limits


Right now, your Pods can consume unlimited CPU and memory — dangerous in a shared cluster.

What it is:

  • Requests: Guaranteed minimum resources (used for scheduling).
  • Limits: Hard caps. A container that exceeds its memory limit is killed (OOMKilled); one that exceeds its CPU limit is throttled.
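
In a container spec, that looks like this (the numbers are illustrative starting points, not recommendations):

    # Inside a Pod/Deployment spec
    containers:
      - name: web
        image: nginx:1.21.6
        resources:
          requests:
            cpu: "250m"       # a quarter of a core, used by the scheduler
            memory: "128Mi"
          limits:
            cpu: "500m"       # throttled above this
            memory: "256Mi"   # OOM-killed above this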

Why it’s next: Fundamental for stability and fairness in multi-tenant clusters.

c) Namespaces

So far, everything lives in the default namespace. As clusters grow, this becomes chaotic.

What it is:
Namespaces create virtual clusters within a physical cluster. They:

  • Scope object names
  • Allow resource quotas and access policies per namespace
  • Separate environments (dev/staging/prod) or teams
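
In day-to-day use, namespaces mostly show up as a -n flag (the dev name below is illustrative):

    kubectl create namespace dev
    helm install my-first-release ./my-webapp-chart -n dev
    kubectl get pods -n dev   # objects are only visible within their namespace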

Why it’s next: The standard way to organize real-world clusters.

2. Expand Your Workload Types

You know Deployments; now expand to the other workload types.

a) StatefulSets

Deployments are for stateless apps. What about databases, where Pod identity and ordering matter?

What it is:
A controller like Deployment, but for stateful applications. Provides:

  • Stable, unique Pod identities (e.g., db-0, db-1)
  • Ordered scaling and rollout
  • Dedicated PersistentVolumeClaims per replica
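
A minimal StatefulSet sketch showing the stable identity and per-replica storage (names and image are illustrative; a real database needs proper configuration and Secrets):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: db
    spec:
      serviceName: db              # headless Service that gives each Pod a stable DNS name
      replicas: 2                  # creates db-0 and db-1, in order
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
            - name: postgres
              image: postgres:15
              env:
                - name: POSTGRES_PASSWORD
                  value: changeme  # use a Secret in real deployments
      volumeClaimTemplates:        # one PVC per replica: data-db-0, data-db-1
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 1Gi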

Why it’s next: Required for databases, distributed systems, or any app needing stable identity.

b) Jobs and CronJobs

Not everything is a long-running service.

What it is:

  • Job: Ensures one or more Pods run to completion (e.g., migrations, batch processing).
  • CronJob: Creates Jobs on a schedule (like Linux cron).
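
A minimal CronJob sketch (the schedule and command are illustrative):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-report
    spec:
      schedule: "0 2 * * *"          # every day at 02:00
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: report
                  image: busybox:1.36
                  command: ["sh", "-c", "echo generating report"]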

Why it’s next: Covers common batch and scheduled workload needs.

Where to Learn More

  • Official Kubernetes Documentation (your best resource)
  • Katacoda / O’Reilly Interactive Learning (free browser-based Kubernetes labs)
  • Play with Kubernetes (PWK), which gives you temporary clusters in your browser
  • “Kubernetes: Up and Running, 3rd Edition” by Kelsey Hightower, Brendan Burns, and Joe Beda: practical, authoritative, and excellent for these exact topics

Master these, and you’ll go from “I can deploy a Pod” to “I can run reliable, production-grade applications on Kubernetes.”