Learning Kubernetes After Amazon ECS - A Hands-On Beginner's Journey
June 6, 2025

As cloud consultants, we often jump between different teams, projects, and tech stacks. In several recent projects, I worked with clients who came from a Kubernetes background and had started to work with AWS Elastic Container Service (ECS). They kept asking me things like “Is this like x in Kubernetes? Why can’t ECS do that?!” or “Why does xyz take so long? It’s way faster with Kubernetes…”. Until then, I had mostly been working with ECS and felt comfortable with its simplicity. But now, my clients were asking questions that clearly originated from a Kubernetes mindset. So, I made a decision: I needed to learn Kubernetes, not just conceptually, but enough to speak the same language as my clients. And luckily, I wasn’t alone on that path.
My coworker Till, who had already worked extensively with Kubernetes, was preparing for the CKA (Certified Kubernetes Administrator) exam. We saw this as a perfect chance for a knowledge exchange: He could refresh his knowledge and understanding by teaching and explaining, and I’d pester him with questions. It was hands-on from day one. We set up a local Kubernetes cluster using kind, explored tools like kustomize, talked about networking, and managed configuration with ConfigMaps and encrypted Secrets with SOPS.
In this post, I want to share a beginner-friendly walkthrough of that journey, especially aimed at those who know Amazon ECS and are now stepping into the Kubernetes world.
Amazon ECS vs. Kubernetes Terminology
Coming from Amazon ECS, Kubernetes initially felt overwhelming. But mapping concepts helped to get started:
| Amazon ECS Concept | Kubernetes Equivalent |
|---|---|
| Cluster | Kubernetes Cluster |
| Service | Deployment |
| Task Definition | Deployment (Manifest) |
| Task | Pod |
| Container | Container |
| Container Instance | Node |
| Load Balancer | Ingress Controller |
Setting Up NGINX on Kubernetes (with kind)
Let’s walk through a basic example: deploying an NGINX container.
1. Create a Cluster
To get started, we’ll use kind to spin up a local Kubernetes cluster. kind stands for “Kubernetes IN Docker”. It runs each cluster node as a Docker container. This makes it lightweight, fast, and ideal for local testing and development.
Instead of using a default cluster setup, we can create a custom config file to define our cluster and make it easier to add more options later:
📄 kind.conf
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
name: my-demo-cluster
nodes:
- role: control-plane
  image: kindest/node:v1.31.2
- role: control-plane
  image: kindest/node:v1.31.2
- role: control-plane
  image: kindest/node:v1.31.2
- role: worker
  image: kindest/node:v1.31.2
- role: worker
  image: kindest/node:v1.31.2
- role: worker
  image: kindest/node:v1.31.2
In this config we define the name and apiVersion of the cluster, as well as six nodes. Three of the nodes are control-plane nodes. These are responsible for managing the cluster state (e.g. the Kubernetes API server, scheduler, etcd). Having multiple control-plane nodes simulates the highly available control plane that is typical of production setups. The other three nodes are worker nodes; these are the ones that actually run our workloads. All nodes use the same image kindest/node:v1.31.2, which contains Kubernetes and all necessary dependencies.
To create the cluster using this configuration file, we simply run:
kind create cluster --config kind.conf
This gives us a named cluster (my-demo-cluster) we can build on.
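Once the cluster is up, it’s worth verifying that all six nodes registered correctly. The output below is roughly what kind produces for this config; exact ages will differ:

kubectl get nodes

NAME                             STATUS   ROLES           AGE   VERSION
my-demo-cluster-control-plane    Ready    control-plane   2m    v1.31.2
my-demo-cluster-control-plane2   Ready    control-plane   2m    v1.31.2
my-demo-cluster-control-plane3   Ready    control-plane   2m    v1.31.2
my-demo-cluster-worker           Ready    <none>          2m    v1.31.2
my-demo-cluster-worker2          Ready    <none>          2m    v1.31.2
my-demo-cluster-worker3          Ready    <none>          2m    v1.31.2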
2. Define Pods, Deployments, and Services
Before we can run and access our application in Kubernetes, it’s important to understand three foundational building blocks: Pods, Deployments, and Services.
2.1 Pods
A Pod is the most basic unit in Kubernetes. It wraps one or more containers and shares resources like storage and networking. In most cases, a Pod will run a single container, but it can host multiple containers if needed (e.g., for sidecar patterns). However, Pods are ephemeral. They can be restarted, rescheduled, or deleted, and their IP addresses can change. That’s why they are usually not created directly in production but managed by higher-level objects like Deployments, which we will get to know in the next section.
Here’s an example of a simple standalone Pod definition:
📄 nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app.kubernetes.io/name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
Let’s take a look at what all these properties mean:
- apiVersion: Uses the core v1 API for basic Kubernetes objects like Pods.
- kind: Indicates we are creating a Pod.
- metadata: Data used to uniquely identify the object, such as a name string.
- labels: Key-value pairs used for selection and identification.
- spec.containers: Defines one container using the official NGINX image.
- containerPort: Specifies the port that NGINX exposes inside the container.
With this file in place, we can deploy the Pod in our cluster:
kubectl apply -f nginx-pod.yaml
Before we continue, let’s quickly verify that the pod is running:
kubectl get pods
The expected output should look like this:
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          10s
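Since we attached a label to the Pod, we can also filter by it, which becomes handy once many Pods are running:

kubectl get pods -l app.kubernetes.io/name=nginx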
You can also port-forward this standalone Pod to access it locally:
kubectl port-forward pod/nginx-pod 8080:80
Then open http://localhost:8080 in your browser to see the NGINX welcome page. Great, the pod is running, so let’s move on with deployments!
2.2 Deployments
A Deployment allows you to define how many copies (or replicas) of a Pod should be running. Behind the scenes, Kubernetes uses a ReplicaSet to maintain that desired number. If a Pod goes down, the ReplicaSet ensures a new one is started. The Deployment adds another layer on top, enabling powerful features like rolling updates, rollbacks, and app management.
Let’s start again with creating the corresponding file:
📄 nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app.kubernetes.io/name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Let’s break down what each part of this deployment does (we’ll skip the parts that were already explained in the pods section):
- replicas: We want 3 instances (pods) of NGINX running.
- selector.matchLabels: This tells the deployment which pods to manage. In this case, any pod with app.kubernetes.io/name: nginx.
- template: The template defines what each replica should look like. The labels here in the metadata must match the matchLabels above, so the deployment knows which pods belong to it.
Now we can apply the deployment to our cluster:
kubectl apply -f nginx-deployment.yaml
In order to check that everything is working as expected, we can list all deployments in our cluster:
kubectl get deployments
You should see something like this:
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           10s
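By the way, this is a nice place to see the ReplicaSet’s self-healing in action: delete one of the Pods and watch a replacement appear. The Pod name hash below is a placeholder; pick a real name from your own output:

kubectl get pods
kubectl delete pod nginx-deployment-<hash>
kubectl get pods

After a moment, you should again see three Pods, one of them freshly created.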
If you want, you can also do a port-forward on the deployment level, following the same pattern as above to verify that the deployment is running as expected. However, I will skip the specific steps here.
2.3 Services
A Service provides a stable, permanent endpoint that routes traffic to the right Pods, load-balancing across replicas and ensuring your app is reachable inside (and optionally outside) the cluster. In simpler terms: the Deployment keeps your app running, and the Service makes it reachable.
📄 nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
  type: ClusterIP
Here is also an explanation of the properties we have not seen yet:
- ports.protocol: The protocol the Service listens on (TCP is the default and most common).
- ports.port: This is the port that clients will use to talk to the Service.
- ports.targetPort: This is the port inside the Pod that the Service forwards to. Since NGINX runs on port 80, we map directly to that.
- type: ClusterIP: This makes the Service internally accessible only within the Kubernetes cluster. To expose it externally (to your browser), we will use ingress later on.
After that we have to apply the service:
kubectl apply -f nginx-service.yaml
Now let’s verify that applying it was successful:
kubectl get service
The expected output should look like this:
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP    10m
nginx-service   ClusterIP   10.96.134.61   <none>        8080/TCP   8s
To confirm that NGINX is reachable through the Service, we can use port forwarding again. This time, we forward port 8081 from our local machine to port 8080 on the service:
kubectl port-forward service/nginx-service 8081:8080
Now we can open http://localhost:8081 in our browser and check if the NGINX welcome page appears.
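Because the Service is of type ClusterIP, it is also reachable from inside the cluster under its DNS name. As a quick sketch (the temporary Pod name and curl image are just examples), we can test this with a throwaway Pod:

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- curl -s http://nginx-service:8080

This should print the HTML of the NGINX welcome page and then remove the temporary Pod again.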
3. Adding an Ingress
Currently, there are multiple ways in Kubernetes to manage external access to our application:
- an Ingress controller combined with Ingress objects, or a Service of type LoadBalancer
- the Gateway API with a combination of Gateway and HTTPRoute

The Gateway API is still pretty new, but the ecosystem will likely move from Ingress to Gateway over time. For now, we’ll look at Ingress only. If you want to explore different ingress options, check out the corresponding Kubernetes docs.
Since we are using kind, there are two main options for setting up ingress. For the sake of simplicity, we will use extraPortMappings. This is a straightforward way to expose the cluster when running locally. However, keep in mind that kind is intended only for local development, and extraPortMappings is not something you would use in a real production environment.
Alright, let’s continue by adding the following snippet to one of our worker nodes in the kind.conf file:
- role: worker
  image: kindest/node:v1.31.2
  labels:
    ingress-controller: "true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 8081
    protocol: TCP
After that, recreate the cluster:
kind delete clusters my-demo-cluster
kind create cluster --config kind.conf
… and don’t forget to create the deployment and the service again:
kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml
Next, we want to deploy an ingress controller; ingress-nginx is the most popular one. We need to download, install, and configure it. To do so, we will use Helm, a package manager for Kubernetes that helps you define, install, and manage complex Kubernetes applications. It also lets us pass installation-specific configuration (e.g., the controller’s service type, resource limits, etc.) via the --values flag. Time to create the configuration file:
📄 ingress-nginx-values.yaml
controller:
  nodeSelector:
    ingress-controller: "true"
  hostPort:
    enabled: true
This configuration schedules the ingress controller onto our labeled worker node (via the nodeSelector) and binds the controller directly to that node’s ports (via hostPort), so the traffic arriving through the extraPortMappings we defined earlier actually reaches it.
We’re ready to use helm and install our ingress-nginx. Here’s the full command:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--values ingress-nginx-values.yaml
The --namespace option tells Helm where to install the Kubernetes resources. Namespaces are a way to logically separate resources within the same cluster; by default, everything is created in the default namespace. A full list of possible chart values can be found with the following command after you have added the repo:
helm show values ingress-nginx/ingress-nginx
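Before continuing, it’s worth checking that the controller actually came up and landed on the node we labeled:

kubectl get pods -n ingress-nginx -o wide

You should see an ingress-nginx-controller Pod in status Running, scheduled on our designated worker node.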
Now we need to create an ingress object:
📄 ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 8080
Before we apply the ingress to our cluster, let’s take a closer look at the rules part of the manifest. We’re using a default rule with no host specified, which means it applies to all incoming requests:
- http: Indicates that we’re defining routing for HTTP traffic.
- paths: A list of URL path patterns the Ingress should match. Each path points to a backend service.
- path: /: This means the rule applies to any request that starts with /. Since it’s the root path, it effectively catches all traffic unless more specific paths are defined.
- pathType: Prefix: Tells Kubernetes how to match the path. Prefix means it matches all requests that start with the given path (/ in this case), including /, /about, /api, etc.
- backend: Specifies where the request should be forwarded once the path matches.
- service.name: The name of the Kubernetes Service that will receive the traffic.
- service.port.number: The port on the target service to forward the request to. It must match the port exposed by your service (in this case, 8080).
We’re ready to add the ingress to our cluster:
kubectl apply -f ingress.yaml
Great! Now you should be able to access http://localhost:8081 and see the NGINX welcome page!
At this point, let’s recap the traffic flow:
- kind: One local port (8081) on your machine is forwarded to port 80 on one of the containerized Kubernetes nodes. You can verify this with docker ps or by calling http://localhost:8081.
- Ingress controller & Ingress: The ingress controller on that containerized node is (by default) configured to route traffic according to the rules set in Ingress objects (ingress.yaml). Remember, in this case all HTTP traffic (port 80) with paths starting with / is forwarded to the nginx service on port 8080.
- Service: The nginx service routes traffic from incoming port 8080 to port 80 on all Pods matching the labels in the selector field of the service definition, i.e. the Pods of our nginx deployment.
- NGINX: The nginx Pods serve the traffic.
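To double-check the first hop, you can list the kind node containers and their published ports (output will vary with your cluster name):

docker ps --format 'table {{.Names}}\t{{.Ports}}'

The worker container we configured should show a mapping like 0.0.0.0:8081->80/tcp.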
Kustomize for Easier Configuration
You might have noticed that applying YAML files one by one can be tedious, especially when managing larger clusters with many services and deployments. Instead of deploying multiple YAML files one after another, we can use kustomize to manage related resources as a single unit.
Here’s an example kustomization.yaml that includes our NGINX deployment, service, and ingress:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- nginx-deployment.yaml
- nginx-service.yaml
- ingress.yaml
With this file, we can deploy everything at once by running:
kubectl apply -k .
This single command applies all the resources referenced in the kustomization.
We won’t dive much deeper into all the capabilities of Kustomize here, but to give you a quick glimpse: it offers overlay and patching capabilities that help reduce duplication and make it easy to maintain environment-specific values.
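To give you a flavor, here is a hypothetical overlay that reuses our resources as a base and only overrides the replica count, e.g. for a production environment (the overlays/prod layout and the count are made up for illustration; it assumes a base directory containing the kustomization.yaml from above):

📄 overlays/prod/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
replicas:
- name: nginx-deployment
  count: 5

Applying it with kubectl apply -k overlays/prod would deploy the same manifests, just with five NGINX replicas.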
Managing Configuration with ConfigMaps
When deploying applications, we often need to inject configuration like environment variables, app settings, or feature flags. In Amazon ECS, this might be handled through environment variables in the task definition. In Kubernetes, one of the most common tools for this is the ConfigMap, an object used to store non-sensitive configuration data as key-value pairs.
📄 my-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_ENV: production
To apply this to the cluster:
kubectl apply -f my-configmap.yaml
Once the ConfigMap exists, you can mount its values into your Pods either as environment variables or as files.
Here’s how to use it in the NGINX deployment by injecting it as environment variables:
📄 nginx-deployment.yaml (snippet)
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    envFrom:
    - configMapRef:
        name: app-config
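Alternatively, the same ConfigMap can be mounted as files, where each key becomes a file in the mount directory. A sketch (the mount path is arbitrary):

📄 nginx-deployment.yaml (snippet)

spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: config-volume
      mountPath: /etc/app-config
      readOnly: true
  volumes:
  - name: config-volume
    configMap:
      name: app-config

With this, the container sees a file /etc/app-config/APP_ENV containing the value production.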
Managing Secrets
Kubernetes has a built-in resource type called Secret for managing sensitive data like passwords, tokens, or SSH keys. These secrets are stored as base64-encoded strings, which means they are not encrypted. Therefore, anyone with access to the cluster could potentially read them. While Kubernetes does support encryption at rest (when configured), it’s not enabled by default. Secrets can be hard to manage correctly when using Infrastructure as Code and version control. To reduce risk, teams often use tools like SOPS to encrypt secrets stored in Git, and rely on external secret managers like HashiCorp Vault or AWS Secrets Manager to securely inject secrets into workloads at runtime.
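For reference, a minimal built-in Secret could look like this (the name and key are illustrative; stringData lets you write plain values, which the API server then stores base64-encoded):

📄 app-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  API_KEY: my-secret-api-key

Precisely because this value is merely encoded, not encrypted, you should not commit such a file to Git as-is; this is where SOPS comes in.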
SOPS (Secrets OPerationS) allows you to encrypt files, and still keep them readable and editable in git. It supports various encryption backends like AWS KMS, GCP KMS, HashiCorp Vault, GPG, and age.
For simplicity and ease of setup, we’ll use age.
1. Create an Age Key
age-keygen -o age.key
This creates a private key in age.key and prints the corresponding public key to stdout (we will need this in a second).
2. Create a test file to encrypt
📄 secret.yaml
api_key: my-secret-api-key
3. Encrypt the file using SOPS + Age
sops --encrypt --age <YOUR_AGE_PUBLIC_KEY> secret.yaml > secret.enc.yaml
You now have an encrypted version of your secret. But you can still edit it with sops (as long as sops knows your private key):
export SOPS_AGE_KEY_FILE=./age.key # your previously created age.key file
sops edit secret.enc.yaml
As you can see, unlike plain Kubernetes Secret manifests, SOPS encrypts your files using robust encryption, making them safe to store in Git while remaining manageable and editable.
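To get the plaintext back out, e.g. for piping into another tool, you can decrypt to stdout (with SOPS_AGE_KEY_FILE from above still set):

sops --decrypt secret.enc.yaml

This prints the original api_key: my-secret-api-key again.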
Wrapping Up
This hands-on approach helped me bridge the gap from Amazon ECS to Kubernetes and gave me a solid foundation to build on. If you’re starting your Kubernetes journey too, I hope this guide gives you a clear and practical path forward.