Understanding Kubernetes Networking in OCI (Before We Begin)
In Oracle Kubernetes Engine (OKE), Kubernetes runs inside an OCI VCN.
Everything ultimately lives inside:
- VCN CIDRs
- Subnets
- Route Tables
- NSGs / Security Lists
There are two major networking approaches:
Flannel Overlay Networking (Traditional Model)
Pods live in a separate CIDR that OCI does not see.
Example:
VCN CIDR : 10.0.0.0/16
Node Subnet : 10.0.1.0/24
Pod CIDR : 172.16.0.0/16 (K8s only)
How it works
- OCI assigns IPs to nodes
- Kubernetes assigns IPs to pods
- Traffic is encapsulated (overlay network)
Pros
- Simple setup
- Pods don’t consume VCN IPs
Cons
- Extra encapsulation (performance hit)
- OCI cannot see pods
- No NSG per pod
Flannel hides pods inside tunnels
VCN-native makes pods real OCI network citizens
VCN-Native Pod Networking (OCI Recommended)
Pods get real VCN IPs.
VCN CIDR : 10.0.0.0/16
Node Subnet : 10.0.1.0/24
Pod Subnet : 10.0.2.0/24
Advantages
- Direct routing (no overlay)
- Faster performance
- NSG support per pod
- OCI Flow Logs visibility
- Cleaner Load Balancer integration
Best practice for production OKE clusters
Pod-to-pod communication happens over IP addresses. Container-to-container communication inside the same pod happens over localhost, because all containers in a pod share the pod's network namespace; an individual container does not get its own IP.
Step-by-Step: Install Minikube on Ubuntu
We’ll install Minikube locally using Docker.
Logged in as the ubuntu user:

sudo apt update

Test Docker:

docker ps

Install Minikube:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
Verify:
minikube version
Start Minikube (as normal user, NOT root)
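The downloaded binary still needs to be put on the PATH before starting the cluster. A minimal sketch following the standard Minikube install steps (the Docker driver is assumed, to match the setup above):

```shell
# Install the downloaded binary and start a single-node cluster with the Docker driver
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start --driver=docker

# If kubectl is not installed separately, it can be run through minikube
minikube kubectl -- get nodes
```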
ubuntu@instance-20260221-1609:~$ kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
minikube   Ready    control-plane   3m37s   v1.35.1
ubuntu@instance-20260221-1609:~$
ubuntu@instance-20260221-1609:~$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ubuntu@instance-20260221-1609:~$ cat pod1.yml
kind: Pod
apiVersion: v1
metadata:
  name: testpod1
spec:
  containers:
    - name: c00
      image: ubuntu
      command: ["/bin/bash", "-c", "while true; do echo Hello-Samrat; sleep 5; done"]
  restartPolicy: Never   # Defaults to Always
ubuntu@instance-20260221-1609:~$ kubectl apply -f pod1.yml
pod/testpod1 created
ubuntu@instance-20260221-1609:~$ kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
testpod1   1/1     Running   0          15s
Delete the pod:
ubuntu@instance-20260221-1609:~$ kubectl delete pod testpod1
pod "testpod1" deleted
ubuntu@instance-20260221-1609:~$
Create another pod which will have two containers inside it:

kind: Pod
apiVersion: v1
metadata:
  name: testpod1
spec:
  containers:
    - name: c00
      image: ubuntu
      command: ["/bin/bash", "-c", "while true; do echo Hello-Samrat; sleep 5; done"]
    - name: c01
      image: httpd
      ports:
        - containerPort: 80
ubuntu@instance-20260221-1609:~$ kubectl apply -f pod1.yml
pod/testpod1 created
ubuntu@instance-20260221-1609:~$ kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
testpod1   2/2     Running   0          8s
See how many containers are running inside the pod:

ubuntu@instance-20260221-1609:~$ kubectl describe pod testpod1
Name:             testpod1
Namespace:        default
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Sat, 21 Feb 2026 18:10:07 +0000
Labels:           <none>
Annotations:      <none>
Status:           Running
IP:               10.244.0.4
IPs:
  IP:  10.244.0.4
Containers:
  c00:
    Container ID:  docker://1bad9d5c30e0ffa6d290393d46928ac2bc1574647d2fbd2eae5a7260638dce09
    Image:         ubuntu
    Image ID:      docker-pullable://ubuntu@sha256:d1e2e92c075e5ca139d51a140fff46f84315c0fdce203eab2807c7e495eff4f9
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      while true; do echo Hello-Samrat; sleep 5 ; done
    State:          Running
      Started:      Sat, 21 Feb 2026 18:10:08 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kf2dx (ro)
  c01:
    Container ID:  docker://de58d82ded64b7bff6ff2856e3b9c4648cc4c40ca3852d3d026430c2d7a762a4
    Image:         httpd
    Image ID:      docker-pullable://httpd@sha256:b89c19a390514d6767e8c62f29375d0577190be448f63b24f5f11d6b03f7bf18
    Port:          80/TCP
    Host Port:     0/TCP
    State:          Running
      Started:      Sat, 21 Feb 2026 18:10:11 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kf2dx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-kf2dx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  27m   default-scheduler  Successfully assigned default/testpod1 to minikube
  Normal  Pulling    27m   kubelet            Pulling image "ubuntu"
  Normal  Pulled     27m   kubelet            Successfully pulled image "ubuntu" in 112ms (112ms including waiting). Image size: 78129634 bytes.
  Normal  Created    27m   kubelet            Container created
  Normal  Started    27m   kubelet            Container started
  Normal  Pulling    27m   kubelet            Pulling image "httpd"
  Normal  Pulled     27m   kubelet            Successfully pulled image "httpd" in 2.562s (2.562s including waiting). Image size: 116779583 bytes.
  Normal  Created    27m   kubelet            Container created
  Normal  Started    27m   kubelet            Container started
ubuntu@instance-20260221-1609:~$
Go inside the container:

ubuntu@instance-20260221-1609:~$ kubectl exec testpod1 -it -c c00 -- /bin/bash
root@testpod1:/#
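Because c00 and c01 share the pod's network namespace, c01's httpd answers on localhost from inside c00. A quick sketch (the stock ubuntu image does not ship curl, so it has to be installed first):

```shell
# Run these inside the c00 (ubuntu) container of testpod1
apt update && apt install -y curl
curl -s localhost:80    # reaches the httpd container c01 over the shared loopback
```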
Create two pods.

First pod, pod1.yml:

kind: Pod
apiVersion: v1
metadata:
  name: testpod1
spec:
  containers:
    - name: c01
      image: nginx
      ports:
        - containerPort: 80

Create another YAML file, pod2.yml:

kind: Pod
apiVersion: v1
metadata:
  name: testpod2
spec:
  containers:
    - name: c02
      image: httpd
      ports:
        - containerPort: 80
ubuntu@instance-20260221-1609:~$ kubectl apply -f pod2.yml
pod/testpod2 created
ubuntu@instance-20260221-1609:~$ kubectl apply -f pod1.yml
pod/testpod1 created
ubuntu@instance-20260221-1609:~$ kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
testpod1   1/1     Running   0          6s
testpod2   1/1     Running   0          15s
ubuntu@instance-20260221-1609:~$
ubuntu@instance-20260221-1609:~$ kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
testpod1   1/1     Running   0          63s   10.244.0.6   minikube   <none>           <none>
testpod2   1/1     Running   0          72s   10.244.0.5   minikube   <none>           <none>
ubuntu@instance-20260221-1609:~$
The pod IP is managed internally by the CNI (Flannel) networking. To access a pod from the node, use port forwarding:

ubuntu@instance-20260221-1609:~$ kubectl port-forward pod/testpod1 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
From a duplicate terminal session:
ubuntu@instance-20260221-1609:~$ curl localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
ubuntu@instance-20260221-1609:~$
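To test the second pod the same way, stop the first forward (Ctrl-C) and forward local port 8080 to testpod2 instead:

```shell
kubectl port-forward pod/testpod2 8080:80
# then, from another session:
curl localhost:8080    # now answered by httpd in testpod2
```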
Similarly for the second pod, with the port-forward now pointing at testpod2:
ubuntu@instance-20260221-1609:~$ curl localhost:8080
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>It works! Apache httpd</title>
</head>
<body>
<p>It works!</p>
</body>
</html>
ubuntu@instance-20260221-1609:~$
The alternative way is to expose the pods via NodePort. From the node VM, label the pods:

kubectl label pod testpod1 app=testpod1
kubectl label pod testpod2 app=testpod2

Then expose them:

kubectl expose pod testpod1 --type=NodePort --port=80 --name=testpod1-svc
kubectl expose pod testpod2 --type=NodePort --port=80 --name=testpod2-svc
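Each kubectl expose command above generates a Service object roughly equivalent to the following manifest (a sketch; the nodePort is normally auto-assigned from the 30000–32767 range, and the selector assumes the app=testpod1 label applied above):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: testpod1-svc
spec:
  type: NodePort
  selector:
    app: testpod1
  ports:
    - port: 80        # Service port inside the cluster
      targetPort: 80  # Pod's container port
```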
ubuntu@instance-20260221-1609:~$ kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP        156m
testpod1-svc   NodePort    10.102.38.2     <none>        80:31003/TCP   19s
testpod2-svc   NodePort    10.96.172.210   <none>        80:30589/TCP   9s
ubuntu@instance-20260221-1609:~$ minikube ip
192.168.49.2
ubuntu@instance-20260221-1609:~$ curl 192.168.49.2:31003
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
ubuntu@instance-20260221-1609:~$
ubuntu@instance-20260221-1609:~$ curl 192.168.49.2:30589
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>It works! Apache httpd</title>
</head>
<body>
<p>It works!</p>
</body>
</html>
ubuntu@instance-20260221-1609:~$
I will now create a Deployment using a new YAML file:
ubuntu@instance-20260221-1609:~$ cat deploymenthttpd.yml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mydeployments
spec:
  replicas: 1
  selector:              # tells the controller which pods to watch/belong to
    matchLabels:
      name: deployment
  template:
    metadata:
      name: testpod1
      labels:
        name: deployment
    spec:
      containers:
        - name: c00
          image: httpd
          ports:
            - containerPort: 80
ubuntu@instance-20260221-1609:~$
kubectl apply -f deploymenthttpd.yml
deployment.apps/mydeployments created
ubuntu@instance-20260221-1609:~$ kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
mydeployments-5b469d887b-svs6b   1/1     Running   0          36s   10.244.0.7   minikube   <none>           <none>
ubuntu@instance-20260221-1609:~$
If I delete this pod, another pod will be created with a different IP, because replicas is set to 1. That scenario breaks any application which connects to the pod using its IP. To overcome it, we have an object called Service, which always maps the static IP of the Service to the backend pods.
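A quick way to watch the self-healing (and the IP change) in action; the replacement pod gets a new name hash and a new IP:

```shell
kubectl delete pod mydeployments-5b469d887b-svs6b   # the ReplicaSet immediately creates a replacement
kubectl get pods -o wide                            # new pod name, new IP
```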
Create a service:

ubuntu@instance-20260221-1609:~$ cat service.yml
kind: Service          # Defines a Service type object
apiVersion: v1
metadata:
  name: demoservice
spec:
  ports:
    - port: 80         # Service port exposed inside the cluster
      targetPort: 80   # Pod's port
  selector:
    name: deployment   # Apply this service to any pods which have this label
  type: ClusterIP
kubectl apply -f service.yml
ubuntu@instance-20260221-1609:~$ kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
demoservice    ClusterIP   10.106.98.216   <none>        80/TCP         8s
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP        25h
testpod1-svc   NodePort    10.102.38.2     <none>        80:31003/TCP   23h
testpod2-svc   NodePort    10.96.172.210   <none>        80:30589/TCP   23h
Storage inside the container: a volume is mounted in the pod and shared with its containers. If the pod is deleted, the volume is gone too, so it is not persistent.

Persistent volume: we will leverage a Block Volume or FSS from OCI, and the same will be mounted on the worker nodes for OKE. In this example I am going to use a Block Volume.
I have one worker node defined:

samsin16@cloudshell:~ (us-ashburn-1)$ kubectl get nodes
NAME         STATUS   ROLES   AGE     VERSION
10.0.10.97   Ready    node    7m28s   v1.34.2
samsin16@cloudshell:~ (us-ashburn-1)$
Creating a PVC on a Block Volume Using the CSI Volume Plugin

You define a PVC in a file called csi-bvs-pvc.yaml. For example:

csi-bvs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mynginxclaim
spec:
  storageClassName: "oci-bv"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
samsin16@cloudshell:oke (us-ashburn-1)$ kubectl create -f csi-bvs-pvc.yaml
persistentvolumeclaim/mynginxclaim created
samsin16@cloudshell:oke (us-ashburn-1)$
Verify that the PVC has been created by running kubectl get pvc:

samsin16@cloudshell:oke (us-ashburn-1)$ kubectl get pvc
NAME           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
mynginxclaim   Pending                                      oci-bv         <unset>                 35s
samsin16@cloudshell:oke (us-ashburn-1)$
The PVC has a status of Pending because
the oci-bv storage class's definition
includes volumeBindingMode: WaitForFirstConsumer.
Now we will create a new pod from the following pod definition, which instructs the system to use the mynginxclaim PVC as the pod's data volume, mounted at /usr/share/nginx/html.
samsin16@cloudshell:oke (us-ashburn-1)$ cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: docker.io/library/nginx:latest
      ports:
        - containerPort: 80
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mynginxclaim
samsin16@cloudshell:oke (us-ashburn-1)$
samsin16@cloudshell:oke (us-ashburn-1)$ vi pod.yml    # content as above
samsin16@cloudshell:oke (us-ashburn-1)$ kubectl apply -f pod.yml
pod/nginx created
samsin16@cloudshell:oke (us-ashburn-1)$
samsin16@cloudshell:oke (us-ashburn-1)$ kubectl get pods
NAME    READY   STATUS             RESTARTS   AGE
nginx   0/1     ImagePullBackOff   0          9m13s
samsin16@cloudshell:oke (us-ashburn-1)$
samsin16@cloudshell:oke (us-ashburn-1)$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
mynginxclaim   Bound    csi-30a3bc85-b9e4-42a7-8307-d174ea57b077   50Gi       RWO            oci-bv         <unset>                 12m
samsin16@cloudshell:oke (us-ashburn-1)$
You can verify that the pod is running and using the new persistent volume claim by entering:

samsin16@cloudshell:oke (us-ashburn-1)$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          7m52s
samsin16@cloudshell:oke (us-ashburn-1)$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
mynginxclaim   Bound    csi-30a3bc85-b9e4-42a7-8307-d174ea57b077   50Gi       RWO            oci-bv         <unset>                 35m
samsin16@cloudshell:oke (us-ashburn-1)$
Go inside the pod and check whether the mount point exists:

samsin16@cloudshell:oke (us-ashburn-1)$ kubectl exec -it nginx -- /bin/bash
root@nginx:/# df -kh
Filesystem                  Size  Used  Avail  Use%  Mounted on
overlay                      36G   19G    18G   52%  /
tmpfs                        64M     0    64M    0%  /dev
tmpfs                       4.8G     0   4.8G    0%  /sys/fs/cgroup
shm                          64M     0    64M    0%  /dev/shm
tmpfs                       4.8G   10M   4.8G    1%  /etc/hostname
/dev/mapper/ocivolume-root   36G   19G    18G   52%  /etc/hosts
/dev/sdb                     49G   24K    49G    1%  /usr/share/nginx/html
tmpfs                       8.3G   12K   8.3G    1%  /run/secrets/kubernetes.io/serviceaccount
tmpfs                       4.8G     0   4.8G    0%  /proc/acpi
tmpfs                       4.8G     0   4.8G    0%  /proc/scsi
tmpfs                       4.8G     0   4.8G    0%  /sys/firmware
tmpfs                       4.8G     0   4.8G    0%  /sys/fs/selinux
root@nginx:/# mount | grep /usr/share/nginx/html
/dev/sdb on /usr/share/nginx/html type ext4 (rw,relatime,seclabel,stripe=256)
root@nginx:/#
Liveness probe

It is a sort of health check: the kubelet probes the container, and if it is found unhealthy, the kubelet terminates that container and starts a new one in its place.
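The exec-style probe used below decides health purely from the command's exit code: zero means healthy, non-zero means unhealthy. That logic can be simulated in any shell:

```shell
# Simulate the exec liveness probe: `cat` succeeds while the file exists...
touch /tmp/healthy
cat /tmp/healthy >/dev/null; echo "probe exit: $?"   # probe exit: 0 (healthy)

# ...and fails once it is removed, which is what trips the probe
rm /tmp/healthy
cat /tmp/healthy 2>/dev/null; echo "probe exit: $?"  # probe exit: 1 (unhealthy)
```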
ubuntu@instance-20260221-1609:~$ cat livenessprobe.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: mylivenessprobe
spec:
  containers:
    - name: liveness
      image: ubuntu
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 1000
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
        timeoutSeconds: 30
ubuntu@instance-20260221-1609:~$
ubuntu@instance-20260221-1609:~$ kubectl apply -f livenessprobe.yml
pod/mylivenessprobe created
ubuntu@instance-20260221-1609:~$ kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
mylivenessprobe   1/1     Running   0          10s
ubuntu@instance-20260221-1609:~$ kubectl describe pods
Name:             mylivenessprobe
Namespace:        default
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Fri, 27 Feb 2026 19:01:42 +0000
Labels:           test=liveness
Annotations:      <none>
Status:           Running
IP:               10.244.0.16
IPs:
  IP:  10.244.0.16
Containers:
  liveness:
    Container ID:  docker://5c359d3b2612364e9f239d7f032f3a666fb27e71d9fc0286edfa96ed049afe80
    Image:         ubuntu
    Image ID:      docker-pullable://ubuntu@sha256:d1e2e92c075e5ca139d51a140fff46f84315c0fdce203eab2807c7e495eff4f9
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      touch /tmp/healthy; sleep 1000
    State:          Running
      Started:      Fri, 27 Feb 2026 19:01:43 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       exec [cat /tmp/healthy] delay=5s timeout=30s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4j9m9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-4j9m9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  72s   default-scheduler  Successfully assigned default/mylivenessprobe to minikube
  Normal  Pulling    72s   kubelet            Pulling image "ubuntu"
  Normal  Pulled     71s   kubelet            Successfully pulled image "ubuntu" in 209ms (209ms including waiting). Image size: 78129634 bytes.
  Normal  Created    71s   kubelet            Container created
  Normal  Started    71s   kubelet            Container started
Go inside the container (this pod has only one container, so -c is not needed):

ubuntu@instance-20260221-1609:~$ kubectl exec -it mylivenessprobe -- /bin/bash

I am inside the container now:

root@mylivenessprobe:/# ls /tmp/healthy
/tmp/healthy
root@mylivenessprobe:/#
Check the probe command's exit status (echo $? prints the exit status of the previous command):

root@mylivenessprobe:/# cat /tmp/healthy
root@mylivenessprobe:/# echo $?
0
root@mylivenessprobe:/#

Let us delete the healthy file; now the probe command fails:

root@mylivenessprobe:/# rm /tmp/healthy
root@mylivenessprobe:/# cat /tmp/healthy
cat: /tmp/healthy: No such file or directory
root@mylivenessprobe:/# echo $?
1
root@mylivenessprobe:/#
Shortly after, the container is restarted:

ubuntu@instance-20260221-1609:~$ kubectl get pods
NAME              READY   STATUS    RESTARTS      AGE
mylivenessprobe   1/1     Running   1 (24s ago)   5m54s
ubuntu@instance-20260221-1609:~$ kubectl describe pod
Name:             mylivenessprobe
Namespace:        default
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Fri, 27 Feb 2026 19:01:42 +0000
Labels:           test=liveness
Annotations:      <none>
Status:           Running
IP:               10.244.0.16
IPs:
  IP:  10.244.0.16
Containers:
  liveness:
    Container ID:  docker://71181383e5a6d03405cc76be582a1c69701287b025cdf2d44fc260ee41092a3c
    Image:         ubuntu
    Image ID:      docker-pullable://ubuntu@sha256:d1e2e92c075e5ca139d51a140fff46f84315c0fdce203eab2807c7e495eff4f9
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      touch /tmp/healthy; sleep 1000
    State:          Running
      Started:      Fri, 27 Feb 2026 19:07:13 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Fri, 27 Feb 2026 19:01:43 +0000
      Finished:     Fri, 27 Feb 2026 19:07:12 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       exec [cat /tmp/healthy] delay=5s timeout=30s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4j9m9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-4j9m9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  6m3s                default-scheduler  Successfully assigned default/mylivenessprobe to minikube
  Normal   Pulled     6m2s                kubelet            Successfully pulled image "ubuntu" in 209ms (209ms including waiting). Image size: 78129634 bytes.
  Warning  Unhealthy  63s (x3 over 73s)   kubelet            Liveness probe failed: cat: /tmp/healthy: No such file or directory
  Normal   Killing    63s                 kubelet            Container liveness failed liveness probe, will be restarted
  Normal   Pulling    33s (x2 over 6m3s)  kubelet            Pulling image "ubuntu"
  Normal   Created    33s (x2 over 6m2s)  kubelet            Container created
  Normal   Pulled     33s                 kubelet            Successfully pulled image "ubuntu" in 129ms (129ms including waiting). Image size: 78129634 bytes.
  Normal   Started    32s (x2 over 6m2s)  kubelet            Container started
ubuntu@instance-20260221-1609:~$
We can see that the moment the health check failed, Kubernetes restarted the container (Restart Count is now 1).
Horizontal scaling

Download the metrics-server manifest:

ubuntu@instance-20260221-1609:~$ wget -O metricserver.yml https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
--2026-03-01 19:27:40--  https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Resolving github.com (github.com)... 140.82.114.4
Connecting to github.com (github.com)|140.82.114.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.8.1/components.yaml [following]
Reusing existing connection to github.com:443.
HTTP request sent, awaiting response... 302 Found
Location: https://release-assets.githubusercontent.com/... [following]
HTTP request sent, awaiting response... 200 OK
Length: 4330 (4.2K) [application/octet-stream]
Saving to: ‘metricserver.yml’

metricserver.yml    100%[===================>]   4.23K  --.-KB/s    in 0s

2026-03-01 19:27:40 (46.1 MB/s) - ‘metricserver.yml’ saved [4330/4330]
ubuntu@instance-20260221-1609:~$
Open the metricserver.yml file and add the --kubelet-insecure-tls line to the container args (Minikube's kubelet serves a self-signed certificate), so the Deployment's spec looks like:

    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
ubuntu@instance-20260221-1609:~$ kubectl apply -f metricserver.yml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader
created
clusterrole.rbac.authorization.k8s.io/system:metrics-server
created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader
created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator
created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server
created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io
created
ubuntu@instance-20260221-1609:~$
ubuntu@instance-20260221-1609:~$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   8d
kube-node-lease   Active   8d
kube-public       Active   8d
kube-system       Active   8d
ubuntu@instance-20260221-1609:~$
ubuntu@instance-20260221-1609:~$ kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-7d764666f9-dnhs7           1/1     Running   0          8d
etcd-minikube                      1/1     Running   0          8d
kube-apiserver-minikube            1/1     Running   0          8d
kube-controller-manager-minikube   1/1     Running   0          8d
kube-proxy-l4k5k                   1/1     Running   0          8d
kube-scheduler-minikube            1/1     Running   0          8d
metrics-server-6cb56849d5-f7twf    1/1     Running   0          78s
storage-provisioner                1/1     Running   0          8d
ubuntu@instance-20260221-1609:~$
ubuntu@instance-20260221-1609:~$ kubectl logs -f metrics-server-6cb56849d5-f7twf -n kube-system
I0301 19:31:10.933550       1 serving.go:380] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0301 19:31:11.520992       1 handler.go:288] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
I0301 19:31:11.628432       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0301 19:31:11.628519       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0301 19:31:11.628433       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0301 19:31:11.628597       1 shared_informer.go:350] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
I0301 19:31:11.628443       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0301 19:31:11.628915       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0301 19:31:11.629250       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
I0301 19:31:11.630156       1 secure_serving.go:211] Serving securely on [::]:10250
I0301 19:31:11.630202       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I0301 19:31:11.728888       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0301 19:31:11.729022       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0301 19:31:11.729184       1 shared_informer.go:357] "Caches are synced" controller="RequestHeaderAuthRequestController"
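With metrics-server synced, resource usage becomes queryable via the standard commands (they return data only once the first metrics scrape has completed, typically within a minute):

```shell
kubectl top nodes                 # CPU/memory per node
kubectl top pods -n kube-system   # CPU/memory per pod in a namespace
```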
Create a YAML file, deploymenthpa.yml, as:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: mydeploy
spec:
  replicas: 1
  selector:
    matchLabels:
      name: deployment
  template:
    metadata:
      name: testpod8
      labels:
        name: deployment
    spec:
      containers:
        - name: c00
          image: httpd
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 500m
            requests:
              cpu: 200m
ubuntu@instance-20260221-1609:~$ kubectl apply -f deploymenthpa.yml
deployment.apps/mydeploy created
ubuntu@instance-20260221-1609:~$
ubuntu@instance-20260221-1609:~$ kubectl get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/mydeploy-75ccf675b9-827kh   1/1     Running   0          43s

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/demoservice    NodePort    10.97.24.225    <none>        80:30717/TCP   7d
service/kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP        8d
service/testpod1-svc   NodePort    10.102.38.2     <none>        80:31003/TCP   7d23h
service/testpod2-svc   NodePort    10.96.172.210   <none>        80:30589/TCP   7d23h

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mydeploy   1/1     1            1           43s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/mydeploy-75ccf675b9   1         1         1       43s
ubuntu@instance-20260221-1609:~$
ubuntu@instance-20260221-1609:~$ kubectl autoscale deployment mydeploy --cpu-percent=20 --min=1 --max=10
horizontalpodautoscaler.autoscaling/mydeploy autoscaled
ubuntu@instance-20260221-1609:~$
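The autoscale command above is equivalent to an HPA manifest like this (a sketch using the autoscaling/v2 API, which expresses the CPU target as average utilization):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mydeploy
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mydeploy
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 20
```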
ubuntu@instance-20260221-1609:~$ kubectl get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/mydeploy-75ccf675b9-827kh   1/1     Running   0          2m33s

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/demoservice    NodePort    10.97.24.225    <none>        80:30717/TCP   7d
service/kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP        8d
service/testpod1-svc   NodePort    10.102.38.2     <none>        80:31003/TCP   7d23h
service/testpod2-svc   NodePort    10.96.172.210   <none>        80:30589/TCP   7d23h

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mydeploy   1/1     1            1           2m33s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/mydeploy-75ccf675b9   1         1         1       2m33s

NAME                                           REFERENCE             TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/mydeploy   Deployment/mydeploy   cpu: 0%/20%   1         10        0          11s
ubuntu@instance-20260221-1609:~$
As the load increases, the number of pods will increase. At present there is one pod:

ubuntu@instance-20260221-1609:~$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-75ccf675b9-827kh   1/1     Running   0          5m22s
ubuntu@instance-20260221-1609:~$
To simulate load, go inside the pod from a duplicate session:

ubuntu@instance-20260221-1609:~$ kubectl exec mydeploy-75ccf675b9-827kh -it -- /bin/bash
root@mydeploy-75ccf675b9-827kh:/usr/local/apache2#

In the original session:

watch kubectl get all

In the duplicate session, inside the pod, run apt update (repeatedly) to increase the CPU load; the HPA will add replicas. As the load decreases, after the default cooldown period of 5 minutes, the extra pods will be deleted.
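An alternative way to generate sustained load, borrowed from the standard HPA walkthrough (a sketch; it assumes the demoservice Service routes to the deployment's pods):

```shell
# Hammer the service in a loop from a throwaway busybox pod
kubectl run load-generator --rm -it --image=busybox -- \
  /bin/sh -c "while true; do wget -q -O- http://demoservice; done"
```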
Key takeaways:
- Pods are ephemeral
- Pod IP is not stable
- Services provide abstraction
- Deployments ensure self-healing
- Liveness probes detect failures
- EmptyDir is temporary
- PVC gives persistence
- HPA enables auto-scaling
- OCI networking choice impacts performance
Hope this helps someone. Happy learning!!!!