Each pod has its own IP, hostname, processes, network interfaces and other resources.
A pod contains one or more containers. These containers share a node and some Linux namespaces (it's configurable, e.g. pod's containers by default do not share PID namespace).
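For example, PID namespace sharing can be switched on per pod with the `shareProcessNamespace` field. A minimal sketch (pod and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pid-sharing-demo        # hypothetical name
spec:
  shareProcessNamespace: true   # containers in this pod see each other's processes
  containers:
  - name: main
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sleep", "infinity"]
```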
A pod's containers share the network namespace, so they share one IP address and one port space.
It makes sense to have multiple containers in a pod if these containers need to work together closely, for example:
- a web app with a sidecar container providing TLS support
- a web API container with a separate container that prepares data for that API (in some common storage)
- Istio service mesh (a sidecar proxy injected next to the app container)
Containers in a single pod are scaled together.
All pods can talk to each other, and they are reachable from any node. Pods may define the ports they use in YAML, but they do not have to - it's only informative (and it makes it possible to name the ports, which is useful when creating Services for them).
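A sketch of a pod spec with a named port (pod and image names are placeholders); a Service's `targetPort` can then refer to the port by name instead of by number:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: port-demo          # hypothetical name
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: http           # a Service can reference this port as "http"
      containerPort: 8080
```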
Checking if a pod accepts requests:
- SSH into any node and curl the pod's IP address
- create another pod, get into it, and curl the first pod from there
- `k port-forward <pod-name> <pod-port>` - the easiest thing to do, but the most complex under the hood. It can also be used with Services.
`k logs <pod>` - displays pod logs
- `-p` - logs from the previous instance of the container (if it died)
Deleting a pod deletes the logs as well. Logs are kept in files on the nodes.
# Accessing Pods
`k cp kiada:html/index.html /tmp/index.html` - copies files to/from containers
`k exec <pod> -- ps aux` - executes a command in a pod's container (everything after `--` is the command to run)
`k attach` - attaches to the container's stdin, stdout and stderr (useful if the pod's container expects input). If stdin is not needed, it is no different from `k logs -f`.
- ephemeral containers - currently in Alpha
# Init Containers
A pod can have init container(s) specified. They start before the "main" container(s) and must finish successfully; only then do the "main" container(s) start.
"Main" containers run in parallel; init containers run consecutively (one at a time). Typical uses:
- prepare some files on a volume for main container
- configure some networking
- delay container start until some condition is met
- notify some external service that the pod is starting
Init containers should work in an idempotent way.
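A sketch of a pod with an init container that waits for a dependency (pod, service and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo          # hypothetical name
spec:
  initContainers:          # run one at a time; each must exit with code 0
  - name: wait-for-db
    image: busybox
    command: ["sh", "-c", "until nslookup db; do sleep 2; done"]
  containers:              # started only after all init containers succeed
  - name: app
    image: my-app          # placeholder image
```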
Pod's conditions (a part of "status"):
- PodScheduled - the pod has been scheduled to a worker node
- Initialized - all init containers have completed
- ContainersReady - all containers in the pod are ready
- Ready - the pod is ready to serve requests
Restart policies of pods:
- Always (the default) - the container is restarted regardless of the exit code
- OnFailure - only a non-zero exit code causes a restart
- Never - the container is never restarted
These policies can be defined only on the pod level, not on the container level.
When a container dies and gets restarted, there is a varying delay before it starts again: the first restart is immediate, then the delay grows exponentially up to 5 minutes. The delay is reset if the container has run successfully for at least 10 minutes.
# Liveness Probe
K8s automatically restarts a container that crashes. However, there are situations when the app does not terminate but is unhealthy (e.g., stuck in a deadlock).
A liveness probe may be defined for every container in a pod. It is checked periodically; if the container does not respond, it is considered unhealthy and is terminated (and restarted). Liveness probes cannot be used with init containers.
- HTTP GET - response code between 200-399 is considered successful
- TCP socket - just a TCP connection is attempted
- Exec - executes a command inside of the container and checks exit code
Additional configurable parameters: `initialDelaySeconds`, `periodSeconds`, `timeoutSeconds`, `failureThreshold` (for liveness probes `successThreshold` must be 1).
K8s does not report successful probes anywhere. If needed, the probed app itself can log the incoming probe requests.
When a probe fails, K8s tries to terminate the app gracefully (with TERM). If that fails, it kills it forcefully (with KILL).
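A sketch of a liveness probe with the common tuning parameters (image name and `/healthz` path are placeholders):

```yaml
spec:
  containers:
  - name: app
    image: my-app              # placeholder image
    livenessProbe:
      httpGet:                 # a 200-399 response counts as success
        path: /healthz         # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 15  # wait before the first check
      periodSeconds: 10        # how often to probe
      timeoutSeconds: 1        # how long to wait for a response
      failureThreshold: 3      # consecutive failures before restart
```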
# Readiness Probe
It is used to check if the pod is ready to accept requests. Like the liveness probe, it supports the same three mechanisms (HTTP GET, TCP socket, exec).
Containers without Readiness probe are considered Ready as soon as they are started.
If the readiness probe fails, the pod is removed from the Endpoints object of a Service; unlike with the liveness probe, the container is not restarted.
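A container-spec fragment with a readiness probe (the `/ready` endpoint is a placeholder):

```yaml
    readinessProbe:
      httpGet:
        path: /ready           # hypothetical readiness endpoint
        port: 8080
      periodSeconds: 5
      failureThreshold: 1      # one failure removes the pod from Endpoints
```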
# Startup Probe
An additional probe - the startup probe - may be added if the app is known to take long to start. Then a separate configuration is used to check the app's health during startup: we can define how long the app may take before it starts responding to health checks. It's perfectly normal for the startup probe to fail a couple of times initially. A successful startup probe tells K8s to switch to the liveness probe, which is usually executed at shorter intervals to make sure the app stays alive.
Only after the startup probe completes can the container become Ready (via the readiness probe).
Usually both probes use the same endpoint, but they can be different (or even use different mechanisms).
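A container-spec fragment combining a slow startup probe with a faster liveness probe on the same (placeholder) endpoint:

```yaml
    startupProbe:
      httpGet:
        path: /healthz         # hypothetical health endpoint
        port: 8080
      periodSeconds: 10
      failureThreshold: 30     # up to 30 * 10s = 5 minutes to start
    livenessProbe:
      httpGet:
        path: /healthz         # same endpoint, checked more frequently
        port: 8080
      periodSeconds: 5
```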
# Lifecycle Hooks
There are two hooks:
- post-start - run immediately after the container is started, in parallel with it
- pre-stop - run just before the container is terminated
They are specified per container.
Init containers are similar to post-start hooks, but they're defined per pod.
They can be defined as HTTP GET, or as "exec" (just like probes). "tcpSocket" is not supported.
The container stays in the "Pending" state until post-start completes, and its logs cannot be seen even though the container is already running. If the hook fails, the container is restarted.
The post-start HTTP hook should not be used to target the same container (or the same pod in general): the hook may be executed before the webserver in the container has started, the HTTP request will then fail, and the failure causes a restart - we end up in a restart loop. The HTTP hook is good for notifying other apps that the container is starting.
The pre-stop hook is not invoked if the app terminates by itself.
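A container-spec fragment with both hooks (the command and the `/shutdown` endpoint are placeholders):

```yaml
    lifecycle:
      postStart:
        exec:                  # runs in parallel with the container's process
          command: ["sh", "-c", "echo started > /tmp/started"]
      preStop:
        httpGet:               # not invoked if the app exits on its own
          path: /shutdown      # hypothetical shutdown endpoint
          port: 8080
```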
If the pod is to be killed, a TERM signal is sent to it first. By default, 30 s are given for the container to shut down gracefully; this can be changed with the `terminationGracePeriodSeconds` setting in the pod's `spec`. If the time passes and the container is still running, KILL is sent to it.
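Extending the grace period is a one-line change in the pod spec (image name is a placeholder):

```yaml
spec:
  terminationGracePeriodSeconds: 60  # default is 30
  containers:
  - name: app
    image: my-app                    # placeholder image
```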