Pod & Container: Probes

In Kubernetes, a probe is a mechanism used to determine the health and readiness of a container running within a pod. Probes are defined in the pod specification and performed periodically to make sure that the containers inside a pod are running properly.

Probe Types

Kubernetes provides three types of probes to monitor and manage the health of our containers.

Liveness Probe

This probe checks whether a container is alive (still running properly). If it fails, Kubernetes restarts the container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs.

Readiness Probe

This probe checks whether a container is ready to accept traffic. If it fails, Kubernetes removes the pod from the Service's endpoints. For example, an application might need time to reload its configuration and become temporarily unavailable. In such cases, you don't want to kill the application, but you don't want to send it requests either.

Startup Probe

This probe checks whether a slow-starting container has fully started. It gives the container extra time to start before liveness or readiness checks begin. If it fails, Kubernetes restarts the container. For example, if an application takes 90 seconds to start, a startup probe prevents Kubernetes from thinking it's "dead" during that time.
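
As a sketch, a startup probe for such an app could allow failureThreshold * periodSeconds of startup time, here 12 * 10 = 120 seconds, before the other probes take over (the path and port are assumptions, not part of the original app):

```yaml
startupProbe:
  httpGet:
    path: /health   # assumed endpoint
    port: 8080      # assumed container port
  failureThreshold: 12   # up to 12 failed attempts allowed
  periodSeconds: 10      # one attempt every 10 seconds
```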

Ways to Define Probe Status

For each probe, there are three ways to determine its status:

Command Execution (exec)

Runs a command inside the container.

  • Success: Exit code 0.

  • Failure: Non-zero exit code.
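
For illustration, a liveness probe using exec might look like this in a container spec (the command and file path are placeholders, not part of the simple-go app):

```yaml
livenessProbe:
  exec:
    command:          # runs inside the container; exit code 0 means healthy
    - cat
    - /tmp/healthy    # placeholder file the app would create when healthy
  initialDelaySeconds: 5
  periodSeconds: 10
```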

HTTP Request (httpGet)

Sends an HTTP GET request to a specific endpoint inside the container.

  • Success: HTTP status 200-399.

  • Failure: Any other status code.
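
For illustration, an httpGet liveness probe might look like this (the path and port are assumptions):

```yaml
livenessProbe:
  httpGet:
    path: /health   # assumed endpoint served by the container
    port: 8080      # assumed container port
  initialDelaySeconds: 5
  periodSeconds: 10
```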

TCP Socket (tcpSocket)

Opens a TCP connection on a specific port.

  • Success: Port is open.

  • Failure: Port is closed.
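
For illustration, a tcpSocket readiness probe might look like this (the port is an assumption):

```yaml
readinessProbe:
  tcpSocket:
    port: 8080      # probe succeeds if a TCP connection to this port can be opened
  initialDelaySeconds: 5
  periodSeconds: 10
```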

Example

Let's add a liveness probe and a readiness probe to our simple-go app. For the readiness probe we will use the root / path endpoint. Edit the deployment.yaml file and add the readiness probe configuration.
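
A minimal sketch of what that block could look like under the Deployment's pod template (the container name, image, port, and timing values are assumptions, adjust them to your deployment):

```yaml
# Under spec.template.spec of the Deployment
containers:
- name: simple-go            # assumed container name
  image: simple-go:latest    # assumed image name and tag
  ports:
  - containerPort: 8080      # assumed application port
  readinessProbe:
    httpGet:
      path: /                # root endpoint used for readiness
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
```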

For the liveness probe we will create a new endpoint, /health. This endpoint checks a status variable: if it is true, the endpoint returns a response with HTTP code 200, and if it is false, it returns a response with HTTP code 500.

Edit the main.go file to add the new /health endpoint.
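
A minimal sketch of such a handler, assuming the app uses net/http and a package-level status variable (the variable name, handler name, and port are placeholders):

```go
package main

import "net/http"

// status is the flag the liveness probe checks; the real app may toggle it elsewhere.
var status = true

func healthHandler(w http.ResponseWriter, r *http.Request) {
	if status {
		w.WriteHeader(http.StatusOK) // 200: container is considered alive
		w.Write([]byte("ok"))
		return
	}
	w.WriteHeader(http.StatusInternalServerError) // 500: liveness probe fails
	w.Write([]byte("unhealthy"))
}

func main() {
	http.HandleFunc("/health", healthHandler)
	// ... register the existing / handler and start the server as before
	http.ListenAndServe(":8080", nil) // assumed port
}
```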

Then add the liveness probe configuration to the deployment.yaml file.
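
A sketch of the liveness probe block, next to the readiness probe added earlier (port and timing values are assumptions):

```yaml
livenessProbe:
  httpGet:
    path: /health         # the new endpoint added in main.go
    port: 8080            # assumed application port
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3     # restart after 3 consecutive failures
```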

Rebuild the app image using Docker and then re-apply the deployment.yaml file using kubectl apply.
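
For example (the image name and tag are assumptions matching your earlier build):

```sh
docker build -t simple-go:latest .
kubectl apply -f deployment.yaml
kubectl get pods
```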

Your app should now be running with both a liveness probe and a readiness probe.

Simulate Liveness Failure

Let's edit our app's /health endpoint again. This time we add a counter: once the counter goes above a threshold, the endpoint returns a response with HTTP code 500.
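
One way to sketch this, replacing the earlier handler (the threshold value is arbitrary and the names are placeholders):

```go
package main

import (
	"net/http"
	"sync/atomic"
)

// failCount tracks how many times /health has been called.
var failCount int64

const failThreshold = 5 // arbitrary threshold for the simulation

func healthHandler(w http.ResponseWriter, r *http.Request) {
	if atomic.AddInt64(&failCount, 1) > failThreshold {
		// After the threshold, report unhealthy so the liveness probe starts failing.
		w.WriteHeader(http.StatusInternalServerError)
		w.Write([]byte("unhealthy"))
		return
	}
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/health", healthHandler)
	http.ListenAndServe(":8080", nil) // assumed port
}
```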

Rebuild the app image using Docker again and then restart your deployment. After a few minutes, check the pod list and you should see the pods restarting multiple times.
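
For example (the deployment name is an assumption):

```sh
kubectl rollout restart deployment simple-go
kubectl get pods -w    # watch the RESTARTS column increase
```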

If you check the events, you will see the pods being killed because of liveness probe failures.
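
You can inspect the events with the commands below (replace <pod-name> with one of your pods):

```sh
kubectl describe pod <pod-name>    # the Events section lists the liveness probe failures
kubectl get events --sort-by=.metadata.creationTimestamp
```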
