Job
Jobs represent one-off tasks that run to completion and then stop.
A Kubernetes Job is a workload resource that is used to run a task to completion. Unlike Deployments or StatefulSets, which are designed for long-running applications, a Job is for one-time or batch tasks that need to start, run, and finish successfully.
Task Completion → Ensures a specific task runs and finishes successfully.
Pod Management → Creates and manages Pods until they complete successfully.
Retry Mechanism → If a Pod fails, the Job automatically retries it based on the defined policy.
Parallel Execution → Supports parallel tasks using multiple Pods.
Single Job (Default): Runs one Pod until it completes. Ideal for simple, single-instance tasks.
Parallel Jobs: Runs multiple Pods in parallel to speed up large tasks.
Completion Mode: The Job is considered complete when there have been `.spec.completions` successfully completed Pods.
Indexed Parallel Jobs: Provides a unique identifier (the `JOB_COMPLETION_INDEX` environment variable) for each Pod, which is useful for partitioning tasks.
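The modes above can be sketched with a minimal Indexed parallel Job; the name, image, and command here are illustrative, not from a real workload:

```yaml
# A minimal sketch of an Indexed parallel Job.
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo
spec:
  completions: 5          # Job is complete after 5 Pods finish successfully
  parallelism: 2          # run at most 2 Pods at the same time
  completionMode: Indexed # each Pod gets JOB_COMPLETION_INDEX (0..4)
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sh", "-c", "echo processing partition $JOB_COMPLETION_INDEX"]
```

Omitting `completionMode` and `parallelism` and leaving `completions` at its default of 1 gives the default single-Job behavior.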
Let's create a Job that performs a backup of our PostgreSQL database. You should already have a PostgreSQL database service up and running in your cluster. If not, you can follow the instructions here: .
First, let's create a Persistent Volume (PV) and Persistent Volume Claim (PVC) to store the backup data. Here I put both into a single file called `postgres-backup-pv-pvc.yaml`.
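A minimal sketch of `postgres-backup-pv-pvc.yaml`; the capacity, `storageClassName`, and `hostPath` location are assumptions (a node-local `hostPath` is fine for minikube, but adapt it for your cluster):

```yaml
# postgres-backup-pv-pvc.yaml — PV and PVC for backup storage (sketch).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-backup-pv
spec:
  capacity:
    storage: 1Gi               # assumed size
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /data/postgres-backup  # assumed node-local path (fine for minikube)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-backup-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi
```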
Apply the `yaml` file and validate; we should see the PV and PVC that we just created.
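Assuming the file and resource names above, the apply-and-validate step looks like this:

```shell
kubectl apply -f postgres-backup-pv-pvc.yaml
kubectl get pv,pvc   # both should eventually show STATUS "Bound"
```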
Now let's create a Job to run the `pg_dump` command, which makes consistent backups even if the database is being used concurrently.
Let's create a file called `postgres-backup-job.yaml` and put the definition below in it.
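A sketch of `postgres-backup-job.yaml` based on the fields explained below; the Service name (`postgres`), Secret name (`postgres-secret`) and its keys, and the PVC name are assumptions, so adjust them to match your cluster:

```yaml
# postgres-backup-job.yaml — one-off pg_dump backup Job (sketch).
apiVersion: batch/v1
kind: Job
metadata:
  name: postgres-backup
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: postgres-backup
          image: postgres:17
          command: ["/bin/sh", "-c"]
          args:
            - |
              pg_dump -h postgres -d mydb \
                > /backup/postgres_backup_$(date +%Y%m%d%H%M%S).sql
          env:
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret   # assumed Secret name
                  key: username           # assumed key
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password           # assumed key
          volumeMounts:
            - name: backup-storage
              mountPath: /backup
      volumes:
        - name: backup-storage
          persistentVolumeClaim:
            claimName: postgres-backup-pvc  # assumed PVC name
```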
`apiVersion: batch/v1`: Tells Kubernetes to use the `batch/v1` API, which manages Jobs.
`kind: Job`: Resource type.
`metadata.name`: Names the Job `postgres-backup`.
`spec`: Defines the Job specification.
`template.spec`: The Pod specification (a blueprint for the Pods the Job will create).
`containers`: Defines the list of containers within the Pod.
`name: postgres-backup`: Name of this container.
`image: postgres:17`: Uses the official PostgreSQL 17 Docker image, which includes the `pg_dump` utility.
`command`: Overrides the container's default entrypoint.
`["/bin/sh", "-c"]`: Executes a shell (`sh`) to run a multi-line script.
`args`: Defines the script to run inside the container.
Backup Script:
This script runs `pg_dump` with the host pointed to our Postgres service and the database name `mydb`.
It uses the `PGUSER` and `PGPASSWORD` provided in the environment variables.
`/backup/postgres_backup_$(date +%Y%m%d%H%M%S).sql`: Saves the backup to a timestamped file in the `/backup` directory, so each backup gets a unique name.
Let's apply this file using `kubectl apply`, and you should see your Job with status `Running` or even already `Complete`.
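Assuming the file name above, the commands look like this:

```shell
kubectl apply -f postgres-backup-job.yaml
kubectl get jobs   # COMPLETIONS shows 1/1 once the Job finishes
kubectl get pods   # the Job's Pod should reach "Completed"
```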
We can check the logs to make sure the script ran properly using the `kubectl logs` command.
In minikube we can easily check the Persistent Volume's contents using `minikube ssh` to make sure that our backup script is correct.
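The two checks above can be sketched as follows, assuming the PV's `hostPath` is `/data/postgres-backup`:

```shell
kubectl logs job/postgres-backup               # script output / pg_dump errors
minikube ssh -- ls -lh /data/postgres-backup   # the timestamped .sql files
```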
When your Job has finished, Kubernetes will not delete it automatically. Keeping the Job around is useful so that we can tell whether it succeeded or failed.
But after some time we might want to clean that Job up, and Kubernetes provides a way to do this via `.spec.ttlSecondsAfterFinished`. Specify this field in the Job manifest/configuration so that the Job is cleaned up automatically some time after it finishes.
Let's try adding a TTL configuration to our `postgres-backup` Job. Edit the `yaml` file and put in the change below (we set the TTL to `60` seconds).
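A minimal sketch of the change; only the new field under the Job's `spec` is shown here, the rest of the manifest stays as before:

```yaml
spec:
  ttlSecondsAfterFinished: 60   # delete the Job (and its Pods) 60s after it finishes
```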
With this configuration the Job will be deleted one minute after it finishes. Re-apply the configuration, and after that the `postgres-backup` Job should be gone from the Job list.
We use the `postgres` image because it already has `pg_dump` installed, so we don't need to install it manually every time this Job runs. We will use the credentials available in our Secrets. If your Postgres service doesn't use any Secrets yet, you can follow these instructions: .