Container lifecycles are an important part of any Kubernetes deployment, and it can be difficult to track them manually. With lifecycle hooks, you can run your own handlers at key points in a container's life, such as immediately after it's created and just before it's terminated.
Hooks are commonly used to log container events, implement clean-up scripts, and run asynchronous tasks after a new Pod joins your cluster. In this article, we’ll show how to attach hook handlers to your Pods and gain more control over container lifecycles.
The Two Available Hooks
Current Kubernetes releases support two container lifecycle hooks:
PostStart – Handlers for this hook are called immediately after a new container is created.
PreStop – Handlers for this hook are invoked immediately before Kubernetes terminates a container.
They can be handled using two different mechanisms:
Exec – Runs a specified command inside the container.
HTTP – Makes an HTTP request to a URL inside the container.
Neither hook provides any arguments to its handlers. Each container supports a single handler per hook; it's not possible to call multiple endpoints or combine an exec command with an HTTP request.
Defining Hook Handlers
You define hook handlers for Pods using their containers.lifecycle manifest field. Within this field, set the postStart and preStop properties to implement one or both of the available hooks.
Here’s a simple Pod which logs a message when it starts up:
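The original manifest isn't reproduced here, so the following is a minimal sketch. The Pod name, the echo command, and the /startup_message path match the references later in this article; the busybox image and the sleep entrypoint are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-hooks
spec:
  containers:
    - name: web
      image: busybox:latest
      # Keep the container running so we can inspect it later (assumed entrypoint)
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        postStart:
          exec:
            # Write a message as soon as the container starts
            command: ["sh", "-c", "echo Started > /startup_message"]
```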
Apply the Pod to your cluster using kubectl:
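Assuming the manifest above was saved as pod-with-hooks.yaml:

```shell
kubectl apply -f pod-with-hooks.yaml
```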
Now get a shell to the running container inside the Pod:
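For example, with kubectl exec (busybox provides sh; adjust the shell if you used a different image):

```shell
kubectl exec -it pod/pod-with-hooks -- sh
```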
Read the contents of the /startup_message file:
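Inside the container's shell:

```shell
cat /startup_message
```

If the hook ran, the file contains the Started message written by the handler.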
This demonstrates that the hook was called successfully. An exec hook is considered successful if its command exits with a zero status code.
HTTP Handlers
You can configure an HTTP handler by replacing the exec field with httpGet. Only HTTP GET requests are supported (there’s no httpPost field).
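Here's a sketch of what the lifecycle section looks like with an HTTP handler; only the relevant fragment of the container spec is shown:

```yaml
lifecycle:
  postStart:
    httpGet:
      path: /startup
      port: 80
```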
In this example, Kubernetes makes a GET request to /startup on the container’s port 80. The httpGet field also accepts scheme and host properties to further configure the request.
Here’s a Pod where /shutdown is called over HTTPS before container termination occurs:
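A sketch of such a manifest follows. The Pod name, image, and port are illustrative; the scheme field selects HTTPS as described above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-https-hook
spec:
  containers:
    - name: web
      image: nginx:latest
      lifecycle:
        preStop:
          httpGet:
            # Call /shutdown over HTTPS before the container is terminated
            scheme: HTTPS
            path: /shutdown
            port: 443
```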
HTTP hook handlers are deemed to have succeeded if the HTTP response code lies in the 200-299 range.
Debugging Your Handlers
Hook handlers are managed independently of the Pods they're attached to. Their output isn't collected or stored alongside normal Pod logs, so you won't see the result of exec commands such as echo Started when running kubectl logs pod/pod-with-hooks.
You can debug hooks by viewing a Pod's event history. Failed invocations will be reported as FailedPostStartHook and FailedPreStopHook events. The error message includes a description of what prompted the error.
Try adding this Pod to your cluster:
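A sketch of a Pod with a deliberately broken PostStart hook follows. The Pod name and image are illustrative; missing-command is, as the name suggests, an executable that doesn't exist in the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-broken-hook
spec:
  containers:
    - name: web
      image: busybox:latest
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        postStart:
          exec:
            # This binary doesn't exist, so the hook will always fail
            command: ["missing-command"]
```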
The broken PostStart hook will cause the Pod’s startup to fail. Use kubectl describe to access its event history:
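Using the name from the sketch above:

```shell
kubectl describe pod/pod-with-broken-hook
```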
The FailedPostStartHook event reveals the handler failed because missing-command isn’t a valid executable inside the container. This caused the container to be killed and restarted in a back-off loop. It’ll be stuck like this perpetually as missing-command will never be executable.
Gotchas to Watch For
Hook invocations have a few characteristics which can catch you out. Keeping these in mind can help avoid odd behavior and unexpected failures.
Hooks may be called more than once. Kubernetes guarantees your PostStart and PreStop handlers will be called at least once for each container. In some situations, a hook might be invoked multiple times, so your handlers should be idempotent to withstand this possibility.

Failed hooks kill their container. As the debugging example above illustrates, a failed hook immediately kills its container. You need to make sure your commands and HTTP endpoints are free of errors to avoid unexpected Pod startup issues. Hook handlers should be lightweight and free of dependencies; don't try to access a resource that might not be available immediately after your container starts.

PostStart hooks race the container's ENTRYPOINT. PostStart fires at about the same time as the container is created. Kubernetes doesn't wait for the hook, though – it's called asynchronously alongside the container's ENTRYPOINT, which could complete before your hook handler is invoked. This means your container's entrypoint script will begin running even if your handler reports an error and ends up killing the container.

PreStop hooks block container termination. Kubernetes guarantees your containers won't terminate until their PreStop hooks have completed, up to a maximum time defined by the Pod's termination grace period. The container will be terminated regardless if the hook's still running when the grace period ends (see the sketch below).

PreStop hooks aren't called for completed Pods. This one can be particularly impactful depending on your use case. The current implementation of PreStop only fires when a Pod is terminated due to deletion, resource exhaustion, a probe failure, or a similar event. The hook will not be called for containers that stop naturally because their process finishes its task and exits with a zero status code.
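As a rough sketch of the PreStop timing described above (the Pod name, image, and durations here are arbitrary), a hook that sleeps longer than the grace period will be cut short:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-shutdown
spec:
  # The hook is killed if it's still running after 15 seconds
  terminationGracePeriodSeconds: 15
  containers:
    - name: web
      image: nginx:latest
      lifecycle:
        preStop:
          exec:
            # Blocks termination, but only up to the grace period
            command: ["sh", "-c", "sleep 30"]
```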
Hooks directly impact the lifecycle progression of your Pods. Pods can’t be marked as Running until their PostStart hook completes; similarly, a Pod will be stuck Terminating until PreStop has finished.
Conclusion
Kubernetes lifecycle events are a way to notify containers of their own creation and impending deletion. By providing commands or API endpoints inside your container, you can track the critical lifecycle stages and report them to other components of your infrastructure.
Lifecycle hooks are easy to set up, but they also have some common pitfalls. If you need a more reliable invocation, consider adjacent mechanisms such as startup and readiness probes; these are a better option for scripts that are essential to preparing a new container's environment.