Occasionally, I need to delete and restart k8s pods after I’ve been hammering on a cluster, when I need to perform node maintenance, or when a project has run its course and I want to free up the namespace and/or resources. Below is a general overview of that procedure, but note that it is specific to certain scenarios only and assumes you know the consequences of your actions. I do not warrant any of this!
1) List pods to get the name of the node(s) you need to work on:
kubectl get pods --all-namespaces -o wide
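If you already know which node you’re targeting, you can narrow that listing to just the pods scheduled on it (NODE_NAME below is a placeholder for your actual node name):

```shell
# List only the pods scheduled on a specific node
kubectl get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=NODE_NAME
```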
2) OPTIONAL: Best practice dictates that we at least attempt to drain the node(s) and let Kubernetes handle the pod deletion beforehand. This may not be practical if your project namespace spans multiple nodes:
kubectl drain NODE_NAME(S)
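In practice, a bare drain will often refuse to proceed: DaemonSet-managed pods cannot be evicted, and pods using emptyDir volumes are protected by default. A typical invocation (node name is a placeholder) looks something like:

```shell
# Cordon the node and evict its pods. DaemonSet pods can't be
# evicted, so tell drain to skip them, and allow deleting pods
# that use emptyDir volumes (their local data is lost).
kubectl drain NODE_NAME --ignore-daemonsets --delete-emptydir-data
```

Note that `--delete-emptydir-data` is the flag on current kubectl versions; older releases called it `--delete-local-data`.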
3) At this point, you will either have been successful (jump to step 5) or you will need to delete pods manually. If you have to delete pods manually, do the following:
kubectl delete pods --all -n NAMESPACE
Note: The above applies if you only want to delete pods from a single namespace. If you need to delete pods from multiple namespaces, do so one namespace at a time so you don’t make your cluster unusable.
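If pods get stuck in Terminating, you may need a true force delete (zero grace period). A rough sketch, iterating one namespace at a time as suggested above (the namespace names are made-up examples):

```shell
# Force-delete all pods in each namespace, one namespace at a time.
# WARNING: --force --grace-period=0 removes the pod object without
# waiting for the containers to actually stop; last resort only.
for ns in my-project-1 my-project-2; do   # example namespaces
  kubectl delete pods --all -n "$ns" --force --grace-period=0
done
```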
4) List the pods again to make sure that they were deleted. Some may be in Pending status if you drained the node and your cluster is trying to replicate them.
kubectl get pods --all-namespaces -o wide
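To spot any stragglers quickly, you can filter the listing on pod phase:

```shell
# Show only pods that are still Pending (e.g. waiting to be
# rescheduled after a drain)
kubectl get pods --all-namespaces --field-selector status.phase=Pending
```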
5) Uncordon your nodes so kube-scheduler will start using them again:
kubectl uncordon NODE_NAME(S)
Again, this is just a quick and dirty way to get rid of some pods – it assumes you know what the consequences of your actions are. I don’t do this manually with kubectl often enough to remember the sequence, so I’m putting it down here for posterity.