If Pods from a Deployment Do Not Start on EKS, a FailedScheduling Warning May Indicate Insufficient Resources

kubernetes
2021-02-11 23:11 ytyng

It sounds obvious, but after getting used to a large in-house Kubernetes cluster, I overlooked it when testing on a small EKS cluster.

Looking at the deployments with kubectl get:

% kubectl get deployment -n kube-system
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
alb-ingress-controller   0/1     1            0           7m6s
coredns                  2/2     2            2           149d

I want this alb-ingress-controller to start, but READY is 0.

In this case, I could not see the pod with kubectl get pod (maybe some option shows it; I'm not sure), but from the EKS web console you can find the name of the pod that is trying to start and then describe it.
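
If you would rather stay on the command line, an unscheduled pod sits in the Pending phase, so a field selector like the one below should also list it (a sketch I have not re-verified against this cluster; the namespace is the same kube-system as above):

% kubectl get pods -n kube-system --field-selector=status.phase=Pending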

% kubectl -n kube-system describe pod/alb-ingress-controller-5686444fbf-cjmqj


...
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  0s (x3 over 75s)  default-scheduler  0/1 nodes are available: 1 Too many pods.

When a Warning FailedScheduling event is shown, it means the scheduler could not place the Pod on any node, so the Pod never even got as far as starting; the cause is external to the Pod itself. Here the message "0/1 nodes are available: 1 Too many pods." says the only node has already reached its maximum number of pods (on EKS that per-node limit typically comes from the instance type's ENI/IP capacity).
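
To confirm that the node's pod limit is the problem, you can look at the node itself: the Allocatable section of kubectl describe node includes a pods count, and the Non-terminated Pods section shows how many are already running. The node name below is a placeholder:

% kubectl get nodes
% kubectl describe node <node-name> | grep -A 7 'Allocatable:'
% kubectl describe node <node-name> | grep 'Non-terminated Pods'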

It was probably due to this lack of capacity, so I deleted some other pods, and then it worked.
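
Deleting pods frees up a slot, but if nothing is safe to delete, the alternative is to add capacity instead. A sketch, assuming the worker nodes belong to a node group managed by eksctl (the cluster and node group names are placeholders):

% eksctl scale nodegroup --cluster=<cluster-name> --name=<nodegroup-name> --nodes=2

Scaling down a less important deployment with kubectl scale deployment <name> --replicas=0 would have a similar effect without touching the nodes.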
