Background
During testing we often need to kill and restart the containers in a Pod repeatedly. While doing so, the Pod frequently ends up in the CrashLoopBackOff state, and the time it takes for a container to be restarted keeps growing.
Analysis
To keep containers from restarting too frequently, Kubernetes throttles the restart process with a back-off mechanism. The official documentation describes it as:
Failed containers that are restarted by Kubelet, are restarted with an exponential back-off delay, the delay is in multiples of sync-frequency 0, 1x, 2x, 4x, 8x … capped at 5 minutes and is reset after 10 minutes of successful execution.
- https://kubernetes-v1-4.github.io/docs/user-guide/pod-states/
Here are the conclusions up front (a short sketch of the resulting delay schedule follows this list):
- When a container in a Pod is about to be restarted (concretely, during the periodic SyncPod() run), the kubelet uses the Pod's Status to find the time at which this container last exited (a Pod may contain several containers); call it ts.
- If this is the first restart, the container is restarted right away, and the next back-off delay is recorded in the kubelet's backOff.perItemBackoff, a map whose key is an id computed from the container and the pod it belongs to. The initial value is 10s, and it then grows exponentially, capped at 5 minutes.
- If this is not the first restart, i.e. the kubelet's backOff.perItemBackoff already holds a back-off entry for this container, call it backoff, then:
  - if now() - ts < backoff, not enough time has passed yet, so a CrashLoopBackOff event is thrown (and the comparison is repeated when the next SyncPod cycle comes around);
  - otherwise the container has waited long enough and can be restarted: backOff.Next() is called, doubling the back-off recorded for this container, and the restart is carried out.
- While computing the back-off in the step above, the kubelet also checks how long it has been since the container last exited; if that interval exceeds 2 * MaxContainerBackOff = 10 minutes, the back-off for this container is reset to the initial 10s.
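Putting the list together: for a container that crashes immediately every time, the delay between restarts grows roughly as 0s, 10s, 20s, 40s, 1m20s, 2m40s, 5m, 5m, … Below is a minimal sketch of just that arithmetic (this is not kubelet code; the two constants mirror backOffPeriod and MaxContainerBackOff discussed in the source-code section):
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second  // backOffPeriod in kubelet.go
		maxDelay = 300 * time.Second // MaxContainerBackOff in kubelet.go
	)
	// The first restart happens immediately; every later restart waits the
	// recorded back-off, which doubles each time and is capped at maxDelay.
	delay, next := time.Duration(0), initial
	for i := 1; i <= 8; i++ {
		fmt.Printf("restart #%d: wait ~%v\n", i, delay)
		delay = next
		next *= 2
		if next > maxDelay {
			next = maxDelay
		}
	}
}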
Source code details
- kubernetes/pkg/kubelet/kubelet.go
Reading the source, this file defines two constants:
MaxContainerBackOff = 300 * time.Second
backOffPeriod = time.Second * 10
These two values are used to construct a Backoff object; it is a field of the kubelet, so it applies to every pod on the node:
klet.backOff = flowcontrol.NewBackOff(backOffPeriod, MaxContainerBackOff)
The Backoff struct is defined as follows:
type Backoff struct {
	sync.Mutex
	Clock           clock.Clock
	defaultDuration time.Duration            // initial back-off (backOffPeriod, 10s for the kubelet)
	maxDuration     time.Duration            // upper bound (MaxContainerBackOff, 5 minutes for the kubelet)
	perItemBackoff  map[string]*backoffEntry // one entry per item; the kubelet keys it by getStableKey
}
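To get a feel for how this object behaves, here is a small standalone sketch exercising the same flowcontrol API (Next, Get, IsInBackOffSince) with the kubelet's parameters. The import path is assumed to be k8s.io/client-go/util/flowcontrol (in the tree discussed here the package lived under pkg/util/flowcontrol), and the key string is made up for illustration:
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Same parameters the kubelet uses: 10s initial delay, capped at 5 minutes.
	backOff := flowcontrol.NewBackOff(10*time.Second, 300*time.Second)

	// Illustrative key only; the kubelet derives it from the pod and container
	// via getStableKey (see below).
	key := "mypod_default_<pod-uid>_mycontainer_<hash>"

	ts := time.Now() // pretend the container exited just now

	fmt.Println(backOff.IsInBackOffSince(key, ts)) // false: no entry yet, restart immediately
	backOff.Next(key, ts)                          // record the initial back-off
	fmt.Println(backOff.Get(key))                  // 10s

	backOff.Next(key, ts)         // the container failed again shortly after: double it
	fmt.Println(backOff.Get(key)) // 20s

	fmt.Println(backOff.IsInBackOffSince(key, ts)) // true: far less than 20s since ts
}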
The object is then used in the kubelet's SyncPod method:
// Call the container runtime's SyncPod callback
result := kl.containerRuntime.SyncPod(pod, apiPodStatus, podStatus, pullSecrets, kl.backOff)
- kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go
SyncPod does the following:
// SyncPod syncs the running pod into the desired pod by executing following steps:
//
// 1. Compute sandbox and container changes.
// 2. Kill pod sandbox if necessary.
// 3. Kill any containers that should not be running.
// 4. Create sandbox if necessary.
// 5. Create init containers.
// 6. Create normal containers.
func (m *kubeGenericRuntimeManager) SyncPod(pod *v1.Pod, _ v1.PodStatus, podStatus *kubecontainer.PodStatus, pullSecrets []v1.Secret, backOff *flowcontrol.Backoff) (result kubecontainer.PodSyncResult) {
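In step 6 (creating the normal containers), SyncPod consults doBackOff for every container it is about to (re)create: if the container is still inside its back-off window, the start is skipped and recorded as a failed sync result, which is what ultimately surfaces as CrashLoopBackOff.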
The key function here is doBackOff, defined in the same file:
// If a container is still in backoff, the function will return a brief backoff error and
// a detailed error message.
func (m *kubeGenericRuntimeManager) doBackOff(pod *v1.Pod, container *v1.Container, podStatus *kubecontainer.PodStatus, backOff *flowcontrol.Backoff) (bool, string, error) {
var cStatus *kubecontainer.ContainerStatus
for _, c := range podStatus.ContainerStatuses {
if c.Name == container.Name && c.State == kubecontainer.ContainerStateExited {
cStatus = c
break
}
}
if cStatus == nil {
return false, "", nil
}
glog.Infof("checking backoff for container %q in pod %q", container.Name, format.Pod(pod))
// Use the finished time of the latest exited container as the start point to calculate whether to do back-off.
ts := cStatus.FinishedAt
// backOff requires a unique key to identify the container.
key := getStableKey(pod, container)
if backOff.IsInBackOffSince(key, ts) {
if ref, err := kubecontainer.GenerateContainerRef(pod, container); err == nil {
m.recorder.Eventf(ref, v1.EventTypeWarning, events.BackOffStartContainer, "Back-off restarting failed container")
}
err := fmt.Errorf("Back-off %s restarting failed container=%s pod=%s", backOff.Get(key), container.Name, format.Pod(pod))
glog.Infof("%s", err.Error())
return true, err.Error(), kubecontainer.ErrCrashLoopBackOff
}
backOff.Next(key, ts)
return false, "", nil
}
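The getStableKey helper used above (defined in kuberuntime_container.go) identifies a (pod, container) pair: roughly, it joins pod.Name, pod.Namespace, pod.UID, container.Name, and a hash of the container spec with underscores. As a result, the back-off entry is shared across restarts of the same container, but not across different containers, different pods, or a changed container spec.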
The backOff.Next function is defined as follows:
// move backoff to the next mark, capping at maxDuration
func (p *Backoff) Next(id string, eventTime time.Time) {
p.Lock()
defer p.Unlock()
entry, ok := p.perItemBackoff[id]
if !ok || hasExpired(eventTime, entry.lastUpdate, p.maxDuration) {
entry = p.initEntryUnsafe(id)
} else {
delay := entry.backoff * 2 // exponential
entry.backoff = time.Duration(integer.Int64Min(int64(delay), int64(p.maxDuration)))
}
entry.lastUpdate = p.Clock.Now()
}
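Two small helpers referenced by Next are not shown above. In the flowcontrol package of the same era they look roughly like this (a sketch; treat the exact details as an assumption): the factor of two in hasExpired is what the conclusions above call 2 * MaxContainerBackOff = 10 minutes, and initEntryUnsafe is the reset back to the initial 10s.
// A back-off entry "expires" once more than 2*maxDuration has passed since it
// was last updated; Next then re-initialises it instead of doubling it.
func hasExpired(eventTime time.Time, lastUpdate time.Time, maxDuration time.Duration) bool {
	return eventTime.Sub(lastUpdate) > maxDuration*2
}

// initEntryUnsafe creates a fresh entry at defaultDuration (10s for the kubelet);
// callers must already hold the lock.
func (p *Backoff) initEntryUnsafe(id string) *backoffEntry {
	entry := &backoffEntry{backoff: p.defaultDuration}
	p.perItemBackoff[id] = entry
	return entry
}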