

k8s Eviction Series (7): kube-controller-manager Eviction - taintManager Source Code Analysis

2023-06-25 10:48:34 · Source: 博客園 (cnblogs)

Overview

The main job of taintManager: when a node is tainted with a NoExecute taint, pods on that node that cannot tolerate the taint are evicted by taintManager, and newly created pods must also tolerate the taint to be scheduled onto the node.

Whether taintManager runs is controlled by the kcm startup flag --enable-taint-manager; it runs when the flag is true (the default is true).



The kcm flag --feature-gates=TaintBasedEvictions=xxx (default true) works together with --enable-taint-manager; taint-based eviction is enabled only when both are true.

kcm taint eviction

When a NoExecute taint appears on a node, each pod on the node is checked against the node's taints: pods that cannot tolerate them are deleted immediately, while pods that tolerate all the taints are deleted after the minimum of their toleration times elapses.

Source code analysis

1. Struct analysis

1.1 The NoExecuteTaintManager struct

NoExecuteTaintManager is the main struct of taintManager. Its key fields are:
(1) taintEvictionQueue: pods that cannot tolerate a node's NoExecute taints are added to this queue and subsequently deleted;
(2) taintedNodes: records each node's NoExecute taints;
(3) nodeUpdateQueue: a node enters this queue on add, delete, or update events of the node object (for updates, only when the old and new objects' taints differ);
(4) podUpdateQueue: a pod enters this queue on add, delete, or update events of the pod object (for updates, only when the old and new objects' NodeName or Tolerations differ);
(5) nodeUpdateChannels: 8 channels of type nodeUpdateItem; a worker consumes nodeUpdateQueue, computes an index from the node name, and puts the node into one of these channels;
(6) podUpdateChannels: 8 channels of type podUpdateItem; a worker consumes podUpdateQueue, computes an index from the pod's node name, and puts the pod into one of these channels;

```go
// pkg/controller/nodelifecycle/scheduler/taint_manager.go
type NoExecuteTaintManager struct {
	client                clientset.Interface
	recorder              record.EventRecorder
	getPod                GetPodFunc
	getNode               GetNodeFunc
	getPodsAssignedToNode GetPodsByNodeNameFunc

	taintEvictionQueue *TimedWorkerQueue
	// keeps a map from nodeName to all noExecute taints on that Node
	taintedNodesLock sync.Mutex
	taintedNodes     map[string][]v1.Taint

	nodeUpdateChannels []chan nodeUpdateItem
	podUpdateChannels  []chan podUpdateItem

	nodeUpdateQueue workqueue.Interface
	podUpdateQueue  workqueue.Interface
}
```
1.2 taintEvictionQueue analysis

The taintEvictionQueue field is a queue of type TimedWorkerQueue. Calling tc.taintEvictionQueue.AddWork adds a pod to the queue together with a timer; when the timer fires, the workFunc runs automatically. When taintEvictionQueue is initialized, the workFunc passed in is the deletePodHandler function, whose job is to delete the pod.

So a pod that enters taintEvictionQueue is deleted at the scheduled time.

1.3 pod.Spec.Tolerations analysis

pod.Spec.Tolerations holds the pod's taint toleration configuration.

```go
// vendor/k8s.io/api/core/v1/types.go
type Toleration struct {
	Key string `json:"key,omitempty" protobuf:"bytes,1,opt,name=key"`
	Operator TolerationOperator `json:"operator,omitempty" protobuf:"bytes,2,opt,name=operator,casttype=TolerationOperator"`
	Value string `json:"value,omitempty" protobuf:"bytes,3,opt,name=value"`
	Effect TaintEffect `json:"effect,omitempty" protobuf:"bytes,4,opt,name=effect,casttype=TaintEffect"`
	TolerationSeconds *int64 `json:"tolerationSeconds,omitempty" protobuf:"varint,5,opt,name=tolerationSeconds"`
}
```

The Toleration fields are:
(1) Key: matches the node taint's Key;
(2) Operator: the relation between the toleration's Value and the taint's Value when their Keys match; the default Equal means the values must be equal, while Exists means a matching Key alone is enough and the Value is not compared;
(3) Value: matches the node taint's Value;
(4) Effect: matches the node taint's Effect;
(5) TolerationSeconds: how long the taint is tolerated;

Configuration example:

```yaml
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoExecute"
  tolerationSeconds: 3600
```

This configuration means that if the pod is running and a matching taint is added to its node, the pod keeps running on the node for 3600 seconds and is then evicted (unless the matching taint is removed before then, in which case the pod is not evicted).
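The matching rules above can be sketched in a few lines of Go. The types below are simplified, illustrative stand-ins for v1.Taint and v1.Toleration (not the real Kubernetes API types, which also handle extra cases such as an empty Key with Exists), but they show the Equal/Exists semantics:

```go
package main

import "fmt"

// Simplified stand-ins for v1.Taint and v1.Toleration, for illustration only.
type Taint struct {
	Key, Value, Effect string
}

type Toleration struct {
	Key, Operator, Value, Effect string // Operator: "Equal" or "Exists"
	TolerationSeconds            *int64 // nil means tolerate forever
}

// tolerates reports whether a single toleration matches a taint,
// following the Equal/Exists semantics described above.
func tolerates(tol Toleration, taint Taint) bool {
	if tol.Effect != "" && tol.Effect != taint.Effect {
		return false
	}
	if tol.Key != taint.Key {
		return false
	}
	if tol.Operator == "Exists" {
		return true // a matching key is enough; the value is not compared
	}
	return tol.Value == taint.Value // "Equal" (the default)
}

func main() {
	sixtyMin := int64(3600)
	taint := Taint{Key: "key1", Value: "value1", Effect: "NoExecute"}
	tol := Toleration{Key: "key1", Operator: "Equal", Value: "value1", Effect: "NoExecute", TolerationSeconds: &sixtyMin}
	fmt.Println(tolerates(tol, taint)) // true: key, value and effect all match
	fmt.Println(tolerates(Toleration{Key: "key1", Operator: "Exists", Effect: "NoExecute"}, taint)) // true: Exists ignores Value
	fmt.Println(tolerates(Toleration{Key: "other", Operator: "Equal", Value: "value1"}, taint))     // false: key mismatch
}
```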

2. Initialization analysis

2.1 NewNodeLifecycleController

NewNodeLifecycleController is the initialization function of NodeLifecycleController. It registers pod and node EventHandlers for taintManager; Add, Update, and Delete events all call taintManager's PodUpdated and NodeUpdated methods.

```go
// pkg/controller/nodelifecycle/node_lifecycle_controller.go
func NewNodeLifecycleController(
	...
	podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			...
			if nc.taintManager != nil {
				nc.taintManager.PodUpdated(nil, pod)
			}
		},
		UpdateFunc: func(prev, obj interface{}) {
			...
			if nc.taintManager != nil {
				nc.taintManager.PodUpdated(prevPod, newPod)
			}
		},
		DeleteFunc: func(obj interface{}) {
			...
			if nc.taintManager != nil {
				nc.taintManager.PodUpdated(pod, nil)
			}
		},
	})
	...
	if nc.runTaintManager {
		podGetter := func(name, namespace string) (*v1.Pod, error) { return nc.podLister.Pods(namespace).Get(name) }
		nodeLister := nodeInformer.Lister()
		nodeGetter := func(name string) (*v1.Node, error) { return nodeLister.Get(name) }
		nc.taintManager = scheduler.NewNoExecuteTaintManager(kubeClient, podGetter, nodeGetter, nc.getPodsAssignedToNode)
		nodeInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
			AddFunc: nodeutil.CreateAddNodeHandler(func(node *v1.Node) error {
				nc.taintManager.NodeUpdated(nil, node)
				return nil
			}),
			UpdateFunc: nodeutil.CreateUpdateNodeHandler(func(oldNode, newNode *v1.Node) error {
				nc.taintManager.NodeUpdated(oldNode, newNode)
				return nil
			}),
			DeleteFunc: nodeutil.CreateDeleteNodeHandler(func(node *v1.Node) error {
				nc.taintManager.NodeUpdated(node, nil)
				return nil
			}),
		})
	}
	...
}
```
2.1.1 tc.NodeUpdated

The tc.NodeUpdated method checks whether the old and new node objects' taints differ; if they do, it calls tc.nodeUpdateQueue.Add to put the node into the nodeUpdateQueue.

```go
// pkg/controller/nodelifecycle/scheduler/taint_manager.go
func (tc *NoExecuteTaintManager) NodeUpdated(oldNode *v1.Node, newNode *v1.Node) {
	nodeName := ""
	oldTaints := []v1.Taint{}
	if oldNode != nil {
		nodeName = oldNode.Name
		oldTaints = getNoExecuteTaints(oldNode.Spec.Taints)
	}
	newTaints := []v1.Taint{}
	if newNode != nil {
		nodeName = newNode.Name
		newTaints = getNoExecuteTaints(newNode.Spec.Taints)
	}
	if oldNode != nil && newNode != nil && helper.Semantic.DeepEqual(oldTaints, newTaints) {
		return
	}
	updateItem := nodeUpdateItem{
		nodeName: nodeName,
	}
	tc.nodeUpdateQueue.Add(updateItem)
}
```
2.1.2 tc.PodUpdated

The tc.PodUpdated method checks whether the old and new pod objects' NodeName or Tolerations differ; if they do, it calls tc.podUpdateQueue.Add to put the pod into the podUpdateQueue.

```go
// pkg/controller/nodelifecycle/scheduler/taint_manager.go
func (tc *NoExecuteTaintManager) PodUpdated(oldPod *v1.Pod, newPod *v1.Pod) {
	podName := ""
	podNamespace := ""
	nodeName := ""
	oldTolerations := []v1.Toleration{}
	if oldPod != nil {
		podName = oldPod.Name
		podNamespace = oldPod.Namespace
		nodeName = oldPod.Spec.NodeName
		oldTolerations = oldPod.Spec.Tolerations
	}
	newTolerations := []v1.Toleration{}
	if newPod != nil {
		podName = newPod.Name
		podNamespace = newPod.Namespace
		nodeName = newPod.Spec.NodeName
		newTolerations = newPod.Spec.Tolerations
	}
	if oldPod != nil && newPod != nil && helper.Semantic.DeepEqual(oldTolerations, newTolerations) && oldPod.Spec.NodeName == newPod.Spec.NodeName {
		return
	}
	updateItem := podUpdateItem{
		podName:      podName,
		podNamespace: podNamespace,
		nodeName:     nodeName,
	}
	tc.podUpdateQueue.Add(updateItem)
}
```
2.2 taintEvictionQueue

In taintManager's initialization function NewNoExecuteTaintManager, CreateWorkerQueue is called to initialize the taintEvictionQueue.

```go
// pkg/controller/nodelifecycle/scheduler/taint_manager.go
func NewNoExecuteTaintManager(...) ... {
	...
	tm.taintEvictionQueue = CreateWorkerQueue(deletePodHandler(c, tm.emitPodDeletionEvent))
	...
}
```

The CreateWorkerQueue function initializes and returns a TimedWorkerQueue struct.

```go
// pkg/controller/nodelifecycle/scheduler/timed_workers.go
func CreateWorkerQueue(f func(args *WorkArgs) error) *TimedWorkerQueue {
	return &TimedWorkerQueue{
		workers:  make(map[string]*TimedWorker),
		workFunc: f,
	}
}
```
2.2.1 deletePodHandler

When taintEvictionQueue is initialized, deletePodHandler is passed in as the handler for queue items. Its main logic is to ask the apiserver to delete the pod object, which is why pods placed into the taintEvictionQueue end up deleted.

```go
// pkg/controller/nodelifecycle/scheduler/taint_manager.go
func deletePodHandler(c clientset.Interface, emitEventFunc func(types.NamespacedName)) func(args *WorkArgs) error {
	return func(args *WorkArgs) error {
		ns := args.NamespacedName.Namespace
		name := args.NamespacedName.Name
		klog.V(0).Infof("NoExecuteTaintManager is deleting Pod: %v", args.NamespacedName.String())
		if emitEventFunc != nil {
			emitEventFunc(args.NamespacedName)
		}
		var err error
		for i := 0; i < retries; i++ {
			err = c.CoreV1().Pods(ns).Delete(name, &metav1.DeleteOptions{})
			if err == nil {
				break
			}
			time.Sleep(10 * time.Millisecond)
		}
		return err
	}
}
```
2.2.2 tc.taintEvictionQueue.AddWork

Now look at the tc.taintEvictionQueue.AddWork method, which adds a pod to the taintEvictionQueue: it calls CreateWorker to create a worker that will delete that pod.

```go
// pkg/controller/nodelifecycle/scheduler/timed_workers.go
func (q *TimedWorkerQueue) AddWork(args *WorkArgs, createdAt time.Time, fireAt time.Time) {
	key := args.KeyFromWorkArgs()
	klog.V(4).Infof("Adding TimedWorkerQueue item %v at %v to be fired at %v", key, createdAt, fireAt)
	q.Lock()
	defer q.Unlock()
	if _, exists := q.workers[key]; exists {
		klog.Warningf("Trying to add already existing work for %+v. Skipping.", args)
		return
	}
	worker := CreateWorker(args, createdAt, fireAt, q.getWrappedWorkerFunc(key))
	q.workers[key] = worker
}
```

The CreateWorker function first checks whether the workFunc should run immediately; if so, it launches a goroutine to run it and returns. Otherwise it arms a timer that launches a goroutine to run the workFunc when the timer fires.

```go
// pkg/controller/nodelifecycle/scheduler/timed_workers.go
func CreateWorker(args *WorkArgs, createdAt time.Time, fireAt time.Time, f func(args *WorkArgs) error) *TimedWorker {
	delay := fireAt.Sub(createdAt)
	if delay <= 0 {
		go f(args)
		return nil
	}
	timer := time.AfterFunc(delay, func() { f(args) })
	return &TimedWorker{
		WorkItem:  args,
		CreatedAt: createdAt,
		FireAt:    fireAt,
		Timer:     timer,
	}
}
```
2.2.3 tc.taintEvictionQueue.Cancel

The tc.taintEvictionQueue.Cancel method stops the corresponding pod's timer, i.e. it prevents the pod's workFunc from running (the pod is not deleted).

```go
// pkg/controller/nodelifecycle/scheduler/timed_workers.go
func (w *TimedWorker) Cancel() {
	if w != nil {
		w.Timer.Stop()
	}
}
```
3. Core processing logic analysis

nc.taintManager.Run

nc.taintManager.Run is taintManager's start method and contains all the processing logic. Its main job is to check whether each pod on a node can tolerate the node's NoExecute taints: pods that cannot are deleted, and pods that tolerate all the taints are deleted after the minimum of their toleration times elapses.

Main logic:
(1) Create 8 channels of type nodeUpdateItem (buffer size 10) and assign them to tc.nodeUpdateChannels; create 8 channels of type podUpdateItem (buffer size 1) and assign them to tc.podUpdateChannels;
(2) Consume the tc.nodeUpdateQueue queue: hash the node name and put the node into the corresponding tc.nodeUpdateChannels[hash];
(3) Consume the tc.podUpdateQueue queue: hash the pod's node name and put the pod into the corresponding tc.podUpdateChannels[hash];
(4) Start 8 goroutines, each calling tc.worker to process one of tc.nodeUpdateChannels and tc.podUpdateChannels: check whether each pod on the node can tolerate the node's NoExecute taints; pods that cannot are deleted, and pods that tolerate all the taints are deleted after the minimum of their toleration times elapses.

```go
// pkg/controller/nodelifecycle/scheduler/taint_manager.go
func (tc *NoExecuteTaintManager) Run(stopCh <-chan struct{}) {
	klog.V(0).Infof("Starting NoExecuteTaintManager")
	for i := 0; i < UpdateWorkerSize; i++ {
		tc.nodeUpdateChannels = append(tc.nodeUpdateChannels, make(chan nodeUpdateItem, NodeUpdateChannelSize))
		tc.podUpdateChannels = append(tc.podUpdateChannels, make(chan podUpdateItem, podUpdateChannelSize))
	}

	// Functions that are responsible for taking work items out of the workqueues and putting them
	// into channels.
	go func(stopCh <-chan struct{}) {
		for {
			item, shutdown := tc.nodeUpdateQueue.Get()
			if shutdown {
				break
			}
			nodeUpdate := item.(nodeUpdateItem)
			hash := hash(nodeUpdate.nodeName, UpdateWorkerSize)
			select {
			case <-stopCh:
				tc.nodeUpdateQueue.Done(item)
				return
			case tc.nodeUpdateChannels[hash] <- nodeUpdate:
				// tc.nodeUpdateQueue.Done is called by the nodeUpdateChannels worker
			}
		}
	}(stopCh)

	go func(stopCh <-chan struct{}) {
		for {
			item, shutdown := tc.podUpdateQueue.Get()
			if shutdown {
				break
			}
			// The fact that pods are processed by the same worker as nodes is used to avoid races
			// between node worker setting tc.taintedNodes and pod worker reading this to decide
			// whether to delete pod.
			// It's possible that even without this assumption this code is still correct.
			podUpdate := item.(podUpdateItem)
			hash := hash(podUpdate.nodeName, UpdateWorkerSize)
			select {
			case <-stopCh:
				tc.podUpdateQueue.Done(item)
				return
			case tc.podUpdateChannels[hash] <- podUpdate:
				// tc.podUpdateQueue.Done is called by the podUpdateChannels worker
			}
		}
	}(stopCh)

	wg := sync.WaitGroup{}
	wg.Add(UpdateWorkerSize)
	for i := 0; i < UpdateWorkerSize; i++ {
		go tc.worker(i, wg.Done, stopCh)
	}
	wg.Wait()
}
```
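The hash call above shards updates across the 8 workers by node name, so all node and pod updates for the same node always land on the same worker. A minimal sketch of this FNV-based sharding (the shard name is illustrative; the mechanism mirrors the hash helper used in taint_manager.go):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"io"
)

// shard maps a node name to one of max worker indices. Hashing the name means
// the same node is always processed by the same worker, which serializes node
// and pod handling per node and avoids races on shared per-node state.
func shard(val string, max int) int {
	hasher := fnv.New32a()
	io.WriteString(hasher, val)
	return int(hasher.Sum32() % uint32(max))
}

func main() {
	const workers = 8
	for _, node := range []string{"node-1", "node-2", "node-3"} {
		fmt.Printf("%s -> worker %d\n", node, shard(node, workers))
	}
	// Deterministic: the same name always picks the same worker.
	fmt.Println(shard("node-1", workers) == shard("node-1", workers)) // true
}
```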
tc.worker

The tc.worker method consumes nodeUpdateChannels and podUpdateChannels, calling tc.handleNodeUpdate and tc.handlePodUpdate respectively for further processing.

```go
// pkg/controller/nodelifecycle/scheduler/taint_manager.go
func (tc *NoExecuteTaintManager) worker(worker int, done func(), stopCh <-chan struct{}) {
	defer done()

	// When processing events we want to prioritize Node updates over Pod updates,
	// as NodeUpdates that interest NoExecuteTaintManager should be handled as soon as possible -
	// we don't want user (or system) to wait until PodUpdate queue is drained before it can
	// start evicting Pods from tainted Nodes.
	for {
		select {
		case <-stopCh:
			return
		case nodeUpdate := <-tc.nodeUpdateChannels[worker]:
			tc.handleNodeUpdate(nodeUpdate)
			tc.nodeUpdateQueue.Done(nodeUpdate)
		case podUpdate := <-tc.podUpdateChannels[worker]:
			// If we found a Pod update we need to empty Node queue first.
		priority:
			for {
				select {
				case nodeUpdate := <-tc.nodeUpdateChannels[worker]:
					tc.handleNodeUpdate(nodeUpdate)
					tc.nodeUpdateQueue.Done(nodeUpdate)
				default:
					break priority
				}
			}
			// After Node queue is emptied we process podUpdate.
			tc.handlePodUpdate(podUpdate)
			tc.podUpdateQueue.Done(podUpdate)
		}
	}
}
```
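The labeled `priority:` loop above is a common Go pattern: a non-blocking select with a `default` case that drains one channel completely before lower-priority work proceeds. A minimal sketch of just that drain step (drainFirst is an illustrative name, not from the source):

```go
package main

import "fmt"

// drainFirst mirrors tc.worker's priority loop: before handling a pod update,
// empty the node channel with a non-blocking select so node updates are never
// stuck behind pod updates.
func drainFirst(nodeCh chan string, handle func(string)) {
priority:
	for {
		select {
		case n := <-nodeCh:
			handle(n)
		default:
			break priority // channel empty: fall through to pod work
		}
	}
}

func main() {
	nodeCh := make(chan string, 10)
	nodeCh <- "node-1"
	nodeCh <- "node-2"

	var handled []string
	drainFirst(nodeCh, func(n string) { handled = append(handled, n) })

	fmt.Println(handled)     // [node-1 node-2]
	fmt.Println(len(nodeCh)) // 0: channel fully drained
}
```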
3.1 tc.handleNodeUpdate

The tc.handleNodeUpdate method checks whether each pod on the node can tolerate the node's NoExecute taints; pods that cannot are deleted, and pods that tolerate all the taints are deleted after the minimum of their toleration times elapses.

Main logic:
(1) Get the node object from the informer's local cache;
(2) Extract the NoExecute taints from node.Spec.Taints;
(3) Update tc.taintedNodes with the node's NoExecute taints;
(4) Call tc.getPodsAssignedToNode to get all pods on the node; if there are none, return;
(5) If the node has no NoExecute taints, iterate over all pods on the node, calling tc.cancelWorkWithEvent to remove each pod from the taintEvictionQueue, then return;
(6) Iterate over all pods on the node, calling tc.processPodOnNode for further processing of each pod;

```go
// pkg/controller/nodelifecycle/scheduler/taint_manager.go
func (tc *NoExecuteTaintManager) handleNodeUpdate(nodeUpdate nodeUpdateItem) {
	node, err := tc.getNode(nodeUpdate.nodeName)
	if err != nil {
		if apierrors.IsNotFound(err) {
			// Delete
			klog.V(4).Infof("Noticed node deletion: %#v", nodeUpdate.nodeName)
			tc.taintedNodesLock.Lock()
			defer tc.taintedNodesLock.Unlock()
			delete(tc.taintedNodes, nodeUpdate.nodeName)
			return
		}
		utilruntime.HandleError(fmt.Errorf("cannot get node %s: %v", nodeUpdate.nodeName, err))
		return
	}

	// Create or Update
	klog.V(4).Infof("Noticed node update: %#v", nodeUpdate)
	taints := getNoExecuteTaints(node.Spec.Taints)
	func() {
		tc.taintedNodesLock.Lock()
		defer tc.taintedNodesLock.Unlock()
		klog.V(4).Infof("Updating known taints on node %v: %v", node.Name, taints)
		if len(taints) == 0 {
			delete(tc.taintedNodes, node.Name)
		} else {
			tc.taintedNodes[node.Name] = taints
		}
	}()

	// This is critical that we update tc.taintedNodes before we call getPodsAssignedToNode:
	// getPodsAssignedToNode can be delayed as long as all future updates to pods will call
	// tc.PodUpdated which will use tc.taintedNodes to potentially delete delayed pods.
	pods, err := tc.getPodsAssignedToNode(node.Name)
	if err != nil {
		klog.Errorf(err.Error())
		return
	}
	if len(pods) == 0 {
		return
	}
	// Short circuit, to make this controller a bit faster.
	if len(taints) == 0 {
		klog.V(4).Infof("All taints were removed from the Node %v. Cancelling all evictions...", node.Name)
		for i := range pods {
			tc.cancelWorkWithEvent(types.NamespacedName{Namespace: pods[i].Namespace, Name: pods[i].Name})
		}
		return
	}

	now := time.Now()
	for _, pod := range pods {
		podNamespacedName := types.NamespacedName{Namespace: pod.Namespace, Name: pod.Name}
		tc.processPodOnNode(podNamespacedName, node.Name, pod.Spec.Tolerations, taints, now)
	}
}
```
3.1.1 tc.processPodOnNode

The tc.processPodOnNode method checks whether the pod tolerates all of the node's NoExecute taints. If not, the pod is added to the taintEvictionQueue immediately; if it tolerates all of them, it is scheduled into the taintEvictionQueue to be deleted after the minimum of its toleration times elapses.

Main logic:
(1) If the node has no NoExecute taints, call tc.cancelWorkWithEvent to remove the pod from the taintEvictionQueue;
(2) Call v1helper.GetMatchingTolerations to determine whether the pod tolerates all of the node's NoExecute taints and to get the list of tolerations that matched;
(3) If the pod does not tolerate all the taints, call tc.taintEvictionQueue.AddWork to add the pod to the taintEvictionQueue for immediate deletion;
(4) If the pod tolerates all the taints, call tc.taintEvictionQueue.AddWork with a trigger time set to the minimum of the toleration times, so the pod is deleted after that delay;

```go
// pkg/controller/nodelifecycle/scheduler/taint_manager.go
func (tc *NoExecuteTaintManager) processPodOnNode(
	podNamespacedName types.NamespacedName,
	nodeName string,
	tolerations []v1.Toleration,
	taints []v1.Taint,
	now time.Time,
) {
	if len(taints) == 0 {
		tc.cancelWorkWithEvent(podNamespacedName)
	}
	allTolerated, usedTolerations := v1helper.GetMatchingTolerations(taints, tolerations)
	if !allTolerated {
		klog.V(2).Infof("Not all taints are tolerated after update for Pod %v on %v", podNamespacedName.String(), nodeName)
		// We're canceling scheduled work (if any), as we're going to delete the Pod right away.
		tc.cancelWorkWithEvent(podNamespacedName)
		tc.taintEvictionQueue.AddWork(NewWorkArgs(podNamespacedName.Name, podNamespacedName.Namespace), time.Now(), time.Now())
		return
	}
	minTolerationTime := getMinTolerationTime(usedTolerations)
	// getMinTolerationTime returns negative value to denote infinite toleration.
	if minTolerationTime < 0 {
		klog.V(4).Infof("New tolerations for %v tolerate forever. Scheduled deletion won't be cancelled if already scheduled.", podNamespacedName.String())
		return
	}

	startTime := now
	triggerTime := startTime.Add(minTolerationTime)
	scheduledEviction := tc.taintEvictionQueue.GetWorkerUnsafe(podNamespacedName.String())
	if scheduledEviction != nil {
		startTime = scheduledEviction.CreatedAt
		if startTime.Add(minTolerationTime).Before(triggerTime) {
			return
		}
		tc.cancelWorkWithEvent(podNamespacedName)
	}
	tc.taintEvictionQueue.AddWork(NewWorkArgs(podNamespacedName.Name, podNamespacedName.Namespace), startTime, triggerTime)
}
```
3.2 tc.handlePodUpdate

The tc.handlePodUpdate method also ultimately calls tc.processPodOnNode for further processing.

The tc.processPodOnNode method was analyzed above and is not repeated here.

Main logic:
(1) Get the pod object from the informer's local cache;
(2) Get the pod's node name; if it is empty, return;
(3) Look up the node's taints in tc.taintedNodes by node name; if there are none, return;
(4) Call tc.processPodOnNode for further processing;

```go
// pkg/controller/nodelifecycle/scheduler/taint_manager.go
func (tc *NoExecuteTaintManager) handlePodUpdate(podUpdate podUpdateItem) {
	pod, err := tc.getPod(podUpdate.podName, podUpdate.podNamespace)
	if err != nil {
		if apierrors.IsNotFound(err) {
			// Delete
			podNamespacedName := types.NamespacedName{Namespace: podUpdate.podNamespace, Name: podUpdate.podName}
			klog.V(4).Infof("Noticed pod deletion: %#v", podNamespacedName)
			tc.cancelWorkWithEvent(podNamespacedName)
			return
		}
		utilruntime.HandleError(fmt.Errorf("could not get pod %s/%s: %v", podUpdate.podName, podUpdate.podNamespace, err))
		return
	}

	// We key the workqueue and shard workers by nodeName. If we don't match the current state we should not be the one processing the current object.
	if pod.Spec.NodeName != podUpdate.nodeName {
		return
	}

	// Create or Update
	podNamespacedName := types.NamespacedName{Namespace: pod.Namespace, Name: pod.Name}
	klog.V(4).Infof("Noticed pod update: %#v", podNamespacedName)
	nodeName := pod.Spec.NodeName
	if nodeName == "" {
		return
	}
	taints, ok := func() ([]v1.Taint, bool) {
		tc.taintedNodesLock.Lock()
		defer tc.taintedNodesLock.Unlock()
		taints, ok := tc.taintedNodes[nodeName]
		return taints, ok
	}()
	// It's possible that Node was deleted, or Taints were removed before, which triggered
	// eviction cancelling if it was needed.
	if !ok {
		return
	}
	tc.processPodOnNode(podNamespacedName, nodeName, pod.Spec.Tolerations, taints, time.Now())
}
```
Summary

The main job of taintManager: when a node is tainted with a NoExecute taint, pods on that node that cannot tolerate the taint are evicted by taintManager, and newly created pods must also tolerate the taint to be scheduled onto the node.

Whether taintManager runs is controlled by the kcm startup flag --enable-taint-manager; it runs when the flag is true (the default is true).

The kcm flag --feature-gates=TaintBasedEvictions=xxx (default true) works together with --enable-taint-manager; taint-based eviction is enabled only when both are true.

kcm taint eviction

When a NoExecute taint appears on a node, each pod on the node is checked against the node's taints: pods that cannot tolerate them are deleted immediately, while pods that tolerate all the taints are deleted after the minimum of their toleration times elapses.
