---
# Source: cilium/templates/cilium-agent/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: "cilium"
namespace: kube-system
---
# Source: cilium/templates/cilium-envoy/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: "cilium-envoy"
namespace: kube-system
---
# Source: cilium/templates/cilium-operator/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: "cilium-operator"
namespace: kube-system
---
# Source: cilium/templates/cilium-ca-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: cilium-ca
namespace: kube-system
data:
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGRENDQWZ5Z0F3SUJBZ0lSQU5LVW9DQUMvMUFxQ3c5NXZDWFJuWjh3RFFZSktvWklodmNOQVFFTEJRQXcKRkRFU01CQUdBMVVFQXhNSlEybHNhWFZ0SUVOQk1CNFhEVEkxTURFeE5UQTNNemMwT0ZvWERUSTRNREV4TlRBMwpNemMwT0Zvd0ZERVNNQkFHQTFVRUF4TUpRMmxzYVhWdElFTkJNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DCkFROEFNSUlCQ2dLQ0FRRUExSTNpZlIzMW91TkhNVkh2bGMvcFBDd0Q3dVkxS0lPV1FJZmJkY0lRMThtalNCSEYKYUhZL0lFaE0zNElFenFnVVQ2WE9iQUVKMmp6WmhqR1NIeEpMaVI0Mmg1a212UWVGdXIxSytUT1pVdGZqTEVtbApUTVZvNWRIU1QzeWlzM3ZkZ0Irbmc4OGRodFlKVlhWNCtISllQTkx5UExCLzczOS9rcFpOZTlYa2VRT3h5WE1aClNXdUk5UXFPL1pjZ1p2S1NBQkJacDc3NTJyNkkrMllmQzRZUzBwK3dsQ1pIcHdBeUNuMlgyS2lSY2lqSENDdVcKZ2lHZzRGZ25jT1ozSzFqZU5TQXNaSkVpU0dtdXZnK00yNG0yanROL3AvcHVweWNad1hocUZDR3RwUlhkZzJqbwo4cnF0QTlVNDkrOVcxZUtpOXBSQzBHUjRSQ1RuUlZjVlVNSjQ5d0lEQVFBQm8yRXdYekFPQmdOVkhROEJBZjhFCkJBTUNBcVF3SFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01BOEdBMVVkRXdFQi93UUYKTUFNQkFmOHdIUVlEVlIwT0JCWUVGR3N5bklwZDdCZEprd3VvdWJpTjlMeUdsSnpyTUEwR0NTcUdTSWIzRFFFQgpDd1VBQTRJQkFRQXU3ckl1OFJBZGhlUWFCT1hiUXlLMlNjWmpvVU1CY3RMRi9pNW1FTEV0RFVrVWVUNG1XZUlCCkRIUGhGang2RHNLc1ZzMy84V1QvV0tYT3FVbWZzMHZMVjR1Mmt3Q0dBOG13YmJiVE85T0FUZDVCWWhIb281VXYKcFZXcTRGRUE0QXRWUUhHOXpycmViSDJHY1M0b290ajVlRkFhRnc3K05sQU80Y0VzV3FNNlNJeWNEM2tlRmVVbgp3VGh6ZkgxS2k4NElIZExPVldleDAyZmovRTJOQnlPRTBzb2hRTDhDTno0ZVhYQ1ZJN1ROMGw2L1g1L05BM21mCjBpT0xLTGdLT0ZmSm9OTlNXalhzTE1kM1BPTVlxL2I2Y2djL2E5cXpBODdOUWxhS2MyVnVtOUI2VVU3SVNDRTkKQXduNDRvMnpaLzA4NGVGWnQzL0IwWndCQW8xY2xWQ2kKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
ca.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBMUkzaWZSMzFvdU5ITVZIdmxjL3BQQ3dEN3VZMUtJT1dRSWZiZGNJUTE4bWpTQkhGCmFIWS9JRWhNMzRJRXpxZ1VUNlhPYkFFSjJqelpoakdTSHhKTGlSNDJoNWttdlFlRnVyMUsrVE9aVXRmakxFbWwKVE1WbzVkSFNUM3lpczN2ZGdCK25nODhkaHRZSlZYVjQrSEpZUE5MeVBMQi83Mzkva3BaTmU5WGtlUU94eVhNWgpTV3VJOVFxTy9aY2dadktTQUJCWnA3NzUycjZJKzJZZkM0WVMwcCt3bENaSHB3QXlDbjJYMktpUmNpakhDQ3VXCmdpR2c0RmduY09aM0sxamVOU0FzWkpFaVNHbXV2ZytNMjRtMmp0Ti9wL3B1cHljWndYaHFGQ0d0cFJYZGcyam8KOHJxdEE5VTQ5KzlXMWVLaTlwUkMwR1I0UkNUblJWY1ZVTUo0OXdJREFRQUJBb0lCQVFETXFkSHJwMjFkNm5vaQpnMEt1ZmdPV2JNdGN2VTF5TlVIMzROandDbTB0V25wZWFyNHFWN0Z3NUMwaENaQ1hiRUFpY1FUUitpNENkWlQrCkliMFJMZ3lOZXdvR2h2RkdFbmd4MXhMWjRWVkE3NTlPaFpzazBQQ3dXZGluc05yNDk4dlFFcXE1d0RRdUlPbmIKMzhFYmVQNTlrcUNzejBaZzFlT0F3ampaaEpyNTZWTmhldThEWUtYSnp3UmMxZGcxUGZXVGZLaTBGeGttcm1GRwovbHg3YWVITXZBV0p0MitxbW5UbXdNVFNDWTREK0RoODVUSG1HZFdHTnFFSjlabVBJekdRK3hkQ0lKS3Mra3dNCmEzbTZhc21FdnVvc2t6T1czT3VlS3lQVFcxU3V5OWFseUhFRjJwR25NaU1seVJOSWg5TVU2TWFVQm1nN0FUNzIKaU96UzN3UUJBb0dCQU9MY0FxeEpQQ0tUTWVIVytVNzZSVjFjaUhmaFJFaGozSEFvTWtKWWxmMVVveEFxejBCRApkSWo0ZWFTYmlJNDVZWGlFeFJjUVRWOEs0cHd5dFRlRkxOZ3VTRXBTZVVyTVdDWisyeEM4bzM5LzhiS05aZCtICnJXUWFhb096bmN0cjIyYkw5SnBsSm9mbyswcE5jSW5nM21zV3ZxOUppc3dMVzJSVkZyT3RPd1hMQW9HQkFPL2IKZDE1MzBEVGhIT2Noazd5YjF4ZS9vSkFOWEVNZzZkcU9SVitVaU9Qd245emlGUnZERE5Tdkl1MzRRM1BaTlNXSQovZTlOTEVxSUNlVlZkN21vaFpnL3lDUko4cUVxZlpHUkJqNnhjZUgyOWg4N1RoRkovUk1rR3lJWllPNGdLK2J2CjdiazBlaFlKOWF2WHdJUXdHbTlSMUsrVWp1bS80My9iTUdaMWhwUUZBb0dBTnVTcHVPcVhwSDRkaWRvc3hWR0YKeTB1Q3NnOU9LSDRSYndvcTd5YmtWRUpRbXE3aEsxbW5MeVdBdWJYdzJ3bERicGdoNEt6UEsvcEVUeXR0OGh0RgplS0hxV0NHUXUrcXFRZFpjUjdaOWtYSnlGNVJqWEMxR1pYeUczWXR1MlRRbUNMKzlWa2EzaGhkNEJzaXFQSkt1Ci94YW4yVjVnT1hOZUQrOE82VGMzbXZVQ2dZRUEwQXlmZDlPWlI2VFR1ekFHdzV5eGI2b0tEYWxwRTlraXZ5NlUKd2hsR3UrQmw2ZVE0eHdIaXlVQzRuWTJ3aEhZdGdVZFliQStXa1hkNmpmQWFqM0c3bjVvRGNtYXFERjJjMlh6ZgoyOVZ5b0x2a05LYnVTbFRSTFo4dDRkLzlrYzlhQlZDcjlPK3R6aHdKZW1zRVZDU2RhVjJqakVEaHphTmlJKzd1CkJwRitrRGtDZ1lFQTNydlc1ZDlxeUErM3JhaDdKOFhmaFNPTXJUYklPOHBSeUJOZmJYdVhRYjVGeEVDdmNMdnMKQitWZUY5YUQ3anZ4SzMya2VSUjYreWY4ZTVNTVBYSlNBVFhjeXE2WTJLZ1R1SklxVHhQMXlSVzBvR3Z0QnI3cApRNGZFcnlCSHJMaTUrbHhJSUlnTHBBNThRY3VJcVBLdTE5ZkFCaXRwZHZvNTAvK3k2TmxzVHdNPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
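# A quick way to inspect the generated CA above (a sketch; assumes kubectl and
# openssl are available and your kubeconfig points at this cluster):
#   kubectl -n kube-system get secret cilium-ca -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -subject -dates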
---
# Source: cilium/templates/hubble/tls-helm/server-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: hubble-server-certs
namespace: kube-system
type: kubernetes.io/tls
data:
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGRENDQWZ5Z0F3SUJBZ0lSQU5LVW9DQUMvMUFxQ3c5NXZDWFJuWjh3RFFZSktvWklodmNOQVFFTEJRQXcKRkRFU01CQUdBMVVFQXhNSlEybHNhWFZ0SUVOQk1CNFhEVEkxTURFeE5UQTNNemMwT0ZvWERUSTRNREV4TlRBMwpNemMwT0Zvd0ZERVNNQkFHQTFVRUF4TUpRMmxzYVhWdElFTkJNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DCkFROEFNSUlCQ2dLQ0FRRUExSTNpZlIzMW91TkhNVkh2bGMvcFBDd0Q3dVkxS0lPV1FJZmJkY0lRMThtalNCSEYKYUhZL0lFaE0zNElFenFnVVQ2WE9iQUVKMmp6WmhqR1NIeEpMaVI0Mmg1a212UWVGdXIxSytUT1pVdGZqTEVtbApUTVZvNWRIU1QzeWlzM3ZkZ0Irbmc4OGRodFlKVlhWNCtISllQTkx5UExCLzczOS9rcFpOZTlYa2VRT3h5WE1aClNXdUk5UXFPL1pjZ1p2S1NBQkJacDc3NTJyNkkrMllmQzRZUzBwK3dsQ1pIcHdBeUNuMlgyS2lSY2lqSENDdVcKZ2lHZzRGZ25jT1ozSzFqZU5TQXNaSkVpU0dtdXZnK00yNG0yanROL3AvcHVweWNad1hocUZDR3RwUlhkZzJqbwo4cnF0QTlVNDkrOVcxZUtpOXBSQzBHUjRSQ1RuUlZjVlVNSjQ5d0lEQVFBQm8yRXdYekFPQmdOVkhROEJBZjhFCkJBTUNBcVF3SFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01BOEdBMVVkRXdFQi93UUYKTUFNQkFmOHdIUVlEVlIwT0JCWUVGR3N5bklwZDdCZEprd3VvdWJpTjlMeUdsSnpyTUEwR0NTcUdTSWIzRFFFQgpDd1VBQTRJQkFRQXU3ckl1OFJBZGhlUWFCT1hiUXlLMlNjWmpvVU1CY3RMRi9pNW1FTEV0RFVrVWVUNG1XZUlCCkRIUGhGang2RHNLc1ZzMy84V1QvV0tYT3FVbWZzMHZMVjR1Mmt3Q0dBOG13YmJiVE85T0FUZDVCWWhIb281VXYKcFZXcTRGRUE0QXRWUUhHOXpycmViSDJHY1M0b290ajVlRkFhRnc3K05sQU80Y0VzV3FNNlNJeWNEM2tlRmVVbgp3VGh6ZkgxS2k4NElIZExPVldleDAyZmovRTJOQnlPRTBzb2hRTDhDTno0ZVhYQ1ZJN1ROMGw2L1g1L05BM21mCjBpT0xLTGdLT0ZmSm9OTlNXalhzTE1kM1BPTVlxL2I2Y2djL2E5cXpBODdOUWxhS2MyVnVtOUI2VVU3SVNDRTkKQXduNDRvMnpaLzA4NGVGWnQzL0IwWndCQW8xY2xWQ2kKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURWekNDQWorZ0F3SUJBZ0lSQUk3OTMxRlRYbno2MDB3RUpMenorUmN3RFFZSktvWklodmNOQVFFTEJRQXcKRkRFU01CQUdBMVVFQXhNSlEybHNhWFZ0SUVOQk1CNFhEVEkxTURFeE5UQTNNemMwT0ZvWERUSTJNREV4TlRBMwpNemMwT0Zvd0tqRW9NQ1lHQTFVRUF3d2ZLaTVrWldaaGRXeDBMbWgxWW1Kc1pTMW5jbkJqTG1OcGJHbDFiUzVwCmJ6Q0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5NTGpXeEVkWEZ1d3lrVHIvVzUKQkZtSXl0OWxTSE9HM2dHRDhBWFhZMCtBUkNzTFNDTTZZWUJBMWJ5YnpObnJXTFREVEdzTGxyQlRORkNUZW9UMQpSM28rRkFlaDZkU3RyRitWbzN4SVhBZHVWamxIWW5EWFNnU1hzamZuQ3QrYzZxS2V1Y3JNZmdFeUUxdFRYWDZkCmN4UkxEakZwcGh1Z25yU3ZWNmZnZHIyN3ljMkRuWXlWR2wzbVZjUFJObDQ5eFg3NzhjQ091dHk5cE41YkpXU0EKUXR3blNvSjBMWHpDS2k4UXdubTlTYXo2N1lFQ3RVSWQyK1RXNVhpdXN6anRlZUtFRUl1L2dSM25obWkya0xMRApGL0VMSWhhRkd0SVJGbVgwVTMvWW1DTGsrS0FQeHZ6bCt4eEtNWExBK1NxdjUrS3VmQlJNV2JaSy91Q1Z1Z0I2CjY0RUNBd0VBQWFPQmpUQ0JpakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdIUVlEVlIwbEJCWXdGQVlJS3dZQkJRVUgKQXdFR0NDc0dBUVVGQndNQ01Bd0dBMVVkRXdFQi93UUNNQUF3SHdZRFZSMGpCQmd3Rm9BVWF6S2NpbDNzRjBtVApDNmk1dUkzMHZJYVVuT3N3S2dZRFZSMFJCQ013SVlJZktpNWtaV1poZFd4MExtaDFZbUpzWlMxbmNuQmpMbU5wCmJHbDFiUzVwYnpBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWhUOTVOZGRRdnhoU0NpNTFBV0NoYlFTYTlHbWkKNU5ubDFWQzdDYWcvNERueTNjN0xFcU9ZaDZlVU1veDh4WncyQlp4WE4vaEo5R3V1M1JrYU9wR1U4R1I2SVNzbApBd3FxQWI1OFhlcHNpZHZkaFQ5OFphQU80SFJiZGpSWjV6TnVubU1NZ0crdFZtbTBCWS8zU1prdnVndXk3dnBxClNXUWdsbm90RVVDSkpmdzkwRHFVdHA2T2RVaGEwTHd5WGFQOFFtRU4xdkl0Nlk5V250Q2VpRk5OYW9ERFlZWW0KTmFYWmNKQ1UyRHJ4RDcxbkM0MW9QSGhYOW1jNGR6OUxuaWtoRk1oc1ZUcFB3SURPZGpoUm1KSUIvNXdGMldCUwo5d1NwTEtnRVc2OTFLeUhjN0FxN2NZazkxSkFRei9XazF3Q2lvUXh3VUNaejE5OFdYYXU3VE1HTG9RPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBMHd1TmJFUjFjVzdES1JPdjlia0VXWWpLMzJWSWM0YmVBWVB3QmRkalQ0QkVLd3RJCkl6cGhnRURWdkp2TTJldFl0TU5NYXd1V3NGTTBVSk42aFBWSGVqNFVCNkhwMUsyc1g1V2pmRWhjQjI1V09VZGkKY05kS0JKZXlOK2NLMzV6cW9wNjV5c3grQVRJVFcxTmRmcDF6RkVzT01XbW1HNkNldEs5WHArQjJ2YnZKellPZApqSlVhWGVaVnc5RTJYajNGZnZ2eHdJNjYzTDJrM2xzbFpJQkMzQ2RLZ25RdGZNSXFMeERDZWIxSnJQcnRnUUsxClFoM2I1TmJsZUs2ek9PMTU0b1FRaTcrQkhlZUdhTGFRc3NNWDhRc2lGb1VhMGhFV1pmUlRmOWlZSXVUNG9BL0cKL09YN0hFb3hjc0Q1S3EvbjRxNThGRXhadGtyKzRKVzZBSHJyZ1FJREFRQUJBb0lCQVFDNXR2WEphMmpKM0FWagpOVTduZGR1dENtSTBPY1dLM0FpcTNyYXQvVDhJSzhCUS9JbUxib2wwT2htYjhxSk90ZnFHZjBIVkJRcWl1V1ZVCjdxS25NOHlsZHRGYmNoRDV0YWIxS2hJR2dRcHpBNVplcTBHbU9OMktzdzhDZ0k4aC9jekFNOXNjNmR5TUlzdkQKMXRWMFlRdHV0U21WTS9vWFg3MnBGSStYVEVCUmJzQUt5czlvbHUxTThrYnFicDJCZGgyYnYvbFRmZHpMUzhDaQpFQ0twUHl1NkNhRWFtZ2dkZjBYSGR0SExmRHRnYjdkM2NGVnFURnN1VEdCNy9NcG5mZmYwVUo4RjA0cmFOcXFNCnhqMkMxM2pJTVpNUzFoS2kvOTFZQXVGK1U5bGlwZ1BQMFgwWHdFMnFmRE94OFU0VllzK1VTNUtxTmcrcGV5aHIKVFhLbUx4Uk5Bb0dCQVBoQU9GOVE2QklCc3dDbU1qNlovZGkzRERSVUFnM0pabFRNcHRvU09SSjJCUW1DMFdTVQpJM1dTQWZ4aFZJMmlzbGxQbklrbFdHcmdObFBjeGx6bWZvQS9ZRk9CNS9ScnNHWUZIbno1akZkOGZYMC9pOXM4CjZiZXAzNEZJUW54bFE2QzU3MDFSVVI2dE9HdmN0cWx0Tk81N0hHZitENnhSVHB4Zkx1WnJVWXZqQW9HQkFObWkKQlNlL2VOSnJGemlZdXFVbDZjbE9LZnU5bmgyVGdDdGRObFNwNnQ4eUp0L09aZS9HSXVOVnVhYWlqd2U0d0tMQgpyT3o5NkhqQzNJSmE1dXRmWVZ6L2V5TElPMEU1ejNKTXFLZkVibVZlZWdrb29MelMwRGEzeUkyTXU3UUxUQlBZCmhTTmtyMVZQL2lWMVdPVCtQSS83b2NMenlHNFl4UDduL0thaW9GQkxBb0dCQUoyQ3NjSFlaY2EzQ1VwS0tPc0wKYmNMSk1aY3FEanVOSTc1K013ZCtOSFFBS2VZRStMS21RM0ZmZUo4WGFqeUxsRG1TaDdHRTNuckJVL2NWeDA0KwplTmtLWFNYZThMdG1jSC9xazVPY0NtMmY4VWM4d0pJVUxmNTRhL1Z0VWJIMzFsYnVZbVZlU09mNzVDYWIzZEhXCkZwV1J5MDQ0SnQyZ29RNGFYbm1ZY2g4ZEFvR0FLRjBuYlo3U2p0d3oyMWhnVFo4QmhFZU4rOVhJVEozOXlJMHEKTlh1cVJ5a3JFcGxhU0tWTDlUUUNFY2pXbEUwTTFXTHNhcXdSQU16TFR4WUMvQ2FkalQwSkhvTmFraGRoeFVZNApoWjBtQ0lFRVMweVF1MVN5TDJQWXU0QWRsQ0FBUlRJRVIzTTJIYkdQWm0wa1JweHNxUnUzZmN2dklaUlFmU0tUCnRtZzFLWThDZ1lCNzlheVhCMmRyalAwNCt2UUV0TjR1NXVxbTczOGx1WE0rRmsyVHFHeUxnU21XbmpnRjhjRVEKQWZCdDBWdStXMDFsZE1QUEY4S0o0ZkMrZWJFbmZLZlBvWVNBMVVBdHU5TFBIQVU3bEVwbzFnRWpWSnlrdUx4bQp3ZEZnbWxhZXowTS9JamZ5UDZhZzFoWmtqR1VRWnNjbGw1VWovSWZvOEppamp2S0wraWtnWFE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
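# The server certificate above carries the SAN *.default.hubble-grpc.cilium.io
# (per-node names fall under this wildcard). To confirm it (a sketch, same
# kubectl/openssl assumptions as above; -ext needs OpenSSL 1.1.1+):
#   kubectl -n kube-system get secret hubble-server-certs -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -ext subjectAltName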
---
# Source: cilium/templates/cilium-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: cilium-config
namespace: kube-system
data:
# Identity allocation mode selects how identities are shared between cilium
# nodes by setting how they are stored. The options are "crd" or "kvstore".
# - "crd" stores identities in kubernetes as CRDs (custom resource definition).
# These can be queried with:
# kubectl get ciliumid
# - "kvstore" stores identities in an etcd kvstore, that is
# configured below. Cilium versions before 1.6 supported only the kvstore
# backend. Upgrades from these older cilium versions should continue using
# the kvstore by commenting out the identity-allocation-mode below, or
# setting it to "kvstore".
identity-allocation-mode: crd
identity-heartbeat-timeout: "30m0s"
identity-gc-interval: "15m0s"
cilium-endpoint-gc-interval: "5m0s"
nodes-gc-interval: "5m0s"
# If you want to run cilium in debug mode change this value to true
debug: "false"
debug-verbose: ""
# The agent can be put into the following three policy enforcement modes
# default, always and never.
# https://docs.cilium.io/en/latest/security/policy/intro/#policy-enforcement-modes
enable-policy: "default"
policy-cidr-match-mode: ""
# If you want metrics enabled in cilium-operator, set the address and port
# on which the Cilium Operator will expose its metrics.
# NOTE that this will open the port on the nodes where the Cilium operator
# pod is scheduled.
operator-prometheus-serve-addr: ":9963"
enable-metrics: "true"
# Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
# address.
enable-ipv4: "true"
# Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
# address.
enable-ipv6: "false"
# Users who wish to specify their own custom CNI configuration file must set
# custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
custom-cni-conf: "false"
enable-bpf-clock-probe: "false"
# If you want cilium monitor to aggregate tracing for packets, set this level
# to "low", "medium", or "maximum". The higher the level, the fewer packets
# will be seen in monitor output.
monitor-aggregation: medium
# The monitor aggregation interval governs the typical time between monitor
# notification events for each allowed connection.
#
# Only effective when monitor aggregation is set to "medium" or higher.
monitor-aggregation-interval: "5s"
# The monitor aggregation flags determine which TCP flags, upon first
# observation, cause monitor notifications to be generated.
#
# Only effective when monitor aggregation is set to "medium" or higher.
monitor-aggregation-flags: all
# Specifies the ratio (0.0, 1.0] of total system memory to use for dynamic
# sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
bpf-map-dynamic-size-ratio: "0.0025"
enable-host-legacy-routing: "true"
# bpf-policy-map-max specifies the maximum number of entries in endpoint
# policy map (per endpoint)
bpf-policy-map-max: "16384"
# bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
# backend and affinity maps.
bpf-lb-map-max: "65536"
bpf-lb-external-clusterip: "false"
bpf-events-drop-enabled: "true"
bpf-events-policy-verdict-enabled: "true"
bpf-events-trace-enabled: "true"
# Pre-allocation of map entries allows per-packet latency to be reduced, at
# the expense of up-front memory allocation for the entries in the maps. The
# default value below will minimize memory usage in the default installation;
# users who are sensitive to latency may consider setting this to "true".
#
# This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
# this option and behave as though it is set to "true".
#
# If this value is modified, then during the next Cilium startup the restore
# of existing endpoints and tracking of ongoing connections may be disrupted.
# As a result, reply packets may be dropped and the load-balancing decisions
# for established connections may change.
#
# If this option is set to "false" during an upgrade from 1.3 or earlier to
# 1.4 or later, then it may cause one-time disruptions during the upgrade.
preallocate-bpf-maps: "false"
# Name of the cluster. Only relevant when building a mesh of clusters.
cluster-name: default
# Unique ID of the cluster. Must be unique across all connected clusters and
# in the range of 1 to 255. Only relevant when building a mesh of clusters.
cluster-id: "0"
# Encapsulation mode for communication between nodes
# Possible values:
# - disabled
# - vxlan (default)
# - geneve
# Default case
routing-mode: "tunnel"
tunnel-protocol: "vxlan"
service-no-backend-response: "reject"
# Enables L7 proxy for L7 policy enforcement and visibility
enable-l7-proxy: "true"
enable-ipv4-masquerade: "true"
enable-ipv4-big-tcp: "false"
enable-ipv6-big-tcp: "false"
enable-ipv6-masquerade: "true"
enable-tcx: "true"
datapath-mode: "veth"
enable-masquerade-to-route-source: "false"
enable-xt-socket-fallback: "true"
install-no-conntrack-iptables-rules: "false"
auto-direct-node-routes: "false"
direct-routing-skip-unreachable: "false"
enable-local-redirect-policy: "false"
enable-runtime-device-detection: "true"
kube-proxy-replacement: "true"
kube-proxy-replacement-healthz-bind-address: ""
bpf-lb-sock: "false"
bpf-lb-sock-terminate-pod-connections: "false"
nodeport-addresses: ""
enable-health-check-nodeport: "true"
enable-health-check-loadbalancer-ip: "false"
node-port-bind-protection: "true"
enable-auto-protect-node-port-range: "true"
bpf-lb-acceleration: "disabled"
enable-svc-source-range-check: "true"
enable-l2-neigh-discovery: "true"
arping-refresh-period: "30s"
k8s-require-ipv4-pod-cidr: "false"
k8s-require-ipv6-pod-cidr: "false"
enable-k8s-networkpolicy: "true"
# Tell the agent to generate and write a CNI configuration file
write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
cni-exclusive: "true"
cni-log-file: "/var/run/cilium/cilium-cni.log"
enable-endpoint-health-checking: "true"
enable-health-checking: "true"
enable-well-known-identities: "false"
enable-node-selector-labels: "false"
synchronize-k8s-nodes: "true"
operator-api-serve-addr: "127.0.0.1:9234"
# Enable Hubble gRPC service.
enable-hubble: "true"
# UNIX domain socket for Hubble server to listen to.
hubble-socket-path: "/var/run/cilium/hubble.sock"
hubble-export-file-max-size-mb: "10"
hubble-export-file-max-backups: "5"
# An additional address for Hubble server to listen to (e.g. ":4244").
hubble-listen-address: ":4244"
hubble-disable-tls: "false"
hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
ipam: "kubernetes"
ipam-cilium-node-update-rate: "15s"
egress-gateway-reconciliation-trigger-interval: "1s"
enable-vtep: "false"
vtep-endpoint: ""
vtep-cidr: ""
vtep-mask: ""
vtep-mac: ""
procfs: "/host/proc"
bpf-root: "/sys/fs/bpf"
cgroup-root: "/sys/fs/cgroup"
enable-k8s-terminating-endpoint: "true"
enable-sctp: "false"
k8s-client-qps: "10"
k8s-client-burst: "20"
remove-cilium-node-taints: "true"
set-cilium-node-taints: "true"
set-cilium-is-up-condition: "true"
unmanaged-pod-watcher-interval: "15"
# Default the DNS proxy to transparent mode in non-chaining modes
dnsproxy-enable-transparent-mode: "true"
dnsproxy-socket-linger-timeout: "10"
tofqdns-dns-reject-response-code: "refused"
tofqdns-enable-dns-compression: "true"
tofqdns-endpoint-max-ip-per-hostname: "50"
tofqdns-idle-connection-grace-period: "0s"
tofqdns-max-deferred-connection-deletes: "10000"
tofqdns-proxy-response-max-delay: "100ms"
agent-not-ready-taint-key: "node.cilium.io/agent-not-ready"
mesh-auth-enabled: "true"
mesh-auth-queue-size: "1024"
mesh-auth-rotated-identities-queue-size: "1024"
mesh-auth-gc-interval: "5m0s"
proxy-xff-num-trusted-hops-ingress: "0"
proxy-xff-num-trusted-hops-egress: "0"
proxy-connect-timeout: "2"
proxy-initial-fetch-timeout: "30"
proxy-max-requests-per-connection: "0"
proxy-max-connection-duration-seconds: "0"
proxy-idle-timeout-seconds: "60"
external-envoy-proxy: "true"
envoy-base-id: "0"
envoy-keep-cap-netbindservice: "false"
max-connected-clusters: "255"
clustermesh-enable-endpoint-sync: "false"
clustermesh-enable-mcs-api: "false"
nat-map-stats-entries: "32"
nat-map-stats-interval: "30s"
# Extra config allows adding arbitrary properties to the cilium config.
# By putting it at the end of the ConfigMap, it's also possible to override existing properties.
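# To see how the running agent resolved this configuration (a sketch; the
# cilium-dbg CLI ships inside the agent image in Cilium 1.16):
#   kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg status --verbose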
---
# Source: cilium/templates/cilium-envoy/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: cilium-envoy-config
namespace: kube-system
data:
bootstrap-config.json: |
{
"node": {
"id": "host~127.0.0.1~no-id~localdomain",
"cluster": "ingress-cluster"
},
"staticResources": {
"listeners": [
{
"name": "envoy-prometheus-metrics-listener",
"address": {
"socket_address": {
"address": "0.0.0.0",
"port_value": 9964
}
},
"filter_chains": [
{
"filters": [
{
"name": "envoy.filters.network.http_connection_manager",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
"stat_prefix": "envoy-prometheus-metrics-listener",
"route_config": {
"virtual_hosts": [
{
"name": "prometheus_metrics_route",
"domains": [
"*"
],
"routes": [
{
"name": "prometheus_metrics_route",
"match": {
"prefix": "/metrics"
},
"route": {
"cluster": "/envoy-admin",
"prefix_rewrite": "/stats/prometheus"
}
}
]
}
]
},
"http_filters": [
{
"name": "envoy.filters.http.router",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
}
}
],
"stream_idle_timeout": "0s"
}
}
]
}
]
},
{
"name": "envoy-health-listener",
"address": {
"socket_address": {
"address": "127.0.0.1",
"port_value": 9878
}
},
"filter_chains": [
{
"filters": [
{
"name": "envoy.filters.network.http_connection_manager",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
"stat_prefix": "envoy-health-listener",
"route_config": {
"virtual_hosts": [
{
"name": "health",
"domains": [
"*"
],
"routes": [
{
"name": "health",
"match": {
"prefix": "/healthz"
},
"route": {
"cluster": "/envoy-admin",
"prefix_rewrite": "/ready"
}
}
]
}
]
},
"http_filters": [
{
"name": "envoy.filters.http.router",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
}
}
],
"stream_idle_timeout": "0s"
}
}
]
}
]
}
],
"clusters": [
{
"name": "ingress-cluster",
"type": "ORIGINAL_DST",
"connectTimeout": "2s",
"lbPolicy": "CLUSTER_PROVIDED",
"typedExtensionProtocolOptions": {
"envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
"@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
"commonHttpProtocolOptions": {
"idleTimeout": "60s",
"maxConnectionDuration": "0s",
"maxRequestsPerConnection": 0
},
"useDownstreamProtocolConfig": {}
}
},
"cleanupInterval": "2.500s"
},
{
"name": "egress-cluster-tls",
"type": "ORIGINAL_DST",
"connectTimeout": "2s",
"lbPolicy": "CLUSTER_PROVIDED",
"typedExtensionProtocolOptions": {
"envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
"@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
"commonHttpProtocolOptions": {
"idleTimeout": "60s",
"maxConnectionDuration": "0s",
"maxRequestsPerConnection": 0
},
"upstreamHttpProtocolOptions": {},
"useDownstreamProtocolConfig": {}
}
},
"cleanupInterval": "2.500s",
"transportSocket": {
"name": "cilium.tls_wrapper",
"typedConfig": {
"@type": "type.googleapis.com/cilium.UpstreamTlsWrapperContext"
}
}
},
{
"name": "egress-cluster",
"type": "ORIGINAL_DST",
"connectTimeout": "2s",
"lbPolicy": "CLUSTER_PROVIDED",
"typedExtensionProtocolOptions": {
"envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
"@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
"commonHttpProtocolOptions": {
"idleTimeout": "60s",
"maxConnectionDuration": "0s",
"maxRequestsPerConnection": 0
},
"useDownstreamProtocolConfig": {}
}
},
"cleanupInterval": "2.500s"
},
{
"name": "ingress-cluster-tls",
"type": "ORIGINAL_DST",
"connectTimeout": "2s",
"lbPolicy": "CLUSTER_PROVIDED",
"typedExtensionProtocolOptions": {
"envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
"@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
"commonHttpProtocolOptions": {
"idleTimeout": "60s",
"maxConnectionDuration": "0s",
"maxRequestsPerConnection": 0
},
"upstreamHttpProtocolOptions": {},
"useDownstreamProtocolConfig": {}
}
},
"cleanupInterval": "2.500s",
"transportSocket": {
"name": "cilium.tls_wrapper",
"typedConfig": {
"@type": "type.googleapis.com/cilium.UpstreamTlsWrapperContext"
}
}
},
{
"name": "xds-grpc-cilium",
"type": "STATIC",
"connectTimeout": "2s",
"loadAssignment": {
"clusterName": "xds-grpc-cilium",
"endpoints": [
{
"lbEndpoints": [
{
"endpoint": {
"address": {
"pipe": {
"path": "/var/run/cilium/envoy/sockets/xds.sock"
}
}
}
}
]
}
]
},
"typedExtensionProtocolOptions": {
"envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
"@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
"explicitHttpConfig": {
"http2ProtocolOptions": {}
}
}
}
},
{
"name": "/envoy-admin",
"type": "STATIC",
"connectTimeout": "2s",
"loadAssignment": {
"clusterName": "/envoy-admin",
"endpoints": [
{
"lbEndpoints": [
{
"endpoint": {
"address": {
"pipe": {
"path": "/var/run/cilium/envoy/sockets/admin.sock"
}
}
}
}
]
}
]
}
}
]
},
"dynamicResources": {
"ldsConfig": {
"initialFetchTimeout": "30s",
"apiConfigSource": {
"apiType": "GRPC",
"transportApiVersion": "V3",
"grpcServices": [
{
"envoyGrpc": {
"clusterName": "xds-grpc-cilium"
}
}
],
"setNodeOnFirstMessageOnly": true
},
"resourceApiVersion": "V3"
},
"cdsConfig": {
"initialFetchTimeout": "30s",
"apiConfigSource": {
"apiType": "GRPC",
"transportApiVersion": "V3",
"grpcServices": [
{
"envoyGrpc": {
"clusterName": "xds-grpc-cilium"
}
}
],
"setNodeOnFirstMessageOnly": true
},
"resourceApiVersion": "V3"
}
},
"bootstrapExtensions": [
{
"name": "envoy.bootstrap.internal_listener",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.bootstrap.internal_listener.v3.InternalListener"
}
}
],
"overload_manager": {
"resource_monitors": [
{
"name": "envoy.resource_monitors.global_downstream_max_connections",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.resource_monitors.downstream_connections.v3.DownstreamConnectionsConfig",
"max_active_downstream_connections": "50000"
}
}
]
},
"admin": {
"address": {
"pipe": {
"path": "/var/run/cilium/envoy/sockets/admin.sock"
}
}
}
}
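# The envoy-prometheus-metrics-listener above rewrites /metrics to the admin
# endpoint's /stats/prometheus. Since cilium-envoy runs with hostNetwork and
# hostPort 9964, a node-local spot check is possible (a sketch, run on a node):
#   curl -s http://127.0.0.1:9964/metrics | head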
---
# Source: cilium/templates/cilium-agent/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cilium
labels:
app.kubernetes.io/part-of: cilium
rules:
- apiGroups:
- networking.k8s.io
resources:
- networkpolicies
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- namespaces
- services
- pods
- endpoints
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- list
- watch
# This is used when validating policies in preflight. It will need to stay
# until we figure out how to avoid "get" inside the preflight, and should
# ideally be removed then.
- get
- apiGroups:
- cilium.io
resources:
- ciliumloadbalancerippools
- ciliumbgppeeringpolicies
- ciliumbgpnodeconfigs
- ciliumbgpadvertisements
- ciliumbgppeerconfigs
- ciliumclusterwideenvoyconfigs
- ciliumclusterwidenetworkpolicies
- ciliumegressgatewaypolicies
- ciliumendpoints
- ciliumendpointslices
- ciliumenvoyconfigs
- ciliumidentities
- ciliumlocalredirectpolicies
- ciliumnetworkpolicies
- ciliumnodes
- ciliumnodeconfigs
- ciliumcidrgroups
- ciliuml2announcementpolicies
- ciliumpodippools
verbs:
- list
- watch
- apiGroups:
- cilium.io
resources:
- ciliumidentities
- ciliumendpoints
- ciliumnodes
verbs:
- create
- apiGroups:
- cilium.io
# To synchronize garbage collection of such resources
resources:
- ciliumidentities
verbs:
- update
- apiGroups:
- cilium.io
resources:
- ciliumendpoints
verbs:
- delete
- get
- apiGroups:
- cilium.io
resources:
- ciliumnodes
- ciliumnodes/status
verbs:
- get
- update
- apiGroups:
- cilium.io
resources:
- ciliumendpoints/status
- ciliumendpoints
- ciliuml2announcementpolicies/status
- ciliumbgpnodeconfigs/status
verbs:
- patch
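# A quick way to verify a rule from this ClusterRole once the binding below is
# applied (a sketch using kubectl's built-in access review):
#   kubectl auth can-i list ciliumendpoints.cilium.io --as=system:serviceaccount:kube-system:cilium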
---
# Source: cilium/templates/cilium-operator/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cilium-operator
labels:
app.kubernetes.io/part-of: cilium
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
# to automatically delete [core|kube]dns pods so that they start being
# managed by Cilium
- delete
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- cilium-config
verbs:
# allow patching of the configmap to set annotations
- patch
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
# To remove node taints
- nodes
# To set NetworkUnavailable false on startup
- nodes/status
verbs:
- patch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
# to perform LB IP allocation for BGP
- services/status
verbs:
- update
- patch
- apiGroups:
- ""
resources:
# to check apiserver connectivity
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
# to perform the translation of a CNP that contains `ToGroup` to its endpoints
- services
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- cilium.io
resources:
- ciliumnetworkpolicies
- ciliumclusterwidenetworkpolicies
verbs:
# Create auto-generated CNPs and CCNPs from Policies that have 'toGroups'
- create
- update
- deletecollection
# To update the status of the CNPs and CCNPs
- patch
- get
- list
- watch
- apiGroups:
- cilium.io
resources:
- ciliumnetworkpolicies/status
- ciliumclusterwidenetworkpolicies/status
verbs:
# Update the auto-generated CNPs and CCNPs status.
- patch
- update
- apiGroups:
- cilium.io
resources:
- ciliumendpoints
- ciliumidentities
verbs:
# To perform garbage collection of such resources
- delete
- list
- watch
- apiGroups:
- cilium.io
resources:
- ciliumidentities
verbs:
# To synchronize garbage collection of such resources
- update
- apiGroups:
- cilium.io
resources:
- ciliumnodes
verbs:
- create
- update
- get
- list
- watch
# To perform CiliumNode garbage collection
- delete
- apiGroups:
- cilium.io
resources:
- ciliumnodes/status
verbs:
- update
- apiGroups:
- cilium.io
resources:
- ciliumendpointslices
- ciliumenvoyconfigs
- ciliumbgppeerconfigs
- ciliumbgpadvertisements
- ciliumbgpnodeconfigs
verbs:
- create
- update
- get
- list
- watch
- delete
- patch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- get
- list
- watch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- update
resourceNames:
- ciliumloadbalancerippools.cilium.io
- ciliumbgppeeringpolicies.cilium.io
- ciliumbgpclusterconfigs.cilium.io
- ciliumbgppeerconfigs.cilium.io
- ciliumbgpadvertisements.cilium.io
- ciliumbgpnodeconfigs.cilium.io
- ciliumbgpnodeconfigoverrides.cilium.io
- ciliumclusterwideenvoyconfigs.cilium.io
- ciliumclusterwidenetworkpolicies.cilium.io
- ciliumegressgatewaypolicies.cilium.io
- ciliumendpoints.cilium.io
- ciliumendpointslices.cilium.io
- ciliumenvoyconfigs.cilium.io
- ciliumexternalworkloads.cilium.io
- ciliumidentities.cilium.io
- ciliumlocalredirectpolicies.cilium.io
- ciliumnetworkpolicies.cilium.io
- ciliumnodes.cilium.io
- ciliumnodeconfigs.cilium.io
- ciliumcidrgroups.cilium.io
- ciliuml2announcementpolicies.cilium.io
- ciliumpodippools.cilium.io
- apiGroups:
- cilium.io
resources:
- ciliumloadbalancerippools
- ciliumpodippools
- ciliumbgppeeringpolicies
- ciliumbgpclusterconfigs
- ciliumbgpnodeconfigoverrides
verbs:
- get
- list
- watch
- apiGroups:
- cilium.io
resources:
- ciliumpodippools
verbs:
- create
- apiGroups:
- cilium.io
resources:
- ciliumloadbalancerippools/status
verbs:
- patch
# For cilium-operator running in HA mode.
#
# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
# between multiple running instances.
# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
# common and fewer objects in the cluster watch "all Leases".
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- get
- update
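# The leases rule above supports leader election between the two operator
# replicas. To see which replica currently leads (a sketch; the lease name is
# the default used by cilium-operator, adjust if your deployment differs):
#   kubectl -n kube-system get lease cilium-operator-resource-lock -o jsonpath='{.spec.holderIdentity}'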
---
# Source: cilium/templates/cilium-agent/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cilium
labels:
app.kubernetes.io/part-of: cilium
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cilium
subjects:
- kind: ServiceAccount
name: "cilium"
namespace: kube-system
---
# Source: cilium/templates/cilium-operator/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cilium-operator
labels:
app.kubernetes.io/part-of: cilium
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cilium-operator
subjects:
- kind: ServiceAccount
name: "cilium-operator"
namespace: kube-system
---
# Source: cilium/templates/cilium-agent/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: cilium-config-agent
namespace: kube-system
labels:
app.kubernetes.io/part-of: cilium
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
---
# Source: cilium/templates/cilium-agent/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cilium-config-agent
namespace: kube-system
labels:
app.kubernetes.io/part-of: cilium
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cilium-config-agent
subjects:
- kind: ServiceAccount
name: "cilium"
namespace: kube-system
---
# Source: cilium/templates/cilium-envoy/service.yaml
apiVersion: v1
kind: Service
metadata:
name: cilium-envoy
namespace: kube-system
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9964"
labels:
k8s-app: cilium-envoy
app.kubernetes.io/name: cilium-envoy
app.kubernetes.io/part-of: cilium
io.cilium/app: proxy
spec:
clusterIP: None
type: ClusterIP
selector:
k8s-app: cilium-envoy
ports:
- name: envoy-metrics
port: 9964
protocol: TCP
targetPort: envoy-metrics
---
# Source: cilium/templates/hubble/peer-service.yaml
apiVersion: v1
kind: Service
metadata:
name: hubble-peer
namespace: kube-system
labels:
k8s-app: cilium
app.kubernetes.io/part-of: cilium
app.kubernetes.io/name: hubble-peer
spec:
selector:
k8s-app: cilium
ports:
- name: peer-service
port: 443
protocol: TCP
targetPort: 4244
internalTrafficPolicy: Local
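# The peer service fronts each agent's Hubble server (port 4244, TLS via the
# hubble-server-certs secret above). A minimal way to watch flows without
# wiring up a TLS client is to run the hubble CLI inside an agent pod, where
# it talks to the local UNIX socket (a sketch):
#   kubectl -n kube-system exec ds/cilium -c cilium-agent -- hubble observe --last 5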
---
# Source: cilium/templates/cilium-agent/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: cilium
namespace: kube-system
labels:
k8s-app: cilium
app.kubernetes.io/part-of: cilium
app.kubernetes.io/name: cilium-agent
spec:
selector:
matchLabels:
k8s-app: cilium
updateStrategy:
rollingUpdate:
maxUnavailable: 2
type: RollingUpdate
template:
metadata:
annotations:
labels:
k8s-app: cilium
app.kubernetes.io/name: cilium-agent
app.kubernetes.io/part-of: cilium
spec:
securityContext:
appArmorProfile:
type: Unconfined
containers:
- name: cilium-agent
image: "quay.io/cilium/cilium:v1.16.4@sha256:d55ec38938854133e06739b1af237932b9c4dd4e75e9b7b2ca3acc72540a44bf"
imagePullPolicy: IfNotPresent
command:
- cilium-agent
args:
- --config-dir=/tmp/cilium/config-map
startupProbe:
httpGet:
host: "127.0.0.1"
path: /healthz
port: 9879
scheme: HTTP
httpHeaders:
- name: "brief"
value: "true"
failureThreshold: 105
periodSeconds: 2
successThreshold: 1
initialDelaySeconds: 5
livenessProbe:
httpGet:
host: "127.0.0.1"
path: /healthz
port: 9879
scheme: HTTP
httpHeaders:
- name: "brief"
value: "true"
periodSeconds: 30
successThreshold: 1
failureThreshold: 10
timeoutSeconds: 5
readinessProbe:
httpGet:
host: "127.0.0.1"
path: /healthz
port: 9879
scheme: HTTP
httpHeaders:
- name: "brief"
value: "true"
periodSeconds: 30
successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: CILIUM_CLUSTERMESH_CONFIG
value: /var/lib/cilium/clustermesh/
- name: GOMEMLIMIT
valueFrom:
resourceFieldRef:
resource: limits.memory
divisor: '1'
- name: KUBERNETES_SERVICE_HOST
value: "localhost"
- name: KUBERNETES_SERVICE_PORT
value: "7445"
lifecycle:
postStart:
exec:
command:
- "bash"
- "-c"
- |
set -o errexit
set -o pipefail
set -o nounset
# When running in AWS ENI mode, it's likely that 'aws-node' has
# had a chance to install SNAT iptables rules. These can result
# in dropped traffic, so we should attempt to remove them.
# We do it using a 'postStart' hook since this may need to run
# for nodes which might have already been init'ed but may still
# have dangling rules. This is safe because there are no
# dependencies on anything that is part of the startup script
# itself, and can be safely run multiple times per node (e.g. in
# case of a restart).
if [[ "$(iptables-save | grep -E -c 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN')" != "0" ]];
then
echo 'Deleting iptables rules created by the AWS CNI VPC plugin'
iptables-save | grep -E -v 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' | iptables-restore
fi
echo 'Done!'
preStop:
exec:
command:
- /cni-uninstall.sh
securityContext:
seLinuxOptions:
level: s0
type: spc_t
capabilities:
add:
- CHOWN
- KILL
- NET_ADMIN
- NET_RAW
- IPC_LOCK
- SYS_ADMIN
- SYS_RESOURCE
- DAC_OVERRIDE
- FOWNER
- SETGID
- SETUID
drop:
- ALL
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- name: envoy-sockets
mountPath: /var/run/cilium/envoy/sockets
readOnly: false
# Unprivileged containers need to mount /proc/sys/net from the host
# to have write access
- mountPath: /host/proc/sys/net
name: host-proc-sys-net
# Unprivileged containers need to mount /proc/sys/kernel from the host
# to have write access
- mountPath: /host/proc/sys/kernel
name: host-proc-sys-kernel
- name: bpf-maps
mountPath: /sys/fs/bpf
# Unprivileged containers can't set mount propagation to bidirectional
# in this case we will mount the bpf fs from an init container that
# is privileged and set the mount propagation from host to container
# in Cilium.
mountPropagation: HostToContainer
# Check for duplicate mounts before mounting
- name: cilium-cgroup
mountPath: /sys/fs/cgroup
- name: cilium-run
mountPath: /var/run/cilium
- name: etc-cni-netd
mountPath: /host/etc/cni/net.d
- name: clustermesh-secrets
mountPath: /var/lib/cilium/clustermesh
readOnly: true
# Needed to be able to load kernel modules
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: xtables-lock
mountPath: /run/xtables.lock
- name: hubble-tls
mountPath: /var/lib/cilium/tls/hubble
readOnly: true
- name: tmp
mountPath: /tmp
initContainers:
- name: config
image: "quay.io/cilium/cilium:v1.16.4@sha256:d55ec38938854133e06739b1af237932b9c4dd4e75e9b7b2ca3acc72540a44bf"
imagePullPolicy: IfNotPresent
command:
- cilium-dbg
- build-config
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: KUBERNETES_SERVICE_HOST
value: "localhost"
- name: KUBERNETES_SERVICE_PORT
value: "7445"
volumeMounts:
- name: tmp
mountPath: /tmp
terminationMessagePolicy: FallbackToLogsOnError
- name: apply-sysctl-overwrites
image: "quay.io/cilium/cilium:v1.16.4@sha256:d55ec38938854133e06739b1af237932b9c4dd4e75e9b7b2ca3acc72540a44bf"
imagePullPolicy: IfNotPresent
env:
- name: BIN_PATH
value: /opt/cni/bin
command:
- sh
- -ec
# The statically linked Go binary is copied to the host and invoked via
# nsenter to avoid any dependency on utilities like sh that can be missing
# on certain distros installed on the underlying host. The binary is copied
# into the same directory where we install the cilium CNI plugin so that
# exec permissions are available.
- |
cp /usr/bin/cilium-sysctlfix /hostbin/cilium-sysctlfix;
nsenter --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-sysctlfix";
rm /hostbin/cilium-sysctlfix
volumeMounts:
- name: hostproc
mountPath: /hostproc
- name: cni-path
mountPath: /hostbin
terminationMessagePolicy: FallbackToLogsOnError
securityContext:
seLinuxOptions:
level: s0
type: spc_t
capabilities:
add:
- SYS_ADMIN
- SYS_CHROOT
- SYS_PTRACE
drop:
- ALL
# Mount the bpf fs if it is not mounted. We will perform this task
# from a privileged container because the mount propagation bidirectional
# only works from privileged containers.
- name: mount-bpf-fs
image: "quay.io/cilium/cilium:v1.16.4@sha256:d55ec38938854133e06739b1af237932b9c4dd4e75e9b7b2ca3acc72540a44bf"
imagePullPolicy: IfNotPresent
args:
- 'mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf'
command:
- /bin/bash
- -c
- --
terminationMessagePolicy: FallbackToLogsOnError
securityContext:
privileged: true
volumeMounts:
- name: bpf-maps
mountPath: /sys/fs/bpf
mountPropagation: Bidirectional
- name: clean-cilium-state
image: "quay.io/cilium/cilium:v1.16.4@sha256:d55ec38938854133e06739b1af237932b9c4dd4e75e9b7b2ca3acc72540a44bf"
imagePullPolicy: IfNotPresent
command:
- /init-container.sh
env:
- name: CILIUM_ALL_STATE
valueFrom:
configMapKeyRef:
name: cilium-config
key: clean-cilium-state
optional: true
- name: CILIUM_BPF_STATE
valueFrom:
configMapKeyRef:
name: cilium-config
key: clean-cilium-bpf-state
optional: true
- name: WRITE_CNI_CONF_WHEN_READY
valueFrom:
configMapKeyRef:
name: cilium-config
key: write-cni-conf-when-ready
optional: true
- name: KUBERNETES_SERVICE_HOST
value: "localhost"
- name: KUBERNETES_SERVICE_PORT
value: "7445"
terminationMessagePolicy: FallbackToLogsOnError
securityContext:
seLinuxOptions:
level: s0
type: spc_t
capabilities:
add:
- NET_ADMIN
- SYS_ADMIN
- SYS_RESOURCE
drop:
- ALL
volumeMounts:
- name: bpf-maps
mountPath: /sys/fs/bpf
# Required to mount cgroup filesystem from the host to cilium agent pod
- name: cilium-cgroup
mountPath: /sys/fs/cgroup
mountPropagation: HostToContainer
- name: cilium-run
mountPath: /var/run/cilium # wait-for-kube-proxy
# Install the CNI binaries in an InitContainer so we don't have a writable host mount in the agent
- name: install-cni-binaries
image: "quay.io/cilium/cilium:v1.16.4@sha256:d55ec38938854133e06739b1af237932b9c4dd4e75e9b7b2ca3acc72540a44bf"
imagePullPolicy: IfNotPresent
command:
- "/install-plugin.sh"
resources:
requests:
cpu: 100m
memory: 10Mi
securityContext:
seLinuxOptions:
level: s0
type: spc_t
capabilities:
drop:
- ALL
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- name: cni-path
mountPath: /host/opt/cni/bin # .Values.cni.install
restartPolicy: Always
priorityClassName: system-node-critical
serviceAccountName: "cilium"
automountServiceAccountToken: true
terminationGracePeriodSeconds: 1
hostNetwork: true
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
k8s-app: cilium
topologyKey: kubernetes.io/hostname
nodeSelector:
kubernetes.io/os: linux
tolerations:
- operator: Exists
volumes:
# For sharing configuration between the "config" initContainer and the agent
- name: tmp
emptyDir: {}
# To keep state between restarts / upgrades
- name: cilium-run
hostPath:
path: /var/run/cilium
type: DirectoryOrCreate
# To keep state between restarts / upgrades for bpf maps
- name: bpf-maps
hostPath:
path: /sys/fs/bpf
type: DirectoryOrCreate
# To mount cgroup2 filesystem on the host or apply sysctlfix
- name: hostproc
hostPath:
path: /proc
type: Directory
# To keep state between restarts / upgrades for cgroup2 filesystem
- name: cilium-cgroup
hostPath:
path: /sys/fs/cgroup
type: DirectoryOrCreate
# To install cilium cni plugin in the host
- name: cni-path
hostPath:
path: /opt/cni/bin
type: DirectoryOrCreate
# To install cilium cni configuration in the host
- name: etc-cni-netd
hostPath:
path: /etc/cni/net.d
type: DirectoryOrCreate
# To be able to load kernel modules
- name: lib-modules
hostPath:
path: /lib/modules
# To access iptables concurrently with other processes (e.g. kube-proxy)
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Sharing socket with Cilium Envoy on the same node by using a host path
- name: envoy-sockets
hostPath:
path: "/var/run/cilium/envoy/sockets"
type: DirectoryOrCreate
# To read the clustermesh configuration
- name: clustermesh-secrets
projected:
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
sources:
- secret:
name: cilium-clustermesh
optional: true
# note: items are not explicitly listed here, since the entries of this secret
# depend on the peers configured, and that would cause a restart of all agents
# at every addition/removal. Leaving the field empty causes each secret entry
# to be automatically projected into the volume as a file whose name is the key.
- secret:
name: clustermesh-apiserver-remote-cert
optional: true
items:
- key: tls.key
path: common-etcd-client.key
- key: tls.crt
path: common-etcd-client.crt
- key: ca.crt
path: common-etcd-client-ca.crt
# note: we configure the volume for the kvstoremesh-specific certificate
# regardless of whether KVStoreMesh is enabled or not, so that it can be
# automatically mounted in case KVStoreMesh gets subsequently enabled,
# without requiring an agent restart.
- secret:
name: clustermesh-apiserver-local-cert
optional: true
items:
- key: tls.key
path: local-etcd-client.key
- key: tls.crt
path: local-etcd-client.crt
- key: ca.crt
path: local-etcd-client-ca.crt
- name: host-proc-sys-net
hostPath:
path: /proc/sys/net
type: Directory
- name: host-proc-sys-kernel
hostPath:
path: /proc/sys/kernel
type: Directory
- name: hubble-tls
projected:
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
sources:
- secret:
name: hubble-server-certs
optional: true
items:
- key: tls.crt
path: server.crt
- key: tls.key
path: server.key
- key: ca.crt
path: client-ca.crt
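# After applying this DaemonSet, a minimal health check (a sketch):
#   kubectl -n kube-system rollout status ds/cilium
#   kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg status --brief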
---
# Source: cilium/templates/cilium-envoy/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: cilium-envoy
namespace: kube-system
labels:
k8s-app: cilium-envoy
app.kubernetes.io/part-of: cilium
app.kubernetes.io/name: cilium-envoy
name: cilium-envoy
spec:
selector:
matchLabels:
k8s-app: cilium-envoy
updateStrategy:
rollingUpdate:
maxUnavailable: 2
type: RollingUpdate
template:
metadata:
annotations:
labels:
k8s-app: cilium-envoy
name: cilium-envoy
app.kubernetes.io/name: cilium-envoy
app.kubernetes.io/part-of: cilium
spec:
securityContext:
appArmorProfile:
type: Unconfined
containers:
- name: cilium-envoy
image: "quay.io/cilium/cilium-envoy:v1.30.7-1731393961-97edc2815e2c6a174d3d12e71731d54f5d32ea16@sha256:0287b36f70cfbdf54f894160082f4f94d1ee1fb10389f3a95baa6c8e448586ed"
imagePullPolicy: IfNotPresent
command:
- /usr/bin/cilium-envoy-starter
args:
- '--'
- '-c /var/run/cilium/envoy/bootstrap-config.json'
- '--base-id 0'
- '--log-level info'
- '--log-format [%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v'
startupProbe:
httpGet:
host: "127.0.0.1"
path: /healthz
port: 9878
scheme: HTTP
failureThreshold: 105
periodSeconds: 2
successThreshold: 1
initialDelaySeconds: 5
livenessProbe:
httpGet:
host: "127.0.0.1"
path: /healthz
port: 9878
scheme: HTTP
periodSeconds: 30
successThreshold: 1
failureThreshold: 10
timeoutSeconds: 5
readinessProbe:
httpGet:
host: "127.0.0.1"
path: /healthz
port: 9878
scheme: HTTP
periodSeconds: 30
successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: KUBERNETES_SERVICE_HOST
value: "localhost"
- name: KUBERNETES_SERVICE_PORT
value: "7445"
ports:
- name: envoy-metrics
containerPort: 9964
hostPort: 9964
protocol: TCP
securityContext:
seLinuxOptions:
level: s0
type: spc_t
capabilities:
add:
- NET_ADMIN
- SYS_ADMIN
drop:
- ALL
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- name: envoy-sockets
mountPath: /var/run/cilium/envoy/sockets
readOnly: false
- name: envoy-artifacts
mountPath: /var/run/cilium/envoy/artifacts
readOnly: true
- name: envoy-config
mountPath: /var/run/cilium/envoy/
readOnly: true
- name: bpf-maps
mountPath: /sys/fs/bpf
mountPropagation: HostToContainer
restartPolicy: Always
priorityClassName: system-node-critical
serviceAccountName: "cilium-envoy"
automountServiceAccountToken: true
terminationGracePeriodSeconds: 1
hostNetwork: true
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: cilium.io/no-schedule
operator: NotIn
values:
- "true"
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
k8s-app: cilium
topologyKey: kubernetes.io/hostname
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
k8s-app: cilium-envoy
topologyKey: kubernetes.io/hostname
nodeSelector:
kubernetes.io/os: linux
tolerations:
- operator: Exists
volumes:
- name: envoy-sockets
hostPath:
path: "/var/run/cilium/envoy/sockets"
type: DirectoryOrCreate
- name: envoy-artifacts
hostPath:
path: "/var/run/cilium/envoy/artifacts"
type: DirectoryOrCreate
- name: envoy-config
configMap:
name: cilium-envoy-config
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
items:
- key: bootstrap-config.json
path: bootstrap-config.json
# To keep state between restarts / upgrades for bpf maps
- name: bpf-maps
hostPath:
path: /sys/fs/bpf
type: DirectoryOrCreate
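# The envoy-health-listener from the bootstrap config binds 127.0.0.1:9878 on
# each node (hostNetwork), which is what the probes above target. A node-local
# spot check (a sketch, run on a node):
#   curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9878/healthz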
---
# Source: cilium/templates/cilium-operator/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: cilium-operator
namespace: kube-system
labels:
io.cilium/app: operator
name: cilium-operator
app.kubernetes.io/part-of: cilium
app.kubernetes.io/name: cilium-operator
spec:
# See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
# for more details.
replicas: 2
selector:
matchLabels:
io.cilium/app: operator
name: cilium-operator
# Ensure the operator can update on single-node k8s clusters by using a rolling
# update with maxUnavailable=100% when there is one replica and no user-configured
# Recreate strategy. Otherwise an update might get stuck due to the default
# maxUnavailable=50% in combination with the podAntiAffinity rule, which prevents
# scheduling multiple operator replicas on the same node.
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 50%
type: RollingUpdate
template:
metadata:
annotations:
prometheus.io/port: "9963"
prometheus.io/scrape: "true"
labels:
io.cilium/app: operator
name: cilium-operator
app.kubernetes.io/part-of: cilium
app.kubernetes.io/name: cilium-operator
spec:
containers:
- name: cilium-operator
image: "quay.io/cilium/operator-generic:v1.16.4@sha256:c55a7cbe19fe0b6b28903a085334edb586a3201add9db56d2122c8485f7a51c5"
imagePullPolicy: IfNotPresent
command:
- cilium-operator-generic
args:
- --config-dir=/tmp/cilium/config-map
- --debug=$(CILIUM_DEBUG)
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: CILIUM_DEBUG
valueFrom:
configMapKeyRef:
key: debug
name: cilium-config
optional: true
- name: KUBERNETES_SERVICE_HOST
value: "localhost"
- name: KUBERNETES_SERVICE_PORT
value: "7445"
ports:
- name: prometheus
containerPort: 9963
hostPort: 9963
protocol: TCP
livenessProbe:
httpGet:
host: "127.0.0.1"
path: /healthz
port: 9234
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 3
readinessProbe:
httpGet:
host: "127.0.0.1"
path: /healthz
port: 9234
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 5
volumeMounts:
- name: cilium-config-path
mountPath: /tmp/cilium/config-map
readOnly: true
terminationMessagePolicy: FallbackToLogsOnError
hostNetwork: true
restartPolicy: Always
priorityClassName: system-cluster-critical
serviceAccountName: "cilium-operator"
automountServiceAccountToken: true
# In HA mode, cilium-operator pods must not be scheduled on the same
# node as they will clash with each other.
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
io.cilium/app: operator
topologyKey: kubernetes.io/hostname
nodeSelector:
kubernetes.io/os: linux
tolerations:
- operator: Exists
volumes:
# To read the configuration from the config map
- name: cilium-config-path
configMap:
name: cilium-config
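# Finally, a minimal check that both operator replicas landed on distinct
# nodes (per the podAntiAffinity above) and became ready (a sketch):
#   kubectl -n kube-system get deploy cilium-operator
#   kubectl -n kube-system get pods -l io.cilium/app=operator -o wide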