@aojea
Last active March 31, 2022 16:31
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main

import (
    "context"
    "flag"
    "fmt"
    "strings"
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/klog/v2"
)
func main() {
    var kubeconfig string
    var master string

    flag.StringVar(&kubeconfig, "kubeconfig", "", "absolute path to the kubeconfig file")
    flag.StringVar(&master, "master", "", "master url")
    klog.InitFlags(nil)
    flag.Parse()

    // creates the connection
    config, err := clientcmd.BuildConfigFromFlags(master, kubeconfig)
    if err != nil {
        klog.Fatal(err)
    }

    // create the objects: 1000 ConfigMaps of ~1MiB each, one goroutine per object
    for i := 0; i < 1000; i++ {
        configmap := newConfigMap(fmt.Sprintf("test-%d", i))
        go func() {
            now := time.Now()
            // creates the clientset
            clientset, err := kubernetes.NewForConfig(config)
            if err != nil {
                klog.Fatal(err)
            }
            if _, err := clientset.CoreV1().ConfigMaps("default").Create(context.TODO(), configmap, metav1.CreateOptions{}); err != nil {
                klog.Error(err)
            }
            fmt.Printf("created configmap %s in %v\n", configmap.Name, time.Since(now))
        }()
    }

    // emulate clients: 10 informers listing and watching the ConfigMaps
    for i := 0; i < 10; i++ {
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            klog.Fatal(err)
        }
        // fieldSelector := fmt.Sprintf("spec.nodeName=%s", "node-a")
        fieldSelector := ""
        informer := cache.NewSharedIndexInformer(
            &cache.ListWatch{
                ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
                    return clientset.CoreV1().ConfigMaps("default").List(context.Background(),
                        metav1.ListOptions{
                            FieldSelector: fieldSelector,
                        })
                },
                WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
                    return clientset.CoreV1().ConfigMaps("default").Watch(context.Background(), metav1.ListOptions{
                        FieldSelector: fieldSelector,
                    })
                },
            },
            &v1.ConfigMap{},
            time.Second*30,
            cache.Indexers{},
        )
        // Now let's start the controller
        stop := make(chan struct{})
        defer close(stop)
        go informer.Run(stop)
        fmt.Println("DEBUG informer", i)
    }

    // Wait forever
    select {}
}

func newConfigMap(name string) *v1.ConfigMap {
    return &v1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{
            Namespace: "",
            Name:      name,
        },
        Data: map[string]string{
            // ~1MiB payload per object
            "data-1": strings.Repeat("a", 1024*1024),
        },
    }
}
aojea commented Mar 31, 2022

The main factors that impact the scalability of a Kubernetes cluster are:

  • the number of objects, regardless of the kind: Pods, Secrets, ConfigMaps, ...
  • the size of the objects: ConfigMaps or Secrets with 1MB payloads, ...
  • the number of clients querying these objects: listing 1000 objects of 1MB each means sending ~1GB of data through the network per client (a back-of-the-envelope sketch follows this list)
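
To put rough numbers on that last point, here is a minimal back-of-the-envelope sketch; the counts are illustrative, not measurements from this gist:

package main

import "fmt"

func main() {
    const (
        objects    = 1000    // objects of the watched kind
        objectSize = 1 << 20 // ~1MiB payload per object
        clients    = 10      // clients doing a full LIST (e.g. on a relist)
    )
    perList := objects * objectSize // bytes moved by a single full LIST
    total := perList * clients      // bytes if every client relists at once
    fmt.Printf("one LIST ~%d MiB, %d clients relisting ~%d MiB\n",
        perList>>20, clients, total>>20)
}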

Those issues are well known and some of them are already mitigated; for example, there is a watch cache that serves the data from the apiserver's memory instead of round-tripping to etcd constantly, and there are different efforts in progress to solve some of the remaining problems. But resources are finite, so there will always be a limit on the capacity of the cluster, and the resources of the nodes will make a difference 😄
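
As a concrete (and hedged) illustration of the watch cache remark: a list request with ResourceVersion: "0" allows the apiserver to answer from its in-memory cache instead of reading through to etcd, which is roughly what client-go reflectors rely on. The helper below is a sketch, not part of the gist:

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// listFromWatchCache lists ConfigMaps with ResourceVersion "0"; the apiserver
// may serve this from its watch cache rather than doing a quorum read on etcd.
func listFromWatchCache(clientset kubernetes.Interface) error {
    cms, err := clientset.CoreV1().ConfigMaps("default").List(context.Background(),
        metav1.ListOptions{ResourceVersion: "0"})
    if err != nil {
        return err
    }
    fmt.Println("configmaps returned:", len(cms.Items))
    return nil
}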

For reference, the following links are good reading on the topic:

kubernetes/kubernetes#108003
https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/3157-watch-list

The following Go snippet allows you to reproduce these issues by controlling the previously mentioned dimensions:

This loop controls the number of objects that are going to be created, in this case ConfigMaps:

https://gist.github.com/aojea/858ab7d22c455933ca68169746b55e2f#file-main-go-L52
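
If you want to change this dimension without editing the code, one possible variation (not in the gist, the flag name is illustrative) is to drive the count from a flag and reuse the same creation logic:

// hypothetical -configmaps flag controlling how many objects the loop creates
var objects int
flag.IntVar(&objects, "configmaps", 1000, "number of ConfigMaps to create")
flag.Parse()

for i := 0; i < objects; i++ {
    configmap := newConfigMap(fmt.Sprintf("test-%d", i))
    _ = configmap // placeholder: create it exactly as in the goroutine of the original loop
}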

The objects are only created once, because they reuse the same names; running the snippet 10 times will not create 10x the number of objects.
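
One way to make that behaviour visible (a small sketch on top of the gist, not part of it) is to check the error returned by Create, which will be an AlreadyExists error on every run after the first:

import (
    apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// ...inside the creation goroutine:
created, err := clientset.CoreV1().ConfigMaps("default").Create(context.TODO(), configmap, metav1.CreateOptions{})
if apierrors.IsAlreadyExists(err) {
    fmt.Printf("configmap %s already exists, skipping\n", configmap.Name)
} else if err != nil {
    klog.Error(err)
} else {
    fmt.Printf("created configmap %s\n", created.Name)
}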

This line controls the size of the objects

https://gist.github.com/aojea/858ab7d22c455933ca68169746b55e2f#file-main-go-L113
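
A possible variation (not in the gist) is to pass the payload size as a parameter instead of hard-coding 1MiB; note that a ConfigMap cannot hold more than about 1MiB of data, so this mainly lets you scale the size down:

// newConfigMapWithSize is a hypothetical variant of newConfigMap that takes
// the payload size in bytes instead of hard-coding 1024*1024.
func newConfigMapWithSize(name string, payloadBytes int) *v1.ConfigMap {
    return &v1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{
            Namespace: "",
            Name:      name,
        },
        Data: map[string]string{
            "data-1": strings.Repeat("a", payloadBytes),
        },
    }
}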

There is an additional loop that emulates clients; it is used to emulate a node with X watching clients:
https://gist.github.com/aojea/858ab7d22c455933ca68169746b55e2f#file-main-go-L68
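
If you want each emulated client to pull less data, the commented-out fieldSelector in the gist hints at the mechanism; for ConfigMaps the supported field selectors are metadata.name and metadata.namespace, so a sketch like the following (illustrative, not in the gist) restricts a client to a single object:

// restrict the list/watch of one emulated client to a single ConfigMap by name
fieldSelector := fmt.Sprintf("metadata.name=%s", "test-0")
cms, err := clientset.CoreV1().ConfigMaps("default").List(context.Background(),
    metav1.ListOptions{FieldSelector: fieldSelector})
if err != nil {
    klog.Fatal(err)
}
fmt.Println("objects matching the selector:", len(cms.Items))
// the same fieldSelector can be plugged into the ListFunc/WatchFunc of the informer above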
