@chuckha
Created March 30, 2018 17:23
all tests
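The names below are Kubernetes e2e (Ginkgo) test descriptions. The bracketed labels — [Conformance], [Slow], [Serial], [Disruptive], [Flaky], [Feature:...], and the leading [sig-...] owner — are the tags the suite uses to select subsets of tests (for example via a Ginkgo focus/skip expression). As a rough illustration only — assuming the list is saved to a plain-text file, with the filename "all-tests.txt" and the default tag being hypothetical choices — a small script like the following could filter it by tag and count tests per SIG:

```python
#!/usr/bin/env python3
"""Filter a saved copy of this test list by bracketed tag (illustrative sketch)."""
import re
import sys
from collections import Counter

# Tag to filter on; taken from the command line, defaulting to [Conformance].
TAG = sys.argv[1] if len(sys.argv) > 1 else "[Conformance]"

# Assumption: the list above has been saved, one test name per line, as all-tests.txt.
with open("all-tests.txt", encoding="utf-8") as f:
    tests = [line.strip() for line in f if line.strip()]

# Print every test name carrying the requested tag.
matching = [t for t in tests if TAG in t]
for name in matching:
    print(name)

# Summarize how many tests each SIG owns, based on a leading [sig-...] prefix.
sig_counts = Counter(
    m.group(0) for t in tests if (m := re.match(r"\[sig-[a-z-]+\]", t))
)
print(f"{len(matching)} tests tagged {TAG}", file=sys.stderr)
print(f"tests per SIG: {dict(sig_counts)}", file=sys.stderr)
```

For example, `python3 filter_tests.py "[Slow]"` would list only the slow tests; the same kind of expression is what a focus regex passed to the e2e runner selects on.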
[k8s.io] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
[k8s.io] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
[k8s.io] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
[k8s.io] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
[k8s.io] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
[k8s.io] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance]
[k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance]
[k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance]
[k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance]
[k8s.io] Downward API [Serial] [Disruptive] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
[k8s.io] Downward API [Serial] [Disruptive] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
[k8s.io] EquivalenceCache [Serial] validates GeneralPredicates is properly invalidated when a pod is scheduled [Slow]
[k8s.io] EquivalenceCache [Serial] validates pod affinity works properly when new replica pod is scheduled
[k8s.io] EquivalenceCache [Serial] validates pod anti-affinity works properly when new replica pod is scheduled
[k8s.io] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD]
[k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
[k8s.io] InitContainer should invoke init containers on a RestartAlways pod
[k8s.io] InitContainer should invoke init containers on a RestartNever pod
[k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod
[k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance]
[k8s.io] LimitRange should create a LimitRange with default ephemeral storage and ensure pod has the default applied.
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance]
[k8s.io] Pods should be submitted and removed [Conformance]
[k8s.io] Pods should be updated [Conformance]
[k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow]
[k8s.io] Pods should contain environment variables for services [Conformance]
[k8s.io] Pods should get a host IP [Conformance]
[k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow]
[k8s.io] Pods should support remote command execution over websockets
[k8s.io] Pods should support retrieving logs from the container over websockets
[k8s.io] PrivilegedPod should enable privileged commands
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance]
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance]
[k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout [Conformance]
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
[k8s.io] Probing container should have monotonically increasing restart count [Slow] [Conformance]
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance]
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance]
[k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node
[k8s.io] Sysctls should reject invalid sysctls
[k8s.io] Sysctls should support sysctls
[k8s.io] Sysctls should support unsafe sysctls which are actually whitelisted
[k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance]
[k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance]
[k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance]
[k8s.io] [Feature:Example] [k8s.io] Cassandra should create and scale cassandra
[k8s.io] [Feature:Example] [k8s.io] CassandraStatefulSet should create statefulset
[k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace
[k8s.io] [Feature:Example] [k8s.io] Hazelcast should create and scale hazelcast
[k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted
[k8s.io] [Feature:Example] [k8s.io] Redis should create and stop redis servers
[k8s.io] [Feature:Example] [k8s.io] RethinkDB should create and stop rethinkdb servers
[k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret
[k8s.io] [Feature:Example] [k8s.io] Spark should start spark master, driver and workers
[k8s.io] [Feature:Example] [k8s.io] Storm should create and stop Zookeeper, Nimbus and Storm worker servers
[k8s.io] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
[k8s.io] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
[k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
[k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking resource tracking for 0 pods per node
[k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking resource tracking for 100 pods per node
[k8s.io] [sig-node] Mount propagation [Feature:MountPropagation] should propagate mounts to the host
[k8s.io] [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Flaky] [Conformance]
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
[k8s.io] [sig-node] SSH should SSH to all nodes and run commands
[k8s.io] [sig-node] Security Context [Feature:SecurityContext] should support container.SecurityContext.RunAsUser
[k8s.io] [sig-node] Security Context [Feature:SecurityContext] should support pod.Spec.SecurityContext.RunAsUser
[k8s.io] [sig-node] Security Context [Feature:SecurityContext] should support pod.Spec.SecurityContext.SupplementalGroups
[k8s.io] [sig-node] Security Context [Feature:SecurityContext] should support seccomp alpha docker/default annotation [Feature:Seccomp]
[k8s.io] [sig-node] Security Context [Feature:SecurityContext] should support seccomp alpha unconfined annotation on the container [Feature:Seccomp]
[k8s.io] [sig-node] Security Context [Feature:SecurityContext] should support seccomp alpha unconfined annotation on the pod [Feature:Seccomp]
[k8s.io] [sig-node] Security Context [Feature:SecurityContext] should support seccomp default which is unconfined [Feature:Seccomp]
[k8s.io] [sig-node] Security Context [Feature:SecurityContext] should support volume SELinux relabeling
[k8s.io] [sig-node] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostIPC
[k8s.io] [sig-node] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostPID
[k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
[k8s.io] [sig-node] kubelet [k8s.io] [sig-node] host cleanup with volume mounts [sig-storage][HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
[k8s.io] [sig-node] kubelet [k8s.io] [sig-node] host cleanup with volume mounts [sig-storage][HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
[sig-api-machinery] AdmissionWebhook Should be able to deny custom resource creation
[sig-api-machinery] AdmissionWebhook Should be able to deny pod and configmap creation
[sig-api-machinery] AdmissionWebhook Should mutate configmap
[sig-api-machinery] AdmissionWebhook Should mutate crd
[sig-api-machinery] AdmissionWebhook Should mutate pod and apply defaults after mutation
[sig-api-machinery] AdmissionWebhook Should unconditionally reject operations on fail closed webhook
[sig-api-machinery] Aggregator Should be able to support the 1.7 Sample API Server using the current Aggregator
[sig-api-machinery] ConfigMap should be consumable via environment variable [Conformance]
[sig-api-machinery] ConfigMap should be consumable via the environment [Conformance]
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
[sig-api-machinery] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance]
[sig-api-machinery] Downward API should provide default limits.cpu/memory from node allocatable [Conformance]
[sig-api-machinery] Downward API should provide host IP as an env var [Conformance]
[sig-api-machinery] Downward API should provide pod UID as env vars [Conformance]
[sig-api-machinery] Downward API should provide pod name, namespace and IP address as env vars [Conformance]
[sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
[sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning
[sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so
[sig-api-machinery] Garbage collector should not be blocked by dependency circle
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so
[sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
[sig-api-machinery] Garbage collector should support cascading deletion of custom resources
[sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
[sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs
[sig-api-machinery] Initializers [Feature:Initializers] don't cause replicaset controller creating extra pods if the initializer is not handled [Serial]
[sig-api-machinery] Initializers [Feature:Initializers] should be invisible to controllers by default
[sig-api-machinery] Initializers [Feature:Initializers] should dynamically register and apply initializers to pods [Serial]
[sig-api-machinery] Initializers [Feature:Initializers] will be set to nil if a patch removes the last pending initializer
[sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
[sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted.
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted.
[sig-api-machinery] Secrets should be consumable from pods in env vars [Conformance]
[sig-api-machinery] Secrets should be consumable via the environment [Conformance]
[sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata
[sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
[sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
[sig-api-machinery] Servers with support for Table transformation should return pod details
[sig-apps] CronJob should delete successful finished jobs with limit of one successful job
[sig-apps] CronJob should not emit unexpected warnings
[sig-apps] CronJob should not schedule jobs when suspended [Slow]
[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow]
[sig-apps] CronJob should remove from active list jobs that have been deleted
[sig-apps] CronJob should replace jobs when ReplaceConcurrent
[sig-apps] CronJob should schedule multiple jobs concurrently
[sig-apps] Daemon set [Serial] Should adopt existing pods when creating a RollingUpdate DaemonSet regardless of templateGeneration
[sig-apps] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete
[sig-apps] Daemon set [Serial] Should rollback without unnecessary restarts
[sig-apps] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods
[sig-apps] Daemon set [Serial] should run and stop complex daemon
[sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
[sig-apps] Daemon set [Serial] should run and stop simple daemon
[sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
[sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
[sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones
[sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
[sig-apps] Deployment deployment should delete old replica sets
[sig-apps] Deployment deployment should support rollback
[sig-apps] Deployment deployment should support rollover
[sig-apps] Deployment iterative rollouts should eventually progress
[sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
[sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
[sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
[sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
[sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction
[sig-apps] DisruptionController evictions: no PDB => should allow an eviction
[sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
[sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction
[sig-apps] DisruptionController should create a PodDisruptionBudget
[sig-apps] DisruptionController should update PodDisruptionBudget status
[sig-apps] Job should adopt matching orphans and release non-matching pods
[sig-apps] Job should delete a job
[sig-apps] Job should exceed active deadline
[sig-apps] Job should exceed backoffLimit
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted
[sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
[sig-apps] Job should run a job to completion when tasks succeed
[sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] Pods should be evicted from unready Node [Feature:TaintEviction] All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be evicted after eviction timeout passes
[sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout
[sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned
[sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero
[sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster
[sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive]
[sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive]
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods
[sig-apps] ReplicaSet should serve a basic image on each replica with a private image
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
[sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
[sig-apps] ReplicationController should adopt matching pods on creation
[sig-apps] ReplicationController should release no longer matching pods
[sig-apps] ReplicationController should serve a basic image on each replica with a private image
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
[sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
[sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
[sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
[sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
[sig-apps] stateful Upgrade [Feature:StatefulUpgrade] [k8s.io] stateful upgrade should maintain a functioning cluster
[sig-auth] Advanced Audit should audit API calls [DisabledForLargeClusters]
[sig-auth] Certificates API should support building a client with a CSR
[sig-auth] Metadata Concealment should run a check-metadata-concealment job to completion
[sig-auth] PodSecurityPolicy should allow pods under the privileged PodSecurityPolicy
[sig-auth] PodSecurityPolicy should enforce the restricted PodSecurityPolicy
[sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
[sig-auth] ServiceAccounts should ensure a single API token exists
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
[sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create an other node
[sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete an other node
[sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
[sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
[sig-auth] [Feature:NodeAuthorizer] Getting an existent secret should exit with the Forbidden error
[sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
[sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] should add new node and new node pool on too big pod, scale down to 1 and scale down to 0 [Feature:ClusterSizeAutoscalingScaleWithNAP]
[sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
[sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
[sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
[sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
[sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
[sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
[sig-autoscaling] Cluster size autoscaling [Slow] should create new node if there is no node for node selector [Feature:ClusterSizeAutoscalingScaleWithNAP]
[sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
[sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] shouldn't add new node group if not needed [Feature:ClusterSizeAutoscalingScaleWithNAP]
[sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
[sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
[sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up if cores limit too low, should scale up after limit is changed [Feature:ClusterSizeAutoscalingScaleWithNAP]
[sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
[sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
[sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
[sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling [sig-autoscaling] Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [DisabledForLargeClusters] ReplicationController light Should scale from 1 pod to 2 pods
[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [DisabledForLargeClusters] ReplicationController light Should scale from 2 pods to 1 pod
[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should autoscale with Custom Metrics from Stackdriver [Feature:CustomMetricsAutoscaling]
[sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects
[sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects
[sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects
[sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets
[sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects
[sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects
[sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects
[sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets
[sig-cli] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl apply apply set/view last-applied
[sig-cli] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC
[sig-cli] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes
[sig-cli] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes
[sig-cli] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl run CronJob should create a CronJob
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node
[sig-cli] Kubectl client [k8s.io] Kubectl taint [Serial] should update the taint on a node
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
[sig-cli] Kubectl client [k8s.io] Simple pod should handle in-cluster config
[sig-cli] Kubectl client [k8s.io] Simple pod should return command exit codes
[sig-cli] Kubectl client [k8s.io] Simple pod should support exec
[sig-cli] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy
[sig-cli] Kubectl client [k8s.io] Simple pod should support exec through kubectl proxy
[sig-cli] Kubectl client [k8s.io] Simple pod should support inline execution and attach
[sig-cli] Kubectl client [k8s.io] Simple pod should support port-forward
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
[sig-cluster-lifecycle] Addon update should propagate add-on file changes [Slow]
[sig-cluster-lifecycle] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
[sig-cluster-lifecycle] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
[sig-cluster-lifecycle] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
[sig-cluster-lifecycle] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
[sig-cluster-lifecycle] Node Auto Repairs [Slow] [Disruptive] should repair node [Feature:NodeAutoRepairs]
[sig-cluster-lifecycle] Nodes [Disruptive] Resize [Slow] should be able to add nodes
[sig-cluster-lifecycle] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
[sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
[sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
[sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
[sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
[sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
[sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
[sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
[sig-cluster-lifecycle] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
[sig-cluster-lifecycle] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
[sig-cluster-lifecycle] Upgrade [Feature:Upgrade] node upgrade should maintain a functioning cluster [Feature:NodeUpgrade]
[sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
[sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
[sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
[sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
[sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
[sig-cluster-lifecycle] etcd Upgrade [Feature:EtcdUpgrade] etcd upgrade should maintain a functioning cluster
[sig-cluster-lifecycle] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
[sig-cluster-lifecycle] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
[sig-instrumentation] Cadvisor should be healthy on every node.
[sig-instrumentation] Cluster level logging implemented by Stackdriver [Feature:StackdriverLogging] [Soak] should ingest logs from applications running for a prolonged amount of time
[sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest events
[sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest logs
[sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest logs [Feature:StackdriverLogging]
[sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest system logs from all nodes [Feature:StackdriverLogging]
[sig-instrumentation] Cluster level logging using Elasticsearch [Feature:Elasticsearch] should check that logs from containers are ingested into Elasticsearch
[sig-instrumentation] Kibana Logging Instances Is Alive [Feature:Elasticsearch] should check that the Kibana logging instance is alive
[sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node
[sig-instrumentation] MetricsGrabber should grab all metrics from API server.
[sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
[sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
[sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
[sig-instrumentation] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster.
[sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
[sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter [Feature:StackdriverCustomMetrics]
[sig-multicluster] Multi-AZ Clusters should only be allowed to provision PDs in zones where nodes exist
[sig-multicluster] Multi-AZ Clusters should schedule pods in the same zones as statically provisioned PVs [sig-storage]
[sig-multicluster] Multi-AZ Clusters should spread the pods of a replication controller across zones
[sig-multicluster] Multi-AZ Clusters should spread the pods of a service across zones
[sig-network] ClusterDns [Feature:Example] should create pod that uses dns
[sig-network] DNS configMap federations should be able to change federation configuration [Slow][Serial]
[sig-network] DNS configMap nameserver should be able to change stubDomain configuration [Slow][Serial]
[sig-network] DNS should provide DNS for ExternalName services
[sig-network] DNS should provide DNS for pods for Hostname and Subdomain
[sig-network] DNS should provide DNS for services [Conformance]
[sig-network] DNS should provide DNS for the cluster [Conformance]
[sig-network] ESIPP [Slow] [DisabledForLargeClusters] should handle updates to ExternalTrafficPolicy field
[sig-network] ESIPP [Slow] [DisabledForLargeClusters] should only target nodes with endpoints
[sig-network] ESIPP [Slow] [DisabledForLargeClusters] should work for type=LoadBalancer
[sig-network] ESIPP [Slow] [DisabledForLargeClusters] should work for type=NodePort
[sig-network] ESIPP [Slow] [DisabledForLargeClusters] should work from pods
[sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
[sig-network] Firewall rule should have correct firewall rules for e2e cluster
[sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] multicluster ingress should get instance group annotation
[sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
[sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should create ingress with given static-ip
[sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] rolling update backend pods should not cause service disruption
[sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to switch between IG and NEG modes
[sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should conform to Ingress spec
[sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints to NEG
[sig-network] Loadbalancing: L7 [Slow] Nginx should conform to Ingress spec
[sig-network] Network should set TCP CLOSE_WAIT timeout
[sig-network] NetworkPolicy NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
[sig-network] NetworkPolicy NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
[sig-network] NetworkPolicy NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
[sig-network] NetworkPolicy NetworkPolicy between server and client should enforce policy based on NamespaceSelector [Feature:NetworkPolicy]
[sig-network] NetworkPolicy NetworkPolicy between server and client should enforce policy based on PodSelector [Feature:NetworkPolicy]
[sig-network] NetworkPolicy NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
[sig-network] NetworkPolicy NetworkPolicy between server and client should support a 'default-deny' policy [Feature:NetworkPolicy]
[sig-network] NetworkPolicy NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [Conformance]
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [Conformance]
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [Conformance]
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [Conformance]
[sig-network] Networking Granular Checks: Services [Slow] should function for client IP based session affinity: http
[sig-network] Networking Granular Checks: Services [Slow] should function for client IP based session affinity: udp
[sig-network] Networking Granular Checks: Services [Slow] should function for endpoint-Service: http
[sig-network] Networking Granular Checks: Services [Slow] should function for endpoint-Service: udp
[sig-network] Networking Granular Checks: Services [Slow] should function for node-Service: http
[sig-network] Networking Granular Checks: Services [Slow] should function for node-Service: udp
[sig-network] Networking Granular Checks: Services [Slow] should function for pod-Service: http
[sig-network] Networking Granular Checks: Services [Slow] should function for pod-Service: udp
[sig-network] Networking Granular Checks: Services [Slow] should update endpoints: http
[sig-network] Networking Granular Checks: Services [Slow] should update endpoints: udp
[sig-network] Networking Granular Checks: Services [Slow] should update nodePort: http [Slow]
[sig-network] Networking Granular Checks: Services [Slow] should update nodePort: udp [Slow]
[sig-network] Networking IPerf IPv4 [Experimental] [Feature:Networking-IPv4] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients)
[sig-network] Networking IPerf IPv6 [Experimental] [Feature:Networking-IPv6] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients)
[sig-network] Networking should check kube-proxy urls
[sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
[sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental]
[sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
[sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
[sig-network] Proxy version v1 should proxy logs on node [Conformance]
[sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance]
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
[sig-network] Proxy version v1 should proxy to cadvisor
[sig-network] Proxy version v1 should proxy to cadvisor using proxy subresource
[sig-network] Service endpoints latency should not be very high [Conformance]
[sig-network] ServiceLoadBalancer [Feature:ServiceLoadBalancer] should support simple GET on Ingress ips
[sig-network] Services [Feature:GCEAlphaFeature][Slow] should be able to create and tear down a standard-tier load balancer [Slow]
[sig-network] Services should be able to change the type and ports of a service [Slow] [DisabledForLargeClusters]
[sig-network] Services should be able to change the type from ClusterIP to ExternalName
[sig-network] Services should be able to change the type from ExternalName to ClusterIP
[sig-network] Services should be able to change the type from ExternalName to NodePort
[sig-network] Services should be able to change the type from NodePort to ExternalName
[sig-network] Services should be able to create a functioning NodePort service
[sig-network] Services should be able to create an internal type load balancer [Slow] [DisabledForLargeClusters]
[sig-network] Services should be able to up and down services
[sig-network] Services should be able to update NodePorts with two same port numbers but different protocols
[sig-network] Services should check NodePort out-of-range
[sig-network] Services should create endpoints for unready pods
[sig-network] Services should only allow access from service loadbalancer source ranges [Slow]
[sig-network] Services should preserve source pod IP for traffic thru service cluster IP
[sig-network] Services should prevent NodePort collisions
[sig-network] Services should provide secure master service [Conformance]
[sig-network] Services should release NodePorts on delete
[sig-network] Services should serve a basic endpoint from pods [Conformance]
[sig-network] Services should serve multiport endpoints from pods [Conformance]
[sig-network] Services should use same NodePort with same port but different protocols
[sig-network] Services should work after restarting apiserver [Disruptive]
[sig-network] Services should work after restarting kube-proxy [Disruptive]
[sig-scalability] Density [Feature:HighDensityPerformance] should allow starting 95 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
[sig-scalability] Density [Feature:ManualPerformance] should allow starting 100 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
[sig-scalability] Density [Feature:ManualPerformance] should allow starting 3 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
[sig-scalability] Density [Feature:ManualPerformance] should allow starting 30 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 2 daemons
[sig-scalability] Density [Feature:ManualPerformance] should allow starting 30 pods per node using {batch Job} with 0 secrets, 0 configmaps and 0 daemons
[sig-scalability] Density [Feature:ManualPerformance] should allow starting 30 pods per node using {extensions Deployment} with 0 secrets, 0 configmaps and 0 daemons
[sig-scalability] Density [Feature:ManualPerformance] should allow starting 30 pods per node using {extensions Deployment} with 0 secrets, 2 configmaps and 0 daemons
[sig-scalability] Density [Feature:ManualPerformance] should allow starting 30 pods per node using {extensions Deployment} with 2 secrets, 0 configmaps and 0 daemons
[sig-scalability] Density [Feature:ManualPerformance] should allow starting 50 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
[sig-scalability] Density [Feature:Performance] should allow starting 30 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
[sig-scalability] Empty [Feature:Empty] starts a pod
[sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
[sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node { Random} with 0 secrets, 0 configmaps and 0 daemons
[sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node { ReplicationController} with 0 secrets, 0 configmaps and 2 daemons
[sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node {batch Job} with 0 secrets, 0 configmaps and 0 daemons
[sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node {extensions Deployment} with 0 secrets, 0 configmaps and 0 daemons
[sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node {extensions Deployment} with 0 secrets, 2 configmaps and 0 daemons
[sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node {extensions Deployment} with 2 secrets, 0 configmaps and 0 daemons
[sig-scalability] Load capacity [Feature:Performance] should be able to handle 30 pods per node { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
[sig-scheduling] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available
[sig-scheduling] ResourceQuota [Feature:Initializers] should create a ResourceQuota and capture the life of an uninitialized pod.
[sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a configMap.
[sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]
[sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]
[sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a pod.
[sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a pod.
[sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a replica set.
[sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
[sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a secret.
[sig-scheduling] ResourceQuota should create a ResourceQuota and capture the life of a service.
[sig-scheduling] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
[sig-scheduling] ResourceQuota should verify ResourceQuota with best effort scope.
[sig-scheduling] ResourceQuota should verify ResourceQuota with terminating scopes.
[sig-scheduling] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]
[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
[sig-scheduling] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected
[sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
[sig-scheduling] SchedulerPreemption [Serial] [Feature:PodPreemption] validates basic preemption works
[sig-scheduling] SchedulerPreemption [Serial] [Feature:PodPreemption] validates pod anti-affinity works in preemption
[sig-scheduling] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation
[sig-scheduling] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms
[sig-scheduling] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate
[sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests on Container Optimized OS only
[sig-scheduling] [Feature:GPU] run Nvidia GPU tests on Container Optimized OS only
[sig-service-catalog] [Feature:PodPreset] PodPreset should create a pod preset
[sig-service-catalog] [Feature:PodPreset] PodPreset should not modify the pod on conflict
[sig-storage] ConfigMap optional updates should be reflected in volume [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [Feature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup]
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [Conformance]
[sig-storage] ConfigMap updates should be reflected in volume [Conformance]
[sig-storage] Downward API volume should provide container's cpu limit [Conformance]
[sig-storage] Downward API volume should provide container's cpu request [Conformance]
[sig-storage] Downward API volume should provide container's memory limit [Conformance]
[sig-storage] Downward API volume should provide container's memory request [Conformance]
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance]
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup [Feature:FSGroup]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup]
[sig-storage] Downward API volume should provide podname only [Conformance]
[sig-storage] Downward API volume should set DefaultMode on files [Conformance]
[sig-storage] Downward API volume should set mode on item file [Conformance]
[sig-storage] Downward API volume should update annotations on modification [Conformance]
[sig-storage] Downward API volume should update labels on modification [Conformance]
[sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by changing the default annotation[Slow] [Serial] [Disruptive]
[sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by removing the default annotation[Slow] [Serial] [Disruptive]
[sig-storage] Dynamic Provisioning DynamicProvisioner Default should create and delete default persistent volumes [Slow]
[sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]
[sig-storage] Dynamic Provisioning DynamicProvisioner should not provision a volume in an unmanaged GCE zone. [Slow]
[sig-storage] Dynamic Provisioning DynamicProvisioner should provision storage with different parameters [Slow]
[sig-storage] Dynamic Provisioning DynamicProvisioner should provision storage with mount options
[sig-storage] Dynamic Provisioning DynamicProvisioner should provision storage with non-default reclaim policy Retain
[sig-storage] Dynamic Provisioning DynamicProvisioner should test that deleting a claim before the volume is provisioned deletes the volume.
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance]
[sig-storage] EmptyDir volumes should support (root,0644,default) [Conformance]
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [Conformance]
[sig-storage] EmptyDir volumes should support (root,0666,default) [Conformance]
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [Conformance]
[sig-storage] EmptyDir volumes should support (root,0777,default) [Conformance]
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [Conformance]
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [Conformance]
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance]
[sig-storage] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
[sig-storage] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] new files should be created with FSGroup ownership when container is non-root
[sig-storage] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] new files should be created with FSGroup ownership when container is root
[sig-storage] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] volume on default medium should have the correct mode using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow]
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
[sig-storage] EmptyDir wrapper volumes should not conflict
[sig-storage] Flexvolumes [Disruptive] [Feature:FlexVolume] should be mountable when attachable
[sig-storage] Flexvolumes [Disruptive] [Feature:FlexVolume] should be mountable when non-attachable
[sig-storage] Flexvolumes [Disruptive] [Feature:FlexVolume] should install plugin without kubelet restart
[sig-storage] GCP Volumes GlusterFS should be mountable
[sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
[sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
[sig-storage] HostPath should give a volume the correct mode [Conformance]
[sig-storage] HostPath should support existing directory subPath
[sig-storage] HostPath should support existing single file subPath
[sig-storage] HostPath should support r/w
[sig-storage] HostPath should support subPath
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive] verify volume status after node power off
[sig-storage] PersistentVolumes GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
[sig-storage] PersistentVolumes GCEPD should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk
[sig-storage] PersistentVolumes GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.
[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access
[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access
[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access
[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access
[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access
[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access
[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access [Slow]
[sig-storage] PersistentVolumes [Feature:LabelSelector] [sig-storage] Selector-Label Volume Binding:vsphere should bind volume with claim for given label
[sig-storage] PersistentVolumes [Feature:ReclaimPolicy] [sig-storage] persistentvolumereclaim:vsphere should delete persistent volume when reclaimPolicy set to delete and associated claim is deleted
[sig-storage] PersistentVolumes [Feature:ReclaimPolicy] [sig-storage] persistentvolumereclaim:vsphere should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
[sig-storage] PersistentVolumes [Feature:ReclaimPolicy] [sig-storage] persistentvolumereclaim:vsphere should retain persistent volume when reclaimPolicy set to retain when associated claim is deleted
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: dir] when pod using local volume with non-existant path should not be able to mount
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: dir] when pod's node is different from PV's NodeAffinity should not be able to mount due to different NodeAffinity
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: dir] when pod's node is different from PV's NodeAffinity should not be able to mount due to different NodeSelector
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: dir] when pod's node is different from PV's NodeName should not be able to mount due to different NodeName
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: dir] when two pods mount a local volume at the same time should be able to write from pod1 and read from pod2
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: dir] when two pods mount a local volume one after the other should be able to write from pod1 and read from pod2
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: gce-localssd-scsi-fs] when pod using local volume with non-existant path should not be able to mount
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: gce-localssd-scsi-fs] when pod's node is different from PV's NodeAffinity should not be able to mount due to different NodeAffinity
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: gce-localssd-scsi-fs] when pod's node is different from PV's NodeAffinity should not be able to mount due to different NodeSelector
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: gce-localssd-scsi-fs] when pod's node is different from PV's NodeName should not be able to mount due to different NodeName
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: gce-localssd-scsi-fs] when two pods mount a local volume at the same time should be able to write from pod1 and read from pod2
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: gce-localssd-scsi-fs] when two pods mount a local volume one after the other should be able to write from pod1 and read from pod2
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: tmpfs] when pod using local volume with non-existant path should not be able to mount
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: tmpfs] when pod's node is different from PV's NodeAffinity should not be able to mount due to different NodeAffinity
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: tmpfs] when pod's node is different from PV's NodeAffinity should not be able to mount due to different NodeSelector
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: tmpfs] when pod's node is different from PV's NodeName should not be able to mount due to different NodeName
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: tmpfs] when two pods mount a local volume at the same time should be able to write from pod1 and read from pod2
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] [Volume type: tmpfs] when two pods mount a local volume one after the other should be able to write from pod1 and read from pod2
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] when StatefulSet has pod anti-affinity should use volumes spread across nodes
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] when one pod requests one prebound PVC should be able to mount volume and read from pod1
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] when one pod requests one prebound PVC should be able to mount volume and write from pod1
[sig-storage] PersistentVolumes-local [Feature:LocalPersistentVolumes] [Serial] when using local volume provisioner should create and recreate local persistent volume
[sig-storage] PersistentVolumes:vsphere should test that a file written to the vspehre volume mount before kubelet restart can be read after restart [Disruptive]
[sig-storage] PersistentVolumes:vsphere should test that a vspehre volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive]
[sig-storage] PersistentVolumes:vsphere should test that deleting a PVC before the pod does not cause pod deletion to fail on vsphere volume detach
[sig-storage] PersistentVolumes:vsphere should test that deleting the Namespace of a PVC and Pod causes the successful detach of vsphere volume
[sig-storage] PersistentVolumes:vsphere should test that deleting the PV before the pod does not cause pod deletion to fail on vspehre volume detach
[sig-storage] PersistentVolumes[Disruptive][Flaky] when kube-controller-manager restarts should delete a bound PVC from a clientPod, restart the kube-control-manager, and ensure the kube-controller-manager does not crash
[sig-storage] PersistentVolumes[Disruptive][Flaky] when kubelet restarts Should test that a file written to the mount before kubelet restart is readable after restart.
[sig-storage] PersistentVolumes[Disruptive][Flaky] when kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
[sig-storage] Pod Disks detach in a disrupted environment [Slow] [Disruptive] when node is deleted
[sig-storage] Pod Disks detach in a disrupted environment [Slow] [Disruptive] when node's API object is deleted
[sig-storage] Pod Disks detach in a disrupted environment [Slow] [Disruptive] when pod is evicted
[sig-storage] Pod Disks schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] using 1 containers and 2 PDs
[sig-storage] Pod Disks schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] using 4 containers and 1 PDs
[sig-storage] Pod Disks schedule pods each with a PD, delete pod and verify detach [Slow] for RW PD with pod delete grace period of "default (30s)"
[sig-storage] Pod Disks schedule pods each with a PD, delete pod and verify detach [Slow] for RW PD with pod delete grace period of "immediate (0s)"
[sig-storage] Pod Disks schedule pods each with a PD, delete pod and verify detach [Slow] for read-only PD with pod delete grace period of "default (30s)"
[sig-storage] Pod Disks schedule pods each with a PD, delete pod and verify detach [Slow] for read-only PD with pod delete grace period of "immediate (0s)"
[sig-storage] Pod Disks should be able to delete a non-existent PD without error
[sig-storage] Projected optional updates should be reflected in volume [Conformance]
[sig-storage] Projected optional updates should be reflected in volume [Conformance]
[sig-storage] Projected should be able to mount in a volume regardless of a different secret existing with same name in different namespace
[sig-storage] Projected should be consumable from pods in volume [Conformance]
[sig-storage] Projected should be consumable from pods in volume [Conformance]
[sig-storage] Projected should be consumable from pods in volume as non-root [Conformance]
[sig-storage] Projected should be consumable from pods in volume as non-root with FSGroup [Feature:FSGroup]
[sig-storage] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance]
[sig-storage] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup]
[sig-storage] Projected should be consumable from pods in volume with defaultMode set [Conformance]
[sig-storage] Projected should be consumable from pods in volume with defaultMode set [Conformance]
[sig-storage] Projected should be consumable from pods in volume with mappings [Conformance]
[sig-storage] Projected should be consumable from pods in volume with mappings [Conformance]
[sig-storage] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance]
[sig-storage] Projected should be consumable from pods in volume with mappings and Item mode set [Conformance]
[sig-storage] Projected should be consumable from pods in volume with mappings as non-root [Conformance]
[sig-storage] Projected should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup]
[sig-storage] Projected should be consumable in multiple volumes in a pod [Conformance]
[sig-storage] Projected should be consumable in multiple volumes in the same pod [Conformance]
[sig-storage] Projected should project all components that make up the projection API [Projection] [Conformance]
[sig-storage] Projected should provide container's cpu limit [Conformance]
[sig-storage] Projected should provide container's cpu request [Conformance]
[sig-storage] Projected should provide container's memory limit [Conformance]
[sig-storage] Projected should provide container's memory request [Conformance]
[sig-storage] Projected should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance]
[sig-storage] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance]
[sig-storage] Projected should provide podname as non-root with fsgroup [Feature:FSGroup]
[sig-storage] Projected should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup]
[sig-storage] Projected should provide podname only [Conformance]
[sig-storage] Projected should set DefaultMode on files [Conformance]
[sig-storage] Projected should set mode on item file [Conformance]
[sig-storage] Projected should update annotations on modification [Conformance]
[sig-storage] Projected should update labels on modification [Conformance]
[sig-storage] Projected updates should be reflected in volume [Conformance]
[sig-storage] Secrets optional updates should be reflected in volume [Conformance]
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace
[sig-storage] Secrets should be consumable from pods in volume [Conformance]
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with mappings [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance]
[sig-storage] Secrets should be consumable in multiple volumes in a pod [Conformance]
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with invalid capability name objectSpaceReserve is not honored for dynamically provisioned pvc using storageclass
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with invalid diskStripes value is not honored for dynamically provisioned pvc using storageclass
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with invalid hostFailuresToTolerate value is not honored for dynamically provisioned pvc using storageclass
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with non-vsan datastore is not honored for dynamically provisioned pvc using storageclass
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid diskStripes and objectSpaceReservation values and a VSAN datastore is honored for dynamically provisioned pvc using storageclass
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid diskStripes and objectSpaceReservation values is honored for dynamically provisioned pvc using storageclass
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid hostFailuresToTolerate and cacheReservation values is honored for dynamically provisioned pvc using storageclass
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid objectSpaceReservation and iopsLimit values is honored for dynamically provisioned pvc using storageclass
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify an existing and compatible SPBM policy is honored for dynamically provisioned pvc using storageclass
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify an if a SPBM policy and VSAN capabilities cannot be honored for dynamically provisioned pvc using storageclass
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify clean up of stale dummy VM for dynamically provisioned pvc using SPBM policy
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify if a SPBM policy is not honored on a non-compatible datastore for dynamically provisioned pvc using storageclass
[sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify if a non-existing SPBM policy is not honored for dynamically provisioned pvc using storageclass
[sig-storage] Subpath [Volume type: emptyDir] should fail if non-existent subpath is outside the volume [Slow]
[sig-storage] Subpath [Volume type: emptyDir] should fail if subpath directory is outside the volume [Slow]
[sig-storage] Subpath [Volume type: emptyDir] should fail if subpath file is outside the volume [Slow]
[sig-storage] Subpath [Volume type: emptyDir] should fail if subpath with backstepping is outside the volume [Slow]
[sig-storage] Subpath [Volume type: emptyDir] should support creating multiple subpath from same volumes [Slow]
[sig-storage] Subpath [Volume type: emptyDir] should support existing directory
[sig-storage] Subpath [Volume type: emptyDir] should support existing single file
[sig-storage] Subpath [Volume type: emptyDir] should support non-existent path
[sig-storage] Subpath [Volume type: emptyDir] should support restarting containers [Slow]
[sig-storage] Subpath [Volume type: gcePDPartitioned] should fail if non-existent subpath is outside the volume [Slow]
[sig-storage] Subpath [Volume type: gcePDPartitioned] should fail if subpath directory is outside the volume [Slow]
[sig-storage] Subpath [Volume type: gcePDPartitioned] should fail if subpath file is outside the volume [Slow]
[sig-storage] Subpath [Volume type: gcePDPartitioned] should fail if subpath with backstepping is outside the volume [Slow]
[sig-storage] Subpath [Volume type: gcePDPartitioned] should support creating multiple subpath from same volumes [Slow]
[sig-storage] Subpath [Volume type: gcePDPartitioned] should support existing directory
[sig-storage] Subpath [Volume type: gcePDPartitioned] should support existing single file
[sig-storage] Subpath [Volume type: gcePDPartitioned] should support non-existent path
[sig-storage] Subpath [Volume type: gcePDPartitioned] should support restarting containers [Slow]
[sig-storage] Subpath [Volume type: gcePD] should fail if non-existent subpath is outside the volume [Slow]
[sig-storage] Subpath [Volume type: gcePD] should fail if subpath directory is outside the volume [Slow]
[sig-storage] Subpath [Volume type: gcePD] should fail if subpath file is outside the volume [Slow]
[sig-storage] Subpath [Volume type: gcePD] should fail if subpath with backstepping is outside the volume [Slow]
[sig-storage] Subpath [Volume type: gcePD] should support creating multiple subpath from same volumes [Slow]
[sig-storage] Subpath [Volume type: gcePD] should support existing directory
[sig-storage] Subpath [Volume type: gcePD] should support existing single file
[sig-storage] Subpath [Volume type: gcePD] should support non-existent path
[sig-storage] Subpath [Volume type: gcePD] should support restarting containers [Slow]
[sig-storage] Subpath [Volume type: gluster] should fail if non-existent subpath is outside the volume [Slow]
[sig-storage] Subpath [Volume type: gluster] should fail if subpath directory is outside the volume [Slow]
[sig-storage] Subpath [Volume type: gluster] should fail if subpath file is outside the volume [Slow]
[sig-storage] Subpath [Volume type: gluster] should fail if subpath with backstepping is outside the volume [Slow]
[sig-storage] Subpath [Volume type: gluster] should support creating multiple subpath from same volumes [Slow]
[sig-storage] Subpath [Volume type: gluster] should support existing directory
[sig-storage] Subpath [Volume type: gluster] should support existing single file
[sig-storage] Subpath [Volume type: gluster] should support non-existent path
[sig-storage] Subpath [Volume type: gluster] should support restarting containers [Slow]
[sig-storage] Subpath [Volume type: hostPathSymlink] should fail if non-existent subpath is outside the volume [Slow]
[sig-storage] Subpath [Volume type: hostPathSymlink] should fail if subpath directory is outside the volume [Slow]
[sig-storage] Subpath [Volume type: hostPathSymlink] should fail if subpath file is outside the volume [Slow]
[sig-storage] Subpath [Volume type: hostPathSymlink] should fail if subpath with backstepping is outside the volume [Slow]
[sig-storage] Subpath [Volume type: hostPathSymlink] should support creating multiple subpath from same volumes [Slow]
[sig-storage] Subpath [Volume type: hostPathSymlink] should support existing directory
[sig-storage] Subpath [Volume type: hostPathSymlink] should support existing single file
[sig-storage] Subpath [Volume type: hostPathSymlink] should support non-existent path
[sig-storage] Subpath [Volume type: hostPathSymlink] should support restarting containers [Slow]
[sig-storage] Subpath [Volume type: hostPath] should fail if non-existent subpath is outside the volume [Slow]
[sig-storage] Subpath [Volume type: hostPath] should fail if subpath directory is outside the volume [Slow]
[sig-storage] Subpath [Volume type: hostPath] should fail if subpath file is outside the volume [Slow]
[sig-storage] Subpath [Volume type: hostPath] should fail if subpath with backstepping is outside the volume [Slow]
[sig-storage] Subpath [Volume type: hostPath] should support creating multiple subpath from same volumes [Slow]
[sig-storage] Subpath [Volume type: hostPath] should support existing directory
[sig-storage] Subpath [Volume type: hostPath] should support existing single file
[sig-storage] Subpath [Volume type: hostPath] should support non-existent path
[sig-storage] Subpath [Volume type: hostPath] should support restarting containers [Slow]
[sig-storage] Subpath [Volume type: nfsPVC] should fail if non-existent subpath is outside the volume [Slow]
[sig-storage] Subpath [Volume type: nfsPVC] should fail if subpath directory is outside the volume [Slow]
[sig-storage] Subpath [Volume type: nfsPVC] should fail if subpath file is outside the volume [Slow]
[sig-storage] Subpath [Volume type: nfsPVC] should fail if subpath with backstepping is outside the volume [Slow]
[sig-storage] Subpath [Volume type: nfsPVC] should support creating multiple subpath from same volumes [Slow]
[sig-storage] Subpath [Volume type: nfsPVC] should support existing directory
[sig-storage] Subpath [Volume type: nfsPVC] should support existing single file
[sig-storage] Subpath [Volume type: nfsPVC] should support non-existent path
[sig-storage] Subpath [Volume type: nfsPVC] should support restarting containers [Slow]
[sig-storage] Subpath [Volume type: nfs] should fail if non-existent subpath is outside the volume [Slow]
[sig-storage] Subpath [Volume type: nfs] should fail if subpath directory is outside the volume [Slow]
[sig-storage] Subpath [Volume type: nfs] should fail if subpath file is outside the volume [Slow]
[sig-storage] Subpath [Volume type: nfs] should fail if subpath with backstepping is outside the volume [Slow]
[sig-storage] Subpath [Volume type: nfs] should support creating multiple subpath from same volumes [Slow]
[sig-storage] Subpath [Volume type: nfs] should support existing directory
[sig-storage] Subpath [Volume type: nfs] should support existing single file
[sig-storage] Subpath [Volume type: nfs] should support non-existent path
[sig-storage] Subpath [Volume type: nfs] should support restarting containers [Slow]
[sig-storage] Volume Attach Verify [Feature:vsphere][Serial][Disruptive] verify volume remains attached after master kubelet restart
[sig-storage] Volume Disk Format [Feature:vsphere] verify disk format type - eagerzeroedthick is honored for dynamically provisioned pv using storageclass
[sig-storage] Volume Disk Format [Feature:vsphere] verify disk format type - thin is honored for dynamically provisioned pv using storageclass
[sig-storage] Volume Disk Format [Feature:vsphere] verify disk format type - zeroedthick is honored for dynamically provisioned pv using storageclass
[sig-storage] Volume Disk Size [Feature:vsphere] verify dynamically provisioned pv using storageclass with an invalid disk size fails
[sig-storage] Volume FStype [Feature:vsphere] verify fstype - default value should be ext4
[sig-storage] Volume FStype [Feature:vsphere] verify fstype - ext3 formatted volume
[sig-storage] Volume FStype [Feature:vsphere] verify invalid fstype
[sig-storage] Volume Operations Storm [Feature:vsphere] should create pod with many volumes and verify no attach call fails
[sig-storage] Volume Placement should create and delete pod with multiple volumes from different datastore
[sig-storage] Volume Placement should create and delete pod with multiple volumes from same datastore
[sig-storage] Volume Placement should create and delete pod with the same volume source attach/detach to different worker nodes
[sig-storage] Volume Placement should create and delete pod with the same volume source on the same worker node
[sig-storage] Volume Placement test back to back pod creation and deletion with different volume sources on the same worker node
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] verify dynamic provision with default parameter on clustered datastore
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] verify dynamic provision with spbm policy on clustered datastore
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] verify static provisioning on clustered datastore
[sig-storage] Volume Provisioning on Datastore [Feature:vsphere] verify dynamically provisioned pv using storageclass fails on an invalid datastore
[sig-storage] Volume plugin streaming [Slow] Ceph-RBD [Feature:Volumes] should write files of various sizes, verify size, validate content
[sig-storage] Volume plugin streaming [Slow] GlusterFS should write files of various sizes, verify size, validate content
[sig-storage] Volume plugin streaming [Slow] NFS should write files of various sizes, verify size, validate content
[sig-storage] Volume plugin streaming [Slow] iSCSI [Feature:Volumes] should write files of various sizes, verify size, validate content
[sig-storage] Volumes Azure Disk [Feature:Volumes] should be mountable [Slow]
[sig-storage] Volumes Ceph RBD [Feature:Volumes] should be mountable
[sig-storage] Volumes CephFS [Feature:Volumes] should be mountable
[sig-storage] Volumes Cinder [Feature:Volumes] should be mountable
[sig-storage] Volumes ConfigMap should be mountable
[sig-storage] Volumes GlusterFS should be mountable
[sig-storage] Volumes NFS should be mountable
[sig-storage] Volumes PD should be mountable with ext3
[sig-storage] Volumes PD should be mountable with ext4
[sig-storage] Volumes PD should be mountable with xfs
[sig-storage] Volumes iSCSI [Feature:Volumes] should be mountable
[sig-storage] Volumes vsphere [Feature:Volumes] should be mountable
[sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach
[sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref
[sig-storage] vcp at scale [Feature:vsphere] vsphere scale tests
[sig-storage] vcp-performance [Feature:vsphere] vcp performance tests
[sig-storage] vsphere cloud provider stress [Feature:vsphere] vsphere stress tests
[sig-storage] vsphere statefulset vsphere statefulset testing
[sig-ui] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive
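
Each entry above is the flattened name of a Ginkgo spec: the strings passed to nested Describe/It blocks are concatenated, and the bracketed tags ([sig-*], [Conformance], [Slow], [Serial], [Feature:*]) live inside those strings so test runs can be focused or skipped with a regex (for example --ginkgo.focus="\[Conformance\]" together with --ginkgo.skip="\[Slow\]"). The snippet below is a minimal sketch of that structure only; the package name, suite name, and spec body are hypothetical and not taken from the Kubernetes tree.

```go
package example

import (
	"testing"

	. "github.com/onsi/ginkgo"
	"github.com/onsi/gomega"
)

// TestE2E wires Ginkgo into `go test` so the spec below can run.
func TestE2E(t *testing.T) {
	gomega.RegisterFailHandler(Fail)
	RunSpecs(t, "example suite") // suite name is illustrative
}

// The full spec name reported by Ginkgo is the concatenation of the
// Describe and It strings, e.g.
// "[sig-storage] ConfigMap should be consumable from pods in volume [Conformance]",
// which is exactly the form of the entries listed above.
var _ = Describe("[sig-storage] ConfigMap", func() {
	It("should be consumable from pods in volume [Conformance]", func() {
		// test body omitted in this sketch
	})
})
```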