- website is changing from jekyll to hugo
- branch master:website/docs becomes branch hugo-migration:website/content/en/docs
- CRDs
- Versioning: no-op (rename) conversions coming
- Admission webhooks
- Aggregation is GA
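For context on the CRD/versioning items above, a minimal CustomResourceDefinition of the era looks roughly like this (illustrative sketch; `widgets.example.com` and its names are made up, and the per-CRD versioning/conversion work was still in flight):

```yaml
# Illustrative CRD using the apiextensions.k8s.io/v1beta1 API generation
# these notes refer to; all names here are hypothetical examples.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  version: v1alpha1   # multi-version support + conversions was the work in progress
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
```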
- WG: Apply
- Feature branch
- IDL changes (types.go)
- API changes incoming
- under-the-hood changes
- Pain Points
- client-go interface churn: "fix in next release"
- setting up webhooks / aggregated apiservers (certs)
- Helm 2.9 out
- Helm 3 proposal has been merged
- app-dev working group has come up with labeling recommendations
- started application CRD, to aggregate high level views of apps
- App Survey results are out
- Charter out, first round of feedback done
- Ongoing discussions about development tooling
- good "landing-page" for kubernetes
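The labeling recommendations from the app-dev working group eventually took the shape of shared `app.kubernetes.io/*` labels; a sketch of what they look like on an object (the label names shown are the later-standardized ones and may differ from the draft discussed here; values are examples):

```yaml
# Example metadata block using the recommended application labels
# (names per the later-standardized app.kubernetes.io convention).
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: mysql-prod
    app.kubernetes.io/version: "5.7"
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
    app.kubernetes.io/managed-by: helm
```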
- Meet every Monday at 9am PST
- sig-apps covers all tooling and workloads
- Need more participation in architecture as a general rule
- Looking at roadmaps and plans for everything
- New sig, about a year old
- PR is out for charter
- Tackling all sorts of governance and project policy
- e.g. sig leads should have representation from multiple companies
- pod identity improvements
- token request API - time limited, per pod (eventually) audience scoped
- ServiceAccountTokenProjection
- uses new volume type, kubelet making a request on behalf of the pod
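A sketch of what the ServiceAccountTokenProjection volume looks like in a pod spec (the feature was alpha at the time, so exact field names may have shifted; the audience, path, and pod name here are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          audience: my-api          # audience-scoped
          expirationSeconds: 3600   # time-limited
          path: token               # kubelet requests this token on the pod's behalf
```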
- Scheduling policy in 1.12
- Audit events now support annotations
- rbac role, binding, and psp
- webhook extra info
- node-isolation:
- kubelet tls improvements
- cert rotation
- node restrictions for daemonsets
- security conformance
- expand authentication and authorization tests
- test properties, not implementations
- maybe: add conformance tier
- expand bug bounty program
- Build a secure cluster test or a CIS benchmark native for kubernetes
- Why not have a security working group?
- been brought up multiple times, all sigs should consider security for their components
- sig auth is the glue between everything
- Changes in 1.10
- external metrics support
- scaling on metrics not directly associated with a k8s object e.g. from cloud provider
- cluster autoscaler improvements
- bug fixes
- azure improvements
- GCP automated scaling
- What's in 1.11
- vertical pod autoscaler is alpha
- metric label selectors
- tweaks to hpa v2 ergonomics
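The external-metrics support and the new metric label selectors combine in the HPA v2 API roughly like this (a sketch: the metric name, selector, and target are hypothetical; the API group was v2beta1 in this timeframe):

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: queue_messages_ready   # e.g. a metric from a cloud provider
      metricSelector:                    # the new label-selector support
        matchLabels:
          queue: worker-tasks
      targetAverageValue: "30"
```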
- Buffer pods and buffer in the cluster autoscaler?
- There have been 4 attempts to implement this and none has been successful
- Will revisit if pod priority scheduler/preemption doesn't take care of it
- Heptio authentication incoming
- Making much more use of sig repositories
- discussion forming on what falls into sig-aws
- virtual-kubelet work being done (cross sig concern)
- virtual-kubelet should go into sig-node long term
- aws plugin should still belong to sig-aws
???
???
- moving client side logic to server side
- solves issues with extensions and version skew
- overlaps with api-machinery and sig-apps
- initiatives:
- serverside printing
- define how things are printed on server side instead of client
- server-side apply
- Making remaining client-side kubectl work with extensions and version skew
- use openapi schema instead of static go structs
- use subresources
- provide more transparency on deliverables and execution
- have assignees and priority
- clear backlog
- track burn down and shuffle-sort the load
- auto labeling
- Loop in more contributors
- the pool of talent is wide and diverse
- average meetings > 20 people
- 1.11
- kubeadm
- triage and backlog burn down
- lots of cleanup
- got burned by the upgrade to 1.10; more review for future upgrades
- KEPs for:
- ComponentConfig for kubeadm
- rethink selfhosting
- initial master join workflow
- phase refactoring
- clusterAPI:
- alpha in 1.11
- new home in k8s sigs
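For context on the ComponentConfig KEP above: kubeadm already accepts a config file along these lines (a sketch using the v1alpha-era schema, so kind and field names are only indicative; the address and subnet are made up):

```yaml
# Passed as: kubeadm init --config kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.11.0
api:
  advertiseAddress: 10.0.0.10    # hypothetical control-plane address
networking:
  podSubnet: 192.168.0.0/16
```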
- 2018 principles
- if it's not automated, it had better be documented
- achieving a SPOF-less sig
- remove SPOFs where possible (YouTube accounts etc)
- growing our contributors
- mentorship has large time burdens
- out-of-box solutions:
- group mentoring
- moving up the chain
- 1-on-1 hour mentoring session
- all contributors should have a consistent, smooth process across repos
- subprojects
- devstats
- mentoring
- contributor documentation
- contributor site being built
- possible contributor twitter?
- community management
- expand more events such as contributor summit
- events
- contributor workflow and documentation
???
- New SIG (less than 10 days old)
- 20 SIG Members
- Bi-weekly meeting, Wednesdays @ 14:00 EST
- slack channel - #sig-ibmcloud
- metrics api were beta in 1.8
- last component of metrics api landed in 1.10
- heapster is being deprecated
- external metrics api
- end-to-end tests have been a problem
- lots of e2e tests had external dependencies on cloud providers, causing a lot of failures
- working with testing for 3rd party testing
- Cross sig issues
- cluster registry - sig-apimachinery
- naming (EnvironmentRegistry)
- implementing as a CRD
- Auth in Cluster Registry - sig-auth
- keeping auth outside of the cluster registry
- points to a credential store to house credentials instead of managing it itself
- discussion on still hosting some creds
- Multicluster Ingress - sig-networking
- adding features to existing ingress spec and path to GA
- Updates:
- kubernetes/cloud-provider-openstack repository
- track upstream k8s provider code
- new features landing on new repository
- goal is to remove k/k provider code in 1.11
- hosts flex and CSI drivers for cinder
- hosts openstack keystone authN/Z driver
- 3rd party e2e testing of the cloud provider
- based on upstream minikube tests
- test hosted by openlab and uses zuul v3 testing framework
- building integrations to report back to test grid
- WG Cloud Provider Collaborations
- sig
- providing example jobs for other providers with minikube and e2e tests
- Defining common standards for K8s provider
- sig-docs
- working with docs to define common standards for provider documentation
- sig-testing
- working on 3rd party integrations for provider testing
- CRI
- CRI is the big area of focus, stabilization is the priority
- CRI logging and metrics need to see further development
- containerd is adopting CRI as of 1.1
- sandbox pods - things like kata containers being integrated into a native concept at the pod level
- resource management
- work being done on evictions
- burstable workloads
- core metrics adoption within kubelet
- future of cadvisor within node
- charter
- discussed at sig-pm meetings during the last 2 months
- major points:
- formally renamed to sig-pm from sig-project-management
- covering the product, program and project management aspects of Kubernetes
- user feedback
- relatively difficult to channel feedback from outside community
- no authoritative source to say what should be in/out of the project
- trying to make sigs more successful
- feedback form: missed url
- use ML to help analyze the feedback data objectively
- working on charter moving forward
- moving release team to a sub-project, impacts code ownership etc
- downside of co-mingling of release team and release sig
- volunteer for the release team to help!
- get familiar with code base etc
- scope of sig-release as projects get moved out of the core repo
- understand release dependencies
- facilitate projects and understand what their requirements are
- don't break other tools dependent on the release process
- general trend to try and make it less reliant on Google for testing, trying to make release more vendor neutral.
- Trying to work on avoiding situation where scalability may inhibit a release
- Working on writing the charter, 1st round feedback received
- Creating a testing framework for developers to use
- Refine definition of scaling
- work on cross-provider for scalability testing
- Current:
- beta as of oct 2017
- key dev for users
- svcat: new command-line tool
- ns-scoped brokers under development
- Possibly moving to CRDs
- or apiserver storage for aggregate apis
- finalizing v1.0 wish-list
- v1.0 wish list
- complete ns-scoped brokers
- async bindings
- resolve crd decision
- generic instance actions
- generic broker actions
- GUIDs in names are problematic
- finalizing by May 14th
- Does not fall under Kubernetes Conformance
- trying to find a way where it could fall under them
- v1.10 update
- beta:
- local storage api
- added support for block volumes (alpha)
- CSI core API
- move api to beta and add support for FSType and other CSI parameters
- enable secrets to be shared with CSI driver
- moved to CSI spec v0.2
- moving storage out of core, fixes issues with 3rd party code blocking releases.
- mount propagation
- enabled privileged containers to opt-in to "bidirectional" (rshared) mounts
- ephemeral storage request/limit API
- allowed settings requests/limits on space used for container logs, images etc
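The mount-propagation and ephemeral-storage items above show up in a pod spec roughly as follows (a sketch; the pod, container, image, and path names are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storage-driver
spec:
  containers:
  - name: driver
    image: example/driver       # hypothetical image
    securityContext:
      privileged: true          # mount propagation is a privileged opt-in
    volumeMounts:
    - name: host-mnt
      mountPath: /mnt
      mountPropagation: Bidirectional   # rshared
    resources:
      requests:
        ephemeral-storage: 1Gi  # new request/limit covering logs, images, etc
      limits:
        ephemeral-storage: 2Gi
  volumes:
  - name: host-mnt
    hostPath:
      path: /mnt
```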
- Whats Next:
- in v1.11/v1.12, drive beta features towards GA/stable
- CSI
- local storage
- topology aware volume scheduling
- generic apis to handle topology in k8s and csi
- snapshot/restore APIs
- working on a series of videos to help on-board new contributors
- F2F meeting in Mt. View, California on May 15th-16th
- see sig meeting notes on mailing list for details
- testing-update
- switch to tide for kubernetes/kubernetes
- tide is the new merge pool
- tide is enabled for 40 repos, just not kubernetes/kubernetes
- retire submit-queue
- split up testing configurations for each org/sig
- each sig can own their own prow jobs
- mechanisms for promote/demote merge blocking jobs
- metrics for presubmit flakiness are now live
- Conformance Drive
- integration with kubetest and testgrid
- conformance wg
- Prow Bundle
- OSS Testgrid (currently owned by Google)
- Bundle Prow + Gubernator + testgrid
- Testing Commons Project
- Sub project for integration test framework
- https://github.com/kubernetes-sig/testing_frameworks
- moving to a world where you can use /cherrypick
- Added documentation on labels
- Looking for new contributors
- new sig
- re-architect cloud provider for vSphere
- vSphere provider interacts with a LOT of other sigs (network, node, scheduling, storage, testing and others)
- just got resources for starting CI/CD test integrations
- mid/longer term, interaction/interest in virtual-kubelet effort
- still building dev + user participation as new sig
- 47 members
- 1.11 goals
- design
- move cloud provider "out of tree"
- deliverable: design discussion, execution plan, proposed documentation
- implementation target 1.12
- Update
- kubelet and kube-proxy can now be run as Windows Services
- kubelet starting and working properly with vmware
- experimental support for hyper-v isolation (one container per pod only)
- Windows Resource Control
- resource limits and requests
- Ansible playbook for deploying a Kubernetes cluster on Windows nodes (using ovn-kubernetes)
- a LOT of bug fixes
- Next:
- CNI support and enhancements for all versions of windows
- Windows Service Accounts in Kubernetes
- CI/CD system for Windows in Kubernetes
- Lots of work being done with sig-testing to better support Windows
- Working with kubetest