@danehans
Created December 18, 2018 19:21

fedv2 high-level api
Current Primitives:
```
apiVersion: primitives.federation.k8s.io/v1alpha1
kind: FederatedFoo
metadata:
  name: foo
spec:
  clusters:
  - cluster2
  - cluster1
  template:
    spec:
      <target foo fields>
  overrides:
  - clusterName: cluster1
    <override_field>:
    - <override_value>
---
apiVersion: primitives.federation.k8s.io/v1alpha1
kind: FederatedServiceEntryPlacement
metadata:
  name: httpbin-ext
spec:
  clusterNames:
  - cluster2
  - cluster1
---
apiVersion: primitives.federation.k8s.io/v1alpha1
kind: FederatedVirtualServiceEntryOverride
metadata:
  name: httpbin-ext
spec:
  overrides:
  - clusterName: cluster1
    hosts:
    - "httpbin.istio-system.external.svc.us-west1-b.us-west1.external.daneyon.com"
  - clusterName: cluster2
    hosts:
    - "httpbin.istio-system.external.svc.us-west1-a.us-west1.external.daneyon.com"
```
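
To make the FederatedFoo skeleton concrete, here is a hypothetical FederatedServiceEntry for the httpbin-ext example above, combining a single template with the per-cluster host overrides from FederatedVirtualServiceEntryOverride. The FederatedServiceEntry kind and the template body (modeled on the common Istio httpbin ServiceEntry) are illustrative assumptions, not part of the current primitives.
```
apiVersion: primitives.federation.k8s.io/v1alpha1
kind: FederatedServiceEntry
metadata:
  name: httpbin-ext
spec:
  clusters:
  - cluster2
  - cluster1
  template:
    spec:
      # Assumed ServiceEntry body; the real target fields would go here.
      hosts:
      - "httpbin.example.com"   # placeholder host, replaced per cluster below
      location: MESH_EXTERNAL
      ports:
      - number: 80
        name: http
        protocol: HTTP
      resolution: DNS
  overrides:
  - clusterName: cluster1
    hosts:
    - "httpbin.istio-system.external.svc.us-west1-b.us-west1.external.daneyon.com"
  - clusterName: cluster2
    hosts:
    - "httpbin.istio-system.external.svc.us-west1-a.us-west1.external.daneyon.com"
```
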
High-level API equivalent:
```
apiVersion: extensions.federation.k8s.io/v1alpha1
kind: FederatedApp
metadata:
  name: istio
  namespace: istio-system
spec:
  domain: 5.7
  highly-available: true
  autoUpgradePolicy: minor
  encrypted: true
  size: 100Gi
```
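
For comparison, a similarly high-level object for the httpbin-ext ServiceEntry example might look like the sketch below. The field names (clusters, hostOverrides) are purely hypothetical; the point is only that the placement and override primitives collapse into a single resource.
```
apiVersion: extensions.federation.k8s.io/v1alpha1
kind: FederatedApp
metadata:
  name: httpbin-ext
  namespace: istio-system
spec:
  clusters:                # hypothetical: stands in for FederatedServiceEntryPlacement
  - cluster1
  - cluster2
  hostOverrides:           # hypothetical: stands in for FederatedVirtualServiceEntryOverride
    cluster1:
    - "httpbin.istio-system.external.svc.us-west1-b.us-west1.external.daneyon.com"
    cluster2:
    - "httpbin.istio-system.external.svc.us-west1-a.us-west1.external.daneyon.com"
```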

The Istio telemetry pod fails. For comparison, here are the logs from a working telemetry pod in a standard Istio deployment:
```
$ kubectl logs istio-telemetry-68787476f4-5m272 -n istio-system
Error from server (BadRequest): a container name must be specified for pod istio-telemetry-68787476f4-5m272, choose one of: [mixer istio-proxy]
$ kubectl logs istio-telemetry-68787476f4-5m272 -n istio-system -c mixer
Mixer started with
MaxMessageSize: 1048576
MaxConcurrentStreams: 1024
APIWorkerPoolSize: 1024
AdapterWorkerPoolSize: 1024
APIPort: 9091
APIAddress: unix:///sock/mixer.socket
MonitoringPort: 9093
EnableProfiling: true
SingleThreaded: false
NumCheckCacheEntries: 1500000
ConfigStoreURL: k8s://
ConfigDefaultNamespace: istio-system
LoggingOptions: log.Options{OutputPaths:[]string{"stdout"}, ErrorOutputPaths:[]string{"stderr"}, RotateOutputPath:"", RotationMaxSize:104857600, RotationMaxAge:30, RotationMaxBackups:1000, JSONEncoding:false, LogGrpc:true, outputLevels:"default:info", logCallers:"", stackTraceLevels:"default:none"}
TracingOptions: tracing.Options{ZipkinURL:"http://zipkin:9411/api/v1/spans", JaegerURL:"", LogTraceSpans:false}
IntrospectionOptions: ctrlz.Options{Port:0x2694, Address:"127.0.0.1"}
LoadSheddingOptions: loadshedding.Options{Mode:0, AverageLatencyThreshold:0, SamplesPerSecond:1.7976931348623157e+308, SampleHalfLife:1000000000, MaxRequestsPerSecond:0, BurstSize:0}
gc 1 @0.014s 16%: 0.014+2.3+2.7 ms clock, 0.028+0.28/0.62/2.2+5.4 ms cpu, 4->4->2 MB, 5 MB goal, 2 P
2018-12-11T04:18:57.331236Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2018-12-11T04:18:57.331639Z info Built new config.Snapshot: id='0'
2018-12-11T04:18:57.331712Z info Cleaning up handler table, with config ID:-1
gc 2 @0.029s 23%: 0.008+6.9+5.3 ms clock, 0.017+0.75/1.7/10+10 ms cpu, 4->4->3 MB, 5 MB goal, 2 P
2018-12-11T04:18:57.350747Z info Built new config.Snapshot: id='1'
2018-12-11T04:18:57.350854Z info Cleaning up handler table, with config ID:0
2018-12-11T04:18:57.350886Z info Awaiting for config store sync...
2018-12-11T04:18:58.363196Z info Publishing 7 events
2018-12-11T04:18:58.363474Z error failed to evaluate expression for field 'Value'; unknown attribute connection.sent.bytes
2018-12-11T04:18:58.363501Z error failed to evaluate expression for field 'Value'; unknown attribute connection.received.bytes
2018-12-11T04:18:58.363574Z error failed to evaluate expression for field 'Dimensions[connection_security_policy]'; unknown attribute context.reporter.kind
2018-12-11T04:18:58.363613Z error failed to evaluate expression for field 'Value'; unknown attribute response.duration
2018-12-11T04:18:58.363631Z error failed to evaluate expression for field 'Value'; unknown attribute request.size
2018-12-11T04:18:58.363650Z error failed to evaluate expression for field 'Value'; unknown attribute response.size
2018-12-11T04:18:58.363666Z info Built new config.Snapshot: id='2'
2018-12-11T04:18:58.363749Z info Cleaning up handler table, with config ID:1
gc 3 @1.473s 1%: 0.011+11+14 ms clock, 0.022+0.91/2.2/17+28 ms cpu, 6->6->4 MB, 7 MB goal, 2 P
2018-12-11T04:18:59.782570Z info Publishing 2 events
2018-12-11T04:18:59.783930Z info Built new config.Snapshot: id='3'
2018-12-11T04:18:59.784001Z info Cleaning up handler table, with config ID:2
2018-12-11T04:19:01.155066Z info Publishing 9 events
2018-12-11T04:19:01.156976Z error Handler not found: handler='handler.prometheus'
2018-12-11T04:19:01.156992Z error No valid actions found in rule
2018-12-11T04:19:01.157016Z error Handler not found: handler='handler.stdio'
2018-12-11T04:19:01.157020Z error No valid actions found in rule
2018-12-11T04:19:01.157033Z error Handler not found: handler='handler.prometheus'
2018-12-11T04:19:01.157037Z error No valid actions found in rule
2018-12-11T04:19:01.157062Z error Handler not found: handler='handler.stdio'
2018-12-11T04:19:01.157066Z error No valid actions found in rule
2018-12-11T04:19:01.157071Z info Built new config.Snapshot: id='4'
2018-12-11T04:19:01.157138Z info adapters getting kubeconfig from: "" {"adapter": "handler.kubernetesenv.istio-system"}
2018-12-11T04:19:01.157147Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2018-12-11T04:19:01.158232Z info adapters Waiting for kubernetes cache sync... {"adapter": "handler.kubernetesenv.istio-system"}
gc 4 @3.890s 1%: 0.026+48+16 ms clock, 0.052+11/27/0+32 ms cpu, 8->9->6 MB, 9 MB goal, 2 P
2018-12-11T04:19:01.359874Z info adapters Cache sync successful. {"adapter": "handler.kubernetesenv.istio-system"}
2018-12-11T04:19:01.360457Z info Cleaning up handler table, with config ID:3
gc 5 @5.488s 1%: 0.82+18+13 ms clock, 1.6+0.23/14/19+27 ms cpu, 10->10->6 MB, 11 MB goal, 2 P
2018-12-11T04:19:03.165267Z info Publishing 2 events
2018-12-11T04:19:03.168699Z info Built new config.Snapshot: id='5'
2018-12-11T04:19:03.169350Z info adapters getting kubeconfig from: "" {"adapter": "handler.kubernetesenv.istio-system"}
2018-12-11T04:19:03.169365Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2018-12-11T04:19:03.170873Z info adapters Waiting for kubernetes cache sync... {"adapter": "handler.kubernetesenv.istio-system"}
2018-12-11T04:19:03.377530Z info adapters serving prometheus metrics on 42422 {"adapter": "handler.prometheus.istio-system"}
gc 6 @5.831s 3%: 0.009+145+93 ms clock, 0.019+70/25/17+187 ms cpu, 75->122->122 MB, 76 MB goal, 2 P
2018-12-11T04:19:03.418214Z info Starting monitor server...
Istio Mixer: [email protected]/istio-1.0.3-a44d4c8bcb427db16ca4a439adfbd8d9361b8ed3-Clean
Starting gRPC server on port 9091
2018-12-11T04:19:03.453874Z info ControlZ available at 10.44.2.8:9876
2018-12-11T04:19:03.503368Z info adapters Cache sync successful. {"adapter": "handler.kubernetesenv.istio-system"}
2018-12-11T04:19:03.521297Z info Cleaning up handler table, with config ID:4
2018-12-11T04:19:03.521347Z error adapters adapter did not close all the scheduled daemons {"adapter": "handler.kubernetesenv.istio-system"}
gc 7 @6.083s 6%: 17+176+83 ms clock, 35+18/101/134+166 ms cpu, 122->126->124 MB, 243 MB goal, 2 P
GC forced
gc 8 @126.365s 0%: 0.024+55+50 ms clock, 0.049+0/15/57+101 ms cpu, 170->170->123 MB, 246 MB goal, 2 P
$
```