If you use Atom... download and install the following packages:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    version: v0
spec:
  replicas: 1
import React, { Component } from 'react'
import { BrowserRouter as Router, Route, Link, Match, Redirect, Switch } from 'react-router-dom'
import OverviewPage from './page/OverviewPage'
import AccountPage from './page/AccountPage'
/*
  Layouts are defined inline here for demo purposes;
  you may want to define them in a separate file instead.
*/
This guide has moved to a GitHub repository to enable collaboration and community input via pull-requests.
https://github.com/alexellis/k8s-on-raspbian
Alex
Just collecting docs, articles, and discussion related to gRPC and load balancing.
https://github.com/grpc/grpc/blob/master/doc/load-balancing.md
Seems gRPC prefers thin client-side load balancing, where a client gets a list of backend servers and a load-balancing policy from a "load balancer" and then performs client-side load balancing based on that information. However, this could also be useful for traditional load-balancing approaches in cloud deployments.
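For example, in Kubernetes one common way to give each client the full list of backend addresses is a headless Service (a minimal sketch; the names, labels, and port are made up for illustration). DNS then resolves the Service name to the individual pod IPs, and a gRPC client configured with a round_robin policy can balance across them itself:

apiVersion: v1
kind: Service
metadata:
  name: my-grpc-backend        # hypothetical Service name
spec:
  clusterIP: None              # headless: DNS returns the pod IPs instead of a single virtual IP
  selector:
    app: my-grpc-server        # hypothetical label on the gRPC server pods
  ports:
    - name: grpc
      port: 50051
      targetPort: 50051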
https://groups.google.com/forum/#!topic/grpc-io/8s7UHY_Q1po
gRPC "works" in AWS. That is, you can run gRPC services on EC2 nodes and have them connect to other nodes, and everything is fine. If you are using AWS for easy access to hardware then all is fine. What doesn't work is ELB (aka CLB), and ALBs. Neither of these support HTTP/2 (h2c) in a way that gRPC needs.
# A part of the Halyard config file declaring the ECR registries.
# There can be multiple registries, each in a different AWS account.
# In this example there are three "stage" accounts: dev, stage & live.
# NOTE: The declared password files must exist and provide valid base64-encoded values,
# otherwise Halyard will end up with an exception during deployment.
# The values can be fake; they will be updated later by the Kubernetes Job (see the 2nd attached file).
# NOTE: replace ${YOUR_DEV_AWS_ACCOUNT_ID} and ${YOUR_DEV_AWS_REGION}
# with appropriate values (same for STAGE & LIVE).
dockerRegistry:
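The snippet is cut off at dockerRegistry:. As a rough sketch of what typically sits under that key (my assumption based on the usual Halyard docker-registry account layout, not taken from the original file), each ECR account gets an entry along these lines:

  enabled: true
  accounts:
    - name: dev                              # hypothetical account name
      address: https://${YOUR_DEV_AWS_ACCOUNT_ID}.dkr.ecr.${YOUR_DEV_AWS_REGION}.amazonaws.com
      username: AWS                          # ECR logins always use the literal username "AWS"
      passwordFile: /home/spinnaker/.hal/ecr-dev-password   # hypothetical path; the file the Job keeps refreshed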
// All Discord events!
// A quick and dirty fleshing out of the discord.js event listeners (not tested at all!)
// listed here -> https://discord.js.org/#/docs/main/stable/class/Client
// Learn from this, do not just copy it mofo!
//
// Saved to -> https://gist.github.com/koad/316b265a91d933fd1b62dddfcc3ff584
// Last Updated -> Halloween 2022
/*
readinessProbe:
  exec:
    command: ["/root/grpc_health_probe", "-addr=:6666"]
  initialDelaySeconds: 1
livenessProbe:
  exec:
    command: ["/root/grpc_health_probe", "-addr=:6666"]
  initialDelaySeconds: 2
imagePullPolicy: IfNotPresent
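A side note, not part of the original snippet: on newer Kubernetes versions (gRPC probes are enabled by default since v1.24 and GA since v1.27) the kubelet can run the gRPC health check itself, so the grpc_health_probe binary does not have to be baked into the image. A rough equivalent of the probes above would be:

readinessProbe:
  grpc:
    port: 6666        # same port the grpc_health_probe command targets
  initialDelaySeconds: 1
livenessProbe:
  grpc:
    port: 6666
  initialDelaySeconds: 2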
groups:
  # These sum(irate()) functions are in separate groups, so they run in parallel
  - name: istio.workload.istio_request_duration_milliseconds_bucket
    interval: 10s
    rules:
      - record: workload:istio_request_duration_milliseconds_bucket:rate1m
        expr: |
          sum(irate(istio_request_duration_milliseconds_bucket{reporter="source", source_workload!=""}[1m]))
          by (
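The expr above is truncated after by (, so the exact grouping labels are unknown. For what it's worth, a recorded series like this is typically consumed with histogram_quantile; a hypothetical follow-up rule (assuming the original groups by le and source_workload, which may not match) could look like:

  - name: istio.workload.istio_request_duration_milliseconds_p99
    interval: 10s
    rules:
      - record: workload:istio_request_duration_milliseconds:p99
        expr: |
          histogram_quantile(0.99,
            sum(workload:istio_request_duration_milliseconds_bucket:rate1m) by (le, source_workload))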
Registry as a pull through cache | Docker Documentation
Still, if you pull a large number of different images, the mirror registry itself can end up hitting the rate limit.
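For reference, the setup that doc describes comes down to running a registry instance with a proxy section pointing at Docker Hub, and then pointing the Docker daemons at that mirror via registry-mirrors in their daemon.json. A minimal sketch of the relevant part of the registry config (the credentials are optional and hypothetical; authenticated pulls just get the higher per-account Hub rate limit):

proxy:
  remoteurl: https://registry-1.docker.io
  username: mydockerhubuser          # optional, hypothetical value
  password: mydockerhubpassword      # optional, hypothetical value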