
Bernardo García (bgarcial)

{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"resourceGroupName": {
"type": "string"
},
"location": {
"type": "string",
"defaultValue": "West Europe",
@bgarcial
bgarcial / question.md
Last active October 23, 2019 21:36
Question

I have an existing Virtual Network created with two subnets: aks-subnet and persistence-subnet.

My goal is to create an Azure Kubernetes Service (AKS) cluster inside the aks-subnet.

I am creating resource groups and resources [at the subscription level][1], using the New-AzDeployment command from PowerShell Core.

Since my idea is to create a resource group and deploy resources to it, I have a nested template defining the resources to create in that resource group.

So I have the resource group created from the ARM template.
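For illustration, a minimal sketch of how such a subscription-level deployment could be invoked from PowerShell Core (the deployment name and the template/parameter file names below are assumptions, not taken from the gist):

# Hypothetical invocation; assumes you are already signed in with Connect-AzAccount.
pwsh -Command 'New-AzDeployment -Name rg-and-resources -Location "West Europe" -TemplateFile ./azuredeploy.json -TemplateParameterFile ./azuredeploy.parameters.json'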

@bgarcial
bgarcial / delete-evicted-pods-all-namespaces.sh
Created August 26, 2019 15:28 — forked from psxvoid/delete-evicted-pods-all-namespaces.sh
Delete evicted pods from all namespaces (also ImagePullBackOff and ErrImagePull)
#!/bin/sh
# based on https://gist.github.com/ipedrazas/9c622404fb41f2343a0db85b3821275d
# delete all evicted pods from all namespaces
kubectl get pods --all-namespaces | grep Evicted | awk '{print $2 " --namespace=" $1}' | xargs kubectl delete pod
# delete all pods in ImagePullBackOff state from all namespaces
kubectl get pods --all-namespaces | grep 'ImagePullBackOff' | awk '{print $2 " --namespace=" $1}' | xargs kubectl delete pod
# delete all pods in ImagePullBackOff, ErrImagePull or Evicted state from all namespaces
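# (Hedged sketch: the combined command is cut off in this preview; one way to cover
# all three states in a single pass could be the following grep -E variant.)
kubectl get pods --all-namespaces | grep -E 'ImagePullBackOff|ErrImagePull|Evicted' | awk '{print $2 " --namespace=" $1}' | xargs kubectl delete pod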
@bgarcial
bgarcial / .md
Created May 30, 2019 09:14
Cert manager, kong and acme kube helper

Installing helm and tiller

Helm is a package manager that allows you to find, share, and use software built for Kubernetes.
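As a minimal sketch, assuming a Helm 2 / Tiller setup on a cluster with RBAC enabled (the service account and binding names below are illustrative, not taken from the gist):

# Create a service account for Tiller, bind it to cluster-admin, then install Tiller.
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller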

@bgarcial
bgarcial / auth.yaml
Created April 23, 2019 14:13
Creating an ACL to allow access to KongConsumers with the basic-auth KongPlugin
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: sandbox-ingress-zcrm365
  # namespace: default
proxy:
  protocols:
  - http
  - https
  # path: /
@bgarcial
bgarcial / KongWithIngressController.yaml
Last active August 27, 2019 09:16
Kong 1.0 with ingress controller 0.3.0, with Postgres running as an external service
apiVersion: v1
kind: Namespace
metadata:
  name: kong
---
apiVersion: v1
kind: Secret
metadata:
@bgarcial
bgarcial / doc-README.md
Last active March 20, 2019 08:52
Evaluating the behavior of cert-manager and Kong. Some issue references and an analysis of our current situation

1. Analyze the reasons why kong-ingress-controller does not work with cert-manager

  • Some things that I can see:

Once I have created the ClusterIssuer (staging-environment) and the kong-ingress-zcrm365 ingress resource (our zcrm-custom ingress), cert-manager creates a new Ingress resource named cm-acme-http-solver-jr4fg to handle the ACME http01 validation:

 ⟩ kubectl get ingress 
NAME                        HOSTS                                 ADDRESS   PORTS   AGE
cm-acme-http-solver-jr4fg   test1kongletsencrypt.possibilit.nl              80      33s
⟩ kubectl describe issuers letsencrypt-prod
Name: letsencrypt-prod
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Issuer","metadata":{"annotations":{},"name":"letsencrypt-prod","namespace":"default"},...
API Version: certmanager.k8s.io/v1alpha1
Kind: Issuer
Metadata:
Creation Timestamp: 2019-03-14T14:03:06Z
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
⟩ kubectl describe certificate letsencrypt-prod
Name: letsencrypt-prod
Namespace: default
Labels: <none>
Annotations: <none>
API Version: certmanager.k8s.io/v1alpha1
Kind: Certificate
Metadata:
Creation Timestamp: 2019-03-14T14:20:15Z
Generation: 1