Christian Posta christian-posta

@christian-posta
christian-posta / nim-instructions.md
Created January 22, 2025 18:49
Completely Made Up Instructions for NVIDIA NIM + GKE

Here's a step-by-step guide to create a cost-conscious Kubernetes cluster in Google Kubernetes Engine (GKE), configure nodes with GPUs, and set up NVIDIA NGC Infrastructure Manager (NIM) along with deploying an LLM that uses the OpenAI API.


Step 1: Prerequisites

  1. Google Cloud Account: Ensure you have an active Google Cloud account.
  2. gcloud CLI: Install the Google Cloud SDK.
  3. kubectl: Install kubectl if it's not already installed.
  4. NVIDIA GPU Driver Support: Ensure you have access to NVIDIA resources and APIs.
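With the prerequisites in place, a cost-conscious cluster could be sketched along these lines. The cluster/pool names, zone, and machine/GPU types below are assumptions (one reasonable choice, not the only one); the script echoes the commands rather than running them, in the same spirit as the other scripts on this page:

```shell
#!/bin/bash
# Sketch only: cluster name, zone, and machine/GPU types are assumptions.
CLUSTER=nim-demo
ZONE=us-central1-a

# A small default pool for system workloads keeps the baseline cost low.
CREATE_CMD="gcloud container clusters create $CLUSTER --zone $ZONE \
  --num-nodes 1 --machine-type e2-standard-4"

# A separate GPU pool that can scale to zero, so GPUs are only billed while in use.
GPU_POOL_CMD="gcloud container node-pools create gpu-pool --cluster $CLUSTER \
  --zone $ZONE --machine-type g2-standard-8 \
  --accelerator type=nvidia-l4,count=1 \
  --enable-autoscaling --num-nodes 0 --min-nodes 0 --max-nodes 1"

# Review before running.
echo "$CREATE_CMD"
echo "$GPU_POOL_CMD"
```

Running the echoed commands (followed by `gcloud container clusters get-credentials`) would leave you with a cluster ready for the GPU driver and NIM deployment steps.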
#!/bin/bash
# Capture traffic between two pods: $1 and $2 are name fragments used to
# select the source and target pods (first match wins).
SOURCE_NAME=$(kubectl get po -A -o wide | grep "$1" | head -n 1 | awk '{print $2}')
TARGET_NAME=$(kubectl get po -A -o wide | grep "$2" | head -n 1 | awk '{print $2}')
SOURCE_IP=$(kubectl get po -A -o wide | grep "$1" | head -n 1 | awk '{print $7}')
TARGET_IP=$(kubectl get po -A -o wide | grep "$2" | head -n 1 | awk '{print $7}')
echo "Source: $SOURCE_NAME, Target: $TARGET_NAME"
echo "Source: $SOURCE_IP, Target: $TARGET_IP"
echo "Running command: sh -c \"kubectl sniff -i eth0 -o ./local.pcap $SOURCE_NAME -f '((tcp) and (net $TARGET_IP))'\""
sh -c "kubectl sniff -i eth0 -o ./local.pcap $SOURCE_NAME -f '((tcp) and (net $TARGET_IP))'"
@christian-posta
christian-posta / squashctl-debug-gloo.sh
Created February 13, 2020 16:22
Script to Debug Gloo with Squash
#!/bin/bash
# Attach the Squash debugger (dlv) to a Gloo pod; $1 optionally selects the
# pod name fragment to match (defaults to "gloo").
GLOO=${1:-gloo}
POD=$(kubectl get po -n gloo-system | grep "$GLOO" | awk '{ print $1 }' | head -n 1)
echo "gloo pod to debug '$POD'"
PF_CMD=$(squashctl --debugger dlv --namespace gloo-system --machine --pod "$POD")
echo "PF CMD: $PF_CMD"
# Rewrite the port-forward so the local side is pinned to dlv's default port (2345).
K_CMD=$(echo "$PF_CMD" | jq .PortForwardCmd | sed s/:/2345:/)
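The last line pins dlv's default port (2345) as the local side of the port-forward. A minimal sketch of that sed rewrite, using a made-up command string (the real one comes from squashctl's `--machine` JSON output):

```shell
#!/bin/bash
# Sample value for illustration only; squashctl emits the real command.
RAW_CMD='kubectl port-forward pod/gloo-7d4f9 -n gloo-system :34567'

# sed replaces the first ':' so the random remote port gets a fixed local side.
K_CMD=$(echo "$RAW_CMD" | sed 's/:/2345:/')
echo "$K_CMD"   # kubectl port-forward pod/gloo-7d4f9 -n gloo-system 2345:34567
```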

Review of service mesh 2019:

In 2019 the common themes for service mesh were:

  • more service-mesh distributions! everyone in the API/software-networking space is coming up with their own distribution of service mesh. I think this is naturally a good thing for the market, as it shows there is value to be provided here and that different approaches should be explored. it should also lead us to a point of convergence in the near future.

  • more organizations are POCing service mesh (up from just having architectural discussions the previous year)

  • usability is key! mesh technology like Linkerd has shown how a mesh can be simpler to use and operate, with other mesh technologies taking note and improving their usability

@christian-posta
christian-posta / banking-vs.yaml
Last active January 16, 2020 13:49
Blog on decentralized API for API Management
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: banking-vs
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - 'banking.api.solo.io'
    routes:
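The gist is cut off at `routes:`. For reference, a routes stanza in a Gloo VirtualService generally takes a shape like the following; the matcher style and upstream name are an illustrative guess (field names follow later Gloo Edge versions and may differ slightly in the version this gist targeted), not the gist's actual content:

```yaml
# illustrative sketch -- the original gist is truncated at `routes:`
    routes:
    - matchers:
      - prefix: /balance
      routeAction:
        single:
          upstream:
            name: banking-svc        # hypothetical upstream
            namespace: gloo-system
```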

Future of microservices:

  • Service mesh is happening in large organizations
  • There are challenges with any new, complicated technology
  • Determine whether you need a service mesh and what challenges to expect up front

Challenges of adopting a service mesh in an enterprise

I have been fortunate to work closely with enterprises adopting service mesh over the past two years, first through my work at Red Hat and now at a startup, Solo.io, that focuses entirely on successful service-mesh adoption. I have seen the progression from "I've never heard of it" to "wow, that's cool" to now "yeah, we're [going to be] doing that". Within the past year, as folks at major enterprises began putting rubber to the road, I've been at the forefront of the challenges that have cropped up, some expected, some not, as well as how those organizations have chosen to approach solutions. Adopting a service mesh has coincided with adopting and operating microservices, so there are multiple interrelated challenges.

{
  "configs": [
    {
      "@type": "type.googleapis.com/envoy.admin.v2alpha.BootstrapConfigDump",
      "bootstrap": {
        "node": {
          "id": "sidecar~10.36.0.29~sleep-849d8cf6d6-stkbx.default~default.svc.cluster.local",
          "cluster": "sleep.default",
          "metadata": {
            "ISTIO_META_INSTANCE_IPS": "10.36.0.29,10.36.0.29",
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: se-httpbin
spec:
  hosts:
  - httpbin.gcp.external
  addresses:
  - 35.232.232.38
  ports:
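This manifest is cut off at `ports:`. For reference, a ServiceEntry of this shape typically continues along the lines below; the port values and the `location`/`resolution` settings are an illustrative guess, not the gist's actual content:

```yaml
# illustrative sketch -- the original gist is truncated at `ports:`
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: se-httpbin
spec:
  hosts:
  - httpbin.gcp.external
  addresses:
  - 35.232.232.38
  ports:
  - number: 80
    name: http
    protocol: HTTP
  location: MESH_EXTERNAL
  resolution: NONE
```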
#!/bin/bash
. $(dirname ${BASH_SOURCE})/../util.sh
SOURCE_DIR=$PWD
desc "Installing Istio with a simple command"
run "supergloo install istio --name istio-demo"
run "kubectl get installs istio-demo -n supergloo-system -o yaml"
run "kubectl get pod -w -n istio-system"