skbr1234 / default nginx configuration file
Last active December 17, 2025 23:35
The default nginx configuration file inside /etc/nginx/sites-available/default
# Author: Zameer Ansari
# You should look at the following URLs to gain a solid understanding
# of Nginx configuration files and fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
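The comments above suggest keeping this file around for reference and disabling it in sites-enabled; a minimal sketch of how that is typically done on Debian/Ubuntu (the site name "mysite" and the systemd reload are assumptions, not from the gist):

# Disable the default site by removing its symlink from sites-enabled
sudo rm /etc/nginx/sites-enabled/default

# Enable a custom site (hypothetical "mysite") by symlinking it in
sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite

# Validate the configuration, then reload nginx (systemd assumed)
sudo nginx -t && sudo systemctl reload nginx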

The PATH is an important concept when working on the command line. It's a list of directories that tells your operating system where to look for programs, so that you can just type script instead of /home/me/bin/script or C:\Users\Me\bin\script. But different operating systems have different ways to add a new directory to it:

Windows

  1. The first step depends on which version of Windows you're using:
  • If you're using Windows 8 or 10, press the Windows key, then search for and
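For Unix-like shells, a hedged sketch of the usual approach (~/bin and ~/.bashrc are assumptions; substitute your own directory and shell profile):

# Prepend a directory to PATH for the current shell session
export PATH="$HOME/bin:$PATH"

# Make the change permanent by appending the line to your shell profile
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc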
iamavnish / CKAD_Notes.txt
Last active December 10, 2025 00:58
CKAD Cheatsheet
# Good Links
http://www.yamllint.com/
https://youtu.be/02AA5JRFn5w
#########################################
minikube
#########################################
minikube start
minikube status
minikube stop
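A quick way to confirm the cluster from the commands above is actually usable (assumes kubectl is installed and pointed at the minikube context, which minikube start sets up by default):

# Start a local cluster, then confirm the node reports Ready
minikube start
kubectl get nodes

# Open the Kubernetes dashboard in a browser
minikube dashboard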
class Node:
    def __init__(self, val=0, neighbors=None):
        self.val = val
        self.neighbors = neighbors if neighbors is not None else []

def dfs(node, node_map):
    # Clone the node and register it before recursing, so cycles
    # in the graph do not cause infinite recursion.
    clone = Node(node.val)
    node_map[node.val] = clone
    for neighbor in node.neighbors:
        if neighbor.val not in node_map:
            dfs(neighbor, node_map)
        clone.neighbors.append(node_map[neighbor.val])
    return clone
adrienbrault / llama2-mac-gpu.sh
Last active April 8, 2025 13:49
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
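A hedged sketch of the likely next steps, assuming the file is hosted in TheBloke's Llama-2-13B-chat-GGML repo on Hugging Face and that this era's llama.cpp main binary is used (the URL and flags are assumptions, not part of the snippet above):

# Download the quantized model (URL assumed; adjust to wherever the file lives)
curl -L -O "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"

# Run inference with Metal GPU offload (-ngl 1 enables GPU layer offloading)
./main -m "./${MODEL}" -n 256 -ngl 1 -p "Hello, how are you?"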