Hi there!
The Docker cheat sheet has moved to a GitHub project at https://github.com/wsargent/docker-cheat-sheet.
Please click on the link above to go to the cheat sheet.
# Python 2-era imports: httplib, BaseHTTPServer, StringIO, and the
# PyGTK modules (gtk, gobject) are the Python 2 names of these libraries.
import socket
import struct
import sys
from httplib import HTTPResponse
from BaseHTTPServer import BaseHTTPRequestHandler
from StringIO import StringIO

import gtk
import gobject
(defun psamim-push-gtasks-todos ()
  "Asynchronously syncs to Google tasks."
  (interactive)
  (org-tags-sparse-tree t "+TODO=\"NEXT\"")
  (org-export-visible ?\s nil)
  (delete-matching-lines "^\\* .*")
  (replace-string "** NEXT" "*")
  (write-file psamim-mobile-todo-org-file nil)
  (my-window-killer)
  (message "Pushing todos started"))
Inspired by "Parsing CSS with Parsec".
Just quick notes and code that you can play with in the REPL.
By @kachayev
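The Haskell code from those notes isn't reproduced in this excerpt. As a rough, language-swapped sketch of the same idea (a hand-rolled Python analogue of Parsec-style combinators, not the gist's actual code), here is a minimal parser for a single CSS declaration:

# Minimal Parsec-style combinators: a parser takes a string and returns
# (value, rest-of-string) or raises ParseError on failure.
class ParseError(Exception):
    pass

def char(c):
    def parse(s):
        if s and s[0] == c:
            return c, s[1:]
        raise ParseError("expected %r" % c)
    return parse

def satisfy(pred):
    def parse(s):
        if s and pred(s[0]):
            return s[0], s[1:]
        raise ParseError("unexpected input %r" % s[:10])
    return parse

def many(p):
    def parse(s):
        out = []
        while True:
            try:
                v, s = p(s)
                out.append(v)
            except ParseError:
                return out, s
    return parse

# Parse a CSS declaration such as "color: red;" into a (property, value) pair.
ident = many(satisfy(lambda ch: ch.isalnum() or ch == '-'))
spaces = many(satisfy(str.isspace))

def declaration(s):
    prop, s = ident(s)
    _, s = char(':')(s)
    _, s = spaces(s)
    val, s = ident(s)
    _, s = char(';')(s)
    return (''.join(prop), ''.join(val)), s

print(declaration("color: red;")[0])  # ('color', 'red')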
#!/usr/bin/env sh
# Dial the number given as arguments via the system handler for tel: URLs.
open "tel://$*"
""" | |
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy) | |
BSD License | |
""" | |
import numpy as np | |
# data I/O | |
data = open('input.txt', 'r').read() # should be simple plain text file | |
chars = list(set(data)) | |
data_size, vocab_size = len(data), len(chars) |
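The snippet is cut off after the data is loaded. The usual next step in a character-level model (a sketch of where the file is heading, not necessarily its exact continuation) is to build lookup tables between characters and integer indices:

# Map each character to an integer index and back; the RNN consumes
# indices (encoded as one-hot vectors) rather than raw characters.
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}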
Hello software developers,
Please check your code to ensure you're not making one of the following mistakes related to cryptography.
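The list of mistakes itself is not included in this excerpt. As one hedged illustration of the kind of issue such a checklist typically covers (an assumed example, not necessarily from the original list): comparing secret values with == leaks timing information, so MAC tags should be checked with a constant-time comparison.

import hashlib
import hmac

key = b"server-side-secret"
msg = b"payload"
expected = hmac.new(key, msg, hashlib.sha256).hexdigest()

def check_bad(tag):
    return tag == expected  # early-exit string comparison: timing leak

def check_good(tag):
    return hmac.compare_digest(tag, expected)  # constant-time comparison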
// Starter Code: https://gist.github.com/eslachance/3349734a98d30011bb202f47342601d3#file-index_v12-js
const Discord = require("discord.js");
const speech = require('@google-cloud/speech');
const fs = require('fs');

/*
  DISCORD.JS VERSION 12 CODE
*/

const client = new Discord.Client();
The problem with large language models is that you can’t run them locally on your laptop. Thanks to Georgi Gerganov and his llama.cpp project, it is now possible to run Meta’s LLaMA on a single computer without a dedicated GPU.
There are multiple steps involved in running LLaMA locally on an M1 Mac after downloading the model weights; a rough sketch of those steps follows.
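The sketch below scripts the usual sequence in Python via subprocess. The commands and paths are assumptions based on llama.cpp's early README (clone, build, convert the PyTorch weights to ggml, quantize to 4-bit, run); later versions of the project renamed these tools, so treat this as illustrative rather than definitive.

import subprocess

def run(cmd, cwd=None):
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, check=True, cwd=cwd)

# 1. Fetch and build llama.cpp.
run(["git", "clone", "https://github.com/ggerganov/llama.cpp"])
run(["make"], cwd="llama.cpp")

# 2. Convert the downloaded PyTorch weights (assumed to live in
#    llama.cpp/models/7B/) to ggml format, then quantize to 4 bits
#    so the model fits in laptop RAM.
run(["python3", "convert-pth-to-ggml.py", "models/7B/", "1"], cwd="llama.cpp")
run(["./quantize", "models/7B/ggml-model-f16.bin",
     "models/7B/ggml-model-q4_0.bin", "2"], cwd="llama.cpp")

# 3. Run inference on the quantized model.
run(["./main", "-m", "models/7B/ggml-model-q4_0.bin",
     "-p", "Hello, LLaMA", "-n", "128"], cwd="llama.cpp")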