A "Best of the Best Practices" (BOBP) guide to developing in Python.
- "Build tools for others that you want to be built for you." - Kenneth Reitz
- "Simplicity is alway better than functionality." - Pieter Hintjens
# post_loc.txt contains the JSON you want to POST
# -p means to POST it
# -H adds an Auth header (could be Basic or Token)
# -T sets the Content-Type
# -c is the number of concurrent clients
# -n is the number of requests to run in the test
ab -p post_loc.txt -T application/json -H 'Authorization: Token abcd1234' -c 10 -n 2000 http://example.com/api/v1/locations/
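For reference, `post_loc.txt` holds the JSON body to send. A hypothetical payload for this locations endpoint might look like:

```json
{"name": "Head office", "latitude": 52.52, "longitude": 13.405}
```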
Movie recommendations:
Music recommendations:
import string
import math

tokenize = lambda doc: doc.lower().split(" ")

document_0 = "China has a strong economy that is growing at a rapid pace. However politically it differs greatly from the US Economy."
document_1 = "At last, China seems serious about confronting an endemic problem: domestic violence and corruption."
document_2 = "Japan's prime minister, Shinzo Abe, is working towards healing the economic turmoil in his own country for his view on the future of his people."
document_3 = "Vladimir Putin is working hard to fix the economy in Russia as the Ruble has tumbled."
document_4 = "What's the future of Abenomics? We asked Shinzo Abe for his views"
""" | |
Usage: python remove_output.py notebook.ipynb [ > without_output.ipynb ] | |
Modified from remove_output by Minrk | |
""" | |
import sys | |
import io | |
import os | |
from IPython.nbformat.current import read, write |
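A sketch of how the rest of the script typically looks, following Minrk's original (this assumes the pre-4.0 `IPython.nbformat.current` API, where `read`/`write` take a format string and cells live under `nb.worksheets`):

```python
def remove_outputs(nb):
    """Strip all outputs from code cells, in place."""
    for ws in nb.worksheets:
        for cell in ws.cells:
            if cell.cell_type == 'code':
                cell.outputs = []

if __name__ == '__main__':
    fname = sys.argv[1]
    with io.open(fname, 'r', encoding='utf8') as f:
        nb = read(f, 'json')
    remove_outputs(nb)
    write(nb, sys.stdout, 'json')  # redirect stdout to save, per the usage line
```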
# Understanding neural networks: a single multiply gate
# Random approach: jiggle the inputs at random and keep the best output
import random

def forward_multiply_gate(x, y):
    return x * y

x, y = -2, 3
tweak_amount = 0.01
best_out = forward_multiply_gate(x, y)
best_x, best_y = x, y
for _ in range(100):
    x_try = x + tweak_amount * (random.random() * 2 - 1)  # nudge in [-0.01, 0.01]
    y_try = y + tweak_amount * (random.random() * 2 - 1)
    out = forward_multiply_gate(x_try, y_try)
    if out > best_out:
        best_out, best_x, best_y = out, x_try, y_try
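A natural follow-up sketch (my addition, not in the original notes): instead of random tweaks, estimate the gradient numerically and step in the direction that increases the output.

```python
h = 0.0001
x, y = -2, 3
out = forward_multiply_gate(x, y)                           # -6
x_derivative = (forward_multiply_gate(x + h, y) - out) / h  # equals y, i.e. 3
y_derivative = (forward_multiply_gate(x, y + h) - out) / h  # equals x, i.e. -2

step_size = 0.01
x += step_size * x_derivative
y += step_size * y_derivative
print(forward_multiply_gate(x, y))  # about -5.87, better than -6
```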
import numpy as np

# Count the length of each list-like value in a DataFrame column.
values = df[column].tolist()
n = len(values)
counts = np.zeros(n, dtype='int64')
for i in range(n):
    v = values[i]
    if len(v):
        counts[i] += len(v)
    else:
        # empty list-like, use a nan marker (-1 here, since int64 cannot hold NaN)
        counts[i] = -1
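For context, a tiny self-contained example of the kind of input this fragment assumes (`df` and `column` are not defined in the snippet; both are hypothetical here):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'tags': [['a', 'b'], [], ['c']]})
column = 'tags'

values = df[column].tolist()
counts = np.zeros(len(values), dtype='int64')
for i, v in enumerate(values):
    counts[i] = len(v) if len(v) else -1  # -1 marks empty list-likes
print(counts)  # [ 2 -1  1]
```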
This worked on 14 May 2023. The instructions will probably need updating in the future.

LLaMA is a text-prediction model, similar to GPT-2 and to the base (not fine-tuned) version of GPT-3. It should also be possible to run fine-tuned variants, such as Alpaca or Vicuna, with this (I think; those variants are more focused on answering questions).

Note: I have been told that this does not support multiple GPUs; it can only use a single GPU.

It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low-VRAM cards.
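As a rough illustration (the model path and layer count below are hypothetical, and the flags may have changed since, so check the llama.cpp README):

```sh
# Build llama.cpp with cuBLAS support (requires the CUDA toolkit).
make LLAMA_CUBLAS=1

# -ngl / --n-gpu-layers sets how many transformer layers go to the GPU.
# Lower the number until the model fits in your VRAM.
./main -m ./models/13B/ggml-model-q4_0.bin -ngl 18 -p "Hello"
```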
Commit used: 08737ef720f0510c7ec2aa84d7f70c691073c35d
Goals: add links that are reasonable and give good explanations of how stuff works. No hype and, if possible, no vendor content. Practical first-hand accounts and experience are preferred (super rare at this point).