By Adam Anderson
This write-up assumes you have a general understanding of the TensorFlow programming model, but perhaps you haven't kept up to date with the latest library features and standard practices.
# Sebastian Raschka 2025
#
#
# Usage:
# python llama-vs-gemma.py \
#   --auth_token hf_... \
#   --model_name meta-llama/Llama-3.2-1B \
#   --prompt medium
"""
A minimal, fast example generating text with Llama 3.1 in MLX.

To run, install the requirements:

    pip install -U mlx transformers fire

Then generate text with:

    python l3min.py "How tall is K2?"
"""
import argparse
import pyhf

pyhf.set_backend('pytorch', 'minuit')

nSig = 4.166929245
errSig = 4.166929245
nBkg = 0.11
errBkgUp = 0.20
errBkgDown = 0.11

model_json = {
import numpy as np
from pynvml.smi import nvidia_smi
import pycuda.gpuarray as ga
import pycuda.driver as cuda

nvsmi = nvidia_smi.getInstance()

def getGPUMemoryUsage(gpu_index=0):
    """Return the framebuffer memory currently in use on the given GPU, in MiB."""
    return nvsmi.DeviceQuery("memory.used")["gpu"][gpu_index]['fb_memory_usage']['used']
### JHW 2018
import numpy as np
import umap

# This code from the excellent module at:
# https://stackoverflow.com/questions/4643647/fast-prime-factorization-module
import random
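The linked Stack Overflow answer combines Miller–Rabin primality testing with Pollard's rho factorization. A self-contained sketch in that spirit (an illustrative reconstruction, not the exact module from the link) looks like this:

```python
import math
import random

def is_prime(n: int) -> bool:
    """Miller-Rabin with a fixed set of witness bases (deterministic for 64-bit n)."""
    if n < 2:
        return False
    small_primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small_primes:
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small_primes:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def factorize(n: int) -> list:
    """Return the prime factors of n with multiplicity, smallest first."""
    if n == 1:
        return []
    if is_prime(n):
        return [n]
    # Pollard's rho (Floyd cycle detection) to split the composite n
    while True:
        x = random.randrange(2, n)
        y, c, d = x, random.randrange(1, n), 1
        while d == 1:
            x = (x * x + c) % n            # tortoise: one step
            y = (y * y + c) % n            # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                         # retry with new parameters on failure
            break
    return sorted(factorize(d) + factorize(n // d))
```

For example, `factorize(360)` yields `[2, 2, 2, 3, 3, 5]`; trial division handles small factors inside `is_prime`, while rho splits the larger composites.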
import torch
from torch import LongTensor
from torch.nn import Embedding, LSTM
from torch.autograd import Variable  # Variable is deprecated; plain tensors work in modern PyTorch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

## We want to run an LSTM on a batch of 3 character sequences: ['long_str', 'tiny', 'medium']
#
# Step 1: Construct Vocabulary
# Step 2: Load indexed data (list of instances, where each instance is a list of character indices)
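The steps above, plus the padding and packing they lead up to, can be sketched end to end. The vocabulary construction and the embedding/hidden sizes here are illustrative choices, not fixed by the original:

```python
import torch
from torch.nn import Embedding, LSTM
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

seqs = ['long_str', 'tiny', 'medium']

# Step 1: construct the vocabulary (index 0 reserved for padding)
vocab = ['<pad>'] + sorted(set(''.join(seqs)))
char2idx = {ch: i for i, ch in enumerate(vocab)}

# Step 2: index the data (one list of character indices per sequence)
indexed = [[char2idx[ch] for ch in s] for s in seqs]

# Sort by length, longest first (required by enforce_sorted=True, the default)
lengths = torch.tensor([len(s) for s in indexed])
lengths, perm = lengths.sort(descending=True)
indexed = [indexed[i] for i in perm]

# Pad into a (batch, max_len) LongTensor
padded = torch.zeros(len(indexed), lengths[0].item(), dtype=torch.long)
for i, s in enumerate(indexed):
    padded[i, :len(s)] = torch.tensor(s)

embed = Embedding(num_embeddings=len(vocab), embedding_dim=4, padding_idx=0)
lstm = LSTM(input_size=4, hidden_size=5, batch_first=True)

# Pack so the LSTM skips the padded positions, then unpack the output
packed = pack_padded_sequence(embed(padded), lengths, batch_first=True)
packed_out, _ = lstm(packed)
out, out_lens = pad_packed_sequence(packed_out, batch_first=True)
print(out.shape)  # torch.Size([3, 8, 5]): (batch, max_len, hidden_size)
```

Note that `Variable` is no longer needed: since PyTorch 0.4, plain tensors carry autograd information themselves.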
As I write this short tutorial, I assume you've read my previous one about setting up macOS; if I use any tool without explanation, refer to that article.
The full version IS NOT MANDATORY: in the tutorial that follows I installed the smaller version of MacTeX and proceeded to install every needed dependency. The complete package is about a 3.5 GB download and takes roughly 5 GB on disk, while the smaller one is only about 80 MB.
Click here to download the complete version or here to download the smaller version.
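With the smaller distribution (BasicTeX), missing packages are installed on demand with `tlmgr`, the TeX Live package manager it ships with. The package names below are illustrative examples of commonly needed dependencies, not a required list:

```shell
# Update the package manager itself before installing anything
sudo tlmgr update --self

# Install packages as your documents demand them (examples only)
sudo tlmgr install latexmk
sudo tlmgr install collection-fontsrecommended
```

This on-demand approach is what keeps the ~80 MB install workable: you pay the download cost only for the packages your documents actually use.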