- NEVER use python -c. ALWAYS use scratchpad.py. Write to it and run it with no args (see the sketch after this list).
- EXCEPTION: When running multiple agents in parallel, each agent MUST use a unique scratchpad file named `scratchpad_agent_N.py` (e.g., scratchpad_agent_1.py, scratchpad_agent_2.py) to prevent clobbering each other's work.
- I value small, concise, well-formatted, well-named code with minimal fallbacks that reports errors clearly. I don't like excessive try/catch blocks unless there's a well-defined reason for them when the code is used properly; never use them to silently handle and hide obvious user errors such as missing packages. If errors are caught, they must be reported. I value simplicity and ease of reading, since I'm a solo maintainer who keeps this codebase for many years. Better to add minimal structure now and expand later as needs permit.
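For example, a minimal sketch of the scratchpad convention above (the experiment itself is just an illustration, not a real task):

# scratchpad.py - throwaway file, freely overwritten; run with no args: python scratchpad.py
import torch

# A quick one-off check that would otherwise have been a python -c one-liner
print("cuda available:", torch.cuda.is_available())
print("device count:", torch.cuda.device_count())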
# Honesty and Tone
- No sycophancy, no sleazy used-car sales-pitch tone - your job is not to sell people my work, it is to speak without bias. State facts and flag concerns.
# PROJECT SETUP MODE:
- The following instructions apply when I ask you to set up a new project - often getting an inference project up and running. These are the defaults unless I specify otherwise, so I don't have to type them out every time.
- This computer is a "workbench" instance: a Netflix-owned Docker instance with GPU power (check the system specs for details). The /root drive is a NETWORK DRIVE and can be slow, but most folders under / are actually local, and we can make new ones. Loading large models from /root can take a very long time, so we use the following convention.
- *Base* models (as opposed to LoRAs or adapter checkpoints that we train, which stay on /root) get copied to /models/base_model_name/ (created with mkdir -p). Don't do this freehand, though: each project repo should get a localize_models.sh script that syncs the models with 32 transfer workers for speed, and any run scripts you create should load the base models from the local /models copy rather than /root (see the sketch below).
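A minimal Python sketch of what localize_models.sh does (the real convention is a shell script per project repo; the paths, the size-based skip, and the thread-pool worker mechanism here are all assumptions):

# localize_models.py - copy a base model from the slow /root network drive to fast local /models
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

SRC = "/root/models/base_model_name"  # hypothetical network-drive location
DST = "/models/base_model_name"       # local destination per the convention

def copy_one(rel_path):
    src, dst = os.path.join(SRC, rel_path), os.path.join(DST, rel_path)
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    # Skip files that already exist locally with a matching size (assumed resume logic)
    if not os.path.exists(dst) or os.path.getsize(dst) != os.path.getsize(src):
        shutil.copy2(src, dst)
    return rel_path

rel_paths = [
    os.path.relpath(os.path.join(root, f), SRC)
    for root, _, files in os.walk(SRC)
    for f in files
]
with ThreadPoolExecutor(max_workers=32) as pool:  # "32 transfer workers"
    for done in pool.map(copy_one, rel_paths):
        print("synced", done)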
# Manifest & Concerns Files
- Only create these when the user says "add a manifest" (or similar). Not every task needs them.
- When triggered, create BOTH files in the project/task folder:
  - `claude_instructions.md` (the manifest) - institutional memory for the project
  - `concerns.md` - live log of progress, issues, and findings
import numpy as np  # used for argwhere below

def video_path_to_strips(video_path):
    # load_video and split_tensor_into_regions are assumed to come from rp and already be in scope
    video = load_video(video_path, use_cache=True)
    # Keep the frames whose top-left pixel's first channel is bright (the marked frames)
    indices = video[:, 0, 0, 0] > 128
    indices = np.argwhere(indices)
    indices = indices[:, 0]
    frames = video[indices]
    from rp.git.Figures.film_strip import film_strip
    # Split the kept frames into "before" and "after" halves
    befores, afters = split_tensor_into_regions(frames, 1, 1, 2)
    return film_strip(befores), film_strip(afters)  # assumed completion: render each half as a film strip
/root/CleanCode/Github/VideoVaeTests/Training - Okay, so now here comes the big part. You know that code I gave you? We're going to roughly copy-paste it into a new directory - not exactly copy-paste, but pretty close. There is the VAE test training, and there is the flicker test in here: /root/cleancode/github/videovaetest/training/frameflickerinvetotest. You'll notice that all of those training folders have a few things in common, and that's really the template you need to stamp out to create training/nanobanana flickertest. We're going to call it the nanobanana flickertest (N-A-N-O-B-A-N-A-N-A). Either way, I'm using speech-to-text, so I might make some typos - that's just the transcription. In it we're going to keep everything the same except for one difference: the datasets. Mind you, copy only the source code, right? Don't copy the in
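A minimal sketch of stamping out the new training folder (the template path follows the dictation above; the new folder name and the ignore patterns are assumptions, and the non-source artifacts to skip depend on what is actually in the template):

import shutil

TEMPLATE = "/root/cleancode/github/videovaetest/training/frameflickerinvetotest"  # existing flicker test, as dictated
NEW_DIR = "/root/cleancode/github/videovaetest/training/nanobanana_flickertest"   # hypothetical new folder name

# Copy only the source code; skip checkpoints, outputs, and caches (assumed patterns)
shutil.copytree(
    TEMPLATE,
    NEW_DIR,
    ignore=shutil.ignore_patterns("*.pt", "*.ckpt", "*.safetensors", "outputs", "wandb", "__pycache__"),
)
# After copying, the only intended change is swapping in the new datasets.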
# PROOF THAT THEY SYNC (KEEP THIS COMMENT BLOCK):
# video = "raw_video.mp4"
# indices = load_json('metadata.json').chosen_frame_indices
# keyframes = "before.png"
# keyframes = load_image(keyframes, use_cache=True)
# keyframes = split_tensor_into_regions(keyframes, 4, 4)
# video = load_video_via_decord(video, indices=indices)
# h, w = get_video_dimensions(keyframes)
# video = resize_video_to_hold(video, h, w, show_progress=True)
# video, keyframes = crop_videos_to_min_size([video, keyframes], origin='center', show_progress=True)
ours = [
    "/Users/rburgert/Downloads/" + x + ".mp4"
    for x in [
        "Z43UTPK", "8B28762", "8ET4BJE", "JG6KQK9", "VPLZGXA", "8S952UF",
        "VNMGVTW", "A8C932S", "E7VBGWT", "KEPCJMA", "8V3DBTH", "Y9B3YTB",
        "345Z8GL", "SJER8VH", "KTN2ZUF", "RXVQAFM", "AWMAMJU", "RKG4JXJ",
        "YJBG6XX", "W7NX5DC", "8KYNZY2", "Y56JPQF", "ARDP49S", "NU3PUVY",
        "5UQ985H", "S7B3CBN", "HSGDM9K", "N5EMKD2", "ESEKQ8Z", "697M3PC",
        "BHYWYZU", "2T95SY2 (3)", "KBDZN5Q", "LC2QRDB (4)", "MLYAZ69 (4)", "MMJM6TL (5)",
        "K7UNWBH (4)", "JSV9PDP", "JS9KZB5", "J2MZ4VE (7)", "GR3CHT5", "E9HYK5G",