Here’s a complete workflow for automatically applying your own protections to files via a CI workflow.
# Save the script as protect-files.sh
chmod +x protect-files.sh
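The script body itself isn't shown in this excerpt. As a loose, hypothetical sketch — assuming the "protection" you want is simply stripping write permission from paths the pipeline must never modify — it could look like this, called from a CI step after checkout:

#!/usr/bin/env bash
# protect-files.sh — hypothetical sketch; swap in your own policy
# (chmod, chattr +i, git update-index --skip-worktree, ...) as needed.
set -euo pipefail

for path in "$@"; do
    chmod -R a-w "$path"        # make the whole tree read-only
    echo "protected: $path"
done

# In the CI job, after checkout:   ./protect-files.sh <paths you want locked down>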
##### dce example output #####
## Select a service:
## 1) video-api
## 2) video-web
## ?# 1
## Connecting to service: video-api
##############################
unalias dce | |
# V1: Docker Compose Exec with built-in service selection
dce() {
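  # Hypothetical body — the original is cut off at this point. A minimal sketch
  # that reproduces the example output above with bash's `select` builtin.
  local svc PS3='?# '
  echo "Select a service:"
  select svc in $(docker compose ps --services); do
    [ -n "$svc" ] && break
  done
  echo "Connecting to service: $svc"
  docker compose exec "$svc" "${@:-bash}"
}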
""" | |
Move each finished recording to a UNC share. | |
OBS ▶ Tools ▸ Scripts ▸ + ▸ select this file | |
--------------------------------------------------------- | |
• Works with Simple or Advanced output mode. | |
• No change to your regular Recording Path is required; | |
in fact, keeping it on a local SSD makes the initial write faster. | |
--------------------------------------------------------- | |
Tested with Python 3.11 + OBS 30. |
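A minimal sketch of what such a script can look like — an assumption, not the original implementation. It relies on the OBS 28+ frontend bindings (obs_frontend_add_event_callback, OBS_FRONTEND_EVENT_RECORDING_STOPPED, obs_frontend_get_last_recording); the destination path and the "dest" property name are placeholders:

import os
import shutil
import obspython as obs

dest = r"\\NAS\recordings"   # hypothetical UNC share — overridden via the script's properties

def script_description():
    return "Move each finished recording to a UNC share."

def script_properties():
    props = obs.obs_properties_create()
    obs.obs_properties_add_text(props, "dest", "Destination (UNC path)", obs.OBS_TEXT_DEFAULT)
    return props

def script_update(settings):
    global dest
    dest = obs.obs_data_get_string(settings, "dest") or dest

def on_event(event):
    # Fires for every frontend event; we only care about the end of a recording.
    if event == obs.OBS_FRONTEND_EVENT_RECORDING_STOPPED:
        src = obs.obs_frontend_get_last_recording()
        if src and os.path.isfile(src):
            shutil.move(src, os.path.join(dest, os.path.basename(src)))

def script_load(settings):
    obs.obs_frontend_add_event_callback(on_event)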
docker run --rm -i -v "$PWD":/w linuxserver/blender:4.4.3 \
  blender --factory-startup -b --python - <<'PY'
import bpy, math, os
vid = "/w/phone_screen.mp4"   # <<< your vertical video
assert os.path.exists(vid), "Video missing"
# ------------------------------------------------------ build objects
bpy.ops.mesh.primitive_plane_add(size=2)
plane = bpy.context.object
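# --- Hypothetical continuation: the snippet above is truncated here, so everything
# --- below is a sketch of one plausible next step (mapping the vertical video onto
# --- the plane and rendering a preview frame), not the original script.
plane.rotation_euler[0] = math.radians(90)      # stand the plane upright
plane.scale = (9 / 16, 1, 1)                    # match a 9:16 phone aspect ratio

mat = bpy.data.materials.new("PhoneScreen")
mat.use_nodes = True
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load(vid)
tex.image.source = 'MOVIE'
tex.image_user.frame_duration = 250             # play the first 250 frames
tex.image_user.use_auto_refresh = True
mat.node_tree.links.new(tex.outputs["Color"],
                        mat.node_tree.nodes["Principled BSDF"].inputs["Base Color"])
plane.data.materials.append(mat)

bpy.ops.object.camera_add(location=(0, -3, 1), rotation=(math.radians(90), 0, 0))
bpy.context.scene.camera = bpy.context.object
bpy.context.scene.render.engine = 'CYCLES'      # renders headless on CPU
bpy.context.scene.render.filepath = "/w/preview.png"
bpy.ops.render.render(write_still=True)
PY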
# /scripts/two_cam_auto_vfx.py
"""
Two-Camera VFX Auto-Rig -- Audio-Sync, Insta360-aware Edition
─────────────────────────────────────────────────────────────────────────────
• Imports hero + witness plates
• Automatically aligns them via audio-waveform cross-correlation
• Auto-detects FPS, resolution & 360⇆rectilinear
• Injects missing XMP → always pano-aware
• Tracks witness plate on a seeded grid, iteratively cleans & re-solves
• Builds RigRoot ▶ WitnessCam ▶ HeroCam hierarchy
"""
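The audio-sync step is easy to prototype outside Blender. Below is a hedged sketch of how the witness/hero offset can be recovered by cross-correlating the two audio tracks — the function name and the use of NumPy/SciPy are assumptions, not the script's actual code:

import numpy as np
from scipy.signal import correlate, correlation_lags

def witness_offset_seconds(hero_audio, witness_audio, sample_rate):
    """Return how many seconds after the hero plate the witness plate starts.

    Both arguments are mono float arrays sampled at `sample_rate`.
    """
    corr = correlate(hero_audio, witness_audio, mode="full", method="fft")
    lags = correlation_lags(len(hero_audio), len(witness_audio), mode="full")
    return lags[np.argmax(corr)] / sample_rate

# Tiny self-check: a noise burst that appears 0.5 s later in the hero plate
# should be reported as a +0.5 s offset for the witness plate.
sr = 48_000
burst = np.random.default_rng(0).standard_normal(sr)
hero = np.concatenate([np.zeros(sr // 2), burst])
witness = burst
print(round(witness_offset_seconds(hero, witness, sr), 3))   # ≈ 0.5

Multiplying the returned offset by the detected FPS gives the number of frames to slide the witness strip by.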
A comprehensive system for capturing NDI streams, recording them, and processing them with MoviePy for professional video production workflows.
# Install required packages
!pip install ndi-python opencv-python ipywidgets moviepy numpy pillow
# Note: Ensure NDI SDK/Runtime is installed on your system
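As an example of the MoviePy half of the pipeline — a hedged sketch, assuming MoviePy 1.x (the moviepy.editor import path) and hypothetical file names written by the recorder:

from moviepy.editor import VideoFileClip, concatenate_videoclips

# Hypothetical recordings produced by the NDI capture step.
takes = ["ndi_take_01.mp4", "ndi_take_02.mp4"]

# Drop the first second of each take, then join the takes and re-encode.
clips = [VideoFileClip(path).subclip(1.0) for path in takes]
final = concatenate_videoclips(clips, method="compose")
final.write_videofile("ndi_master.mp4", codec="libx264", audio_codec="aac")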
docker run --gpus all --restart unless-stopped -d `
  -v "B:\Models:/models" `
  -p 8000:8000 `
  ghcr.io/ggml-org/llama.cpp:server-cuda `
  -m /models/llama-2-7b-chat.Q4_K_M.gguf `
  --port 8000 --host 0.0.0.0 -n 512 --n-gpu-layers 35
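Once the container is up, the llama.cpp server exposes its OpenAI-compatible HTTP API on the published port. A quick smoke test from Python, assuming the defaults shown above (port 8000 on localhost) and using only the standard library:

import json
import urllib.request

payload = {
    "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
    "max_tokens": 64,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())

print(answer["choices"][0]["message"]["content"])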