@YunghuiHsu
YunghuiHsu / rtsp_processor.py
Last active October 14, 2025 14:05
Demo of RTP and RTCP buffer parsing with GStreamer
1. This example program is designed to work with the DeepStream / GStreamer Python API to calculate the latency of an RTSP transfer (see the probe sketch below).
2. The GStreamer element example is modified from NVIDIA-AI-IOT/deepstream_python_apps/deepstream_test1_rtsp_in_rtsp_out.py.
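As a rough illustration of what RTP parsing looks like (not the gist's exact code), a pad probe can map each buffer as an RTP packet via the GstRtp bindings and read the header fields used for latency bookkeeping; the probe placement and printed fields here are assumptions:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtp", "1.0")
from gi.repository import Gst, GstRtp

def rtp_probe(pad, info):
    # Map the flowing buffer as an RTP packet and read header fields
    # relevant to latency bookkeeping (sequence number, RTP timestamp).
    buf = info.get_buffer()
    ok, rtp = GstRtp.RTPBuffer.map(buf, Gst.MapFlags.READ)
    if ok:
        print("seq:", rtp.get_seq(), "rtp ts:", rtp.get_timestamp())
        rtp.unmap()
    return Gst.PadProbeReturn.OK

# Attach to an RTP-carrying pad, e.g. a src pad of rtpbin:
# pad.add_probe(Gst.PadProbeType.BUFFER, rtp_probe)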
@bml1g12
bml1g12 / 1_worker_producer_shared_memory.py
Last active March 9, 2021 18:59
Shared memory for communicating Numpy arrays
def worker_producer_shared_memory(np_arr_shape, shared_memory, n_frames):
    """A frame producer function that writes to shared memory"""
    mp_array, np_array = shared_memory
    for _ in range(n_frames):
        mp_array.acquire()
        np_array[:] = prepare_random_frame(np_arr_shape)  # produce a fresh array
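For context, the `shared_memory` pair above is typically built from a `multiprocessing.Array` wrapped in a zero-copy NumPy view; a minimal sketch of that setup (the helper name and dtype are assumptions, not from the gist):

import multiprocessing as mp
import numpy as np

def create_shared_memory(np_arr_shape, dtype=np.uint8):
    # Synchronized ctypes buffer; acquire()/release() use its built-in lock
    mp_array = mp.Array("B", int(np.prod(np_arr_shape)))
    # NumPy view over the same underlying memory, reshaped to the frame shape
    np_array = np.frombuffer(mp_array.get_obj(), dtype=dtype).reshape(np_arr_shape)
    return mp_array, np_array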
@ccj5351
ccj5351 / itera_dataloader_example.py
Created February 19, 2020 03:06
Modified code from the article "How to Build a Streaming DataLoader with PyTorch" at https://medium.com/speechmatics/how-to-build-a-streaming-dataloader-with-pytorch-a66dd891d9dd.
import random
from itertools import chain, cycle, islice
import torch.utils.data as data
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
import time
import torch
import numpy as np
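Given these imports, the article's core idea is an `IterableDataset` that streams and interleaves samples from many files instead of loading everything into memory; a minimal sketch along those lines (the class name is illustrative and file parsing is stubbed out):

class ExampleStreamingDataset(data.IterableDataset):
    def __init__(self, file_list):
        self.file_list = file_list

    def parse_file(self, path):
        # Stand-in parser: a real one would lazily read rows from `path`
        for row in range(4):
            yield torch.tensor([hash(path) % 7, row], dtype=torch.float32)

    def __iter__(self):
        # Shuffle the file order, then stream samples file by file
        files = random.sample(self.file_list, len(self.file_list))
        return chain.from_iterable(map(self.parse_file, cycle(files)))

# cycle() makes the stream endless, so cap iteration with islice:
# for sample in islice(ExampleStreamingDataset(["a", "b"]), 10): print(sample)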
@BryanCutler
BryanCutler / tf_arrow_model_training.py
Last active June 28, 2021 16:13
TensorFlow Keras Model Training Example with Apache Arrow Dataset
from functools import partial
import multiprocessing
import os
import socket
import sys
from sklearn.preprocessing import StandardScaler
import numpy as np
import pandas as pd
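The imports suggest a pandas-to-Arrow-to-Keras flow. One hedged way to bridge Arrow record batches into `tf.data` is a generator over batches; this is a sketch of that pattern, not necessarily the gist's approach (which may use tensorflow_io's Arrow datasets), and the column names `x0`, `x1`, `label` are assumptions:

import pyarrow as pa
import tensorflow as tf

def arrow_batches_to_dataset(df, batch_size=64):
    # Convert the DataFrame to Arrow once, then stream it back out
    # in record batches that a tf.data pipeline can consume.
    table = pa.Table.from_pandas(df, preserve_index=False)

    def gen():
        for batch in table.to_batches(max_chunksize=batch_size):
            pdf = batch.to_pandas()
            yield (pdf[["x0", "x1"]].values.astype("float32"),
                   pdf["label"].values.astype("float32"))

    return tf.data.Dataset.from_generator(
        gen,
        output_signature=(
            tf.TensorSpec(shape=(None, 2), dtype=tf.float32),
            tf.TensorSpec(shape=(None,), dtype=tf.float32),
        ),
    )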
@wkcn
wkcn / dataloader.py
Created April 20, 2019 01:31
GluonDataloader
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
@jimathyp
jimathyp / ps_mem.py
Created April 2, 2019 01:15
System Admin
#!/usr/bin/env python
# https://raw.githubusercontent.com/pixelb/ps_mem/master/ps_mem.py
# Try to determine how much RAM is currently being used per program.
# Note per _program_, not per process. So for example this script
# will report RAM used by all httpd process together. In detail it reports:
# sum(private RAM for program processes) + sum(Shared RAM for program processes)
# The shared RAM is problematic to calculate, and this script automatically
# selects the most accurate method available for your kernel.
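The shared-RAM accounting these comments describe is what the kernel's Pss (proportional set size) field approximates; a simplified sketch of per-program totals using /proc/<pid>/smaps_rollup (assumes Linux 4.14+, and is much cruder than ps_mem's kernel-aware method selection):

import os

def pss_by_program():
    totals = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
            with open(f"/proc/{pid}/smaps_rollup") as f:
                pss_kb = sum(int(line.split()[1])
                             for line in f if line.startswith("Pss:"))
        except (FileNotFoundError, PermissionError, ProcessLookupError):
            continue  # process exited or is not readable
        totals[name] = totals.get(name, 0) + pss_kb
    return totals  # {program name: total Pss in KiB}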
@yang-zhang
yang-zhang / pytorch-losses-in-plain-python.ipynb
Last active December 21, 2022 07:14
git/yang-zhang.github.io/ds_code/pytorch-losses-in-plain-python.ipynb
@mjdietzx
mjdietzx / waya-dl-setup.sh
Last active September 20, 2025 11:52
Install CUDA Toolkit v8.0 and cuDNN v6.0 on Ubuntu 16.04
#!/bin/bash
# install CUDA Toolkit v8.0
# instructions from https://developer.nvidia.com/cuda-downloads (linux -> x86_64 -> Ubuntu -> 16.04 -> deb (network))
CUDA_REPO_PKG="cuda-repo-ubuntu1604_8.0.61-1_amd64.deb"
wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG}
sudo dpkg -i ${CUDA_REPO_PKG}
sudo apt-get update
sudo apt-get -y install cuda
@tomrunia
tomrunia / tf_queue.py
Created November 2, 2016 14:48
TensorFlow queue example
# Initialize placeholders for feeding in to the queue
pl_queue_screens = tf.placeholder(tf.float32, shape=[config.seq_length, config.image_size, config.image_size, config.input_channels], name="queue_inputs")
pl_queue_targets = tf.placeholder(tf.uint8, shape=[config.seq_length], name="queue_targets_cnt")
# ...
capacity = config.min_after_dequeue + 10 * (config.num_gpus*config.batch_size)
q = tf.RandomShuffleQueue(
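The preview cuts off mid-call, but the usual TF 1.x pattern pairs a RandomShuffleQueue with enqueue/dequeue ops. A minimal self-contained sketch of that pattern (toy shapes and sizes, not the gist's config):

import tensorflow as tf  # TF 1.x API

x = tf.placeholder(tf.float32, shape=[4], name="queue_inputs")
y = tf.placeholder(tf.uint8, shape=[], name="queue_targets")
q = tf.RandomShuffleQueue(capacity=100, min_after_dequeue=10,
                          dtypes=[tf.float32, tf.uint8], shapes=[[4], []])
enqueue_op = q.enqueue([x, y])
batch_x, batch_y = q.dequeue_many(8)  # 8 shuffled examples per step

with tf.Session() as sess:
    for i in range(20):
        sess.run(enqueue_op, feed_dict={x: [i] * 4, y: i % 256})
    print(sess.run([batch_x, batch_y]))
# In a real pipeline, a tf.train.QueueRunner drives enqueue_op from feeder threads.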
@squarism
squarism / iterm2.md
Last active October 22, 2025 08:40
An iTerm2 Cheatsheet

In the keyboard shortcuts below, I use capital letters for reading clarity, but this does not imply Shift; if Shift is needed, I will say Shift. So ⌘ + D does not mean hold Shift, while ⌘ + Shift + D of course does.

Tabs and Windows

Function                       Shortcut
New Tab                        ⌘ + T
Close Tab or Window            ⌘ + W (same as many Mac apps)
Go to Tab                      ⌘ + Number Key (i.e., ⌘2 is the 2nd tab)
Go to Split Pane by Direction  ⌘ + Option + Arrow Key