Migrate from Facebook scribe to Apache Flume (Part II)

In the last article we covered how to set up Flume and write files to HDFS. In this article, we start changing Flume to write files per category, in a scribe-like style.

Multiplexing Way?

The first thought is to use source multiplexing to distribute logs to different destinations. Flume routes log events based on event headers, so we searched for which header field corresponds to the scribe category:

https://apache.googlesource.com/flume/+/d66bf94b1dd059bc7e4b1ff332be59a280498077/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/ScribeSource.java

The "category" field in the event header refers to the scribe category. So we try to use a multiplexing source:
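A minimal sketch of what such an agent configuration could look like, using Flume's multiplexing channel selector keyed on the "category" header. The agent name, channel names, port, and the category values (app_log, access_log) are assumptions for illustration, not taken from our actual setup:

```properties
# Hypothetical agent "a1" with one ScribeSource feeding two channels
a1.sources = r1
a1.channels = c1 c2

a1.sources.r1.type = org.apache.flume.source.scribe.ScribeSource
a1.sources.r1.port = 1463

# Multiplexing selector: route events by the "category" header set by ScribeSource
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = category
a1.sources.r1.selector.mapping.app_log = c1
a1.sources.r1.selector.mapping.access_log = c2
a1.sources.r1.selector.default = c1

a1.sources.r1.channels = c1 c2
```

With this approach every category needs an explicit mapping line, which is the limitation the multiplexing route runs into as the number of categories grows.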