futureperfect / fork_fd_test.c
Created January 13, 2018 20:53
How do concurrent writes by parent and child processes behave with a shared file descriptor?
/* Write a program that opens a file (with the open() system call) and then
* calls fork() to create a new process. Can both the child and parent access
* the file descriptor returned by open()? What happens when they are writing to
* the file concurrently, i.e., at the same time? */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
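
#include <string.h>    /* added for strlen() in this sketch */
#include <sys/wait.h>  /* added for wait() in this sketch   */

/* Editor's sketch from here on; the gist preview is truncated above and the
 * actual program may differ. Answer in brief: yes, both parent and child can
 * use the descriptor. fork() copies the fd, but both copies refer to the same
 * open file description, so they share a single file offset and concurrent
 * writes interleave in the file rather than overwriting one another. */
int main(void)
{
    int fd = open("shared.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        exit(EXIT_FAILURE);
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }

    const char *msg = (pid == 0) ? "child\n" : "parent\n";
    for (int i = 0; i < 5; i++) {
        write(fd, msg, strlen(msg));  /* both processes advance the shared offset */
    }

    if (pid > 0) {
        wait(NULL);  /* parent reaps the child before exiting */
    }
    close(fd);
    return 0;
}
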
futureperfect / demo.p8
Created March 6, 2018 08:24
PICO-8 demo of a ball moving with the arrow keys
pico-8 cartridge // http://www.pico-8.com
version 16
__lua__
-- move a ball on-screen
-- by erik hollembeak
SCREEN_WIDTH = 128
SCREEN_HEIGHT = 128
-- Ball definition
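-- (editor's sketch from here on; the cart preview is truncated above and the
--  original code may differ)
ball = {x=64, y=64, r=4, speed=2}

function _update()
 -- btn(0..3) map to left, right, up, down arrow keys
 if btn(0) then ball.x -= ball.speed end
 if btn(1) then ball.x += ball.speed end
 if btn(2) then ball.y -= ball.speed end
 if btn(3) then ball.y += ball.speed end
 -- keep the ball on the 128x128 screen
 ball.x = mid(ball.r, ball.x, SCREEN_WIDTH - ball.r)
 ball.y = mid(ball.r, ball.y, SCREEN_HEIGHT - ball.r)
end

function _draw()
 cls()
 circfill(ball.x, ball.y, ball.r, 7) -- draw the ball in white
end
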
futureperfect / logs.txt
Created June 11, 2018 21:21
PySpark Reduction
py4j.protocol.Py4JJavaError: An error occurred while calling o90.save.
: java.io.IOException: Failed to open native connection to Cassandra at {<Redacted>}:<Redacted port>
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:168)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:32)
at com.datastax.spark.connector.cql.RefCountedCache.syncAcquire(RefCountedCache.scala:69)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:57)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:79)
at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:111)
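
For context, here is a hedged sketch (not taken from the gist) of how a PySpark job is typically pointed at a Cassandra cluster with the DataStax Spark Cassandra Connector; the IOException above is what surfaces when the configured contact point or port is unreachable at save() time. The host, port, keyspace, and table names below are placeholders, and the connector package is assumed to be on the classpath.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("cassandra-write")
    # The connector reads its contact point from these settings; a wrong or
    # unreachable host/port yields the "Failed to open native connection"
    # error seen in the log above.
    .config("spark.cassandra.connection.host", "127.0.0.1")  # placeholder host
    .config("spark.cassandra.connection.port", "9042")       # placeholder port
    .getOrCreate()
)

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

(df.write
   .format("org.apache.spark.sql.cassandra")
   .options(table="my_table", keyspace="my_keyspace")  # placeholder names
   .mode("append")
   .save())

The connector itself is typically supplied at submit time, e.g. via --packages com.datastax.spark:spark-cassandra-connector_2.12:<version>.
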
futureperfect / async_app.py
Created March 16, 2025 20:43
Silly async/await program to sort a list of numbers using workers and a shared resource
import asyncio
import bisect
import logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
async def sleep_and_sort(n, lock, result_list):
    try:
        logger.info(f"Sleeping for {n} seconds before doing work")
futureperfect / absorber.py
Created March 21, 2025 00:55
Simple metrics endpoint collecting CPU, memory, and disk utilization statistics and reporting them via an HTTP endpoint
"""
Collect host CPU, memory, and disk utilization metrics once per minute
and report them via an HTTP endpoint
"""
import time
import os
from textwrap import dedent
import threading
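from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# (editor's sketch from here on; the gist preview is truncated above and the
#  real implementation may differ. Linux/Unix is assumed: os.getloadavg,
#  os.statvfs, and /proc/meminfo. Port 8000 is a placeholder.)

INTERVAL_SECONDS = 60
metrics = {"cpu_load_1m": 0.0, "memory_used_pct": 0.0, "disk_used_pct": 0.0}
metrics_lock = threading.Lock()

def read_memory_used_pct():
    """Parse /proc/meminfo and return the percentage of memory in use."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values are in kB
    return 100.0 * (1 - info["MemAvailable"] / info["MemTotal"])

def collect_forever():
    """Sample CPU load, memory, and disk usage once per minute."""
    while True:
        vfs = os.statvfs("/")
        sample = {
            "cpu_load_1m": os.getloadavg()[0],
            "memory_used_pct": read_memory_used_pct(),
            "disk_used_pct": 100.0 * (1 - vfs.f_bavail / vfs.f_blocks),
        }
        with metrics_lock:
            metrics.update(sample)
        time.sleep(INTERVAL_SECONDS)

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # report the most recent sample as plain text
        with metrics_lock:
            body = dedent(f"""\
                cpu_load_1m {metrics['cpu_load_1m']:.2f}
                memory_used_pct {metrics['memory_used_pct']:.2f}
                disk_used_pct {metrics['disk_used_pct']:.2f}
                """).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    threading.Thread(target=collect_forever, daemon=True).start()
    ThreadingHTTPServer(("0.0.0.0", 8000), MetricsHandler).serve_forever()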