Cameron Kerr cameronkerrnz

@cameronkerrnz
cameronkerrnz / magnifying-glass.scad
Created March 29, 2025 06:21
OpenSCAD Magnifying Glass
```scad
$fn = 12;
rope_diam = 6;
rope_strand_eccentricity = 0.3; // proportion of diameter each strand moves from center
lens_radius = 50;
hoop_radius = lens_radius + rope_diam * 0.5; // fudge factor allows for strand packing + lens overlap
handle_length = 100;

module rope_crosssection() {
    offset(r=-0.2) offset(r=0.2)
    for (i = [0 : 120 : 360]) {
```
@cameronkerrnz
cameronkerrnz / openscad-phone-case-alcatel-something.scad
Last active March 6, 2025 21:59
Phone case modelled using OpenSCAD suitable for an unknown model of Alcatel phone
```scad
body_thickness = 10;
body_width = 71.8;
body_length = 146.2;
body_corner_radius_xy = 7;
glass_width = 67.8;
glass_length = 142.4;
//glass_inset = 2; // TODO: validate // TPU
glass_inset = 0.5; // TODO: validate // PLA
// Side of phone appears to have a radius of about 6mm, tangent
```
@cameronkerrnz
cameronkerrnz / Deploy_MaxMind_geoipupdate_on_RHEL7.md
Last active August 9, 2021 07:44
Ansible deployment for MaxMind geoipupdate

Logstash 7.14 introduces the concept of the databasemanager in logstash-filter-geoip, which downloads MaxMind GeoIP databases in accordance with MaxMind licensing terms. This does not work behind a proxy. Thus, I updated my deployment with the ability to download and install the newer (more appropriate) version of geoipupdate from MaxMind, and to configure it to download the databases.

A better approach (more scalable, and easier on upstream) would be to run this on one machine, and then share out the resulting databases internally on a recurring schedule.

This is an extract of an Ansible (2.10-ish) deployment targeting RHEL7 / CentOS7. While its use-case is to support Logstash, nothing in this gist is specific to Logstash.
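For context, a minimal sketch of the resulting /etc/GeoIP.conf is shown below, assuming geoipupdate 4.x, whose GeoIP.conf supports a Proxy directive. The account ID, license key, edition IDs and proxy host are all placeholders, not values from the deployment.

```
# /etc/GeoIP.conf (placeholder values)
AccountID 999999
LicenseKey 000000000000
EditionIDs GeoLite2-City GeoLite2-ASN
# geoipupdate 4.x can fetch through an HTTP proxy directly
Proxy proxy.example.com:3128
```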

@cameronkerrnz
cameronkerrnz / Monitoring_Kafka_Time_Range.md
Last active June 1, 2021 01:13
Export Kafka timestamp of earliest message in a topic/partition

I've been working on improving operational visibility of a system that uses Kafka as its data backbone, with topics that have a combination of size- and age-based retention policies [with no compaction]. One of the key metrics I want to track, before I start making changes, is how much time is actually held in some of these topics: a longer span of time means a longer range of recovery if I need to reprocess that data. Although kafka_exporter does give me useful visibility of topic lag, Kafka doesn't provide any tool specifically for reporting the timestamp of the earliest message. With the aid of kafka-console-consumer, some basic scripting, Prometheus and Grafana, I now have the visibility I need, and could set up some useful thresholds for alerting.
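The scripting step can be sketched in shell. When kafka-console-consumer is run with `--property print.timestamp=true`, each record is prefixed with something like `CreateTime:1622505600000`; the sketch below converts such a line into the metric named later in this gist. The sample line and the topic/partition labels are illustrative, not from the original script.

```shell
# Sample line, standing in for real output from kafka-console-consumer
# (e.g. --partition 0 --offset earliest --max-messages 1 --property print.timestamp=true)
line='CreateTime:1622505600000	somekey	somevalue'

# Extract the epoch-milliseconds timestamp and convert to seconds,
# the unit Prometheus expects for a *_time_seconds metric.
millis=$(printf '%s\n' "$line" | sed -n 's/^CreateTime:\([0-9]*\).*/\1/p')
echo "kafka_topic_partition_earliest_time_seconds{topic=\"mytopic\",partition=\"0\"} $((millis / 1000))"
```

The resulting line is in the Prometheus exposition format, so it can be dropped into a textfile-collector directory or served over HTTP for scraping.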

To use with Grafana

To use this in a Grafana dashboard, here's a sample Prometheus query you can use as part of a Singlestat panel (format the Singlestat with a unit of 'seconds'):

```
time() - max(kafka_topic_partition_earliest_time_seconds{topic=~
```
@cameronkerrnz
cameronkerrnz / tail-windows-firewall-defender-log.ps1
Created April 8, 2021 03:34
Tail and Filter Windows Firewall Log (like tail -f ... | awk)
```powershell
# Show the header lines (which name the fields) once:
Get-Content -Head 5 C:\Windows\System32\LogFiles\Firewall\pfirewall.log

# Follow the log (like tail -f) and filter it (like awk):
Get-Content -Wait -Tail 5 C:\Windows\System32\LogFiles\Firewall\pfirewall.log | % {
    do {
        $a = $_.split(' ')
        # Field 3 is the action: DROP or ALLOW (the only values AFAIK)
        if ($a[2] -ne 'DROP') { continue }
        $_
    } while ($false)  # 'continue' lands here, so it skips to the next log line
}
```
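The Unix analogue the title alludes to can be sketched as follows. The sample log lines are fabricated for illustration; the only assumption carried over from pfirewall.log is that the action (DROP/ALLOW) is the third space-separated field.

```shell
# In real use this would be: tail -f /path/to/pfirewall.log | awk '$3 == "DROP"'
# Here, two fabricated records stand in for the tail -f stream:
printf '%s\n' \
  '2021-04-08 03:34:00 DROP TCP 10.0.0.1 10.0.0.2 5000 443' \
  '2021-04-08 03:34:01 ALLOW TCP 10.0.0.1 10.0.0.2 5001 443' \
| awk '$3 == "DROP"'
```

Only the DROP record passes the filter, mirroring the `if ($a[2] -ne 'DROP') { continue }` test above.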