Setup for feeding data from my Raspberry Pi to the services below (a rough sketch of the feeder wiring follows the list):
- Flightradar24 (ADS-B)
- FlightAware (ADS-B)
- airplanes.live (ADS-B)
- ADSBhub (supporting SafeSky) (ADS-B)
- LiveATC (audio)
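As a rough illustration (not my exact config), the ADS-B side generally works like this: one decoder reads the RTL-SDR dongle and exposes Beast-format data on a local TCP port, and each feeder client is pointed at that port. The package names and option syntax below are assumptions and may differ from what the feeders' own installers generate.

# decoder: readsb or dump1090-fa (installed from its own repo) serves
# Beast-format output on 127.0.0.1:30005 by default

# FlightAware: point piaware at the local receiver
sudo piaware-config receiver-type other
sudo piaware-config receiver-host 127.0.0.1
sudo piaware-config receiver-port 30005

# Flightradar24: fr24feed reads the same Beast stream (keys in /etc/fr24feed.ini;
# exact option names may differ)
#   receiver="beast-tcp"
#   host="127.0.0.1:30005"

The other ADS-B feeders (airplanes.live, ADSBhub) follow the same pattern: their client is pointed at the decoder's 30005 port rather than opening the dongle themselves.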
Hardware:
# -----------------------------------------------------------------------------
# AI-powered Git Commit Function
# Copy-paste this gist into your ~/.bashrc or ~/.zshrc to gain the `gcm` command. It:
# 1) gets the diff of the currently staged changes
# 2) sends it to an LLM to write the git commit message
# 3) lets you easily accept, edit, regenerate, or cancel
# But - just read and edit the code however you like
# the `llm` CLI util is awesome, you can get it here: https://llm.datasette.io/en/stable/
gcm() {
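  # ---------------------------------------------------------------------------
  # NOTE: the body below is a rough sketch of the workflow described above, not
  # the original gist code. It assumes the `llm` CLI is installed and configured;
  # the prompt wording and menu keys are illustrative.
  # ---------------------------------------------------------------------------
  local diff msg choice

  # 1) get the diff of the currently staged changes
  diff=$(git diff --cached)
  if [ -z "$diff" ]; then
    echo "gcm: no staged changes" >&2
    return 1
  fi

  while true; do
    # 2) ask the LLM to write a commit message from the diff
    msg=$(printf '%s' "$diff" | llm "Write a concise one-line git commit message for this diff:")
    printf 'Proposed commit message:\n  %s\n' "$msg"

    # 3) accept, edit, regenerate, or cancel
    printf '[a]ccept / [e]dit / [r]egenerate / [c]ancel? '
    read -r choice
    case "$choice" in
      a) git commit -m "$msg"; return ;;
      e) git commit -e -m "$msg"; return ;;
      r) continue ;;
      *) echo "gcm: cancelled"; return 1 ;;
    esac
  done
}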
XZ Backdoor symbol deobfuscation. Updated as I make progress.
# ==============================================================================
# ShellGPT
# ==============================================================================
# ------------------------------------------------------------------------------
if command -v sgpt >/dev/null 2>&1; then
# ------------------------------------------------------------------------------
alias sgpt-chat="sgpt --repl chat"
alias sgpt-code="sgpt --repl code --code"
fi
#!/bin/sh
# rename-pictures.sh
# Author: Justine Tunney <[email protected]>
# License: Apache 2.0
#
# This shell script can be used to ensure all the images in a folder
# have good descriptive filenames that are written in English. It's
# based on the Mistral 7b and LLaVA v1.5 models.
#
# For example, the following command:
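#
#     ./rename-pictures.sh ~/Pictures
#
# (illustrative invocation; the directory path is only an example)
# would go through the pictures in that folder and give each one a short,
# descriptive English filename generated by the local models.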
Each day at our company, developers are required to document their activities, painstakingly jotting down their daily work and future plans. It's a monotonous chore that I really dislike.
So now there's a scribe for that:
import openai
import requests
import textwrap
import uuid

# pip3 install openai requests

# set up the Elasticsearch credentials
es_username = "<your username>"
es_password = "<your password>"
es_url = "https://localhost:9200"
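# ------------------------------------------------------------------------------
# NOTE: everything below is a rough sketch of how such a "scribe" could work,
# not the original script. The index name, timestamp field, model, and prompt
# are assumptions, and it uses the pre-1.0 `openai` client interface.
# ------------------------------------------------------------------------------
openai.api_key = "<your OpenAI API key>"


def fetch_todays_activity(index="dev-activity"):
    """Pull today's activity documents from Elasticsearch (index name is an assumption)."""
    resp = requests.get(
        f"{es_url}/{index}/_search",
        auth=(es_username, es_password),
        json={"query": {"range": {"@timestamp": {"gte": "now/d"}}}, "size": 100},
        verify=False,  # local cluster with a self-signed cert
    )
    resp.raise_for_status()
    return [hit["_source"] for hit in resp.json()["hits"]["hits"]]


def write_daily_report(events):
    """Ask the model to turn raw activity events into a short daily report."""
    summary_input = "\n".join(str(e) for e in events)
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize these developer activities as a short daily report, ending with plans for tomorrow."},
            {"role": "user", "content": summary_input},
        ],
    )
    report = completion.choices[0].message["content"]
    return textwrap.fill(report, width=100)


if __name__ == "__main__":
    print(write_daily_report(fetch_todays_activity()))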
ChatGPT appeared like an explosion on all my social media timelines in early December 2022. While I keep up with machine learning as an industry, I wasn't focused so much on this particular corner, and all the screenshots seemed like they came out of nowhere. What was this model? How did the chat prompting work? What was the context of OpenAI doing this work and collecting my prompts for training data?
I decided to do a quick investigation. Here's all the information I've found so far. I'm aggregating and synthesizing it as I go, so it's currently changing pretty frequently.
At this point, it is probably easier to just use something like this: https://github.com/reznok/Spring4Shell-POC
- clone https://spring.io/guides/gs/handling-form-submission/
- you can skip right to gs-handling-form-submission/complete, no need to follow the tutorial
- modify it so that you can build a war file (https://www.baeldung.com/spring-boot-war-tomcat-deploy)
- install tomcat9 + java 11 (I did it on Ubuntu 20.04)
- deploy the war file (see the sketch after these steps)
- update the PoC (https://share.vx-underground.org/) to write the tomcatwar.jsp file to webapps/handling-form-submission instead of webapps/ROOT
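For reference, a rough sketch of the build-and-deploy steps above; the Maven wrapper, Tomcat paths, and war name are assumptions that may differ on your box:

# inside gs-handling-form-submission/complete, after switching <packaging> to war
# per the Baeldung guide linked above
./mvnw clean package
# deploy under a non-ROOT context using the Ubuntu tomcat9 package layout
sudo cp target/*.war /var/lib/tomcat9/webapps/handling-form-submission.war
sudo systemctl restart tomcat9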