- Adjust code as necessary
- Convert code to a Bookmarklet using this tool: https://chriszarate.github.io/bookmarkleter/
- Add to Chrome by following instructions here: https://www.freecodecamp.org/news/what-are-bookmarklets/
This is a naive way of comparing two Zod schemas to see if they are equivalent. It works by serializing the two schemas and comparing the resulting strings. This approach is limited because serialization doesn't capture all of the information about certain Zod types, e.g. functions.
List names of CMS datasets in Socrata:
curl -s 'http://api.us.socrata.com/api/catalog/v1?domains=data.cms.gov&search_context=data.cms.gov&limit=2000' | jq ".results | .[] | .resource.name"
Put CMS dataset names, ids, descriptions, etc. into CSV:
curl -s 'http://api.us.socrata.com/api/catalog/v1?domains=data.cms.gov&search_context=data.cms.gov&limit=2000' | jq -r ".results | .[] | .resource | [.name, .id, .description, .createdAt, .updatedAt, .data_updated_at] | @csv"
#!/usr/local/bin/python3
# pip3 install mysql-connector-python
import mysql.connector
import argparse
import csv

DB_USER = 'root'
DB_HOST = 'localhost'
DB_NAME = 'opkit'
#!/usr/local/bin/python3
import csv
import re
import argparse

CASES_OUT = 'sis-parser-out-cases.csv'
CPTS_OUT = 'sis-parser-out-cpts.csv'

parser = argparse.ArgumentParser(prog="sis-parser.py", description='Parse cases and CPT codes from a SIS Complete CSV export.')
// Here's a quick script you can copy-paste into your browser console to copy a CSV of people search results from LinkedIn
// Example: https://www.linkedin.com/search/results/people/?currentCompany=%5B%223282%22%2C%2236494%22%2C%222142019%22%5D&geoUrn=%5B%22103644278%22%5D&origin=FACETED_SEARCH&profileLanguage=%5B%22en%22%5D&title=sales%20rep
var contentStrings = Array.from(document.querySelectorAll('.entity-result__content')).map(node => node.innerText)
var tuples = contentStrings.map(function(contentString) {
  // Keep the first line and the last two lines of each search result
  var split = contentString.split("\n")
  return [split[0], split[split.length - 2], split[split.length - 1]]
})
// Quote each field, join into CSV, and copy() it (a DevTools console helper)
var csv = tuples.map(row => row.map(field => '"' + String(field).replace(/"/g, '""') + '"').join(',')).join('\n')
copy(csv)
I had some XML files containing structured data that I wanted to insert into a SQL database, so I needed to figure out how to transform the XML into SQL statements. It turns out there is something called XSLT that can be used to programmatically transform XML files into... well... whatever you want. Here's how I used XSLT to transform XML into SQL statements.
- Download Saxon-HE from here: https://saxonica.com/download/java.xml
- Create your xml file:
<!-- /Users/sherwood/Code/cdcatalog.xml -->
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="cdcatalog.xsl"?>
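The stylesheet itself can emit SQL text directly. Here's a minimal sketch, assuming the classic cdcatalog example where a `<catalog>` root contains `<cd>` elements with `<title>` and `<artist>` children (the table and column names are made up):

```xml
<!-- cdcatalog.xsl: emit one INSERT statement per <cd> element -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/catalog">
    <xsl:for-each select="cd">
      <xsl:text>INSERT INTO cds (title, artist) VALUES ('</xsl:text>
      <xsl:value-of select="title"/>
      <xsl:text>', '</xsl:text>
      <xsl:value-of select="artist"/>
      <xsl:text>');&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```

Then run the transform with Saxon (the jar filename varies by version): `java -cp Saxon-HE.jar net.sf.saxon.Transform -s:cdcatalog.xml -xsl:cdcatalog.xsl -o:cdcatalog.sql`. Note this sketch doesn't escape single quotes inside values, so it's for trusted data only.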
You can get some really detailed stats about Logstash pipelines via the HTTP API. See the docs for more!
$ curl -XGET 'localhost:9600/_node/stats/pipelines?pretty'
{
"host" : "logstash-msk-679b6b8dd9-4pd2h",
"version" : "7.4.0",
"http_address" : "0.0.0.0:9600",
"id" : "ba41dded-1a12-41e9-988f-03bd4eae9d4a",
"name" : "logstash-msk-679b6b8dd9-4pd2h",
import time
import multiprocessing

def delay_print(message):
    time.sleep(1)
    print(message)

# Start ten processes in parallel; the __main__ guard is required on
# platforms that spawn new interpreters (macOS, Windows)
if __name__ == '__main__':
    for i in range(10):
        multiprocessing.Process(target=delay_print, args=(i,)).start()
Structured logs are way better than normal logs for a whole bunch of reasons, but they can sometimes be a pain to read in the shell. Take this logline for example:
{"erlang_pid":"#PID<0.1584.0>","level":"error","message":"Got error when retry: :econnrefused, will retry after 1535ms. Have retried 2 times, :infinity times left.","module":"","release":"c2ef629cb357c136f529abec997426d6d58de485","timestamp":"2019-12-17T19:22:11.164Z"}
This format is hard for a human to parse. How about this format instead?
error | 2019-12-17T19:21:02.944Z | Got error when retry: :econnrefused, will retry after 1648ms. Have retried 2 times, :infinity times left.
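One way to get that format is to pipe the log stream through a short jq one-liner, using jq's string interpolation (field names match the example logline above):

```shell
echo '{"level":"error","timestamp":"2019-12-17T19:22:11.164Z","message":"Got error when retry"}' \
  | jq -r '"\(.level) | \(.timestamp) | \(.message)"'
# → error | 2019-12-17T19:22:11.164Z | Got error when retry
```

The `-r` flag prints the result as raw text instead of a quoted JSON string, and the same filter works on a whole stream of loglines, one JSON object per line.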