Built this as a one-off script to post a batch of records to Bluesky from a given CSV, to bootstrap https://bsky.app/profile/cdk.dev.
Sonnet 3.5 wrote all the code.
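The flow is simple: read the CSV, log in with an app password, and create one post per row. Below is a minimal sketch of that idea using @atproto/api; the file name, the one-column CSV layout, and the env var names are assumptions for illustration, not the actual script.

```ts
// Minimal sketch (assumed details): read a CSV and create one Bluesky post per row.
// The file name, CSV layout, and env var names are illustrative only.
import { readFile } from "node:fs/promises";
import { BskyAgent } from "@atproto/api";

async function main() {
  const identifier = process.env.BSKY_HANDLE;
  const password = process.env.BSKY_APP_PASSWORD;
  if (!identifier || !password) {
    throw new Error("Set BSKY_HANDLE and BSKY_APP_PASSWORD");
  }

  const agent = new BskyAgent({ service: "https://bsky.social" });
  await agent.login({ identifier, password });

  // Naive CSV handling: one record per line, the whole line is the post text.
  // A real script would use a proper CSV parser to handle quoting and columns.
  const rows = (await readFile("records.csv", "utf8"))
    .split("\n")
    .map((line) => line.trim())
    .filter(Boolean);

  for (const text of rows) {
    await agent.post({ text, createdAt: new Date().toISOString() });
    console.log(`posted: ${text.slice(0, 60)}`);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Error handling, deduplication, and rate limiting are left out for brevity; a sequential loop is enough for a small bootstrap batch.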
## Stack
- TanStack Start
- Infra: Alchemy / Cloudflare
- UI: Shadcn
- Store: Zustand
- DB: Drizzle

## Commands
- Dev Server: pnpm dev (runs Alchemy dev with hot-reload; uses .env.dev)
#!/usr/bin/env bun
"use strict";
const fs = require("fs");
const { execSync } = require("child_process");
const path = require("path");

// ANSI color constants (\x1b escapes; octal \033 is a syntax error in strict mode)
const c = {
  cy: '\x1b[36m', // cyan
# Project Policy

This policy provides a single, authoritative, and machine-readable source of truth for AI coding agents and humans, ensuring that all work is governed by clear, unambiguous rules and workflows. It aims to eliminate ambiguity, reduce supervision needs, and facilitate automation while maintaining accountability and compliance with best practices.

# 1. Introduction

> Rationale: Sets the context, actors, and compliance requirements for the policy, ensuring all participants understand their roles and responsibilities.

## 1.1 Actors
-- Query the API directly and flatten the nested JSON structure
WITH raw_data AS (
    SELECT * FROM read_json_auto('https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed?actor=did:plc:edglm4muiyzty2snc55ysuqx&limit=10')
),
unnested_feed AS (
    SELECT unnest(feed) as post_data FROM raw_data
)
SELECT
    -- Post basics (a few representative fields; extend as needed)
    post_data.post.uri as post_uri,
    post_data.post.author.handle as author_handle,
    post_data.post.record.text as post_text
FROM unnested_feed;
INSTALL aws;
LOAD aws;
CALL load_aws_credentials();
CREATE TABLE ct_raw AS SELECT * FROM read_json('s3://YOUR_CT_BUCKET_WITH_A_DATE_PREFIX/*.gz', maximum_depth=2);
CREATE TABLE ct AS SELECT unnest(Records) AS Event FROM ct_raw;
CREATE TABLE cloudtrail_events AS SELECT
    json_extract_string(event, '$.eventVersion')             AS eventVersion,
    json_extract_string(event, '$.userIdentity.type')        AS userType,
    json_extract_string(event, '$.userIdentity.principalId') AS principalId,
    json_extract_string(event, '$.userIdentity.arn')         AS userArn,
    json_extract_string(event, '$.userIdentity.accountId')   AS accountId,
    -- add further CloudTrail fields here as needed
    json_extract_string(event, '$.eventTime')                AS eventTime,
    json_extract_string(event, '$.eventName')                AS eventName
FROM ct;
WITH generate_date AS (
    SELECT CAST(range AS DATE) AS date_key
    FROM range(DATE '2009-01-01', DATE '2013-12-31', INTERVAL 1 DAY)
)
SELECT date_key,
    DAYOFYEAR(date_key) AS day_of_year,
    YEARWEEK(date_key) AS week_key,
    WEEKOFYEAR(date_key) AS week_of_year,
    DAYOFWEEK(date_key) AS day_of_week,
    ISODOW(date_key) AS iso_day_of_week
FROM generate_date;
---
#
# This template example assumes a UserPool and UserPoolDomain already exist.
# Its purpose is to produce a custom resource with an attribute that can be
# referenced as the DNSName of a Route53::RecordSet AliasTarget, e.g.:
#
# AliasTarget:
#   HostedZoneId: Z2FDTNDATAQYW2  # CloudFront's fixed hosted zone ID
#   DNSName: !GetAtt UPDomain.CloudFrontDistribution
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head>
    <title>AWS RSS feeds 2019-04-22</title>
  </head>
  <body>
    <outline text="AWS" title="AWS">
      <outline type="rss" text="Infrastructure &amp; Automation" title="Infrastructure &amp; Automation" xmlUrl="https://aws.amazon.com/blogs/infrastructure-and-automation/feed/" htmlUrl="https://aws.amazon.com/blogs/infrastructure-and-automation/"/>
      <outline type="rss" text="AWS Developer Blog" title="AWS Developer Blog" xmlUrl="http://feeds.feedburner.com/AwsDeveloperBlog" htmlUrl="https://aws.amazon.com/blogs/developer/"/>
Feel free to contact me at [email protected] or tweet at me @statisticsftw.
This is a rough outline of how we use Next.js with S3/CloudFront. Hope it helps!
It assumes some familiarity with AWS.