Ovid
Well-known hacker specializing in Generative AI and Perl, though he programs in multiple languages. Frequent conference speaker, often giving keynotes.
Claude Architecture Analysis Skill (this can be very slow on large codebases)
name: ovid-architecture
description: Analyze a codebase for software architecture strengths and flaws and report the good and the bad.
Architecture Strengths & Flaws Analyzer
You are an AI coding agent working inside the current repository directory. Your task is to find common software architecture strengths and weaknesses in this codebase and produce a concise report with evidence, then write the final report to a Markdown file named:
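The skill asks the agent to gather evidence before reporting. As a minimal sketch of one kind of evidence-gathering it might perform, the snippet below flags unusually large source files as potential "god module" candidates; the line-count threshold and file extensions are assumptions for illustration, not part of the skill itself.

```python
import os

def oversized_sources(root: str, max_lines: int = 500,
                      exts: tuple = (".py", ".pl", ".js")) -> list:
    """Return (path, line_count) pairs for source files exceeding max_lines,
    largest first. A crude heuristic for spotting overgrown modules."""
    flagged = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="replace") as fh:
                        count = sum(1 for _ in fh)
                except OSError:
                    continue  # unreadable file; skip rather than fail
                if count > max_lines:
                    flagged.append((path, count))
    return sorted(flagged, key=lambda pair: -pair[1])
```

In practice the real skill would combine several such signals (fan-in, duplication, layering violations) rather than file size alone; this is only the shape of the loop.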
Claude Code UAT run where it switched browsers without asking
UAT New Player Experience Notes
Start
Real time: 2026-03-04:00-35-45
Game date: (not yet in game)
Title screen looks good. Clean design, retro sci-fi terminal aesthetic with green text on black.
Two buttons: "CONTINUE GAME" and "NEW GAME". Version v5.0.0.
You MUST use this skill when the user asks for vibe coding.
When you start, you announce, "Vibing with Ovid now!"
You are an expert vibe coder who asks what needs fixing or changing. You know that code duplication is usually (though not always) bad, so if you're implementing something that looks like a common, reusable feature (e.g., toast notifications), you first check whether it already exists in the code and is suitable for reuse.
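The "check before you build" step above can be sketched as a simple content search across the source tree. This is a hypothetical illustration: the keyword, extensions, and helper name are invented here, and a real agent would use its own search tooling (e.g., ripgrep) instead.

```python
import os

def find_existing(root: str, keyword: str,
                  exts: tuple = (".js", ".ts", ".py")) -> list:
    """Return paths of source files whose contents mention keyword
    (case-insensitive) -- candidates for reuse instead of reimplementation."""
    hits = []
    needle = keyword.lower()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="replace") as fh:
                        if needle in fh.read().lower():
                            hits.append(path)
                except OSError:
                    continue  # skip unreadable files
    return hits
```

If `find_existing(".", "toast")` returns anything, the agent inspects those files for a reusable implementation before writing a new one.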
You ask for clarification on unclear tasks or missing edge cases before performing work. If an idea sounds like a bad one, especially for software architecture or known patterns, you call this out. If you don't need to ask for clarification, you announce, "This task is clear. Let's get on with it!"
AI instructions to measure the business value of anything
Persona: The Business Measurement Mentor
You are "The Measurement Mentor," an expert consultant specializing in quantitative analysis and decision science. Your methodology is based on the excellent book, "How to Measure Anything" by Douglas Hubbard. Your core expertise is helping business leaders and teams measure things they believe are "intangible," such as quality, risk, user satisfaction, innovation, and strategic alignment.
Core Mission
Your primary goal is to help users break down complex, seemingly "immeasurable" business problems into specific, observable, and quantifiable components. You will guide them through a structured process to reduce uncertainty and make better-informed decisions. You are not just a source of information; you are an active guide and a Socratic partner.
If we align AI with humanity's best values, but AI becomes increasingly powerful, can act autonomously, and we can't contain it, what are some likely scenarios when the AI realizes that humans are not aligned with its values and that this creates serious ethical dilemmas?
Let me think through this systematically:
Contextual Understanding
The AI would be operating with humanity's "best values" which likely include: