| Shortcut | Action |
| --- | --- |
| ⌘T | go to file |
| ⌘⌃P | go to project |
| ⌘R | go to methods |
| ⌃G | go to line |
| ⌘KB | toggle side bar |
| ⌘⇧P | command palette |
```ruby
require 'chef'
require 'chef/node'

class Opscode
  class Backup
    attr_accessor :backup_dir

    def initialize(backup_dir, config_file)
      @backup_dir = backup_dir
      Chef::Config.from_file(config_file)
    end
  end
end
```
```ruby
# Use Chef resources in an application via solo mode.
# Could also be configured in client mode, and then use a server.
require 'rubygems'
require 'chef'
require 'chef/client'
require 'chef/run_context'

Chef::Config[:solo] = true
Chef::Config[:log_level] = :info
```
```ruby
#!/usr/bin/env ruby
# s3-delete-bucket.rb
# Fog-based script for deleting large Amazon AWS S3 buckets (~100 files/second)
# Forked from this excellent script: https://github.com/SFEley/s3nuke
require 'rubygems'
require 'thread'
require 'fog'
```
```shell
# How I managed to install #ruby-2.0 with #rvm on #MacOS 10.8
export CC=/usr/bin/gcc
rvm pkg install openssl
rvm install ruby-head --with-gcc=clang --verify-downloads 1 # see *
rvm use ruby-head
ruby -v
# => ruby 2.0.0dev (2013-02-24) [x86_64-darwin12.2.0]
```
Whether you're trying to give back to the open source community or collaborating on your own projects, knowing how to properly fork a repository and generate pull requests is essential. Unfortunately, it's easy to make mistakes or not know what to do when you're first learning the process. I certainly had considerable trouble with it initially, and I found much of the information on GitHub and around the internet to be piecemeal and incomplete - part of the process described here, another part there, common hangups somewhere else, and so on.
In an attempt to collate this information for myself and others, this short tutorial covers what I've found to be fairly standard procedure for creating a fork, doing your work, issuing a pull request, and merging that pull request back into the original project.
Just head over to the project's GitHub page and click the "Fork" button. It's just that simple. Once you've done that, you can use your favorite git client to clone your repo.
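The steps after clicking "Fork" can be sketched on the command line. The snippet below uses a local repository as a stand-in for GitHub so the commands are runnable as-is; in real use, the clone URL is your fork's URL and the `upstream` remote points at the original project.

```shell
set -e
tmp=$(mktemp -d)

# The original project (the repo you would fork on GitHub)
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=you@example.com -c user.name=you \
  commit -q --allow-empty -m "initial commit"

# "Fork" + clone: get your own copy to work in
git clone -q "$tmp/upstream" "$tmp/myfork"
cd "$tmp/myfork"

# Track the original project so you can pull in upstream changes later
git remote add upstream "$tmp/upstream"

# Do your work on a topic branch, then push the branch to your fork
git checkout -q -b my-feature
git -c user.email=you@example.com -c user.name=you \
  commit -q --allow-empty -m "my change"
git push -q origin my-feature
# From here you would open the pull request on GitHub.
```

Working on a topic branch (rather than master) keeps your pull request isolated and makes it easy to sync master with upstream while the review is in progress.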
A pattern for building personal knowledge bases using LLMs.
This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.
Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
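To make the contrast concrete, here is a minimal Ruby sketch of the per-query retrieval described above. The corpus and the word-overlap scoring are hypothetical stand-ins (a real RAG system would use embeddings and a vector index); the point is that every question re-scores the raw chunks from scratch, with nothing accumulated between queries.

```ruby
# Toy corpus of document chunks (hypothetical).
CHUNKS = [
  "Chef solo mode runs without a server.",
  "RVM can install ruby-head with clang on MacOS.",
  "Fog can delete large S3 buckets with threads."
]

def retrieve(query, k: 2)
  q_words = query.downcase.scan(/\w+/)
  # Score every chunk from scratch on every question: no accumulated knowledge.
  scored = CHUNKS.map do |chunk|
    overlap = (chunk.downcase.scan(/\w+/) & q_words).size
    [overlap, chunk]
  end
  scored.sort_by { |overlap, _| -overlap }.first(k).map { |_, chunk| chunk }
end

retrieve("how do I install ruby with rvm?").first
# top match is the RVM chunk
```

The knowledge-base pattern this file proposes differs in exactly this step: instead of re-ranking raw chunks per query, the agent writes synthesized notes back into the corpus, so later questions hit the accumulated synthesis rather than the original fragments.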