- Install Hubot locally
- Install the Slack adapter
- Configure the integration on Slack
- Go to [your-slack].slack.com/services/new/hubot
- Deploy to Heroku

If you want to rename the automatically generated Heroku domain name, use `heroku apps:rename` (included in the sketch below).
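A minimal sketch of the steps above, assuming the Yeoman generator for Hubot and the Heroku CLI; `mybot`, the Slack token value, and the new app name are placeholders:

```bash
# install Hubot locally via the Yeoman generator
npm install -g yo generator-hubot
mkdir mybot && cd mybot
yo hubot --adapter slack        # also installs the hubot-slack adapter

# deploy to Heroku
git init && git add . && git commit -m "initial hubot"
heroku create
heroku config:set HUBOT_SLACK_TOKEN=xoxb-your-token-here
git push heroku master

# optional: rename the automatically generated Heroku domain
heroku apps:rename my-new-bot-name
```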
```python
# Original project: https://github.com/alexisrozhkov/extract_media_from_backup
import os
import shutil
import sqlite3
import argparse


# http://stackoverflow.com/questions/12517451/python-automatically-creating-directories-with-file-output
def copy_file_create_subdirs(src_file, dst_file):
    """Copy src_file to dst_file, creating any missing parent directories."""
    dst_dir = os.path.dirname(dst_file)
    if dst_dir and not os.path.exists(dst_dir):
        os.makedirs(dst_dir)
    shutil.copy2(src_file, dst_file)
```
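A quick usage sketch; the source and destination paths here are hypothetical:

```python
# copies the file and creates extracted/2015/ on the way if needed
copy_file_create_subdirs("backup/Media/photo.jpg", "extracted/2015/photo.jpg")
```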
```yaml
# Location: /srv/salt/raidarray/
# Notice the namespacing at the top level of each block, followed by the module.function
raid-create-array:
  cmd.script:
    - source: salt://raidarray/create_raid.sh
    - cwd: /
    - user: root
```

Here the `pkg` module's `installed` function is called, but it requires that the `cmd` block above ran first, referenced by its namespaced ID.
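A minimal sketch of that requisite, assuming `mdadm` is the package being installed:

```yaml
mdadm:
  pkg.installed:
    - require:
      - cmd: raid-create-array
```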
```python
# Drop into an interactive console here with globals and locals in scope
# (a poor man's breakpoint); exit the console with Ctrl-D to resume.
import code; code.interact(local=dict(globals(), **locals()))
```
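For example, placed inside a function; `process` and its argument are hypothetical:

```python
def process(items):
    total = 0
    for item in items:
        total += item
        # pause here to inspect `item` and `total` interactively
        import code; code.interact(local=dict(globals(), **locals()))
    return total
```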
Attention: the list has moved to https://github.com/dypsilon/frontend-dev-bookmarks. This page is no longer maintained; please update your bookmarks.
```ruby
namespace :git do
  desc "Prune remote git cached copy (fixes errors with deleted branches)"
  task :prune do
    repository_cache = File.join(shared_path, 'cached-copy')
    logger.info "Pruning origin in remote cached-copy..."
    run "cd #{repository_cache}; git remote prune origin"
  end

  desc "Clear Capistrano Git cached-copy"
  task :clear_cache do
    # remove the cached copy entirely; Capistrano re-clones on the next deploy
    repository_cache = File.join(shared_path, 'cached-copy')
    logger.info "Removing remote cached-copy..."
    run "rm -rf #{repository_cache}"
  end
end
```
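These tasks can be invoked directly (`cap git:prune`), or, if pruning should happen on every deploy, hooked in before the code update; a sketch for Capistrano 2:

```ruby
before "deploy:update_code", "git:prune"
```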
```bash
# http://guides.rubygems.org/make-your-own-gem/
rvm use ruby-1.9.3-p0@lorem --create   # create a dedicated gemset
bundle gem lorem                       # generate the gem skeleton
echo "rvm use ruby-1.9.3-p0@lorem --create" >> lorem/.rvmrc
cd lorem
gem build lorem.gemspec                # build the .gem from the gemspec
git tag -a v0.0.1 -m 'version 0.0.1'   # tag the release
git push --tags
gem push lorem-0.0.1.gem               # publish to rubygems.org
```
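`bundle gem` generates a gemspec for you to fill in; a minimal sketch of what `lorem.gemspec` might look like (all field values here are assumptions):

```ruby
Gem::Specification.new do |s|
  s.name    = 'lorem'
  s.version = '0.0.1'
  s.summary = 'Lorem ipsum generator'
  s.authors = ['Your Name']
  s.files   = Dir['lib/**/*.rb']
end
```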
```ruby
require 'statsd'
$statsd = Statsd.new('your_host_here')

# time every Rails controller action, keyed by controller/action/format
ActiveSupport::Notifications.subscribe /process_action.action_controller/ do |*args|
  event      = ActiveSupport::Notifications::Event.new(*args)
  controller = event.payload[:controller]
  action     = event.payload[:action]
  format     = event.payload[:format] || "all"
  format     = "all" if format == "*/*"
  status     = event.payload[:status]
  key        = "#{controller}.#{action}.#{format}"

  $statsd.timing "#{key}.total_duration", event.duration
  $statsd.timing "#{key}.db_time", event.payload[:db_runtime]
  $statsd.timing "#{key}.view_time", event.payload[:view_runtime]
  $statsd.increment "#{key}.status.#{status}"
end
```
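In a Rails app this subscriber would typically live in an initializer, e.g. `config/initializers/statsd.rb` (the path is an assumption). `event.duration` and the `db_runtime`/`view_runtime` payload values are reported in milliseconds, which is what `Statsd#timing` expects.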
```bash
#!/bin/bash
# herein we backup our indexes! this script should run at like 6pm or something, after logstash
# rotates to a new ES index and there's no new data coming in to the old one. we grab metadata,
# compress the data files, create a restore script, and push it all up to S3.
TODAY=`date +"%Y.%m.%d"`
INDEXNAME="logstash-$TODAY" # this had better match the index name in ES
INDEXDIR="/usr/local/elasticsearch/data/logstash/nodes/0/indices/"
BACKUPCMD="/usr/local/backupTools/s3cmd --config=/usr/local/backupTools/s3cfg put"
BACKUPDIR="/mnt/es-backups/"
YEARMONTH=`date +"%Y-%m"`
```
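A simplified sketch of the remaining steps the header comment describes; the S3 bucket name below is a placeholder:

```bash
# compress the old index's data files and push the archive to S3
mkdir -p "$BACKUPDIR/$YEARMONTH"
tar czf "$BACKUPDIR/$YEARMONTH/$INDEXNAME.tar.gz" -C "$INDEXDIR" "$INDEXNAME"
$BACKUPCMD "$BACKUPDIR/$YEARMONTH/$INDEXNAME.tar.gz" "s3://your-bucket/elasticsearch/$YEARMONTH/"
```

Scheduled from cron at 18:00 so it fires after the daily index rotation, e.g. `0 18 * * * /path/to/this/script.sh` (the script path is a placeholder).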