Thoughts on habits + practices for giving/receiving feedback effectively, as well as creating systems that get better at this with time
- Company Blogs
- Sandya Sankarram
- Michael Lynch
```bash
#!/usr/bin/env bash
# Steps:
# 1. Remove node_modules and yarn.lock from the project root
# 2. Remove node_modules from all folders under packages
# 3. Install all the node_modules
rm -Rf node_modules yarn.lock \
  && find packages/ \
    -maxdepth 2 \
    -name node_modules \
    -type d \
    -exec rm -Rf {} + \
  && yarn install  # step 3: reinstall everything
```
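Saved as, say, `clean-install.sh` at the root of a Yarn workspaces monorepo (the filename and layout are assumptions, not part of the script itself), it runs like this:

```bash
# Run from the monorepo root so the packages/ path resolves correctly.
chmod +x clean-install.sh
./clean-install.sh
```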
```js
/*
 * You can also just do "node --trace-warnings app.js"
 */
process.on('unhandledRejection', (rejection) => {
  console.log('Unhandled promise rejection warning:');
  console.log(rejection);
});
```
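The flag mentioned in the comment is easy to see in action: reject a promise without attaching a `.catch()` handler and Node emits the warning, with a full stack trace under `--trace-warnings`. The one-liner below is just an illustration; note that recent Node versions treat an unhandled rejection as fatal by default rather than as a warning.

```bash
# Reject a promise with no .catch() handler attached;
# --trace-warnings prints the full stack for the resulting warning.
node --trace-warnings -e "Promise.reject(new Error('boom'))"
```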
A curated list of amazingly awesome Electronics and Hardware platforms #WoT #IoT #M2M
```coffee
# Karma configuration
# Generated on Tue Aug 20 2013 16:26:25 GMT-0400 (EDT)

module.exports = (config) ->
  config.set

    # base path, that will be used to resolve all patterns, eg. files, exclude
    basePath: '..'

    # frameworks to use
    frameworks: ['jasmine']  # assumption: the generator's usual default
```
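With `basePath` set to `'..'`, the config presumably lives one level below the project root, e.g. in a `config/` folder; that path is an assumption. Running it would then look like:

```bash
# Start Karma with the CoffeeScript config; --single-run executes the
# test suite once and exits instead of watching for file changes.
karma start config/karma.conf.coffee --single-run
```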
```bash
#!/usr/bin/env bash
# MIT © Sindre Sorhus - sindresorhus.com
# git hook to run a command after `git pull` if a specified file was changed
# Run `chmod +x post-merge` to make it executable then put it into `.git/hooks/`.

changed_files="$(git diff-tree -r --name-only --no-commit-id ORIG_HEAD HEAD)"

check_run() {
	echo "$changed_files" | grep --quiet "$1" && eval "$2"
}
```
People
| :bowtie: | 😄 :smile: | 😆 :laughing: |
|---|---|---|
| 😊 :blush: | 😃 :smiley: | :relaxed: |
| 😏 :smirk: | 😍 :heart_eyes: | 😘 :kissing_heart: |
| 😚 :kissing_closed_eyes: | 😳 :flushed: | 😌 :relieved: |
| 😆 :satisfied: | 😁 :grin: | 😉 :wink: |
| 😜 :stuck_out_tongue_winking_eye: | 😝 :stuck_out_tongue_closed_eyes: | 😀 :grinning: |
| 😗 :kissing: | 😙 :kissing_smiling_eyes: | 😛 :stuck_out_tongue: |
Yesterday I upgraded our running elasticsearch cluster on a site that serves a few million search requests a day, with zero downtime. I've been asked to describe the process, hence this blog post.
To make it more complicated, the cluster was running elasticsearch version 0.17.8 (released 6 Oct 2011) and I upgraded it to the latest 0.19.10. There have been 21 releases between those two versions, with a lot of functional changes, so I needed to be ready to roll back if necessary.
We run elasticsearch on two biggish boxes: 16 cores plus 32GB of RAM each. All indices have 1 replica, so all data is stored on both boxes (about 45GB of data). The primary data for our main indices is also stored in our database. We have a few other indices whose data lives only in elasticsearch, but those are updated only once daily. Finally, we store our sessions in elasticsearch, though active sessions are cached in memcached.
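That layout is what makes a zero-downtime rolling upgrade possible: with 1 replica, either box holds a full copy of every index, so one node can be taken down and upgraded while the other keeps serving traffic. Below is a minimal sketch of that loop, not the exact procedure from this post; the host names, service name, and upgrade script are placeholders, and only the `_cluster/health` check is a real elasticsearch API.

```bash
# Rolling-upgrade sketch: upgrade one node at a time, and wait for the
# cluster to report green (all replicas reallocated) before touching
# the next one. Hosts, service name, and upgrade step are placeholders.
wait_for_green() {
  until curl -s "http://$1/_cluster/health" | grep -q '"status" *: *"green"'; do
    sleep 5
  done
}

for node in es-box-1 es-box-2; do
  ssh "$node" 'service elasticsearch stop'      # placeholder service name
  ssh "$node" '/opt/upgrade-elasticsearch.sh'   # hypothetical upgrade step
  ssh "$node" 'service elasticsearch start'
  wait_for_green "$node:9200"
done
```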