- Joyce Sims - Come Into My Life
- Mantronix - Got to Have Your Love
- Inner City - Good Life
- Krush - House Arrest
- Monie Love - It's a Shame
- Inner City - Big Fun
- SOS Band - The Finest
- Patrice Rushen - Forget Me Nots
We had to deploy ElasticSearch in a particular environment where our hosts are connected to the Internet and can access 2 different subnets, with some restrictions. This makes our setup somewhat tricky, as we need the following:
- eth0: external IP, listening on the Internet. Iptables rules block every connection there on ports 9200 and 9300.
- eth1: an RFC1918 IP address.
- lo:0: a single RFC1918 address shared by every node for IPVS / IPFail, for load balancing and failover purposes.
Why is this setup tricky?
1. By default, ElasticSearch listens on eth0 when it exists and is up. Shutting eth0 down and bringing it back up will simply break your setup, and adding iptables rules on top of that will really get you in trouble. Using unicast discovery with a list of IPs won't be enough to solve the issue; see the configuration sketch below.
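Here is a minimal elasticsearch.yml sketch of the kind of explicit binding this calls for. The interface name, addresses and ports are placeholders, and the discovery keys are the 1.x-era ones, so check them against your version:

# Bind to the internal interface only, instead of the default eth0:
network.bind_host: _eth1:ipv4_
# Address advertised to the other nodes of the cluster:
network.publish_host: _eth1:ipv4_
# Disable multicast and list the cluster nodes explicitly:
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.1.1:9300", "10.0.1.2:9300"]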
require 'open-uri'
require 'json'

def get_twitter_counter
  url = 'http://api.twitter.com/1/users/show.json?screen_name=fdevillamil'
  cache = File.join(Rails.root, "tmp", "twitter_counter")
  return File.read(cache) if File.exists?(cache) && (Time.now - File.mtime(cache)).to_i < 7200
  begin
    # Completion sketch: fetch the profile, cache the followers count for 2 hours
    count = JSON.parse(open(url).read)['followers_count'].to_s
    File.open(cache, 'w') { |f| f.write(count) }
    count
  rescue
    # Fall back to the stale cache (or 0) if the API call fails
    File.exists?(cache) ? File.read(cache) : '0'
  end
end
-- Click Cookie Clicker's big cookie forever
repeat
  tell application "Google Chrome" to tell active tab of window 1
    execute javascript "document.getElementById('bigCookie').click()"
  end tell
end repeat
extension=apc.so
apc.enabled=1
; Shared memory allocated to the cache
apc.shm_size=512M
; Enable upload progress tracking (RFC 1867)
apc.rfc1867=on
apc.max_file_size=512M
; Don't stat files on every request; flush the cache on deploy instead
apc.stat=0
apc.filters="-/path/to/the/stuff/to/exclude/*.*"
# In the autosave action
respond_to do |format|
  format.js
end

# autosave.js.erb
$('#autosave').replaceWith("<%= hidden_field_tag('article[id]', @article.id) %>")
$('#destroy_link').replaceWith('<%= link_to_destroy_draft(@article) %>')
$('#publish').replaceWith("<%= text_field_tag('article', 'published_at', {:class => 'span7 datepicker'}) %>")
# In my controller
respond_to do |format|
  format.js { render 'autosave' }
end

# My view
<%= javascript_tag "alert('All is good');" %>
Branch: refs/heads/master
Home: https://github.com/fdv/publify
Commit: 0cd73579fbbe14e6934fd69f4a3ceb727c4c8ab2
        https://github.com/fdv/publify/commit/0cd73579fbbe14e6934fd69f4a3ceb727c4c8ab2
Author: Frédéric de Villamil <[email protected]>
Date: 2013-11-08 (Fri, 08 Nov 2013)

Changed paths:
  R app/assets/images/admin/glyphicons-halflings.png
  R app/assets/javascripts/bootstrap-affix.js
Every article about Nginx optimization talks about using the sendfile, tcp_nodelay and tcp_nopush settings. Unfortunately, none of them explains why these settings should be used or how they actually work.
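For the record, the combination in question looks like this in nginx.conf; this is a generic sketch, not our actual configuration:

http {
    sendfile    on;   # hand static files to the kernel, zero-copy
    tcp_nopush  on;   # only takes effect with sendfile; fills full TCP segments (TCP_CORK)
    tcp_nodelay on;   # disables Nagle's algorithm on keep-alive connections
}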
A few weeks ago, as we were building the Botify SaaS platform, we started working on the Web server's performance. Since we rely a lot on peer review to improve the quality of our work, Greg left my pull request open with questions, lots of questions, most of them starting with "Why?".
As we didn't find any obvious answer, we started a journey into the Linux kernel's TCP stack, trying to understand Nginx's internals and why we should combine two seemingly opposed options, tcp_nopush and tcp_nodelay.
How can I force a socket to send the data in its buffer? One answer to that tricky question lies in the TCP_NODELAY option of the Linux TCP(7) stack. When you enable it, the kernel disables Nagle's algorithm, and data written to the socket is sent as soon as possible instead of being buffered until there is enough of it to fill a full TCP segment.
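As a standalone illustration (a sketch, not code lifted from Nginx), this is how a program sets TCP_NODELAY on a socket with setsockopt(2):

#include <stdio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_NODELAY */
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    /* Disable Nagle's algorithm: segments are sent as soon as possible,
     * even when there is only a small amount of data to write. */
    int one = 1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0) {
        perror("setsockopt(TCP_NODELAY)");
        return 1;
    }
    return 0;
}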
#!/bin/bash

KITTEN_SITE_URL="http://emergencykitten.com"
KITTEN_IMG_FILE=kitten.jpg
KITTEN_SECS=$1

usage(){
  echo "Usage: $(basename $0) <sleep_time>"
  echo "Example: $(basename $0) 60"
  exit 1
}

# Require the sleep time argument announced by usage()
[ -z "$KITTEN_SECS" ] && usage