Denis A (da115115), France
@sahilsk
sahilsk / ELK Procedure
Created August 26, 2014 04:53
Nginx proxy configuration for Elasticsearch
## Install Docker
```
# install the backported kernel
$ sudo apt-get update
$ sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
```
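The excerpt above only covers the Docker install; a minimal sketch of the nginx-in-front-of-Elasticsearch piece the title refers to, assuming Elasticsearch on its default port 9200 (the server name, config path, and htpasswd file are placeholders, not taken from the gist):

```
# Sketch only: reverse-proxy Elasticsearch behind nginx with basic auth.
# File path, server name, and htpasswd location are assumptions.
sudo tee /etc/nginx/conf.d/elasticsearch.conf > /dev/null <<'EOF'
server {
    listen 80;
    server_name es.example.com;

    location / {
        auth_basic           "Restricted Elasticsearch";
        auth_basic_user_file /etc/nginx/htpasswd;
        proxy_pass           http://127.0.0.1:9200;
    }
}
EOF
sudo nginx -t && sudo nginx -s reload
```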
@gwenzek
gwenzek / ArgsOps.scala
Last active July 3, 2021 21:52
Scala parser for program args
```
import scala.language.implicitConversions

/**
 * @author gwenzek
 */
class ArgsOps(ops: Map[String, OptionalParam], val args: Array[String]) {
  // OptionalParam and its parsing helpers are presumably defined later in the gist (not shown in this preview)
  def apply(op: String) = ops(op)
}
```
@zxbodya
zxbodya / S3-CORS-config.xml
Last active April 6, 2024 03:40
Client-side uploads to S3 with a pre-signed upload form (PHP/JS)
```
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
```
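The gist pairs this CORS policy with a PHP-generated pre-signed POST form. The signing code is not shown in this excerpt, but once the policy document and its signature exist, the browser's form submission is equivalent to a multipart POST like the curl sketch below (bucket name, key, and every field value are placeholders):

```
# Sketch of the client-side upload once the server (the gist's PHP) has produced
# the base64 policy and its signature; all values below are placeholders.
curl "https://my-bucket.s3.amazonaws.com/" \
  -F "key=uploads/\${filename}" \
  -F "AWSAccessKeyId=AKIAXXXXXXXXXXXXXXXX" \
  -F "acl=private" \
  -F "policy=BASE64_ENCODED_POLICY" \
  -F "signature=POLICY_SIGNATURE" \
  -F "file=@photo.jpg"   # the file field must come last in an S3 form POST
```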
```
$ mahout
Running on hadoop, using /home/akm/hadoop-2.4.1/bin/hadoop and HADOOP_CONF_DIR=
MAHOUT-JOB: /home/akm/mahout/examples/target/mahout-examples-0.10.1-SNAPSHOT-job.jar
An example program must be given as the first argument.
Valid program names are:
  arff.vector: : Generate Vectors from an ARFF file or directory
  baumwelch: : Baum-Welch algorithm for unsupervised HMM training
  buildforest: : Build the random forest classifier
  canopy: : Canopy clustering
  cat: : Print a file or resource as the logistic regression models would see it
```
@soheilhy
soheilhy / nginxproxy.md
Last active May 14, 2025 20:17
How to proxy web apps using nginx?

Virtual Hosts on nginx (CSC309)

When hosting our web applications, we often have only one public IP address (i.e., an IP address visible to the outside world) on which we want to host multiple web apps. For example, one may want to host three different web apps for example1.com, example2.com, and example1.com/images on the same machine, using a single IP address.

How can we do that? Well, the good news is that Internet browsers send the domain name they are trying to reach in the Host header of every request, so a front-end server such as nginx can use that name to decide which application should handle the request (name-based virtual hosting).
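A sketch of what that looks like in practice: one server block per domain, each proxying to a local app on its own port. The domains, ports, and file paths below are placeholders, not taken from the gist:

```
# Sketch only: two virtual hosts plus a sub-path, all on one IP (Debian/Ubuntu-style paths).
sudo tee /etc/nginx/sites-available/example-apps > /dev/null <<'EOF'
server {
    listen 80;
    server_name example1.com;

    location / {
        proxy_pass http://127.0.0.1:3001;   # app behind example1.com
    }
    location /images/ {
        proxy_pass http://127.0.0.1:3003;   # app behind example1.com/images
    }
}

server {
    listen 80;
    server_name example2.com;

    location / {
        proxy_pass http://127.0.0.1:3002;   # app behind example2.com
    }
}
EOF
sudo ln -sf /etc/nginx/sites-available/example-apps /etc/nginx/sites-enabled/
sudo nginx -t && sudo nginx -s reload
```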

@sr75
sr75 / wget-jdk-oracle-install-example.txt
Last active March 16, 2023 11:28
wget command to install the Oracle Java JDK from the stupid Oracle website, for CentOS and Ubuntu
http://d.stavrovski.net/blog/post/how-to-install-and-setup-oracle-java-jdk-in-centos-6
```
# rpm
wget --no-cookies \
     --no-check-certificate \
     --header "Cookie: oraclelicense=accept-securebackup-cookie" \
     "http://download.oracle.com/otn-pub/java/jdk/7u55-b13/jdk-7u55-linux-x64.rpm" \
     -O jdk-7-linux-x64.rpm

# ubuntu
```
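The Ubuntu half is not shown above; presumably it is the same cookie trick pointed at the .tar.gz archive rather than the rpm. A guess at what it looks like, the exact URL and install path being assumptions:

```
# ubuntu (assumed; not part of the original excerpt): fetch the tarball and unpack it
wget --no-cookies \
     --no-check-certificate \
     --header "Cookie: oraclelicense=accept-securebackup-cookie" \
     "http://download.oracle.com/otn-pub/java/jdk/7u55-b13/jdk-7u55-linux-x64.tar.gz" \
     -O jdk-7u55-linux-x64.tar.gz
sudo mkdir -p /opt/java
sudo tar -xzf jdk-7u55-linux-x64.tar.gz -C /opt/java
```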
@korya
korya / Subfolder to git repo.md
Last active May 18, 2025 08:50
Convert a subfolder into a Git submodule
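The gist body is not shown in this excerpt. A rough sketch of the usual recipe for this, splitting the subfolder's history out into its own repository and re-adding it as a submodule; repository URLs, branch, and folder names are placeholders, and the gist's own steps may differ:

```
# Sketch only; paths and URLs are placeholders.
cd original-repo

# 1. extract the subfolder's history onto its own branch
git subtree split --prefix=path/to/subfolder -b subfolder-only

# 2. create a new repository from that branch and push it
mkdir ../subfolder-repo && cd ../subfolder-repo
git init
git pull ../original-repo subfolder-only
git remote add origin git@github.com:user/subfolder-repo.git
git push -u origin master

# 3. back in the original repo: drop the folder and re-add it as a submodule
cd ../original-repo
git rm -r path/to/subfolder
git commit -m "Replace path/to/subfolder with a submodule"
git submodule add git@github.com:user/subfolder-repo.git path/to/subfolder
git commit -m "Add subfolder-repo as a submodule"
```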
@christopheranderton
christopheranderton / homebrew-github-api-token.md
Last active August 15, 2024 15:19
Set your GitHub API token if you hit a "GitHub API rate limit exceeded" error when searching with Homebrew (http://brew.sh/).

Description

PLEASE SCROLL DOWN AND READ THE COMMENTS FOR A MORE UP TO DATE WAY (AND EASIER) TO DO THIS
When using Homebrew (http://brew.sh) and searching formulas or pull requests, you may get the dreaded error message: "GitHub API rate limit exceeded".

Let's fix that! (yeah!)


Short version

PLEASE SCROLL DOWN AND READ THE COMMENTS FOR A MORE UP TO DATE WAY (AND EASIER) TO DO THIS
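The short version boils down to generating a GitHub personal access token (typically no scopes are needed just to raise the rate limit) and exporting it where Homebrew can find it. A minimal sketch, with the token value and shell rc file as placeholders:

```
# Create a token at https://github.com/settings/tokens, then make it visible to Homebrew.
# HOMEBREW_GITHUB_API_TOKEN is the variable Homebrew reads; the value and rc file are placeholders.
echo 'export HOMEBREW_GITHUB_API_TOKEN="your-token-here"' >> ~/.bash_profile
source ~/.bash_profile
brew search something   # should no longer hit the anonymous rate limit
```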

@higarmi
higarmi / Flatten JSON or a nested dictionary
Created September 26, 2013 01:49
This recursive Python function flattens a JSON file or a dictionary with nested lists and/or dictionaries. The output is a flattened dictionary that uses dot-chained names for keys, based on the dictionary structure. This allows reconstructing the JSON structure or converting it to other formats without losing any structural information.
"""
example: The following JSON document:
{"maps":[{"id1":"blabla","iscategorical1":"0", "perro":[{"dog1": "1", "dog2": "2"}]},{"id2":"blabla","iscategorical2":"0"}],
"masks":{"id":"valore"},
"om_points":"value",
"parameters":{"id":"valore"}}
will have the following output:
{'masks.id': 'valore', 'maps.iscategorical2': '0', 'om_points': 'value', 'maps.iscategorical1': '0',
'maps.id1': 'blabla', 'parameters.id': 'valore', 'maps.perro.dog2': '2', 'maps.perro.dog1': '1', 'maps.id2': 'blabla'}
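The Python function itself is truncated out of this preview. As a quick way to experiment with the same idea from the shell, jq can produce a comparable dot-keyed flattening; note that, unlike the example output above, jq keeps array indices in the key path (e.g. maps.0.id1):

```
# Flatten nested JSON into dot-joined keys with jq (array indices are preserved).
jq '[leaf_paths as $p | {key: ($p | map(tostring) | join(".")), value: getpath($p)}] | from_entries' input.json
```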
@chetan
chetan / list_hadoop_codecs.sh
Last active August 11, 2023 12:38
List the available hadoop codecs
```
#!/usr/bin/env bash
# list_hadoop_codecs.sh
#
# USAGE:
#   curl -sL https://gist.github.com/chetan/6524829/raw/list_hadoop_codecs.sh | bash

# make sure hadoop is available
if [[ -z `which hadoop 2>/dev/null` ]]; then
    echo "hadoop command not found!" >&2
    exit 1
fi
```