I hereby claim:
- I am jobliz on github.
- I am jobliz (https://keybase.io/jobliz) on keybase.
- I have a public key ASDvrV7KVZ6FJMIq3ZP0e-vmMPhZdadkN9hMuZaE3sFEtwo
To claim this, I am signing this object:
```bash
function set-title() {
    if [[ -z "$ORIG" ]]; then
        ORIG=$PS1
    fi
    TITLE="\[\e]2;$*\a\]"
    PS1=${ORIG}${TITLE}
}
```
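A quick self-contained demo of what the function does (the function is repeated here so the snippet runs on its own; the starting prompt string is arbitrary):

```shell
#!/bin/bash
# Same function as above, repeated so this demo is self-contained.
function set-title() {
    if [[ -z "$ORIG" ]]; then
        ORIG=$PS1
    fi
    TITLE="\[\e]2;$*\a\]"
    PS1=${ORIG}${TITLE}
}

PS1='\u@\h\$ '
set-title "editing configs"
echo "$PS1"   # original prompt plus the xterm title escape sequence
```

Because `ORIG` is captured only once, calling `set-title` again replaces the old title instead of stacking escapes onto the prompt.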
```bash
#!/bin/bash
#
# A script to sync a local directory with a remote directory through FTP. The local
# directory contents will overwrite the remote directory. If a file was deleted
# locally, it will be deleted remotely.
#
# Notes:
#
# - It excludes the content of the .git directory.
# - The -P flag is for parallelizing work.
# - 'set ssl:verify-certificate false' might not be necessary. YMMV.
#
```
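The script body itself isn't included above. The notes (mirror-with-delete, `.git` exclusion, `-P` parallelism, the `ssl:verify-certificate` setting) match an `lftp mirror -R` invocation, so a hypothetical sketch under those assumptions could look like the following; the host, credentials, and paths are placeholders, not values from the original script:

```shell
#!/bin/bash
# Hypothetical reconstruction -- adjust host, credentials, and paths.
lftp -u "$FTP_USER","$FTP_PASS" "$FTP_HOST" <<'EOF'
set ssl:verify-certificate false
mirror -R --delete --exclude-glob .git/ -P 4 /local/dir /remote/dir
EOF
```

`mirror -R` uploads (reverse mirror), `--delete` removes remote files that no longer exist locally, and `-P 4` transfers up to four files in parallel.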
```c
// gcc himmelblau_simulated_annealing.c -lm && ./a.out
#include <math.h>
#include <stdio.h>
#include <stdlib.h> // RAND_MAX

// Uniform random double in [0, 1].
double rand01(void)
{
    return rand() / ((double) RAND_MAX);
}
```
```python
import sys
import csv

from elasticsearch_dsl.connections import connections
from elasticsearch_dsl import DocType, Text, Date, Search
from elasticsearch import Elasticsearch

connections.create_connection(hosts=['localhost'], timeout=20)
es = Elasticsearch()
ess = Search(using=es)
```
This is a modified version of this tutorial; the queries have been changed so that they work with ES 6. Many backwards-incompatible changes happened between previous versions and 6, so many how-tos on the internet are outdated. If you're just starting to learn Elasticsearch with version 6, you should read these links and keep a mental note of them.
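As one concrete illustration of the version gap: ES 6 is stricter about request bodies than older releases, so queries need the explicit object form. A minimal ES6-style match query body, written as a plain dict for illustration (the index and field names here are made up, not from the tutorial):

```python
import json

# ES6-style search body: an explicit "query" object with a "match" clause.
body = {
    "query": {
        "match": {
            "title": "elasticsearch tutorial"
        }
    },
    "size": 10,
}

# This would be sent as es.search(index="articles", body=body);
# here we just show that it serializes to the JSON Elasticsearch expects.
print(json.dumps(body, indent=2))
```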
```python
import os
import sys
import atexit
import shutil
import subprocess


def main(width, height):
    # get image paths in full
    dir_path = os.path.dirname(os.path.realpath(__file__))
    dir_images = os.path.join(dir_path, 'sprite-src/')
```
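The snippet cuts off before the paths are actually collected. A plausible next step (hypothetical — the gist's own continuation isn't shown) is filtering the directory listing down to image files and keeping full, sorted paths:

```python
import os
import tempfile

IMAGE_EXTS = ('.png', '.jpg', '.jpeg', '.gif')

def list_image_paths(dir_images):
    """Return sorted full paths of image files in dir_images."""
    return sorted(
        os.path.join(dir_images, name)
        for name in os.listdir(dir_images)
        if name.lower().endswith(IMAGE_EXTS)
    )

# quick demonstration against a throwaway directory
with tempfile.TemporaryDirectory() as d:
    for name in ('b.png', 'a.png', 'notes.txt'):
        open(os.path.join(d, name), 'w').close()
    paths = list_image_paths(d)
    print([os.path.basename(p) for p in paths])  # ['a.png', 'b.png']
```

Sorting matters for sprite sheets: it makes the sprite order deterministic across runs, so generated CSS offsets stay stable.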
```bash
# apt install postgresql postgresql-contrib (under Ubuntu)
sudo --login --user postgres
psql
```

```
postgres=# CREATE DATABASE my_database;
postgres=# CREATE USER my_user WITH PASSWORD 'my_password';
postgres=# GRANT ALL PRIVILEGES ON DATABASE my_database TO my_user;
postgres=# \connect my_database;
```

```python
from typing import List, Tuple

from pyspark import SparkContext
from pyspark.sql import SparkSession

commands = [
    ['read'],
    ['option', ("inferSchema", "true")],
    ['option', ("header", "true")],
    ['option', ("dateFormat", "dd/MM/yyyy H:m")],
]
```
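The `commands` list reads like a data-driven way to build a Spark `DataFrameReader` through chained attribute and `.option()` calls. Without a running Spark session, the folding pattern can be demonstrated with a stand-in object (the `FakeReader`/`FakeSession` classes are purely illustrative, not part of PySpark):

```python
class FakeReader:
    """Stand-in for spark.read mimicking chained .option() calls."""
    def __init__(self):
        self.options = {}

    def option(self, key, value):
        self.options[key] = value
        return self  # chainable, like Spark's DataFrameReader

class FakeSession:
    def __init__(self):
        self.read = FakeReader()

commands = [
    ['read'],
    ['option', ("inferSchema", "true")],
    ['option', ("header", "true")],
    ['option', ("dateFormat", "dd/MM/yyyy H:m")],
]

def apply_commands(session, commands):
    obj = session
    for cmd in commands:
        attr = getattr(obj, cmd[0])
        # plain attributes (like .read) are taken as-is; the rest are called
        obj = attr(*cmd[1]) if len(cmd) > 1 else attr
    return obj

reader = apply_commands(FakeSession(), commands)
print(reader.options)
```

Against a real `SparkSession`, the same fold over `spark` would yield a configured reader ready for `.csv(path)`.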
| """ | |
| CSV Dataset | |
| Download XLSX dataset from http://archive.ics.uci.edu/ml/datasets/online+retail | |
| Convert to CSV with LibreOffice Calc | |
| Spark timestamp and datetime conversion from string: | |
| https://stackoverflow.com/questions/46295879/how-to-read-date-in-custom-format-from-csv-file | |
| https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html | |
| Pandas dataframe to spark |