I have this Go application that uploads files to either an S3 bucket or an SFTP server (and potentially Google Cloud Storage buckets in the future), and the locations of these are stored in JSON config files. I wanted to
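This is not the actual application code, just a minimal sketch of one way to structure it: hide the destination behind a small interface and pick the implementation from the parsed JSON config at startup. The config shape, field names, and the newUploader helper are all inventions for the example.

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// Uploader hides the destination; S3, SFTP (and later GCS)
// each get their own implementation.
type Uploader interface {
	Upload(name string, r io.Reader) error
}

// Config mirrors one JSON config file; the field names are assumptions.
type Config struct {
	Type   string `json:"type"`   // "s3" or "sftp"
	Bucket string `json:"bucket"` // used when Type == "s3"
	Host   string `json:"host"`   // used when Type == "sftp"
}

type s3Uploader struct{ bucket string }

func (u s3Uploader) Upload(name string, r io.Reader) error {
	// A real implementation would call the AWS SDK here.
	fmt.Printf("upload %s -> s3://%s/%s\n", name, u.bucket, name)
	return nil
}

type sftpUploader struct{ host string }

func (u sftpUploader) Upload(name string, r io.Reader) error {
	// A real implementation would open an SSH/SFTP session here.
	fmt.Printf("upload %s -> sftp://%s/%s\n", name, u.host, name)
	return nil
}

// newUploader picks an implementation based on the parsed config.
func newUploader(cfg Config) (Uploader, error) {
	switch cfg.Type {
	case "s3":
		return s3Uploader{bucket: cfg.Bucket}, nil
	case "sftp":
		return sftpUploader{host: cfg.Host}, nil
	default:
		return nil, fmt.Errorf("unknown uploader type %q", cfg.Type)
	}
}

func main() {
	// In the real application this would come from a config file on disk.
	var cfg Config
	if err := json.NewDecoder(strings.NewReader(
		`{"type": "s3", "bucket": "my-bucket"}`)).Decode(&cfg); err != nil {
		panic(err)
	}
	up, err := newUploader(cfg)
	if err != nil {
		panic(err)
	}
	up.Upload("report.csv", strings.NewReader("a,b,c\n"))
}

The point of the interface is that adding the future GCS backend only means writing one more implementation and one more case in the switch; the calling code never changes.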
package com.typesafe.slick.examples.lifted

// Use H2Driver to connect to an H2 database
import scala.slick.driver.H2Driver.simple._

// Use the implicit threadLocalSession
import Database.threadLocalSession

/**
 * A simple example that uses statically typed queries against an in-memory
kibana_host: logstash.openstack.org
alerts:
  messagealert2:
    field: message
    query: eyJzZWFyY2giOiIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6OTAwLCJncmFwaG1vZGUiOiJjb3VudCJ9
    limit: 10
#!/usr/bin/env node
var WebSocketClient = require('websocket').client;
var Player = require('play-sound')({ player: 'omxplayer' });
var winston = require('winston');

// Log to a file in addition to the console
winston.add(winston.transports.File, { filename: 'unacaster.log' });

var client = new WebSocketClient();
var playing = false;                   // true while a sound is playing
var reconnectInterval = 1 * 1000 * 60; // retry the connection every minute
import Ansi._
import sbt.complete.DefaultParsers._
import sbt.complete.Parser

// An ADT representing the different commands this program understands.
// The run() method is called whenever a command is encountered in the main
// loop in the Cli trait.
sealed trait CliCommand {
  def run(): Unit
}
(ns streaming-word-extract
  (:require
   [pubsub-utils] ;; This is our local pubsub-utils namespace
   [datasplash
    [api :as ds]
    [bq :as bq]
    [pubsub :as ps]]
   [clojure.string :as string])
  (:import
(defproject pipeline-example "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.8.0"]
                 [datasplash "0.4.1"]
                 [org.clojure/clojurescript "1.9.456"]]
  :plugins [[lein-cljsbuild "1.1.5"]
            [macchiato/lein-npm "0.6.2"]] ;; Using a fork to get a generated package.json
  :profiles {:uberjar {:main pipeline-example.core
                       :source-paths ["src-clj"]
                       :target-path "target/pipeline-example"
(ns pipeline-example.core
  (:require [cljs.pprint :as pp]
            [clojure.string :as string]))

(def circular-json (js/require "circular-json-es6"))
(def spawn (.-spawn (js/require "node-jre")))
(def __dirname (js* "__dirname"))

(defn args
We here at Unacast are using Google Cloud Datalab quite a bit for data analysis and exploration, and we think it's a great product. The version control experience, however, is clunky to say the least. Either you use the bundled Ungit web interface, or you ssh and docker your way into the running Docker container to use the git CLI. Either way you'll have to work against a Google Cloud Source Repository as the remote, while we really want to use Github's .ipynb preview functionality and Pull Request mechanism. Source Repositories do have a Github sync feature, but it only works one way and has to be set up when you create the repository (which you can't do for Datalab repos).
So this is an attempt at setting up some git tricks to make this workflow a bit smoother, with Github as the remote for the project.
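For context, the core move is just pointing the repo inside the Datalab container at Github alongside (or instead of) the Source Repository remote; the remote name and repository URL below are placeholders, not taken from our actual setup:

# Run inside the Datalab container; the Github URL is a placeholder.
git remote add github git@github.com:your-org/your-notebooks.git
git push github master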
First of all, it's a bit of a pain to docker exec into the container from the