# Some discussions on logging from Docker: using Logstash, using Papertrail
A lot of this boils down to whether you want a single-process or multi-process (systemd, supervisord, etc.) container...
#!/bin/sh
exec scala -savecompiled "$0" "$@"
!#
//
// Copyright (C) 2009-2013 Typesafe Inc. <http://www.typesafe.com>
// Modified 2014 AI2 <http://www.allenai.org>
//
// This script checks that the commit is correctly formatted. It only checks files that are staged for commit.
// To run, this file should be at `.git/hooks/pre-commit`.
#!/usr/bin/env bash
#
set -euo pipefail
# Clear any inherited JVM settings so sbt runs with a clean environment.
unset SBT_OPTS JVM_OPTS JDK_HOME JAVA_HOME
# `: "${VAR:=default}"` assigns a default only if VAR is unset or empty.
: "${TRAVIS_SCALA_VERSION:=2.11.8}"
# Use the command-line arguments as the sbt target, falling back to `test`.
: "${SBT_TARGET:=$*}"
: "${SBT_TARGET:=test}"
case class IO[A](unsafePerformIO: () => A) {
  def map[B](ab: A => B): IO[B] = IO(() => ab(unsafePerformIO()))
  def flatMap[B](afb: A => IO[B]): IO[B] = IO(() => afb(unsafePerformIO()).unsafePerformIO())
  def tryIO(ta: Throwable => A): IO[A] =
    IO(() => IO.tryIO(unsafePerformIO()).unsafePerformIO() match {
      case Left(t) => ta(t)
      case Right(a) => a
    })
}
object IO {
  // Completed from context: capture exceptions from a by-name value into an Either.
  def tryIO[A](a: => A): IO[Either[Throwable, A]] =
    IO(() => try Right(a) catch { case t: Throwable => Left(t) })
}
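A minimal, self-contained sketch of how an `IO` like the one above composes — the `program` value here is mine, for illustration only. The key property is that nothing runs until `unsafePerformIO()` is called:

```scala
// Self-contained copy of the core of the IO above, to show composition.
case class IO[A](unsafePerformIO: () => A) {
  def map[B](ab: A => B): IO[B] = IO(() => ab(unsafePerformIO()))
  def flatMap[B](afb: A => IO[B]): IO[B] =
    IO(() => afb(unsafePerformIO()).unsafePerformIO())
}

// Build a description of a computation; no effect has run yet.
val program: IO[Int] =
  IO(() => 20).flatMap(a => IO(() => a + 1).map(_ * 2))

// The effect fires only here:
println(program.unsafePerformIO()) // prints 42
```

Because `map` and `flatMap` only wrap thunks, side effects stay deferred and referentially transparent until the single `unsafePerformIO()` call at the edge of the program.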
package com.cym_iot.training.testspark16

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.{Dataset, Encoder, SQLContext}
import org.apache.spark.{SparkConf, SparkContext}
import shapeless.tag
import shapeless.tag.@@
/**
 * Let's write a typeclass for a coproduct. The idea is we're given a string, and we need to:
 *
 * 1. Check whether the string matches a value.
 * 2. If the string matches a value, convert that string into some type and return it.
 * 3. If the string doesn't match that value, try another alternative.
 * 4. If no alternatives match the value, return an error.
 *
 * The use case is based on something I encountered in real life: we have to parse different kinds of events in
 * my work's data pipeline, and the type of event (and subsequent parsing) depends on an "event type" string. I
package foo

import reactivemongo.bson.{BSONHandler, BSONDateTime, Macros}
import org.joda.time.format.ISODateTimeFormat
import org.joda.time.{DateTime, DateTimeZone}

package object bar {
  DateTimeZone.setDefault(DateTimeZone.UTC)
  implicit object BSONDateTimeHandler extends BSONHandler[BSONDateTime, DateTime] {
    // Completed from context: a BSONHandler needs both read and write.
    def read(time: BSONDateTime): DateTime = new DateTime(time.value)
    def write(time: DateTime): BSONDateTime = BSONDateTime(time.getMillis)
  }
}
Doesn't work on mobile ANYTHING!!! This item should be on a giant, 2 meter poster in the Atlassian office. It should be in all caps, red text, with images of blood droplets dripping from the lettering. There should be a pager alert sent to a random project manager every night at 2am containing this text until such time as the issue is resolved. There are no words for how much of a problem this is.
I've been asked a few times over the last few months to put together a full write-up of the Git workflow we use at RichRelevance (and at Precog before), since I have referenced it in passing quite a few times in tweets and in person. The workflow is appreciably different from GitFlow and its derivatives, and thus it brings with it a different set of tradeoffs and optimizations. To that end, it would probably be helpful to go over exactly which properties of a workflow I find beneficial or even necessary.
#!/bin/bash
# Arguments: -h -v -i groupId:artifactId:version -c classifier -p packaging -r repository
#shopt -o -s xtrace

# Define Nexus configuration
NEXUS_BASE=http://repository.example.com:8081/nexus
REST_PATH=/service/local
ART_REDIR=/artifact/maven/redirect