- obfuscate access keys in conf as `${AWS_ACCESS_KEY_ID}`
- if you get an error complaining about unresolved `${...}` substitutions, make sure to `resolve()` the config file, like: `ConfigFactory.parseFile(new File("conf/application.conf")).resolve()`
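A minimal sketch of the parse-and-resolve step, assuming Typesafe Config and the conf path above; the `aws.access-key` path is illustrative:

```scala
import java.io.File
import com.typesafe.config.{Config, ConfigFactory}

// application.conf holds the obfuscated value, e.g.  access-key: ${AWS_ACCESS_KEY_ID}
// resolve() fills in the ${...} substitutions (falling back to environment
// variables); without it, reading an unresolved value throws a ConfigException.
val conf: Config = ConfigFactory.parseFile(new File("conf/application.conf")).resolve()

// illustrative read -- the real path depends on the app's config layout
val accessKey: String = conf.getString("aws.access-key")
```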
- add `addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "0.8.0-M1")` to project/plugins.sbt and `.settings(com.typesafe.sbt.SbtNativePackager.packageArchetype.java_application: _*)` to wherever `val project` is defined (project/ScalaCollectorBuild.scala, sketched below); this provides the `sbt stage` task that Heroku runs to compile. Be careful to leave a blank line between each entry in plugins.sbt
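A sketch of the build wiring, assuming an old-style `Build` object; the project id and the existing settings are placeholders:

```scala
// project/ScalaCollectorBuild.scala
import sbt._
import com.typesafe.sbt.SbtNativePackager

object ScalaCollectorBuild extends Build {
  // packageArchetype.java_application contributes the `stage` task that Heroku
  // invokes, producing a launcher script under target/universal/stage/bin/
  lazy val project = Project("scala-collector", file("."))
    .settings(/* existing project settings */)
    .settings(SbtNativePackager.packageArchetype.java_application: _*)
}
```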
- add `java.runtime.version=1.7` to system.properties in repo root
- add `worker: ./target/universal/stage/bin/name-of-my-app --config myconf.conf` to `Procfile` in repo root (both files sketched below); check the name of the executable after `sbt compile stage`. Don't use `sh` here, Heroku runs Ubuntu with dash, not bash
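For reference, a sketch of the two files in the repo root for the Scala app; `name-of-my-app` and `myconf.conf` are the placeholders from above, and the real launcher name is whatever `sbt compile stage` puts under target/universal/stage/bin/.

system.properties:

```
java.runtime.version=1.7
```

Procfile:

```
worker: ./target/universal/stage/bin/name-of-my-app --config myconf.conf
```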
- obfuscate access keys in conf as `${AWS_ACCESS_KEY_ID}` (same again, this time for the kinesis-redshift-sink app)
- add `java.runtime.version=1.7` to system.properties in repo root
- ensure that the jar compiles with `mvn clean package`; if it doesn't build a runnable jar, include maven-shade-plugin (sketched below)
- add `worker: java $JAVA_OPTS -jar ./target/kinesis-redshift-sink-0.0.1.jar` to `Procfile` in repo root; check the name of the built jar after `mvn clean package`
- delete the gpg signing plugin option in the mvn pom.xml if it exists
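The maven-shade-plugin snippet referred to above isn't preserved in these notes; a typical stanza for producing a runnable fat jar looks roughly like this (the version and `mainClass` are placeholders to adjust for the sink app):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.2</version>
  <executions>
    <execution>
      <!-- bind shading to package, so `mvn clean package` emits the fat jar -->
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- set Main-Class in the manifest so `java -jar target/...` works -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.example.KinesisRedshiftSinkApp</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```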
- push to heroku (command sequence sketched below)
- `heroku config:set AWS_ACCESS_KEY_ID=...` and the same for `AWS_SECRET_ACCESS_KEY`
- `heroku ps:scale worker=1` to run the "worker" process defined in `Procfile`; in the case of the collector app, which binds and listens on a port, run the web process instead
- `heroku logs -t` to verify
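Putting the deploy steps together as a terminal sketch, assuming the app already has a Heroku remote (e.g. from `heroku create`):

```
git push heroku master        # push; Heroku runs the build (sbt compile stage or mvn) during the push
heroku config:set AWS_ACCESS_KEY_ID=...
heroku config:set AWS_SECRET_ACCESS_KEY=...
heroku ps:scale worker=1      # start the "worker" process from the Procfile
heroku ps:scale web=1         # for the collector app that listens on a port, scale web instead
heroku logs -t                # tail the logs to verify the process came up
```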
Maybe this should go in the README.