Consumer key: IQKbtAYlXLripLGPWd0HUA
Consumer secret: GgDYlkSvaPxGxC4X8liwpUoqKwwr3lCADbz8A7ADU
Consumer key: 3nVuSoBZnx6U4vzUxf5w
Consumer secret: Bcs59EFbbsdF6Sl9Ng71smgStWEGwXXKSjYvPVt7qys
Consumer key: CjulERsDeqhhjSme66ECg
require 'launchy'

CLIENT_ID = 'a3506a51a28b4d639ca680123c43f88d'
REDIRECT_URI = 'http://www.epicodus.com/'

puts 'To use InstaCommandLine, you need to grant access to the app.'
puts 'Press enter to launch your web browser and grant access.'
gets

Launchy.open "https://instagram.com/oauth/authorize/?client_id=#{CLIENT_ID}&redirect_uri=#{REDIRECT_URI}&response_type=token"
sudo apt-get install alien
sudo alien -i oracle-instantclient*-basic-*.rpm
sudo alien -i oracle-instantclient*-devel-*.rpm
https://graph.facebook.com/oauth/access_token?client_id=YOUR_APP_ID&client_secret=YOUR_APP_SECRET&grant_type=client_credentials
There are many ways to serve a Go HTTP application, and the best choice depends on the use case. Currently nginx looks to be the standard web server for every new project, even though there are other great web servers as well. But how much overhead does serving a Go application behind nginx add? Do you need nginx features (vhosts, load balancing, caching, etc.), or can you serve directly from Go? If you do need nginx, what is the fastest connection mechanism? These are the kinds of questions I intend to answer here. The purpose of this benchmark is not to show that Go is faster or slower than nginx. That would be pointless.
So, these are the different settings we are going to compare:
int
binary_search_first_position(int *A, int n, int target) {
    int end[2] = { -1, n };  /* invariant: A[end[0]] < target <= A[end[1]] */
    while (end[0] + 1 < end[1]) {
        int mid = (end[0] + end[1]) / 2;
        /* sign is 1 when A[mid] < target: the sign bit of the 32-bit difference */
        int sign = (unsigned)(A[mid] - target) >> 31;
        end[1 - sign] = mid;
    }
    int high = end[1];
    if (high >= n || A[high] != target)
        return -1;   /* target not present */
    return high;     /* index of the first occurrence of target */
}
import org.apache.spark.{AccumulableParam, SparkConf}
import org.apache.spark.serializer.JavaSerializer
import scala.collection.mutable.{HashMap => MutableHashMap}

/*
 * Allows a mutable HashMap[String, Int] to be used as an accumulator in Spark.
 * Whenever we try to put (k, v2) into an accumulator that already contains (k, v1), the result
 * will be a HashMap containing (k, v1 + v2).
 *
 * Would have been nice to extend GrowableAccumulableParam instead of redefining everything, but it's
import java.io.{IOException, File, ByteArrayOutputStream}
import org.apache.avro.file.{DataFileReader, DataFileWriter}
import org.apache.avro.generic.{GenericDatumReader, GenericDatumWriter, GenericRecord, GenericRecordBuilder}
import org.apache.avro.io.EncoderFactory
import org.apache.avro.SchemaBuilder
import org.apache.hadoop.fs.Path
import parquet.avro.{AvroParquetReader, AvroParquetWriter}
import scala.util.control.Breaks.break

object HelloAvro {
import scala.annotation.tailrec

object BinarySearch {
  /**
   * @param xs Sequence to search
   * @param key key to find
   * @param min minimum index (inclusive)
   * @param max maximum index (inclusive)
   * @param keyExtract function to apply to elements of xs before comparing to key, defaults to identity
   * @tparam T type of elements in the sequence
import java.io.{ObjectInputStream, ObjectOutputStream}
import org.apache.spark.broadcast.Broadcast
import org.apache.spark.streaming.StreamingContext
import scala.reflect.ClassTag

// This wrapper lets us update broadcast variables within DStreams' foreachRDD
// without running into serialization issues
case class BroadcastWrapper[T: ClassTag](
    @transient private val ssc: StreamingContext,
    @transient private val _v: T