Each of these commands will run an ad hoc HTTP static server in your current (or specified) directory, available at http://localhost:8000. Use this power wisely.
$ python -m SimpleHTTPServer 8000
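On Python 3, where SimpleHTTPServer was folded into the http.server module, the equivalent one-liner is:
$ python3 -m http.server 8000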
# E.g. TaskClass = CaseClass('name', 'owner', 'pid')
# task1 = TaskClass(name = "hello", owner = "brian", pid = 15)
# task2 = TaskClass(name = "world", owner = "brian", pid = 13)
# tasks = [task1, task2]
#
# filter(lambda task: task.where(owner = "brian"), tasks) => [task1, task2]
# filter(lambda task: task.where(owner = "brian", pid = 13), tasks) => [task2]
#
# matcher = TaskClass(pid = 13)
# filter(lambda task: task.match(matcher), tasks) => [task2]
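A minimal sketch of one way such a CaseClass factory could be implemented (the internal names and the None-as-unset convention are assumptions, not the gist's actual code):

def CaseClass(*fields):
    """Build a simple record class supporting where() and match()."""
    class _Case(object):
        def __init__(self, **kwargs):
            for f in fields:
                # Fields omitted at construction time are treated as unset.
                setattr(self, f, kwargs.get(f))

        def where(self, **kwargs):
            # True if every given keyword equals this instance's field.
            return all(getattr(self, k) == v for k, v in kwargs.items())

        def match(self, other):
            # Compare only the fields the matcher actually sets.
            return all(getattr(self, f) == getattr(other, f)
                       for f in fields if getattr(other, f) is not None)
    return _Case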
import akka.actor._
import akka.stream.scaladsl.Flow
import org.apache.spark.streaming.dstream.ReceiverInputDStream
import org.apache.spark.streaming.receiver.ActorHelper
import akka.actor.{ ExtensionKey, Extension, ExtendedActorSystem }
import scala.reflect.ClassTag

object AkkaStreamSparkIntegration {
  // Minimal sketch of the receiver side of the pattern: an actor that
  // forwards every message it receives into Spark via ActorHelper.store.
  class ForwardingActor extends Actor with ActorHelper {
    def receive = { case msg => store(msg) }
  }
}
import scala.collection.mutable

/**
 * Bounded priority queue trait that is intended to be mixed into instances of
 * scala.collection.mutable.PriorityQueue. By default PriorityQueue instances in
 * Scala are unbounded. This trait modifies the original PriorityQueue's
 * enqueue methods such that we only retain the top K elements.
 * The top K elements are defined by an implicit Ordering[A].
 * @author Ryan LeCompte ([email protected])
 */
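The trait body itself is not shown in this fragment; a standalone sketch of the same idea (keep only the K largest elements by evicting from a reversed-ordering heap; the names here are illustrative, not the gist's, and it uses the mutable import above):

class TopK[A](k: Int)(implicit ord: Ordering[A]) {
  // A min-heap (reversed Ordering), so the smallest retained element sits
  // at the head and can be evicted in O(log k) when a larger one arrives.
  private val heap = mutable.PriorityQueue.empty[A](ord.reverse)

  def offer(a: A): Unit =
    if (heap.size < k) heap.enqueue(a)
    else if (ord.gt(a, heap.head)) { heap.dequeue(); heap.enqueue(a) }

  // Retained elements, largest first.
  def toDescendingSeq: Seq[A] = heap.toSeq.sorted(ord.reverse)
}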
Look at existing LSB init scripts for more information.
Copy the script to /etc/init.d:
# replace "$YOUR_SERVICE_NAME" with your service's name (wherever it isn't obvious enough)
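For reference, a typical LSB header block looks like the following (the Required-Start/Stop facilities shown are common defaults, not requirements):

### BEGIN INIT INFO
# Provides:          $YOUR_SERVICE_NAME
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: One-line description of $YOUR_SERVICE_NAME
### END INIT INFO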
As all objects must be Serializable to be used as part of RDD operations in Spark, it can be difficult to work with libraries which do not implement it. For simple classes, it is easiest to make a wrapper interface that extends Serializable. This means that even though UnserializableObject cannot be serialized, we can pass in the following object without any issue:
public interface UnserializableWrapper extends Serializable {
    public UnserializableObject create(String parm1, String parm2);
}
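The wrapper can then be instantiated as an anonymous class and captured by Spark closures; a sketch of the idea (assuming UnserializableObject has a matching constructor):

UnserializableWrapper wrapper = new UnserializableWrapper() {
    public UnserializableObject create(String parm1, String parm2) {
        // Built lazily on the executor, so the unserializable object never
        // crosses the wire; only the Serializable wrapper does.
        return new UnserializableObject(parm1, parm2);
    }
};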
Delete all containers:
$ docker ps -q -a | xargs docker rm
-q prints only the container IDs; -a includes stopped containers as well as running ones.
Notice that it uses xargs to issue a docker rm for each container ID.
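The same result can be had with shell command substitution instead of xargs:
$ docker rm $(docker ps -q -a)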
/*
 * Copyright (C) 2015 Pavel Savshenko
 * Copyright (C) 2011 Google Inc. All rights reserved.
 * Copyright (C) 2007, 2008 Apple Inc. All rights reserved.
 * Copyright (C) 2008 Matt Lilek <[email protected]>
 * Copyright (C) 2009 Joseph Pecoraro
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
// Re-run the Firebug bootstrap whenever the WebEngine loads a new document.
if (isDebugging()) {
    engine.documentProperty().addListener(new ChangeListener<Document>() {
        @Override
        public void changed(ObservableValue<? extends Document> prop,
                            Document oldDoc, Document newDoc) {
            enableFirebug(engine);
        }
    });
}
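enableFirebug is not defined in this fragment; one plausible sketch is to inject the Firebug Lite bootstrap script into the loaded page (the URL and approach are assumptions, not the original gist's code):

private static void enableFirebug(WebEngine engine) {
    // Append a <script> tag that loads Firebug Lite into the current page.
    engine.executeScript(
        "var s = document.createElement('script');"
      + "s.src = 'https://getfirebug.com/firebug-lite.js';"
      + "document.getElementsByTagName('head')[0].appendChild(s);");
}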
import javafx.application.*;
import javafx.geometry.Pos;
import javafx.scene.*;
import javafx.scene.control.Label;
import javafx.scene.layout.*;
import javafx.scene.paint.Color;
import javafx.stage.*;
import javax.imageio.ImageIO;
import java.io.IOException;