concepts
- forward and backward propagation
- vanishing gradient
- image convolution operation
- feature map, filter/kernel
- receptive field
- embedding
- translation invariance
package demo.cont;

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.util.function.Consumer;

public final class Continuation {
    private final Object delegate;
Kafka 0.11.0.0 (Confluent 3.3.0) added support for manipulating the offsets of a consumer group via the kafka-consumer-groups CLI
command.
kafka-consumer-groups --bootstrap-server <kafkahost:port> --group <group_id> --describe
Note the values under "CURRENT-OFFSET" and "LOG-END-OFFSET". "CURRENT-OFFSET" is the offset the consumer group has currently reached in each partition, while "LOG-END-OFFSET" is the offset of the latest message written to that partition; the difference between them is the consumer lag.
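The same tool can also reset a group's offsets (this is the offset manipulation added in 0.11.0.0). A minimal sketch, assuming a topic named `mytopic` and the placeholders from the command above; the group must have no active members when the reset is applied:

```shell
# Dry run: print the offsets the group would be reset to, without changing anything
kafka-consumer-groups --bootstrap-server <kafkahost:port> --group <group_id> \
  --reset-offsets --to-earliest --topic mytopic

# Apply the reset (re-consumes the topic from the beginning)
kafka-consumer-groups --bootstrap-server <kafkahost:port> --group <group_id> \
  --reset-offsets --to-earliest --topic mytopic --execute
```

Other reset strategies such as `--to-latest`, `--to-offset`, and `--shift-by` follow the same pattern.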
#' @title Geolocate IP Addresses Through ip-api.com
#' @description
#' \code{geolocate} consumes a vector of IP addresses and geolocates them via
#' \href{http://ip-api.com}{ip-api.com}.
#' @param ip a character vector of IP addresses.
#' @param lang a string to specify an output localisation language.
#' Allowed values: en, de, es, pt-BR, fr, ja, zh-CN, ru.
#' @param fields a string to specify which fields to return.
#' @param delay a logical indicating whether to delay each request by 400ms.
#' ip-api.com has a maximum threshold of 150 requests a minute. Disable it for
bin/kafka-topics.sh --zookeeper localhost:2181 --list
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic mytopic
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic mytopic --config retention.ms=1000
... wait a minute ...
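Once the old messages have been purged, the `retention.ms` override can be removed so the topic falls back to the broker-wide default retention. A sketch, reusing the `mytopic` example from above:

```shell
# Remove the per-topic retention override; the broker default applies again
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic mytopic \
  --delete-config retention.ms
```

Leaving a 1-second retention in place would silently delete all future messages as well, so restoring the original setting is an important final step.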
Picking the right architecture = Picking the right battles + Managing trade-offs
First, a disclaimer: this is an experimental API that exposes internals which are likely to change between Spark releases. As a result, most data sources should be written against the stable public API in org.apache.spark.sql.sources. We expose this mostly to get feedback on which optimizations we should add to the stable API in order to get the best performance out of data sources.
We'll start with a simple artificial data source that just returns ranges of consecutive integers.
/** A data source that returns ranges of consecutive integers in a column named `a`. */
case class SimpleRelation(
    start: Int,
    end: Int)(
    @transient val sqlContext: SQLContext)