- Content words (实词): noun, verb, adjective, stative word, distinguishing word, numeral, measure word, pronoun
- Function words (虚词): adverb, preposition, conjunction, particle, onomatopoeia, interjection.

n  noun
nr person name
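The tag abbreviations above follow the ICTCLAS-style scheme used by Chinese segmenters such as jieba. A minimal lookup table for glossing tags — note that only `n` and `nr` are confirmed by the notes above; the remaining entries are the conventional abbreviations and should be checked against the segmenter you actually use:

```python
# ICTCLAS-style POS tags. Only "n" and "nr" come from the notes above;
# the rest are the commonly used jieba/ICTCLAS abbreviations (assumed).
POS_TAGS = {
    "n": "noun",
    "nr": "person name",
    "v": "verb",
    "a": "adjective",
    "m": "numeral",
    "q": "measure word",
    "r": "pronoun",
    "d": "adverb",
    "p": "preposition",
    "c": "conjunction",
    "u": "particle",
}

def describe(tag):
    """Return an English gloss for a POS tag, falling back to the raw tag."""
    return POS_TAGS.get(tag, tag)
```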
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.Encoder;
import org.apache.avro.io.EncoderFactory;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
public class RedisClient {
    private JedisPool pool;

    @Inject
    public RedisClient(Settings settings) {
        try {
            pool = new JedisPool(new JedisPoolConfig(), settings.get("redis.host"), settings.getAsInt("redis.port", 6379));
        } catch (SettingsException e) {
            // ignore: the pool is left uninitialized if the settings are invalid
        }
    }
}
#!/usr/bin/env bash
# This file contains environment variables required to run Spark. Copy it as
# spark-env.sh and edit that to configure Spark for your site.
#
# The following variables can be set in this file:
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos
# - SPARK_JAVA_OPTS, to set node-specific JVM options for Spark. Note that
#   we recommend setting app-wide options in the application's driver program.
#!/bin/bash
# Back up our Elasticsearch indexes. Run this once a day (e.g. in the evening),
# after logstash has rotated to a new ES index and no new data is flowing into
# the old one. We grab the index metadata, compress the data files, create a
# restore script, and push everything up to S3.
TODAY=$(date +"%Y.%m.%d")
INDEXNAME="logstash-$TODAY"   # must match the index name in ES
INDEXDIR="/usr/local/elasticsearch/data/logstash/nodes/0/indices/"
BACKUPCMD="/usr/local/backupTools/s3cmd --config=/usr/local/backupTools/s3cfg put"
BACKUPDIR="/mnt/es-backups/"
YEARMONTH=$(date +"%Y-%m")
#!/usr/bin/env sh
#
# $ echo 'source path_to_env.sh' >> ~/.bashrc
# $ yg_install
KAFKA_HOST=bj2-storm03:9092
ES_HOST=bj2-storm03:9200
ZK_HOST=bj2-storm03:2181,bj2-storm04:2181,bj2-storm05:2181
STORM_LOG_HOME='/usr/local/storm-default/logs/'
TODAY=$(date '+%Y-%m-%d')
Kafka acts as a kind of write-ahead log (WAL): it records messages to a persistent store (disk) and lets subscribers read these changes and apply them to their own stores at whatever pace suits their system.
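The WAL idea above can be sketched without Kafka itself: an append-only log keeps records in arrival order, while each subscriber tracks its own read offset and catches up independently. A minimal in-memory sketch — the class and method names are illustrative, not Kafka's API:

```python
class WriteAheadLog:
    """Append-only log; records are kept in arrival order, like a Kafka partition."""

    def __init__(self):
        self.records = []   # in a real system this would live on disk
        self.offsets = {}   # subscriber name -> next offset to read

    def append(self, record):
        """Producers only ever append; existing records are never mutated."""
        self.records.append(record)
        return len(self.records) - 1   # offset of the new record

    def poll(self, subscriber):
        """Return all records this subscriber has not yet seen, advancing its offset."""
        start = self.offsets.get(subscriber, 0)
        batch = self.records[start:]
        self.offsets[subscriber] = len(self.records)
        return batch
```

A slow subscriber can fall behind and catch up later: the log itself never changes, only each reader's offset does.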
Terminology:
#!/usr/bin/env ruby
# Usage: deal_blanks.rb input.txt >out.txt
def isEn(char)
  /\w/.match?(char)
end

File.open(ARGV[0], "r") do |file|
  blanks_del = 0
import random

class Pool:
    def __init__(self, names):
        self.names = names

    def pick(self):
        # Assumed completion: return a random name, or None when the pool is empty.
        if len(self.names) == 0:
            return None
        return random.choice(self.names)