dongxu c4pt0r
sudo: false
language: rust
cache:
  directories:
    - $HOME/.cache/rocksdb
matrix:
  include:
    - os: linux
      rust: nightly
#![feature(convert)]
extern crate byteorder;
extern crate rocksdb;
extern crate time;
use time::PreciseTime;
use std::io::Cursor;
use byteorder::{BigEndian, WriteBytesExt, ReadBytesExt};
struct Error(i32, String);
2015-12-08 16:05:50,879 INFO [RS_OPEN_REGION-localhost:50249-1] regionserver.ThemisRegionObserver: themis expired data clean enable, deleteThemisDeletedDataWhenCompact=false
2015-12-08 16:05:50,879 INFO [AM.ZK.Worker-pool2-t9] master.RegionStates: Transition {ac44a9c2850510013632c08bd2e3c642 state=PENDING_OPEN, ts=1449561949808, server=localhost,50249,1449561943865} to {ac44a9c2850510013632c08bd2e3c642 state=OPENING, ts=1449561950879, server=localhost,50249,1449561943865}
2015-12-08 16:05:50,879 DEBUG [AM.ZK.Worker-pool2-t1] zookeeper.ZKAssign: master:50247-0x151809fec310000, quorum=localhost:2181, baseZNode=/hbase Deleted unassigned node 30b5b70391a9e9c6e88e357254bda96f in expected state RS_ZK_REGION_OPENED
2015-12-08 16:05:50,879 DEBUG [AM.ZK.Worker-pool2-t1] master.AssignmentManager: Znode usertable,user9729,1449560880850.30b5b70391a9e9c6e88e357254bda96f. deleted, state: {30b5b70391a9e9c6e88e357254bda96f state=OPEN, ts=1449561950877, server=localhost,50249,1449561943865}
2015-12-08 16:05:50,879 DEBUG [RS
func main() {
	ts := time.Now()
	// Go's fmt has no %ld verb; %d handles int64 values such as UnixNano()
	prefix := fmt.Sprintf("%d", ts.UnixNano())
	var err error
	cli, err = hbase.NewClient([]string{"localhost"}, "/hbase")
	if err != nil {
		panic(err)
	}
	dropTable(benchTbl)
	createTable(benchTbl)
c4pt0r / Dockerfile
Created October 21, 2015 06:40
HDFS Dockerfile
FROM sequenceiq/pam:centos-6.5
MAINTAINER SequenceIQ
USER root
# install dev tools
RUN yum clean all; \
    rpm --rebuilddb; \
    yum install -y curl which tar sudo openssh-server openssh-clients rsync
# update libselinux. see https://github.com/sequenceiq/hadoop-docker/issues/14
{
  "swagger": "2.0",
  "info": {
    "description": "",
    "version": "1.0.0",
    "title": "RebornDB API",
    "contact": {"email": "[email protected]"},
    "license": {"name": "MIT", "url": "https://opensource.org/licenses/MIT"}
  },
  "schemes": ["http", "https"],
  "produces": ["application/json"],
  "paths": {
    "/api/server_groups": {
      "get": {
        "description": "Get server groups information.",
        "parameters": [],
        "responses": {
          "200": {
            "description": "An array of ServerGroup.",
            "schema": {"type": "array", "items": {"$ref": "#/definitions/ServerGroup"}}
          },
          "500": {"description": "Unexpected error"}
        }
      },
      "put": {
        "description": "Create new server group.",
        "parameters": [
          {
            "in": "body",
            "name": "body",
            "description": "Server group information.",
            "required": true,
            "schema": {"$ref": "#/definitions/ServerGroup"}
          }
        ],
        "responses": {
          "200": {"description": "An object of Result.", "schema": {"$ref": "#/definitions/Result"}},
          "500": {"description": "Unexpected error"}
        }
      }
    },
    "/api/overview": {
      "get": {
        "description": "Get overview information.",
        "parameters": [],
        "responses": {
          "200": {"descr
package main

import (
	"fmt"
	"strconv"
	"sync"
	"time"
	"runtime"
)
while (true) {
    Transaction transaction = new Transaction(conf, connection);
    ThemisPut put = new ThemisPut(JOE).add(FAMILY, CASH, Bytes.toBytes(0));
    transaction.put(CASHTABLE, put);
    transaction.commit();

    final Boolean[] b = {false};
    Thread thread1 = new Thread() {
        public void run() {
RebornDB (Codis): Design and Implementation, and My Views on the Future of Distributed Storage
Hello, I am Huang Dongxu, co-author of the open-source project Codis. I previously worked on infrastructure at Wandoujia, and I am now at a startup, PingCAP, also working on distributed storage (NewSQL). Codis is a distributed Redis solution; unlike the official pure P2P design, Codis takes a proxy-based approach. Today I will introduce the design of Codis and its next major version, RebornDB, along with some best practices for Codis in real-world deployments. Finally, to start the discussion, I will share some of my views on distributed storage, and I welcome your corrections.
1. Redis, Redis Cluster, and Codis
Redis is probably already an indispensable part of your architecture: its rich data structures, very high performance, and simple protocol make it an excellent cache layer in front of a database. But we always worry about Redis as a single point of failure, and a single instance's capacity is capped by the memory of one machine. When the business demands high performance, we ideally want all hot data in memory so requests never hit the database, so it is natural to look for alternatives. SSDB, for example, trades memory for disk to gain capacity. A more natural idea is to turn Redis into a horizontally scalable distributed cache service. Before Codis, the only option was Twemproxy, but Twemproxy is a static sharding solution: scaling out or in places a heavy burden on operations and is very hard to do smoothly. Codis's goal is essentially to stay as compatible with Twemproxy as possible while adding data migration, so the cluster can grow and shrink online.
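The key to online migration is that Codis shards the keyspace into a fixed number of slots and moves whole slots, not individual keys, between server groups. A minimal sketch of the key-to-slot mapping, assuming the crc32-mod-1024 scheme Codis uses (the slot count and hash choice here are taken from Codis's design, not from this article):

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// numSlots is the fixed slot count; rebalancing migrates
// whole slots between server groups rather than single keys.
const numSlots = 1024

// slotFor maps a key to its slot by hashing with CRC32.
func slotFor(key string) uint32 {
	return crc32.ChecksumIEEE([]byte(key)) % numSlots
}

func main() {
	for _, k := range []string{"user:1000", "user:1001", "session:abc"} {
		fmt.Printf("%-12s -> slot %d\n", k, slotFor(k))
	}
}
```

Because the mapping is deterministic and the slot count is fixed, the proxy only needs a small slot-to-group routing table, and migration is a matter of reassigning slot ownership while draining in-flight keys.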
The official Redis Cluster, whose stable release landed around the same time as Codis's, has both strengths and weaknesses in my view. As an architect, I would not use it in production, for several reasons: