#SkyDB installation
Super simple instructions for installing SkyDB and its Ruby client on Ubuntu 12.10 using the standard AWS EC2 AMI.

After installing these, a simple `gem install [skybox | skycap]` will work.
```
ubuntu@ip-10-226-86-151:~$ gem install skydb
Building native extensions.  This could take a while...
ERROR:  Error installing skydb:
        ERROR: Failed to build gem native extension.

    /home/ubuntu/.rvm/rubies/ruby-2.0.0-p0/bin/ruby extconf.rb
    checking for bzlib.h... yes
    checking for BZ2_bzWriteOpen() in -lbz2... yes
    creating Makefile
```
```
✘ ⮀ ~/code/sky ⮀ ⭠ go ⮀ make skyd
mkdir -p build/
cd skyd && go build -o /Users/sandfox/code/sky/build/skyd
# _/Users/sandfox/code/sky/skyd
lua_cmsgpack.go:9:17: error: lua.h: No such file or directory
lua_cmsgpack.go:10:21: error: lauxlib.h: No such file or directory
make: *** [build/skyd] Error 2
```
I can foresee the tech/idea that Mobeam has taking off by itself, through either Mobeam licensing it in a commercially sane way for integrators, or competitors launching competing products.
Alternatively I can see it becoming an irrelevance because
A very lightweight and simple event collector based upon a rough back-of-envelope guess at what Mixpanel might be doing. Intended to be easy to deploy and low on moving parts, and hopefully to be used/adapted as part of a larger event tracking and processing system.

It collects JSON data sent in from clients running some form of event tracking code, or generating events themselves. Events are tagged as they come in with a timestamp and IP address (either can be overridden by the request) and then queued up (Redis/RabbitMQ, a local Redis most likely) to be processed later by something else, somewhere else.
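A minimal sketch of that flow in Go. The channel stands in for where Redis/RabbitMQ would sit, and the field names (`timestamp`, `ip`), route, and listen address are all assumptions for illustration:

```go
package main

import (
	"encoding/json"
	"net"
	"net/http"
	"time"
)

// queue stands in for Redis/RabbitMQ: events are consumed later, elsewhere.
var queue = make(chan map[string]interface{}, 1024)

// tagEvent adds a timestamp and client IP to an incoming event,
// unless the request already supplied its own values.
func tagEvent(event map[string]interface{}, remoteAddr string) map[string]interface{} {
	if _, ok := event["timestamp"]; !ok {
		event["timestamp"] = time.Now().UTC().Format(time.RFC3339)
	}
	if _, ok := event["ip"]; !ok {
		if host, _, err := net.SplitHostPort(remoteAddr); err == nil {
			event["ip"] = host
		} else {
			event["ip"] = remoteAddr
		}
	}
	return event
}

// collect decodes a JSON event from the request body, tags it, and queues it.
func collect(w http.ResponseWriter, req *http.Request) {
	var event map[string]interface{}
	if err := json.NewDecoder(req.Body).Decode(&event); err != nil {
		http.Error(w, "bad JSON", http.StatusBadRequest)
		return
	}
	queue <- tagEvent(event, req.RemoteAddr)
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/events", collect)
	http.ListenAndServe(":8080", nil)
}
```

The collector does no processing itself; anything beyond tag-and-enqueue belongs downstream of the queue.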
line 18+ skyd/server_event_handlers.go
The only method I can see for creating events is PUT, and it must be aimed at an objectId for a given timestamp:
```go
s.ApiHandleFunc("/tables/{name}/objects/{objectId}/events/{timestamp}", func(w http.ResponseWriter, req *http.Request, params map[string]interface{}) (interface{}, error) {
    return s.replaceEventHandler(w, req, params)
}).Methods("PUT")
```
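For reference, a client would hit that route roughly like this. The host/port and the JSON body shape are assumptions, not taken from the skyd source:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// buildEventPut builds the PUT request the route above expects.
// The host/port and body shape are assumptions for illustration.
func buildEventPut(table, objectID, timestamp string, body []byte) (*http.Request, error) {
	url := fmt.Sprintf("http://localhost:8585/tables/%s/objects/%s/events/%s",
		table, objectID, timestamp)
	req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, _ := buildEventPut("users", "42", "2013-01-01T00:00:00Z",
		[]byte(`{"data":{"action":"signup"}}`))
	fmt.Println(req.Method, req.URL.Path)
}
```

Note that PUT implies replace semantics here (the handler is `replaceEventHandler`), so re-sending the same objectId/timestamp overwrites rather than appends.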
#Rough unscientific performance stats
##conditions
Session length per object (and maybe also per session per object, if a max session length
is specified) would be über useful as a property that could be used in future steps, and also just as an output from a query.
More of a question really: how are you thinking distributed processing is going to work? Is it going to be based on the assumption that all the data for a table exists on 1+x Sky instances, and every instance has a full copy of the table?
[2:20pm] felixge: ^--- would love if you could explain ideas on how to do it here
[2:20pm] felixge: but ideally it'd be an optional part of the protocol
[2:21pm] felixge: Baughn: <3 thanks!
[2:22pm] Baughn: felixge: The problem is, TCP uses a 16-bit checksum. The algorithm is resilient against single-bit errors - the checksum should always change - but for multiple-bit errors, there is a 2^-16 chance you get the /same/ checksum.
[2:22pm] felixge: yes, we need to figure it out
[2:22pm] Baughn: And you're aiming this protocol squarely for noisy networks.
[2:22pm] felixge: a few thoughts: clients may not always be able to generate a checksum, and doing this for large files can be problematic (especially in JS)
[2:22pm] Baughn: Right, I'd mark that part of the protocol with SHOULD.
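As a sketch of the optional application-level checksum the chat is circling around (CRC32 is my choice for illustration, not something the protocol specifies), a 32-bit check shrinks the undetected-corruption odds from TCP's ~2^-16 to ~2^-32:

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// checksum computes a 32-bit CRC over a payload. A sender attaches it to the
// frame; a receiver recomputes it and rejects the frame on mismatch.
func checksum(payload []byte) uint32 {
	return crc32.ChecksumIEEE(payload)
}

func main() {
	payload := []byte("event: signup")
	sum := checksum(payload)

	corrupted := append([]byte{}, payload...)
	corrupted[0] ^= 0x01 // flip a single bit

	fmt.Println(sum == checksum(payload))   // true
	fmt.Println(sum == checksum(corrupted)) // false
}
```

Marking the field optional (SHOULD, as Baughn suggests) keeps the door open for clients that can't cheaply checksum large payloads, such as in-browser JS.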
This example assumes you're running a recent Ubuntu with upstart installed, and that you have installed n from npm.

To see an example use of this in a wider context, look at this gist for deploying node.js + nginx.
Adapt as required.
node.conf — goes in /etc/init/
/etc/node
node-test.conf — goes inside /etc/node
/var/logs/node
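For orientation, a minimal sketch of what such a node.conf upstart job might look like; the exec path, app entry point, and service description are assumptions, with the config and log directories taken from the layout above:

```conf
# /etc/init/node.conf — hypothetical upstart job for a node app
description "node.js app"

start on runlevel [2345]
stop on runlevel [016]

respawn

script
    exec /usr/local/bin/node /etc/node/app.js >> /var/logs/node/app.log 2>&1
end script
```

With this in place, `start node` / `stop node` manage the process, and `respawn` restarts it if it dies.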