Unionize lets you connect Docker containers together in arbitrarily complex scenarios.
Just check out the examples below.
Let's create two containers, one running the web tier and one running the database tier:
```bash
#!/bin/bash
set -e
sudo apt-get install -y jython python-jpype
cd /home/hadoop
git clone git://git.apache.org/pig.git apache-pig
cd apache-pig
git checkout release-0.12.0
```
```
hadoop@ip-10-201-2-159:~/pig$ pig -f idb_view.pig
2013-09-24 15:18:50,446 [main] INFO  org.apache.pig.Main - Apache Pig version 0.11.1.1-amzn (rexported) compiled Aug 03 2013, 22:52:20
2013-09-24 15:18:50,446 [main] INFO  org.apache.pig.Main - Logging error messages to: /home/hadoop/pig/pig_1380035930444.log
2013-09-24 15:18:50,572 [main] INFO  org.apache.pig.Main - Final script path: /home/hadoop/pig/idb_view.pig
2013-09-24 15:18:50,576 [main] INFO  org.apache.pig.impl.util.Utils - Default bootup file /home/hadoop/.pigbootup not found
2013-09-24 15:18:50,643 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: maprfs:///
2013-09-24 15:18:50,704 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: maprfs:///
Schema for view_data unknown.
Schema for view_data unknown.
2013-09-24 15:18:51,150 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: LIMI
```
```pig
register /home/hadoop/pig/jars/elephant-bird-core-4.1.jar;
register /home/hadoop/pig/jars/elephant-bird-hadoop-compat-4.1.jar;
register /home/hadoop/pig/jars/elephant-bird-pig-4.1.jar;
register /home/hadoop/pig/jars/json-simple-1.1.jar;
register /home/hadoop/pig/jars/hadoop-lzo-0.4.15.jar;

set mapred.compress.map.output true;
set mapred.output.compress true;
set mapred.output.compression.codec com.hadoop.compression.lzo.LzoCodec;
set mapred.child.java.opts '-Djava.library.path=/home/hadoop/pig/libs/';
```
```json
{
  "Counter": {
    "hostname": "bw1-ams-sh",
    "timestamp": "1363118126",
    "Current": {
      "Auctions": 1000,
      "Bids": 20,
      "Pixels": 4000
    },
    "Total": {
```
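Given the registrations above, records shaped like this are typically read with elephant-bird's `JsonLoader`; the input path and the projected field names below are assumptions for illustration, not taken from the original script:

```pig
-- Hypothetical input path; '-nestedLoad' keeps nested JSON objects
-- available as maps of maps instead of flattening them away.
raw = LOAD '/data/counters.json.lzo'
      USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad');

-- Chained # dereferences walk the nested maps.
counters = FOREACH raw GENERATE
    $0#'Counter'#'hostname'           AS hostname,
    $0#'Counter'#'timestamp'          AS ts,
    $0#'Counter'#'Current'#'Auctions' AS auctions;
```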
```pig
register /home/hadoop/lib/pig/piggybank.jar;
register jar/datafu-0.0.10.jar;
register jar/guava-14.0.1.jar;

define Sessionize datafu.pig.sessions.Sessionize('30m');
define UnixToISO org.apache.pig.piggybank.evaluation.datetime.convert.UnixToISO();
define Max org.apache.pig.piggybank.evaluation.math.Max();
define Median datafu.pig.stats.Median();
define Quantile datafu.pig.stats.StreamingQuantile('0.75','0.90','0.95');
```
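With these UDFs defined, sessionization usually follows the pattern sketched below. The relation and field names are placeholders, and the `* 1000L` matters: `UnixToISO` expects milliseconds, while logs like the ones here carry Unix seconds.

```pig
-- views: (time:long, user:chararray, segment:chararray), time in Unix seconds
views_iso = FOREACH views GENERATE
    UnixToISO(time * 1000L) AS isotime,  -- Sessionize wants an ISO8601 timestamp first
    time, user, segment;

sessions = FOREACH (GROUP views_iso BY user) {
    -- Sessionize requires each user's bag to be sorted by time.
    ordered = ORDER views_iso BY isotime;
    GENERATE FLATTEN(Sessionize(ordered))
             AS (isotime, time, user, segment, sessionId);
};
```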
```
(1970-01-16T21:10:23.011Z,1372223011,7683542458878598324,14614,ffff4831-8529-4098-a31f-0a8a3ed8517c,ffff4831-8529-4098-a31f-0a8a3ed8517c,1372223011)
(1970-01-16T21:10:22.754Z,1372222754,7683542458878598324,14614,ffff4831-8529-4098-a31f-0a8a3ed8517c,ffff4831-8529-4098-a31f-0a8a3ed8517c,1372223011)
(1970-01-16T21:10:22.746Z,1372222746,7683542458878598324,380725,ffff4831-8529-4098-a31f-0a8a3ed8517c,ffff4831-8529-4098-a31f-0a8a3ed8517c,1372223011)
```
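The 1970 dates in those tuples are the telltale sign of a seconds-versus-milliseconds mix-up: 1372223011 is a Unix timestamp in seconds (June 2013), but a millisecond-based converter like `UnixToISO` reads it as roughly 15.9 days after the epoch. The same effect is easy to reproduce with JavaScript's millisecond-based `Date`:

```javascript
// Unix seconds fed straight into a millisecond API land in January 1970:
console.log(new Date(1372223011).toISOString());
// → 1970-01-16T21:10:23.011Z (exactly the value in the output above)

// Multiply by 1000 first to get the intended instant:
console.log(new Date(1372223011 * 1000).toISOString());
// → 2013-06-26T05:03:31.000Z
```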
```pig
-- data has schema (time, user, segment, sessionId)
by_session = GROUP data BY sessionId;
last_segment_per_sessionId = FOREACH by_session {
    ordered = ORDER data BY time DESC;
    latest  = LIMIT ordered 1;
    GENERATE FLATTEN(latest);
};
```
```javascript
#!/usr/bin/env node
var http = require('http');
var data = '';

http.createServer(function (req, res) {
  // 200, not 204: a 204 response must not carry a body.
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end(data);
}).listen(8080);
```