@okram
Created September 11, 2012 23:42
faunus$ bin/gremlin.sh
         \,,,/
         (o o)
-----oOOo-(_)-oOOo-----
gremlin> g = FaunusFactory.open("bin/faunus-titan.properties")
==>faunusgraph[titancassandrainputformat]
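The faunusgraph[titancassandrainputformat] banner means FaunusFactory.open read bin/faunus-titan.properties and wired the graph's input to Titan-over-Cassandra. The file itself is not reproduced in this gist; as a rough sketch only, with key names recalled from the Faunus 0.1-era documentation and worth verifying against your distribution, such a configuration looked along these lines:

    # input: read vertices out of Titan's Cassandra keyspace (names assumed, check your docs)
    faunus.graph.input.format=com.thinkaurelius.faunus.formats.titan.cassandra.TitanCassandraInputFormat
    faunus.graph.input.titan.storage.backend=cassandra
    faunus.graph.input.titan.storage.hostname=localhost
    # output: graph as GraphSON, side-effects (e.g. groupCount) as plain text in HDFS
    faunus.graph.output.format=com.thinkaurelius.faunus.formats.graphson.GraphSONOutputFormat
    faunus.sideeffect.output.format=org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
    faunus.output.location=output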
gremlin> g.V.out.name.groupCount.submit()
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler: Faunus: A Library of Hadoop-Based Graph Tools
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler:         ,
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler:     ,   |\ ,__
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler:     |\   \/   `\
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler:     \  `-.:.     `\
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler:      `-.__ `\/\/\|
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler:         / `'/ () \
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler:       .'   /\     )
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler:    .-'  .'| \  \__
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler:  .'  __(  \  '`(()
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler: /_.'`  `.  |      )(
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler:          \ |
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler:           |/
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler: Generating job chain:
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler: Compiled to 2 MapReduce job(s)
12/09/11 17:36:09 INFO mapreduce.FaunusCompiler: Executing job 1 out of 2: MapSequence[com.thinkaurelius.faunus.mapreduce.transform.VerticesMap.Map, com.thinkaurelius.faunus.mapreduce.transform.VerticesVerticesMapReduce.Map, com.thinkaurelius.faunus.mapreduce.transform.VerticesVerticesMapReduce.Reduce]
12/09/11 17:36:09 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/09/11 17:36:10 INFO mapred.JobClient: Running job: job_201209100833_0021
12/09/11 17:36:11 INFO mapred.JobClient: map 0% reduce 0%
12/09/11 17:36:28 INFO mapred.JobClient: map 100% reduce 0%
12/09/11 17:36:40 INFO mapred.JobClient: map 100% reduce 100%
12/09/11 17:36:45 INFO mapred.JobClient: Job complete: job_201209100833_0021
12/09/11 17:36:45 INFO mapred.JobClient: Counters: 29
12/09/11 17:36:45 INFO mapred.JobClient:   com.thinkaurelius.faunus.mapreduce.transform.VerticesMap$Counters
12/09/11 17:36:45 INFO mapred.JobClient:     VERTICES_PROCESSED=12
12/09/11 17:36:45 INFO mapred.JobClient:     EDGES_PROCESSED=0
12/09/11 17:36:45 INFO mapred.JobClient:   Job Counters
12/09/11 17:36:45 INFO mapred.JobClient:     Launched reduce tasks=1
12/09/11 17:36:45 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=23263
12/09/11 17:36:45 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/09/11 17:36:45 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/09/11 17:36:45 INFO mapred.JobClient:     Rack-local map tasks=2
12/09/11 17:36:45 INFO mapred.JobClient:     Launched map tasks=2
12/09/11 17:36:45 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=10037
12/09/11 17:36:45 INFO mapred.JobClient:   File Output Format Counters
12/09/11 17:36:45 INFO mapred.JobClient:     Bytes Written=2111
12/09/11 17:36:45 INFO mapred.JobClient:   FileSystemCounters
12/09/11 17:36:45 INFO mapred.JobClient:     FILE_BYTES_READ=2699
12/09/11 17:36:45 INFO mapred.JobClient:     HDFS_BYTES_READ=212
12/09/11 17:36:45 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=77130
12/09/11 17:36:45 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=2111
12/09/11 17:36:45 INFO mapred.JobClient:   File Input Format Counters
12/09/11 17:36:45 INFO mapred.JobClient:     Bytes Read=0
12/09/11 17:36:45 INFO mapred.JobClient:   com.thinkaurelius.faunus.mapreduce.transform.VerticesVerticesMapReduce$Counters
12/09/11 17:36:45 INFO mapred.JobClient:     EDGES_TRAVERSED=17
12/09/11 17:36:45 INFO mapred.JobClient:   Map-Reduce Framework
12/09/11 17:36:45 INFO mapred.JobClient:     Map output materialized bytes=2705
12/09/11 17:36:45 INFO mapred.JobClient:     Map input records=12
12/09/11 17:36:45 INFO mapred.JobClient:     Reduce shuffle bytes=2705
12/09/11 17:36:45 INFO mapred.JobClient:     Spilled Records=58
12/09/11 17:36:45 INFO mapred.JobClient:     Map output bytes=2627
12/09/11 17:36:45 INFO mapred.JobClient:     Total committed heap usage (bytes)=454238208
12/09/11 17:36:45 INFO mapred.JobClient:     Combine input records=0
12/09/11 17:36:45 INFO mapred.JobClient:     SPLIT_RAW_BYTES=212
12/09/11 17:36:45 INFO mapred.JobClient:     Reduce input records=29
12/09/11 17:36:45 INFO mapred.JobClient:     Reduce input groups=12
12/09/11 17:36:45 INFO mapred.JobClient:     Combine output records=0
12/09/11 17:36:45 INFO mapred.JobClient:     Reduce output records=12
12/09/11 17:36:45 INFO mapred.JobClient:     Map output records=29
12/09/11 17:36:45 INFO mapreduce.FaunusCompiler: Executing job 2 out of 2: MapSequence[com.thinkaurelius.faunus.mapreduce.sideeffect.ValueGroupCountMapReduce.Map, com.thinkaurelius.faunus.mapreduce.sideeffect.ValueGroupCountMapReduce.Reduce]
12/09/11 17:36:45 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/09/11 17:36:46 INFO input.FileInputFormat: Total input paths to process : 1
12/09/11 17:36:46 INFO mapred.JobClient: Running job: job_201209100833_0022
12/09/11 17:36:47 INFO mapred.JobClient: map 0% reduce 0%
12/09/11 17:37:00 INFO mapred.JobClient: map 100% reduce 0%
12/09/11 17:37:12 INFO mapred.JobClient: map 100% reduce 100%
12/09/11 17:37:17 INFO mapred.JobClient: Job complete: job_201209100833_0022
12/09/11 17:37:17 INFO mapred.JobClient: Counters: 27
12/09/11 17:37:17 INFO mapred.JobClient:   com.thinkaurelius.faunus.mapreduce.sideeffect.ValueGroupCountMapReduce$Counters
12/09/11 17:37:17 INFO mapred.JobClient:     PROPERTIES_COUNTED=11
12/09/11 17:37:17 INFO mapred.JobClient:   Job Counters
12/09/11 17:37:17 INFO mapred.JobClient:     Launched reduce tasks=1
12/09/11 17:37:17 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=11954
12/09/11 17:37:17 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/09/11 17:37:17 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/09/11 17:37:17 INFO mapred.JobClient:     Launched map tasks=1
12/09/11 17:37:17 INFO mapred.JobClient:     Data-local map tasks=1
12/09/11 17:37:17 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=9978
12/09/11 17:37:17 INFO mapred.JobClient:   File Output Format Counters
12/09/11 17:37:17 INFO mapred.JobClient:     Bytes Written=98
12/09/11 17:37:17 INFO mapred.JobClient:   FileSystemCounters
12/09/11 17:37:17 INFO mapred.JobClient:     FILE_BYTES_READ=192
12/09/11 17:37:17 INFO mapred.JobClient:     HDFS_BYTES_READ=2258
12/09/11 17:37:17 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=48007
12/09/11 17:37:17 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=98
12/09/11 17:37:17 INFO mapred.JobClient:   File Input Format Counters
12/09/11 17:37:17 INFO mapred.JobClient:     Bytes Read=2111
12/09/11 17:37:17 INFO mapred.JobClient:   Map-Reduce Framework
12/09/11 17:37:17 INFO mapred.JobClient:     Map output materialized bytes=192
12/09/11 17:37:17 INFO mapred.JobClient:     Map input records=12
12/09/11 17:37:17 INFO mapred.JobClient:     Reduce shuffle bytes=0
12/09/11 17:37:17 INFO mapred.JobClient:     Spilled Records=22
12/09/11 17:37:17 INFO mapred.JobClient:     Map output bytes=164
12/09/11 17:37:17 INFO mapred.JobClient:     Total committed heap usage (bytes)=269619200
12/09/11 17:37:17 INFO mapred.JobClient:     Combine input records=11
12/09/11 17:37:17 INFO mapred.JobClient:     SPLIT_RAW_BYTES=147
12/09/11 17:37:17 INFO mapred.JobClient:     Reduce input records=11
12/09/11 17:37:17 INFO mapred.JobClient:     Reduce input groups=11
12/09/11 17:37:17 INFO mapred.JobClient:     Combine output records=11
12/09/11 17:37:17 INFO mapred.JobClient:     Reduce output records=11
12/09/11 17:37:17 INFO mapred.JobClient:     Map output records=11
==>null
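The one-line traversal compiled into two MapReduce jobs: job 1 (VerticesMap followed by VerticesVerticesMapReduce) expands g.V.out, sending each vertex to its out-adjacent vertices, and job 2 (ValueGroupCountMapReduce) counts the 'name' values of the resulting vertex stream. A minimal single-machine Groovy sketch of the same computation, assuming a hypothetical adjacency map from each vertex to the list of its out-neighbors:

    def counts = [:].withDefault { 0L }
    adjacency.each { vertex, outNeighbors ->     // job 1: g.V.out
        outNeighbors.each { neighbor ->          // job 2: name.groupCount
            counts[neighbor.name] += 1L
        }
    }

The counts land in HDFS as a side-effect file, which is what the next two commands retrieve.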
gremlin> hdfs.copyToLocal('/user/marko/output.txt','target/output.txt')
==>null
gremlin> local.more('target/output.txt')
==>
alcmene 1
cerberus 2
hydra 1
jupiter 3
nemean 1
neptune 2
pluto 2
saturn 1
sea 1
sky 1
tartarus 2
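Job 2's reducer writes through TextOutputFormat, whose default key/value separator is a tab, so each line above is name<TAB>count. A small Groovy sketch of parsing the copied file back into a map, so the distributed answer can be compared with the OLTP one computed below:

    def olapCounts = new File('target/output.txt').readLines().collectEntries {
        def (name, count) = it.split('\t').toList()   // tab is TextOutputFormat's default separator
        [name, count as Long]
    }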
gremlin> g = TitanFactory.open('../titan/bin/cassandra.local')
==>titangraph[cassandrathrift:127.0.0.1]
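The titangraph[cassandrathrift:127.0.0.1] banner confirms that ../titan/bin/cassandra.local points Titan at a local Cassandra instance reached over Thrift. A plausible minimal version of that file, hedged because the exact storage.backend keyword varied across early Titan releases:

    storage.backend=cassandra
    storage.hostname=127.0.0.1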
gremlin> v = ['saturn','sky','sea','jupiter','neptune','hercules','alcmene','pluto','nemean','hydra','cerberus','tartarus'].collect{ g.V('name',it).next() }
==>v[20]
==>v[36]
==>v[32]
==>v[16]
==>v[8]
==>v[24]
==>v[44]
==>v[4]
==>v[40]
==>v[12]
==>v[48]
==>v[28]
gremlin> v._().out.name.groupCount.cap.next()
==>sky=1
==>cerberus=2
==>alcmene=1
==>neptune=2
==>nemean=1
==>jupiter=3
==>sea=1
==>tartarus=2
==>saturn=1
==>pluto=2
==>hydra=1
gremlin>
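The eleven pairs match the Faunus side-effect file above, pair for pair (only the ordering differs). To check that programmatically rather than by eye, a short assertion reusing v and the hypothetical olapCounts sketch from earlier; groupCount's concrete number type can vary, hence the normalization to Long:

    def oltpCounts = v._().out.name.groupCount.cap.next()
            .collectEntries { name, n -> [name, n as Long] }
    assert oltpCounts == olapCounts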