- Kafka binary files: http://kafka.apache.org/downloads.html
- At least 2 AWS machines: AWS EMR or EC2 is preferable
- A Kafka Manager utility to monitor the cluster: https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem
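Once the binaries are downloaded, bringing a broker up on each machine is just two scripts from the extracted archive. A minimal sketch, assuming the bundled single-node ZooKeeper; the version number is only an example:

# Extract the binaries (version here is illustrative)
tar -xzf kafka_2.11-0.10.0.0.tgz && cd kafka_2.11-0.10.0.0
# Start the bundled ZooKeeper, then the Kafka broker
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &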
// CREATE COLLECTION
solr create_collection -c my_collection -shards 2 -d path-to-my-conf
// CHECK COLLECTION SCHEMA
curl http://solr-host.dev:8983/solr/my_collection/schema?wt=schema.xml
// SCHEMA IS GOOD
// UPDATE SCHEMA
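The update command itself isn't shown above; one way to do it is through the Solr Schema API. A sketch, assuming the same host and collection as before, with a hypothetical field name:

// ADD A FIELD VIA THE SCHEMA API (field name is a placeholder)
curl -X POST -H 'Content-Type: application/json' \
  http://solr-host.dev:8983/solr/my_collection/schema -d '{
    "add-field": { "name": "my_field", "type": "string", "stored": true }
  }'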
Best way to read multiple files into a single RDD
==================================
val fileRDD = sc.textFile(filename).repartition(1)
Here filename is the path to the directory itself; sc.textFile reads every file under it into one RDD of lines, and repartition(1) collapses them into a single partition.
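For context, a quick check in the Spark shell; the directory path is hypothetical:

// Assumes a directory containing several text files
val fileRDD = sc.textFile("/data/logs/").repartition(1)
println(s"Merged into ${fileRDD.partitions.size} partition, ${fileRDD.count()} lines total")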
My goal was to set up Flume on my web instances and write all events into S3, so I could easily use other tools like Amazon Elastic MapReduce and Amazon Redshift.
I didn't want to deal with log rotation myself, so I set up Flume to read from a syslog UDP source. In this setup, Flume NG acts as a syslog server: as long as Flume is running, my web application can simply write to it in syslog format on the specified port. Most languages have plugins for this.
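A sketch of the kind of agent configuration this implies; the agent name, port, and bucket are placeholders, and the HDFS sink with an s3n:// path is a common way to point Flume NG at S3:

# syslog UDP source -> memory channel -> HDFS sink pointed at S3
agent.sources = syslog-src
agent.channels = mem-ch
agent.sinks = s3-sink

agent.sources.syslog-src.type = syslogudp
agent.sources.syslog-src.host = 0.0.0.0
agent.sources.syslog-src.port = 5140
agent.sources.syslog-src.channels = mem-ch

agent.channels.mem-ch.type = memory
agent.channels.mem-ch.capacity = 10000

agent.sinks.s3-sink.type = hdfs
agent.sinks.s3-sink.channel = mem-ch
agent.sinks.s3-sink.hdfs.path = s3n://my-log-bucket/flume/%Y-%m-%d
agent.sinks.s3-sink.hdfs.fileType = DataStream
agent.sinks.s3-sink.hdfs.rollInterval = 300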
At the time of this writing, I have Flume NG up and running on 3 EC2 instances, all writing to the same S3 bucket.
Install Flume NG on instances
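Roughly, this amounts to downloading and unpacking the tarball on each instance; the version below is only an example:

wget http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz
tar -xzf apache-flume-1.6.0-bin.tar.gz
export FLUME_HOME=$PWD/apache-flume-1.6.0-bin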