Updated: August 21, 2018
NOTE: This setup is used as an exporter destination for BigStats: https://npearce.github.io
On the AWS Console:
- In 'Instances', click 'Launch Instance'.
- Select 'Amazon Linux 2 AMI (HVM), SSD Volume Type'
- Select 't2.medium' (perfectly fine for lab testing), and click 'Next: Configure Instance Details'
- Select the appropriate 'Network' and 'Subnet' for your environment that can reach your BIG-IP management interface. Click 'Review and Launch'.
- Apply the correct Security Group to permit access to:
- Kafka 9092/tcp
- SSH to the docker host: 22/tcp
- Click 'Launch'
- Select the appropriate key pair you have access to, click 'Launch Instances'.
- Click 'View Instances' to be taken to the newly created instance and watch it boot!
Once the 'Status Checks' transition from 'Initializing' to '2/2 checks passed':
- [OPTIONAL] Give the new instance a name.
- Select the instance and click 'Connect' to access the connection details.
- SSH into the new instance, e.g.
ssh -i "YourAWSKey.pem" ec2-user@<public-dns-of-your-instance>
- Update the package cache:
sudo yum update -y
- Install the most recent Docker Community Edition package:
sudo yum install -y docker
- Start the Docker service:
sudo service docker start
- Add the ec2-user to the docker group:
sudo usermod -a -G docker ec2-user
- Log out and log back in again to pick up the new docker group permissions.
- Verify that the ec2-user can run Docker commands without sudo:
docker info
Note: In some cases, you may need to reboot your instance to provide permissions for the ec2-user to access the Docker daemon. Try rebooting your instance if you see the following error:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Install docker-compose, which will build the Kafka cluster from a single definition file.
- Download docker-compose:
sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
- Fix permissions after download:
sudo chmod +x /usr/local/bin/docker-compose
- Verify by executing:
docker-compose version
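The two $(uname …) substitutions in the download URL above select the binary matching your platform; you can preview the asset name the command will request (on an x86_64 Amazon Linux instance it resolves to docker-compose-Linux-x86_64):

```shell
# Print the release asset name that curl appends to the download URL.
echo "docker-compose-$(uname -s)-$(uname -m)"
```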
- ssh onto the Docker Host and install git:
sudo yum install -y git
- Clone the wurstmeister/kafka-docker repository:
git clone https://github.com/wurstmeister/kafka-docker
- Change to the repo directory:
cd kafka-docker/
- [For a 'single kafka broker' environment] Edit the file docker-compose-single-broker.yml and change the KAFKA_ADVERTISED_HOST_NAME value.
NOTE: Use the hostname or IP of the current docker host, NOT localhost.
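For reference, the single-broker definition file looks roughly like this (a sketch only; exact contents vary by repo version, and the IP shown is a placeholder for your docker host's address):

```yaml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 10.0.1.50   # placeholder: your docker host IP, NOT localhost
      KAFKA_CREATE_TOPICS: "topic1:1:1,topic2:1:1"   # the default topics listed later in this guide
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```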
- Start kafka:
docker-compose -f docker-compose-single-broker.yml up -d
You should see something like:
Starting kafka-docker_kafka_1 ... done
Starting kafka-docker_zookeeper_1 ... done
docker-compose has created two new containers for your single broker kafka environment.
- On the docker host, execute:
docker ps
and you should see a status of 'Up' for both containers.
NOTE: A t2.micro instance may not have enough resources, and you may have trouble keeping the containers running.
- Open a shell prompt to the kafka container:
docker exec -it kafka-docker_kafka_1 sh
- To list the kafka topics known to this broker, execute:
kafka-topics.sh --list --zookeeper zookeeper
You should see two default topics:
topic1
topic2
- View message inside a topic by executing:
kafka-simple-consumer-shell.sh --broker-list localhost:9092 --topic topic1
- Delete a topic:
kafka-topics.sh --zookeeper zookeeper --delete --topic topic2
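To confirm the broker accepts and serves messages end to end, you can (still inside the kafka container's shell, with the broker running) produce a test message and read it back. This is a sketch assuming the default topic1 topic still exists as listed above:

```shell
# Send one test message to topic1 via the local broker.
echo "bigstats-test" | kafka-console-producer.sh \
  --broker-list localhost:9092 --topic topic1

# Read it back; exits after consuming one message.
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic topic1 --from-beginning --max-messages 1
```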
NOTE: There's not much to see right now. However, you are ready to configure BigStats to send data to the Kafka Broker.