Problem: Normal node-cluster-based deployments are no longer supported on cloud.docker.com. New users get Swarm mode by default.
Solution: Deploy a swarm of nodes and run your image as a service on them.
Prerequisite - You have already pushed your image to a public repository on Docker Hub using the docker.sh file present here - https://github.com/paulnguyen/cmpe281/tree/master/docker
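For reference, the push boils down to something like the following (the image and account names are placeholders; the actual docker.sh script may differ):

    # Build the image from the Dockerfile in the current directory
    docker build -t starbucks .
    # Tag it against your Docker Hub account
    docker tag starbucks <dockerhub-user>/starbucks
    # Log in and push to Docker Hub (the repository must be public)
    docker login
    docker push <dockerhub-user>/starbucks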
DISCLAIMER: When running in Swarm mode, Docker Cloud will create instances, ELBs, network interfaces, security groups, a VPC, etc. in your AWS account. Delete them after use or you risk exceeding the Free Tier limits!
- Follow the instructions at - https://docs.docker.com/docker-cloud/cloud-swarm/link-aws-swarm/
- After linking, Docker Cloud will be able to create instances, a VPC, etc. on AWS
- Docker Cloud > Swarms > Create
- Service provider: AWS
- Region: 'us-west-1'
- Swarm Managers: 1
- Swarm Workers: 0
- Instance Type: t2.micro
- Select your key pair
- Wait for swarm deployment to finish. This should create the manager instance on EC2
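- Optionally, confirm from the command line that the instance came up (this assumes the AWS CLI is installed and configured; the EC2 console works just as well):

    # List running instances with their IDs and public IPs
    aws ec2 describe-instances \
        --filters Name=instance-state-name,Values=running \
        --query 'Reservations[].Instances[].[InstanceId,PublicIpAddress]' \
        --output table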
- Connect to the manager node
- Find the manager node's public IP from the AWS EC2 console
- SSH into the manager node using 'ssh -i <ssh-key> docker@<public ip>'. Note - the user you log in as has to be 'docker'.
- There are other ways to connect to the manager node in a swarm, but I found ssh-ing into it the easiest.
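- As a concrete example (the key file name and IP address are placeholders; use the key pair you selected when creating the swarm):

    ssh -i ~/.ssh/my-aws-keypair.pem [email protected]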
- Inside the manager node, run the following commands (a sample session is sketched after this list)
- 'docker node ls'. This should show one node with 'Status' as 'Ready'
- 'docker service create --name <service_name> --publish 90:9090 <dockerhub_image_name>'
- e.g. docker service create --name starbucks --publish 90:9090 binoymichael/starbucks
- This will deploy the starbucks service as a container and map port 90 of the manager node to port 9090 of the container.
- Run 'docker ps' to confirm that the container is running
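- Putting those commands together, a session on the manager node might look roughly like this (output abbreviated; IDs and names will differ):

    # Confirm the swarm has one node in 'Ready' state
    docker node ls

    # Create the service, mapping host port 90 to container port 9090
    docker service create --name starbucks --publish 90:9090 binoymichael/starbucks

    # Inspect the service and its tasks
    docker service ls
    docker service ps starbucks

    # The task runs as a regular container on this node
    docker ps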
Note: The security group created by Docker Cloud blocks incoming traffic to port 90. We will have to open that up in AWS (see the CLI sketch after this list). I am not sure if this can be configured automatically via the swarm manager node.
- Go to the EC2 dashboard and select the manager node instance.
- There should be a security group with inbound rules for SSH (port 22) and a few other ports
- Add a custom TCP rule for the exposed port (90)
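- If you prefer the AWS CLI to the console, the equivalent rule can be added like this (the security group ID is a placeholder; 0.0.0.0/0 opens the port to the world, which is fine for a quick test but not for production):

    # Allow inbound TCP traffic on port 90 from anywhere
    aws ec2 authorize-security-group-ingress \
        --group-id <manager-security-group-id> \
        --protocol tcp \
        --port 90 \
        --cidr 0.0.0.0/0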
- Test with a browser by accessing http://<public ip of manager node>:90/. This should open the default page served by the container.
- Test with Postman by pointing the 'Postman Starbucks API, Docker Cloud' settings at either the manager node's public IP or DNS name.
- Edit: The manager node sits behind an ELB that Docker Cloud creates automatically. You can also hit port 90 on the ELB's DNS name to reach the deployed service.
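- curl also works as a quick sanity check (host names are placeholders; substitute a real endpoint from the Starbucks API if the root path returns nothing):

    # Against the manager node directly
    curl -i http://<public-ip-of-manager>:90/
    # Or against the ELB that Docker Cloud created
    curl -i http://<elb-dns-name>:90/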
- The swarm logs are not present in the Docker Cloud dashboard.
- Running 'docker logs' on the manager node does not show them either (a possible alternative is sketched below).
- AWS CloudWatch > Logs displays the logs of hits to the service. I am not sure whether this will suffice for Quiz 4.
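- One thing worth trying (assuming a reasonably recent Docker Engine on the manager; I have not verified this on the Docker Cloud nodes) is the service-level log command, since 'docker logs' targets a single container rather than a swarm service:

    # Follow the logs aggregated across all tasks of the service
    docker service logs -f starbucks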
Cleanup:
- Delete the IAM roles and policies created by Docker Cloud
- Unlink AWS from Docker Cloud
- Delete all AWS instances, ELBs, etc. created by Docker Cloud
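- As far as I can tell, Docker Cloud provisions the swarm through a CloudFormation stack, so deleting that stack should remove the instances, ELB, security groups, and VPC in one go (the stack name is a placeholder; verify in the EC2 and VPC consoles afterwards):

    # Tear down everything the stack created
    aws cloudformation delete-stack --stack-name <swarm-stack-name>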
Notes on Swarm mode:
- In our case, the manager will also act as a worker node.
- The manager node can launch worker nodes if needed. Production applications will need more workers, and the service can be scaled up depending on application load (see the scaling sketch after these notes).
- Swarm mode ensures that the manager and the nodes can communicate with each other by automatically creating a security group and opening the necessary ports - 2376, 2377, 4789, and 7946.
- You can read more about Swarm mode here - https://docs.docker.com/engine/swarm/
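- Scaling the service is a one-liner once worker nodes have joined (service name taken from the example above):

    # Run three replicas of the service, spread across available nodes
    docker service scale starbucks=3
    # Equivalent form:
    docker service update --replicas 3 starbucks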