- Docker is an open source project to pack, ship and run any application as a lightweight container.
- An abstraction layer to "containerize" any application and allow it to run on any infrastructure
- Used to containerize OMERO, OMERO.web and the additional components of OMERO.cloudarchive
- AWS is a secure cloud services platform, offering compute power, database storage, content delivery and other functionality.
- Use S3 object storage to store the archive once dehydrated
- Use EC2 to provide a scalable compute environment to hydrate archives
- Use ECS to manage and deploy the services required by OMERO as Docker containers
- Use LoadBalancers to provide endpoints for multiplexed OMERO.web and OMERO RO
- Use EFS to provide shared storage for multiple OMERO and OMERO RO instances
- Ensure the AWS CLI is installed (`pip install awscli`) and configured. Use "us-east-1" as the region for now to eliminate the region as a potential source of error.
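For reference, a minimal setup sketch (assuming Python and pip are already available; `aws configure` prompts you for the values interactively):

```
# Install the AWS CLI
pip install awscli

# Configure credentials and the default region;
# enter us-east-1 when prompted for the region
aws configure
```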
In the future we will specify exact versions of our Docker images to use, but for now we will just use the latest release. Docker Compose will actually do the pulls for us, but for reference:
docker pull dpwrussell/omero.cloudarchive
docker pull dpwrussell/omero-grid-web
docker pull postgres:9.4
The easiest way to orchestrate several Docker containers together is with a Docker Compose file, which specifies the settings for each container and how the containers interoperate.
- Download the Docker Compose YAML
- Examine `docker-compose.yml` (a rough sketch of what such a file contains appears after this list)
- Run `docker-compose up`
- That's it. Go to http://localhost:8080 and see that you have a running OMERO.
- Note: unfortunately, due to how OMERO user configuration works, the server must start, create the public-user, stop, finish the configuration, and then start again. There may therefore be a brief window where the OMERO server appears to be down, or is up but does not yet have the public user.
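For orientation, the compose file will look roughly like the sketch below. The service names, links, and port mapping are illustrative assumptions based on the three images pulled above, not the exact contents of the real `docker-compose.yml`, so do examine the downloaded file:

```yaml
# Illustrative sketch only -- see the real docker-compose.yml for the
# actual service definitions and environment variables
version: '2'
services:
  db:
    image: postgres:9.4              # database backing the OMERO server
  omero:
    image: dpwrussell/omero.cloudarchive
    links:
      - db                           # OMERO connects to the postgres service
  web:
    image: dpwrussell/omero-grid-web
    links:
      - omero
    ports:
      - "8080:8080"                  # OMERO.web reachable at http://localhost:8080
```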
The OMERO Docker container is now running on localhost and can be connected to with Insight. Alternatively, we can log in to the container and import an image with the CLI.
## Log in to the OMERO docker container
docker exec -it --user omero omerocloudarchivedocker_omero_1 /bin/bash
## Download an image from the web
wget <image_url>
## Import the image, use user: public-user, password: omero to import
## directly to the public user
~/OMERO.server/bin/omero import <image_file>
- Check using the web interface that the image is imported correctly
- Create a bucket in the AWS S3 Console to dehydrate the archive into. I recommend a "subfolder" inside a bucket as it is easier to later make public with the AWS S3 Console, e.g. `mybucket/test1`.
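If you prefer the CLI to the console, the bucket can also be created like this (`mybucket` is an example name):

```
# Console-free alternative: create the bucket
aws s3 mb s3://mybucket
# S3 "subfolders" are just key prefixes; mybucket/test1 will appear once
# the dehydration step writes its first object under that prefix
```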
- On the machine (not inside the container) configured to access AWS, generate temporary credentials for the dehydration process with `aws sts get-session-token`.
Then inside the container, use the credentials and the S3 bucket to dehydrate the archive.
~/dehydrate <aws_access_key_id> <aws_secret_access_key> <aws_session_token> <s3_bucket>
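The three credential values map onto `dehydrate`'s arguments in the order shown above. `get-session-token` returns JSON shaped like this (values are placeholders, and the bucket/prefix is the example from earlier):

```
$ aws sts get-session-token
{
    "Credentials": {
        "AccessKeyId": "<aws_access_key_id>",
        "SecretAccessKey": "<aws_secret_access_key>",
        "SessionToken": "<aws_session_token>",
        "Expiration": "..."
    }
}

# Then, inside the container:
~/dehydrate <aws_access_key_id> <aws_secret_access_key> <aws_session_token> mybucket/test1
```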
- Inspect the S3 bucket for new contents
- Select the S3 bucket and click `More -> Make Public`.
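As a scriptable alternative to the console clicks (my addition, not a step from the original walkthrough), a bucket policy can grant public read on the archive prefix:

```
# Grant anonymous read access to everything under the test1/ prefix
# (bucket and prefix names are the examples from above)
aws s3api put-bucket-policy --bucket mybucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::mybucket/test1/*"
  }]
}'
```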
- Launch the OMERO.cloudarchive CloudFormation template in us-east-1, logging in to the AWS CloudFormation Console if necessary.
- Click `Next`.
- Leave most of the settings at their defaults, but populate `KeyName`, `S3Bucket`, `SubnetIds`, and `VpcId`.
- Click `Next`. The CloudFormation stack should be provisioned. This will take several minutes.
- Once complete, click on the `Outputs` tab and copy the hostname of the web endpoint. Paste this into the browser. Again, this can take a few minutes to work correctly, as the load balancer monitors the health of the service and it takes a little time to come up.
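For completeness, the same launch can be scripted. This is a sketch: the stack name and `<...>` values are placeholders, and the parameter names are taken from the console walkthrough above:

```
# Console-free launch sketch; may also need --capabilities CAPABILITY_IAM
# if the template creates IAM roles
aws cloudformation create-stack \
  --stack-name omero-cloudarchive \
  --region us-east-1 \
  --template-url <template_url> \
  --parameters \
    ParameterKey=KeyName,ParameterValue=<key_pair_name> \
    ParameterKey=S3Bucket,ParameterValue=mybucket/test1 \
    ParameterKey=SubnetIds,ParameterValue=<subnet_ids> \
    ParameterKey=VpcId,ParameterValue=<vpc_id>

# Watch for CREATE_COMPLETE, then read the web endpoint from the outputs
aws cloudformation describe-stacks --stack-name omero-cloudarchive \
  --query 'Stacks[0].Outputs'
```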
- Select the CloudFormation stack in the AWS Console and go to `Actions -> Delete Stack`.
- `Ctrl+C` the docker-compose process to stop the containers, and then run `docker-compose rm` to remove them.
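The teardown can also be scripted (using the stack name assumed in the launch sketch above):

```
# Delete the AWS resources
aws cloudformation delete-stack --stack-name omero-cloudarchive

# Stop and remove the local containers (equivalent to Ctrl+C, then rm)
docker-compose stop
docker-compose rm -f
```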
Thanks for the nice tutorial, but what do you mean by "inside the container"?

> Then inside the container, use the credentials and the S3 bucket to dehydrate the archive.
> ~/dehydrate <aws_access_key_id> <aws_secret_access_key> <aws_session_token> <s3_bucket>

I have everything but I don't know how to do the dehydration correctly. How can I get inside the Docker container?

Cheers, Stefan