Introduction
We will provide an example of how to work with multiple containers, how to share volumes, and how to deploy multiple containers locally using docker-compose and Amazon Elastic Beanstalk’s multi-container option. For this scenario, we assume that we are dealing with two APIs and we want to share data across the API containers using an anonymous volume. Assume that you have the following two APIs with the corresponding Dockerfiles.
Dockerfile for price_api:
FROM node:11-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["npm", "run", "start"]
Dockerfile for service_api:
FROM node:11-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["npm", "run", "start"]
Create an image for each API
For the service_api
Change directory to the service_api folder and run the build command:
docker build --tag service-api .
For the price_api
Change directory to the price_api folder and run the build command:
docker build --tag price-api .
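At this point both images should show up in your local image list; a quick sanity check:
# Filter the local image list for our two freshly built images
docker images | grep -E "service-api|price-api"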
Anonymous Volume Pattern to share pre-existing data
We will show how we can pre-seed a volume with data and share it. Our goal is to manage the volume outside of the build process. Create a new folder called my-data in Cloud9, then create a file called price_data.json inside it: right-click the my-data folder and choose New File. Double-click the price_data.json file to open it and paste in the following:
{
"hawaii": "1450.95",
"sf": "1758:76"
}
Now create a second file inside my-data called service_data.json and paste in the following:
["hawaii", "sf"]
And finally we need to create a Dockerfile inside the my-data folder:
FROM node:11-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
VOLUME /my_amazing_shared_folder
COPY . /my_amazing_shared_folder
CMD tail -f /dev/null
This Dockerfile builds an image that copies the two JSON files from the my-data folder into the /my_amazing_shared_folder volume; the tail -f /dev/null command simply keeps the container running so the volume stays available. Let’s build it:
docker build --tag my-data .
Now let’s run it:
docker run -d --name my-shared-data my-data
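Because the Dockerfile declares VOLUME /my_amazing_shared_folder, Docker creates an anonymous volume for this container and populates it with the copied files. You can confirm the volume exists and find its generated name with:
# Show the mounts (including the anonymous volume) attached to the container
docker inspect --format "{{ json .Mounts }}" my-shared-data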
Now we will run both API containers using the volumes from my-shared-data and test it. We start with the service-api, referencing the data container called my-shared-data:
docker run -d --name service-api --volumes-from my-shared-data -p 8080:3000 service-api
Now do the same for the price api:
docker run -d --name price-api --volumes-from my-shared-data -p 8081:3000 price-api
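Both API containers now mount the same anonymous volume exposed by my-shared-data. A quick way to confirm the seeded files are visible from inside the containers:
# List the shared folder from the service-api container
docker exec service-api ls /my_amazing_shared_folder
# Read the seeded price data from the price-api container
docker exec price-api cat /my_amazing_shared_folder/price_data.json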
Using a seeded named volume instead of volumes-from
In most cases you may not need such tight coupling and volume sharing, and you might be better off creating each API as a task with just one container, then treating the APIs independently and scaling them as such. This can be accomplished by using an externally created named volume along with a docker-compose file. The docker-compose file is the following:
version: "3.7" services: price-api: image: price-api networks: - backend ports: - "8081:3000" volumes: - my-named-shared-data:/contains_your_price_data service-api: image: service-api networks: - backend ports: - "8080:3000" volumes: - my-named-shared-data:/contains_your_service_area_data volumes: my-named-shared-data: external: "true" networks: backend: driver: bridge
In this file we are running two services, one called service-api and one called price-api. Both services reference my-named-shared-data. The key here is to notice that we are using the top-level volumes section to define the name of the volume, and notably we are setting the external flag to true. This means we plan to create the volume outside of Docker Compose, so we do not want docker-compose to create a new volume every time this is run. It also helpfully raises an error if the volume doesn’t exist.
Create the named volume that we will reference in our docker-compose file using the docker volume create command:
docker volume create my-named-shared-data
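Note that docker volume create produces an empty volume, so to match the “seeded” part of this pattern we still need to copy our data in. One way to do this (a sketch, assuming your shell is in the directory that contains my-data) is to mount the volume in a throwaway container alongside the local folder and copy the JSON files into it:
# Mount the named volume and the local my-data folder side by side,
# then copy the seed files into the volume; --rm removes the container afterwards
docker run --rm \
  -v my-named-shared-data:/seed \
  -v "$PWD/my-data":/src \
  alpine sh -c "cp /src/price_data.json /src/service_data.json /seed/"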
How to install docker-compose on Linux
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Then make the binary executable:
sudo chmod +x /usr/local/bin/docker-compose
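A quick check that the binary is installed and executable (your version output may differ):
docker-compose --version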
Bring the stack up with docker-compose:
docker-compose up
In order to run this and not hold your command line hostage, you can start the already-created containers in the background:
docker-compose start
Alternatively, docker-compose up -d brings the whole stack up in detached mode in one step.
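To confirm both services are up and can see the shared volume, you can run something like the following (the paths match the mount points in the compose file; if you seeded the volume as above, you should see the two JSON files):
# Show the state of both services
docker-compose ps
# List the mounted volume from inside each service
docker-compose exec price-api ls /contains_your_price_data
docker-compose exec service-api ls /contains_your_service_area_data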
Deploy locally using the Elastic Beanstalk multi-container option
We already have our two images and the volume “good to go” for our Beanstalk demo. First, however, we will need to install the EB CLI, which requires a few prerequisites.
sudo yum groupinstall -y "Development Tools"
Then:
sudo yum install -y zlib-devel openssl-devel ncurses-devel libffi-devel sqlite-devel.x86_64 readline-devel.x86_64 bzip2-devel.x86_64 #Some of these packages may already be installed
Then clone the following Elastic Beanstalk CLI repository using git:
git clone https://github.com/aws/aws-elastic-beanstalk-cli-setup.git
Then install the EB CLI with the command below:
./aws-elastic-beanstalk-cli-setup/scripts/bundled_installer
Let’s add eb and python to our PATH environment variable to make things easier to work with:
echo 'export PATH="/home/ec2-user/.ebcli-virtual-env/executables:$PATH"' >> ~/.bash_profile && source ~/.bash_profile
And:
echo 'export PATH=/home/ec2-user/.pyenv/versions/3.7.2/bin:$PATH' >> /home/ec2-user/.bash_profile && source /home/ec2-user/.bash_profile
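A quick check that the EB CLI is now on the PATH (the version output will vary):
eb --version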
Create a file called Dockerrun.aws.json inside the resources folder using New File in Cloud9. Paste in the following and save it:
{ "AWSEBDockerrunVersion": 2, "volumes": [ { "name": "my-named-shared-data", "host": { "sourcePath": "/var/lib/docker/volumes/my-named-shared-data/_data" } } ], "containerDefinitions": [ { "name": "service-api", "image": "service-api", "essential": true, "memory": 128, "portMappings": [ { "hostPort": 8080, "containerPort": 3000 } ], "mountPoints": [ { "sourceVolume": "my-named-shared-data", "containerPath": "/contains_your_service_area_data" } ] }, { "name": "price-api", "image": "price-api", "memory": 128, "portMappings": [ { "hostPort": 8081, "containerPort": 3000 } ], "mountPoints": [ { "sourceVolume": "my-named-shared-data", "containerPath": "/contains_your_price_data" } ] } ] }
Then run:
eb init -k vockey -p "64bit Amazon Linux 2018.03 v2.20.0 running Multi-container Docker 18.09.9-ce (Generic)" --region us-west-2 resources
Now, run the stack locally with Elastic Beanstalk:
eb local run
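Once the containers are up, you can hit both APIs from a second terminal. The exact routes depend on how your APIs are implemented, so the root path below is only an assumption:
# Port 8080 maps to the service-api container, 8081 to the price-api container
curl http://localhost:8080/
curl http://localhost:8081/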