Deploy your app with docker and docker-compose - Part 1
In order to deploy an application using docker, the first step is to set up a development environment.
This tutorial will walk you through creating a dockerized development environment for a single page application.
- The examples will use a Vuejs frontend and a Python backend, but the code is kept to a minimum so we can focus on the docker setup.
- The concepts are language/framework-agnostic; they should be just as useful to somebody deploying a Golang + React application.
- Although we will start from scratch, you should be able to adapt this to an existing application.
Target audience:
- Developers who want to understand or improve their `docker-compose` setup.
- Developers who want to deploy an existing application using docker.
- Teams that want to standardize their development environment (without necessarily going to production with it).
Requirements:
- You should have `docker` and `docker-compose` installed on your machine.
- You should be comfortable running commands in your terminal.
If you are not very comfortable with the basic docker concepts or are not sure why you should use it, I wrote this extra section.
## Overview
We will develop a simple application with a Python Flask backend, a PostgreSQL database and a Vue.js frontend.
All our code will run in containers but most of the time you should be able to forget that your code is not running locally.
You can follow along by creating an empty directory or using the code on github 1.
As a preview here's what we will get to at the end of this post:
docker-compose.yml
```yaml
version: '3'
services:
  front:
    image: docker-tutorial/front
    build:
      context: ./
      dockerfile: front.dockerfile
    command: npm run dev
    volumes:
      - ./client:/app
    ports:
      - "8080:8080"
  web:
    image: docker-tutorial/web
    command: python manage.py runserver
    build:
      context: ./
      dockerfile: web.dockerfile
    ports:
      - "5000:5000"
    volumes:
      - ./:/app
    depends_on:
      - db
  db:
    image: postgres:10.2-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - "5432:5432"
```
This file specifies how to build the docker images required for our project and how to run them.
By the end of this post you should have some understanding of every line in this file.
For now, notice that we have three sections under `services`, namely `front`, `web` and `db`.
Our application will run in three containers (based on three images) that communicate with each other: our javascript development server, our Flask application and our database.
Now we're going to start from an empty directory and detail everything we need for this to actually run.
## Backend
Let's start with the simplest Flask application.
app.py
```python
from flask import Flask

app = Flask(__name__)
app.config['DEBUG'] = True

@app.route('/')
def hello_docker():
    return 'Soon this will all run from docker.'

if __name__ == '__main__':
    app.run()
```
`app.config['DEBUG'] = True` makes the development server reload files when we change our code.
requirements.txt
```
Flask
```
We want to run this from docker. We need to create a docker image that can do this. Let's create a `web.dockerfile`:
```dockerfile
FROM python:3.6
RUN mkdir /app
WORKDIR /app
# Copy requirements first so the pip install layer is cached
# as long as requirements.txt does not change.
ADD requirements.txt ./
RUN pip install -r requirements.txt
ADD ./ ./
CMD python app.py
```
To build the image from this dockerfile we run the following command:
docker build -f web.dockerfile -t docker-tutorial/web ./
We can then run a container based on the image we just built:
docker run --rm -it docker-tutorial/web
At this stage it should display:
$ docker run --rm -it docker-tutorial/web
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
Great. However if you open up your browser you'll notice it does not work.
This is because Flask is running on port 5000 on the container which is not the same as port 5000 on the host (which is your machine if you are on linux, and a virtual machine created by docker if you are on Mac OS or Windows).
We need to map our host port to the container port so we can access it:
$ docker run -p 5000:5000 -t docker-tutorial/web
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
At this stage... it still does not work!
The ports are linked properly, but the Flask server only listens to requests coming from inside the container (it is running on `127.0.0.1` in the container).
This is a common gotcha; here's how we fix it:
```python
if __name__ == '__main__':
    app.run('0.0.0.0')
```
`0.0.0.0` tells our server to listen for requests from any origin on the network, so we can reach the container from our host.
Now we can open `http://127.0.0.1:5000/` in the browser and it works!
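The difference between the two bind addresses is easy to see with plain Python sockets, outside of docker entirely. This is just an illustrative sketch (the `loopback`/`anywhere` names are mine, not part of the app):

```python
import socket

# Binding to 127.0.0.1 means: only accept connections addressed to the
# loopback interface, i.e. coming from this same machine (or container).
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(('127.0.0.1', 0))  # port 0 lets the OS pick a free port

# Binding to 0.0.0.0 means: accept connections arriving on any interface,
# which is what lets the host reach a server running inside the container.
anywhere = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
anywhere.bind(('0.0.0.0', 0))

print(loopback.getsockname()[0])  # 127.0.0.1
print(anywhere.getsockname()[0])  # 0.0.0.0
```

Flask's `app.run('0.0.0.0')` boils down to binding its listening socket this way.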
However, if you try changing `app.py` to return a different message, you will see that the change is not taken into account. With this setup you need to build the image again so that the container gets the new `app.py` file.
Of course we don't want to build the image again every time we write new code. We can use a volume to synchronize a directory on our machine with a directory on the container:
$ docker run --rm -v $(pwd):/app -p 5000:5000 -it docker-tutorial/web
Our image already had our code in `/app` because we copied it there at build time. The volume overwrites it, allowing us to edit code on our machine and have the changes reflected in the container.
At this point you might wonder why we copied the code into the image during the build. This is because the volume is only for development purposes, and we will want to use the same dockerfile for production.
## Adding a database
A web app usually requires a database. Here we go:
docker run --rm --name db -p 5432:5432 postgres:10.2
That's it. Now we have a postgres server running in the container and bound to our host port 5432. This means we can run:
psql --host localhost -U postgres
In this tutorial we won't detail the code that talks with the database (as this is Python-related) but the code on github does this.
We could run our docker commands one at a time every time we want to run our app, but it is a bit tedious when these commands get long or we add more containers.
Enter docker-compose.
At the basic level, docker-compose is just a tool that lets us store all our docker commands in a single configuration file and run them all at once if we want to 2.
docker-compose.yml
```yaml
version: '3'
services:
  web:
    image: docker-tutorial/web
    build:
      context: ./
      dockerfile: web.dockerfile
    ports:
      - "5000:5000"
    volumes:
      - ./:/app
    depends_on:
      - db
  db:
    image: postgres:10.2-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - "5432:5432"
```
You can now use `docker-compose build` to build your images (right now we only have one custom image, for `web`).
And you can launch both the backend and the database with:
docker-compose up
Looking at the `docker-compose.yml` file, each of the `services` sections defines:
- How to build the image for that service.
- What parameters to pass to docker when launching the container for that service.
- Some extra parameters like `depends_on` (docs). It is optional here; it makes sure that if you `docker-compose up web`, it launches the database as well.
You will recognize some of the parameters we used in our `docker run` commands: `ports` and `volumes`.
Note the `db` service does not provide a `build` section because it uses an image from dockerhub.
## Adding a frontend
We mentioned a single page application so we need a frontend.
Though we could use vanilla JavaScript for the purpose of this tutorial, we will use Vue.js because most people start with a framework.
We will also use a docker container to run our frontend development server.
This means we don't have to install `npm` on our host machine, and everybody on our team has the same version of node.
mkdir client
Let's create our `front.dockerfile`:
```dockerfile
FROM node:9.5
# Specify the version so builds are (more) reproducible.
RUN npm install --quiet --global vue-cli@2.9.3
RUN mkdir /app
WORKDIR /app
```
Add a new service to our `docker-compose.yml` file:
```yaml
front:
  image: docker-tutorial/front
  build:
    context: ./
    dockerfile: front.dockerfile
  volumes:
    - ./client:/app
  ports:
    - "8080:8080"
```
This should start to look familiar.
- We bind port 8080 on our machine to port 8080 on the container (8080 because it is the default port our `webpack-dev-server` will use).
- We use a volume to sync our code between the container and the `client/` folder.
You might have noticed that the `front.dockerfile` is very simple. Basically we use it only to get `npm`. All the packages will be installed on the container.
Thanks to the volume everything will happen as if we had installed them locally 3.
Now we will use `vue-cli` to bootstrap our client code. We need to run this in our container, since this is where `vue-cli` is installed.
```shell
# This will give us boilerplate code for a Vue application with a full-blown build process.
# See https://github.com/vuejs-templates/webpack for the template we are using.
docker-compose run --rm front vue init webpack
```
When you run this for the first time, `docker-compose` wants to launch a container based on the `docker-tutorial/front` image. This image does not exist yet because we have not built it: `docker-compose` realizes that and builds the image. Then it can launch our container and run the command we passed it: `vue init webpack`.
```shell
# You will get a number of questions. Remember this runs on the container.
$ docker-compose run --rm front vue init webpack
? Generate project in current directory? (Y/n) Y
```
For this tutorial I chose not to use vue-router or eslint, and opted out of the tests.
I accepted the option to run `npm install` after the project has been created.
This will install all required javascript packages on the container, in a `node_modules` directory. Since we are using a volume, we will see these files appear in our host `client/node_modules` directory.
The `vue` command line also created a lot of files for us to bootstrap our app.
We can now run:
docker-compose run --rm --service-ports front npm run dev
This should launch a server on `localhost:8080` in the container. However, this is not enough for us to access it from the host!
As with the backend, we need to serve on `0.0.0.0:8080` in the container. We can change the webpack config to do this.
In `client/config/index.js`, under the `dev` section, change `host: 'localhost'` to `host: '0.0.0.0'`.
You will need to restart the `webpack-dev-server` (running in the front container), as it does not pick up changes in the webpack config on its own. The easiest way is to restart the entire front container with `docker-compose restart front`.
Finally this will let us access our frontend in the browser:
docker-compose run --rm --service-ports front npm run dev
Visiting `http://localhost:8080/`, you should see a warm Welcome to Your Vue.js App.
Let's add this command to our compose file so `docker-compose up` uses it:
```yaml
command: npm run dev
volumes:
  - ./client:/app
ports:
  - "8080:8080"
```
Great! Now running `docker-compose up` launches our frontend, backend and database servers.
If you have an existing application, you can change the dockerfile to remove the `vue-cli` installation. For consistency between developers, you should use `npm` from the frontend container to install packages rather than your local version.
## Putting things together
Now we want to connect our backend with our frontend.
We'd like to write code like:
```javascript
fetch('http://localhost:5000/api') // Whether we'd really like to use `fetch` is a separate matter.
  .then(...)
```
However this is not going to work due to Cross-Origin Resource Sharing (CORS) restrictions.
This happens because we make a request to `localhost:5000` from another origin, `localhost:8080`, to which our server answers: "I don't know you; I'm not answering."
CORS is only an issue in development, because we will not be using `webpack-dev-server` in production.
There are several ways of fixing it; we will use webpack's proxying feature to get around it.
The changes we have to make are in the webpack config again:
client/config/index.js
```javascript
proxyTable: {
  '/api': {
    target: 'http://web:5000',
    changeOrigin: true
  }
},
```
Note that hot-reloading does not work for webpack config files, so you need to stop and re-run `docker-compose up` for the changes to take effect 4.
From `http://localhost:8080` we will make requests to `http://localhost:8080/api` instead of `http://localhost:5000/api`, and it is `webpack-dev-server` that will transmit them to the backend. In this process our webpack development server changes the origin header, so our request is accepted.
Now, why are we using `web:5000` and not `localhost:5000` 5?
Remember it will be `webpack-dev-server` relaying the request to our web container: the request won't originate from your host but from the frontend container. That means if you use `target: 'http://localhost:5000'`, requests will be made to port 5000 on the frontend container, and there's nothing there!
But why does `web:5000` even work? This is `docker-compose` magic: it created a docker network with all our containers. Our containers can talk to each other using the service names as addresses.
You can test this configuration by adding this bit of JS to your `client/index.html` file:
client/index.html
```html
<script type="text/javascript">
  fetch('/api').then(res => res.text())
    .then(text => console.log('text', text))
</script>
```
Note we also need to make sure we have an `/api` route on our flask server, so change the url in `app.py`:
app.py
```python
@app.route('/api')
def hello_docker():
    return 'Now this really runs from docker!'
```
If you open your browser with the devtools and refresh the page, you should see the message logged in the console!
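If you want to check the route without a browser, Flask also ships with a test client that calls the app directly, with no container or HTTP server involved. A small sketch, assuming Flask is installed (it is in our `requirements.txt`):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/api')
def hello_docker():
    return 'Now this really runs from docker!'

# The test client exercises the route without starting a real server.
client = app.test_client()
response = client.get('/api')
print(response.status_code)             # 200
print(response.get_data(as_text=True))  # Now this really runs from docker!
```

This is handy for quick sanity checks, though it deliberately bypasses the networking questions (ports, bind addresses) that this tutorial is about.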
## Tips for developing with docker-compose
Though this setup is a good start, you will have to learn about docker along the way. This can usually be done progressively though.
Here are some tips for day-to-day work:
- `docker-compose up` does not play well with standard input: that means you can't use `pdb` if you start your backend server with `docker-compose up`.
  I usually run `docker-compose up front db ... [-d]` to launch all the containers but my backend server.
  I use the daemon flag (`-d`) when I don't need to see the logs (you can always `docker-compose logs front db` later).
  Then I do `docker-compose run --rm --name web --service-ports web`, as this works with `pdb`.
- It's nice to explicitly name your containers when using `docker-compose run`, as it makes it easier to run the ad-hoc docker (non-compose!) commands you might need. Otherwise you need to look up the randomly-generated name that docker gave your container.
- Sometimes you need/want to go look at what the hell is going on in your container:
```shell
# web here needs to be the name of a running container.
# It does not refer to the service; docker does not even know about the compose file.
docker exec -it web bash
```
`-it` makes sure you get a terminal prompt so you can explore things; otherwise docker just runs bash and exits. (`-it` is short for `--interactive --tty`. `--tty`: give me a prompt! `--interactive`: keep stdin attached so I can use that prompt!)
## Conclusion and next steps
We have a development environment that runs completely inside docker. Great! This is already a big win if you are working with a team and you need a consistent development environment.
Also note that you don't have to do this all at once: an easy start could be to use docker-compose only for things like your database, `redis` or `rabbit-mq`.
On some projects I do not dockerize my frontend development server and just run it on my machine. I feel there are fewer benefits to dockerizing the frontend than the backend, since we won't really reuse the frontend part for deployment.
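That lighter approach could look something like this: a compose file with only the stateful services, while the application code runs directly on your machine. This is just a sketch, not part of the tutorial's setup; the `redis` service and its version tag are examples:

```yaml
# docker-compose.yml -- dockerize only the stateful services,
# run your app and frontend code locally.
version: '3'
services:
  db:
    image: postgres:10.2-alpine
    ports:
      - "5432:5432"
  redis:
    image: redis:4-alpine  # example version, pin whatever you actually use
    ports:
      - "6379:6379"
```

Your locally-running app then connects to `localhost:5432` and `localhost:6379` as usual.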
The next big win is ease of deployment. Though we will have to make some changes ;).
The goal of the next tutorial will be to get a production setup that:
- Does not duplicate everything we've done so far.
- Can be tested locally so you can fight with that nginx configuration on your ground.
- Can be deployed efficiently - in terms of both speed and developer input.
- Includes logging, restart policies and other niceties.
- Makes you feel good if you've ever struggled with a deployment process.
Take a look at the code for this post or read the next part!
## Footnotes
1. There's more in the repository than what we do in this blog post. Run `git clone` then `git checkout tags/dev-setup-blog` to get just what we do in this post. `git checkout tags/working-app-dev-setup` will give you the same setup applied to a simple programming quiz application. ↩
2. There's a bit more to it than that: `docker-compose` also lets our containers talk to each other, as we will see later on. ↩
3. We are doing this so that we don't have to worry about installing node and npm, or different versions between developers. ↩
4. You could also run `docker-compose stop front` and `docker-compose up front` to restart just this service. ↩
5. This is really key to understand. How would we do it if we were running the webpack server locally (not using docker for the frontend)? What address would we use? ↩