Prologue
So, you went through the first blog post in the dockerizing series and you're thinking: but wait, if I need to run all these commands and remember all the volumes, ports and so on, how is that helpful in a normal development workflow and in DevOps practices???
To be honest, it's not that helpful xD . But it's a start!
In this blog post I'm going to cover using Docker Compose to create a complete project with all the configuration needed for your PHP and your Nginx, how to add a database to the project and configure it, and maybe how to add another service like Redis or ElasticSearch.
Dockerizing your application is a first step
Since you are already here and I want to adhere to the DRY principle while writing this post, I'm going to shamelessly link the first post about dockerizing a PHP application. So here it is: https://gorandespotoski.com/posts/1-dockerizing-php-applications . Take a look, enjoy it!
Back so soon? Yeah, it was easy peasy, no sci-fi there. You might think: ok, now I can write a shell script that receives arguments and starts the processes and apps I need in Docker. But let's be real, it quickly becomes annoying to remember all these docker commands just to start your services, and then imagine juggling a local and other environments on top. Maintaining and reading such a script for different setups and environments can be daunting.
One bad copy paste here and there and now you have a broken environment. Imagine copying a newly added volume to staging properly, but forgetting to add it when updating the command for production; heads might roll... Imagine doing it in SOP scripts, or hardcoding values into the shell scripts I mentioned before. Whatever the combination, it's gonna be a mess.
In come the docker compose yml files to the rescue
Docker compose yml files are a way to define and run multiple containerized applications/services. This means that with one docker-compose.yml file, you can define multiple apps/services that run together for each of your projects and keep everything separate and clean (each with its own version accordingly).
We'll have these 4 services:
- Nginx
- PHP
- PostgreSQL (works the same if we want to use MySQL)
- Redis
and we'll see how they all connect and interact with each other, and we'll configure some of them as needed.
Now, as opposed to what we did before when we added the Dockerfile to the root of the project, we'll have multiple Dockerfiles and configuration files, one set per service, since we want to structure them better. So our final Dockerfile(s) and config structure should look like this:
```
Root
└── .docker
    ├── app
    │   └── Dockerfile
    ├── web
    │   ├── Dockerfile
    │   └── default.conf
    ├── postgresql
    └── redis
```
Note that I keep the dot in front of the name so the folder always sorts to the top and doesn't get in the way when developing the application. In each folder there will be a Dockerfile and/or configuration for the appropriate service that we need overridden/used.
Another note: notice that I haven't added any config or Dockerfile for the postgresql or redis services, since I don't need to build a custom image or change anything from the default configuration (for now). I'm just using the default official images from Docker Hub with their internal default configurations.
So we'll begin with a clean project that has a `src` folder with an `index.php` with the same content as previously:
```php
<?php
echo "Hello World from a Dockerized PHP Application";
```
This will be our app, which we'll hopefully expand on in a later blog post.
Next, to start the docker-compose.yml file: in the latest Docker Compose versions we begin it by adding this section

```yaml
services:
```

and under this section we list our services and their definitions. Note that the `version` section is deprecated in the latest versions, so I'm omitting it here.
PHP (app) service
This time, since we aren't going to use the official PHP Apache image and we're going to have a separate Nginx service and a separate PHP service, we'll use the official php-fpm image. So our PHP Dockerfile will still be simple, but it will look like this:
```dockerfile
FROM php:8.2-fpm
COPY ./src /var/www/
EXPOSE 9000
CMD ["php-fpm"]
```
This does the same as what we had before, but it additionally exposes port 9000 (the PHP FPM port that Nginx will connect to as its upstream for all PHP code to be executed) and uses the CMD directive to keep the FPM process running in the container, just like in a common FPM setup on a normal VM or physical server.
Note on why we separate the services into Nginx and PHP FPM: once they are separated, we can scale them independently for the future million users we'll have, and that will be fairly easy if I may say so. Well, it's not that simple, but it's a good rule to follow along with some others, and you should then be able to scale up parts of your application. Scaling an app is usually an art by itself, so your mileage may vary; you may not need to scale Nginx at all, for example, but you will if you serve many, many files or have many PHP FPM backends (upstreams). But hopefully I'll get to this some day.
So, as we began with the Dockerfile, the first thing we'll do is create the app service, which is the PHP FPM process that will contain and execute all the code. The section for this service in its most basic form will look like this:
```yaml
...
app:
  build:
    context: .
    dockerfile: .docker/app/Dockerfile
  volumes:
    - ./src:/var/www/
  working_dir: /var/www
...
```
In the `build` subsection, we simply say that the image to be built will be built from the current context/directory, and we set the path to the desired Dockerfile. Since this is not the previous Apache image of PHP, we'll use the `/var/www` folder inside the image for copying the code into (and for the volume).
We can check whether it's ok by running the `docker compose up` command. It will start working and show you a message like:

```
app-1  | [31-Jul-2024 08:34:29] NOTICE: ready to handle connections
```

which means that FPM works and listens for connections from web servers (Nginx, which we haven't set up yet).
If you made a mistake or added something in your Dockerfile and want to rebuild the image, you can either use `docker compose build` and then `docker compose up` again, or just use `docker compose up --build` to rebuild and start your services (handy if you're debugging something and need to build and up, build and up). To check that your service is running, open another terminal and, while in the same folder, run `docker compose ps`; it will show you all the running services from the docker-compose.yml file in the current folder. If you want to check ALL the Docker containers running on your PC, just run `docker ps` and it will show them to you.
Note that we haven't exposed any ports, and we don't have to for this service. This is because Docker will create its own internal network (topic for another day) for this docker-compose.yml file, and port 9000, the FPM default, is already exposed through our Dockerfile to the other services in the compose file.
If you really don't trust me that PHP works, you can run something like `docker compose exec app bash` and walk around your running container. You'll notice you are in the `/var/www` folder and that it has all the files and folders you shared through the volume (dot-prefixed files are hidden; use `ls -la` to show them too).
Another note: in the past, `docker-compose` was a separate executable installed separately from `docker`. That's why, if you run `docker help`, you won't find the `compose` command in the list of Commands there; it's a plugin/extension of Docker. On older systems/installations, `docker compose` commands are called with `docker-compose`.
Nginx (web) service
In order for PHP to work and serve files over the HTTP protocol, you need a web server in front of it. The web server can ask PHP FPM to process the code, or, in the case of assets like images, JavaScript or CSS files, just serve those files to the client (a browser or API client). For some text files it can even compress them and improve performance, but that's another topic. Different optimizations for different server configurations can also be done through Nginx.
You could even add a different version of the PHP FPM service in your docker-compose.yml file and proxy some parts of your app to older PHP FPM versions for legacy apps. Imagine having one app on PHP 8.3, but needing your older app running on PHP 7.4. You can do this with two different PHP services (app-1 and app-2) and just one Nginx.
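As a rough sketch of that idea (the service names app83 and app74 and the Dockerfile paths are made up for illustration):

```yaml
...
app83:
  build:
    context: .
    dockerfile: .docker/app83/Dockerfile   # a Dockerfile based on php:8.3-fpm
app74:
  build:
    context: .
    dockerfile: .docker/app74/Dockerfile   # a Dockerfile based on php:7.4-fpm
...
```

Then in the Nginx default.conf you'd keep the main PHP location pointing at `fastcgi_pass app83:9000;` and add a second location block for the legacy paths whose `fastcgi_pass` points at `app74:9000`.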
So, this is basically similar; we need a `web` section for our Nginx server:
```yaml
...
web:
  build:
    context: ./
    dockerfile: ./.docker/web/Dockerfile
  working_dir: /var/www
  volumes:
    - ./src:/var/www/
    - ./.docker/web/default.conf:/etc/nginx/conf.d/default.conf
  ports:
    - "8080:80"
  depends_on:
    - app
...
```
We added three things here. One is `ports`: it opens port 8080 on your host PC and forwards it to port 80 in the Nginx container (similar to what we did in the previous post with the docker command).
Second, we told Docker Compose that this `web` service `depends_on` the `app` service, so it will first wait for the `app` service to start and then run the Nginx process (the PHP FPM upstream will already be up and running, and Nginx will connect immediately with no error). We'll see this `depends_on` feature again when we connect the database and Redis services next.
The last thing we did is add a volume with an Nginx configuration, in order to tell Nginx where the upstream PHP process is (the `app` service in the compose file, on port 9000). You can see that in the content here:
```nginx
server {
    listen 80;
    index index.php index.html;
    root /var/www;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_max_temp_file_size 0;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        client_max_body_size 24M;
        client_body_buffer_size 128k;
    }
}
```
These are some common settings for configuring Nginx with FPM that I've used before; they worked fine with Docker and Kubernetes, so your mileage (and your perfectionism) may vary.
The same files that you added to the app service need to be copied/used with the web service. This way Nginx will know what it should do, serve a file or give it to PHP FPM to process.
And this is how our `.docker/web/Dockerfile` will look:
```dockerfile
FROM nginx:1.23-alpine
COPY --chown=nginx:nginx ./src /var/www/
WORKDIR /var/www/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
Nothing special; it just copies the files into the `/var/www` folder and runs the Nginx process in the CMD directive. As mentioned before, we need the same folder structure in both `app` and `web`, so that Nginx knows what to send to PHP FPM for processing and what to serve directly.
So let's check if this works (fingers crossed): run `docker compose up --build` once and see if everything comes up properly. After this, you should be able to open http://localhost:8080 in your browser and see our Hello World message from index.php.
Note: the `fastcgi_pass app:9000;` part in the Nginx site config accepts a named address `app` with port `9000`, which means `app` is basically an address created in our compose file by the `app:` section line. This address is available only internally, within the scope of the current docker-compose.yml file, but we could reach it even from another docker-compose.yml file if we wanted to, with some trickery.
Postgres (db) service
The database service will be relatively simple, nothing too fancy here; we're just going to use a simple `db` section:
```yaml
db:
  image: postgres:14
  environment:
    POSTGRES_PASSWORD: mypassword
    POSTGRES_DB: app
    POSTGRES_USER: app_user
```
You'll notice that here we don't have the `build` subsection; instead we directly use an image from the official Postgres Docker Hub page. If we wanted to build on top of the image, we would create a Dockerfile in the `.docker/postgresql` folder, install a few things in our own image (maybe a monitoring or alerting agent baked directly into the image, or nano as an editor, as I do xD ), and then add the `build` section as in the `app` and `web` service sections.
Another thing you'll notice is that we pass environment variables in the `environment` section. These are wired up by the official image makers and are used by the container to configure things or to turn features on and off. Our own Dockerfiles can be made to accept environment variables too, not just ready-made images.
In our case we tell the Postgres container to create a database called app (via the POSTGRES_DB environment variable, and similarly we set the password and user for the database). So when the db service starts, it injects these environment variables and creates this database. Other images might do something else with their variables, like enforce HTTPS or set some default URL, and so on.
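The same mechanism works the other way, too: you can hand the connection details to the app service via `environment`, under whatever variable names your PHP code reads (the names below are hypothetical):

```yaml
app:
  build:
    context: .
    dockerfile: .docker/app/Dockerfile
  environment:
    DB_HOST: db          # the compose service name doubles as the hostname
    DB_PORT: "5432"
    DB_DATABASE: app
    DB_USERNAME: app_user
    DB_PASSWORD: mypassword
```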
Needless to say, you can do anything you want.
Once all this is done, we can add `db` to the `depends_on` section of the `app` service, to make sure the database container is started first. The `depends_on` section in the `app` section will look like this:
```yaml
depends_on:
  - db
```
Note: there's a bit more to making sure the Postgres service is fully started and the database engine is actually running, but I'll take a closer look at healthchecks in some other blog post. Why do we need it fully started? Because we might want to immediately run a migration by default, and the database engine might not be running yet; docker compose doesn't know the internal progress of the startup.
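As a teaser, here's a healthcheck sketch using Postgres' own pg_isready tool (the intervals are arbitrary):

```yaml
db:
  image: postgres:14
  environment:
    POSTGRES_PASSWORD: mypassword
    POSTGRES_DB: app
    POSTGRES_USER: app_user
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U app_user -d app"]
    interval: 5s
    timeout: 3s
    retries: 5

app:
  # ...
  depends_on:
    db:
      condition: service_healthy   # wait for the healthcheck to pass, not just for the container to start
```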
Redis (redis) service
So in the past I've used Redis for Laravel and Queue Workers. Those are super awesome for asynchronous execution of scripts, like for example sending emails in the background.
The Redis service is straightforward; in its simplest configuration it's only 2 lines.
```yaml
redis:
  image: redis
```
That's it. I've usually used this for local development; on staging and production environments we typically used cloud-managed Redis stores (for supposedly easier server management, but our cloud bill was yelling for help xD). Once this is done, same as with the `db` service, we add `redis` to the `depends_on` section of the `app` service, so this is its final look:
```yaml
depends_on:
  - db
  - redis
```
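Putting all the pieces from this post together, the whole docker-compose.yml now looks roughly like this:

```yaml
services:
  app:
    build:
      context: .
      dockerfile: .docker/app/Dockerfile
    volumes:
      - ./src:/var/www/
    working_dir: /var/www
    depends_on:
      - db
      - redis

  web:
    build:
      context: ./
      dockerfile: ./.docker/web/Dockerfile
    working_dir: /var/www
    volumes:
      - ./src:/var/www/
      - ./.docker/web/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8080:80"
    depends_on:
      - app

  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: app
      POSTGRES_USER: app_user

  redis:
    image: redis
```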
NOTE: in the part where I show the structure, I add postgresql, app, web and redis folders, but I don't use all of them, just the app and web configuration. The idea is that if you want to, for example, raise the post size limit (post_max_size) in your php.ini, you can read the official PHP Docker Hub page and see how to make changes in the Docker container at runtime by sharing the ini file as a volume. This is something I'll probably cover in the next blog post, since I want to dive more into team work with Docker and its usage in DevOps practices.
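A sketch of that ini override (the file name php.ini is my own choice; the official php images load extra ini files from /usr/local/etc/php/conf.d/):

```yaml
app:
  build:
    context: .
    dockerfile: .docker/app/Dockerfile
  volumes:
    - ./src:/var/www/
    # extra ini file that PHP picks up on startup
    - ./.docker/app/php.ini:/usr/local/etc/php/conf.d/zz-custom.ini
```

with `./.docker/app/php.ini` containing, e.g., `post_max_size = 24M`.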
Booting everything up
So finally, how do we boot everything up?
We just run, as shown earlier in this post:

```shell
docker compose up
```
All of these services output their logs to stdout, so Docker shows them in the output of this command. Each service's log lines are prefixed with the service name, for example (db-1):

```
db-1  | 2024-07-31 11:31:55.174 UTC [1] LOG:  database system is ready to accept connections
```
This prefix includes a number because we might have a `scale` section in a service that starts multiple containers of it, so there can be a `db-2` or `db-3`.
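As a sketch, scaling can be done per service, either on the command line with `docker compose up --scale app=3` or with a `scale` attribute on the service:

```yaml
app:
  build:
    context: .
    dockerfile: .docker/app/Dockerfile
  # start 3 containers of this service: app-1, app-2, app-3
  scale: 3
```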
Summary
In this blog post we saw how to create a whole environment with docker compose.
Why? Because remembering the `docker run` and `docker build` commands as we wrote them in the first blog post of Dockerizing PHP applications is really tedious and requires A LOT of copy pasting and minding many things. When working with a team, you just want to help your teammates start working asap, not waste time memorizing many commands and their parameters. You'll also cut a lot of overhead and learning curve when a new team member wants to start one of your fancy over-engineered microservices xD.
And when you want to deploy, instead of manually installing PHP FPM and Nginx processes on your server, you can use Docker to pull and build images and deploy your exact app specification and configuration. Need 2 different PHP FPM versions? Easy, just create 2 services in your docker-compose.yml and add a new section in your Nginx default.conf that forwards/proxies requests to the appropriate version.
All of a sudden, no one needs to remember anything; it's all written down! You won't have a single person who has to go from person to person, from environment to environment, to set everyone up for work.
Code from this blog post can be found here: