This is the second post in our Docker + WordPress series, so if you haven't read the first one yet, do that first to catch up.
Before we continue with adding WordPress to the mix, let's revisit our current setup. Because we decided to use Nginx instead of Apache, we had to build two Dockerfiles. This approach encourages the single responsibility principle, but it also brings a few issues to the mix.
The biggest one is having to maintain two separate images with almost identical source code – you need to install WordPress and set the proper volumes and environment variables in both. This can quickly lead to issues that shouldn't be there in the first place. But now that we know the rules of the game, let's consider breaking them. Here are three possible scenarios for using Docker images in our project:
- Two separate images, one running PHP-FPM and the other Nginx
- The same image with both PHP-FPM and Nginx inside, running in separate containers with different startup commands
- One single image and container, running two processes
Like with most software, it comes down to the developers' personal preference, and sometimes it's worth breaking a few rules for convenience (WordPress does it all the time), so I'm going to go with option 3: we will run both processes in one container.
In the previous article, we created two Dockerfiles, so in order to continue, we must now decide which of the two is going to become our main (and only) one: the one based on Nginx, or the one with PHP-FPM? Again, the answer can be "it depends", but for me, the decision comes down to the complexity that installing either of the two brings. Since PHP-FPM is the more complex of the two, I'm going to base our main image on it and let someone more experienced than me worry about it.
So go ahead, delete the Dockerfile.php-fpm we created in our previous tutorial, and also delete all the contents of our main Dockerfile. Next, put in the FROM and MAINTAINER directives:
FROM php:7.0.6-fpm-alpine
MAINTAINER Tomaz Zaman <tomaz@codeable.io>
Installing system-wide packages
Next, add all the necessary system dependencies that the image needs in order to run properly:
RUN apk add --no-cache nginx mysql-client supervisor curl \
    bash redis imagemagick-dev
Let me go over them and explain why each one is needed:
– nginx requires no explanation, we need to serve our website
– mysql-client is needed for WP to connect to our mysql image
– supervisor allows us to run multiple processes (more on that at the end of the article)
– curl to download files from the web
– bash is a widely popular shell (Alpine ships with Almquist shell by default)
– redis will allow WP to connect to our redis image (speed! SPEED!)
– imagemagick-dev comes with all the necessary graphic libraries for our Media
(Notice the backslash in the directive – this is how long Linux commands are split across multiple lines for clarity.)
Installing PHP extensions
With all the necessary programs and libraries in place, it's time to configure PHP or, rather, to install all the PHP extensions that WordPress needs in order to run. Add this directive:
RUN apk add --no-cache libtool build-base autoconf \
  && docker-php-ext-install \
     -j$(grep -c ^processor /proc/cpuinfo 2>/dev/null || echo 1) \
     iconv gd mbstring fileinfo curl xmlreader xmlwriter spl ftp mysqli opcache \
  && pecl install imagick \
  && docker-php-ext-enable imagick \
  && apk del libtool build-base autoconf
Whoa! That's a big one, right? But as you'll learn now, this is a fairly common pattern in building Docker images, called a chained command: you just take multiple commands you'd like to run and join them with double ampersands (&&). This is important because, as we learned in the previous article, the result of each directive in the Dockerfile is cached and merged into the final image, so careless directives can add significant weight (in terms of megabytes) to the final image – to absolutely no benefit.
Case in point: the build-base package takes ~200MB and installs various compilation tools that are only used to build (compile) packages – imagick in our case – but are completely useless once the image has been built. We could, incorrectly, do it like this, and it would still work:
RUN apk add --no-cache libtool build-base autoconf
RUN pecl install imagick
RUN docker-php-ext-enable imagick
RUN apk del libtool build-base autoconf
However, because each directive is cached independently, ~200MB gets added to the final image, and what's worse, because we remove those build packages in the last directive, they become inaccessible/useless inside the container as well – which is why most official Docker images (like PHP) use the chaining pattern.
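If you want to see the difference for yourself, Docker can show how much each cached layer contributes to the final image. A quick sketch – the wordpress-test tag is just an example, use whatever you like:
# Build the image, then inspect the size of every layer it is made of
docker build -t wordpress-test .
docker history wordpress-test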
As you might have noticed, we're also installing a PHP extension that is not mandatory but highly recommended, since it significantly speeds up WordPress without any configuration: opcache. PHP is an interpreted language, which means that every time a visitor requests index.php, PHP-FPM has to load and compile all the required PHP files into code the computer can understand, which takes a significant amount of CPU cycles and memory. What opcache does is save the result of that compilation (called bytecode or opcode) into memory, so that on the next call of the script, PHP can just load that version instead of compiling it again from scratch. No reading from the hard drive and no compilation equals better performance.
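It's easy to double-check that the extension actually made it into the image. A minimal sketch, assuming you build and tag the image at this point (before we add the entrypoint later on) as wordpress-test, like in the previous example:
# List the PHP modules compiled into the image and look for opcache
docker run --rm wordpress-test php -m | grep -i opcache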
Installing WordPress
With all the PHP dependencies in place, it's time to install WordPress, but before that, let's revisit the topic of environment variables. We want updating the Docker image to be as fast and painless as possible, so we're going to set a couple of environment variables that will be available system-wide and will help us shorten some of the subsequent commands, because we can access them in the shell with the dollar sign, similar to how we access them in PHP.
Put these lines into the Dockerfile next:
ENV WP_ROOT /usr/src/wordpress
ENV WP_VERSION 4.5.2
ENV WP_SHA1 bab94003a5d2285f6ae76407e7b1bbb75382c36e
ENV WP_DOWNLOAD_URL https://wordpress.org/wordpress-$WP_VERSION.tar.gz
We're setting up the WordPress root directory, its version, its SHA1 checksum and the download URL. The most important one is of course the version, followed by the checksum, which makes sure that the downloaded file is indeed correct – this matters in case the download server ever gets hacked and WordPress injected with malware: the checksum would no longer match and our command would fail. You can find file checksums on WordPress's download page.
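If you later bump WP_VERSION, you can also compute the new checksum yourself instead of looking it up; a quick sketch using the same download URL pattern as above:
# Download the tarball for a given version and print its SHA1 checksum
WP_VERSION=4.5.2
curl -sSL https://wordpress.org/wordpress-$WP_VERSION.tar.gz | sha1sum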
The WP_ROOT and WP_DOWNLOAD_URL variables are just convenient shortcuts and have no effect on the installation of WordPress itself, but it makes sense to have them all in one place, much like you usually define commonly used variables at the start of a script.
Now it's time to download WordPress, check the checksum and extract it into $WP_ROOT:
RUN curl -o wordpress.tar.gz -SL $WP_DOWNLOAD_URL \
  && echo "$WP_SHA1 *wordpress.tar.gz" | sha1sum -c - \
  && tar -xzf wordpress.tar.gz -C $(dirname $WP_ROOT) \
  && rm wordpress.tar.gz
RUN adduser -D deployer -s /bin/bash -G www-data
The first directive downloads the WordPress tarball, checks that the file has not been tampered with, extracts it into /usr/src and removes the original tarball afterwards, since it won't be needed from this point on – all in one command, for the reasons explained above.
The second directive just adds a custom user in the same group as www-data. This will increase the security of our installation, since the files will be owned by this user rather than by the one our webserver is running under (we will set those permissions in one of the following steps, when everything is in place).
Why is WordPress extracted into /usr/src? In order to answer that, let's look at the important parts of WordPress's filesystem. It's divided into three main sections:
- wp-content is where your custom functionality is
- wp-config.php is the configuration file that's in the WordPress root directory
- WordPress core (everything in the WordPress directory, apart from the previous two)
What the directive above does is extract the WordPress core into a directory, but we still need to consider where it makes the most sense to put wp-content, the only unique part of the install – and the answer is outside the core directory. Why? Because it's the only directory we need locally, on the host, outside the image. This allows us to install and/or develop themes and plugins on the host and persist those files even if the container is shut down.
In order to achieve that, we need two things. First, we need a VOLUME directive in the Dockerfile (if you forgot what it does, revisit the previous article), so add these lines into it:
VOLUME /var/www/wp-content
WORKDIR /var/www/wp-content
Now we only need to let WordPress know to look into /var/www/wp-content when requiring files from the usual wp-content directory, and we can do that with a properly configured wp-config.php.
Configuring wp-config.php
Now it's time to show you some magic – and by magic I mean one of Docker's strengths. Create an empty wp-config.php in your project root directory on the host and put the following contents in it:
<?php
define('WP_CONTENT_DIR', '/var/www/wp-content');
$table_prefix = getenv('TABLE_PREFIX') ?: 'wp_';
foreach ($_ENV as $key => $value) {
  $capitalized = strtoupper($key);
  if (!defined($capitalized)) {
    define($capitalized, $value);
  }
}
if (!defined('ABSPATH'))
  define('ABSPATH', dirname(__FILE__) . '/');
require_once(ABSPATH . 'wp-settings.php');
Sorcery! Witchcraft! No, my friend, just the true power of environment variables. Told you they would be very useful – we can re-use the same configuration whether it's for development, staging or production. One wp-config.php to rule them all! Its primary function is to loop through all the environment variables and define them as PHP constants – apart from WP_CONTENT_DIR, which cannot be changed, since our Dockerfile expects that directory to be /var/www/wp-content.
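To make the mapping concrete: for every variable that ends up in $_ENV, say DB_NAME=wp, the loop behaves as if you had written define('DB_NAME', 'wp'); by hand. You can also poke at the variables from the shell; a small sketch, using the service name from the docker-compose.yml we'll write further down:
# Print a variable exactly the way PHP sees it inside the container
docker-compose run --rm wordpress php -r 'echo getenv("DB_NAME"), PHP_EOL;'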
Note: This approach also implicitly brings another, huge benefit to the security of our WordPress install. No production values get hardcoded, so in case PHP-FPM fails and Nginx mistakenly serves the plain-text version of the file, a potential attacker (or just a random visitor) can't get the credentials to our database or other password-protected areas.
Now, this file on its own isn't very useful (because we still need a place to define our constants), so let's create a new one, in which we will actually define all these values. Name it .env and put it into your project directory with contents similar (or identical, for now) to this:
# All of these are being read by wp-config.php
DISABLE_WP_CRON=true
WP_REDIS_HOST=redis
DB_NAME=wp
DB_USER=wp
DB_PASSWORD=wp
DB_HOST=mysql
TABLE_PREFIX=wp_
WP_SITEURL=http://localhost:8080
WP_DEBUG=true
WP_CACHE_KEY_SALT=my-site-
FS_METHOD=direct
# Don't forget to update these: https://api.wordpress.org/secret-key/1.1/salt/
AUTH_KEY=your_auth_key
SECURE_AUTH_KEY=your_secure_auth_key
LOGGED_IN_KEY=your_logged_in_key
NONCE_KEY=your_nonce_key
AUTH_SALT=your_auth_salt
SECURE_AUTH_SALT=your_secure_auth_salt
LOGGED_IN_SALT=your_logged_in_salt
NONCE_SALT=your_nonce_salt
When we run the image we are building, we will let Docker know to load this file properly, but for now, just leave it there; we have a few other things to put into our Dockerfile before we're done, starting with copying this wp-config.php into the source directory and setting the proper ownership/permissions on it. Add the following two directives:
COPY wp-config.php $WP_ROOT
RUN chown -R deployer:www-data $WP_ROOT \
  && chmod 640 $WP_ROOT/wp-config.php
Configuring WP-CRON
The official solution to WordPress cron jobs is rejected by many developers, and it comes down to two main reasons:
- low-traffic sites won't trigger it at the right time, because it depends on visits
- high-traffic sites won't trigger it at the right time either, because caching often prevents any WordPress file from being called on a visit
To solve this, I've decided to completely disable WP-CRON (note the DISABLE_WP_CRON environment variable above) and use a custom solution that's fairly simple to implement. First, create a file called cron.conf in your project directory and put the following configuration in:
# Cron configuration file, set to run WordPress cron once every minute
* * * * * php /usr/src/wordpress/wp-cron.php
This is the standard cron syntax which triggers WordPress’s cron every minute – feel free to modify it according to your needs.
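For example, if once a minute is more aggressive than you need, a gentler schedule could look like this (purely illustrative – pick whatever interval fits your site):
# Run WordPress cron every 15 minutes instead
*/15 * * * * php /usr/src/wordpress/wp-cron.php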
For it to work, we only need to copy this file into the proper directory and make sure the permissions are correct. On Alpine Linux, which our image is based on, that directory is /etc/crontabs, so put this into the Dockerfile next:
COPY cron.conf /etc/crontabs/deployer
RUN chmod 600 /etc/crontabs/deployer
As you might have noticed, inside the image the cron.conf file gets renamed to deployer, because that's the user we need to run our cron command as, and the cron daemon understands that without any further configuration.
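Once the stack from the docker-compose.yml further down is running, you can double-check that the schedule landed where crond expects it; a small sketch (the cron service name comes from that compose file):
# The per-user crontab should contain exactly the schedule we copied in
docker-compose exec cron cat /etc/crontabs/deployer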
(Optional) Installing wp-cli
If you haven't used it yet, wp-cli is a very useful tool for manipulating WordPress through the command line: you can install and activate plugins, manage object caches, options, posts… you name it.
To have it at our disposal within our image, we only need to add the following directive to our Dockerfile:
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar \
  && chmod +x wp-cli.phar \
  && mv wp-cli.phar /usr/local/bin/wp
Because we don't yet have all the components needed to run this image (MySQL), we will revisit this topic (to test it out) towards the end of this article; for now, let's continue with configuring the last major part of our stack: Nginx.
Nginx configuration
While Nginx was already installed in our very first directive, we still need to configure it to work properly, and that takes two files (create them both in the project directory): nginx.conf, which is the main configuration file, and vhost.conf, which will be our virtual host.
In our previous article, we built upon the official Nginx image, which ships with a pretty good main configuration file, so copy its contents into our nginx.conf and only change the line saying user nginx; to user www-data;, because that's the user our PHP-FPM process is running as, and there's no point in having a different one for Nginx.
This file also has one very useful line (line 31), which includes any configuration files that end in .conf and are located in /etc/nginx/conf.d/. This makes it easy to add any kind of additional configuration, the most useful one being our primary virtual host. So put the following code into vhost.conf:
server {
  server_name _;
  listen 80 default_server;

  root /usr/src/wordpress;
  index index.php index.html;

  access_log /dev/stdout;
  error_log /dev/stdout info;

  location /wp-content {
    root /var/www;
    expires 1M;
    access_log off;
    add_header Cache-Control "public";
  }

  location / {
    try_files $uri $uri/ /index.php?$args;
  }

  location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Optionally set max upload size here
    fastcgi_param PHP_VALUE "upload_max_filesize = 20M \n post_max_size=20M";
  }
}
Now that we have both files in place, we need to copy them over into the image and make sure that the logs are being redirected (symlinked) to the standard output and error streams (stdout and stderr):
COPY nginx.conf /etc/nginx/nginx.conf
COPY vhost.conf /etc/nginx/conf.d/
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
  && ln -sf /dev/stderr /var/log/nginx/error.log \
  && chown -R www-data:www-data /var/lib/nginx
We're also changing the ownership of /var/lib/nginx – which Nginx uses for caching and temporary file upload storage – to www-data, because by default it's owned by the nginx user, and we're not using that user.
(Optional) Setting an entrypoint
If you followed the previous article, you might have noticed one annoyance: when you ran $ docker-compose up and visited the site in your browser, you might have gotten a database connection error which was gone after a refresh.
This happens because docker-compose starts all the containers in a somewhat random order, so the Nginx container was up and ready before MySQL, resulting in a (short-lived) error. In the PHP world this is a mere annoyance, but some applications (like Ruby on Rails) won't even start properly if they can't connect to the database during boot.
So the problem we're facing is that not all conditions are met before the main container process starts. Luckily, Docker comes with a built-in solution for this, and it's called the ENTRYPOINT directive.
The entrypoint is, simply put, a command that is run before our main command (the CMD directive), and many image authors leverage its potential by writing custom shell scripts that perform various tasks, like checking whether the database is up, whether the proper environment variables are set, or other conditions.
For the purpose of learning, let's write a script that checks whether our MySQL container is accepting connections and delays the execution of our main command if it isn't.
Make a new file in the project root, call it docker-entrypoint.sh (this is the most commonly used name), put the following code in, and don't forget to make it executable with $ chmod +x docker-entrypoint.sh.
#!/bin/bash
set -e # terminate on errors

function test_mysql {
  mysqladmin -h "${DB_HOST}" ping
}

until (test_mysql); do
  >&2 echo "MySQL unavailable - sleeping."
  sleep 3
done

>&2 echo "MySQL is up - executing command."
exec "$@"
Even if it's not PHP, you shouldn't have any problems reading this: we define a function that tests the MySQL connection, then loop over it in 3-second intervals until it succeeds. When it does, we execute the main container command.
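If you're curious how the loop behaves, you can exercise the script directly on your host before baking it into the image; a small sketch, assuming mysqladmin is installed locally:
# With no MySQL answering at DB_HOST this prints "MySQL unavailable - sleeping."
# every 3 seconds; once the server responds, it runs the command you passed in
DB_HOST=127.0.0.1 ./docker-entrypoint.sh echo "handing over to the main command"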
If you look closely at the last line (exec "$@"), you'll notice we are executing the remaining arguments (which is what $@ stands for). So what's the first argument then? It's the entrypoint! Broadly speaking, if the ENTRYPOINT directive is present, Docker just prepends it to the CMD and runs both as one single command.
Let's look at two different examples:
– ENTRYPOINT [ "docker-entrypoint.sh" ] and CMD [ "nginx" ] will result in $ docker-entrypoint.sh nginx
– ENTRYPOINT [ "ls" ] and CMD [ "-la" ] will result in $ ls -la
This means that CMD doesn't need to be a command at all; it can just be a list of arguments passed to the entrypoint. The more you dig into Docker, the more powerful you'll find this approach.
Now that we have a basic understanding of how the entrypoint works, add this to the Dockerfile to copy the script into the image and set it as the entrypoint:
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT [ "docker-entrypoint.sh" ]
Bringing it together with Supervisor
Like I said at the beginning of the article, Supervisor is an integral part of our setup, because we're breaking the single responsibility principle on purpose. Its official description says it's a process control system, meaning it's a master process that takes care of child processes – PHP-FPM and Nginx in our case. Using it allows us to run one single command (starting Supervisor) and let it handle the other processes through a configuration file.
Create that file in the project root directory, name it supervisord.conf and put the following code in:
[supervisord]
nodaemon=true
loglevel=debug
logfile=/var/log/supervisor/supervisord.log
pidfile=/var/run/supervisord.pid
childlogdir=/var/log/supervisor

[program:nginx]
command=nginx -g "daemon off;"
redirect_stderr=true
autorestart=false
startretries=0

[program:php-fpm]
command=php-fpm
redirect_stderr=true
autorestart=false
startretries=0

[eventlistener:processes]
command=stop-supervisor.sh
events=PROCESS_STATE_STOPPED,PROCESS_STATE_EXITED,PROCESS_STATE_FATAL
This is the standard Supervisor configuration syntax, so check the official documentation if you want to learn how it works in detail. In a nutshell, we define two programs that should be supervised (like [program:nginx]) and an event listener ([eventlistener:processes]), which stops Supervisor itself should any of the child processes fail with one of the events listed in all caps.
This is, in my opinion, the proper approach, because by default Supervisor will attempt to restart a failed process. On a normal server or virtual machine that is desired behavior, but on Docker it is not – remember, Docker containers usually run under some management software, such as Docker Swarm or Kubernetes, and we want that to be the master container manager, not Supervisor.
So create that stop-supervisor.sh script in the project directory, make it executable ($ chmod +x stop-supervisor.sh) and put the following code in:
#!/bin/bash
printf "READY\n";

while read line; do
  echo "Incoming Supervisor event: $line" >&2;
  kill -3 $(cat "/var/run/supervisord.pid")
done < /dev/stdin
This is basically an infinite loop that waits for Supervisor to write to its stdin, and when it does, it stops Supervisor via the kill command. This, in turn, also stops the container, letting the higher-level software know that the container is no longer running.
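You can watch the listener do its job once the whole stack (described below) is running; a rough sketch, assuming pkill is available in the container, which Alpine's busybox provides:
# Kill PHP-FPM inside the wordpress container; the event listener should
# shut Supervisor down and the container should exit shortly afterwards
docker-compose exec wordpress pkill php-fpm
docker-compose ps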
Next, open the Dockerfile and paste these final lines in:
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisord.conf
COPY stop-supervisor.sh /usr/local/bin/
CMD [ "/usr/bin/supervisord", "-c", "/etc/supervisord.conf" ]
When you run the container, the default command runs Supervisor, which in turn starts both needed services (PHP-FPM and Nginx).
Trying it out
Since we already know how to run our container with plain docker commands, let's skip that and create a docker-compose.yml that will make our lives a bit easier. Put this in:
version: '2'

services:
  wordpress:
    build: .
    volumes:
      - ./wp-content:/var/www/wp-content
    env_file: .env
    ports:
      - "8080:80"
    links:
      - mysql
      - redis

  cron:
    build: .
    command: crond -f -l 6 -L /dev/stdout
    volumes:
      - ./wp-content:/var/www/wp-content
    env_file: .env
    links:
      - mysql
      - redis

  mysql:
    image: mariadb:5.5
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: wp
      MYSQL_DATABASE: wp
      MYSQL_USER: wp
      MYSQL_PASSWORD: wp

  redis:
    image: redis:3.2.0-alpine
    command: redis-server --appendonly yes
    volumes:
      - ./redis:/data
As you might have noticed, we're mapping a different host directory into each container, so that once we shut them down, the data persists. So create these two directories in your project root: redis and mysql. Also make sure to copy a wp-content directory into the project root – either the default one from a fresh install or the one from an existing site – without the WordPress core, of course; that's provided by the image.
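A minimal sketch of that preparation, assuming you want the stock wp-content from the same WordPress version the image uses:
# Create the data directories MySQL and Redis will persist into
mkdir -p mysql redis
# Extract only wp-content from the official tarball into ./wp-content
curl -sSL https://wordpress.org/wordpress-4.5.2.tar.gz | tar -xzf - --strip-components=1 wordpress/wp-content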
Now build and run the image:
$ docker-compose build
# After some time...
$ docker-compose up
You should now see the logs from four different containers: wordpress, mysql, redis and cron. The latter also uses our primary image, just with a different command – one that runs the cron daemon in the foreground and outputs its logs to the screen (stdout) rather than into a log file.
We could indeed add cron (or any other process, for that matter) into our main image, but then we'd end up with the same problem Docker is trying to solve: a single container having too many responsibilities. We're already pushing it by having Nginx and PHP-FPM in the same one, so let's keep it at that.
And there's one more reason, which we will discuss at length in the next article: scalability. The configuration we currently have allows us to scale horizontally and start as many wordpress containers as we want (literally – as long as there's underlying hardware to support them and money to pay for it), connecting them to a load balancer in front.
As promised earlier, we can now also test wp-cli. Run these commands to install a plugin:
– $ docker-compose run wordpress /bin/bash to connect to the container's shell
– $ su deployer to switch to our deployer user
– $ cd /usr/src/wordpress to switch to the directory where WP is
– $ wp plugin install redis-cache to install a very useful plugin
– $ wp plugin activate redis-cache to activate it
– $ exit – enter this twice to exit the container completely
Now just make sure these two lines are present in your .env file and you're all set:
WP_REDIS_HOST=redis
WP_CACHE_KEY_SALT=my-redis-salt-
Just run $ docker-compose up again, log into your WP install, go to Settings -> Redis, turn on the object cache and enjoy your faster WordPress!
Bonus and conclusion
While all this may seem a bit overwhelming at first, it brings incredible benefits in the long run. Once you build this image, you can effectively use it on any number of WordPress websites, locally or in production – and I've done just that: I pushed this image to the official Docker repository for all of you to use.
To dockerize one of your existing websites that you have in development, follow these steps:
1) Copy docker-compose.yml into your WordPress root directory.
2) Modify the docker-compose.yml and change the lines saying build: . to image: codeable/wordpress:4.5.2 (so Docker won't look for a Dockerfile but rather download a prepared image).
3) Create an .env file and put in the values you have in your existing wp-config.php.
4) (Optionally) Delete wp-admin, wp-includes and all the .php files in the project directory. Remember, the WordPress core is provided by the image!
5) (Optionally) Export the database you've used until now, put the dump into wp-content, log into the container ($ docker-compose run wordpress /bin/bash), cd to /var/www/wp-content where the SQL dump should be, and import it into our mysql container – see the sketch after this list.
6) Run $ docker-compose up
7) Rejoice!
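Step 5 in shell form – a rough sketch, where dump.sql is just a placeholder for whatever your export is called and the credentials come from the example .env above:
# On the host: open a shell in the wordpress container (linked services get started too)
docker-compose run wordpress /bin/bash
# Inside the container: import the dump that lives in the mounted wp-content directory
cd /var/www/wp-content
mysql -h mysql -u wp -pwp wp < dump.sql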
These same steps apply to all your WordPress projects, and as you can see, it takes less than a minute to convert one to a Docker-friendly version. What's even better, you no longer need to deal with installing all the dependencies locally – bye bye, long hours wasted on installing Nginx, PHP and MySQL on your development machine. Take an image, run it, done.
In the next article, we're going to take the final and arguably most important step: deploying this WordPress image to production. Since there's a big event I'm speaking at coming up this month (WCEU, ahoy!), it may take a while, but I won't forget about you, loyal reader.