If you have been following our Magento 2 Docker architecture series, you are almost ready to configure a high-level architecture for your e-commerce store. Although all my previous blogs in this series have used Magento 2, these Docker architectures can be used for other frameworks as well. So before exploring today's architecture, let us review our journey so far.
In our previous blogs, we have covered,
- Magento 2 setup on the single Docker container architecture using Dockerfile.
- Magento 2 setup on the multi-container Docker architecture, utilising the Docker container linking concept with the help of the Docker-Compose tool.
- Magento 2 setup and its integration with Varnish-Cache, utilising the Docker container linking concept with the help of the Docker-Compose tool.
- A very brief introduction to Docker networks and Docker container linking in the default bridge network.
All the above-mentioned architectures can be used as per your server setup requirements. But there are a few things missing from them that might not be needed in the beginning, yet become essential as our e-commerce store grows.
We have seen that our previous architectures lack SSL and have no mechanism to take database backups on a scheduled basis. Although we secured our application code by keeping it on the host, the database is just as important as the server code.
So in this blog, in addition to the previous setups, we will configure SSL for our Magento 2 store and create a bash script that takes regular database backups on our Docker host. Also, as discussed in our last blog, we will add an extra layer of optimisation by integrating Redis server as well.
Introduction to Redis
While Varnish-Cache works as an HTTP accelerator, Redis, among its various features, can be used as a database cache. Quoting from the docs itself,
Redis is an open source, BSD licensed, advanced key-value store that can optionally be used in Magento for back end and session storage. It can be used to cache the database resulting in exploitation of less database resources, and provides a tunable persistent cache. It is a good alternative to memcacheD.
When a page is loaded for the first time, the database is queried on the server and Redis caches the query. The next time another user loads the page, the results are served from Redis without querying the actual database. Redis implements a persistent object cache (no expiration): it keeps in memory the results of the SQL queries needed to load the web page. When the data in the main database is updated, the corresponding key in Redis is invalidated, so updated data is served instead of stale cached data. If a query is not available in Redis, the database provides the result and Redis adds it to its cache.
Magento supports many backend caches like MemcacheD and APC that are commonly used. However, Redis has become a popular and powerful cache system for Magento and other web applications.
Need For Nginx as SSL Termination
Those of you who are not very familiar with the nginx-varnish relationship might wonder why we are going to use Nginx for SSL instead of configuring SSL on the apache2 server itself. Note that Varnish, being a reverse-proxy caching server, sits in front of the apache2 server, and since Varnish is an HTTP accelerator it cannot deal with HTTPS traffic. So we must deploy a way to direct both HTTP and HTTPS traffic to the Varnish cache server, which in turn, if needed, forwards it to the apache2 server.
This is where Nginx comes into action. Nginx serves as a reverse proxy that receives traffic on ports 80 and 443 and proxy-passes it to the listening port of the Varnish Cache server.
So, continuing our multi-container Docker architecture, we will be using separate containers for the apache2 server, mysql-server, varnish-cache server, redis-server and nginx server (for SSL termination), integrated with Magento 2 on Ubuntu 16.04.
The main directory holding all the files/directories will be considered the project name. A custom project name can also be set in the docker-compose.yml file. Here we are creating separate directories for the apache2, mysql-server, redis-server, varnish cache server and nginx server setups, each holding its Dockerfile and associated volumes.
Note that, following the same approach as before, the Magento 2 files will be placed on our host. It is good practice to keep application files on the host so that they are not lost if containers or images get accidentally removed. These Magento files will be mapped from the host into the running Docker container.
We are also using supervisor to control all five of our servers in their respective containers. Apart from controlling these servers, supervisor also runs various commands and scripts that will be mentioned later in this blog.
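Since supervisord is the main process in every container, you can always ask it what it is managing. A quick check once the containers are up (using the container names we define below in docker-compose.yml, apache2 for example):

# List the programs supervisor is controlling inside the apache2 container
docker exec -it apache2 supervisorctl status

# Follow the output of a particular program if something looks wrong
docker exec -it apache2 supervisorctl tail -f apache2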
To begin with, create a directory on your Ubuntu 16.04 server for this project. Our directory structure will look something like this:
- docker-compose.yml
- web_server
  - Dockerfile
  - supervisord.conf
- database_server
  - Dockerfile
  - mysql.sh
  - supervisord.conf
- cache_server
  - Dockerfile
  - default.vcl
  - supervisord.conf
  - varnish
- redis_server
  - Dockerfile
  - supervisord.conf
- ssl_server
  - Dockerfile
  - default
  - nginx.conf
  - supervisord.conf
- magento2
  - Unarchived Magento 2 files and directories.
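If you would like to create this skeleton in one go, here is a minimal sketch (run it from the directory that will become the project root; the file contents are filled in throughout the rest of this blog):

mkdir -p web_server database_server cache_server redis_server ssl_server magento2
touch docker-compose.yml \
      web_server/{Dockerfile,supervisord.conf} \
      database_server/{Dockerfile,mysql.sh,supervisord.conf} \
      cache_server/{Dockerfile,default.vcl,varnish,supervisord.conf} \
      redis_server/{Dockerfile,supervisord.conf} \
      ssl_server/{Dockerfile,default,nginx.conf,supervisord.conf}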
The docker-compose.yml file is shown below:
version: '3'
services:
  ssl_server:
    build:
      context: ./ssl_server/
    container_name: nginx
    depends_on:
      - web_server
      - cache_server
      - database_server
      - redis_server
    volumes:
      - ./ssl_server/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
      - ./ssl_server/default:/etc/nginx/sites-enabled/default
      - ./ssl_server/nginx.conf:/etc/nginx/nginx.conf
    links:
      - web_server
      - cache_server
      - database_server
      - redis_server
    ports:
      - "80:80"
      - "443:443"
  redis_server:
    build:
      context: ./redis_server/
    container_name: redis
    depends_on:
      - web_server
      - cache_server
      - database_server
    volumes:
      - ./redis_server/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
    links:
      - web_server
      - database_server
    ports:
      - "6379:6379"
  cache_server:
    build:
      context: ./cache_server/
    container_name: varnish
    depends_on:
      - web_server
    volumes:
      - ./cache_server/default.vcl:/etc/varnish/default.vcl
      - ./cache_server/varnish:/etc/default/varnish
      - ./cache_server/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
    ports:
      - "6081:6081"
      - "6082:6082"
    links:
      - web_server
      - database_server
  web_server:
    build:
      context: ./web_server/
    container_name: apache2
    volumes:
      - ./magento2:/var/www/html
      - ./web_server/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
    ports:
      - "8080:8080"
    links:
      - database_server
  database_server:
    build:
      context: ./database_server/
      args:
        - mysql_password=mention_your_mysql_root_password
        - mysql_database=mention_your_database_name
    container_name: mysql
    volumes:
      - ./database_server/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
      - ./database_server/mysql.sh:/etc/mysql.sh
    ports:
      - "3306:3306"
In the YAML file above, we are defining five services: ssl_server, redis_server, cache_server, web_server and database_server.
- The ssl_server service holds the Nginx configuration. Its container name is nginx, it is linked to all the other services, and ports 80 and 443 are allocated to it. Three files are mapped from the host to the Docker container, and "context" points to its Dockerfile, which installs nginx version 1.10.
- The redis_server service holds the Redis server configuration. Its container name is redis, it is linked to web_server, and port 6379 is allocated to it. One file is mapped from the host to the container, and "context" points to its Dockerfile, which installs redis-server.
- The cache_server service holds our Varnish cache server configuration. Its container name is varnish, it is linked to web_server, and ports 6081 and 6082 are allocated to it. Three files are mapped from the host to the Docker container, and "context" points to its Dockerfile, which installs Varnish version 4.1.
- The web_server service holds our apache server configuration. Its container name is apache2, it is linked to database_server, and port 8080 is allocated to it. The Magento code directory and the supervisord.conf file are mapped from the host to the Docker container, and "context" under "build" points to the location of its Dockerfile.
- Finally, the database_server service holds the mysql-server configuration. Its container name is mysql and port 3306 is allocated to it. The MySQL root password and database name are passed as build arguments, and two volumes/files are mapped from the host to the Docker container. As with web_server, "context" points to the location of its Dockerfile, which installs mysql-server-5.7.
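Before building anything, it is worth sanity-checking the compose file. A quick check from the project root:

docker-compose config

If the indentation or a key is wrong, this prints the error; otherwise it prints back the fully resolved configuration.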
Let's take a look at our web_server directory. It contains two files. The Dockerfile is shown below:
FROM ubuntu:16.04

RUN apt-get update \
 && apt-get -y install apache2 nano mysql-client \
 && a2enmod rewrite \
 && a2enmod headers \
 && export LANG=en_US.UTF-8 \
 && apt-get update \
 && apt-get install -y software-properties-common \
 && apt-get install -y language-pack-en-base \
 && LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php \
 && apt-get update \
 && apt-get -y install php7.1 php7.1-curl php7.1-intl php7.1-gd php7.1-dom php7.1-mcrypt php7.1-iconv php7.1-xsl php7.1-mbstring php7.1-ctype php7.1-zip php7.1-pdo php7.1-xml php7.1-bz2 php7.1-calendar php7.1-exif php7.1-fileinfo php7.1-json php7.1-mysqli php7.1-mysql php7.1-posix php7.1-tokenizer php7.1-xmlwriter php7.1-xmlreader php7.1-phar php7.1-soap php7.1-mysql php7.1-fpm php7.1-bcmath libapache2-mod-php7.1 \
 && sed -i -e"s/^memory_limit\s*=\s*128M/memory_limit = 512M/" /etc/php/7.1/apache2/php.ini \
 && rm /var/www/html/* \
 && sed -i "s/None/all/g" /etc/apache2/apache2.conf \
 && sed -i "s/80/8080/g" /etc/apache2/ports.conf /etc/apache2/sites-enabled/000-default.conf \
 ##install supervisor and setup supervisord.conf file
 && apt-get install -y supervisor \
 && mkdir -p /var/log/supervisor

env APACHE_RUN_USER www-data
env APACHE_RUN_GROUP www-data
env APACHE_PID_FILE /var/run/apache2.pid
env APACHE_RUN_DIR /var/run/apache2
env APACHE_LOCK_DIR /var/lock/apache2
env APACHE_LOG_DIR /var/log/apache2
env LANG C

WORKDIR /var/www/html

CMD ["/usr/bin/supervisord"]
And at last we have the supervisord.conf file that supervisor uses to run the apache2 server and the ownership commands. Its contents are shown below:
[supervisord]
nodaemon=true

[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"

[program:user_permission]
command=/bin/bash -c "chown -R www-data: /var/www/"
Moving on to our database_server directory, it contains three files. The Dockerfile is shown below:
FROM ubuntu:16.04

ARG mysql_password
ARG mysql_database

env MYSQL_ROOT_PASSWORD ${mysql_password}
env MYSQL_DATABASE ${mysql_database}

RUN apt-get update \
 && echo "mysql-server-5.7 mysql-server/root_password password ${mysql_password}" | debconf-set-selections \
 && echo "mysql-server-5.7 mysql-server/root_password_again password ${mysql_password}" | debconf-set-selections \
 && DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server-5.7 && \
    mkdir -p /var/lib/mysql && \
    mkdir -p /var/run/mysqld && \
    mkdir -p /var/log/mysql && \
    touch /var/run/mysqld/mysqld.sock && \
    touch /var/run/mysqld/mysqld.pid && \
    chown -R mysql:mysql /var/lib/mysql && \
    chown -R mysql:mysql /var/run/mysqld && \
    chown -R mysql:mysql /var/log/mysql && \
    chmod -R 777 /var/run/mysqld/ \
 && sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/mysql.conf.d/mysqld.cnf \
 ##install supervisor and setup supervisord.conf file
 && apt-get install -y supervisor nano \
 && mkdir -p /var/log/supervisor

CMD ["/usr/bin/supervisord"]
As we have discussed many times, we cannot perform any operation from a Dockerfile that requires a particular service to be running. In the case of the database, we cannot create a database from the Dockerfile because the mysql service is not running at build time. For the database and user creation, we will create a bash script that runs whenever the container launches, creating the mentioned database and its user.
We are using "mysql.sh" as this bash script; it is mapped into the container as /etc/mysql.sh and launched by supervisor. The bash script "mysql.sh" resides on our host alongside the Dockerfile.
The contents of mysql.sh are shown below:
#!/bin/bash
set -u

sleep 4
database_connectivity_check=no
var=1

while [ "$database_connectivity_check" != "mysql" ];
do
    /etc/init.d/mysql start
    sleep 2
    database_connectivity_check=`mysqlshow --user=root --password=$MYSQL_ROOT_PASSWORD | grep -o mysql`
    if [ $var -ge 4 ]; then
        exit 1
    fi
    var=$((var+1))
done

database_availability_check=`mysqlshow --user=root --password=$MYSQL_ROOT_PASSWORD | grep -ow "$MYSQL_DATABASE"`

if [ "$database_availability_check" == "$MYSQL_DATABASE" ]; then
    exit 1
else
    mysql -u root -p$MYSQL_ROOT_PASSWORD -e "grant all on *.* to 'root'@'%' identified by '$MYSQL_ROOT_PASSWORD';"
    mysql -u root -p$MYSQL_ROOT_PASSWORD -e "create database $MYSQL_DATABASE;"
    mysql -u root -p$MYSQL_ROOT_PASSWORD -e "grant all on $MYSQL_DATABASE.* to 'root'@'%' identified by '$MYSQL_ROOT_PASSWORD';"
    supervisorctl stop database_creation && supervisorctl remove database_creation
    echo "Database $MYSQL_DATABASE created"
fi
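Once the containers are up (we will start them later in this blog), you can confirm that the script did its job. A quick check, assuming the container name mysql and the credentials you passed as build arguments:

# List the databases through the mysql container (enter the root password when prompted)
docker exec -it mysql mysql -uroot -p -e "SHOW DATABASES;"

# The database_creation program should have removed itself from supervisor after the first run
docker exec -it mysql supervisorctl status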
Apart from mysql.sh and the Dockerfile, we are also mapping a supervisord.conf file. Its contents are shown below:
[supervisord]
nodaemon=true

[program:mysql]
command=/bin/bash -c "touch /var/run/mysqld/mysqld.sock;touch /var/run/mysqld/mysqld.pid;chown -R mysql:mysql /var/lib/mysql;chown -R mysql:mysql /var/run/mysqld;chown -R mysql:mysql /var/log/mysql;chmod -R 777 /var/run/mysqld/;/etc/init.d/mysql restart"

[program:database_creation]
command=/bin/bash -c "chmod a+x /etc/mysql.sh; /etc/mysql.sh"
Moving on and taking a look at our cache_server directory, it contains four files. The Dockerfile contents are shown below:
From ubuntu:16.04
MAINTAINER Alankrit Srivastava alankrit.srivastava256@webkul.com

##update server
RUN apt-get update \
 ##install supervisor and setup supervisord.conf file
 && apt-get install -y supervisor \
 && mkdir -p /var/log/supervisor \
 ##install varnish
 && apt-get -y install varnish \
 && rm /etc/varnish/default.vcl \
 && rm /etc/default/varnish

EXPOSE 6082 6081

CMD ["/usr/bin/supervisord"]
We also have the server configuration file named varnish, whose contents are shown below.
# Configuration file for varnish
#
# /etc/init.d/varnish expects the variables $DAEMON_OPTS, $NFILES and $MEMLOCK
# to be set from this shell script fragment.
#
# Note: If systemd is installed, this file is obsolete and ignored. You will
# need to copy /lib/systemd/system/varnish.service to /etc/systemd/system/ and
# edit that file.

# Should we start varnishd at boot? Set to "no" to disable.
START=yes

# Maximum number of open files (for ulimit -n)
NFILES=131072

# Maximum locked memory size (for ulimit -l)
# Used for locking the shared memory log in memory. If you increase log size,
# you need to increase this number as well
MEMLOCK=82000

# Default varnish instance name is the local nodename. Can be overridden with
# the -n switch, to have more instances on a single server.
# You may need to uncomment this variable for alternatives 1 and 3 below.
# INSTANCE=$(uname -n)

# This file contains 4 alternatives, please use only one.

## Alternative 1, Minimal configuration, no VCL
#
# Listen on port 6081, administration on localhost:6082, and forward to
# content server on localhost:8080. Use a 1GB fixed-size cache file.
#
# This example uses the INSTANCE variable above, which you need to uncomment.
#
# DAEMON_OPTS="-a :6081 \
#              -T localhost:6082 \
#              -b localhost:8080 \
#              -u varnish -g varnish \
#              -S /etc/varnish/secret \
#              -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"


## Alternative 2, Configuration with VCL
#
# Listen on port 6081, administration on localhost:6082, and forward to
# one content server selected by the vcl file, based on the request.
#
DAEMON_OPTS="-a :6081 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"


## Alternative 3, Advanced configuration
#
# This example uses the INSTANCE variable above, which you need to uncomment.
#
# See varnishd(1) for more information.
#
# # Main configuration file. You probably want to change it :)
# VARNISH_VCL_CONF=/etc/varnish/default.vcl
#
# # Default address and port to bind to
# # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
# # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
# VARNISH_LISTEN_ADDRESS=
# VARNISH_LISTEN_PORT=6081
#
# # Telnet admin interface listen address and port
# VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
# VARNISH_ADMIN_LISTEN_PORT=6082
#
# # The minimum number of worker threads to start
# VARNISH_MIN_THREADS=1
#
# # The Maximum number of worker threads to start
# VARNISH_MAX_THREADS=1000
#
# # Idle timeout for worker threads
# VARNISH_THREAD_TIMEOUT=120
#
# # Cache file location
# VARNISH_STORAGE_FILE=/var/lib/varnish/$INSTANCE/varnish_storage.bin
#
# # Cache file size: in bytes, optionally using k / M / G / T suffix,
# # or in percentage of available disk space using the % suffix.
# VARNISH_STORAGE_SIZE=1G
#
# # File containing administration secret
# VARNISH_SECRET_FILE=/etc/varnish/secret
#
# # Backend storage specification
# VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
#
# # Default TTL used when the backend does not specify one
# VARNISH_TTL=120
#
# # DAEMON_OPTS is used by the init script. If you add or remove options, make
# # sure you update this section, too.
# DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
#              -f ${VARNISH_VCL_CONF} \
#              -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
#              -t ${VARNISH_TTL} \
#              -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
#              -S ${VARNISH_SECRET_FILE} \
#              -s ${VARNISH_STORAGE}"
#

## Alternative 4, Do It Yourself
#
# DAEMON_OPTS=""
And most importantly, our Varnish Configuration Language file. This VCL file is provided by Magento 2 itself and works perfectly. Its contents are shown below.
vcl 4.0;

import std;
# The minimal Varnish version is 5.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'

backend default {
    .host = "apache2";
    .port = "8080";
    .first_byte_timeout = 600s;
}

acl purge {
    "localhost";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ purge) {
            return (synth(405, "Method not allowed"));
        }
        # To use the X-Pool header for purging varnish during automated deployments, make sure the X-Pool header
        # has been added to the response in your backend server config. This is used, for example, by the
        # capistrano-magento2 gem for purging old content from varnish during it's deploy routine.
        if (!req.http.X-Magento-Tags-Pattern && !req.http.X-Pool) {
            return (synth(400, "X-Magento-Tags-Pattern or X-Pool header required"));
        }
        if (req.http.X-Magento-Tags-Pattern) {
            ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
        }
        if (req.http.X-Pool) {
            ban("obj.http.X-Pool ~ " + req.http.X-Pool);
        }
        return (synth(200, "Purged"));
    }

    if (req.method != "GET" &&
        req.method != "HEAD" &&
        req.method != "PUT" &&
        req.method != "POST" &&
        req.method != "TRACE" &&
        req.method != "OPTIONS" &&
        req.method != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }

    # We only deal with GET and HEAD by default
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }

    # Bypass shopping cart, checkout and search requests
    if (req.url ~ "/checkout" || req.url ~ "/catalogsearch") {
        return (pass);
    }

    # Bypass health check requests
    if (req.url ~ "/pub/health_check.php") {
        return (pass);
    }

    # Set initial grace period usage status
    set req.http.grace = "none";

    # normalize url in case of leading HTTP scheme and domain
    set req.url = regsub(req.url, "^http[s]?://", "");

    # collect all cookies
    std.collect(req.http.Cookie);

    # Compression filter. See https://www.varnish-cache.org/trac/wiki/FAQ/Compression
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
            # No point in compressing these
            unset req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithm
            unset req.http.Accept-Encoding;
        }
    }

    # Remove Google gclid parameters to minimize the cache objects
    set req.url = regsuball(req.url,"\?gclid=[^&]+$",""); # strips when QS = "?gclid=AAA"
    set req.url = regsuball(req.url,"\?gclid=[^&]+&","?"); # strips when QS = "?gclid=AAA&foo=bar"
    set req.url = regsuball(req.url,"&gclid=[^&]+",""); # strips when QS = "?foo=bar&gclid=AAA" or QS = "?foo=bar&gclid=AAA&bar=baz"

    # Static files caching
    if (req.url ~ "^/(pub/)?(media|static)/") {
        # Static files should not be cached by default
        return (pass);

        # But if you use a few locales and don't use CDN you can enable caching static files by commenting previous line (#return (pass);) and uncommenting next 3 lines
        #unset req.http.Https;
        #unset req.http.X-Forwarded-Proto;
        #unset req.http.Cookie;
    }

    return (hash);
}

sub vcl_hash {
    if (req.http.cookie ~ "X-Magento-Vary=") {
        hash_data(regsub(req.http.cookie, "^.*?X-Magento-Vary=([^;]+);*.*$", "\1"));
    }

    # For multi site configurations to not cache each other's content
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }

    # To make sure http users don't see ssl warning
    if (req.http.X-Forwarded-Proto) {
        hash_data(req.http.X-Forwarded-Proto);
    }
}

sub vcl_backend_response {
    set beresp.grace = 3d;

    if (beresp.http.content-type ~ "text") {
        set beresp.do_esi = true;
    }

    if (bereq.url ~ "\.js$" || beresp.http.content-type ~ "text") {
        set beresp.do_gzip = true;
    }

    # cache only successfully responses and 404s
    if (beresp.status != 200 && beresp.status != 404) {
        set beresp.ttl = 0s;
        set beresp.uncacheable = true;
        return (deliver);
    } elsif (beresp.http.Cache-Control ~ "private") {
        set beresp.uncacheable = true;
        set beresp.ttl = 86400s;
        return (deliver);
    }

    if (beresp.http.X-Magento-Debug) {
        set beresp.http.X-Magento-Cache-Control = beresp.http.Cache-Control;
    }

    # validate if we need to cache it and prevent from setting cookie
    if (beresp.ttl > 0s && (bereq.method == "GET" || bereq.method == "HEAD")) {
        unset beresp.http.set-cookie;
    }

    # If page is not cacheable then bypass varnish for 2 minutes as Hit-For-Pass
    if (beresp.ttl <= 0s ||
        beresp.http.Surrogate-control ~ "no-store" ||
        (!beresp.http.Surrogate-Control && beresp.http.Vary == "*")) {
        # Mark as Hit-For-Pass for the next 2 minutes
        set beresp.ttl = 120s;
        set beresp.uncacheable = true;
    }

    return (deliver);
}

sub vcl_deliver {
    if (resp.http.X-Magento-Debug) {
        if (resp.http.x-varnish ~ " ") {
            set resp.http.X-Magento-Cache-Debug = "HIT";
            set resp.http.Grace = req.http.grace;
        } else {
            set resp.http.X-Magento-Cache-Debug = "MISS";
        }
    } else {
        unset resp.http.Age;
    }

    unset resp.http.X-Magento-Debug;
    unset resp.http.X-Magento-Tags;
    unset resp.http.X-Powered-By;
    unset resp.http.Server;
    unset resp.http.X-Varnish;
    unset resp.http.Via;
    unset resp.http.Link;
}

sub vcl_hit {
    if (obj.ttl >= 0s) {
        # Hit within TTL period
        return (deliver);
    }
    if (std.healthy(req.backend_hint)) {
        if (obj.ttl + 300s > 0s) {
            # Hit after TTL expiration, but within grace period
            set req.http.grace = "normal (healthy server)";
            return (deliver);
        } else {
            # Hit after TTL and grace expiration
            return (miss);
        }
    } else {
        # server is not healthy, retrieve from cache
        set req.http.grace = "unlimited (unhealthy server)";
        return (deliver);
    }
}
Note that we have mentioned apache2 (the apache container name) as the backend host in our default.vcl file, since our Magento code will be mapped into the apache container. We also have supervisor controlling the varnish server. Its configuration is shown below:
[supervisord]
nodaemon=true

[program:varnish3.0]
command=/bin/bash -c "/usr/sbin/varnishd -P /run/varnishd.pid -a :6081 -F -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m"
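To confirm that varnishd is actually up once the stack is running, a quick check from the host (assuming the container name varnish from our docker-compose.yml):

# Dump the varnish counters once and look at the first few lines
docker exec -it varnish varnishstat -1 | head -n 20

# Or watch requests flowing through varnish in real time
docker exec -it varnish varnishlog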
Proceeding to the redis_server directory, we have the Dockerfile as,
From ubuntu:16.04
MAINTAINER Alankrit Srivastava alankrit.srivastava256@webkul.com

##update server
RUN apt-get update \
 && apt-get install -y locales \
 && locale-gen en_US.UTF-8 \
 && export LANG=en_US.UTF-8 \
 && apt-get update \
 && apt-get install -y software-properties-common \
 && LC_ALL=en_US.UTF-8 add-apt-repository -y ppa:chris-lea/redis-server \
 && apt-get update \
 && apt-get -y install redis-server \
 && sed -i -e"s/^bind\s127.0.0.1/bind 0.0.0.0/" /etc/redis/redis.conf \
 && chown -R redis: /var/log/redis/ \
 ##install supervisor and setup supervisord.conf file
 && apt-get install -y supervisor \
 && mkdir -p /var/log/supervisor

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

EXPOSE 6379

CMD ["/usr/bin/supervisord"]
and the supervisord.conf file,
[supervisord]
nodaemon=true

[program:redis]
command=/usr/bin/redis-server /etc/redis/redis.conf
And at last, let's take a look at our ssl_server directory. Its Dockerfile is shown below:
From ubuntu:16.04
MAINTAINER Alankrit Srivastava alankrit.srivastava256@webkul.com

##update server
RUN apt-get update \
 ##install nginx
 && apt-get install -y locales \
 && locale-gen en_US.UTF-8 \
 && export LANG=en_US.UTF-8 \
 && apt-get update \
 && apt-get install -y software-properties-common \
 && LC_ALL=en_US.UTF-8 add-apt-repository -y ppa:nginx/stable \
 && apt-get -y update \
 && apt-get -y install nginx \
 && rm /etc/nginx/sites-enabled/default \
 ## Generate self signed certificate
 && cd /etc/nginx && echo -e "\n\n\n\n\n\n\n" | openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/cert.key -out /etc/nginx/cert.crt \
 ##install supervisor and setup supervisord.conf file
 && apt-get install -y supervisor \
 && mkdir -p /var/log/supervisor

Expose 80 443

CMD ["/usr/bin/supervisord"]
And its default configuration file,
server {
    listen 80 default_server;
    add_header 'Access-Control-Allow-Origin' '*';
    server_name localhost; ## mention ip address or domain name
    # return 302 https://$server_name$request_uri;
    location / {
        include /etc/nginx/proxy_params;
        proxy_pass http://varnish:6081;
    }
}
server {
    listen 443;
    add_header 'Access-Control-Allow-Origin' '*';
    server_name localhost; ## mention ip address or domain name
    ssl on;
    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
    ssl_prefer_server_ciphers on;
    location / {
        include /etc/nginx/proxy_params;
        proxy_pass http://varnish:6081;
    }
}
As you can see, Nginx is listening on ports 80 and 443 and forwarding the traffic to the varnish container. In our configuration, we have used a self-signed SSL certificate generated in the Dockerfile. You can use your own certificates and mention their paths in the default configuration file.
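For example, if you already have a certificate pair issued for your domain, one way (a sketch, the file names are placeholders for your own files) is to keep them in the ssl_server directory and map them over the self-signed pair:

# Copy your certificate and key next to the nginx Dockerfile
cp /path/to/your_domain.crt ssl_server/your_domain.crt
cp /path/to/your_domain.key ssl_server/your_domain.key

# Then map them in docker-compose.yml under the ssl_server service volumes, e.g.
#   - ./ssl_server/your_domain.crt:/etc/nginx/cert.crt
#   - ./ssl_server/your_domain.key:/etc/nginx/cert.key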
The contents of the nginx.conf file are shown below:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
And its supervisord.conf file,
[supervisord]
nodaemon=true

[program:nginx]
command=/usr/sbin/nginx -c /etc/nginx/nginx.conf
And our last directory is the magento2 directory. In our case, we have downloaded Magento 2.2.5 from https://magento.com/tech-resources/download and unarchived it in the magento2 directory. This directory will be mapped to the /var/www/html directory in the apache2 Docker container.
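For reference, the extraction step looks something like this (the archive name below is a placeholder; use whatever release file you actually downloaded):

# Extract the downloaded Magento archive into the magento2 directory of the project
mkdir -p magento2
tar -xzf Magento-CE-2.2.5.tar.gz -C magento2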
After extracting the Magento 2 files in the desired directory, our project directory setup is complete. Now, in order to build the images with the docker-compose.yml file, go to the project parent directory and run the command:
docker-compose build
This command will build the images for all five services: apache2, mysql, varnish, redis and nginx. To check the built images, run the command:
docker images
Now, to run the containers as part of a single project as defined in the docker-compose.yml file, run the command:
docker-compose up -d
Your containers will now be running. To list the running containers under docker-compose, run the commands:
docker-compose ps
docker ps
Now that your server setup is ready, hit your domain name or IP to install the Magento 2 store and configure it with the Varnish Cache server as we did in https://cloudkul.com/blog/magento-2-and-varnish-cache-integration-with-docker-compose/. Also test the Varnish cache using the varnishhist tool.
NOTE: Use the name or ID of the mysql container as the database host.
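Apart from varnishhist, a quick way to confirm that pages are being cached is to request the store a couple of times through Nginx and then look at the Varnish hit/miss counters (the domain below is a placeholder; -k accepts our self-signed certificate):

# Request the home page twice through Nginx
curl -sk -o /dev/null https://your-domain-or-ip/
curl -sk -o /dev/null https://your-domain-or-ip/

# Check the hit/miss counters inside the varnish container
docker exec -it varnish varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss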
Now, to configure the redis-server, go to the magento2 directory in your main project directory and open the file app/etc/env.php. It will look something like this:
<?php
return array (
  'backend' => array (
    'frontName' => 'admin',
  ),
  'crypt' => array (
    'key' => '03fb403b80643a6a11255d40ac72631a',
  ),
  'session' => array (
    'save' => 'files',
  ),
  'db' => array (
    'table_prefix' => '',
    'connection' => array (
      'default' => array (
        'host' => 'mysql',
        'dbname' => 'magento_db',
        'username' => 'magento_user',
        'password' => 'databasepassword123',
        'active' => '1',
      ),
    ),
  ),
  'resource' => array (
    'default_setup' => array (
      'connection' => 'default',
    ),
  ),
  'x-frame-options' => 'SAMEORIGIN',
  'MAGE_MODE' => 'default',
  'cache_types' => array (
    'config' => 1,
    'layout' => 1,
    'block_html' => 1,
    'collections' => 1,
    'reflection' => 1,
    'db_ddl' => 1,
    'eav' => 1,
    'customer_notification' => 1,
    'full_page' => 1,
    'config_integration' => 1,
    'config_integration_api' => 1,
    'translate' => 1,
    'config_webservice' => 1,
  ),
  'install' => array (
    'date' => 'Thu, 07 Sep 2017 08:17:00 +0000',
  ),
);
Add the following piece of code just before the closing ); at the end of this file:
'cache' => array (
  'frontend' => array (
    'default' => array (
      'backend' => 'Cm_Cache_Backend_Redis',
      'backend_options' => array (
        'server' => 'redis',
        'port' => '6379',
        'persistent' => '',
        'database' => '0',
        'force_standalone' => '0',
        'connect_retries' => '1',
        'read_timeout' => '10',
        'automatic_cleaning_factor' => '0',
        'compress_data' => '1',
        'compress_tags' => '1',
        'compress_threshold' => '20480',
        'compression_lib' => 'gzip',
      ),
    ),
    'page_cache' => array (
      'backend' => 'Cm_Cache_Backend_Redis',
      'backend_options' => array (
        'server' => 'redis',
        'port' => '6379',
        'persistent' => '',
        'database' => '1',
        'force_standalone' => '0',
        'connect_retries' => '1',
        'read_timeout' => '10',
        'automatic_cleaning_factor' => '0',
        'compress_data' => '0',
        'compress_tags' => '1',
        'compress_threshold' => '20480',
        'compression_lib' => 'gzip',
      ),
    ),
  ),
),
Please note that we have mentioned the name of our redis-server container as the 'server' value in the code above.
Now go to the redis-server container and restart the server:
docker exec -ti redis bash
/etc/init.d/redis-server restart
You can also check whether the Redis server is able to set keys:
redis-cli
127.0.0.1:6379> set mykey KEY
OK
127.0.0.1:6379> get mykey
"KEY"
127.0.0.1:6379> exit
You can also use the 'info' command to get information and statistics about the server:
redis-cli info
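After you have browsed a few store pages, you can also confirm that Magento is writing into the two Redis databases we configured above (0 for the default cache, 1 for the page cache):

# Show how many keys each Redis database currently holds
docker exec -it redis redis-cli info keyspace

# Or watch commands arriving in real time while you reload a store page (Ctrl+C to stop)
docker exec -it redis redis-cli monitor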
Backing Up Databases from the MySQL Docker Container
The databases inside our Docker containers are as critical as our application code. So, in order to keep backups, we schedule a shell script that takes backups of all the databases present in the mysql-server container and keeps them in archived form on our host.
The database backup bash script is shown below:
#!/bin/bash
set -u

## Mention your database container name
container_name=mysql

## Mention mysql root password
MYSQL_ROOT_PASSWORD=mention_your_mysql_root_password

DATE=`date +%F-%H-%M-%S`

for database in `echo 'show databases;' | docker exec -i mysql mysql --user=root --password=$MYSQL_ROOT_PASSWORD | grep -v Database | grep -v information_schema | grep -v mysql | grep -v performance_schema`
do
    echo $database
    docker exec $container_name mysqldump -u root -p$MYSQL_ROOT_PASSWORD $database > $database-$DATE.sql && tar -zcvf $database-$DATE.tar.gz $database-$DATE.sql && rm $database-$DATE.sql && echo "$database-$DATE.tar.gz has been created on `date`" >> database_backup.log
done
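It is a good idea to run the script once by hand before scheduling it, to make sure the container name and root password in it are correct (the path below is wherever you saved the script):

bash path_to_script/db_backup.sh
ls -lh *.tar.gz
cat database_backup.log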
Now set up a cron job to take regular backups.
crontab -e
Add the script as,
0 */12 * * * bash path_to_script/db_backup.sh
It will take database backups twice a day, at an interval of 12 hours.
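If you ever need to restore one of these backups, a minimal sketch (the archive and database names below are placeholders; the dump is piped back into the mysql container):

# Unpack the archive and feed the dump back into MySQL inside the container
tar -xzf magento_db-2018-09-07-00-00-00.tar.gz
docker exec -i mysql mysql -uroot -pmention_your_mysql_root_password magento_db < magento_db-2018-09-07-00-00-00.sql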
So far, we have discussed an optimised architecture for Magento 2 using the docker-compose tool. With its very large and complex code structure, Magento 2 needs optimisation, which we have achieved with Varnish-Cache and a redis-server cache, and our store is now configured with SSL as well. Please refer to the repository https://github.com/webkul/magento2-varnish-redis-ssl-docker-compose to set up the same architecture.
In our later blogs, we will explore more applications of Docker. Stay tuned.