wordpress-nginx-docker
How can I use multisite?
I was asked to set up WordPress as a multisite. I know that this requires adding the following to wp-config.php:
/* Multisite */
define( 'WP_ALLOW_MULTISITE', true );
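If the stack uses the official wordpress image (an assumption on my part), the image's WORDPRESS_CONFIG_EXTRA environment variable can inject extra PHP into wp-config.php without editing the file by hand. A minimal sketch, shown as a plain docker run for illustration rather than as the actual compose service:
# Sketch only: the image tag and container name are placeholders, not this repo's real values.
docker run -d --name wordpress \
  -e WORDPRESS_CONFIG_EXTRA="define( 'WP_ALLOW_MULTISITE', true );" \
  wordpress:fpm
The same value can also go under environment: on the wordpress service in docker-compose.yml.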
However, how do I get the SSL certificates for these sites into the system?
@Elmit2015 - I'm far from a WordPress expert, and more of a tourist: I got into this initially because my wife wanted a website and it seemed like something I could probably figure out myself. Things then slowly grew from there.
That said, there looks to be a superset of features when running WordPress in multisite mode:
WordPress Multisite Network Features
- You can run a network of multiple WordPress blogs and websites from a single WordPress installation.
- You can have a network of subdomains like http://john.example.com or directories like http://www.example.com/john/.
- Open up your WordPress Multisite Network for other users to create an account and get their own WordPress blogs.
- As a Super Admin you can install themes and plugins and make them available to all other sites on the network. However, other site admins on the network will not have the capability to install themes or plugins.
- As Super Admin you can make changes to themes for all websites. Website admins cannot make changes to their themes.
I cannot tell you how to solve it, but I can tell you how I would approach it.
- If the deployment is to be dockerized, you first need to understand how the various parts interact within a standard (non-docker) deployment. I'd look up a fair number of tutorials and read what they have to say regarding interactions with the database, admin vs. super admin, subdomain vs. named directory, etc.
- Look at the base components within the existing docker-compose stack to determine whether the containers can be modified to fit what was learned from part 1. This may involve digging into the image definitions themselves to see what each container is doing on startup via its entrypoint script, and whether or not that needs to be modified. This can be tricky if you've not worked with docker much.
- Sort out your SSL certificates. The two options look to be subdomains vs. named directory slugs. If using subdomains you'll either need to get a star certificate (*.example.com) or multiple singleton certificates, one for each subdomain. If using named directories you'll only need a single certificate.
- Map out your deployment strategy regarding host volume usage, maintenance and persistence (backups). As wonderful as docker is, you can find yourself in a world of hurt if a container goes away and its volume contents were not persisted somewhere... My "goal" is to create definitions that not only survive a reboot, but can be regenerated even after the containers, images and virtual volumes have been purged from the system. This means persisting all data and configuration information to the host by one means or another. It may also require entrypoint scripts that are smart enough to determine state and stand up accordingly based on what is discovered (see the backup sketch after this list).
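To illustrate the persistence point, a virtual volume can be snapshotted to a host tarball with a throwaway container. The volume name here is made up for the example, not taken from this repo:
# Back up a hypothetical named volume wp_db_data to ./backups on the host
mkdir -p ./backups
docker run --rm \
  -v wp_db_data:/volume:ro \
  -v "$(pwd)/backups":/backup \
  alpine tar czf /backup/wp_db_data-$(date +%F).tar.gz -C /volume .
Restoring is the same trick in reverse: mount an empty volume and untar into it.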
Do feel free to post back what you've learned as I'm sure others would appreciate the knowledge. Perhaps a blog entry on your WordPress site :-)
Best of luck in your endeavors!
Thanks for the lengthy reply. My current question was only how to get the SSL certificates into the system. Currently we use /letsencrypt/letsencrypt-init.sh. However, this script uses one variable to get the certificate for domain.com and www.domain.com. I guess the script needs to be modified to take in a file with all possible host names and build a fetch for all of the certificates. I will try to figure out a solution.
My current question was only how to get the ssl certificates into the system
It's worth looking into wildcard certificates if your domain is going to be the same for all sites. For instance if you'll have a john.example.com and a sally.example.com then a wildcard (or star) certificate for *.example.com would cover them all.
Reference: https://community.letsencrypt.org/t/acme-v2-and-wildcard-certificate-support-is-live/55579
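One caveat: Let's Encrypt only issues wildcard certificates via the DNS-01 challenge, so the webroot method used by letsencrypt-init.sh won't produce a *.example.com certificate. A rough manual sketch with the same certbot/certbot image (the domain and host path are placeholders):
docker run -it --rm \
  -v "${PWD}/certs:/etc/letsencrypt" \
  certbot/certbot certonly \
  --manual --preferred-challenges dns \
  -d "*.example.com" -d example.com
# certbot prints a TXT record (_acme-challenge.example.com) to add to your DNS zone,
# then validates and writes the files under /etc/letsencrypt/live/example.com/
Automating renewal for DNS-01 needs a certbot DNS plugin for your provider, so for a handful of fixed hostnames the per-domain webroot approach may end up simpler.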
Here is what I have changed:
- I created a file letsencrypt/vhosts with a line for each host, like:
omicrontek.com
www.omicrontek.com
elmit.org
www.elmit.org
- I changed the command that fetches the certificate in letsencrypt/letsencrypt-init.sh (docker run -it --rm ...) to:
p="docker run -it --rm
-v ${CERTS}:/etc/letsencrypt
-v ${CERTS_DATA}:/data/letsencrypt
certbot/certbot
certonly
--webroot --webroot-path=/data/letsencrypt"
while IFS=' ' read -r line; do
  p="$p -d $line"
done < vhosts
echo $p
$p
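For what it's worth, building the command as a bash array instead of one long string avoids word-splitting surprises; a sketch using the same CERTS / CERTS_DATA variables and the vhosts file from above:
# Collect one -d flag per non-empty line of vhosts
args=()
while IFS= read -r host; do
  [ -n "$host" ] && args+=(-d "$host")
done < vhosts

docker run -it --rm \
  -v "${CERTS}:/etc/letsencrypt" \
  -v "${CERTS_DATA}:/data/letsencrypt" \
  certbot/certbot certonly \
  --webroot --webroot-path=/data/letsencrypt \
  "${args[@]}"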
The modified script SAYS it fetches all the new certificates. Yet even with the original script the /certs-data directories remain empty and the certificate ends up in /certs/live/omicrontek.com/.
I expected to get the certificates in the directories certs-data/live/omicrontek.com/ and certs-data/live/elmit.org/.
Can you help me figure out why they didn't go into certs-data? How can we fix it?
I can now access the sites at https://omicrontek.com and https://elmit.org. As my next step I will figure out how to make it multisite (as you described before).
OK, I do not need the vhosts file; I can just run ./letsencrypt-init.sh several times:
./letsencrypt-init.sh omicrontek.com
./letsencrypt-init.sh elmit.org
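If the list of domains grows, a small loop does the same thing (assuming the script keeps taking the domain as its first argument):
for domain in omicrontek.com elmit.org; do
  ./letsencrypt-init.sh "$domain"
done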
It works, but the certificates are in /certs/live and not in /certs-data/live. It is a small error; I cannot see why. However, since everything uses /certs and not /certs-data anyway, it's fine.
However, there is a small error in letsencrypt-init.sh:
cd ${LE_DIR}
#rm -f ${REPO_DIR}/lets_encrypt.conf
rm -f lets_encrypt.conf
and/or maybe it should also be: rm -f ${REPO_DIR}/nginx/lets_encrypt.conf