In full transparency: with some help, I recently set up a Laravel application in Kubernetes on Amazon Web Services for my employer, Ageras. This article is a summary of what I learned along the way. Why keep this knowledge only to myself and the people I closely work with?! It intentionally isn’t a full guided how-to; I’d simply like to share some topics that I struggled with.
Setting up the cluster
I won’t lie, setting up the Kubernetes cluster on AWS wasn’t easy from the start. In the end my CTO and I used Terraform to script our infrastructure: two EC2 instances, the VPC, the RDS MySQL database, Elasticsearch, and all security groups and network interfaces. It is a lot easier to start from already prepared Terraform scripts than to write your own completely from scratch. This Terraform Github repo was a really good starting point, which is linked to this Terraform AWS EKS tutorial.
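To give an impression, here is a minimal, hypothetical Terraform sketch of just the EKS control plane; the region, names, and subnet IDs are placeholders, and the real scripts also cover the VPC, RDS, Elasticsearch, and security groups:

provider "aws" {
  region = "eu-west-1" # placeholder region
}

# IAM role that the EKS control plane assumes
resource "aws_iam_role" "eks" {
  name = "laravel-eks-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster" {
  role       = aws_iam_role.eks.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_eks_cluster" "laravel" {
  name     = "laravel-cluster"
  role_arn = aws_iam_role.eks.arn

  vpc_config {
    subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"] # placeholder subnet IDs
  }

  depends_on = [aws_iam_role_policy_attachment.eks_cluster]
}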
SSL/TLS termination
In order to simplify the setup, we chose to let the AWS Elastic Load Balancer terminate the SSL/TLS connection. This way the container images don’t need access to our public-facing SSL certificates, which is an advantage during the build process. Still, the application itself needs to enforce HTTPS. To achieve that, the following lines were added to the Nginx configuration:
if ($http_x_forwarded_proto = 'http') {
    return 301 https://$host$request_uri;
}
The following Kubernetes service exposes both port 80 and 443, as the ELB doesn’t support the SSL redirect itself.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: my-laravel-app
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:*********..."
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
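Once the manifest is applied, AWS provisions the ELB for the service, and its DNS name shows up under EXTERNAL-IP; the file name below is just an assumption for illustration:

kubectl apply -f my-laravel-app-service.yaml
kubectl get service my-laravel-app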
For now this solution only supports HTTP/1.1, even though our wish is to use HTTP/2. AWS’s ELB doesn’t support HTTP/2 directly; the backend would have to support it, so that the ELB just acts as a pass-through. That would mean connecting via https to the nodes, so no SSL termination. With AWS’s Application Load Balancer (ALB) there seems to be an easier solution available, though its integration with Kubernetes is still in an experimental stage. Or we could use a more complicated IngressController. 🤷🏼 I’ll wait for the ALB support.
Task scheduling
A Laravel application uses its own scheduling of tasks within App\Console\Kernel, where you can define all tasks and their expected start times without having to define each individual one as a cron entry.
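For reference, a schedule definition in App\Console\Kernel looks roughly like this; the two commands are illustrative placeholders, not our actual tasks:

<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    /**
     * Define the application's command schedule.
     */
    protected function schedule(Schedule $schedule)
    {
        // Placeholder tasks; replace with your own artisan commands
        $schedule->command('emails:send')->dailyAt('07:00');
        $schedule->command('reports:generate')->everyThirtyMinutes();
    }
}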
There are multiple ways you can have this running within a container. The recommended way on regular servers is to set up the following cron entry:
* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
However, getting this to work within Kubernetes is tricky according to Paul Redmond, as you need a foreground process within your container to keep it running. So in order to have a foreground process, you can set up a while true loop with a sleep in order to mimic the cron behaviour.
I added the following command and arguments to the Kubernetes configuration file for the scheduler container, using the same php-fpm container image.
command: [ "/bin/sh", "-c", "--" ]
args: [ "while true; do cd /path-to-your-project; php artisan schedule:run & sleep 60; done;" ]
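For context, this is roughly how those lines fit into a deployment manifest; the names and labels are placeholders. Note the single replica, so that scheduled tasks don’t run twice:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: scheduler
spec:
  replicas: 1 # one scheduler pod, so tasks are not executed twice
  selector:
    matchLabels:
      app: scheduler
  template:
    metadata:
      labels:
        app: scheduler
    spec:
      containers:
        - name: scheduler
          image: <your-container-image> # the same php-fpm image as the web pods
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do cd /path-to-your-project; php artisan schedule:run & sleep 60; done;" ]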
Dealing with secret information
There are multiple ways to deal with secret environment variables in a Kubernetes setup, and the use of Kubernetes native secrets seems to be the most straightforward. However, it isn’t the most secure.
We wanted to have 1) only one source of the data, in order to avoid confusion, 2) ease of use, 3) the ability to distinguish data for separate environments, and 4) secret information kept outside of the Github repository.
This makes quite a case for using Hashicorp’s Vault as the source of the secret information. It can run within AWS, yet in its own instance. Another option is to use Sealed-Secrets, an open source tool that sets up a controller within the Kubernetes cluster that controls access to the secrets. The advantage of the latter is that we can include the secret data in an encrypted state in our Github repository, so that it lives together with our code, and that after setup it is practically using the native Kubernetes secrets (example by the Bitnami team).
⚠️ Note: there is still an issue with kubeseal in Sealed-Secrets regarding authentication… but yes, there is a workaround.
Here is an example of the commands to create a collection of two secrets, for instance credentials:
kubectl create secret generic --dry-run --output json \
  name-of-this-secret-collection \
  --from-literal=password=supersekret \
  --from-literal=another_password=alsosupersekret \
  | kubeseal --cert kubeseal-cert.pem \
  > ./kubernetes/name-of-this-secret-collection.json
kubectl create -f kubernetes/name-of-this-secret-collection.json
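Assuming the Sealed-Secrets controller is installed in the cluster, it unseals the SealedSecret into a regular Secret with the same name, which you can verify with:

kubectl get sealedsecret name-of-this-secret-collection
kubectl get secret name-of-this-secret-collection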
The approach should be to create small secrets with a limited scope, so that they are easy to change or update.
You can then use your secrets as follows within your Kubernetes configuration file (only showing an excerpt):
...
containers:
  - name: php-fpm
    image: <your-container-image>
    env:
      - name: password
        valueFrom:
          secretKeyRef:
            name: name-of-this-secret-collection
            key: password
      - name: another_password
        valueFrom:
          secretKeyRef:
            name: name-of-this-secret-collection
            key: another_password
    ports:
      - containerPort: 9000
        protocol: TCP
...
Environment variables
As our Laravel application works with environment variables within the container, which is also shown in the section above, I needed a way to store a collection of variables that are not secret. Kubernetes by default has a ConfigMap object that can do such a thing.
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-configmap-name
  namespace: default
data:
  APP_ENV: 'qa'
  APP_DEBUG: 'true'
  ANOTHER_ENV_VARIABLE: 'value'
The definition of the ConfigMap.
containers:
  - name: php-fpm
    image: <your-container-image>
    envFrom:
      - configMapRef:
          name: your-configmap-name
Using the ConfigMap within a container specification for a Kubernetes deployment.
Consider the ConfigMap an immutable resource: when you change something in there, you should also change its name. This recreates the Kubernetes pods with the updated environment variables (sourced from StackOverflow).
An alternative is to keep the name of the ConfigMap, apply the changes, and then delete the existing pods. New pods with the updated configuration will then automatically be initiated by Kubernetes.
Within Ageras we’ve chosen the latter solution, as a rebuild of containers happens for every deploy, and thus the pods would be recreated anyway.
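In practice that comes down to something like the following; the file name and the label selector are assumptions, so use whatever labels your deployment actually sets:

kubectl apply -f kubernetes/your-configmap-name.yaml
kubectl delete pods -l app=nginx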
Local development
It is nice to have a stable and auto-scaling production and/or QA setup for your Laravel application, though for development you’d also want a similar setup, to keep the differences to a minimum.
I’ve ensured that XDebug will be included within the php-fpm Docker container for local development purposes, but not in our production and staging builds. This is done by using a build argument, which is false by default. Below is an excerpt of the Dockerfile we’re using, with only the relevant information for this example:
FROM php:7.0-fpm-alpine

ARG WITH_XDEBUG=false

# Install XDebug
RUN if [ "$WITH_XDEBUG" = "true" ]; then \
        apk add --no-cache $PHPIZE_DEPS \
        && pecl install xdebug-2.6.1 \
        && docker-php-ext-enable xdebug; \
    fi

COPY php-fpm/xdebug.ini $PHP_INI_DIR/conf.d/docker-php-ext-xdebug.ini

# Remove the xdebug.ini file when it is not needed, as it is referenced by PHP when present
# There is unfortunately no conditional `COPY` or `ADD` statement available
RUN if [ "$WITH_XDEBUG" = "false" ]; then \
        rm $PHP_INI_DIR/conf.d/docker-php-ext-xdebug.ini; \
    fi
php-fpm Dockerfile
Within the docker-compose.yml I’ve set the WITH_XDEBUG argument to true for building the php-fpm container.
php-fpm:
  build:
    context: .
    dockerfile: ./php-fpm/Dockerfile
    args:
      - WITH_XDEBUG=true
  restart: always
  environment:
    - XDEBUG_CONFIG=remote_host=host.docker.internal
    - APP_ENV=${APP_ENV:-local}
  expose:
    - 9000 # Default php-fpm port
    - 9001 # XDebug, as specified in php-fpm/xdebug.ini
  volumes:
    - .:/path-to-your-project
Part of the docker-compose.yml file
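The php-fpm/xdebug.ini file itself isn’t shown in this article; for XDebug 2.x it could look roughly like this, with the port matching the 9001 exposed above:

xdebug.remote_enable=1
xdebug.remote_port=9001
xdebug.remote_autostart=1
; remote_host is supplied at runtime via the XDEBUG_CONFIG environment variable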
There are different schools of thought on local development and where you should place your project’s files: either within the container, or in a Docker volume. I’ve chosen to use a Docker volume, as you’d otherwise need to rebuild your container every time you make a code change.
I personally haven’t had any negative experiences so far with this local development setup, though there are examples (here is one, and I’ve heard other negative stories at an AWS conference) where it didn’t really work or was frustrating for the team. So please use it in a way that is suitable for you and your project.
Conclusion
I’m a fan of running a Laravel application on Kubernetes with Docker containers, especially for larger applications that require load balancing. I am absolutely sure that the setup that I’m now running can be optimized and improved, though for now it works quite wonderfully. Scaling up additional containers to handle our application’s traffic is nice, and the self-recovery of Kubernetes can save you a headache or two.
You can reach out to me for questions or a conversation via @eddokloosterman on Twitter.
P.S. If you’ve enjoyed this article or found it helpful, please share it, or check out my other articles. I’m on Instagram and Twitter too if you’d like to follow along on my adventures and other writings, or comment on the article.