Kubernetes made easy: Develop and deploy on a cloud cluster
Part 1: Basics
Table of Contents
- Part 1: Basics
- Part 2: Kubernetes
- Part 3: Ingress Controller
- Part 4: Design & Conclusions
Why?
Let's say your goal is to run one or more websites. We'll assume you know enough Linux to set up a dedicated server, even though we won't set one up here. Compared to a traditional dedicated server we'll also get:
- A managed software stack, meaning you can update any application, or even the operating system, for a single component without affecting the others, and test locally with exactly what will run on the server
- High availability, meaning your website stays up and running even while you upgrade or deploy a new version, or when some hardware fails
- Improved security by having isolated components and using cloud network security
- Scaling that adapts to the number of active users
- A free tier, often making this free or cheaper than on-premise hosting
- No lock-in, so that if you wish to move to any other cloud or non-cloud provider, it will be far easier
How?
We'll use two key technologies one built on top of the other:
- Docker: Creates "images" that, like a static binary, run your website exactly the same on any OS, in a partially sandboxed environment
- Kubernetes: Starts/restarts those images on a cluster of machines and creates a resilient network that supports failures
Even though both of these are open source and can run on pretty much any Linux distribution, we'll use Google Cloud Platform to simplify setup and maintenance.
Docker: Managing the application stack
If you know Docker you can skip this paragraph.
Practically, Docker works a little like a lightweight virtual machine, though the underlying technology is very different. The common scenario is to create a text file named Dockerfile
with a list of commands to run on top of a base image like Debian, or a premade service like Nginx, MySQL, PHP...
Unlike a virtual machine, you can run thousands of Docker containers on a single machine without issue. To build or run a Docker image you need to install Docker; your image will then run exactly the same on any OS running Docker. A running (or previously running) instance of an image is called a Docker container (that is why people talk about containerizing applications, meaning making a Docker image in which they can properly run). Processes inside the container cannot access more of the host machine than specifically allowed; this isolation isn't as strong as a virtual machine's, so don't run processes as root
even from within the container. Docker runs mostly from the CLI, and most Dockerized (aka containerized) applications run headless (i.e. without a GUI), which is perfect for any website component.
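To follow the non-root advice above, you can switch users inside the image with the USER instruction. A minimal sketch, assuming a Debian base; the `app` user name and the command are just placeholders:

```dockerfile
# Hypothetical example: run the container's main process as an unprivileged user.
FROM debian:stable-slim
# Create a dedicated system user and group for the application.
RUN groupadd -r app && useradd -r -g app app
# Everything from here on, including the main process, runs as 'app'.
USER app
CMD ["sleep", "infinity"]
```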
Example
Dockerfile
FROM nginx:stable-alpine
RUN apk add --update ca-certificates php-pdo_mysql && rm -rf /var/cache/apk/*
COPY . /usr/share/nginx/html
Here we start from an Nginx image that is itself based on Alpine Linux. Now, on a Linux machine, let's install Docker, build the image, and start our web server:
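The COPY line above needs something to copy. As a quick test you could place a minimal index.html next to the Dockerfile; the content here is just a placeholder:

```html
<!-- Placeholder page; Nginx serves it from /usr/share/nginx/html -->
<!DOCTYPE html>
<html>
  <head><title>Hello</title></head>
  <body><p>It works!</p></body>
</html>
```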
curl -sSL https://get.docker.com/ | sh
docker build -t my-frontend-image .
docker run -d -p 127.0.0.1:8080:80 my-frontend-image
Now you can open your browser, go to http://127.0.0.1:8080, and you should be able to access the server along with any file under the directory containing your Dockerfile.
docker-compose: Simplifying local development
docker-compose is a small official binary that simplifies commands for a set of linked Docker images. It's optional but it simplifies things a bit.
Example of linking our Nginx to MySQL (yes, it's a bit contrived, as you'd probably link Nginx to Python or PHP and link those to MySQL, but you get the idea):
docker-compose.yml
version: '2'
services:
  frontend:
    build: .
    # By specifying both 'build' and 'image' we force the built image's tag name.
    image: my-frontend-image
    volumes:
      - .:/usr/share/nginx/html:ro
    links:
      - db
  db:
    image: mysql:5.7
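One caveat: the official mysql image refuses to start unless a root password (or an explicit opt-out) is provided through environment variables. A sketch of the db service with that added; the password value is a placeholder to replace with a real secret:

```yaml
  db:
    image: mysql:5.7
    environment:
      # Required by the official mysql image on first start.
      MYSQL_ROOT_PASSWORD: change-me
```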
Let's install docker-compose, then build and run both images (note that from within the Nginx container a host named db
will be created, and it'll point to the container running MySQL):
curl -L https://github.com/docker/compose/releases/download/1.6.2/docker-compose-`uname -s`-`uname -m` > docker-compose
chmod +x docker-compose
./docker-compose up -d
We did one more thing here: we asked to mount the current directory as /usr/share/nginx/html
(as read-only). This means any change to the files is reflected inside the container without re-creating the Docker image (i.e. docker build isn't required to pick up changes). This is very useful during development, but we still want our Docker image to contain all necessary files for later deployment in production on the server.
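Since COPY copies the whole build context into the image, it's also worth adding a .dockerignore file next to the Dockerfile so development-only files stay out of the production image. A minimal sketch; the entries are examples for this setup:

```
# Keep local tooling and VCS data out of the image.
.git
docker-compose.yml
docker-compose
```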
Now let's see how to deploy on a cluster in the cloud (which can be a single machine or multiple cloud instances).