Kubernetes made easy: Develop and deploy on a cloud cluster


Part 4: Design & Conclusions

Table of Contents
Part 1: Basis
Part 2: Kubernetes
Part 3: Ingress Controller
Part 4: Design & Conclusions

Security

I feel I should warn you about what I see as a fairly serious security risk, which you can patch until it is properly fixed in v2. By default, a service account token granting access to your cluster API is auto-mounted into every container. This means that any process, running as any user in any container, can read all your Secrets or even start a new privileged container as root. Try for example running this from within a running Kubernetes container: curl -ik --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes/api/v1/secrets. Most of the time, however, you won't need this token, so for all your containers, add an emptyDir volume mount to disable the access as below:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    spec:
      containers:
        - name: my-container
          ...
          volumeMounts:
            # Security hack, see https://github.com/kubernetes/kubernetes/issues/16779
            - name: no-service-account
              mountPath: /var/run/secrets/kubernetes.io/serviceaccount
              readOnly: true
      volumes:
        - name: no-service-account
          emptyDir: {}

With that you'll be safe. Outside of this risk, Kubernetes is pretty secure. Keep in mind, though, that network access within your cluster is not restricted by default: any Pod can connect to any open port on any other Pod in your cluster. Generally it's safe enough that way; you can add passwords to most databases if you're more on the paranoid side (like me).
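If you do want to restrict Pod-to-Pod traffic, Kubernetes has a NetworkPolicy resource (beta at the time of writing, and it requires a network plugin that actually enforces policies). A minimal sketch, assuming illustrative labels app: redis on a database Pod and app: web on a frontend Pod:

```yaml
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: redis-allow-web-only
spec:
  # Applies to Pods labelled app=redis
  podSelector:
    matchLabels:
      app: redis
  ingress:
    # Only Pods labelled app=web may connect, and only on the Redis port
    - from:
        - podSelector:
            matchLabels:
              app: web
      ports:
        - protocol: TCP
          port: 6379
```

All other ingress traffic to the selected Pods is then denied, which gets you the network separation that the default setup lacks.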

Design decisions

There are many possible ways to combine Google Cloud, Kubernetes and Docker, sometimes too many, so here are some tips:

  • GKE (Google Container Engine) is simpler to set up and maintain than doing the same via GCE (Google Compute Engine), and simpler than a self-managed dedicated server (even though Kubernetes can run on a dedicated server).
  • Unless you have big projects, you're generally fine with a single Google Project for multiple websites. It simplifies your monitoring, lets you share a cluster, and allows an unlimited number of logical projects (websites) to run on it (especially using Ingress controllers). You won't have ACLs or network separation for now, however.
  • Kubernetes namespace vs. label: A namespace is a good way to contain everything related to a logical project, and namespaces separate Secrets; each work unit name has to be unique within its namespace. I'd also add labels to help during monitoring. So use both.
  • Multiple Pods with one container vs. a single Pod with multiple containers vs. a single container with multiple responsibilities: Generally prefer multiple Pods, as that scales better, unless more than one container needs to access the same persistent disk, in which case put them in a single Pod. Containers doing more than one thing are not the Docker way and I'd discourage it.
  • Local disk vs. NFS disk vs. database: Local disks (as we used in our example) are probably the simplest for adapting your existing website and porting it to any system. They do limit scalability, though, and I'd go for a database (like Redis or Google Cloud Datastore) whenever possible.
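The namespace-plus-label advice above can be sketched like this (all names are illustrative): each logical project gets its own namespace, and its work units carry labels for filtering in monitoring:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-website
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-website   # isolates names and Secrets per logical project
  labels:
    project: my-website   # labels help when filtering during monitoring
    tier: frontend
spec:
  ...
```

The namespace guarantees name uniqueness and Secret separation; the labels cut across namespaces when you want a cluster-wide view.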

Conclusion

There is a lot more we didn't cover. Docker changed the way things are run for many people (a bit like Portable Applications changed the way some things are run on Windows but even more fundamentally).

Kubernetes can do a lot more, like checking that your service is live before continuing, storing secret passwords... You may start with their Kubernetes 101 (Walkthrough).
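As a taste of those two features, here is a sketch of a container spec with a liveness probe and a password read from a Secret (the image, path, Secret name and key are illustrative):

```yaml
      containers:
        - name: my-container
          image: nginx
          # Restart the container if it stops answering on /healthz
          livenessProbe:
            httpGet:
              path: /healthz
              port: 80
            initialDelaySeconds: 15
            timeoutSeconds: 1
          env:
            # Read the database password from a Secret named my-secrets
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-secrets
                  key: db-password
```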

Google Cloud also has tons more to offer, like source control, debugging, logging, monitoring...

Overall you can now create a service that runs pretty much anywhere, that scales if necessary and rapidly (even automatically), and that is easier to maintain in the long run than ever before.

You may also want to check out our continuous integration article and run it on your Kubernetes cluster (I'd advise using GitLab as source repository and CI). In case you're wondering, this website is running on what we described (at the time of writing).

— Werner BEROUX, updated on August 22, 2016
