
12-Factor Apps With Google Kubernetes Engine

Over the past few months I've played with Google Cloud Platform, in particular Google Kubernetes Engine. In this short post I want to share my experience and show how easy it is to build 12-factor apps on GKE.

My tech stack:

  • A PostgreSQL database
  • Some Spring Boot REST APIs
  • Hazelcast
  • A Node.js frontend
  • A UI built with React

I. Codebase

One codebase tracked in revision control, many deploys

I used Bitbucket as a single source of truth, tagging the master branch for each release version. 😁 As I will describe, it can be easily integrated with GCP.

II. Dependencies

Explicitly declare and isolate dependencies

Simple, easy and clean:

  • Container dependencies are explicitly declared in Dockerfiles.
  • Application dependencies are declared through npm and Gradle.

III. Config

Store config in the environment

Configuration is part of the Kubernetes Deployment descriptors.

  • Passwords are stored as Kubernetes Secrets.
  • All these values are passed to containers as environment variables.
  • Kubernetes ConfigMaps can be used to increase decoupling.
      - image: gcr.io/awesome-project/awesome-api:v1.0.0
        name: awesome-api
        env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: awesome-postgres-secret
              key: postgres_password
        - name: DB_USERNAME
          value: postgres
        - name: DB_HOST
          value: awesome-postgres-service
        - name: DB_PORT
          value: "5432"
        - name: DB_DATABASE
          value: postgres
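To push the decoupling further, the non-secret settings could live in a ConfigMap instead of in the Deployment descriptor itself. A minimal sketch, with a hypothetical ConfigMap name:

```yaml
# Hypothetical ConfigMap holding the non-secret database settings;
# a container can load all keys at once via envFrom/configMapRef.
apiVersion: v1
kind: ConfigMap
metadata:
  name: awesome-api-config
data:
  DB_USERNAME: postgres
  DB_HOST: awesome-postgres-service
  DB_PORT: "5432"
  DB_DATABASE: postgres
```

The Deployment can then reference it with `envFrom` and a `configMapRef`, so configuration changes no longer require editing the Deployment descriptor.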

IV. Backing services

Treat backing services as attached resources

The database, Cloud Storage, and the email service are treated as attached resources, accessed through plain URLs or Kubernetes service discovery. These services can easily be swapped out by modifying the configuration passed to containers via environment variables. No URLs or service names are hard-coded. Disk volumes themselves are seen as attached resources.
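As an illustration of how cheap the swap can be, an ExternalName Service could redirect the database service name to a managed instance outside the cluster without touching any application configuration (the DNS name below is made up):

```yaml
# Hypothetical: point the existing service name at an external
# database (e.g. a managed Cloud SQL instance) instead of the
# in-cluster Pod; the application keeps using the same DB_HOST.
kind: Service
apiVersion: v1
metadata:
  name: awesome-postgres-service
spec:
  type: ExternalName
  externalName: awesome-db.example.internal
```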

V. Build, release, run

Strictly separate build and run stages

I've used Cloud Container Builder to build my Docker images from my Bitbucket repository. Images are stored on Container Registry, and Pods can then be updated with a rolling update.
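Such a build can be described in a cloudbuild.yaml at the repository root. A minimal sketch, assuming a build triggered by Git tags (the project and image names match the earlier examples; `$TAG_NAME` is the built-in substitution for tag-triggered builds):

```yaml
# Hypothetical cloudbuild.yaml: build the Docker image and
# push it to Container Registry, tagged with the Git tag.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/awesome-project/awesome-api:$TAG_NAME', '.']
images:
- 'gcr.io/awesome-project/awesome-api:$TAG_NAME'
```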

VI. Processes

Execute the app as one or more stateless processes

I've set up a stateful backing service with Hazelcast, and kept the other containers stateless. This made rolling updates incredibly easy and robust.
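Statelessness is what makes replicas interchangeable. A minimal Deployment sketch for one of the APIs (replica count and names are illustrative):

```yaml
# Hypothetical stateless Deployment: any replica can be killed and
# replaced at any time because no state lives inside the container.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: awesome-api
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: AwesomeAPIApp
    spec:
      containers:
      - name: awesome-api
        image: gcr.io/awesome-project/awesome-api:v1.0.0
```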

VII. Port binding

Export services via port binding

Services must be exposed as Kubernetes Services; you can't do it wrong.

kind: Service
apiVersion: v1
metadata:
  name: awesome-service
spec:
  selector:
    app: AwesomeAPIApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8081

VIII. Concurrency

Scale out via the process model

Autoscaling was a lot of fun. I've used:

  • Kubernetes Horizontal Pod Autoscaling based on CPU usage, which controls the number of replicas needed and creates containers on demand:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: awesome-api-autoscaler  # name assumed for illustration
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: AwesomeAPIApp
  targetCPUUtilizationPercentage: 60
  • an autoscaling node pool that spins up pre-emptible machines to handle the need for extra capacity

  • inter-pod anti-affinity to avoid scheduling two replicas of the same container on the same node

          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: app
                    operator: In
                    values:
                    - AwesomeAPIApp
                topologyKey: kubernetes.io/hostname

IX. Disposability

Maximize robustness with fast startup and graceful shutdown

These are the key points for reliability and fault tolerance:

  • Kubernetes liveness and readiness probes helped me achieve a robust rolling update strategy and fault tolerance on pre-emptible VMs.
  • Spring Boot Actuator gave me the health endpoints to probe for free.
  • Node.js-based services have faster startup times than JVM ones... no more Java next time 😂.
            livenessProbe:
              failureThreshold: 3
              httpGet:
                path: /management/health
                port: 8081
                scheme: HTTP
              initialDelaySeconds: 300
              periodSeconds: 60
              successThreshold: 1
              timeoutSeconds: 10
            readinessProbe:
              failureThreshold: 10
              httpGet:
                path: /management/health
                port: 8081
                scheme: HTTP
              initialDelaySeconds: 40
              periodSeconds: 5
              successThreshold: 1
              timeoutSeconds: 5
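The shutdown side can be handled in the Pod spec as well. A sketch of settings I'd pair with the probes, assuming the common trick of sleeping in a preStop hook so in-flight requests drain before the process receives SIGTERM (all values are illustrative):

```yaml
# Hypothetical graceful-shutdown settings: give the Pod time to
# drain traffic before Kubernetes force-kills the container.
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: awesome-api
    image: gcr.io/awesome-project/awesome-api:v1.0.0
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 10"]
```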

X. Dev/prod parity

Keep development, staging, and production as similar as possible

The "infrastructure as code" approach gave exactly the same structure to all my environments. Pre-emptible machines help reduce costs.

XI. Logs

Treat logs as event streams

Stackdriver Logging and Error Reporting worked automagically.

XII. Admin processes

Run admin/management tasks as one-off processes

Not tried yet, but Kubernetes Jobs and CronJobs seem promising for running ETLs and administration tasks.
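A nightly ETL might look like this, a sketch I haven't run (names, schedule, and image are hypothetical):

```yaml
# Hypothetical CronJob running an ETL container every night at 03:00.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: awesome-etl
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: awesome-etl
            image: gcr.io/awesome-project/awesome-etl:v1.0.0
```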