This is part three of a three-part miniseries looking at Deis Workflow, the open source Platform as a Service built on top of Kubernetes.
In part one, we took a look at some basic concepts: Twelve-Factor apps, Docker, Kubernetes, and the basics of Workflow. In part two, we looked at Workflow as a system: both its architecture and its modular composability.
In this post, we're going to install and use Workflow.
Before you continue, you need a Kubernetes cluster.
We'll use Helm Classic to install Deis Workflow on a Kubernetes cluster. If you don't have a cluster up and running that you can use for this tutorial, you can use the GKE cluster I show you how to set up in my previous post.
Got a cluster you can use?
Cool. Let's install some stuff on it.
Helm Classic CLI
First, we must install the Helm Classic CLI.
There are two different sets of commands you must run, depending on how you're approaching this tutorial. If you're using GKE, you have access to the Google Cloud Shell. If you're not using GKE you'll just be using your regular shell.
If you're using Google Cloud Shell, run:
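Something like the following, assuming the standard Helm Classic install script (the installer URL is an assumption):

```shell
# Create ~/bin (already on Cloud Shell's PATH) and install helmc there.
mkdir -p ~/bin
cd ~/bin
curl -sSL https://get.helmc.sh | bash   # assumed installer URL
```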
Google Cloud Shell puts ~/bin on your path, so all we did here is create that directory and install Helm Classic there.
If you're not using Google Cloud Shell, run:
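Roughly the same, except you move the binary onto your PATH yourself (the installer URL is again an assumption):

```shell
curl -sSL https://get.helmc.sh | bash   # assumed installer URL
sudo mv helmc /usr/local/bin/
```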
Next, we install the Deis Workflow CLI.
Using Google Cloud Shell, run:
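A sketch, assuming the Deis v2 CLI install script the project shipped (the URL is an assumption):

```shell
# Install the deis binary into ~/bin, which Cloud Shell keeps on PATH.
cd ~/bin
curl -sSL http://deis.io/deis-cli/install-v2.sh | bash   # assumed installer URL
```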
Now we need to add the Deis chart repository to Helm, which will allow us to install the workflow-v2.4.2 chart with a single command.
Add the chart repository like so:
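With Helm Classic that's a single command (the repository URL is the one Deis documented at the time, to the best of my knowledge):

```shell
helmc repo add deis https://github.com/deis/charts
```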
Update your charts:
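Assuming Helm Classic's update command:

```shell
helmc update
```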
Now fetch the Workflow chart:
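Using the chart name from the repository we just added:

```shell
helmc fetch deis/workflow-v2.4.2
```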
Note: for production deployments be sure to configure off-cluster object storage before you run the helmc generate command. If you don't, your object storage will be ephemeral and you could lose data.
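Generating the manifests, installing them, and watching the pods come up should look roughly like this (a sketch based on the Helm Classic workflow; `deis` is the namespace Workflow installs its components into):

```shell
helmc generate -x manifests workflow-v2.4.2
helmc install workflow-v2.4.2

# Watch the Workflow pods until every one reports Running.
kubectl --namespace=deis get pods -w
```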
The -w flag we used here allows us to watch for status updates.
Watch until all Workflow components are in the Running state. This means Workflow is ready to use.
After a while, you should see this:
Now we need to get the Workflow router's external IP address. We'll use this to access our Workflow cluster.
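The router is exposed as a Kubernetes Service; assuming the default `deis-router` service name:

```shell
kubectl --namespace=deis get svc deis-router
```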
Specifically, we need the EXTERNAL-IP value. When you run this command, you will see your own IP address there. For the rest of this post, I will refer to it as YOUR_EXTERNAL_IP. When copying a command that uses this placeholder, be sure to substitute your own value!
Register and Set Up SSH Keys
To use Workflow, you must first register a user with the controller.
For the purposes of this post, we're not going to set DNS records for our Workflow router's external IP address. Instead, we are going to use nip.io, a simple wildcard DNS service that works for any IP address.
We will use deis register with the controller URL (YOUR_EXTERNAL_IP we got above) to create a new account.
Do that like so:
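Remember to substitute your own IP for YOUR_EXTERNAL_IP:

```shell
deis register http://deis.YOUR_EXTERNAL_IP.nip.io
```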
If the registration is successful you should see:
Now you need to upload your SSH public key so you can use git push to deploy applications to Deis Workflow.
If you do not have an SSH key you must generate one.
This includes Google Cloud Shell users, as Google Cloud Shell does not come with an SSH key pre-generated for you.
Generate a key like so:
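A standard RSA key works fine:

```shell
ssh-keygen -t rsa
```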
Press enter to accept the default location and filename. You can then press enter two more times if you want to use an empty passphrase. Or set a passphrase. Up to you.
Once you're done, change the file permissions:
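A sketch, assuming the key was saved in the default location:

```shell
chmod 600 ~/.ssh/id_rsa   # mode is an assumption; 600 keeps the key owner-only
```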
Now, upload your SSH public key:
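Again assuming the default key location:

```shell
deis keys:add ~/.ssh/id_rsa.pub
```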
Let's make a folder where we are going to store our applications:
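The folder name here is an arbitrary choice:

```shell
# Make a home for the example apps and switch into it.
mkdir -p ~/apps
cd ~/apps
```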
Workflow can deploy any application or service that can run inside a Docker container and uses HTTP port 80.
To be scaled horizontally, application containers must be stateless. Apps can store state in external backing services instead. This is one component of the Twelve-Factor approach.
Workflow supports three ways of building applications: Heroku Buildpacks, Dockerfiles, and Docker images.
Heroku buildpacks are useful if you want to follow Heroku's best practices for building applications. They're also useful if you're porting an application from Heroku.
We have an example buildpack app to demonstrate this workflow.
First, clone the app:
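Assuming Deis's Go buildpack example app (the repository URL is my best guess):

```shell
git clone https://github.com/deis/example-go.git
cd example-go
```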
Now we can create the app:
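Run this from inside the cloned repository; it also adds a `deis` git remote:

```shell
deis create
```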
And finally, deploy the app:
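Deploying is just a git push to the remote that `deis create` set up:

```shell
git push deis master
```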
If you're using the Google Cloud Shell, you cannot use deis open as suggested by the output, because there is no available browser in that environment. But you can open the URL by copying it into your regular browser or by using curl, like so:
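App URLs follow Workflow's app-name-plus-domain pattern, so assuming the app is called example-go:

```shell
curl http://example-go.YOUR_EXTERNAL_IP.nip.io
```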
Because Workflow detected a Heroku-style application, we have one web process running by default on the first deploy.
You can scale up to three web processes like so:
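Scaling is a single command:

```shell
deis scale web=3
```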
Scaling a process type directly changes the number of Kubernetes replicas running that process. You can check this directly:
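Workflow puts each app in its own Kubernetes namespace (the namespace name here assumes the app is called example-go):

```shell
kubectl --namespace=example-go get pods
```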
Ok, now let's change an environment variable.
This will trigger a new release.
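For example (the value is arbitrary):

```shell
deis config:set POWERED_BY="Docker"
```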
This example app is configured to display a different message depending on the value of the POWERED_BY environment variable. So the new version of our app should have a new message.
Let's check it out:
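Hit the app again, substituting your IP; the response should now reflect the new POWERED_BY value:

```shell
curl http://example-go.YOUR_EXTERNAL_IP.nip.io
```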
Each time Workflow makes a deploy, it versions the release.
Here's how to see app releases:
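The CLI lists every release for the current app:

```shell
deis releases
```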
We can revert back to the previous version easily:
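With no arguments, this rolls back to the previous release:

```shell
deis rollback
```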
Why does this say "v4"? Because our app rolled back to the same state as v2, but this counts as a new release. Release v4 is identical to v2, except for the timestamp.
We can also specify which release to roll back to:
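Pass the release you want; v3 here is just an example:

```shell
deis rollback v3
```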
If we check for releases now, we should see 5 versions there:
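Run the listing again:

```shell
deis releases
```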
We created an application, scaled it, changed our config, redeployed it, rolled it back, and so on. And no Kubernetes knowledge was needed at all!
Workflow supports deploying applications via Dockerfiles. A Dockerfile automates the steps for crafting a Docker image. Dockerfiles are very powerful, but they require some extra work to define your application's runtime environment.
Let's deploy a Dockerfile based application with a backend service.
We will install the backend service with Helm.
Add the chart repository to Helm:
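Assuming Deis's demo charts repository (both the repo name and URL are my assumptions):

```shell
helmc repo add demo https://github.com/deis/demo-charts
```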
Fetch the backend chart:
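Assuming a Redis guestbook chart in the demo repository (the chart name is an assumption):

```shell
helmc fetch demo/redis-guestbook
```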
Install the chart:
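Again assuming the chart name:

```shell
helmc install redis-guestbook
```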
This should have installed a Redis cluster for us.
Check the Redis cluster is running:
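Assuming the chart installs into a namespace named after itself:

```shell
kubectl --namespace=redis-guestbook get pods
```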
Everything looks great!
Let's move on to installing the frontend.
Clone the app repository:
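Assuming Deis's Go guestbook example (the repository URL is my best guess):

```shell
git clone https://github.com/deis/example-guestbook-go.git
cd example-guestbook-go
```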
Create the app:
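The app name `guestbook` is an arbitrary choice:

```shell
deis create guestbook
```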
Before we continue, we need to set some environment variables so our app knows where to find the backend Redis cluster.
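A sketch only; the variable names and the Redis address are assumptions, so check the app's README and your cluster for the real values:

```shell
# REDIS_HOST / REDIS_PORT are assumed variable names.
deis config:set REDIS_HOST=YOUR_REDIS_SERVICE_IP REDIS_PORT=6379
```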
Now that's done, you can push the app:
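Same deploy mechanism as before:

```shell
git push deis master
```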
The Docker image was built and pushed to Workflow's Docker registry.
Now, visit this URL, using the IP address in the command output:
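Or fetch it with curl, assuming the app is called guestbook:

```shell
curl http://guestbook.YOUR_EXTERNAL_IP.nip.io
```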
You should see something like this:
We deployed our frontend app with Workflow and our backend service with Helm. To learn more about this workflow, check out the official docs.
Docker Image Application
Workflow supports deploying applications via an existing Docker image as well.
To demo this, we will reuse the same Redis backend.
We create the app with Workflow as before, but we use the --no-remote flag, as we do not need a Git remote set if we're using pre-built Docker images.
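The app name is arbitrary; --no-remote skips creating a git remote:

```shell
deis create guestbook-docker --no-remote
```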
Because we're using the same backend Redis cluster, we set the same environment variables needed by our frontend app.
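Same assumed variable names as before, targeted at the new app with -a:

```shell
deis config:set REDIS_HOST=YOUR_REDIS_SERVICE_IP REDIS_PORT=6379 -a guestbook-docker
```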
Now, pull the pre-built Docker image:
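`deis pull` deploys an existing image; the image path on Quay here is an assumption:

```shell
deis pull quay.io/deis/example-guestbook-go -a guestbook-docker
```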
Now our Workflow app is using the pre-built Docker image we pulled from Quay. This is an alternative to working with apps that need to be built from Git repositories. To learn more about this workflow, check out the official docs.
Open this URL, using your external IP address:
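Assuming the app name from earlier:

```shell
curl http://guestbook-docker.YOUR_EXTERNAL_IP.nip.io
```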
And you should see something like this:
Because both your frontend apps are using the same Redis backend cluster, you will see the same guestbook messages in both apps.