Deis Workflow, now in Beta!
Your PaaS. Your Rules.
Unleash your apps with the leading Kubernetes PaaS.
$ helm install deis/workflow
$ deis create
$ git push deis master
$ deis scale web=10
$ deis config:set POWERED_BY=Kubernetes
Compressing, writing objects... 100% done.
-----> Building Docker image
Creating application... done.
-----> sanest-lakeside deployed to Deis.
So What is Deis Workflow?
Deis Workflow is an open source PaaS that makes it easy to deploy and manage applications on your own servers. Workflow builds upon Kubernetes and Docker to provide a lightweight PaaS with a Heroku-inspired workflow.
Why Use Workflow?
Fast & Easy
Supercharge your team with a platform that deploys applications as fast as you can create them.
Benefit from the latest distributed systems technology thanks to a platform that is constantly evolving.
Fully Open Source
Maintain your independence with an open source platform that runs on public cloud, private cloud or bare metal.
What Are Users Saying?
Recent Blog Posts
03 May 2016
In my previous post we looked at kubectl, clusters, the control plane, namespaces, pods, services, replication controllers, and labels.
In this post we take a look at volumes, secrets, rolling updates, and Helm.
A volume is a directory, possibly with some data in it, accessible to a container as part of its filesystem. Volumes are used, for example, to store stateful app data.
Kubernetes supports several types of volume:
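As a minimal sketch (not taken from the original post), a pod that mounts one of the simplest volume types, an emptyDir, might look like this; the pod, container, and volume names are made up:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo            # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: scratch            # mount the volume into the container
      mountPath: /data
  volumes:
  - name: scratch              # emptyDir: created when the pod starts, deleted with it
    emptyDir: {}
```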
28 Apr 2016
Time keeps on slippin', slippin', slippin', into the future. But not Deis Workflow Beta releases.
The team just cut Beta 3 of Deis Workflow. We've been happy with the two-week release cadence. Keep your eyes out for Beta 4 due May 11th and our Release Candidate May 25th.
Now, for beta highlights!
27 Apr 2016
Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster.
Kubernetes provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure the state of the cluster continually matches the user's intentions.
Kubernetes enables you to respond quickly to customer demand by scaling or rolling out new features. It also allows you to make maximal use of your hardware.
- Lean: lightweight, simple, accessible
- Portable: public, private, hybrid, multi cloud
- Extensible: modular, pluggable, hookable, composable, toolable
- Self-healing: auto-placement, auto-restart, auto-replication
Kubernetes builds on a decade and a half of experience at Google running production workloads at scale, combined with best-of-breed ideas and practices from the community.
In this miniseries, we’ll cover Kubernetes from the ground up.
Let’s start with the basic components.
21 Apr 2016
When we finish, we'll have:
- A backend Redis cluster (for storage)
- Wercker for continuous deployment of your Docker image to Deis Workflow
Before we continue, you'll need a few things set up.
Firstly, you need a running Kubernetes cluster that is accessible remotely so Wercker can deploy new versions of your Docker image.
Next, you'll need Helm installed. Helm is the Kubernetes package manager developed by the Deis folks.
You can install Helm by running:
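At the time this was written, Helm Classic shipped a one-line installer script. The URL below is how I recall that installer, so treat this as a sketch and double-check it against the current Helm documentation:

```shell
# Fetch and run the Helm Classic installer
# (inspect the script before piping anything to bash)
curl -s https://get.helm.sh | bash

# Confirm the client landed on your PATH
helm --version
```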
Then you'll need to install Deis Workflow. Consult the Deis docs for that!
We're also going to use Docker Hub for hosting our Docker repository. So you'll need an account there.
And finally, head over to Wercker and set up an account.
Install Redis With Helm
As a quick refresher, a chart in Helm lingo is a packaged unit of Kubernetes software. For the purposes of this demo, I wrote a chart that sets up Redis for you.
Point Helm at the demo repository by running:
Now install the chart I wrote for this demo that sets up Redis:
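Assuming the demo chart lives in a Git repository of charts, those two steps look roughly like the following. The repository URL, local repo name, and chart name here are placeholders, not the exact ones from this demo:

```shell
# Register the demo chart repository under a local name (URL is a placeholder)
helm repo add demo https://github.com/deis/demo-charts

# Fetch the Redis chart from that repository, then install it (chart name is a guess)
helm fetch demo/redis-guestbook
helm install redis-guestbook
```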
Create the Deis App
To work with Deis Workflow, we need to create a Deis app.
You can do that by running:
We're creating the app with --no-remote because Deis only needs a Git remote when we're using it to build Docker images. But we're using Wercker for that.
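Concretely, the create step might look like this; the app name is just an example:

```shell
# Create the Deis app without a git remote (Wercker builds the image, not Deis)
deis create example-guestbook --no-remote
```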
Once that's done, the only thing we need to do is specify some environment variables so that our app knows how to contact the Redis cluster.
Run this command:
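A sketch of what that command could look like. The variable names and the Redis service address are assumptions here, so match them to whatever your app and the Redis chart actually use:

```shell
# Point the app at the Redis cluster via environment variables
# (REDIS_HOST value is a hypothetical in-cluster service address)
deis config:set REDIS_HOST=redis.default.svc REDIS_PORT=6379 -a example-guestbook
```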
Now that's set up, we can set up the code.
Set Up the Code
This bit's easy.
Fork my demo repo to your GitHub account.
Then, clone your fork locally:
$ git clone https://github.com/USERNAME/example-guestbook-wercker.git
Now, we can set up continuous deployment with Wercker so changes made to your fork will result in automatic deployments to your Deis Workflow.
Set Up Continuous Deployment
Log into your Docker Hub account and create a new repository. You can do that via the Create menu, then Create Repository. Call it something like example-guestbook-wercker, or whatever you want.
Log in to your Wercker account and select Create, then Application. Connect Wercker to your GitHub account and select your repository. When configuring access, select the default option: check out the code without using an SSH key.
Once the app is created, go to Settings, then Environment Variables, and create the following key-value pairs:
- DOCKER_USERNAME: Your Docker Hub (or other hosted Docker registry) account name
- DOCKER_PASSWORD: Your Docker registry account password (you'll probably want to mark this protected, which hides it from the UI)
- DOCKER_REPO: Your Docker Hub repository, e.g. USERNAME/example-guestbook-wercker
- DEIS_CONTROLLER: Your Deis controller URL
- DEIS_TOKEN: Your Deis user token
- TAG: A tag for tagging your Docker image, e.g. latest
These environment variables will then be passed into the wercker.yml file before it is evaluated by Wercker.
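To make that concrete, the deploy section of a wercker.yml might consume those variables along these lines. This is a hedged sketch, not the exact file from the demo repo: the build box and step layout are assumptions, though internal/docker-push is Wercker's standard registry-push step:

```yaml
# wercker.yml (sketch)
box: golang
build:
  steps:
    - script:
        name: build the app
        code: go build ./...
deploy:
  steps:
    # Push the built container to the registry, using the
    # environment variables configured in the Wercker UI
    - internal/docker-push:
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        repository: $DOCKER_REPO
        tag: $TAG
```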
Make some changes to your code. Then, push to GitHub.
Wercker will see this, and do the following:
- Build and tag your Docker image
- Push to your Docker registry
- Deploy your Docker image to Deis Workflow
There we have it.
Continuous deployment with Helm, Deis Workflow, and Wercker.
Stay tuned for more posts like this.
14 Apr 2016
In my previous post I introduced the concept of scheduling and took a look at two basic monolithic schedulers: fleet and swarm. In summary: schedulers are responsible for distributing jobs across a cluster of nodes. However, basic monolithic schedulers, by design, have limits on performance and throughput.
In this post we take a look at how Kubernetes improves on the basic monolithic design.
Intro to Kubernetes
Kubernetes is a tool for managing Linux containers across a cluster.
Originally developed by Google, Kubernetes is lightweight, portable, and massively scalable. The design is highly decoupled and splits into two main components: a control plane and worker node services. The control plane takes care of assigning containers to nodes and manages cluster configuration. Worker node services, which run on the individual machines in your cluster, manage the local containers.
Within Kubernetes, we have the concept of a pod. This is a group of colocated containers, like a pod of whales, or a pod of peas. Containers in the same pod share the same namespace. Namespaces are used for service discovery and segregation.
Kubernetes requires a dedicated subnet for each node—or an overlay network. This is so that each pod can get a unique IP address in the cluster. This can be achieved by deploying Kubernetes on a cluster with Flannel, OpenVSwitch, Calico, OpenContrail, or Weave. For more details about Kubernetes networking, see the docs.
Let’s take a more detailed look at all this.