Deis Workflow, now in Beta!

Your PaaS. Your Rules.

Unleash your apps with the leading Kubernetes PaaS.

 

So What is Deis Workflow?

Deis Workflow is an open source PaaS that makes it easy to deploy and manage applications on your own servers. Workflow builds upon Kubernetes and Docker to provide a lightweight PaaS with a Heroku-inspired workflow.

Explore Deis Workflow

Why Use Workflow?

Fast & Easy

Supercharge your team with a platform that deploys applications as fast as you can create them.

Up-to-Date

Benefit from the latest distributed systems technology thanks to a platform that is constantly evolving.

Fully Open Source

Maintain your independence with an open source platform that runs on public cloud, private cloud or bare metal.

Explore the Features >

What Are Users Saying?

"Deis gives our developers a self-service platform backed by a strong open source community. We are excited about Deis' potential at Mozilla."

Benjamin Sternthal, Mozilla

"Deis enables us to deploy Docker-based microservices on our own private PaaS within seconds without human involvement."

Fredrik Björk, TheRealReal

Trusted By:

  • Appspark
  • Cloqworq
  • Cloudmine
  • Instore
  • HotelQuickly
  • villamedia
  • Soficom
  • Bartec Pixavi
  • Democracy OS
  • Socialradar
  • Excel Micro
  • Codaisseur

Recent Blog Posts

  • Kubernetes Overview, Part Two

    03 May 2016

    In my previous post we looked at kubectl, clusters, the control plane, namespaces, pods, services, replication controllers, and labels.

    In this post we take a look at volumes, secrets, rolling updates, and Helm.

    Volumes

    A volume is a directory, possibly with some data in it, accessible to a container as part of its filesystem. Volumes are used, for example, to store stateful app data.

    Kubernetes supports several types of volume:

    • emptyDir
    • hostPath
    • gcePersistentDisk
    • awsElasticBlockStore
    • nfs
    • iscsi
    • glusterfs
    • rbd
    • gitRepo
    • secret
    • persistentVolumeClaim
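
    As a quick illustration, a minimal pod manifest that mounts an emptyDir volume into a container might look like this (the pod, container, and volume names here are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: cache-pod            # hypothetical pod name
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: scratch          # mount the volume into the container
          mountPath: /cache
      volumes:
      - name: scratch
        emptyDir: {}             # empty directory created when the pod starts

    An emptyDir volume lives as long as the pod does, which makes it handy for scratch space shared between containers in the same pod.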
  • Deis Workflow, Beta 3

    28 Apr 2016

    Time keeps on slippin', slippin', slippin', into the future. But not Deis Workflow Beta releases.

    The team just cut Beta 3 of Deis Workflow. We've been happy with the two-week release cadence. Keep your eyes out for Beta 4 due May 11th and our Release Candidate May 25th.

    Now, for beta highlights!

  • Kubernetes Overview, Part One

    27 Apr 2016

    Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster.

    Kubernetes provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure the state of the cluster continually matches the user's intentions.

    Kubernetes enables you to respond quickly to customer demand by scaling or rolling out new features. It also allows you to make maximal use of your hardware.

    Kubernetes is:

    • Lean: lightweight, simple, accessible
    • Portable: public, private, hybrid, multi cloud
    • Extensible: modular, pluggable, hookable, composable, toolable
    • Self-healing: auto-placement, auto-restart, auto-replication

    Kubernetes builds on a decade and a half of experience at Google running production workloads at scale, combined with best-of-breed ideas and practices from the community.

    Kubernetes supports Docker and rkt containers, with more container types to be supported in the future.

    In this miniseries, we’ll cover Kubernetes from the ground up.

    Let’s start with the basic components.

  • Continuous Deployment With Helm, Deis Workflow, and Wercker

    21 Apr 2016

    Deis Workflow is currently in beta. But what is it like to work with? Well, I created an example repository on GitHub to demo some functionality.

    Using this example, we'll build a simple, multi-tier web application using Helm, Deis Workflow, and Wercker for continuous deployment.

    When we finish, we'll have:

    • A backend Redis cluster (for storage)
    • A web frontend (installed as a Deis Workflow app) that interacts with Redis via JavaScript
    • Wercker for continuous deployment of your Docker image to Deis Workflow

    Prerequisites

    Before we continue, you'll need a few things set up.

    Firstly, you need a running Kubernetes cluster that is accessible remotely so Wercker can deploy new versions of your Docker image.

    Next, you'll need Helm installed. Helm is the Kubernetes package manager developed by the Deis folks.

    You can install Helm by running:

    curl -s https://get.helm.sh | bash
    

    Then you'll need to install Deis Workflow. Consult the Deis docs for that!

    We're also going to use Docker Hub for hosting our Docker repository. So you'll need an account there.

    And finally, head over to Wercker and set up an account.

    App Setup

    Install Redis With Helm

    As a quick refresher, a chart in Helm lingo is a packaged unit of Kubernetes software. For the purposes of this demo, I wrote a chart that sets up Redis for you.

    Point Helm at the demo repository by running:

    $ helm up
    $ helm repo add demo-charts https://github.com/deis/demo-charts
    $ helm up
    

    Now install the chart I wrote for this demo that sets up Redis:

    $ helm fetch demo-charts/redis-guestbook
    $ helm install redis-guestbook
    

    And done!

    Create the Deis App

    To work with Deis Workflow, we need to create a Deis app.

    You can do that by running:

    $ deis create guestbook --no-remote
    

    We're creating the app with --no-remote because Deis only needs a Git remote when we’re using it for building Docker images. But we’re using Wercker for that.

    Once that's done, the only thing we need to do is specify some environment variables so that our app knows how to contact the Redis cluster.

    Run this command:

    $ deis config:set GET_HOSTS_FROM=env REDIS_MASTER_SERVICE_HOST=redis-master.default REDIS_SLAVE_SERVICE_HOST=redis-slave.default -a guestbook
    

    Now that that's set up, we can set up the code.

    Set Up the Code

    This bit's easy.

    Fork my demo repo to your GitHub account.

    Then, clone your fork locally:

    $ git clone https://github.com/USERNAME/example-guestbook-wercker.git
    

    Now, we can set up continuous deployment with Wercker so changes made to your fork will result in automatic deployments to your Deis Workflow.

    Set Up Continuous Deployment

    Log into your Docker Hub account and create a new repository. You can do that via the Create menu, then Create Repository, then call it something like example-guestbook-wercker, or whatever you want.

    Log in to your Wercker account and select Create, then Application. Connect Wercker to your GitHub account and select your repository. When configuring access, select the default option: check out the code without using an SSH key.

    Once the app is created, go to Settings, then Environment Variables, and create the following key-value pairs:

    • DOCKER_USERNAME: Your Docker Hub or other hosted Docker registry account name
    • DOCKER_PASSWORD: Your Docker registry account password (you'll probably want to mark this as protected, which hides it from the UI)
    • DOCKER_REPO: Your Docker Hub repository, e.g. USERNAME/example-guestbook-wercker
    • DEIS_CONTROLLER: Your Deis controller URL (see your ~/.deis/client.json file)
    • DEIS_TOKEN: Your Deis user token (see your ~/.deis/client.json file)
    • TAG: A tag for tagging your Docker image, e.g. latest

    These environment variables will then be passed into the wercker.yml file before it is evaluated by Wercker.
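
    For reference, a deploy pipeline wired up this way might look roughly like the sketch below. This is not the exact wercker.yml from the demo repo; the internal/docker-push step and the deis pull command are shown as one plausible wiring, and the app name guestbook matches the app we created earlier:

    deploy:
      steps:
        # Build and push the image to the registry configured above
        - internal/docker-push:
            username: $DOCKER_USERNAME
            password: $DOCKER_PASSWORD
            repository: $DOCKER_REPO
            tag: $TAG
        # Tell Deis Workflow to deploy the freshly pushed image
        - script:
            name: deploy to deis
            code: deis pull $DOCKER_REPO:$TAG -a guestbook

    The DEIS_CONTROLLER and DEIS_TOKEN variables would be used to authenticate the deis client before the pull step runs.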

    Test It

    Make some changes to your code. Then, push to GitHub.

    Wercker will see this, and do the following:

    • Build and tag your Docker image
    • Push to your Docker registry
    • Deploy your Docker image to Deis Workflow
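
    If you were doing this by hand instead, the equivalent steps would look something like the following (the image name and tag are illustrative):

    $ docker build -t USERNAME/example-guestbook-wercker:latest .
    $ docker push USERNAME/example-guestbook-wercker:latest
    $ deis pull USERNAME/example-guestbook-wercker:latest -a guestbook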

    There we have it.

    Continuous deployment with Helm, Deis Workflow, and Wercker.

    Stay tuned for more posts like this.

  • Schedulers, Part 2: Kubernetes

    14 Apr 2016

    In my previous post I introduced the concept of scheduling and took a look at two basic monolithic schedulers: fleet and swarm. In summary: schedulers are responsible for distributing jobs across a cluster of nodes. However, basic monolithic schedulers, by design, have limits on performance and throughput.

    In this post we take a look at how Kubernetes improves on the basic monolithic design.

    Intro to Kubernetes

    Kubernetes is a tool for managing Linux containers across a cluster.

    Originally developed by Google, Kubernetes is lightweight, portable, and massively scalable. The design is highly decoupled and splits into two main components: a control plane and worker node services. The control plane takes care of assigning containers to nodes and manages cluster configuration. Worker node services, which run on the individual machines in your cluster, manage the local containers.

    Within Kubernetes, we have the concept of a pod. This is a group of colocated containers, like a pod of whales or a pod of peas. Containers in the same pod share the same namespace. Namespaces are used for service discovery and segregation.
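
    For example, a pod with two colocated containers could be declared like this (a hypothetical sketch; both containers share the pod's network namespace, so they can reach each other over localhost):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-logger      # hypothetical pod name
    spec:
      containers:
      - name: web
        image: nginx
      - name: log-collector      # sidecar container in the same pod
        image: fluentd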

    Kubernetes requires a dedicated subnet for each node—or an overlay network. This is so that each pod can get a unique IP address in the cluster. This can be achieved by deploying Kubernetes on a cluster with Flannel, OpenVSwitch, Calico, OpenContrail, or Weave. For more details about Kubernetes networking, see the docs.

    Let’s take a more detailed look at all this.


Your PaaS. Your Rules.

See how it works >