21 Aug 2015 in Deis v1 PaaS

Thanks Keerthan Mala!

Everyone involved with Deis sends a smiling thank you to Keerthan Mala.

Keerthan joined the Engine Yard Deis team this summer as an intern. It wasn't a hard choice: he was one of the smartest scheduling and distributed systems students we could find.

Keerthan combined forces with Sivaram Mothiki to create a formidable scheduler team. Keerthan owned Kubernetes (k8s), the new container cluster management technology that has the Deis project fired up. He persevered, even as the code around him changed rapidly, to bring the Kubernetes Scheduler preview to Deis v1.9, allowing k8s to be used in place of the default fleet scheduler. He also introduced flannel and etcd2 into Deis. Keerthan was always ready to help with debugging or product testing, and he helped steer the future technology direction of Deis as a key member of the R&D team.

Keerthan now returns to the East Coast to finish his master's degree, but we will keep his desk warm in hopes of a triumphant return. Best of luck, and thanks again, Keerthan, for showing us what's next and for making the Boulder office fun!

20 Aug 2015 in Deis v1 PaaS, Tutorial, AWS

How I Deployed My First App To Deis

Have you ever felt the pain that comes when your app runs fine on development, but breaks terribly in production? Maybe your CI build has been red for days, but you haven't had time to figure out how the CI server is misconfigured?

With containers, you can easily rid yourself of such dependency woes. If the app runs in a container on one machine, it will most likely run in the same container on another.
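As a rough illustration (the image name and port below are hypothetical), the image you build and test on one machine is the same image you pull and run anywhere else:

    $ docker build -t example/myapp .             # build the image on your development machine
    $ docker push example/myapp                   # publish it to a registry
    $ docker pull example/myapp                   # fetch the identical image on another host
    $ docker run -d -p 8080:8080 example/myapp    # it runs there just as it did locally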

Once you've bought into a container-based development workflow, the question soon arises: how can I get my production server to run my application in a container, without having to provision a bare server with all of its supporting services, write deploy tasks, and handle scaling on my own? In short, can I have a managed production environment that also supports containers?

The answer is yes. Using Deis, an open source Platform as a Service, you can host and manage your Docker-based application using your own Amazon Web Services (AWS) servers, without the hassle of configuring a bare Linux server.

I recently deployed a simple Rails app to Deis, and took notes along the way. In this post, I'll share the steps I took to set up a Deis Pro account and deploy a new application.
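The post walks through each step in detail, but the core of a Deis v1 deploy boils down to a handful of commands. This is only a rough sketch, with a placeholder controller URL and app name, and the Deis Pro setup may differ slightly:

    $ deis register http://deis.example.com    # create an account on your Deis controller
    $ deis keys:add                            # upload your SSH public key
    $ deis create myapp                        # create the application
    $ git push deis master                     # build and deploy straight from your Git repo
    $ deis open                                # open the running app in your browser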

Read More
14 Aug 2015 in Deis v1 PaaS

Thanks Joshua Anderson!

Everyone involved with Deis extends a heartfelt thank you to Joshua Anderson.

Joshua joined the Engine Yard Deis team this summer as an intern, having already created many features and fixes as an outside contributor. He hit the ground running so fast, we could barely keep up with him.

Joshua refactored deisctl, adding some features and lots of tests. He standardized and beefed up tests throughout the project. The new, faster deis CLI written in Go is due to Joshua's diligence, and somehow he also found time to fix bugs and add significant features to deis-controller, write documentation, give us a nifty git commit hook, and propose and start implementing an enhanced permissions scheme. He always asks insightful questions and is fun to work with.

Joshua heads back to school soon, and to say that we will miss him is quite an understatement. Thanks for everything Joshua, and happy trails until we meet again!

11 Aug 2015 in Legacy Apps, Perspective

Why Your App Won’t Work In The Cloud

There are two kinds of apps for the cloud: ones that work and ones that don’t. The ones that work are called Twelve Factor apps, and they work because they were written specifically for the cloud. The ones that don’t work we call legacy apps; they were designed to run on traditional VPS hosts.

Unfortunately, most popular apps are legacy apps. They weren’t written with the cloud in mind, so they generally won’t work without modification. Legacy apps include offerings such as WordPress, Magento, and Drupal. They might also include any in-house apps you are thinking about moving to the cloud.

So what’s to be done about this? Is there any way to run legacy apps in the cloud?


Some approach this problem by deploying the legacy app to a single server, and then scaling that up to meet demand. But this is no different from traditional VPS hosting! And you’re going to get a nasty shock when Amazon retires (read: shuts off) your host...

A better solution is to make changes to the app itself, so that it’s compatible with the cloud. Being compatible means that it can be distributed across a cluster. This way, you can scale out by adding more servers. When servers get sick, you can replace them with new ones.
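On Deis, for instance, once an app can be distributed this way, scaling out is a single command (the process count below is only illustrative):

    $ deis scale web=5    # run five web containers, spread across the cluster
    $ deis ps             # list the app's containers and their current state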

If you start making significant changes to an off-the-shelf legacy app, however, then you are essentially forking that code base. Not only will this require dedicated developer time, but you’ll have to continually merge in upstream changes. You’d think there’d be a better approach.

Fortunately, there is!

Read More
6 Aug 2015 in Series: App Principles, Legacy Apps

Share Nothing, Scale Everything

In the previous post in this series, we explained how the shared-nothing architecture places additional constraints on cloud app developers. We also explained how embracing these constraints enables apps to have high scalability and high availability.

In this post, we explain how to adapt an app for the cloud by removing any dependency on the file system, in order to make it compatible with a shared-nothing architecture.

Replacing the File System

[Image: a tower of filing cabinets set against the sky]

Putting your file system in the cloud is asking for trouble...

If you’re deploying an existing app to the cloud, whether it’s an internal app or an off-the-shelf app, you may find that there are some points of contention.

The most common problem we have found is that apps designed for traditional hosting environments expect the file system to behave like a database. That is, they write out a file, and then expect that this file is going to exist at some point in the future.

This is a problem for languages like PHP, where many of the off-the-shelf apps have existed since long before the cloud was popular. These apps generally assume that you are only using one server, and that the master copy of your site lives on that server.

Unfortunately, this causes some problems. This model does not work when you want to scale across multiple servers, or when you want to keep the master copy of your site in a revision control system like Git.

Let’s take one example: WordPress. The default WordPress configuration requires write access to the wp-content directory on the local file system. If you log into the WordPress administration console and make some changes, WordPress may update a file on the local file system. But if you have multiple servers, none of the other servers have this updated config.
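As a contrived sketch of that failure mode (the host names and the generated file are hypothetical), a file written by one web server simply never appears on its siblings:

    # On the server that handled the admin request, the new file exists:
    $ ssh web-1 'ls /var/www/wp-content/'
    plugins  some-generated-config.php  themes  uploads

    # On its sibling behind the same load balancer, it does not:
    $ ssh web-2 'ls /var/www/wp-content/'
    plugins  themes  uploads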

If, on the other hand, you re-deploy from Git, your configuration changes will be overwritten! You could try to use something like gitdocs to automatically propagate changes, but what happens when you have a merge conflict, or your local state becomes particularly byzantine? Your app could fail instantly, or it could suffer hidden corruption and fail later in a way that is difficult to debug.

So what’s the solution?

Read More