14 Sep 2016 in Video, CloudNative Day

CloudNative Day Videos, Part One

The Cloud Native Computing Foundation (CNCF) is a multi-vendor initiative to standardise a common set of cloud technologies.

Last month, the CNCF put on CloudNative Day to bring together leading contributors in cloud native applications and computing, containers, microservices, central orchestration processing, and related projects.

To quote their event page:

[Being] cloud native requires a broad set of components to work together and an architecture that departs from traditional enterprise application design. This is a very complicated, fragmented process and the Cloud Native Computing Foundation aims to help make it simpler to assemble these moving parts by driving alignment among technologies and platforms.

As sponsor of the videos, we're proud to share them with you on our blog. In this post, we'll share the first seven talks. In part two, we'll share the rest.

Read More
12 Sep 2016 in Docker, DAB, Kubernetes, Kompose

Push a Docker DAB to a Kubernetes Cluster in Two Commands!

Docker Distributed Application Bundles (DABs) are "an experimental open file format for bundling up all the artifacts required to ship and deploy multi-container apps." DABs contain a complete description of all the services required to run an application, along with details about which images to use, ports to expose, and networks used to link services.
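To give a feel for the format, here's a trimmed, hypothetical example of what the inside of a DAB might look like (the service name and image digest are made up):

$ cat myapp.dab
{
  "Services": {
    "web": {
      "Image": "myorg/web@sha256:0123456789abcdef...",
      "Networks": ["default"],
      "Ports": [{"Port": 80, "Protocol": "tcp"}]
    }
  },
  "Version": "0.1"
}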

A little over a month ago, I wrote about DABs and showed how you can develop a multi-tier app locally with Docker Compose and then bundle it for deployment to a Docker Swarm cluster.

In this post, I will expand on my previous post and show how DABs can be used to make an artifact that is deployable on a Kubernetes cluster.

Why? Because by doing this we can take advantage of the awesome developer experience the Docker tools provide, and deploy artifacts to a production-ready Kubernetes cluster without needing a whole bunch of Kubernetes experience. Win-win.
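As a sketch, here are the two commands, assuming an app defined in a docker-compose.yml in the current directory. Note that the exact kompose invocation varied across its early releases, so treat the second command as an assumption rather than gospel:

$ docker-compose bundle
Wrote bundle to myapp.dab

$ kompose up --bundle myapp.dab   # flag name may differ by kompose version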

Note: DAB files are still experimental as of Docker 1.12.

Read More
9 Sep 2016 in Kubernetes, logging, Sumo Logic, Logentries

Off-Cluster Kubernetes Logging With Sumo Logic and Logentries

One of the best parts of my job as a solutions architect for Deis is working with an amazing array of talented engineers at companies solving truly interesting problems.

I was recently working with a company on the forefront of wearable fitness trackers. Their modest but world-class engineering team had reached the outer limits of what could be done with Ansible-based Docker deployments in AWS.

While everything worked, there were areas of API entanglement and a lack of orchestration, which created duplicated effort and inefficient use of EC2 resources. The company is clearly on a rocketship growth trajectory, so scaling and efficient systems management are at the forefront of everyone's minds.

Fortunately, they also recognized that the time to pivot to a more efficient and scalable architecture is while they're still in an early growth phase.

Kubernetes is a perfect fit for their use case because it allows more atomic service distribution, easier scaling, and painless service discovery. Also, when the infrastructure beneath the cluster is configured to autoscale, rapid growth should be no problem.

In this blog post, I'll take a look at one aspect of the work I did with them: how I shipped logs off-cluster to Sumo Logic. I'll also draw a comparison with some work I did for another company to send logs to Logentries.

Read More
9 Sep 2016 in Workflow, Release, Announcement

Deis Workflow 2.5 Release

The best way to roll into the weekend is with fresh software, hot off the presses. The Deis Workflow team just merged the final charts for 2.5!

We've got a ton of functionality packed into 2.5, so hold on to your horses!

Workflow 2.5 includes initial support for Kubernetes Horizontal Pod Autoscaling, which is not only a mouthful but pretty neat to boot. Workflow 2.5's theme song is "Glassworks" by Philip Glass. I'm pretty sure this is what a Horizontal Pod Autoscaler would sound like if it made noise.

Cast scale at the darkness...

Setting a scaling policy for your application is straightforward. Policies are set per process-type, which allows developers to easily scale processes independently:

$ deis autoscale:set web --min=3 --max=8 --cpu-percent=75
Applying autoscale settings for process type web on scenic-icehouse... done

The Kubernetes HorizontalPodAutoscaler (HPA) does require CPU limits to be set for the application process type, so make sure you set a limit:

$ deis limits:set web=250m -c
Applying limits... done

=== scenic-icehouse Limits

--- Memory
web     Unlimited

--- CPU
web     250m

Behind the scenes, the HPA will spring into action, adding or removing pods so that the average CPU utilization of your application processes approaches the CPU target.

There is a bit of nuance to the way HPAs work, so spend a bit of time with the Kubernetes documentation on the algorithm.
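As a rough sketch of the math (simplified; the Kubernetes docs are the authority here):

desiredReplicas = ceil(currentReplicas * currentCPUUtilization / targetCPUUtilization)

For example, 4 pods averaging 100% CPU against our 75% target gives ceil(4 * 100 / 75) = 6 pods.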

Viewing and removing scaling policies are simple CLI commands as well:

$ deis autoscale:list
=== scenic-icehouse Autoscale

--- web:
Min Replicas: 3
Max Replicas: 8
CPU: 75%

$ deis autoscale:unset web
Removing autoscale for process type web on scenic-icehouse... done

Autoscaling in Workflow should be considered Alpha and we would love your feedback!

Build Result Caching

Thanks to community member @jeroenvisser101, the Workflow build system now caches build results. This change greatly speeds up subsequent builds.

Enforce SSL Application by Application

Workflow 2.5 now allows developers to require TLS on a per-application basis. Instead of a global setting in the router (via router.deis.io/nginx.ssl.enforce), the Workflow CLI has a few new tricks:

$ deis tls:enable -a spicy-icehouse
Enabling https-only requests for spicy-icehouse... done

Now, connections on port 80 for this application will be redirected with HTTP status code 301 to the HTTPS version. Since this interaction occurs at the edge router, developers aren't required to use application middleware to enforce HTTPS.
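You can verify the behavior with a plain HTTP request (hostname illustrative):

$ curl -I http://spicy-icehouse.example.com
HTTP/1.1 301 Moved Permanently
Location: https://spicy-icehouse.example.com/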

To allow both HTTP and HTTPS traffic for an application (which is the default) use tls:disable:

$ deis tls:disable -a foo
Disabling https-only requests for foo... done

Application-specific IP Address Whitelisting

For developers and operators who need to control access to applications by IP address, Workflow 2.5 makes the process much easier!

Using the CLI, developers can manage IP whitelisting per application (example addresses shown):

$ deis whitelist:add 1.2.3.4,5.6.7.8 -a drafty-zaniness
Adding 1.2.3.4,5.6.7.8 to drafty-zaniness whitelist... done

$ deis whitelist:remove 5.6.7.8 -a drafty-zaniness
Removing 5.6.7.8 from drafty-zaniness whitelist... done

$ deis whitelist -a drafty-zaniness
=== drafty-zaniness Whitelisted Addresses
1.2.3.4

Adding a whitelist to an application automatically rejects connections from any unlisted address. Removing the last IP address from a whitelist returns the application to the default behavior, which accepts connections from any IP address.

Full Release Changelogs

Workflow 2.5 changes are now available in the Workflow documentation. No more crawling through GitHub repositories or past blog posts to learn about changes.

Up Next

Our next release is scheduled for September 28th, 2016. You can check out the 2.6 milestone on each of the component repositories, or take a gander at the Workflow Roadmap.

7 Sep 2016 in Helm, Kubernetes

Trusting Who's at the Helm

Last year at KubeCon in San Francisco, I first learnt about Helm, a sort of Homebrew for Kubernetes. It seemed too good to be true, so I dug deeper. Fast forward to today, and I find myself packaging applications with Helm.

In this post, I'll talk briefly about why Helm is so exciting, and then show you how to install and use one of the packages I wrote.

Why Use a Package Manager?

A team I worked with was deploying various components to Kubernetes, including Zookeeper, etcd, Consul, Cassandra, Kafka, and Elasticsearch. Each of these components used a manifest file that someone on the team had written by hand, and these manifest files had been improved over time. Each change, each improvement, reflected some knowledge or experience the team had gained.

But there are many teams across the world deploying these same components. And let's face it, most deploy situations are similar enough. So each one of these teams is, for the most part, duplicating each other's work.

But what if there was a way to avoid that? What if we could organise that collective knowledge and bring people together to collaborate on it?
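That's the promise of a package manager. As a taste, installing a shared chart looks roughly like this (chart name illustrative; exact syntax depends on your Helm version):

$ helm search zookeeper
$ helm install zookeeper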

Read More