Last month, the CNCF put on CloudNative Day to bring together leading contributors in cloud native applications and computing, containers, microservices, centralized orchestration, and related projects.
[Being] cloud native requires a broad set of components to work together and an architecture that departs from traditional enterprise application design. This is a very complicated, fragmented process and the Cloud Native Computing Foundation aims to help make it simpler to assemble these moving parts by driving alignment among technologies and platforms.
As sponsor of the videos, we're proud to share them with you on our blog. In this post, we'll share the first seven talks. In part two, we'll share the rest.
Docker Distributed Application Bundles (DABs) are "an experimental open file format for bundling up all the artifacts required to ship and deploy multi-container apps." DABs contain a complete description of all the services required to run an application, along with details about which images to use, ports to expose, and networks used to link services.
A little over a month ago, I wrote about DABs and outlined how they can be used with multi-tier apps to develop locally with Docker Compose and then bundled for deployment to a Docker Swarm cluster.
In this post, I will expand on my previous post and show how DABs can be used to make an artifact that is deployable on a Kubernetes cluster.
Why? Because by doing this we can take advantage of the awesome developer experience that the Docker tools provide, while deploying artifacts to a production-ready Kubernetes cluster without needing a whole bunch of Kubernetes experience. Win-win.
Note: DAB files are still experimental as of Docker 1.12.
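The Compose-to-bundle flow described above can be sketched in two commands, assuming Docker Compose 1.8+ with the experimental DAB support enabled (the project layout is hypothetical; the bundle file is named after your Compose project):

```shell
# Build the images for the services defined in docker-compose.yml
docker-compose build

# Generate a Distributed Application Bundle; this writes <project>.dab
# describing every service, image, port, and network in the project
docker-compose bundle
```

The resulting .dab file is the artifact that can then be handed to a deployment target, such as a Docker Swarm cluster or, as this post shows, translated for Kubernetes.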
One of the best parts of my job as a solutions architect for Deis is working with an amazing array of talented engineers at companies solving truly interesting problems.
I was recently working with a company on the forefront of wearable fitness trackers. Their modest but world-class engineering team had reached the outer limits of what could be done with Ansible-based Docker deployments in AWS.
While everything worked, there were areas of API entanglement and a lack of orchestration that created duplicated effort and inefficient use of EC2 resources. The company is clearly on a rocketship growth trajectory, so scaling and efficient systems management are forefront on everyone's mind.
Fortunately, they also recognized that the time to pivot to more efficient and scalable architecture is while they're still in an early growth phase.
Kubernetes provides the perfect fit for their use case because it allows more atomic service distribution, easier scaling, and painless service discovery. Also, when the infrastructure below the cluster is configured with autoscaling, rapid growth should be no problem.
In this blog post, I'll take a look at one aspect of the work I did with them: how I got logs shipped off-cluster to Sumo Logic. I will also draw a link to some work I did for another company to send logs to Logentries.
The best way to roll into the weekend is with fresh software, hot off the
presses. The Deis Workflow team just merged the final charts for 2.5!
We've got a ton of functionality packed into 2.5, so hold on to your horses!
Workflow 2.5 includes initial support for Kubernetes Horizontal Pod
Autoscaling, which is not only a mouthful, but pretty neat to boot. Workflow
2.5's theme song is "Glassworks" by Philip Glass. I'm pretty sure this is what
a Horizontal Pod Autoscaler would sound like if it made noise.
Cast scale at the darkness...
Setting a scaling policy for your application is straightforward. Policies are
set per process type, which allows developers to scale each process type independently:
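As an illustration, a policy for a web process type might look like this (the application name and the numbers are hypothetical):

```shell
# Keep the web process between 3 and 8 pods,
# targeting 75% average CPU utilization
deis autoscale:set web --min=3 --max=8 --cpu-percent=75 -a myapp
```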
The Kubernetes HorizontalPodAutoscaler (HPA) does require CPU limits to be set for
the application process type, so make sure you set a limit:
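A sketch of setting that limit with the Workflow CLI follows; the value is illustrative, and flag placement and accepted units may vary between CLI versions:

```shell
# Set a CPU limit on the web process type so the HPA
# has a baseline to compute utilization against
deis limits:set --cpu web=500m -a myapp
```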
Behind the scenes, the HorizontalPodAutoscaler will spring into action, adding
or removing pods so that the average CPU utilization of your application processes
approaches the CPU target.
There is a bit of nuance to the way HPAs work, so spend a bit of time with the
Kubernetes documentation on the scaling algorithm.
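For reference, the object Workflow manages on your behalf is roughly equivalent to a standard HPA manifest like the following sketch (names, target kind, and numbers are illustrative and depend on your Kubernetes version):

```
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: myapp-web
  minReplicas: 3
  maxReplicas: 8
  targetCPUUtilizationPercentage: 75
```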
Viewing and removing scaling policies are simple CLI commands as well:
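For example (application and process-type names are hypothetical):

```shell
# Show the current autoscale policies for the app
deis autoscale:list -a myapp

# Remove the policy for the web process type
deis autoscale:unset web -a myapp
```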
Autoscaling in Workflow should be considered Alpha and we would love your feedback!
Build Result Caching
Thanks to community member @jeroenvisser101,
the Workflow build system now caches build results. This change greatly speeds
up subsequent builds.
Enforce SSL Application by Application
Workflow 2.5 now allows developers to require TLS on a per-application basis.
Instead of a global setting in the router (via
router.deis.io/nginx.ssl.enforce), Workflow CLI has a few new tricks:
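A minimal sketch of enforcing TLS for a single application (the application name is hypothetical):

```shell
# Require HTTPS for this application only
deis tls:enable -a myapp
```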
Now, connections on port 80 for this application will be redirected with
HTTP status code 301 to the HTTPS version. Since this interaction occurs at the
edge router, developers aren't required to use application middleware to
enforce TLS.
To allow both HTTP and HTTPS traffic for an application (which is the default) use tls:disable:
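Sketched with a hypothetical application name:

```shell
# Return to the default: serve both HTTP and HTTPS
deis tls:disable -a myapp
```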
Application-specific IP Address Whitelisting
For developers and operators who need to control access to applications by IP
address, Workflow 2.5 makes this process much easier!
Using the CLI developers may manage IP whitelisting per application:
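The following sketch uses documentation-range addresses and a hypothetical application name:

```shell
# Allow a single address and a CIDR range
deis whitelist:add 203.0.113.5 -a myapp
deis whitelist:add 198.51.100.0/24 -a myapp

# View the current whitelist
deis whitelist:list -a myapp

# Remove an address; emptying the list restores the default open behavior
deis whitelist:remove 203.0.113.5 -a myapp
```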
Adding a whitelist to an application automatically rejects connections from any
un-listed address. Removing the last IP address from a whitelist returns the
application to the default behavior, which accepts connections from any IP address.
Full Release Changelogs
Workflow 2.5 changes are now available in the Workflow
documentation. No more
crawling through GitHub repositories or past blog posts to learn about changes.
Our next release is scheduled for September 28th, 2016. You can check out the 2.6
milestone on each of the component repositories, or take a gander at the
full roadmap.
Last year at KubeCon in San Francisco, I first learnt about Helm—a sort of Homebrew for Kubernetes. It seemed too good to be true, so I dug deeper. Fast forward to today, and I find myself packaging applications with Helm.
In this post, I'll talk briefly about why Helm is so exciting, and then show you how to install and use one of the packages I wrote.
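As a rough sketch of the flow (the chart name is hypothetical, and the exact commands depend on which Helm generation you're running; at the time, Helm Classic used this shape):

```shell
# Search for a chart by name
helm search zookeeper

# Install the chart into your Kubernetes cluster
helm install zookeeper
```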
Why Use a Package Manager?
A team I worked with was deploying various components to Kubernetes, including: Zookeeper, etcd, Consul, Cassandra, Kafka, and Elasticsearch. Each one of these components was using a manifest file that someone on the team had written by hand, and then these manifest files had been improved over time. Each change and each improvement reflected some sort of knowledge or experience the team had gained.
But there are many teams across the world deploying these same components. And let's face it, most deploy situations are similar enough. So each one of these teams is, for the most part, duplicating each other's work.
But what if there was a way to avoid that? What if we could organise that collective knowledge and bring people together to collaborate on it?