Private Applications on Workflow
Last week, we released Workflow v2.4. In it, we added something called deis routing. This feature allows you to add or remove an application from the routing layer. If you remove an application from the routing layer, it continues to run within the cluster while being unreachable from the outside world. The application is still reachable internally, however, thanks to Kubernetes services. This allows for some pretty neat interactions: users can run internal service APIs or backing services like Postgres without exposing them to the outside world.
In this post, I'll take a closer look at this new feature and show you how and why you'd want to use it with your application.
Why Disable Routing?
For some use-cases, a dedicated services abstraction that handles application dependencies might be a good fit. Which is good, because we have that on the roadmap. But for other use-cases, deploying a backing service as an application with routing disabled probably makes more sense. For example:
- You have a public-facing application that calls a private API you wrote in-house. The app providing the private API should not be exposed to the public.
- Or, you have a public-facing application that ships data to a custom backing service you wrote in-house. That backing service may be internal to your company, so making a public service abstraction is out of the question.
In these two cases, it would make sense to deploy the application, but disable routing for the internal API or for the backing service.
Let's look at a simple scenario.
We have multiple applications, all backed by their own form of authentication. We want to bring in our own Single-Sign-On (SSO) application that is public facing. Then we want to disable all access to the other applications such that the only way to use these apps is to authenticate with the SSO app.
First, let's deploy an example Go web application:
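As a sketch, the deployment might look like this. The app name `go` and the cluster domain `example.com` are assumptions; any buildpack-compatible app would work:

```shell
# Clone Deis' example Go app (any simple web app would do)
git clone https://github.com/deis/example-go.git
cd example-go

# Create an app named "go" and deploy it with a git push
deis create go
git push deis master

# Verify it is reachable through the router
curl http://go.example.com
```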
Now we know it's running, let's make it unreachable via the router:
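Assuming the app is named `go` as above, disabling routing is a single command:

```shell
# Remove the app from the routing layer; it keeps running in the cluster
deis routing:disable -a go
```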
We can check this worked like so:
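One way to check, assuming the same hostname as before:

```shell
# With routing disabled, the router no longer serves this hostname,
# so the request should fail rather than reach the app
curl -i http://go.example.com
```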
Now, let's deploy the SSO app.
For this example, we'll just deploy an nginx instance configured with basic authentication which then proxies requests back to the example Go app. I've created an example GitHub repository that does all this for you.
Clone and deploy the SSO app like so:
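A rough sketch of those steps. The repository URL is a placeholder for the example repo mentioned above, and the app name `sso` is an assumption:

```shell
# Placeholder URL for the example SSO repository mentioned above
git clone <sso-repo-url> sso
cd sso

# Create the app and point nginx at the private app's internal service
deis create sso
deis config:set UPSTREAM_HOST=go.go -a sso

# Deploy
git push deis master
```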
Note that the UPSTREAM_HOST environment variable is set to go.go. Thanks to Kubernetes' internal service discovery, we can reach the go service in the go namespace at that address, because Kubernetes DNS resolves names of the form service.namespace. With UPSTREAM_HOST set, the nginx app requires basic authentication and proxies correctly authenticated requests through to the application.
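As a hypothetical sketch (the actual configuration in the example repo may differ), the nginx side of this setup amounts to basic auth in front of a proxy_pass to the internal service name:

```nginx
# Sketch only: enforce basic auth, then proxy to the private app's
# internal Kubernetes service (service "go" in namespace "go")
server {
    listen 80;

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://go.go;
        proxy_set_header Host $host;
    }
}
```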
Now that the SSO app is deployed, we can test our setup:
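For example, with the assumed hostname:

```shell
# No credentials supplied: nginx should reject this request
# with 401 Unauthorized
curl -i http://sso.example.com
```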
Great. We're not authenticated, so we don't have access.
Let's try again with authentication credentials:
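The credentials are whatever the SSO app's htpasswd file defines; `user:password` below is just a placeholder:

```shell
# With valid credentials, nginx proxies the request through
# to the private Go app
curl -i --user user:password http://sso.example.com
```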
Great! As you can see, once authenticated, the SSO app proxies requests to the otherwise inaccessible backend application.
Let's check the logs:
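We can tail the private app's logs with the Workflow CLI:

```shell
deis logs -a go
```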
From the IP address in this log, we can see that the request to our private application originated from the SSO app.
In this post we took a brief look at how and why you might want to disable routing for one of your Deis Workflow apps.
Migrating from fleet to Kubernetes for v2.0 has allowed us to make many important changes to Deis Workflow. Being able to enable or disable the routing mesh at will is just one of them.
If you're interested in trying out Deis Workflow, check out the quick start.