Are Containers Still Relevant?

In today's ecosystem, do serverless, PaaS, and no-code solutions make containers obsolete?


When sitting down to start a new project, or figuring out a road map for an existing one, the options for actually running your workloads have never been more wide open. We live in an era where we are spoiled for choice when it comes to platforms: open-source tools, paid services, enterprise offerings, and everything in between. That abundance has made selecting tools harder, because the risk of being paralyzed by choice grows with every new option. So it's only fair to ask the question: are containers still a viable route now and for the future, or have we grown beyond them? To answer that, let's take a look at the alternatives and see how they compare to containers.

Serverless

The next logical step from containers is serverless. Services like AWS Lambda and Google Cloud Functions make running serverless applications easy, especially ones that are inherently event-driven. Combine this ease of operations with application frameworks like the Serverless Framework and you have a simple, developer-focused workflow that lets you go from idea to production in very little time. This can be a great workflow for prototyping APIs or rapidly developing new features for existing products. Beyond that speed, one of the great strengths of a serverless architecture (especially when running in a public cloud provider like AWS) is that you only pay for the compute you actively use; you are not paying for idle resources. That has the potential to be a major cost savings over an always-on architecture, but only in certain cases. There's an invocation threshold beyond which running an app in a serverless environment becomes more expensive, and often less performant, than deploying the same application on a dedicated VM where you pay a flat hourly fee rather than per invocation (a rough break-even sketch follows the pros and cons below). Let's see how serverless stacks up to containers:

PROs:

  • Rapid development flow
  • Potential cost savings (to a point)
  • Easy to maintain

CONs:

  • Paradigm only makes sense for certain workloads
  • Often time-boxed with a maximum run time; long-running jobs need special consideration
  • Cost is very usage-dependent
  • Locked into a small subset of products; not easily portable
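
To make that invocation threshold concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (request price, GB-second price, memory size, duration, VM rate) is an illustrative assumption rather than a current provider rate, so plug in your own figures before drawing conclusions.

```python
# Rough break-even sketch: pay-per-invocation (Lambda-style) vs. a flat-fee VM.
# All prices below are illustrative placeholders, not real provider rates.

PRICE_PER_MILLION_REQUESTS = 0.20   # assumed request charge, $ per 1M invocations
PRICE_PER_GB_SECOND = 0.0000167     # assumed compute charge, $ per GB-second
MEMORY_GB = 0.5                     # memory allocated to each invocation
AVG_DURATION_S = 0.2                # average run time per invocation, seconds

VM_PRICE_PER_HOUR = 0.02            # assumed flat hourly fee for a small VM


def serverless_monthly_cost(invocations_per_month: int) -> float:
    """Monthly cost of running every request as a function invocation."""
    request_cost = invocations_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = invocations_per_month * MEMORY_GB * AVG_DURATION_S * PRICE_PER_GB_SECOND
    return request_cost + compute_cost


def vm_monthly_cost(hours: float = 730) -> float:
    """Monthly cost of an always-on VM, independent of traffic."""
    return VM_PRICE_PER_HOUR * hours


if __name__ == "__main__":
    for monthly_invocations in (100_000, 1_000_000, 10_000_000, 100_000_000):
        s = serverless_monthly_cost(monthly_invocations)
        v = vm_monthly_cost()
        winner = "serverless" if s < v else "VM"
        print(f"{monthly_invocations:>11,} invocations/month: "
              f"serverless ${s:8.2f} vs VM ${v:6.2f} -> {winner}")
```

With these made-up numbers the flat-fee VM pulls ahead somewhere around eight million invocations a month; the real crossover depends entirely on your memory allocation, average duration, and your provider's pricing.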

Platform-As-A-Service (PaaS)

Similar to serverless but distinctly different are Platform-as-a-Service (PaaS) solutions like Heroku and Cloud Foundry. PaaS offerings generally differ from serverless in that they are designed for "always on" services rather than event-driven ones, but they typically retain the same developer-focused workflows. Under the hood, most of these solutions are actually running your code in containers, but that is abstracted away from the user, who is instead presented with "buildpacks": recipes that detect your application's language and assemble a runtime image with the packages it needs. That abstraction is really the heart of what makes PaaS products compelling, because all of the messy parts of running a production application are hidden from the user. It can also be one of the downfalls of PaaS, though: it can be hard to extricate applications from a PaaS platform and move them elsewhere should the need arise. Additionally, depending on the maturity of the platform you are using, native integrations with external services (databases, caches, queues, and so on) can be limited or non-existent, severely limiting your architectural options. And all of this abstraction and ease of use usually comes at a price: PaaS platforms tend to be more expensive than more "do it yourself" IaaS offerings like AWS or GCP.

PROs:

  • Ease of use
  • Developer focused workflow
  • Little to no operational overhead

CONs:

  • Vendor lock-in
  • Price
  • Integration availability

No-Code Solutions

No-code solutions are becoming more popular as a way of quickly getting a startup off the ground, especially among non-technical founders. The ability to create an entire web-based platform without needing to learn to code is compelling, and for those who don't know how to code, it can be necessary. Today's no-code solutions are blurring the line between business processes and software; business logic that might have taken weeks to develop in the past can now be wired up in a matter of minutes or hours on a no-code platform. For non-technical people or teams that can be a huge win, but does something like this even belong in the software development arena? In terms of paradigm, no-code solutions are similar to serverless in that both operate on an event-driven model, but no-code solutions have one major flaw that the others on this list don't: extreme vendor lock-in. As of right now I know of no platform that lets you move a no-code app from one provider to another. If you decide to build your core app in Microsoft Power Automate or Zoho Creator, that's where you're going to stay until you rewrite it in something else.

PROs:

  • Very low barrier to entry
  • Low cost
  • Non-technical people can easily contribute

CONs:

  • Extreme vendor lock-in
  • Functionality can be limited depending on your platform
  • Inflexible paradigm

Containers

So where do containers fall in all of this? I like to think of containers as the fundamental building blocks of an architecture: the smallest unit of an application that doesn't make sense to subdivide. Whether you run a microservice or an entire monolith inside a container is up to you, but the container should represent the smallest amount of code that can be deployed by itself and still make sense. At their most basic level, containers are just a bundle of Linux kernel features; capabilities like resource isolation and limits (cgroups), namespaces, and network isolation are provided by the Linux kernel and have been repackaged in a way that is simpler for people like us to take advantage of. It is perfectly possible to launch a process and manage it with cgroups, virtual networks, iptables, and other kernel features in a way that mirrors a Docker container, but tools like Docker and Podman provide convenience around these features so we don't have to do it manually. It's this reliance on kernel features that gives containers their portability: provided you have a Linux machine with a relatively modern kernel, you should have no problem running just about any container. This simplicity does come with a cost, however, and that cost is operational complexity. Simply deploying a single container to a bare VM is not a sound way of doing production deployments; to do it the "right" way you need some sort of container orchestration, which introduces complexity and operational overhead. But where containers introduce complexity, they more than make up for it in flexibility and portability.
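
To make the "kernel features, repackaged" point concrete, here is a minimal sketch of doing one small piece of this by hand with raw cgroup v2 files. It assumes a Linux host with cgroup v2 mounted at /sys/fs/cgroup, the memory controller enabled for child groups, and root privileges; the cgroup name "demo" and the 100 MiB limit are made up for illustration.

```python
# Minimal sketch: put a process under a memory limit using raw cgroup v2 files,
# the kind of kernel plumbing Docker and Podman wrap for you.
# Assumes: Linux, cgroup v2 at /sys/fs/cgroup, memory controller enabled in the
# parent's cgroup.subtree_control, and root privileges.
import subprocess
from pathlib import Path

CGROUP = Path("/sys/fs/cgroup/demo")  # illustrative cgroup name

# 1. Create a child cgroup by making a directory in the unified hierarchy.
CGROUP.mkdir(exist_ok=True)

# 2. Cap the group's memory at 100 MiB by writing to memory.max.
(CGROUP / "memory.max").write_text(str(100 * 1024 * 1024))

# 3. Start a workload, then move it into the cgroup by writing its PID
#    to cgroup.procs. From that point on the kernel enforces the limit.
proc = subprocess.Popen(["sleep", "5"])
(CGROUP / "cgroup.procs").write_text(str(proc.pid))

print(f"PID {proc.pid} is now limited to 100 MiB of memory")
proc.wait()

# Cleanup: a cgroup directory can be removed once it has no member processes.
CGROUP.rmdir()
```

Everything a runtime like Docker or Podman layers on top of this (images, layered filesystems, namespaces, networking, a CLI) is convenience around exactly this kind of plumbing, and it's that thin layer over the kernel that makes the resulting containers so portable.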

PROs:

  • Flexibility and portability
  • “Always on” allows for multiple paradigm types
  • Local testing and development made easy

CONs:

  • Production requires some sort of orchestration
  • “Always-on” compute can cost extra compared to serverless depending on load
  • Requires more work around it to make it production-ready

Conclusion

While your personal conclusion might depend greatly on the applications and workloads you are responsible for, my conclusion is fairly easy: containers are still very relevant and will continue to be for a long time. While they introduce greater complexity, their portability and control let engineers do exactly what they need without having to reverse-engineer a solution around something like Heroku.

What do you think? Are containers still relevant in your environments? Let me know in the comments!