Container-based deployment models are the modern way to develop and deliver applications. The most common tool for orchestrating those containers is Kubernetes, an open-source container-orchestration system that automates application deployment, scaling, and management.
Kubernetes has helped usher in a standardized way to deploy and manage applications at scale, but it can be a sprawling, difficult beast to manage when your application becomes more mature and more complex. A company will need to have a robust DevOps team to manage a full-fledged Kubernetes-based production system.
Red Hat OpenShift is an opinionated way to get your app running in production, consolidating many of the processes required for running your own Kubernetes deployments. OpenShift abstracts away infrastructure worries and provides a GUI that makes deployment and management more manageable than traditional Kubernetes. The platform gives you situational awareness, meaning you only need to understand how to use the parts of the platform that are relevant to your deployment scenario. You can expand your understanding and use of OpenShift — for instance, to access control of pipelines and images — as your needs change.
My colleague JJ Asghar summed it up nicely: “OpenShift provides creature comforts to talk to the Kubernetes API, at the same level of robustness, as long as you’re willing to use the opinions OpenShift brings.”
The good news? Those opinions are tried and tested, enterprise-ready choices with the backing and support of Red Hat.
So, what do Node.js developers need to know about OpenShift deployment? This blog post covers the “what” and “how” of deploying your Node.js application in an OpenShift environment.
Source your application for deployment on OpenShift
When deploying on OpenShift, you work with images. I’ll borrow language directly from the Kubernetes documentation to explain what an image is in this context:
> A container image represents binary data that encapsulates an application and all its software dependencies. Container images are executable software bundles that can run standalone and that make very well defined assumptions about their runtime environment.
There are a number of ways to prepare your application for deployment on OpenShift:
- Use a pre-existing container image
- Source to image (commonly written as S2I)
- Use a Docker image
- Use Helm
For Node.js developers, I recommend the S2I path because it is great for getting up and running quickly. You basically point OpenShift at your source (typically a Git repository) and it builds an image for you that is ready to deploy. This is the fastest way to get going on OpenShift. Think of it like the “Popcorn” button on your microwave.
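To give a feel for how short the S2I path can be, here is a sketch using the `oc` CLI. It assumes you are already logged in to a cluster; the repository is the commonly used sclorg Node.js sample, and the app name is a placeholder — substitute your own.

```shell
# Create an app from source using the Node.js S2I builder image.
# OpenShift clones the repo, installs dependencies, builds an image,
# and rolls out a deployment in one step.
oc new-app nodejs~https://github.com/sclorg/nodejs-ex --name my-node-app

# Follow the build that S2I kicked off.
oc logs -f buildconfig/my-node-app

# Expose the service so the app is reachable from outside the cluster.
oc expose service/my-node-app
```

Once the build finishes, `oc get route my-node-app` shows the public URL OpenShift assigned.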
Learn about using Node.js, S2I, and Nodeshift in this article: Deploying Node.js applications to Kubernetes with Nodeshift and Minikube.
If you want more control over how your image is configured, you’ll want to go down the pre-existing image pathway. With pre-existing images, you lay out the details of your application in config files, create an image, and push that image to a registry like Quay.io. There are a few advantages to this method. First, you have more control over the details of your application’s image and how it will be deployed and managed in a container-based environment like OpenShift. Second, your application is more easily replicated when it needs to scale or when OpenShift needs to replace failed containers.
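A minimal sketch of that workflow, assuming `podman` (or `docker`) is installed and you have a Quay.io account. The Dockerfile contents, image name, and `server.js` entry point are illustrative placeholders, not a prescribed layout:

```shell
# Write a minimal Dockerfile for a Node.js app (inline here for illustration).
cat > Dockerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/nodejs-16
WORKDIR /opt/app-root/src
COPY package*.json ./
RUN npm ci --production
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
EOF

# Build the image and push it to your registry.
podman build -t quay.io/myuser/my-node-app:1.0 .
podman push quay.io/myuser/my-node-app:1.0

# Point OpenShift at the published image.
oc new-app quay.io/myuser/my-node-app:1.0 --name my-node-app
```

Because the image lives in a registry, OpenShift can pull identical copies whenever it scales up or replaces a pod.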
You can manage your OpenShift application using one of these methods:
- Command line interface (CLI)
- Web console
On the CLI side, the canonical choice is the OpenShift CLI, commonly referred to as `oc`. This robust tool manages and deploys your OpenShift applications. If you want to try deploying a Node.js application with `oc`, check out the “Getting Started with Node.js on OpenShift” course.
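To give a flavor of `oc` beyond deployment, here are a few of the day-to-day commands you would reach for. The token, server URL, and resource names are placeholders for your own cluster:

```shell
# Log in to your cluster (the token and server URL come from your
# cluster's web console under "Copy login command").
oc login --token=<your-token> --server=https://api.example.openshift.com:6443

# See what is running in the current project.
oc status
oc get pods

# Tail the logs of a deployment and scale it out.
oc logs -f deployment/my-node-app
oc scale deployment/my-node-app --replicas=3
```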
Another option is odo, a developer-focused OpenShift CLI tool. Red Hat describes it this way: “The odo CLI abstracts away complex Kubernetes and OpenShift concepts for the developer, thus allowing developers to focus on what’s most important to them: code.”
Check out this great resource if you want to learn more or try out odo: How to use odo the developer-centric CLI with OpenShift.
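As a sketch of that developer-centric flow: the exact commands differ between odo releases, and this uses the odo 2.x style (`create`/`push`); the component and URL names are placeholders. Run it from your project directory while logged in to a cluster.

```shell
# Create a Node.js component from the source in the current directory.
odo create nodejs my-component

# Expose the component via a URL before pushing.
odo url create my-url --port 8080

# Push the code; odo builds and deploys it, then syncs only your
# changes on subsequent pushes.
odo push
```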
As a Node.js developer, you can go a step further with Nodeshift, which surfaces the parts of the OpenShift CLI that Node.js developers typically use. Nodeshift can make sensible assumptions because it knows how Node.js apps are typically set up — for example, it reads the ubiquitous package.json file. You can also run Nodeshift through `npx` without even installing it. This blog post shows just how easy it can be to deploy a Node.js application with Nodeshift: Use Node.js 14 on Red Hat OpenShift.
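A quick sketch of the `npx` route, run from your project directory and assuming you are already logged in to a cluster with `oc` and your package.json defines a start script:

```shell
# Deploy the current project; --expose also creates an OpenShift route
# so the app gets a public URL.
npx nodeshift --expose

# Tear down everything Nodeshift created when you are done.
npx nodeshift undeploy
```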
While these tools are great, if you really want to see OpenShift in action, I highly recommend digging into the web console. The UI is pretty intuitive and gives you access and control of your application and its services in a comfortable way. A great way to check out the UI and experiment with OpenShift is through the Developer Sandbox, which gives you access to an OpenShift instance where you can experiment with the UI and deploy sample applications.
If you have a full-fledged production application, you will most likely take advantage of CI/CD options using Helm, Tekton, Operators, and the like. A great place to learn more about these and other DevOps techniques is the DevOps section of the Red Hat Developer Portal.
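As a taste of the Helm route, a chart-based deploy boils down to a couple of commands. The repository URL, chart name, and values below are entirely hypothetical — your chart defines which values exist:

```shell
# Add a chart repository and install a release of a (hypothetical)
# Node.js chart, overriding the image it deploys.
helm repo add myrepo https://charts.example.com
helm install my-node-app myrepo/nodejs-app \
  --set image.repository=quay.io/myuser/my-node-app \
  --set image.tag=1.0

# Roll out a new version later, or remove the release entirely.
helm upgrade my-node-app myrepo/nodejs-app --set image.tag=1.1
helm uninstall my-node-app
```

The appeal for CI/CD is that the release is versioned: `helm rollback` can return you to any previous revision.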
- The Cloud-Native Toolkit is an open-source collection of assets that enable application development and support teams to deliver business value quickly using Red Hat OpenShift. Learn more about Tekton and ArgoCD with the hands-on workshops using a free IBM Cloud cluster. View the workshop.
- Learn how to use the OpenShift Container Platform to build and deploy an application with a data backend and a web frontend in the hands-on, interactive, browser-based workshop Getting started with OpenShift on Red Hat Developer.