Microsoft Azure Red Hat OpenShift

Red Hat OpenShift Workshop

Azure Red Hat OpenShift (ARO) is a fully managed Red Hat OpenShift service in Azure that is jointly engineered and supported by Microsoft and Red Hat. In this lab, you’ll go through a set of tasks that will help you understand some of the concepts of deploying and securing container based applications on top of Azure Red Hat OpenShift.

You can use this guide as an OpenShift tutorial and as study material to help you get started to learn OpenShift.

Some of the things you’ll be going through:

  • Creating a project on the Azure Red Hat OpenShift Web Console
  • Deploying a MongoDB container that uses Azure Disks for persistent storage
  • Restoring data into the MongoDB container by executing commands on the Pod
  • Deploying a Node.js API and frontend app from GitHub using Source-To-Image (S2I)
  • Exposing the web application frontend using Routes
  • Creating a network policy to control communication between the different tiers in the application

You’ll be doing the majority of the labs using the OpenShift CLI, but you can also accomplish them using the Azure Red Hat OpenShift web console.

Prerequisites

GitHub Account

You will need a personal GitHub account for today’s workshop. If you don’t already have one, you can sign up for free here.

Lab Setup

Within the web interface, we will be running commands directly against our cluster using what is known as the Web Terminal Operator. This allows us to establish a session with the cluster using the currently logged in user, and all the required tooling (e.g. oc and kubectl command line tools) is made available.

Diagram

Basic concepts

ARO CLI Demonstration & ARO example setup

In this section, we’ll give you some links that can be used after the class for independent study. We’ll demonstrate how to use some ARO commands, but due to the nature of our shared environment for this workshop, not every student will be able to log in with the full permissions required. When you test this on your own with your own Azure credentials, all the commands will work in the same manner as they have been demonstrated today.

To effectively administer an ARO instance, you’ll use a combination of az aro, oc, and az command-line commands. Below is a concise list of some of the more commonly used ARO commands, which we will demonstrate today. Later, in the hands-on section of the workshop, you will use some of the oc commands to deploy an application and query its status. The complete ARO command list can be found at the following location: https://docs.microsoft.com/en-us/cli/azure/aro?view=azure-cli-latest

ARO CLI Demonstration

Here are some of the more commonly used AZ & AZ ARO commands.

az login (log in; for this demo, aroadmin@azure.opentlc.com / password)

az version (show Azure CLI version)

az version -o table (show Azure CLI version in table format)

az aro list [arguments] (list ARO clusters)

az aro list -o table

az vm list (list all VMs in the Azure account)

az vm list -o table (list all VMs in a more human friendly format)

az vm list --query "[].{resource:resourceGroup, name:name}" -o table

az aro create [arguments] --help (create a cluster)

az aro delete [arguments] --help (delete a cluster)

az aro list-credentials [arguments] (list credentials of a cluster)

az aro show [arguments] (get details of a cluster)

az aro show -n

az aro update [arguments] (update a cluster)

az aro wait [arguments] (wait for a cluster to reach a desired state)

ARO example setup

Further below, we’ve also provided links to the official ARO video and to a short unofficial video created by an independent OpenShift user, both on YouTube, that demonstrate an abbreviated version of how to set up ARO yourself with your own Azure credentials for independent study.

Source-To-Image (S2I)

Source-to-Image (S2I) is a toolkit and workflow for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and letting the container prepare that source code for execution. By creating self-assembling builder images, you can version and control your build environments exactly like you use container images to version your runtime environments.

How it works

For a dynamic language like Ruby, the build-time and run-time environments are typically the same. Starting with a builder image that describes this environment - with Ruby, Bundler, Rake, Apache, GCC, and other packages needed to set up and run a Ruby application installed - source-to-image performs the following steps:

  1. Start a container from the builder image with the application source injected into a known directory

  2. The container process transforms that source code into the appropriate runnable setup - in this case, by installing dependencies with Bundler and moving the source code into a directory where Apache has been preconfigured to look for the Ruby config.ru file.

  3. Commit the new container and set the image entrypoint to be a script (provided by the builder image) that will start Apache to host the Ruby application.
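If you want to experiment with S2I on its own after the workshop, the standalone s2i CLI performs exactly these steps. A minimal sketch, assuming the upstream sample Ruby application and a community Ruby builder image (both names are illustrative, not part of this lab):

s2i build https://github.com/sclorg/ruby-ex.git centos/ruby-25-centos7 ruby-sample-app (inject the Git source into the builder image and commit the result as ruby-sample-app)

docker run -p 8080:8080 ruby-sample-app (run the assembled image like any other container image)

Inside OpenShift you rarely call s2i directly; a build with --strategy=source (used later in this lab) drives the same workflow for you.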

For compiled languages like C, C++, Go, or Java, the dependencies necessary for compilation might dramatically outweigh the size of the actual runtime artifacts. To keep runtime images slim, S2I enables multiple-step build processes, where a binary artifact such as an executable or Java WAR file is created in the first builder image, extracted, and injected into a second runtime image that simply places the executable in the correct location for execution.

For example, to create a reproducible build pipeline for Tomcat (the popular Java webserver) and Maven:

  1. Create a builder image containing OpenJDK and Tomcat that expects to have a WAR file injected

  2. Create a second image that layers on top of the first image Maven and any other standard dependencies, and expects to have a Maven project injected

  3. Invoke source-to-image using the Java application source and the Maven image to create the desired application WAR

  4. Invoke source-to-image a second time using the WAR file from the previous step and the initial Tomcat image to create the runtime image

By placing our build logic inside of images, and by combining the images into multiple steps, we can keep our runtime environment close to our build environment (same JDK, same Tomcat JARs) without requiring build tools to be deployed to production.

Goals and benefits

Reproducibility

Allow build environments to be tightly versioned by encapsulating them within a container image and defining a simple interface (injected source code) for callers. Reproducible builds are a key requirement to enabling security updates and continuous integration in containerized infrastructure, and builder images help ensure repeatability as well as the ability to swap runtimes.

Flexibility

Any existing build system that can run on Linux can be run inside of a container, and each individual builder can also be part of a larger pipeline. In addition, the scripts that process the application source code can be injected into the builder image, allowing authors to adapt existing images to enable source handling.

Speed

Instead of building multiple layers in a single Dockerfile, S2I encourages authors to represent an application in a single image layer. This saves time during creation and deployment, and allows for better control over the output of the final image.

Security

Dockerfiles are run without many of the normal operational controls of containers, usually running as root and having access to the container network. S2I can be used to control what permissions and privileges are available to the builder image since the build is launched in a single container. In concert with platforms like OpenShift, source-to-image can enable admins to tightly control what privileges developers have at build time.

Routes

An OpenShift Route exposes a service at a host name, like www.example.com, so that external clients can reach it by name. When a Route object is created on OpenShift, it gets picked up by the built-in HAProxy load balancer in order to expose the requested service and make it externally available with the given configuration. You might be familiar with the Kubernetes Ingress object and might already be asking “what’s the difference?”. Red Hat created the concept of Route in order to fill this need and then contributed the design principles behind it to the community, which heavily influenced the Ingress design. A Route does, however, have some additional features, as can be seen in the chart below.

routes vs ingress

NOTE: DNS resolution for a host name is handled separately from routing; your administrator may have configured a cloud domain that will always correctly resolve to the router, or if using an unrelated host name you may need to modify its DNS records independently to resolve to the router.

Also of note is that an individual route can override some defaults by providing specific configurations in its annotations. See here for more details: https://docs.openshift.com/dedicated/architecture/networking/routes.html#route-specific-annotations
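For reference, here is a minimal sketch of what a Route object looks like in YAML, including one of the route-specific annotations mentioned above. The service name rating-web is only illustrative here (you will create it later in the lab with oc expose, which generates an equivalent Route for you):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: rating-web
  annotations:
    # route-specific annotation overriding the default HAProxy timeout for this route
    haproxy.router.openshift.io/timeout: 5s
spec:
  to:
    kind: Service
    name: rating-web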

ImageStreams

An ImageStream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a Docker image repository on a registry.

What are the benefits?

Using an ImageStream makes it easy to change the tag for a container image. Without one, changing a tag means downloading the whole image, changing it locally, and pushing it all back; promoting an application that way, and then updating the deployment object, takes many steps. With ImageStreams, you upload a container image once and then manage its virtual tags internally in OpenShift. In one project you may use the dev tag and only change its reference internally; in prod you may use a prod tag and also manage it internally. You don’t really have to deal with the registry!

You can also use ImageStreams in conjunction with DeploymentConfigs to set a trigger that will start a deployment as soon as a new image appears or a tag changes its reference.
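As a small illustration (the tag names are only examples), retagging with an ImageStream is a metadata-only operation handled by oc, with no image layers pushed or pulled:

oc tag rating-api:latest rating-api:prod (point the virtual prod tag at the image currently referenced by latest)

oc describe is/rating-api (inspect the ImageStream to see which image each tag currently points to)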

See here for more details: https://blog.openshift.com/image-streams-faq/
OpenShift Docs: https://docs.openshift.com/container-platform/3.11/dev_guide/managing_images.html
ImageStream and Builds: https://cloudowski.com/articles/why-managing-container-images-on-openshift-is-better-than-on-kubernetes/

Builds

A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process.

OpenShift Container Platform leverages Kubernetes by creating Docker-formatted containers from build images and pushing them to a container image registry.

Build objects share common characteristics: inputs for a build, the need to complete a build process, logging the build process, publishing resources from successful builds, and publishing the final status of the build. Builds take advantage of resource restrictions, specifying limitations on resources such as CPU usage, memory usage, and build or pod execution time.
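To make this concrete, here is a rough sketch of an S2I BuildConfig similar to what oc new-app will generate for you later in this lab; you do not need to write it by hand, and the exact fields generated may differ:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: rating-api
spec:
  source:
    git:
      uri: https://github.com/<yourgithubusername>/rating-api
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:14-ubi8
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: rating-api:latest
  triggers:
    # allows GitHub to start a new build via the webhook you will configure later
    - type: GitHub
      github:
        secret: <webhook secret>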

See here for more details: https://docs.openshift.com/container-platform/3.11/architecture/core_concepts/builds_and_image_streams.html

Lab 1 - Go Microservices

Before we get started working on today’s labs, let’s have a quick look at what we’ll be deploying.

Application Overview

You will be deploying a ratings application on Azure Red Hat OpenShift.

Application diagram

The application consists of 3 components:

  • A public facing API (rating-api GitHub repo)
  • A public facing web frontend (rating-web GitHub repo)
  • A MongoDB with pre-loaded data (Data)

Once you’re done, you’ll have an experience similar to the below.

Application screenshots

Cluster Access

Connect to the cluster

The cluster web console’s URL will be listed as part of the lab instructions provided by the instructor.

Using the console URL provided, you will be directed to the login page. Enter the login details that have also been provided by the instructor.

Login Page

From here you will be taken to the OpenShift web console landing page.

Landing Page

Create Project

Open Web Terminal in web console and create project

Once you’re logged into the Web Console, click on the Web Terminal icon at the top right. This will open a panel at the bottom prompting you to create your first project. In the Project name field, enter workshop<student#> using your student number as the suffix.

Note For example, if you are student 15, the entry would be workshop15

Web Terminal

Once you click start, the Web Terminal will automatically be connected to the project you just created. You’ll notice the Web Console updates with that project as well.

Resources

Deploy MongoDB

Create MongoDB from template

Azure Red Hat OpenShift provides many container images and templates to make creating new applications & services easy. The template provides parameter fields to define all the mandatory environment variables (user, password, database name, etc) with predefined defaults including auto-generation of password values. It will also define both a deployment configuration and a service.

For this exercise we will use the following template:

  • mongodb-persistent uses a persistent volume store for the database data which means the data will survive a pod restart. Using persistent volumes requires a persistent volume pool be defined in the Azure Red Hat OpenShift deployment.

Hint You can retrieve a list of templates using the command below. The templates are preinstalled in the openshift namespace.

oc get templates -n openshift
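If you want to see which parameters a template accepts (and their defaults) before processing it, you can ask the template directly, for example:

oc process --parameters -n openshift mongodb-persistent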

Create a MongoDB deployment using the mongodb-persistent template. You pass in values for the template parameters (username, password and database), and oc process renders the template into a YAML/JSON resource list. You then pipe that output to the oc create command.

oc process openshift//mongodb-persistent \
    -p MONGODB_USER=ratingsuser \
    -p MONGODB_PASSWORD=ratingspassword \
    -p MONGODB_DATABASE=ratingsdb \
    -p MONGODB_ADMIN_PASSWORD=ratingspassword | oc create -f -

This is what you should see in your console:

oc MongoDB

If you now head back to the web console and make sure you are in the workshop<student#> project, you should see a new entry for MongoDB.

MongoDB deployment

Verify that the MongoDB pod was created successfully

Run the oc get all command to view the status of the new application and verify that the deployment of the MongoDB template was successful.

oc get all

oc get all

Retrieve MongoDB service hostname

Find the MongoDB service.

oc get svc mongodb

oc get svc

The service will be accessible at the following DNS name: mongodb.workshop<student#>.svc.cluster.local which is formed of [service name].[project name].svc.cluster.local. This resolves only within the cluster.

You can also retrieve this from the web console by toggling to the Administrator view, then navigating to Networking -> Services and selecting the mongodb service. You’ll need this hostname to configure the rating-api.

MongoDB service in the Web Console

Resources

Deploy Ratings API

The rating-api is a Node.js application that connects to MongoDB to retrieve and rate items. Below are some of the details that you’ll need to deploy this.

Fork the application to your own GitHub repository

To be able to set up CI/CD webhooks, you’ll need to fork the application into your personal GitHub repository.

Fork

Use the OpenShift CLI to deploy the rating-api

Note You’re going to be using source-to-image (S2I) as a build strategy.

We’ll now deploy the rating-api app. Don’t miss the entry in the following command where we need you to add your GitHub username.

oc new-app nodejs:14-ubi8~https://github.com/<yourgithubusername>/rating-api --strategy=source

Create rating-api using oc cli

Configure the required environment variables

First we’ll need to get to the correct screen. Ensure you’re in the Administrator view, then navigate to Workloads -> Deployments, select rating-api, and move to the Environment tab.

Navigate to create MONGODB_URI environment variable

We can now create the environment variable using the NAME MONGODB_URI. The VALUE should look like mongodb://[username]:[password]@[endpoint]:27017/ratingsdb. You’ll need to replace the [username] and [password] with the ones you used when creating the database. You’ll also need to replace the [endpoint] with the hostname acquired in the previous step.

Note Don’t miss replacing your student number in the following VALUE entry.

The VALUE should look something like this: mongodb://ratingsuser:ratingspassword@mongodb.workshop<student#>.svc.cluster.local:27017/ratingsdb

Hit Save when done.

Create a MONGODB_URI environment variable

It can also be done with an OC command.

oc set env deploy/rating-api MONGODB_URI=mongodb://ratingsuser:ratingspassword@mongodb.workshop<student#>.svc.cluster.local:27017/ratingsdb

Verify that the service is running

You can navigate to the logs of the rating-api deployment by going to Workloads -> Pods and selecting the rating-api pod that is currently running.

Navigate to verify mongoDB connection

If you move to the Logs tab, you should see a log message confirming the code can successfully connect to MongoDB.

Verify mongoDB connection

Retrieve rating-api service hostname

Find the rating-api service.

oc get svc rating-api

Once you replace your student number in the following, the service will be accessible at this DNS name over port 8080: rating-api.workshop<student#>.svc.cluster.local:8080, which is formed of [service name].[project name].svc.cluster.local. This resolves only within the cluster.

Set up GitHub webhook

To trigger S2I builds when you push code into your GitHub repo, you’ll need to set up the GitHub webhook.

The easiest way to access the URL needed to create the webhook in GitHub is through the web console. If you navigate to Builds -> Build Configs and select rating-api, you will see the copy option for the GitHub webhook URL near the bottom of the page. You’ll use this copied URL to set up the webhook on your GitHub repository.

Rating API GitHub webhook URL

If you prefer to use the command line, start by retrieving the GitHub webhook trigger secret. This is included if you copy from the web console, but you’ll need to gather it separately for use in the GitHub webhook URL if you use the command line.

oc get bc/rating-api -o=jsonpath='{.spec.triggers..github.secret}'

You’ll get back something similar to the below. Make note of the secret key in the red box as you’ll need it in a few steps.

Rating API GitHub trigger secret

Retrieve the GitHub webhook trigger URL from the build configuration.

oc describe bc/rating-api

Rating API GitHub trigger url

Replace the <secret> placeholder with the secret you retrieved in the previous step to have a URL similar to https://api.qv4g35sq.westeurope.aroapp.io:6443/apis/build.openshift.io/v1/namespaces/workshop01/buildconfigs/rating-api/webhooks/zLKX0A_0CQs6qWNwQqpV/github. You’ll use this URL to setup the webhook on your GitHub repository.

In your GitHub repository (e.g. https://github.com/<your GitHub username>/rating-api), navigate to Settings -> Webhooks and select Add Webhook.

Rating API GitHub webhook navigation

Paste the URL output (similar to above) into the Payload URL field.

Change the Content Type from GitHub’s default application/x-www-form-urlencoded to application/json.

Click Add webhook.

GitHub add webhook

You will likely see a warning message appear about the effect of disabling SSL verification and its implications. Confirm the setting change, as this is a non-production environment.

GitHub add webhook warning

If you click back into the newly created webhook, you should see a new tab called Recent Deliveries showing a green tick.

GitHub webhook success

Note Because GitHub changes its UI regularly, you might see Recent Deliveries in a different location than shown in the screenshot above, such as at the bottom. You can also refresh the browser a few moments after saving the newly created webhook and you’ll see the green tick beside it.

GitHub webhook success2

Now whenever you push a change to your GitHub repository a new build will automatically start in OpenShift. After a successful build, a new deployment will be triggered as well.

Resources

Deploy Ratings frontend

The rating-web is a NodeJS application that connects to the rating-api. Below are some of the details that you’ll need to deploy this.

Fork the application to your own GitHub repository

To be able to setup CI/CD webhooks, you’ll need to fork the application into your personal GitHub repository.

Fork

Use the OpenShift CLI to deploy the rating-web

Note You’re going to be using source-to-image (S2I) as a build strategy.

We’ll now deploy the rating-web app. Don’t miss the entry in the following command where we need you to add your GitHub username.

oc new-app nodejs:14-ubi8~https://github.com/<yourgithubusername>/rating-web --strategy=source

Create rating-web using oc cli

Configure the required environment variables

Create the API environment variable for rating-web Deployment. The value of this variable is going to be the hostname/port of the rating-api service.

Instead of setting the environment variable through the web console, we’ll set it through the OpenShift CLI.

oc set env deploy/rating-web API=http://rating-api:8080

Expose the rating-web service using a Route

Expose the service.

oc expose svc/rating-web

Find out the created route hostname

oc get route rating-web

You should get a response similar to the below.

Retrieve the created route

Note Certain browsers are configured to default to HTTPS. Please note that the web app we are creating uses HTTP for simplicity. You may need to manually force the non-secure HTTP usage.

Please also note that the fully qualified domain name (FQDN) is comprised of the application name and project name by default. The remainder of the FQDN, the subdomain, is your Azure Red Hat OpenShift cluster specific apps subdomain.

You can also retrieve this from the web console by toggling to the Developer view, then navigating to Topology and selecting the “Open URL” icon at the top right of the Deployment. Alternatively, you can select the rating-web Deployment and find the route in the lower right corner (you may need to scroll down).

Navigate to the created route

Try the service

Open the hostname in your browser and you should see the rating app page. Play around, submit a few votes and check the leaderboard.

rating-web homepage

Set up GitHub webhook

To trigger S2I builds when you push code into your GitHub repo, you’ll need to set up the GitHub webhook.

As before, the fastest way is through the web console: navigate to Builds -> Build Configs, select rating-web, and use the copy option for the GitHub webhook URL near the bottom of the page. You’ll once again use this copied URL to set up the webhook on your GitHub repository.

Rating Web GitHub webhook URL

The process at the command line is also the same as before. Retrieve the GitHub webhook trigger secret you’ll need in the GitHub webhook URL using the command below:

oc get bc/rating-web -o=jsonpath='{.spec.triggers..github.secret}'

You’ll get back something similar to the below. Make note of the secret key in the red box as you’ll need it in a few steps.

Rating Web GitHub trigger secret

Retrieve the GitHub webhook trigger URL from the build configuration.

oc describe bc/rating-web

Rating Web GitHub trigger url

Replace the <secret> placeholder with the secret you retrieved in the previous step to have a URL similar to https://api.qv4g35sq.westeurope.aroapp.io:6443/apis/build.openshift.io/v1/namespaces/workshop01/buildconfigs/rating-web/webhooks/VZJewR0m1E65dBAv1IYM/github. You’ll use this URL to setup the webhook on your GitHub repository.

In your GitHub repository (e.g. https://github.com/<your GitHub username>/rating-web), navigate to Settings -> Webhooks and select Add Webhook.

Paste the URL output (similar to above) into the Payload URL field.

Change the Content Type from GitHub’s default application/x-www-form-urlencoded to application/json.

Click Add webhook.

GitHub add webhook

Again, you will likely see a warning message appear about the effect of disabling SSL verification and its implications. Confirm the setting change, as this is a non-production environment.

GitHub add webhook warning

If you click back into the newly created webhook, you should see a new tab called Recent Deliveries showing a green tick.

GitHub webhook success

Note Because GitHub changes its UI regularly, you might see Recent Deliveries in a different location than shown in the screenshot above, such as at the bottom. You can also refresh the browser a few moments after saving the newly created webhook and you’ll see the green tick beside it.

Now whenever you push a change to your GitHub repository a new build will automatically start in OpenShift. After a successful build, a new deployment will be triggered as well.

Make a change to the website app and see the rolling update

Go to the https://github.com/<your GitHub username>/rating-web/blob/master/src/App.vue file in your repository on GitHub.

Edit the file, and change the background-color: #999; line to be background-color: #0071c5.

Commit the changes to the file into the master branch.

GitHub edit app

Immediately head to the Builds screen in the web console. You’ll see a new build queued up which was triggered by the push. Once this is done, it will trigger a new deployment that you can track in the Workloads -> Deployments screen. Once this completes by creating a new pod, you should see the new website color updated.

Webhook build

New rating website

Resources

Create Network Policy

Now that you have the application working, it is time to apply some security hardening. You’ll use network policies to restrict communication to the rating-api.

Switch to the Cluster Console

Navigate to Networking -> Network Policies and click Create Network Policy.

Cluster console page

Create network policy

You will create a policy that applies to any pod matching the app=rating-api label. The policy will allow ingress only from pods matching the app=rating-web label.

Use the YAML below in the editor, and make sure you’re targeting your project by changing the namespace to workshop<student#>.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-from-web
  namespace: workshop<student#>
spec:
  podSelector:
    matchLabels:
      app: rating-api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: rating-web

Click Create.

Create network policy
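Optionally, you can confirm from the Web Terminal that the policy now exists in your project (replace the namespace with your own):

oc get networkpolicy api-allow-from-web -n workshop<student#>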

Resources

Lab 2 - ARO Internals

Application Overview

Resources

Note In order to simplify the deployment of the app (which you will do next), we have included all the objects needed in the above YAML files as “all-in-one” YAMLs. In reality, though, an enterprise would most likely want to have a separate YAML file for each Kubernetes object.

About OSToy

OSToy is a simple Node.js application that we will deploy to Azure Red Hat OpenShift. It is used to help us explore the functionality of Kubernetes. The application has a user interface from which you can:

  • write messages to the log (stdout / stderr)
  • intentionally crash the application to view self-healing
  • toggle a liveness probe and monitor OpenShift behavior
  • read config maps, secrets, and env variables
  • if connected to shared storage, read and write files
  • check network connectivity, intra-cluster DNS, and intra-cluster communication with an included microservice

OSToy Application Diagram

OSToy Diagram

Familiarization with the Application UI

  1. Shows the pod name that served your browser the page.
  2. Home: The main page of the application where you can perform some of the functions listed which we will explore.
  3. Persistent Storage: Allows us to write data to the persistent volume bound to this application.
  4. Config Maps: Shows the contents of configmaps available to the application and the key:value pairs.
  5. Secrets: Shows the contents of secrets available to the application and the key:value pairs.
  6. ENV Variables: Shows the environment variables available to the application.
  7. Networking: Tools to illustrate networking within the application.
  8. Shows some more information about the application.

Home Page

Application Deployment

Create new project

Create a new project called ostoy<student#> in your cluster, using your student number as the suffix.

Use the following command

oc new-project ostoy<student#>

You should receive the following response

$ oc new-project ostoy<student#>
Now using project "ostoy<student#>" on server "https://api.gz49n8jb.westeurope.aroapp.io:6443".

Hint You can add applications to this project with the ‘new-app’ command. For example, try:

oc new-app rails-postgresql-example

To build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

Equivalently, you can also create this new project using the web UI by selecting Home -> Projects, then clicking on the Create Project button on the right.

UI Create Project

Download YAML configuration

Download the Kubernetes deployment object YAML files from the following locations to a directory of your choosing in your terminal session - just remember where you placed them for the next step.

curl -O https://raw.githubusercontent.com/RH-ANZ-Workshops/anzworkshop/main/yaml/ostoy-fe-deployment.yaml

curl -O https://raw.githubusercontent.com/RH-ANZ-Workshops/anzworkshop/main/yaml/ostoy-microservice-deployment.yaml

Feel free to open them up and take a look at what we will be deploying. For the simplicity of this lab, we have placed all the Kubernetes objects we are deploying in one “all-in-one” YAML file, though in reality there are benefits to separating these out into individual YAML files.

ostoy-fe-deployment.yaml

ostoy-microservice-deployment.yaml

Deploy backend microservice

The microservice application serves internal web requests and returns a JSON object containing the current hostname and a randomly generated color string.

In your command line deploy the microservice using the following command:

oc apply -f ostoy-microservice-deployment.yaml

You should see the following response:

$ oc apply -f ostoy-microservice-deployment.yaml
deployment.apps/ostoy-microservice created
service/ostoy-microservice-svc created

Deploy the front-end service

The frontend deployment contains the Node.js frontend for our application, along with a few other Kubernetes objects that serve as illustrative examples.

If you open the ostoy-fe-deployment.yaml you will see we are defining:

  • Persistent Volume Claim
  • Deployment Object
  • Service
  • Route
  • Configmaps
  • Secrets

In your command line deploy the frontend along with creating all objects mentioned above by entering:

oc apply -f ostoy-fe-deployment.yaml

You should see all objects created successfully

$ oc apply -f ostoy-fe-deployment.yaml
persistentvolumeclaim/ostoy-pvc created
deployment.apps/ostoy-frontend created
service/ostoy-frontend-svc created
route.route.openshift.io/ostoy-route created
configmap/ostoy-configmap-env created
secret/ostoy-secret-env created
configmap/ostoy-configmap-files created
secret/ostoy-secret created

Get route

Get the route so that we can access the application, via oc get route.

You should see the following response:

NAME          HOST/PORT                                                      PATH      SERVICES              PORT      TERMINATION   WILDCARD
ostoy-route   ostoy-route-ostoy01.apps.qv4g35sq.westeurope.aroapp.io                   ostoy-frontend-svc    <all>                   None

Copy ostoy-route-ostoy<student#>.apps.qv4g35sq.westeurope.aroapp.io from the command line and paste it into your browser and press enter. You should see the homepage of our application.

Home Page

Logging

Assuming you can access the application via the Route provided and are still logged into the CLI (go back to the previous sections if you need to do either of those), we’ll start to use this application. As stated earlier, this application will allow you to “push the buttons” of OpenShift and see how it works. We will do this to test the logs.

Click on the Home menu item and then click in the message box for “Log Message (stdout)” and write any message you want to output to the stdout stream. You can try “All is well!”. Then click “Send Message”.

Logging stdout

Click in the message box for “Log Message (stderr)” and write any message you want to output to the stderr stream. You can try “Oh no! Error!”. Then click “Send Message”.

Logging stderr

View logs directly from the pod

Go to the CLI and enter the following command to retrieve the name of your frontend pod which we will use to view the pod logs:

$ oc get pods -o name
pod/ostoy-frontend-679cb85695-5cn7x
pod/ostoy-microservice-86b4c6f559-p594d

So the pod name in this case is ostoy-frontend-679cb85695-5cn7x. Then run oc logs ostoy-frontend-679cb85695-5cn7x and you should see your messages:

$ oc logs ostoy-frontend-679cb85695-5cn7x
[...]
ostoy-frontend-679cb85695-5cn7x: server starting on port 8080
Redirecting to /home
stdout: All is well!
stderr: Oh no! Error!

You should see both the stdout and stderr messages.

Exploring Health Checks

In this section we will intentionally crash our pods as well as make a pod non-responsive to the liveness probes and see how Kubernetes behaves. We will first intentionally crash our pod and see that Kubernetes will self-heal by immediately spinning it back up. Then we will trigger the health check by stopping the response on the /health endpoint in our app. After three consecutive failures, Kubernetes should kill the pod and then recreate it.
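For context, a liveness probe like this is declared on the container in the deployment YAML. The snippet below is only a sketch of what such a probe could look like; the exact path, port, and thresholds used in ostoy-fe-deployment.yaml may differ:

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
  # three consecutive failed probes cause the kubelet to restart the container
  failureThreshold: 3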

It would be best to prepare by splitting your screen between the OpenShift Web UI and the OSToy application so that you can see the results of our actions immediately.

Splitscreen

But if your screen is too small or that just won’t work, then open the OSToy application in another tab so you can quickly switch to the OpenShift console once you click the button. To get to this deployment in the OpenShift Web Console go to:

Workloads -> Deployments and select ostoy-frontend.

Deploy Num

Go to the OSToy app, click on Home in the left menu, and enter a message in the “Crash Pod” tile (e.g. “This is goodbye!”) and press the “Crash Pod” button. This will cause the pod to crash and Kubernetes will restart the pod. After you press the button you will see:

Crash Message

Quickly switch to the Deployment screen. You will see that the deployment shows yellow, meaning it is replacing the failed pod and scaling back up to 1. It should quickly come back up and show blue.

Pod Crash

You can also check in the pod events and further verify that the container has crashed and been restarted.

Pod Events

Keep the pod events page from the previous step open. Then in the OSToy app click on the “Toggle Health” button in the “Toggle Health Status” tile. You will see the “Current Health” switch to “I’m not feeling all that well”.

Pod Events

This will cause the app to stop responding with an HTTP 200 code. After 3 such consecutive failures, Kubernetes will kill the pod and restart it. Quickly switch back to the pod events tab and you will see that the liveness probe failed and the pod is being restarted.

Pod Events2

Persistent Storage

In this section we will execute a simple example of using persistent storage by creating a file that will be stored on a persistent volume in our cluster and then confirm that it will “persist” across pod failures and recreation.

Inside the OpenShift web UI click on Storage -> PersistentVolumeClaims in the left menu. You will then see a list of all persistent volume claims that our application has made. In this case there is just one called “ostoy-pvc”. You will also see other pertinent information such as whether it is bound or not, size, and storage class.

StoragePVC

If you drill into ostoy-pvc, you will see ReadWriteOnce under Access Modes, which means that the volume can only be mounted to one node, but the pod(s) can both read and write to that volume. The default in ARO is for Persistent Volumes to be backed by Azure Disk, but it is possible to choose Azure Files so that you can use the RWX (ReadWriteMany) access mode. (See here for more info on access modes)
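For reference, the claim you are looking at is defined in ostoy-fe-deployment.yaml. A minimal sketch of such a PVC is shown below; the actual name, size, and storage class in the file may differ:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ostoy-pvc
spec:
  accessModes:
    # single-node mount; choose Azure Files backed storage for ReadWriteMany
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi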

In the OSToy app click on Persistent Storage in the left menu. In the “Filename” area enter a filename for the file you will create. (e.g. “test-pv.txt”)

Underneath that, in the “File Contents” box, enter text to be stored in the file. (e.g. “Azure Red Hat OpenShift is the greatest thing since sliced bread!” or “test” :) ). Then click “Create file”.

Create File

You will then see the file you created appear above under “Existing files”. Click on the file and you will see the filename and the contents you entered.

View File

We now want to kill the pod and ensure that the new pod that spins up will be able to see the file we created, exactly like we did in the previous section. Click on Home in the left menu.

Click on the “Crash pod” button. (You can enter a message if you’d like).

Click on Persistent Storage in the left menu.

You will see the file you created is still there and you can open it to view its contents to confirm.

Crash Message

Now let’s confirm that it’s actually there by using the CLI and checking if it is available to the container. If you remember, we mounted the directory /var/demo_files to our PVC. So get the name of your frontend pod:

oc get pods

then open a remote shell session into the container

oc rsh <podname>

then cd /var/demo_files

If you enter ls you can see all the files you created. Next, let’s open the file we created and see the contents:

cat test-pv.txt

You should see the text you entered in the UI.

$ oc get pods
NAME                                  READY     STATUS    RESTARTS   AGE
ostoy-frontend-5fc8d486dc-wsw24       1/1       Running   0          18m
ostoy-microservice-6cf764974f-hx4qm   1/1       Running   0          18m

$ oc rsh ostoy-frontend-5fc8d486dc-wsw24
/ $ cd /var/demo_files/

/var/demo_files $ ls
lost+found   test-pv.txt

/var/demo_files $ cat test-pv.txt 
Azure Red Hat OpenShift is the greatest thing since sliced bread!

Then exit the remote shell session by typing exit. You will then be back in your CLI.

Configuration

In this section we’ll take a look at how OSToy can be configured using ConfigMaps, Secrets, and Environment Variables. This section won’t go into details explaining each (the links are for that), but it will show you how they are exposed to the application.

Configuration using ConfigMaps

ConfigMaps allow you to decouple configuration artifacts from container image content to keep containerized applications portable.

Click on Config Maps in the left menu.

This will display the contents of the configmap available to the OSToy application. We defined this in the ostoy-fe-deployment.yaml here:

kind: ConfigMap
apiVersion: v1
metadata:
  name: ostoy-configmap-files
data:
  config.json:  '{ "default": "123" }'

We can dig a bit deeper into this in the OpenShift UI by navigating to Workloads -> ConfigMaps and selecting the “ostoy-configmap-files” link.

ConfigMap
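How does config.json become visible to the application? Typically a ConfigMap like this is mounted into the pod as a volume, so each key appears as a file. The snippet below is only a sketch; the container name and mount path used by OSToy may differ:

spec:
  containers:
    - name: ostoy-frontend
      volumeMounts:
        - name: configvol
          # config.json from the ConfigMap appears as a file under this path
          mountPath: /var/config
  volumes:
    - name: configvol
      configMap:
        name: ostoy-configmap-files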

Configuration using Secrets

Kubernetes Secret objects allow you to store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it, verbatim, into a Pod definition or a container image.

Click on Secrets in the left menu.

This will display the contents of the secrets available to the OSToy application. We defined this in the ostoy-fe-deployment.yaml here:

apiVersion: v1
kind: Secret
metadata:
  name: ostoy-secret
data:
  secret.txt: VVNFUk5BTUU9bXlfdXNlcgpQQVNTV09SRD1AT3RCbCVYQXAhIzYzMlk1RndDQE1UUWsKU01UUD1sb2NhbGhvc3QKU01UUF9QT1JUPTI1
type: Opaque

If you plug the contents of secret.txt from above into the Base64 decoder at the bottom of the Secrets section in OSToy, you can see that it matches the output further up that page.
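You can do the same decoding from the Web Terminal; oc extract prints the decoded value of each key in the secret:

oc extract secret/ostoy-secret --to=-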

Configuration using Environment Variables

Using environment variables is an easy way to change application behavior without requiring code changes. It allows different deployments of the same application to potentially behave differently based on the environment variables, and OpenShift makes it simple to set, view, and update environment variables for Pods/Deployments.

Click on ENV Variables in the left menu.

This will display the environment variables available to the OSToy application. We added three as defined in the deployment spec of ostoy-fe-deployment.yaml here:

  env:
  - name: ENV_TOY_CONFIGMAP
    valueFrom:
      configMapKeyRef:
        name: ostoy-configmap-env
        key: ENV_TOY_CONFIGMAP
  - name: ENV_TOY_SECRET
    valueFrom:
      secretKeyRef:
        name: ostoy-secret-env
        key: ENV_TOY_SECRET
  - name: MICROSERVICE_NAME
    value: OSTOY_MICROSERVICE_SVC

The last one, MICROSERVICE_NAME, is used for intra-cluster communication between the pods of this application. The application looks for this environment variable to know how to access the microservice in order to get the colors.

Networking and Scaling

In this section we’ll see how OSToy uses intra-cluster networking to separate functions by using microservices and visualize the scaling of pods.

Let’s review how this application is set up…

OSToy Diagram

As can be seen in the image above, we have defined at least 2 separate pods, each with its own service. One is the frontend web application (with a service and a publicly accessible route) and the other is the backend microservice, with a service object created so that the frontend pod can communicate with the microservice (across the pods if there is more than one). Therefore this microservice is not accessible from outside this cluster, nor from other namespaces/projects (due to ARO’s network policy, ovs-networkpolicy). The sole purpose of this microservice is to serve internal web requests and return a JSON object containing the current hostname and a randomly generated color string. This color string is used to display a box of that color in the tile titled “Intra-cluster Communication”.

Networking

Click on Networking in the left menu. Review the networking configuration.

The right tile, titled “Hostname Lookup”, illustrates how the service name created for a pod can be translated into an internal ClusterIP address. Enter the name of the microservice, following the format my-svc.my-namespace.svc.cluster.local, which we created in our ostoy-microservice-deployment.yaml and which can be seen here:

apiVersion: v1
kind: Service
metadata:
  name: ostoy-microservice-svc
  labels:
    app: ostoy-microservice
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: ostoy-microservice

In this case we will enter: ostoy-microservice-svc.ostoy<student#>.svc.cluster.local

We will see an IP address returned. In our example it is 172.30.165.246. This is the intra-cluster IP address, which is only accessible from within the cluster.

ostoy DNS
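You can cross-check the result from the Web Terminal; the CLUSTER-IP column should match the address shown in the app:

oc get svc ostoy-microservice-svc -n ostoy<student#>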

Scaling

OpenShift allows one to scale up/down the number of pods for each part of an application as needed. This can be accomplished by changing our replicaset/deployment definition (declarative), by the command line (imperative), or via the web UI (imperative). In our deployment definition (part of our ostoy-microservice-deployment.yaml) we stated that we only want one pod for our microservice to start with. This means that Kubernetes (via the ReplicaSet) will always strive to keep one pod alive.

If we look at the tile on the left we should see one box randomly changing colors. This box displays the randomly generated color sent to the frontend by our microservice along with the pod name that sent it. Since we see only one box that means there is only one microservice pod. We will now scale up our microservice pods and will see the number of boxes change.

To confirm that we only have one pod running for our microservice, run the following command, or use the web UI.

[okashi@ok-vm ostoy]# oc get pods
NAME                                   READY     STATUS    RESTARTS   AGE
ostoy-frontend-679cb85695-5cn7x       1/1       Running   0          1h
ostoy-microservice-86b4c6f559-p594d   1/1       Running   0          1h

Let’s change our microservice definition YAML to reflect that we want 3 pods instead of the one we currently see. You should still have the ostoy-microservice-deployment.yaml you downloaded earlier; if not, download it again using the curl command above.

Open the file using your favorite editor, for example: vi ostoy-microservice-deployment.yaml.

Find the line that states replicas: 1 and change that to replicas: 3. Then save and quit.

It will look like this:

spec:
  selector:
    matchLabels:
      app: ostoy-microservice
  replicas: 3

Assuming you are still logged in via the CLI, execute the following command:

oc apply -f ostoy-microservice-deployment.yaml

Confirm that there are now 3 pods via the CLI (oc get pods) or the web UI (Overview > expand “ostoy-microservice”).

See this visually by visiting the OSToy app and seeing how many boxes you now see. It should be three.

UI Scale

Now we will scale the pods down using the command line. Execute the following command from the CLI:

oc scale deployment ostoy-microservice --replicas=2

Confirm that there are indeed 2 pods, via the CLI (oc get pods) or the web UI.

See this visually by visiting the OSToy App and seeing how many boxes you now see. It should be two.

Lastly, let’s use the web UI to scale back down to one pod. In the project you created for this app (i.e. ostoy<student#>), in the left menu click Overview > expand “ostoy-microservice”. On the right you will see a blue circle with the number 2 in the middle. Click on the down arrow to the right of that to scale the number of pods down to 1.

UI Scale

See this visually by visiting the OSToy app and seeing how many boxes you now see. It should be one. You can also confirm this via the CLI or the web UI.

Autoscaling

In this section we will explore how the Horizontal Pod Autoscaler (HPA) can be used and works within Kubernetes/OpenShift.

As defined in the Kubernetes documentation:

Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization.

We will create an HPA and then use OSToy to generate CPU intensive workloads. We will then observe how the HPA will scale up the number of pods in order to handle the increased workloads.

1. Create the Horizontal Pod Autoscaler

Run the following command to create the autoscaler. This will create an HPA that maintains between 1 and 10 replicas of the pods controlled by the ostoy-microservice Deployment. Roughly speaking, the HPA will increase and decrease the number of replicas (via the Deployment) to maintain an average CPU utilization across all pods of 80% of the requested CPU (since each pod requests 50 millicores, this means an average CPU usage of 40 millicores).

oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10
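You can check on the new autoscaler at any time; the TARGETS column shows current versus target CPU utilization and REPLICAS shows the current pod count:

oc get hpa ostoy-microservice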

2. View the current number of pods

In the OSToy app in the left menu click on “Autoscaling” to access this portion of the workshop.

HPA Menu

As in the networking section, you will see the total number of pods available for the microservice by counting the number of colored boxes. In this case we have only one. This can be verified through the web UI or from the CLI.

You can use the following command to see the running microservice pods only: oc get pods --field-selector=status.phase=Running | grep microservice

HPA Main

3. Increase the load

Now that we know that we only have one pod let’s increase the workload that the pod needs to perform. Click the link in the center of the card that says “increase the load”. Please click only ONCE!

This will generate some CPU intensive calculations. (If you are curious about what it is doing you can click here).

Note: The page may become slightly unresponsive. This is normal; so be patient while the new pods spin up.

4. See the pods scale up

After about a minute the new pods will show up on the page (represented by the colored rectangles). Confirm that the pods did indeed scale up through the OpenShift Web Console or the CLI (you can use the command above).

Note: The page may still lag a bit which is normal.

Contributors

A big thank you to the team at Microsoft for providing the base content for us to work with.

The following people have contributed to this workshop. Thanks!