Certified Operator Build Guide

Dockerfile Requirements

The Dockerfile can be found in the root directory of your operator project. For certified operator images, the Dockerfile requirements are as follows:

  1. You must configure the required labels (name, maintainer, vendor, version, release, summary)

  2. Software license(s) must be included within the image.

Note: Although typically labels and licenses are not required to successfully build a running image, they are required for the Red Hat build service and scanner.

Below is an example Dockerfile for a Helm Operator which includes the aforementioned requirements:

A few things to note about this Dockerfile:

  • The default FROM line produced by the SDK needs to be replaced with the FROM line shown in the example below.

  • This Dockerfile contains all of the required labels. These labels must be manually added (name, vendor, version, release, summary, and description).

  • This Dockerfile also references a licenses/ directory, which needs to be manually added to the root of the project. This directory must include the software license(s) of your project.

Your project directory structure should look similar to the hierarchy below. Note the location of the licenses directory.

Dockerfile
# Build the manager binary
FROM registry.redhat.io/openshift4/ose-helm-operator:v4.7

### Required OpenShift Labels
LABEL name="Wordpress Operator" \
      vendor="Bitnami" \
      version="v0.0.1" \
      release="1" \
      summary="This is an example of a wordpress helm operator." \
      description="This operator will deploy wordpress to the cluster."

# Required Licenses
COPY licenses /licenses

ENV HOME=/opt/helm
COPY watches.yaml ${HOME}/watches.yaml
COPY helm-charts  ${HOME}/helm-charts
WORKDIR ${HOME}
wordpress-operator
.
├── charts
│   └── mariadb
│       ├── Chart.yaml
│       ├── files
│       │   └── docker-entrypoint-initdb.d
│       │       └── README.md
│       ├── OWNERS
│       ├── README.md
│       ├── templates
│       │   ├── _helpers.tpl
│       │   ├── initialization-configmap.yaml
│       │   ├── master-configmap.yaml
│       │   ├── master-pdb.yaml
│       │   ├── master-statefulset.yaml
│       │   ├── master-svc.yaml
│       │   ├── NOTES.txt
│       │   ├── rolebinding.yaml
│       │   ├── role.yaml
│       │   ├── secrets.yaml
│       │   ├── serviceaccount.yaml
│       │   ├── servicemonitor.yaml
│       │   ├── slave-configmap.yaml
│       │   ├── slave-pdb.yaml
│       │   ├── slave-statefulset.yaml
│       │   ├── slave-svc.yaml
│       │   ├── test-runner.yaml
│       │   └── tests.yaml
│       ├── values-production.yaml
│       ├── values.schema.json
│       └── values.yaml
├── Chart.yaml
├── config
│   ├── crd
│   │   ├── bases
│   │   │   └── example.com_wordpresses.yaml
│   │   └── kustomization.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   └── manager_auth_proxy_patch.yaml
│   ├── manager
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── role.yaml
│   │   ├── wordpress_editor_role.yaml
│   │   └── wordpress_viewer_role.yaml
│   ├── samples
│   │   ├── example_v1alpha1_wordpress.yaml
│   │   └── kustomization.yaml
│   └── scorecard
│       ├── bases
│       │   └── config.yaml
│       ├── kustomization.yaml
│       └── patches
│           ├── basic.config.yaml
│           └── olm.config.yaml
├── Dockerfile
├── helm-charts
│   └── wordpress
│       ├── charts
│       ├── Chart.yaml
│       ├── templates
│       │   ├── deployment.yaml
│       │   ├── _helpers.tpl
│       │   ├── hpa.yaml
│       │   ├── ingress.yaml
│       │   ├── NOTES.txt
│       │   ├── serviceaccount.yaml
│       │   ├── service.yaml
│       │   └── tests
│       │       └── test-connection.yaml
│       └── values.yaml
├── licenses
│   └── license.txt
├── Makefile
├── PROJECT
├── README.md
├── requirements.lock
├── requirements.yaml
├── templates
│   ├── deployment.yaml
│   ├── externaldb-secrets.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── pvc.yaml
│   ├── secrets.yaml
│   ├── servicemonitor.yaml
│   ├── svc.yaml
│   ├── tests
│   │   └── test-mariadb-connection.yaml
│   └── tls-secrets.yaml
├── values.schema.json
├── values.yaml
└── watches.yaml

Building and Pushing Image

Setup Quay.io account (for testing only)

Before we build our operator image, we will want to set up an account on Quay.io. We will need to push this image to quay.io so that we can test the metadata files discussed in the next section. To set up an account on quay.io:

  1. Go to: quay.io

  2. Click on the SIGN IN button located on the upper right side

  3. You can either Sign In with your GitHub account or click Create Account

  4. Once logged into Quay, click CREATE REPOSITORY to create a place to push your operator image for testing (example name: wordpress-operator)

Build and Push Operator Image

Now that your project is set up with labels and licenses, and your quay.io account is ready, we can build the operator image. In this example, replace rhc4tp with your quay.io account username:

sudo make docker-build docker-push IMG=quay.io/rhc4tp/wordpress-operator:v0.0.1

Now that you have pushed your image to quay.io, you need to edit the config/manager/manager.yaml file and change the image field to reflect your image repository (note line 19).

Note: Keep in mind this is for testing only; this does not certify your operator image. Once your image is certified you will need to go back to config/manager/manager.yaml and change the image repository. If you want to certify your image, see the instructions here: Certifying your Operator.

manager.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller-manager
  name: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: wordpress-operator
  template:
    metadata:
      labels:
        name: wordpress-operator
    spec:
      serviceAccountName: wordpress-operator
      containers:
        - name: wordpress-operator
          # Replace this with the built image name
          image: quay.io/rhc4tp/wordpress-operator:v0.0.1
          imagePullPolicy: Always
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "wordpress-operator"
            - name: RELATED_IMAGE_WORDPRESS
              value: docker.io/bitnami/wordpress:5.3.2-debian-10-10

Using a Single Image Variable (Red Hat Marketplace)

This step is optional for creating a certified operator, but it is a technical requirement for the Red Hat Marketplace.

Note: We recommend using the following format when referencing your images within your values.yaml file. It will make it easier to refactor your operator if it needs to be provided within an air-gapped environment. This is also a technical requirement for the Red Hat Marketplace.

Instead of breaking up the image into <registry>/<image>:<tag> or some variation, use a single image variable for all three parts.

roles/main.yaml
#Instead of 
#image: "{{ image.repository }}/{{ image.image_name }}:{{ image.tag }}"

#use
image: "{{ image }}"'

If you change the role for this, make sure you also update the defaults/main.yaml and vars/main.yaml to reflect the new variable.

defaults/main.yaml
#instead of
#image:
#  repository: <repo>
#  image_name: <image>
#  tag: <tag>

#use
image: <repository>/<image>:<tag>

Update the Controller Manager

By default, config/default/manager_auth_proxy_patch.yaml uses an uncertified kubebuilder image. In order to pass certification, it'll need to be changed to the certified image.

Edit the controller manager

vi config/default/manager_auth_proxy_patch.yaml


Replace the image with the certified version, registry.redhat.io/openshift4/ose-kube-rbac-proxy

# This patch injects a sidecar container which is an HTTP proxy for the
# controller manager; it performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
      - name: kube-rbac-proxy
        #use this certified image
        image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:latest

Using a Single Image Variable (Red Hat Marketplace)

This step is optional for creating a certified operator, but it is a technical requirement for the Red Hat Marketplace.

Note: We recommend using the following format when referencing your images within your values.yaml file. It will make it easier to refactor your operator if it needs to be provided within an air-gapped environment. This is also a technical requirement for the Red Hat Marketplace.

The first place we will need to make changes is within the values.yaml file referenced by the Helm Chart. Any image that is referenced with multiple image variables needs to be commented out and a new single image variable needs to be created.

vi values.yaml
values.yaml
### omitted for brevity ###

## image:
##  registry: docker.io
##  repository: bitnami/wordpress
##  tag: 5.3.2-debian-10-r0
  image: docker.io/bitnami/wordpress:5.3.2-debian-10-r0
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  pullPolicy: IfNotPresent

### omitted for brevity ###

As you can see, we condensed the image from three sub-variables into a single image variable. This must be done for all images listed within the values.yaml file. In this example it is the only image listed, so it is the only one that needs to be changed. Next, we need to identify any references to the modified image in the templates/ directory.

Running grep on the templates/ directory (shown below) will list all of the template files that reference the image. Using wordpress as an example, you can see that deployment.yaml is the only file we need to change. Within this file we now need to list the image as shown below.

Next, we need to edit the watches.yaml file to add the overrideValues: field.

Finally, before we build the ClusterServiceVersion, we need to edit the manager.yaml.

Adding the image location and the RELATED_IMAGE_WORDPRESS environment variable to the manager.yaml will allow these changes to be generated automatically when you build your CSV. If you have already built the CSV, you will need to adjust it as well.

Warning: This is not a required step for certification at this time, though it may be in the future. Making the adjustment to how the image is referenced now will save you a lot of time in the future if this feature becomes a requirement.

cd templates/
grep "image:" *
deployment.yaml
### omitted for brevity ###

     containers:
       - name: wordpress
         image: {{ .Values.image.image }}
         imagePullPolicy: {{ .Values.image.pullPolicy }}

### omitted for brevity ###
vi watches.yaml
watches.yaml
---
- version: v1alpha1  
  group: apigw.wordpress.com  
  kind: Wordpress  
  chart: helm-charts/wordpress  
  overrideValues:    
    image.image: $RELATED_IMAGE_WORDPRESS
vi config/manager/manager.yaml
manager.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller-manager
  name: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: wordpress-operator
  template:
    metadata:
      labels:
        name: wordpress-operator
    spec:
      serviceAccountName: wordpress-operator
      containers:
        - name: wordpress-operator
          # Replace this with the built image name
          image: PUBLISHED IMAGE IN CONNECT (registry.redhat.connect/etc)
          imagePullPolicy: Always
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "wordpress-operator"
            - name: RELATED_IMAGE_WORDPRESS
              value: docker.io/bitnami/wordpress:5.3.2-debian-10-10

Building a Helm Operator

In this section we will be going over the changes needed to build a Certified Helm Operator. This section will also cover the beginning of testing the operator.

Warning: This guide does not go over building an operator. If you need assistance building an operator, please visit the Guide to Building a Helm-Based Operator using Operator SDK.

Note: For more information, please consult the OpenShift documentation around Helm Operators.

Introduction

Welcome to Red Hat Connect for Technology Partners. This guide provides instructions on the changes needed in your operator for it to become a certified operator. For more information on building your operator, please visit the Operator-SDK documentation. The purpose of this guide is to help you further develop and test the functionality of your operator. Once testing is complete, you can finish certification by going to our Partner Guide.

Warning: Please note that to create a certified operator you will need to have a certified container application on the Red Hat Container Catalog. More information on how to certify your container application can be found here.

  • Before you get started, make sure you go through the Pre-Requisites section.

You can think of the Operator Capability Level model as a visual sizing guide as to which toolkit you should use to build your operator. Once you've decided which phases (or feature sets) are being targeted, this model helps you decide which framework(s) you can use to achieve that goal.

Use Cases ideal for operator development

  • stateful applications (such as databases)

  • clustered or high-availability applications (clustered databases, key-value stores such as etcd, in-memory cache clusters such as Redis)

  • multiservice applications (an application which needs dependent services to come online first)

  • microservices (numerous small components that together make up a cohesive product or app)

Use cases less ideal for operator development

  • stateless apps (most web apps or front-ends)

  • infrastructure/host agents (monitoring agents or log forwarders)

Operators are intended for use cases where there are more complex or stateful applications. Kubernetes already handles stateless applications (with pods and deployments) and host agents (Daemonsets) rather well. With that being said, those "less than ideal" use cases can still benefit from an operator. The ideal toolkit for building an operator for stateless applications and host agents is either Helm or Ansible, both of which allow for a quick, simple development cycle using the operator-sdk. This provides you with the ability to leverage the marketplace that's built into OpenShift 4.x and provide end users with a one click install experience on OpenShift.

What is an Operator?

An Operator is a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl/oc tooling. You can think of Operators as the runtime that manages this type of application on Kubernetes.

Operators fall into a few different categories, based on the type of applications they run:

  1. Service Operators (MongoDB, Spark, Nginx)

  2. Platform Operators (aggregated logging, security scanning, namespace management)

  3. Resource Operators (CNI operators, Hardware management operators, Telco/Cellular radios)

  4. Full Solution (combination of above operators)

Conceptually, an Operator takes human operational knowledge and encodes it into software that is more easily packaged and shared with consumers. Think of an Operator as an extension of the software vendor’s engineering team that watches over a Kubernetes environment and uses its current state to make decisions in milliseconds. Advanced Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, like skipping a software backup process to save time.

Advantages of Operators

  • Pod startup ordering: Kubernetes does not provide guarantees of container- or pod-startup ordering without sophisticated use of concepts like init containers.

  • Familiar experience on any infrastructure: Your customers have the same deployment and day 2 experience regardless of infrastructure provider - anywhere Kubernetes runs, your Operator should run.

  • Provider maintained: De-couple your OSR from the release cadence of Kubernetes/OpenShift itself, and deliver features, bug- and security-fixes to your customers as they become available.

  • Ease of installation: We have heard from vendors providing out-of-tree kernel modules that installations (DKMS or manual rpmbuild / similar) end up in a poor experience (even for sophisticated customers). Operators provide a one-click method for installation and operations of your hardware and software stack.
  • Standardized operational model and reduced support burden: Operators provide a way for you to standardize deployments, upgrades and configuration to only what you support. This avoids “snowflake” deployments and ensures smooth upgrades.

  • Upgrades: Operators allow you to encode best-practices (e.g. straight from your documentation), reducing configuration drift in customer deployments, increasing deployment success ratio and thereby reducing support costs.

  • Network adapter integrations: a “white list” of supported SR-IOV card/driver combinations for the device and CNI Multus plugins.

    Pre-Requisites

    Warning: This guide does not complete the certification process and is only for development and testing. We highly recommend you certify your application container image(s) through our Partner Portal. More information can be found here: Partner Guide.

    This guide will cover how to build and test your operator. Before you go through the guide you will need to set up your system and environments.

    You will need two environments:

    1. Local Build Environment

    2. Test OpenShift Environment (discussed in Installing an OpenShift Environment)

    NOTE: We recommend using Fedora 31+; you can either install these packages locally or use a Fedora VM to build your operator. This section only covers how to install these packages on Fedora, but you can use another OS as well.

    The following are prerequisites for your Local Build Environment (a sample install sketch follows this list):

    • Operator-SDK installed (see the Operator-SDK installation instructions)

    • OpenShift oc client installed (see the oc installation instructions)
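
    As a rough sketch, installing these on Fedora can look like the following (the operator-sdk version and URLs below are examples; check the Operator-SDK releases page and the OpenShift clients mirror for current versions):

    # Operator-SDK: download a release binary and place it on your PATH
    curl -LO https://github.com/operator-framework/operator-sdk/releases/download/v1.9.0/operator-sdk_linux_amd64
    chmod +x operator-sdk_linux_amd64
    sudo mv operator-sdk_linux_amd64 /usr/local/bin/operator-sdk

    # oc: download the OpenShift command line client and place it on your PATH
    curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz
    tar -xzf openshift-client-linux.tar.gz oc
    sudo mv oc /usr/local/bin/oc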

    Warning: This guide covers the changes needed for your operator to be certified through Red Hat. For more information on how to build your operator, please visit Building Operators with Operator SDK.

    Building an Ansible Operator

    Ansible Operators use Ansible playbooks or roles as the core logic of the operator.

    Note: For more information on how to build an Ansible operator, see the Guide to Building an Ansible-Based Operator using Operator SDK.

    Why Use Ansible in an Operator?

    Ansible is useful for DevOps and infrastructure teams who are already familiar with the language. It also uses a similar pattern as Kubernetes in that playbooks/roles are written in declarative YAML. Jinja2 templating in Ansible is also easy to use, and is nearly identical in syntax to the templating engine used in Helm. This means that Helm templates can easily be converted or migrated into Ansible tasks, with only minor editing required.

    Ansible adds the ability to perform operations in order (think multiservice deployments) without relying solely on the Kubernetes concept of init containers or Helm chart dependencies. Ansible is also capable of full Day 2 management of Kubernetes applications, and can be used to configure things off-cluster if necessary. As you may recall from the Introduction, Ansible covers the entire Operator Maturity Model.

    Ansible Operator at a Glance

    An Ansible Operator is a Golang operator that uses Ansible for the reconciliation logic. This logic (contained in roles or playbooks) gets mapped to a particular Custom Resource by a watches.yaml file inside the operator image. The operator go code checks watches.yaml to see which playbook or roles to execute, and then launches the playbook/role using Ansible Runner. This is illustrated and contrasted with golang operators in the following diagram (key points are in orange):

    The Ansible Operator's point of user interaction is through a Kubernetes Custom Resource. A user creates a Custom Resource yaml (or json) file and passes this to Operator using the command line oc (OpenShift) or kubectl (upstream K8s) commands, or the web console (Operator Lifecycle Manager). The variables defined in the Custom Resource object get passed to Ansible Runner as --extra-vars, and Ansible Runner invokes the playbook or role containing your customized logic using these variables.

    Note: The playbook/role execution loop occurs repeatedly at a certain interval, and is called the reconciliation loop in operator terminology.

    The next section outlines the basic steps for updating an Ansible Operator with certification changes. For more information on how to create an Ansible role from a Helm chart, see Creating an Ansible Role From a Helm Chart.

    Note: For more information, please consult the OpenShift Ansible Operator Documentation.

    Update the Controller Manager

    By default, config/default/manager_auth_proxy_patch.yaml uses an uncertified kubebuilder image. In order to pass certification, it'll need to be changed to the certified image.

    Edit the controller manager

    vi config/default/manager_auth_proxy_patch.yaml

    Replace the image with the certified version, registry.redhat.io/openshift4/ose-kube-rbac-proxy

    # This patch injects a sidecar container which is an HTTP proxy for the
    # controller manager; it performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: controller-manager
      namespace: system
    spec:
      template:
        spec:
          containers:
          - name: kube-rbac-proxy
            #use this certified image
            image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:latest

    Writing to the Status Subresource

    This section covers updating the CR status in your operator code.

    Note: For more information on how to build a Go operator, see the Guide to Building a Go-Based Operator using Operator SDK.

    If you're building an operator in Go (using the SDK) or using some other language or framework, you'll need to make certain that you're updating the Status field of the CR with relevant information regarding the running state of the Custom Resource. When certifying your operator, the metadata scan will run the operator-sdk scorecard test against your metadata bundle. This test currently only checks that something (it can't be a null value) gets written to the status field of the CR by the operator before the default 60-second timeout of the operator-sdk scorecard test. So, if the operator takes longer than 60s to write the current status to the CR status field, it will fail the scorecard and thus the metadata scan. To populate the status, the operator must write to the "/status" API endpoint of the CR, since updates posted to the API endpoint of the CR itself ignore changes to the status field. Please note that this isn't a developer guide, so we don't go into detailed code examples here, but the topic is covered in depth by both the upstream Kubernetes documentation and the Operator SDK documentation.
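
    As a minimal sketch, here is roughly what such a status update looks like in a controller-runtime based reconciler; the MongoDB type, the module path, and the Phase status field are illustrative assumptions rather than code from this guide:

    package controllers

    import (
        "context"

        // Hypothetical API package generated by the SDK for a MongoDB custom resource
        databasev1alpha1 "github.com/example/mongodb-operator/api/v1alpha1"

        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // MongoDBReconciler is trimmed down to the piece needed for the status update.
    type MongoDBReconciler struct {
        client.Client
    }

    // updateStatus writes to the CR's /status subresource. Updates sent to the main
    // CR endpoint ignore status changes, so Status().Update() must be used instead.
    func (r *MongoDBReconciler) updateStatus(ctx context.Context, cr *databasev1alpha1.MongoDB) error {
        cr.Status.Phase = "Running" // assumes the CRD defines a simple string Phase field
        return r.Status().Update(ctx, cr)
    }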

    Note: For more information, please consult the OpenShift Golang Operator Documentation.

    Operator Metadata

    This section covers the creation of the operator metadata bundle image, which must be submitted as part of the certification process.

    You must create metadata for your operator as part of the build process. These metadata files are the packaging format used by the Operator Lifecycle Manager (OLM) to deploy your operator onto OpenShift (OLM comes pre-installed in OpenShift 4.x). A basic metadata bundle for the operator contains the following YAML files listed below. This metadata will be hosted inside of its own container image called the bundle image.

    Note: For certification purposes, your metadata bundle must follow a specific structure, documented here.

    Custom Resource Definition

    This file gets used to register the Custom Resource(s) being used by the operator with the Kubernetes cluster. There can be more than one CRD included in a bundle, depending on how many Custom Resources get managed by the operator. There is usually only one CRD to start, but there can be more as features get added to the operator over time.

    The CRD(s) that you will need to add to your metadata bundle are located within the config/crd/bases directory of the operator project.

    Cluster Service Version

    The ClusterServiceVersion or CSV is the main packaging component of the operator. It contains all of the resource manifests (such as the operator deployment and any custom resource templates) and RBAC rules required to deploy the operator onto a Kubernetes cluster.

    It's also used to populate user interfaces with information like your logo, product description, version and so forth.

    There can be multiple CSV files in a bundle, each tied to a specific version of an operator image.

    Building and Pushing Image

    Setup Quay.io account

    Before we build our operator image, we will want to set up an account on quay.io. We will need to push this image to quay.io so that we can test the metadata files discussed in the next section. To set up an account on quay.io:

    1. Go to: quay.io

    2. Click on the SIGN IN button located on the upper right side

    3. You can either Sign In with your GitHub account or click Create Account

    4. Once logged into Quay, click CREATE REPOSITORY to create a place to push your operator image for testing (example name: mongodb-operator)

    Building and Pushing the Operator

    Build the mongodb-operator image and push it to a registry:

    make docker-build docker-push IMG=quay.io/example/mongodb-operator:v0.0.1

    Editing your Image and ImagePullPolicy

    Now that you have built and pushed the image to quay.io it is necessary to edit config/manager/manager.yaml and replace the following fields:

    • image - your new image on quay.io

    • imagePullPolicy - Always

    This is an edited example of config/manager/manager.yaml:
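
    The snippet below is illustrative; substitute your own operator name and quay.io repository:

    ### omitted for brevity ###
          containers:
            - name: mongodb-operator
              # Your newly pushed operator image
              image: quay.io/example/mongodb-operator:v0.0.1
              imagePullPolicy: Always
    ### omitted for brevity ###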

    Update CRDs from v1beta1

    Warning: OpenShift 4.9 and Kubernetes 1.22 will drop support for CRD v1beta1 from the API entirely. You must convert your CRDs to v1 to continue uninterrupted support for your operator in versions 4.9 and onward. Please refer to this blog post, which also covers API deprecation and required actions.

    If your operator was removed from OpenShift v4.9, please also reference this blog post for additional actions that may be required.

    The operator-sdk uses CustomResourceDefinition v1 by default for all automatically generated CRDs. However, v1beta1 CRDs were required for operator certification as recently as Q2 CY21 and are being deprecated as per the above warning. Thus, if your operator was certified prior to this timeframe and you haven't yet switched, you should do so as soon as possible so that your operator will be listed in OpenShift 4.9.

    Edit each of your CRDs as follows:

    Here's a sample CRD shown before and after conversion; see the two my-crd.yaml listings below (the first is the v1beta1 CRD before conversion, the second is the converted v1 CRD). The apiVersion is changed to v1, and the schema is now defined per CRD version (in v1beta1, you could only define per-version schemas if they were different).

    Managing OpenShift Versions

    This section will show you how to control the listing of your operator in the certified catalog for different versions of OpenShift.

    The com.redhat.openshift.versions field, part of the metadata in the operator bundle, is used to determine whether an operator is included in the certified catalog for a given OpenShift version. You must use it to indicate the version(s) of OpenShift supported by your operator.

    The value is set through a LABEL in the bundle.Dockerfile. Note that the letter 'v' must be used before the version, and spaces are not allowed.

    The syntax is as follows:

    • A single version indicates that the operator is supported on that version of OpenShift or later. The operator will be automatically added to the certified catalog for all subsequent OpenShift releases.

    • A single version preceded by '=' indicates that the operator is supported ONLY on that version of OpenShift. For example, using "=v4.6" will add the operator to the certified catalog for OpenShift 4.6, but not for later OpenShift releases.

    • A range can be used to indicate support only for OpenShift versions within that range. For example, using "v4.5-v4.7" will add the operator to the certified catalog for OpenShift 4.5, 4.6 and 4.7 but not for OpenShift 4.8.
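
    For example, the label in your bundle.Dockerfile could take any one of these forms (the version values shown are only illustrations of the syntax):

    # Supported on OpenShift 4.6 and later
    LABEL com.redhat.openshift.versions="v4.6"
    # Supported only on OpenShift 4.6
    LABEL com.redhat.openshift.versions="=v4.6"
    # Supported on OpenShift 4.5 through 4.7
    LABEL com.redhat.openshift.versions="v4.5-v4.7"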

    The following table provides some examples of whether an operator is included or not in the catalog for different OpenShift versions, depending on the value of com.redhat.openshift.versions.

    OpenShift     "v4.5"        "v4.6"        "=v4.6"        "v4.5-v4.7"
    4.5           Included      Not included  Not included   Included
    4.6           Included      Included      Included       Included
    4.7           Included      Included      Not included   Included
    4.8           Included      Included      Not included   Not included

    Note: Commas are generally not allowed and should not be used. For historical reasons, there are two special values that are currently allowed, v4.5,v4.6 and v4.6,v4.5. Both behave as just v4.5: the operator is supported on the v4.5 version of OpenShift or later. Any other usage of commas, e.g. v4.6,v4.7, will prevent the operator from being added to the certified catalog.

    Metadata Bundle Image

    The bundle image will house your metadata including the CSV.yaml, and the CRD.yaml.

    When we ran the make bundle command (see Creating the Metadata Bundle), part of what was generated was the bundle.Dockerfile for the metadata bundle image.

    Here you see the bundle.Dockerfile in the root directory of your operator project. This is your Dockerfile for your metadata bundle image.

    There are up to 3 labels you will need to add to the Dockerfile:

    • LABEL com.redhat.openshift.versions

    Adjusting the ClusterServiceVersion

    The operator-sdk will generate most of the pieces of information you will need to get through certification, though some manual adjustments will need to be made.

    Note: In YAML, you don't need to place key names in any specific order, as long as the keys stay at the proper indentation level.

    Editing the CSV

    Below is a list of other changes you need to make to the CSV as the command will not automatically create these necessary bits.



    $ vi config/crd/bases/<your CRD filename>
    my-crd.yaml
    ---
    # after conversion this apiVersion becomes apiextensions.k8s.io/v1
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: mongodbs.database.dhoover103.com
    spec:
      group: database.dhoover103.com
      names:
        kind: MongoDB
        listKind: MongoDBList
        plural: mongodbs
        singular: mongodb
      scope: Namespaced
      versions:
      - name: v1alpha1
        served: true
        storage: true
      # In v1, this global "validation" block moves under each entry in "versions"
      # and is renamed "schema" (one more level of indentation)
      validation:
        openAPIV3Schema:
          description: MongoDB is the Schema for the mongodbs API
          properties:
            apiVersion:
              description: 'APIVersion defines the versioned schema of this representation
                of an object. Servers should convert recognized schemas to the latest
                internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
              type: string
            kind:
              description: 'Kind is a string value representing the REST resource this
                object represents. Servers may infer this from the endpoint the client
                submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
              type: string
            metadata:
              type: object
            spec:
              description: Spec defines the desired state of MongoDB
              type: object
              x-kubernetes-preserve-unknown-fields: true
            status:
              description: Status defines the observed state of MongoDB
              type: object
              x-kubernetes-preserve-unknown-fields: true
          type: object
      # In v1, subresources also move under each version entry
      subresources:
        status: {}
      # The deprecated singular "version" field is dropped in v1 in favor of the "versions" list
      version: v1alpha1
    
    my-crd.yaml
    ---
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: mongodbs.database.dhoover103.com
    spec:
      group: database.dhoover103.com
      names:
        kind: MongoDB
        listKind: MongoDBList
        plural: mongodbs
        singular: mongodb
      scope: Namespaced
      versions:
      - name: v1alpha1
        schema:
          openAPIV3Schema:
            description: MongoDB is the Schema for the mongodbs API
            properties:
              apiVersion:
                description: 'APIVersion defines the versioned schema of this representation
                  of an object. Servers should convert recognized schemas to the latest
                  internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
                type: string
              kind:
                description: 'Kind is a string value representing the REST resource this
                  object represents. Servers may infer this from the endpoint the client
                  submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
                type: string
              metadata:
                type: object
              spec:
                description: Spec defines the desired state of MongoDB
                type: object
                x-kubernetes-preserve-unknown-fields: true
              status:
                description: Status defines the observed state of MongoDB
                type: object
                x-kubernetes-preserve-unknown-fields: true
            type: object
        served: true
        storage: true
        subresources:
          status: {}
    

    This lists OpenShift versions, starting with 4.5, that your operator will support. See the section Managing OpenShift Versions for syntax and rules.

  • LABEL com.redhat.delivery.operator.bundle=true

    • This just needs to be there

  • LABEL com.redhat.delivery.backport=true

    • This is used to indicate support for OpenShift versions before 4.6. If you don't specify this flag, your operator won't be listed in 4.5 or earlier.

  • Now let's build the bundle image from the bundle.Dockerfile. You can use an existing public container registry of your choice, though we recommend using quay.io. When you create a new repository, make sure it is publicly available.

    The next step is to build the bundle image from the bundle.Dockerfile locally and push it to quay.io for testing, which we will cover in the Deploying onto OpenShift section.

    [nhartman@fedora wordpress]$ ls
    bin                charts      Dockerfile   Makefile   requirements.lock  values.schema.json
    bundle             Chart.yaml  helm-charts  PROJECT    requirements.yaml  values.yaml
    bundle.Dockerfile  config      licenses     README.md  templates          watches.yaml
    
    podman build -t quay.io/<namespace>/wordpress-operator:v0.0.1 -f bundle.Dockerfile
    podman push quay.io/<namespace>/wordpress-operator:v0.0.1

    Fields to add under metadata.annotations

    • categories - Comma-separated string of the applicable category names

    • description - Short description of the operator

    • containerImage - The full location (registry, repository, name and tag) of the operator image

    • createdAt - A rough (to the day) timestamp of when the operator image was created

    • support - Name of the supporting vendor (eg: ExampleCo)

    • repository - URL of the operator's source code repository (this field is optional)

    The annotations in the example below have been truncated to show you where the fields need to be added.
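
    The values shown are taken from the wordpress example CSV later in this section:

    metadata:
      annotations:
        ### generated annotations omitted ###
        categories: "Database"
        description: A brief description about this Wordpress Operator
        containerImage: quay.io/rhc4tp/wordpress-operator:v0.0.1
        createdAt: 2020-09-03T12:59:59Z
        support: Red Hat Connect Team
        repository: https://github.com/rhc4tp/wordpress-operator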

    Fields to adjust under spec

    The Operator-SDK will generate the following fields but you will need to adjust them for certification

    • description - Long description of the operator's owned customresourcedefinitions in Markdown format. Usage instructions and relevant info for the user goes here

    • icon.base64data - A base64 encoded PNG, JPEG or SVG image will need to be added

    • icon.mediatype - The corresponding MIME type of the image (eg: image/png)

    The base64data is truncated in the example CSV below to show you placement.

    Example CSV

    bundle.Dockerfile
    FROM scratch
    
    LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
    LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
    LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
    LABEL operators.operatorframework.io.bundle.package.v1=wordpress
    LABEL operators.operatorframework.io.bundle.channels.v1=alpha
    LABEL operators.operatorframework.io.bundle.channel.default.v1=
    LABEL operators.operatorframework.io.metrics.builder=operator-sdk-v1.0.0
    LABEL operators.operatorframework.io.metrics.mediatype.v1=metrics+v1
    LABEL operators.operatorframework.io.metrics.project_layout=helm.sdk.operatorframework.io/v1
    LABEL operators.operatorframework.io.test.config.v1=tests/scorecard/
    LABEL operators.operatorframework.io.test.mediatype.v1=scorecard+v1
    
    #Add these labels
    LABEL com.redhat.openshift.versions="v4.6"
    LABEL com.redhat.delivery.operator.bundle=true
    LABEL com.redhat.delivery.backport=true
    
    COPY bundle/manifests /manifests/
    COPY bundle/metadata /metadata/
    COPY bundle/tests/scorecard /tests/scorecard/
    
    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      annotations:
        alm-examples: |-
          [
            {
              "apiVersion": "example.com/v1alpha1",
              "kind": "Wordpress",
              "metadata            "name": "wordpress-sample"
              },
              "spec": {
                "affinity": {},
                "autoscaling": {
                  "enabled": false,
            "maxReplicas": 100,
                  "minReplicas": 1,
                  "targetCPUUtilizationPercentage": 80
                },
                "fullnameOverride": "",
                "image": {
                "pullPolicy": "IfNotPresent",
                  "repository": "nginx",
                  "tag": ""
                },
                "imagePullSecrets": [],
                "ingress": {
            
          ]
        capabilities: Basic Install
        categories: "Database"
        description: A brief description about this Wordpress Operator 
        containerImage: quay.io/rhc4tp/wordpress-operator:v0.0.1
        createdAt: 2020-09-03T12:59:59Z
        support: Red Hat Connect Team
        repository: https://github.com/rhc4tp/wordpress-operator
        operators.operatorframework.io/builder: operator-sdk-v1.0.0
        operators.operatorframework.io/project_layout: helm.sdk.operatorframework.io/v1
      name: wordpress.v0.0.1
      namespace: placeholder
    spec:
      apiservicedefinitions: {}
      customresourcedefinitions:
        owned:
        - kind: Wordpress
          name: wordpresses.example.com
          version: v1alpha1
      description: Description ---NEEDS TO BE UPDATED---
      displayName: Wordpress-operator
      icon:
      - base64data: "iVBORw0KGgoAAAANSUhEUgAAASwAAAEsCAIAAAGBGCm0AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAmFlJREFUeNpi/P//PwN9ARMD3QFdrZy56iCQZKRpwC7bejK3bfm7j18Z/gGtAqH/V2bTykqgTV9uH0qxYmFmYvj7j2HByT8hBswBc35+vzCbhRb22cV3TXF58lmUxWbiD0dV5lw7FqC3BDgZnx+eSJOA/fnrz8Ulhb//ogiyAb1mWmyqo0CT5MNvkIFmX93236aJkyH2AQGVA5ZZL+1QISeyyIE7//aum0GrTPL7z99aN3Q/1Gz+RcN8yaaf4arOjCa4a24hraxk1Ek9XMCBnpT+MLhaatHESibdtKMFHIwY4hMO/sZUTGnycU3tk/5z+3AeO9a8tvnyX+pY+fz1R//cKafP3F+bzdFgxojHkCBXIyxRQHxRoO5Tc+v6i025HCI8jP+I0AQsbrRjJpFpJTBpNHiyuaiRFvGu035+Oz+LnIBVtE47ks9BcgJmZPj++z859eWcluwlCexkxHfayl+bpuSQbKWkZZqmOOHAtOn8AWe/+gKm/jOYyTIZaMiSZiWLbvqaJOL8x4aURGf+4GRlzFrzK9mSZdbqQ8RauW7POUbt1EP5UJNYmBj6D/72m/0zaO7PyIU/zz7+x8vOyALTFzr/p4MarJD7D0obxj3fp4WC9LbM2Io9miEp1sQzHagOrdIBVugOU/78OD8dq85Lt56EFs249eDl4XxouQOskIHtDEgKZwAW5v/+21ipHl5Ujt3Kos5V/bN3HylHpMyslb8u7plJOGVqph4pAumymfDj/9XZxMQDNJPs27sXaB8wZQtwMFpP/OFub3JxTxpBzVzGWUD7mBgZrPp+XNnUSGxq/k8uuHzrSVBkOoNWin18F0kaGUdb6zS18vwWlPJ31+RcAs3wYmes4idWdgOlgGjv9GJkxUB0fFkHSvJZWuSEHMNoXOLB6XWT4GygNWimfX3/CkhCfalg7HLv9E4Ie+fEHGQHXtu/8sLWOceWtn1592JjS9SqSi9wMOTBFbx/dndLZ8KtI+vRfBzZs2dFuQec++TKUXYefkS+VLX03TejVMnUHcj++u65gXcqkPHn14+o3r0QBUD2jy8fgIyw9m1wU1aUuUMU+JQvWFsfrGYTiGbrvz/Qts/2vvT3T++Ed26HWvnn109RRR0g//axTe+f3g1qXPsTbPrNw+u0naOgRQYblirz398/cLaeewKWsokJGoqeRTMhQQJ0IkjozcOrEInTaye+uHUGyGDnEfj44gHQviOLmyFSQE9gmmibgChxTq+dgCa7pjYgsns39gLv8Pz60LbNQAY8GMFeXG8WWmgdUwNJnECpr+9eMLGwQkt8MENW1+bq3mUXt83l5BM2CwE1kZmYWFZWeDEyMkqoGoU0b4ApZgMKSmmYQswfGaUPQADR25dMwzZIV2w/TfPwBLbrbz8ElW0i3Ixvvv4X5OOilX3ARtqZUo5ff4CNHgY2Zob6Hb8aPdg+amYx0ciyIwUc6y/+tZoAatPuuP4v0YzVeeIPF0tN6vuPUSP1SDFKWQhs7fnP+fn65Czqj0Q4JHQfK+FA7oIBLfOay/jh5CyajHz8eH3n3382ZBFrWCOW+vYBm8VoH"
        mediatype: "img/png"
      install:
        spec:
          clusterPermissions:
          - rules:
            - apiGroups:
              - ""
              resources:
              - namespaces
              verbs:
              - get
            - apiGroups:
              - ""
              resources:
              - secrets
              verbs:
              - '*'
            - apiGroups:
              - ""
              resources:
              - events
              verbs:
              - create
            - apiGroups:
              - example.com
              resources:
              - wordpresses
              - wordpresses/status
              verbs:
              - create
              - delete
              - get
              - list
              - patch
              - update
              - watch
            - apiGroups:
              - ""
              resources:
              - pods
              - services
              - services/finalizers
              - endpoints
              - persistentvolumeclaims
              - events
              - configmaps
              - secrets
              verbs:
              - create
              - delete
              - get
              - list
              - patch
              - update
              - watch
            - apiGroups:
              - apps
              resources:
              - deployments
              - daemonsets
              - replicasets
              - statefulsets
              verbs:
              - create
              - delete
              - get
              - list
              - patch
              - update
              - watch
            - apiGroups:
              - authentication.k8s.io
              resources:
              - tokenreviews
              verbs:
              - create
            - apiGroups:
              - authorization.k8s.io
              resources:
              - subjectaccessreviews
              verbs:
              - create
            serviceAccountName: default
          deployments:
          - name: wordpress-controller-manager
            spec:
              replicas: 1
              selector:
                matchLabels:
                  control-plane: controller-manager
              strategy: {}
              template:
                metadata:
                  labels:
                    control-plane: controller-manager
                spec:
                  containers:
                  - args:
                    - --secure-listen-address=0.0.0.0:8443
                    - --upstream=http://127.0.0.1:8080/
                    - --logtostderr=true
                    - --v=10
                    image: gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0
                    name: kube-rbac-proxy
                    ports:
                    - containerPort: 8443
                      name: https
                    resources: {}
                  - args:
                    - --metrics-addr=127.0.0.1:8080
                    - --enable-leader-election
                    - --leader-election-id=wordpress
                    env:
                    - name: RELATED_IMAGE_WORDPRESS
                      value: docker.io/bitnami/wordpress:5.3.2-debian-10-10
                    image: controller:latest
                    name: manager
                    resources:
                      limits:
                        cpu: 100m
                        memory: 90Mi
                      requests:
                        cpu: 100m
                        memory: 60Mi
                  terminationGracePeriodSeconds: 10
          permissions:
          - rules:
            - apiGroups:
              - ""
              resources:
              - configmaps
              verbs:
              - get
              - list
              - watch
              - create
              - update
              - patch
              - delete
            - apiGroups:
              - ""
              resources:
              - events
              verbs:
              - create
              - patch
            serviceAccountName: default
        strategy: deployment
      installModes:
      - supported: true
        type: OwnNamespace
      - supported: true
        type: SingleNamespace
      - supported: false
        type: MultiNamespace
      - supported: true
        type: AllNamespaces
      keywords:
      - cool
      - fun
      - easy
      links:
      - name: Wordpress
        url: https://wordpress.domain
      maintainers:
      - email: nhartman@redhat.com
        name: nhartman
      maturity: alpha
      provider:
        name: Provider Name
      version: 0.0.1

    Creating the Metadata Bundle

    Before we create and push the metadata bundle image we first need to create the metadata that will reside within the container image. Operator-SDK can help you with this task.

    A bundle consists of manifests (CSV and CRDs) and metadata that define an Operator at a particular version. You may have also heard of a bundle image.

    An Operator Bundle is built as a scratch (non-runnable) container image that contains operator manifests and specific metadata in designated directories inside the image. Then, it can be pushed and pulled from an OCI-compliant container registry. Ultimately, an operator bundle will be used by Operator Registry and OLM to install an operator in OLM-enabled clusters.

    SDK projects are scaffolded with a Makefile containing the bundle recipe by default, which wraps generate kustomize manifests, generate bundle, and other related commands. First we want to run the following command from the root of your project:

    $ make bundle

    This will prompt a series of questions asking you for information that will be added into the generated files (specifically the CSV):

    • Display name for the operator (required):

    • Description for the operator (required):

    • Provider's name for the operator (required):

    • Any relevant URL for the provider name (optional):

    By default make bundle will generate a CSV, copy CRDs, and generate metadata in the bundle format:
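
    The resulting layout looks roughly like this (the file names are illustrative and will match your own operator name and API group):

    bundle
    ├── manifests
    │   ├── example.com_wordpresses.yaml
    │   ├── wordpress-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
    │   └── wordpress.clusterserviceversion.yaml
    ├── metadata
    │   └── annotations.yaml
    └── tests
        └── scorecard
            └── config.yaml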

    Bundle metadata in bundle/metadata/annotations.yaml contains information about a particular Operator version available in a registry. OLM uses this information to install specific Operator versions and resolve dependencies. In the bundle above you can see that within the manifests directory we have the cluster role, the CSV, and the CRD. You will also see the tests directory contains information for running the scorecard. We will get to that in a later section.

    Installing an OpenShift Environment

    To test the metadata files you will need access to an OpenShift environment. If you don't have an environment set up, there are several options available depending on your requirements. Red Hat provides try.openshift.com, which lists several infrastructure providers with varying levels of maturity for you to choose from. You have the option to choose between OpenShift Container Platform 4 on popular cloud and virtualization providers, as well as bare metal, including a local deployment method using CodeReady Containers. The characteristics of each offering are outlined further below.

    OpenShift Container Platform 4

    OpenShift Container Platform 4 (OCP 4) is the enterprise-ready Kubernetes platform that’s ready for deployments on these infrastructure providers:

    • Cloud: AWS, Azure, Google Cloud

    • Virtualization: VMware vSphere, Red Hat OpenStack Platform

    • Bare Metal

    Requirements

    Deploying to a public cloud provider requires an account with the provider, and will incur hourly use charges for the utilized infrastructure.

    Deploying to virtual platforms and bare metal will require a paid OpenShift subscription or a partner NFR subscription for OpenShift, in addition to providing the platform and/or infrastructure for the installation.

    Applications with high or specialized resource requirements are best suited for an OCP cluster. For testing application scaling or other operational tasks, an OCP cluster is recommended. Infrastructure scalability inherent in these infrastructure providers affords OpenShift this ability in an OCP deployment.

    Installation

    You can install OCP 4 by going to try.openshift.com and clicking Get Started (you will need to log in with a Red Hat account), and then selecting the infrastructure provider of your choice. You can also install OCP 4 using the OpenShift installer. You can also reference the AWS Quick Start Guide in the appendix.

    CodeReady Containers

    CodeReady Containers provides a minimal OpenShift 4 cluster to developers, free of cost. It consists of a single OpenShift node running as a virtual machine for offline development and testing on a laptop or desktop.

    Note: For more information about CodeReady Containers, please visit the CodeReady Containers documentation.

    Requirements

    CodeReady Containers runs on Windows, Mac, and Linux and requires the following minimum system requirements:

    • 4 virtual CPUs (vCPUs)

    • 8 GB of memory

    • 35 GB of storage space

    As described previously, applications will be limited to the resources provided by the host. If you desire to run resource intensive applications in CodeReady Containers, you can increase the size of the VM (CPU & RAM) at runtime.

    Not all OpenShift features are available within the CodeReady Containers deployment. The following limitations exist:

    • Operators for machine-config and monitoring are disabled by default, so the corresponding functions of the web console are non-functioning.

    • Due to this, there is currently no upgrade path to newer versions of OpenShift.

    • External networking and ingress does not function identically to an OCP 4 cluster due to running in a local virtual machine.

    • Operational tasks, such as rebooting the CodeReady Containers VM are disruptive to applications. Such high availability is not achievable with a single node cluster.

    With these limitations in mind, it is suggested that final testing of containers and operators developed in CodeReady Containers should occur in OpenShift Container Platform prior to certification.

    Installation

    You can continue installing CodeReady Containers locally by reading on, or you can reference the official CodeReady Containers documentation. If you use the instructions provided below, we'll assume you're running on a Linux platform such as RHEL, CentOS or Fedora. You'll also need the NetworkManager, libvirt and qemu-kvm packages installed on your system.

    The first step is to download a crc release for your platform from the Red Hat OpenShift Cluster Manager.

    The next step is to extract the crc binary from the downloaded archive somewhere into your $PATH:
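
    For example (the archive and directory names vary by crc release and platform):

    tar -xvf crc-linux-amd64.tar.xz
    sudo cp crc-linux-*-amd64/crc /usr/local/bin/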

    The crc command should now be accessible from the command line. You can now install the cluster:
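
    In the standard crc workflow, that step is:

    crc setup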

    Note: You will need the image pull secret for your Red Hat account to complete the next step. You can obtain your pull secret from the same page where you downloaded crc.

    Finally, start the cluster VM:
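
    In the standard crc workflow this is (crc will ask for your pull secret if it has not been provided yet):

    crc start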

    After a few moments, you should be given the login credentials and URL for the cluster and a message stating that CodeReady Containers is running. To access the cluster, download the oc client and extract it into your $PATH, or run the following shell command (which will last the duration of the shell session):
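
    If you use the bundled client, that shell command is:

    eval $(crc oc-env)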

    You should now be able to log in as kubeadmin using the provided API URL and password. Keep in mind that you will have to run crc start again after a reboot to start the cluster. To test your operator, continue on to Deploying onto OpenShift.

    Reviewing your Metadata Bundle

    Some testing can be performed prior to deployment to verify that the metadata bundle files are properly configured and formatted.

    Validating the CSV

    You can validate your CSV is properly formatted by copying your file here: Validate YAML.

    Verifying your Bundle with Operator-SDK

    To check your metadata files, run the following command in the root directory of your project:
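
    Assuming the SDK-generated bundle/ directory, the command is:

    operator-sdk bundle validate ./bundle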

    This will check your metadata files to see if there are any required fields missing. It will produce an error message for each missing component. You can also run the command below for a more detailed validation.

    hashtag
    Previewing your CSV on OperatorHub.io

    Navigate to the website

    You can copy and paste your CSV contents into the "Enter your operator's CSV YAML:" section as shown below. Then click the blue submit button to see how your information will render.

    Work with your project manager to make sure your icon and information display correctly in the preview. You can go in and edit the fields of your CSV, some of which we added earlier, to change how the information is displayed.

    circle-info

    The information from your CSV will show up on the embedded OperatorHub on OpenShift. That is why it is important to verify your preview.

    Here is a look at how the embedded OperatorHub on OpenShift displays the information of certified partners.

    circle-check

    Congratulations, your operator is ready for testing on OpenShift. Continue with if you haven't already.

    Creating an Ansible Role From a Helm Chart

    A walkthrough example of turning an example helm chart into an Ansible role, which can then be used to create an Ansible Operator.

    This guide will walk you through taking a helm chart, and creating an Ansible operator using memcached as an example. You can find an example helm chart and converted ansible operator at https://github.com/dhoover103/helm-ansible-memcached-example.gitarrow-up-right and see what the changes look like.

    A lot of the conversion can be handled with a useful conversion utility, available here: https://github.com/redhat-nfvpe/helm-ansible-template-exporterarrow-up-right. After installing the converter and adding it to your $PATH, run:

    This will mostly give you an Ansible role that works the same as your Helm chart, with the templates in /<your role>/templates/. However, there are a few things that will need to be modified. Helm template syntax differs at times from the Jinja2 syntax that Ansible uses. Some common things to watch out for are:

    • The "toYaml" filter in Helm is "to_yaml" in jinja2.

    • "toJSON" likewise needs to be changed to "to_json"

    • The "b64enc" filter needs to be changed to "b64encode"

    • Any variables generated by your _helpers.tpl file are not available. You can either define them in the role's defaults/main.yml, hardcode them, or remove them from the templates.

    • Values that come from the Chart directly instead of the Values.yaml file (e.g. .Chart.Name) are also not available.

    Check your templates for these syntax errors and go ahead and fix them.
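
    For instance, a Helm snippet that uses toYaml and b64enc might be converted as follows. This is a minimal sketch; the resources and secret_value variables are illustrative and would need to exist in your role's defaults/main.yml.

    # Helm template syntax (before conversion):
    #   resources:
    # {{ toYaml .Values.resources | indent 10 }}
    #   password: {{ .Values.secretValue | b64enc }}

    # Jinja2 syntax in the Ansible role template (after conversion):
    resources:
    {{ resources | to_yaml | indent(10, true) }}
    password: "{{ secret_value | b64encode }}"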

    If you create new variables, remember to add them to the roles/<your role>/defaults/main.yml file (this is what replaces the Values.yaml file, and works exactly the same).
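
    A minimal roles/<your role>/defaults/main.yml for the memcached example might then look like this (the values shown are illustrative):

    defaults/main.yml
    name: memcached
    version: "3.2.3"
    size: 3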

    circle-info

    In Helm, there are many labels that will be consistent across the things your operator creates. You can choose to keep some around and remove others entirely as redundant. For example, in the sample operator, we've removed the release and heritage labels/label selectors, and changed the app and chart labels to use new values in the defaults/main.yml file.

    Helm templates tend to leave room for users to add their own annotations. Ansible considers this a security risk. While technically optional, you should find these open-ended annotations in your templates and remove them. Here's a sample of what they look like in a template:

    In the above example, we'd take out lines 8 and 9 (line 8 would stay in if we had some annotations we wanted). The same bit of yaml in an Ansible role would look like this:

    After making these changes, it is highly recommended to test the ansible role. It's impossible to account for all the filters with different syntax between Helm and Ansible, so you'll want to verify the role will run before moving on with your operator. To test the role, create a playbook.yml in the operator's root directory that looks like this:

    To run the playbook, run

    while logged in to your testing cluster. If everything comes back fine, you're all set. Otherwise, you'll need to look at the error message to see what bit of syntax is causing the problem, and change it in your templates. To test each template one at a time, go to the roles/<your role>/tasks/main.yml and comment out the templates you aren't testing yet.
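
    For example, a roles/<your role>/tasks/main.yml that temporarily renders only the deployment template might look like this (task names and template file names are illustrative):

    tasks/main.yml
    - name: Create the memcached deployment
      k8s:
        definition: "{{ lookup('template', 'deployment.yaml.j2') | from_yaml }}"

    # Comment these back in once the template above renders cleanly
    # - name: Create the memcached service
    #   k8s:
    #     definition: "{{ lookup('template', 'service.yaml.j2') | from_yaml }}"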

    Deploying onto OpenShift

    This section is to help you verify that the operator deploys successfully from the metadata bundle onto OpenShift using the Operator Lifecycle Manager (OLM).

    hashtag
    Before getting started

    Install the operator package manager (OPM)arrow-up-right tool

    hashtag
    Creating an index image

    You can create an index image using the opm CLI.

    1. Start a new index:

    2. Push the index image to a registry:

    circle-info

    Be sure to make your quay.io repositories for the bundle image and the index image public. You can change this in the repository settings on quay.io.

    hashtag
    Create the CatalogSource

    1. In your OpenShift environment create a catalog source in the openshift-marketplace namespace.

    2. You can verify the catalogsource has been created by running the following command:

    You should see the following output:

    3. Once you have confirmed the catalogsource exists, run the following command to verify its associated pod is successfully running. This verifies that the index image has been successfully pulled down from the repository.

    You should see the following output:

    4. Verify that the operator package has been successfully added:

    hashtag
    Create OperatorGroup

    1. Create a namespace for your project

    2. Create the OperatorGroup

    3. Verify you have a working operatorgroup within the namespace you created:

    You should see the following output:

    hashtag
    Create a Subscription

    Verify the subscription is created within your namespace:

    You should see the following output:

    The creation of the subscription should trigger the creation of the InstallPlan and CSV.

    You should see something similar to this output:

    Verify your operator is running in the namespace:

    hashtag
    Updating an existing index image

    You can update an existing index image with a new operator bundle version using the opm CLI.

    1. Update the existing index:

    2. Push the updated index image:

    3. After OLM polls the index image at its specified interval, you should eventually see a new InstallPlan for the new operator bundle version.

    hashtag
    Run the Scorecard test

    The Operator Scorecard is a testing utility included in the operator-sdk binary that guides users towards operator best practices by checking the correctness of their operators and CSVs. With your operator deployed via OLM, you can run the operator-sdk scorecard utility to create a CR which will trigger the operator and monitor what occurs. In order to certify you must pass the first two Basic Tests: Spec Block Exists, and Status Block Exists. Passing the third basic test (Writing into CRs has an effect) requires adding the scorecard-proxy container to the operator deployment, which is not desired in a production operator and is therefore not required for certification.

    In order to run scorecard locally against your operator deployed via OLM please refer to

    $ helmExport export <role> --helm-chart=<location of chart> --workspace=<output location> --generateFilters=true --emitKeysSnakeCase=true

    Comma-separated list of keywords for your operator (required):

  • Comma-separated list of maintainers and their emails (e.g. 'name1:email1, name2:email2') (required):

  • Custom network operators cannot be installed because the installation is not user configurable.

  • Each binary release of CodeReady Containers expires after 30 days. You will then get certificate errors and must update the local crc binary, and then reinstall the cluster.

  • try.openshift.comarrow-up-right
    official documentationarrow-up-right
    https://developers.redhat.com/products/codeready-containers/overviewarrow-up-right
    https://code-ready.github.io/crcarrow-up-right
    https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/arrow-up-right
    https://cloud.redhat.com/openshift/install/crc/installer-provisionedarrow-up-right
    latest oc clientarrow-up-right
    Deploying onto OpenShift
    this section of the upstream documentationarrow-up-right
    bundle/
    .
    ├── manifests
    │   ├── example.my.domain_wordpresses.yaml
    │   ├── wordpress.clusterserviceversion.yaml
    │   └── wordpress-metrics-reader_rbac.authorization.k8s.io_v1beta1_clusterrole.yaml
    ├── metadata
    │   └── annotations.yaml
    └── tests
        └── scorecard
            └── config.yaml
    
    $ tar -xJvf crc-linux-amd64.tar.xz -C $HOME/bin --strip-components=1 */crc
    $ crc setup
    $ crc start
    $ eval $(crc oc-env)
    metadata:
      name: {{ template "memcached.fullname" . }}
      labels:
        app: {{ template "memcached.fullname" . }}
        chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
        release: "{{ .Release.Name }}"
        heritage: "{{ .Release.Service }}"
      annotations:
    {{ toYaml .Values.serviceAnnotations | indent 4 }}
    metadata:
      name: {{ name }} #The "name" variable was added to defaults/main.yml
      labels:
        app: {{ name }}
        chart: "{{ name }}-{{ version }}" #The "version" variable was also added to defaults/main.yml
        #The annotations field was removed entirely, since we don't have any for this object.
    playbook.yml
    - hosts: localhost
      roles:
      - <your role>
    $ ansible-playbook playbook.yml
    $ opm index add \
        --bundles quay.io/<namespace>/wordpress-operator:v0.0.1 \
        --tag quay.io/<namespace>/wordpress-operator-index:latest 
    $ podman push quay.io/<namespace>/wordpress-operator-index:latest
    test-operator-catalogsource.yaml
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: my-test-operators
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      image: quay.io/<namespace>/wordpress-operator-index:latest
      displayName: Test Operators
      publisher: Red Hat Partner
      updateStrategy:
        registryPoll:
          interval: 5m
    $ oc create -f test-operator-catalogsource.yaml 
    $ oc -n openshift-marketplace get catalogsource
    NAME                  DISPLAY               TYPE   PUBLISHER   AGE
    certified-operators   Certified Operators   grpc   Red Hat     12d
    community-operators   Community Operators   grpc   Red Hat     12d
    my-test-operators     Test Operators        grpc   Red Hat     30s
    redhat-marketplace    Red Hat Marketplace   grpc   Red Hat     12d
    redhat-operators      Red Hat Operators     grpc   Red Hat     12d
    $ oc -n openshift-marketplace get pods
    NAME                                    READY   STATUS    RESTARTS   AGE
    certified-operators-59b6bc7fbc-rzzwq    1/1     Running   0          12d
    community-operators-5f89b4787d-ljvfb    1/1     Running   0          12d
    marketplace-operator-5c84994668-927b6   1/1     Running   0          12d
    my-test-operators-##########-#####      1/1     Running   0          30s
    redhat-marketplace-86cc645bb6-8crdl     1/1     Running   0          12d
    redhat-operators-5877478c4f-4ffs2       1/1     Running   0          12d
    $ oc get packagemanifests | grep "Test Operators"
     $ oc new-project test-operator
    test-operatorgroup.yaml
    apiVersion: operators.coreos.com/v1alpha2
    kind: OperatorGroup
    metadata:
      name: my-group
      namespace: test-operator
    spec:
      targetNamespaces:
        - test-operator
    $ oc create -f test-operatorgroup.yaml
    $ oc get og
    NAMESPACE                              NAME                           AGE
    test-operator                          my-group                       30s 
    test-subscription.yaml
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: test-subscription
    spec:
      channel: alpha
      installPlanApproval: Automatic
      name: wordpress-operator
      source: my-test-operators
      sourceNamespace: openshift-marketplace
    $ oc create -f test-subscription.yaml
    $ oc get sub -n test-operator
    NAMESPACE             NAME                              PACKAGE                           SOURCE                CHANNEL
    test-operator         test-subscription                 wordpress-operator                my-test-operators     alpha
    $ oc get installplan -n test-operator
    $ oc get csv -n test-operator
    NAMESPACE                                          NAME                             DISPLAY                                 VERSION   REPLACES   PHASE
    test-operator                                      wordpress-operator.v0.0.1        Wordpress Operator                      0.0.1                Succeeded
    $ oc get pods -n test-operator
    $ opm index add \
        --bundles quay.io/<namespace>/wordpress-operator:v0.0.2 \
        --from-index quay.io/<namespace>/wordpress-operator-index:latest \
        --tag quay.io/<namespace>/wordpress-operator-index:latest 
    $ podman push quay.io/<namespace>/wordpress-operator-index:latest
    $ oc get installplan -n test-operator
    https://operatorhub.io/previewarrow-up-right
    Installing an OpenShift Environment
    This image uses the example mongodb operator from earlier in the guide.

    Dockerfile Requirements

    The Dockerfile can be found in the root directory of your operator project. For Certified Operator Images, Dockerfile requirements are as follows:

    1. You must configure the required labels (name, maintainer, vendor, version, release, summary)

    2. Software license(s)arrow-up-right must be included within the image.

    circle-info

    Although typically labels and licenses are not required to successfully build a running image, they are required for the Red Hat build service and scanner.

    Below is an example Dockerfile for an Ansible Operator which includes the aforementioned requirements:

    A few things to note about the Dockerfile above:

    • The default FROM line produced by the SDK needs to be replaced with the line listed above.

    • This Dockerfile contains all of the required labels. These labels must be manually added (name, vendor, version, release, summary, and description).

    • If you are planning to use a playbook, that file will also need to be copied.

    • Lastly, this Dockerfile also references a licenses/ directory, which needs to be manually added to the root of the project. This directory must include the software license(s) of your project.

    Your project directory structure should look similar to the hierarchy below. Note the location of the licenses directory.

    operator-sdk bundle validate ./bundle
    operator-sdk bundle validate ./bundle --select-optional suite=operatorframework
    Dockerfile
    FROM registry.redhat.io/openshift4/ose-ansible-operator:v4.7
    
    ### Required OpenShift Labels
    LABEL name="Mongodb Operator" \
          vendor="RHSCL" \
          version="v0.0.1" \
          release="1" \
          summary="This is an example of a mongodb ansible operator." \
          description="This operator will deploy mongodb to the cluster."
    
    COPY requirements.yml ${HOME}/requirements.yml
    RUN ansible-galaxy collection install -r ${HOME}/requirements.yml \
     && chmod -R ug+rwx ${HOME}/.ansible
    
    # Required Licenses
    COPY licenses /licenses
    
    COPY watches.yaml ${HOME}/watches.yaml
    COPY roles/ ${HOME}/roles/
    COPY playbooks/ ${HOME}/playbooks/
    
    mongodb-operator
    ├── config
    │   ├── crd
    │   │   ├── bases
    │   │   │   └── nosql.mogodb.com_mongodbs.yaml
    │   │   └── kustomization.yaml
    │   ├── default
    │   │   ├── kustomization.yaml
    │   │   └── manager_auth_proxy_patch.yaml
    │   ├── manager
    │   │   ├── kustomization.yaml
    │   │   └── manager.yaml
    │   ├── prometheus
    │   │   ├── kustomization.yaml
    │   │   └── monitor.yaml
    │   ├── rbac
    │   │   ├── auth_proxy_client_clusterrole.yaml
    │   │   ├── auth_proxy_role_binding.yaml
    │   │   ├── auth_proxy_role.yaml
    │   │   ├── auth_proxy_service.yaml
    │   │   ├── kustomization.yaml
    │   │   ├── leader_election_role_binding.yaml
    │   │   ├── leader_election_role.yaml
    │   │   ├── mongodb_editor_role.yaml
    │   │   ├── mongodb_viewer_role.yaml
    │   │   ├── role_binding.yaml
    │   │   └── role.yaml
    │   ├── samples
    │   │   ├── kustomization.yaml
    │   │   └── nosql_v1alpha1_mongodb.yaml
    │   ├── scorecard
    │   │   ├── bases
    │   │   │   └── config.yaml
    │   │   ├── kustomization.yaml
    │   │   └── patches
    │   │       ├── basic.config.yaml
    │   │       └── olm.config.yaml
    │   └── testing
    │       ├── debug_logs_patch.yaml
    │       ├── kustomization.yaml
    │       ├── manager_image.yaml
    │       └── pull_policy
    │           ├── Always.yaml
    │           ├── IfNotPresent.yaml
    │           └── Never.yaml
    ├── Dockerfile
    ├── licenses
    │   └── MIT.txt
    ├── Makefile
    ├── molecule
    │   ├── default
    │   │   ├── converge.yml
    │   │   ├── create.yml
    │   │   ├── destroy.yml
    │   │   ├── kustomize.yml
    │   │   ├── molecule.yml
    │   │   ├── prepare.yml
    │   │   ├── tasks
    │   │   │   └── mongodb_test.yml
    │   │   └── verify.yml
    │   └── kind
    │       ├── converge.yml
    │       ├── create.yml
    │       ├── destroy.yml
    │       └── molecule.yml
    ├── playbooks
    ├── PROJECT
    ├── requirements.yml
    ├── roles
    │   └── mongodb
    │       ├── defaults
    │       │   └── main.yml
    │       ├── files
    │       ├── handlers
    │       │   └── main.yml
    │       ├── meta
    │       │   └── main.yml
    │       ├── README.md
    │       ├── tasks
    │       │   └── main.yml
    │       ├── templates
    │       └── vars
    │           └── main.yml
    └── watches.yaml
    

    Using Third Party Network Operators with OpenShift

    This section outlines the requirements and steps for integrating third-party networking providers with the OpenShift installer.

    Network Operators are a special breed because they are required to be functional very early on during installation. OpenShift 4 has a facility for injecting custom objects at install time. In this case, we will use it to install a compliant network operator.

    Network operators also need to consume and update certain special objects. This is how they inform cluster components of the current network status.

    circle-info

    A critical goal of this is to be able to update and manage the networking components over time. Therefore, the new network-operator must transition to OLM ownership once the cluster is running and OLM is installed.

    hashtag
    Requirements for OpenShift-compliant network operator

    1. The network Operator needs to be certified with OpenShift 4 ()

    2. Publish the network status to downstream consumers. Cluster installation will fail to progress until this happens.

      1. Determine the currently-deployed ClusterNetwork, ServiceNetwork, and pod-to-pod MTU

    hashtag
    Steps to install third party networking operator

    hashtag
    Add network-operator to install payload.

    Make the work directory

    mkdir mycluster

    Create install-config

    openshift-install create install-config --dir=mycluster

    1. Update the Network Type in the install-config

      a) Edit mycluster/install-config.yaml

      b) Replace OpenShiftSDN with the name of your network plugin. The exact value doesn't matter to the "Cluster Network Operator" (CNO); set it to something meaningful to you (the snippet after this list shows where the value lives).

    2. Create OpenShift manifests

    openshift-install create manifests --dir=mycluster
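
    For reference (as noted in step 1b above), the networking stanza of mycluster/install-config.yaml would end up looking something like the sketch below; the plugin name MyCustomSDN is purely illustrative:

    install-config.yaml
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      networkType: MyCustomSDN
      serviceNetwork:
      - 172.30.0.0/16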

    Add your operator’s manifests to the installer

    At install time, the installer will create the objects defined by any manifest files placed in mycluster/manifests/. So, copy all manifests needed to install your operator into that directory. See for examples.

    Create cluster:

    openshift-install create cluster --dir=mycluster

    This will deploy your cluster and apply the manifests of your CNI operator, leaving the Operator running but unmanaged.

    hashtag
    Transition your operator to OLM ownership.

    1. Create OperatorGroup in the namespace of the operator -

    2. Create subscription pointing to ISV catalog source and the desired operator -

    3. Verify that a ClusterServiceVersion object referring to your Operator is created

    4. Verify that the resources now have owner references to OLM

    Update Network.config.openshift.io/v1 cluster Status field accordingly. See Appendix B for an example.
  • Optional but recommended: React to network configuration changes

    1. Set up a watch on Network.config.openshift.io/v1 cluster

    2. Reconcile any changes to Spec to the running state of the network

    3. Publish the current state to the Status field

    4. Deployment strategy should be set to RollingUpdate.

  • Partner Guide for Red Hat OpenShiftarrow-up-right
    Appendix A - CNI Operator manifests
    Appendix C
    Appendix D

    AWS OpenShift 4 Cluster Quick Start Guide

    OpenShift 4 now comes with an installer, making it simpler to set up an OpenShift cluster. Before you run the installer, you must first Configure AWS CLI locally and Configure your AWS account.

    hashtag
    Configuring AWS CLI

    1. Creating Access Key for an IAM user

      1. Sign in to the AWS Management Console and open the IAM console at .

      2. In the navigation pane, choose Users.

      3. Choose the name of the user whose access keys you want to create, and then choose the Security credentials tab.

      4. In the Access keys section, choose Create access key.

      5. To view the new access key pair, choose Show. You will not have access to the secret access key again after this dialog box closes. Your credentials will look something like this:

         Access key ID: AKIAIOSFODNN7EXAMPLE
         Secret access key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

      6. To download the key pair, choose Download .csv file. Store the keys in a secure location. You will not have access to the secret access key again after this dialog box closes.

      7. After you download the .csv file, choose Close. When you create an access key, the key pair is active by default, and you can use the pair right away.

    2. Create the ~/.aws folder

      1. You should have two files in this folder, config and credentials. Note that your Access Key ID and Secret Access Key go in the credentials file. Example of these files:

         ~/.aws/config
         [default]
         region=us-west-2
         output=json

         ~/.aws/credentials
         [default]
         aws_access_key_id=AKIAIOSFODNN7EXAMPLE
         aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

    More information about configuring your AWS CLI can be found:

    hashtag
    Configuring your AWS account

    hashtag
    Configuring Route53

    1. Identify your domain, or subdomain, and registrar.

    2. Create a public hosted zone for your domain or subdomain. See in the AWS documentation.

      1. Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com

    hashtag
    AWS Account limitation

    By default, each cluster creates the following instances:

    • One bootstrap machine, which is removed after installation

    • Three master nodes

    • Three worker nodes

    Due to this cluster setup, you cannot use the US-EAST-1 region. More information on Supported Regions and Required Permissions can be found:

    hashtag
    Running the OpenShift Cluster Installer

    To use the default settings follow:

    The installer can be found:

    https://console.aws.amazon.com/iam/arrow-up-right
    HEREarrow-up-right
    Creating a Public Hosted Zonearrow-up-right
    HEREarrow-up-right
    Installing a cluster quickly on AWSarrow-up-right
    HEREarrow-up-right

    What if I've already published a Community Operator?

    This section will outline the steps required to publish your OperatorHub.io or OpenShift Community Operator to the Red Hat Certified catalog listing in OpenShift OperatorHub.

    hashtag
    Certification of a Community Operator

    In order to certify a community operator, you must first sign up as a Red Hat Partnerarrow-up-right and then certify and publish both your application and operator container image(s) to the Red Hat Container Catalogarrow-up-right (RHCC). This process is covered in depth in our Partner Guidearrow-up-right.

    The following changes must be made to the operator as well as the operator metadata (CSV) prior to submission (each item is covered in depth further below):

    • The operator must consume the certified application image(s) aka operands from RHCC.

    • The service account of your operand (e.g. the service account used to run your application's pods) must have the proper Security Context Constraint (SCC) applied in order to run containers either as root or as a specific UID in OpenShift. This applies specifically to OperatorHub.io/vanilla K8s operators.

    • Set the metadata.annotations.certified field to true (see the snippet after this list)

    • The packageName of the metadata bundle in package.yaml must be distinct from that of your community operator.

    • The metadata bundle must be flattened and contain no sub-directories or extraneous files
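
    For reference, the certified annotation mentioned above is set in the CSV like this (the CSV name shown is illustrative):

    clusterserviceversion.yaml
    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: example-operator-certified.v0.1.0
      annotations:
        certified: "true"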

    Multi-Arch Operator Certification

    This section explains certification of non-Intel/AMD architecture operators for publishing to OpenShift OperatorHub and Red Hat Marketplace.

    Requirements and Limitationschevron-right
    Building a Multi-Arch Operator Imagechevron-right
    Scanning and Publishingchevron-right
    Updating the Bundle Imagechevron-right

    Appendix B - Cluster Network Status

    Your operator should consume the Spec portion of this object, and update the Status accordingly:

    apiVersion: config.openshift.io/v1
    kind: Network
    metadata:
     name: cluster
    spec:
     clusterNetwork:
     - cidr: 10.128.0.0/14
       hostPrefix: 23
     externalIP:
       policy: {}
     networkType: OpenShiftSDN
     serviceNetwork:
     - 172.30.0.0/16
    status:
     clusterNetwork:
     - cidr: 10.128.0.0/14
       hostPrefix: 23
     clusterNetworkMTU: 1450
     networkType: OpenShiftSDN
     serviceNetwork:
     - 172.30.0.0/16

    Appendix A - CNI Operator Manifests

    cluster-network-03-mysdn-namespace.yml
    apiVersion: v1
    kind: Namespace
    metadata:
     name: mysdn-operator
     annotations:
       openshift.io/node-selector: ""
     labels:
       name: mysdn-operator
       openshift.io/run-level: "0"

    cluster-network-04-mysdn-CRD.yml
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
     name: installations.operator.mysdn.io
    spec:
     group: operator.mysdn.io
     names:
       kind: Installation
       listKind: InstallationList
       plural: installations
       singular: installation
     scope: Cluster
     subresources:
       status: {}
     validation:
       openAPIV3Schema:
         properties:
           apiVersion:
             description: 'APIVersion defines the versioned schema of this representation
                of an object. Servers should convert recognized schemas to the latest
               internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
             type: string
           kind:
             description: 'Kind is a string value representing the REST resource this
               object represents. Servers may infer this from the endpoint the client
               submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
             type: string
           metadata:
             type: object
           spec:
             type: object
           status:
             type: object
     version: v1
     versions:
     - name: v1
       served: true
       storage: true

    Appendix C - Operator Group Manifest

    operator-group.yml
    apiVersion: operators.coreos.com/v1alpha2
    kind: OperatorGroup
    metadata:
      name: operatorgroup
      namespace: mysdn-operator
    spec:
      targetNamespaces:
        - mysdn-operator

    Choosing a Unique Package Name

    The package name of your metadata bundle must be unique from your community operator. This section demonstrates changing an existing package name to satisfy this requirement.

    Package name uniqueness (or lack of) can be a tripping point when certifying a community operator. Let's take a metadata bundle from a hypothetical community operator and examine the package.yaml file contents:

    packageName: example
    channels:
        - name: alpha
          currentCSV: example-operator-0.1.0

    The typical convention is simply to add -certified as a suffix to the packageName like so:

    packageName: example-certified
    channels:
        - name: alpha
          currentCSV: example-operator-0.1.0

    Now that your CSV and package yaml files are updated, you can proceed with bundling your metadata.

    Appendix D - Subscription Manifest

    subscription.yml
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: mysdn-operator
      namespace: mysdn-operator
    spec:
      channel: alpha
      name: mysdn-operator
      startingCSV: mysdn-operator.v0.0.1
      installPlanApproval: Automatic
      source: certified-operators
      sourceNamespace: openshift-marketplace
    Cluster-network-05-mysdn-deployment.yml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: mysdn-operator
     namespace: mysdn-operator
    spec:
     replicas: 1
     selector:
       matchLabels:
         name: mysdn-operator
     template:
       metadata:
         labels:
           name: mysdn-operator
       spec:
         tolerations:
           - effect: NoExecute
             operator: Exists
           - effect: NoSchedule
             operator: Exists
         serviceAccountName: mysdn-operator
         hostNetwork: true
         initContainers:
           - name: configure-security-groups
             image: quay.io/mysdn/operator-init:master
             env:
               - name: KUBELET_KUBECONFIG
                 value: /etc/kubernetes/kubeconfig
             volumeMounts:
               - mountPath: /etc/kubernetes/kubeconfig
                 name: host-kubeconfig
                 readOnly: true
         containers:
           - name: mysdn-operator
             image: quay.io/mysdn/operator:de99f8f
             command:
               - operator
               - --url-only-kubeconfig=/etc/kubernetes/kubeconfig
             imagePullPolicy: Always
             volumeMounts:
               - mountPath: /etc/kubernetes/kubeconfig
                 name: host-kubeconfig
                 readOnly: true
             env:
               - name: WATCH_NAMESPACE
                 valueFrom:
                   fieldRef:
                     fieldPath: metadata.namespace
               - name: OPENSHIFT
                 value: "true"
               - name: POD_NAME
                 valueFrom:
                   fieldRef:
                     fieldPath: metadata.name
               - name: OPERATOR_NAME
                 value: "mysdn-operator"
         volumes:
         - hostPath:
             path: /etc/kubernetes/kubeconfig
           name: host-kubeconfig
    cluster-network-06-mysdn-clusterrolebinding.yml
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
     name: mysdn-operator
    subjects:
    - kind: ServiceAccount
     name: mysdn-operator
     namespace: mysdn-operator
    roleRef:
     kind: ClusterRole
     name: mysdn-operator
     apiGroup: rbac.authorization.k8s.io
    cluster-network-07-mysdn-clusterrole.yml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
     name: mysdn-operator
    rules:
    - apiGroups:
     - ""
     resources:
     - namespaces
     - pods
     - services
     - endpoints
     - events
     - configmaps
     - secrets
     - serviceaccounts
     verbs:
     - '*'
    - apiGroups:
     - rbac.authorization.k8s.io
     resources:
     - clusterroles
     - clusterrolebindings
     - rolebindings
     verbs:
     - '*'
    - apiGroups:
     - apps
     resources:
     - deployments
     - daemonsets
     verbs:
     - '*'
    - apiGroups:
     - apiextensions.k8s.io
     resources:
     - customresourcedefinitions
     verbs:
     - '*'
    - apiGroups:
     - monitoring.coreos.com
     resources:
     - servicemonitors
     verbs:
     - get
     - create
    - apiGroups:
     - apps
     resourceNames:
     - mysdn-operator
     resources:
     - deployments/finalizers
     verbs:
     - update
    - apiGroups:
     - operator.mysdn.io
     resources:
     - '*'
     verbs:
     - '*'
    # When running mysdnSecureEnterprise, we need to manage APIServices.
    - apiGroups:
     - apiregistration.k8s.io
     resources:
     - apiservices
     verbs:
     - '*'
    # When running in openshift, we need to update networking config.
    - apiGroups:
     - config.openshift.io
     resources:
     - networks/status
     verbs:
     - 'update'
     - '*'
    - apiGroups:
     - config.openshift.io
     resources:
     - networks
     verbs:
     - 'get'
     - '*'
    - apiGroups:
     - scheduling.k8s.io
     resources:
     - priorityclasses
     verbs:
     - '*'
    cluster-network-08-mysdn-serviceaccount.yml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
     name: mysdn-operator
     namespace: mysdn-operator
    cluster-network-09-mysdn-installation.yml
    apiVersion: operator.mysdn.io/v1
    kind: Installation
    metadata:
     name: default
    spec:
     cniBinDir: "/var/lib/cni/bin"
     cniNetDir: "/etc/kubernetes/cni/net.d"
     components:
       kubeProxy:
         required: true
         image: quay.io/mysdn/kube-proxy:v1.13.6-nft-b9dfbb
       node:
         image: tmjd/node:erik-nft

    Connect Metadata Test Results

    The operator certification pipeline, whether using the local test or the hosted offering, performs a series of tests, all of which must pass before you can successfully publish your operator.

    Each of the tests performed by the certification pipeline is outlined below.

    hashtag
    Operator Metadata Linting

    In order to pass this test both of the commands below need to be run and yield no errors.

    hashtag
    Operator OLM Deployment

    In order to pass this test, the operator must deploy successfully from the Operator Lifecycle Manager. Keep in mind that even when an operator is successfully deployed, there could still be non-fatal errors hidden in the operator's pod log, so prior testing on OpenShift is essential.

    hashtag
    Operator Scorecard Tests

    In order to certify you must pass the first two Basic Tests: Spec Block Exists, and Status Block Exists. Passing the third basic test (Writing into CRs has an effect) requires adding the scorecard-proxy container to the operator deployment, which is not desired in a production operator and is therefore not required for certification.

    If you're building either a Helm or Ansible operator using the SDK, then the CR Status is managed automatically, and there is no extra coding required. However, if you're building an operator in Go (using the SDK) or some other framework, you'll need to make certain that you're updating the Status field of the CR with relevant info regarding the running state of the application. See the .

    Passing the OLM Tests portion of the scorecard is not a requirement for certification, as currently the CR Spec and Status Descriptor fields don't populate the OpenShift UI.

    triangle-exclamation

    The scorecard does not test the operator's functionality besides ensuring that it updates the CR status field. You are required to fully test your operator in OpenShift 4.

    hashtag
    Operator Image Source

    You must have a published operator image source in order to publish your operator. The base image must be RHEL 7 or UBI. Refer to and for more information on the Universal Base Image (UBI).

    Red Hat Marketplace Requirements

    A walkthrough of the changes required to enable your operator to work in an offline environment. This is also a technical requirement for Red Hat Marketplace.

    hashtag
    Helm Operators

    You can find an example Helm Operator herearrow-up-right.

    hashtag
    Update your Helm chart

    Make sure your Helm chart only references the values file for images. Each image should be a single value (it can't be split into repository and tag, for example).

    A great way to find all the parts of your helm chart that will need to be updated is to recursively search your project's template folder for "image:"

    Here we see that the image is split into two values - repository and tag. This won't work because we need to have a single variable to override. Replace this line with a new, single variable:

    Don't forget to add this new value to your Values.yaml file!
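
    The corresponding values.yaml entry might then look like this (a sketch using the etcd example image referenced elsewhere in this section):

    values.yaml
    image:
      image: k8s.gcr.io/etcd-amd64:3.2.26
      pullPolicy: IfNotPresent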

    hashtag
    Override the image variable

    In the watches.yaml file, add a field for overrideValues. It should contain each variable from the values.yaml file that corresponds to an image, and should be set to the environment variable you want to use.

    NOTE: the variable name MUST follow the pattern RELATED_IMAGE_. There is code looking for that string in your operator.

    Using operator-sdk version 0.14.0 or later, build the updated operator image, and skip down to the

    hashtag
    Ansible Operators

    You can find an .

    hashtag
    Update your Ansible role

    Make sure your role is using environment variables for images instead of hard-coded or regular variables. If it's not, update it:

    Above you can see a reference to an image that's defined in the role's defaults/main.yml file with an option to override it in vars/main.yml. Instead, use Ansible's lookup plugin to reference an environment variable and remove the variable from defaults/main.yml to avoid confusion:

    NOTE: Your environment variables need to follow the RELATED_IMAGE_ format, as there is code looking for this pattern.

    Build your updated operator image, and skip down to the

    hashtag
    Golang Operators

    You can find an .

    Make sure your code is using environment variables for any images your operator uses (any image except the operator itself).

    Build your updated operator image, and skip down to the

    hashtag
    All Operators (Helm, Ansible or Golang)

    hashtag
    Define the environment variables

    In the CSV and operator.yaml files, declare the variable and set it to a default value.
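
    In the operator Deployment (operator.yaml, or config/manager/manager.yaml in newer SDK layouts), the declaration is an ordinary container environment variable. Below is a minimal sketch using the etcd example values shown later in this section:

    operator.yaml
    spec:
      template:
        spec:
          containers:
          - name: etcd-helm-operator
            image: quay.io/dhoover103/etcd-helm-operator:v0.0.1
            env:
            - name: RELATED_IMAGE_STATEFULSET
              value: k8s.gcr.io/etcd-amd64:3.2.26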

    circle-info

    You may now use external image registries if desired. You are not required to host all images in the Red Hat Connect internal image registry as was the previous practice for Red Hat Marketplace integration.

    hashtag
    Entitled Registry

    If you want to use the entitled registry for Red Hat Marketplace, your images must be hosted in .

    You will need to scan your operator with the docker images using , replacing all your image references to use this docker registry.

    From there, apply an ImageContentSourcePolicy to point to . This will allow the marketplace operator to use the entitled registry and properly replace the image strings.
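
    A sketch of such an ImageContentSourcePolicy is shown below; the exact mirror and source repository paths are assumptions, so confirm them against the Red Hat Marketplace documentation for your operator:

    imagecontentsourcepolicy.yaml
    apiVersion: operator.openshift.io/v1alpha1
    kind: ImageContentSourcePolicy
    metadata:
      name: marketplace-mirror
    spec:
      repositoryDigestMirrors:
      - mirrors:
        - registry.marketplace.redhat.com/rhm
        source: registry.connect.redhat.com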

    hashtag
    Other things to watch for

    If you're in the habit of using the "latest" tag for your images, make sure you specify it. Because of how the automation is written that picks up these images, we need a tag to be present.

    Make sure you aren't overwriting the image in your CR (and by extension in the alm-examples field of the CSV). The best way to handle this is removing the field from the CR/alm-examples.

    Security Context Constraints

    This section covers how to add SCCs to your operator for running privileged app or agent workloads on OpenShift.

    hashtag
    What are Security Context Constraints?

    A Security Context Constraint is an OpenShift extension to the Kubernetes security model that restricts which Security Contexts can be applied to a pod. This prevents unprivileged (non cluster-admin) users from applying Security Contexts that would allow a pod definition to forcefully override the UID (possibly root / UID 0), GID, or fsGroup that a pod runs with, as well as which host volumes and node networking interfaces a pod can access.

    To summarize, SCCs prevent unprivileged users from running privileged containers on an OpenShift cluster, by restricting which user and/or group a pod is able to run as on the worker nodes in the cluster. SCCs also prevent pods from gaining access to local resources on the nodes. SCCs restrict all of these things by default, unless explicitly overridden by a cluster-admin user or service account.

    By default, OpenShift runs all pods with the restricted SCC. This causes pods to run with a randomized UID in a very high numerical range (100000+) and disregards the USER or UID specified in the container image Dockerfile (unless explicitly set to root, in which the pod will be prevented from running at all). The next section will describe alternative SCCs and when to use them.

    circle-info

    Security Context Constraints are covered in depth in the official OpenShift documentation at .

    hashtag
    Identifying Which SCC to Use

    Given the nature of the restricted SCC, it's quite commonplace that a pod might need to run with a certain UID (as with many databases), or sometimes even as root, when the use case justifies doing so (as with host monitoring agents). This classifies container workloads into three cases:

    • Containers that are completely agnostic as to what UID or GID they run with (most operators)

    • Containers that simply need to run as a certain UID (even root)

    • Containers that require privileged, host-level access in order to run

    The first case is handled easily by the default, restricted SCC. The second case requires the anyuid SCC, which will allow the container to run with the USER specified in the Dockerfile (even root). The third, and most special, case is one that absolutely requires not only running as root but also access to node resources at the host level, and is handled by the privileged SCC. As such, the privileged SCC should be used with care and only where it's justified, as mentioned previously. OpenShift does provide several other SCCs which allow more granular access, but SCCs aren't stackable, so you can't make a concoction of, say, anyuid, hostaccess, and hostnetwork SCCs to grant each set of permissions to your pod. You can assign multiple SCCs to a service account, but the SCC with the least amount of privileges will take precedence over all the others. This makes the other SCCs less useful in common practice.

    It's not common for an operator to require a non-default SCC, since they only make API calls back to the K8s environment where they're being run. However, it's quite common for an operand (pod or pods deployed and managed by an operator) to require a non-default SCC.

    circle-info

    There are ways to get around issues with container filesystem permissions with the restricted SCC, but doing so would require making changes to the container's Dockerfile, and therefore rebuilding the container image to allow read and/or write access to certain files/directories by the root group (GID 0).

    hashtag
    Apply an SCC to your Operator or Operand

    If required, the last step is to apply either the anyuid or the privileged SCC to your operator or operand. This is done by modifying the set of RBAC rules being shipped with the operator. This will soon be managed directly by shipping a ClusterRole manifest in a containerized operator bundle, but today this requires modifying the ClusterServiceVersion from your metadata bundle to add a clusterPermissions field with the necessary RBAC rules.

    For example, to add the anyuid SCC to a service account named my-operand , the following block would get added to the ClusterServiceVersion of your operator bundle:

    You can see where we define the anyuid SCC under resourceNames, and we set the serviceAccountName that this ClusterRole will apply to as my-operand. To set the privileged SCC instead, set the field under resourceNames to privileged. Note that hyphens are required in the specified places, as these data types are enforced by the K8s API as arrays, which are denoted by a hyphen in the YAML syntax.

    circle-info

    A full, albeit commented out example of including a clusterPermissions field into a ClusterServiceVersion definition can be found on our GitHub project:

    Applying Security Context Constraints

    Security Context Constraints (SCC's) must be applied in order to run privileged or setuid containers on OpenShift, which is a distinct requirement over that of vanilla Kubernetes.

    hashtag
    Adding an SCC to the Operator Metadata

    SCC's must be applied to the service account which will run the application/operand pods that get managed by the operator. This is done by editing the CSV yaml file from the metadata bundle of your community operator.

    Below is an example SCC applied to a named service account in a hypothetical CSV yaml file:

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
    ...
    spec:
      ...
      install:
        ...
        spec:
          deployments:
            ...
          permissions:
            ...
          clusterPermissions:
          - rules:
            - apiGroups:
              - security.openshift.io
              resources:
              - securitycontextconstraints
              resourceNames:
              - anyuid
              verbs:
              - use
            serviceAccountName: example-application

    In the bottom half of the yaml snippet above, in the clusterPermissions field, the SCC named anyuid is applied to the service account named example-application. These two fields are the only things you'd need to change accordingly, depending on what service account name you're using, and the desired SCC name (see the list of names in the ). In your case, the service account could simply be the default service account in the current namespace, which would be the case if you didn't create or specify a named service account for your operand in the deployment, pod spec, or whatever K8s object the operator is managing.

    hashtag
    Managing SCCs for Multiple Service Accounts

    It's worth noting that the clusterPermissions field is an array, so you can list multiple service accounts with a corresponding SCC (or SCCs) applied to each service account. See the below example:
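
    Here is an illustrative example (the service account names are hypothetical) that applies the anyuid SCC to one service account and the privileged SCC to a second, using the same clusterPermissions structure shown above:

    clusterPermissions:
    - rules:
      - apiGroups:
        - security.openshift.io
        resources:
        - securitycontextconstraints
        resourceNames:
        - anyuid
        verbs:
        - use
      serviceAccountName: example-application
    - rules:
      - apiGroups:
        - security.openshift.io
        resources:
        - securitycontextconstraints
        resourceNames:
        - privileged
        verbs:
        - use
      serviceAccountName: example-agent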

    Assembling the Metadata Bundle

    The format of the metadata bundle is distinctly different from the convention used to submit community operators.

    Depending on the format of your operator's metadata bundle, you may have to flatten any nested subdirectories, and possibly also remove any extraneous files from inclusion into the metadata archive. Skipping this step will result in your metadata failing the certification pipeline due to being invalid. The operator-courierarrow-up-right tool is what's used by the certification pipeline to lint and validate metadata submissions.

    Below is an example tree layout of a "broken" metadata bundle for an operator named example:

    └── example
        ├── example.crd.yaml
        ├── example-operator.v0.1.0.clusterserviceversion.yaml
        ├── example-operator.v0.2.0.clusterserviceversion.yaml
        ├── example.package.yaml
        ├── some-extraneous-file.txt
        └── somedir
            └── some-other-file.yaml

    You can see in the above that there are the usual operator metadata files (1 CRD, 2 CSVs and 1 package.yaml) with a couple of extra files and a directory thrown in. Before submitting an operator metadata bundle to Red Hat Connect, you need to archive only the relevant files that are part of the operator metadata (any CRDs, CSVs, and package.yaml files), omitting any extraneous files or directories. One caveat is that you can't just zip up the bundle directory itself and upload that, since the yaml files would be nested in a subdirectory within the archive.

    Using the example bundle directory above, a valid metadata archive can be created using the linux zip utility as follows:

    $ cd /path/to/bundle/example
    $ zip example-metadata *.yaml

    In the above command, we simply changed to the metadata bundle directory, as might originally be cloned from the GitHub repo. Then the zip command was run, with archive name as the first argument (sans .zip file extension) and a file glob matching only yaml files in the current directory. This works fine, as long as there aren't important metadata files nested in any sub directories inside the bundle directory being archived, which shouldn't be the case if using your metadata bundle as submitted previously to GitHub. A quick way to verify the contents is to use the unzip tool:

    With your metadata bundle ready, you can now

    Consuming Applications from RHCC

    The intent of a Red Hat Certified Operator is to deploy and manage certified ISV application images from the Red Hat Container Catalog or a private registry. See below for examples.

    For Ansible or Helm based operators, this is done simply by setting the image location in the K8s manifest templates contained within your Ansible playbook/role or Helm chart to RHCC. For golang operators, the Go struct(s) representing the Kubernetes manifest(s) of your application must have the image source set to RHCC. In the following examples, replace the example image source string with that of your published application image.

    hashtag
    Ansible

    To set the RHCC image source in Ansible, we'll use an example tasks file from a role. Let's say this file is found at roles/example/tasks/main.yaml within the operator project, and contains the following:

    - name: deploy operand
      k8s:
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
          ...
          spec:
            ...
            spec:
              containers:
              - name: example-app
                image: registry.connect.redhat.com/example-partner/example-app:1.0.0
                ...

    You can see where the image is set on the next to last line in the yaml snippet above.

    hashtag
    Helm

    In this helm example, let's assume the following deployment template exists at helm_charts/example/templates/example-deployment.yaml:
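
    The template itself is not reproduced in this export; below is a minimal sketch of what it might contain (the chart name and value names are illustrative):

    example-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ template "example.fullname" . }}
    spec:
      replicas: {{ .Values.replicaCount }}
      ...
      template:
        ...
        spec:
          containers:
          - name: example-app
            image: registry.connect.redhat.com/example-partner/example-app:1.0.0
            imagePullPolicy: {{ .Values.image.pullPolicy }}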

    The image is shown in the next to last line in the above gotpl snippet.

    hashtag
    Go

    Below is an example in Golang (a snippet of pkg/apis/example/v1alpha1/types.go from a hypothetical operator-sdk project):

    In the above, the Spec field contents of the operator CR are defined, with the image field of the manifest being set by the ExampleSpec.Image field. Note that no value is actually set in the struct, though the field is shown to be user-configurable by setting the spec.image field when creating the CR. Ideally, the image source would not be configurable, since we want to guarantee that the certified application images get pulled from RHCC. The default value for ExampleSpec.Image gets set in pkg/apis/example/v1alpha1/defaults.go as in the below example:

    You can see the image source value in the next to last line of code in the golang snippet above.

    Frequently Asked Questions (FAQ)

    hashtag
    What is the recommended version for Operator-SDK?

    The recommended version is 1.0.0 or later. This brings significant changes, but it allows you to build bundle images using the new format.

    hashtag
    What happens to the previous operator project? Do we just upload the actual operator image there?

    That's correct. The previous operator project holds the operator container image, not the bundle image. You will need to continue updating the operator image through that project, and push the bundle metadata image to the new project.

    hashtag
    How is a new version released? Just uploading and publishing the image in the bundle project is enough?

    Once you update your operator and are ready for a new version, you will need to update the operator image first; once that is published, you can submit the bundle image. A new version is released when you push your metadata bundle image to the bundle project. Once you publish the bundle image, it will take about 1-2 hours for the new version to show up on the embedded OperatorHub.

    The upgrade path remains the same. The CSV behaves the same way. The difference is that each new version now has its metadata registered in a container image too. The operator container image itself remains the same; the metadata is what changes. It's now delivered per version in a single container image used as metadata storage. More information regarding the certification workflow and the updated file structure can be found here: .

    hashtag
    What labels should be used for future OpenShift releases? Currently I see only 4.5 and 4.6

    The label that says 4.5 implicitly takes care of all previous versions of OpenShift. Newer labels for 4.7 and beyond will be automatically handled in the backend process. In the future we will publish new documentation in case you don't want your operator to show up in a specific OpenShift version.

    Bundle Maintenance After Migration

    This section of the guide is intended for partners who have completed the bundle migration process. If you have not been contacted regarding this, you can disregard this section of the guide.

    With the adoption of OpenShift 4.5 we are introducing a new way to manage your metadata. Previously there was no standard way to associate and transmit operator manifests and metadata between clusters, or to associate a set of manifests with one or more runnable container images. The changes we are putting into place will remedy this situation and change the way a partner creates and maintains the metadata associated with your certified operator.

    Updated Operator Certification Workflow:

    We have sent you a zip file that contains your metadata and a Dockerfile for an image to hold it. That Dockerfile replaces the zip you used to upload for scanning. We have already submitted the bundle image to the scan test, and everything has been successfully converted to use the new format. There are some changes to how you go about working with this new format, and this section of the guide is designed to help you understand it and how to continue forward using it. Below you can see an example of the new format of metadata that is shipped inside of the container image provided.

    To create a new update for your operator, you'll need to create a new version directory and place your CRDs and CSV inside. You can make any updates to these files as normal.

    Here's an example of what the structure should look like when you're done:
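
    An illustrative layout (the operator and version names are placeholders) might be:

    example-operator-bundle
    ├── 0.1.0
    │   ├── example.crd.yaml
    │   └── example-operator.v0.1.0.clusterserviceversion.yaml
    ├── 0.2.0
    │   ├── example.crd.yaml
    │   └── example-operator.v0.2.0.clusterserviceversion.yaml
    └── example.package.yaml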

    circle-exclamation

    Don't forget to update your package yaml, too! It's not in one of the version sub-directories because the package determines which operator version is used.
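
    Bumping the channel's currentCSV in the package yaml to the new version would look something like this (names are illustrative):

    example.package.yaml
    packageName: example-operator-certified
    channels:
        - name: alpha
          currentCSV: example-operator.v0.2.0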

    Move into the bundle directory. You can now use opm to create the annotations.yaml and Dockerfile. Keep in mind that the new Dockerfile will be created in the directory where you run the command, and it includes COPY commands. It will also slightly re-scaffold the project.

    Next, you'll need to add some LABELs to the Dockerfile.

    When you're done, the finished Dockerfile should look something like this:

    Now you can build the new image and submit it to the pipeline. You'll have a new project in Connect for your bundle image that we created for you as part of migration with the same name as your operator project, but with "-bundle" on the end. Click "Push Image Manually" for instructions on uploading your metadata bundle for testing.

    circle-info

    You can find more information and answers to your questions in our

    operator-sdk bundle validate ./bundle

    Community Operators

    With access to community Operators, developers and cluster admins can try out Operators at various maturity levels that work with any Kubernetes. Check out the community Operators on OperatorHub.ioarrow-up-right.

    Instructions for submitting your operator to be a Community Operator can be found here: Community Operatorsarrow-up-right.

    last section
    example Ansible operator herearrow-up-right
    last section
    example Go operator herearrow-up-right
    last section
    registry.connect.redhat.comarrow-up-right
    registry.marketplace.redhat.com/rhmarrow-up-right
    registry.marketplace.redhat.com/rhmarrow-up-right
    registry.connect.redhat.comarrow-up-right
    https://docs.openshift.com/container-platform/latest/authentication/managing-security-context-constraints.htmlarrow-up-right
    https://github.com/RHC4TP/operators/blob/master/examples/operator_sdk/helm/mariadb-operator/bundle/mariadb.v0.0.4.clusterserviceversion.yaml#L269-#L280arrow-up-right
    official OCP docsarrow-up-right
    $ unzip example-metadata.zip
    Archive:  example-metadata.zip
      inflating: example-operator.v0.1.0.clusterserviceversion.yaml
      inflating: example-operator.v0.2.0.clusterserviceversion.yaml
      inflating: example.crd.yaml
      inflating: example.package.yaml
    community-operatorsarrow-up-right
    upload the metadata to Connect.arrow-up-right
    apiVersion: apps/v1
    kind: Deployment
    metadata:
    ...
    spec:
      ...
      spec:
        containers:
        - name: example-app
          image: registry.connect.redhat.com/example-partner/example-app:1.0.0
          ... 
    values.yaml
      containers:
      - name: {{ template "etcd.fullname" . }}
        image: "{{ .Values.image.image }}"
        imagePullPolicy: "{{ .Values.image.pullPolicy }}"
    $ grep -r 'image:' ./helm-charts/etcd/templates/*
      ./helm-charts/etcd/templates/statefulset.yaml:        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    $ grep -r 'image:' ./helm-charts/etcd/templates/*
      ./helm-charts/etcd/templates/statefulset.yaml:        image: "{{ .Values.image.image }}"
    watches.yaml
    overrideValues:
      image.image: $RELATED_IMAGE_STATEFULSET
    tasks/main.yaml
    containers:
    - name: mongodb
      image: "{{ 'dbImage' | quote }}"
    tasks/main.yaml
            containers:
            - name: mongodb
              image: "{{ lookup('env','RELATED_IMAGE_DB') | quote }}"
    func newPodsForCR(cr *noopv1alpha1.UBINoOp) []*corev1.Pod {
    	// Resolve the operand image from the RELATED_IMAGE_* environment
    	// variable injected into the operator deployment via the CSV
    	ubiImg := os.Getenv("RELATED_IMAGE_UBI_MINIMAL")
    	// ... build and return the pod spec(s) using ubiImg as the container image
    }
    clusterserviceversion.yaml
                spec:
                  containers:
                  - env:
                    - name: WATCH_NAMESPACE
                      valueFrom:
                        fieldRef:
                          fieldPath: metadata.annotations['olm.targetNamespaces']
                    - name: POD_NAME
                      valueFrom:
                        fieldRef:
                          fieldPath: metadata.name          
                    - name: OPERATOR_NAME
                      value: etcd-helm
                      
                  #This is the new environment variable
                    - name: RELATED_IMAGE_STATEFULSET
                      value: k8s.gcr.io/etcd-amd64:3.2.26
                  #The operator image itself doesn't change    
                    image: quay.io/dhoover103/etcd-helm-operator:v0.0.1
    
    spec:
      install:
        spec:
          clusterPermissions:
          - rules:
            - apiGroups:
              - security.openshift.io
              resources:
              - securitycontextconstraints
              resourceNames:
              - anyuid
              verbs:
              - use
            serviceAccountName: my-operand
    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
    ...
    spec:
      ...
      install:
        ...
        spec:
          deployments:
            ...
          permissions:
            ...
          clusterPermissions:
          - rules:
            - apiGroups:
              - security.openshift.io
              resources:
              - securitycontextconstraints
              resourceNames:
              - anyuid
              verbs:
              - use
            serviceAccountName: example-app1
          - rules:
            - apiGroups:
              - security.openshift.io
              resources:
              - securitycontextconstraints
              resourceNames:
              - anyuid
              verbs:
              - use
            serviceAccountName: example-app2
    type ExampleSpec struct {
    	NodeSelector     map[string]string   `json:"nodeSelector,omitempty"`
    	Tolerations      []corev1.Toleration `json:"tolerations,omitempty"`
    	WaitReadySeconds *uint16             `json:"waitReadySeconds,omitempty"`
    	// Application image
    	Image string `json:"image,omitempty"`
    }
    func SetDefaults_ExampleSpec(obj *ExampleSpec) {
    	if obj.WaitReadySeconds == nil {
    		obj.WaitReadySeconds = new(uint16)
    		*obj.WaitReadySeconds = 300
    	}

    	if obj.Image == "" {
    		obj.Image = "registry.connect.redhat.com/example-partner/example-app:1.0.0"
    	}
    }
    An example for Helm:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
    ...
    spec:
      ...
      spec:
        containers:
        - name: example-app
          image: registry.connect.redhat.com/example-partner/example-app:1.0.0
          ...
    Golang Gotchas section on Writing to the Status Subresource
    this blogarrow-up-right
    the FAQarrow-up-right
    An example showing an operator that has passed all tests, and is ready to be published
    FAQs Sectionarrow-up-right
    We created the 3.0.0 directory here, and made new metadata files inside.

    Glossary of Terms

    This page defines the terminology used throughout this section for further clarity.


    Red Hat Partner Connect

    The Red Hat Technology Partner portal website, resolved by connect.redhat.comarrow-up-right.

    Red Hat Ecosystem Catalog

    The official Red Hat software catalog for containers, operators and other software from both Red Hat and Technology Partners, as resolved by catalog.redhat.comarrow-up-right.

    Red Hat Container Catalog / RHCC Registry

    Often abbreviated as the RHCC registry or simply RHCC, this is the container image registry provided by Red Hat for hosting partner-provided, certified container images. It is resolved by registry.connect.redhat.comarrow-up-right.

    External Registry

    This is any container image registry that is not hosted by Red Hat, such as quay.ioarrow-up-right, docker.io or any other third party or private image registry.

    circle-info

    Did we miss anything? If you find a confusing term anywhere in this section, please let us know and we'll get it added to the list above.

    Updating the Bundle Image

    To actually publish your operator so that non-Intel/AMD architectures can consume it, the bundle image must be updated with the externally-hosted manifest lists for all images. We'll assume you're updating an existing bundle, so you'll want to regenerate a new bundle directory using your kustomize templates (applicable to post-1.0 SDK), or simply copy your latest bundle directory and work from that.

    Using the latter method as an example, let's say we have a bundle directory named 0.0.1/ and we want to bump this to 0.0.2/. A simple recursive copy will get us started:

    $ cp -r bundle/0.0.1 bundle/0.0.2

    In the above case, we would work from the newly created bundle/0.0.2 directory, and make the necessary changes to the ClusterServiceVersion (CSV) yaml file within the manifests/ subdirectory of the bundle.

    As part of the version bump, we need to rename the CSV, since the version number is part of the file name:

    Now we can proceed with updating the CSV. The following field paths will need to be updated:

    • metadata.annotations.alm-examples[] (as applicable, if image pull specs are used in the CR's)

    • metadata.annotations.containerImage

    • metadata.name

    • spec.install.spec.deployments[].spec.template.spec.containers[].image

    • spec.install.spec.deployments[].spec.template.spec.containers[].env[] (update RELATED_IMAGE environment variables for Red Hat Marketplace)

    • spec.replaces

    • spec.version

    These are the same fields that should be updated for a typical release/upgrade cycle, say when an image tag/digest changes and you must update those references and bump the operator (CSV) version accordingly. The main point here is to ensure that the image manifest lists are used instead of the single-architecture Intel/AMD64 images used previously.
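    As a hedged illustration (the image names, tags, and versions below are hypothetical), the updated fields might look like this, with the image references now pointing at the externally-hosted manifest lists:

    metadata:
      name: example-operator.v0.0.2
      annotations:
        containerImage: quay.io/exampleco/ubi-noop-go:v0.0.2
    spec:
      version: 0.0.2
      replaces: example-operator.v0.0.1
      install:
        spec:
          deployments:
          - spec:
              template:
                spec:
                  containers:
                  - name: example-operator
                    image: quay.io/exampleco/ubi-noop-go:v0.0.2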

    circle-info

    To enable support for offline or disconnected environments, which is also a requirement for the Red Hat Marketplace, you must use digests in place of tags for all image references. In the case of certification, however, we convert all image tags found in the bundle CSV to their respective digests automatically.

    Please see the official OpenShift Documentationarrow-up-right for more info.
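    If you need to look up the digest for a given tag while making these edits, one option (assuming skopeo and jq are installed; the image name is hypothetical) is:

    $ skopeo inspect docker://quay.io/exampleco/ubi-noop-go:v0.0.2 | jq -r .Digest
    sha256:<digest>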

    Lastly, there is one final edit that must be made to enable publishing your operator for other hardware platforms. You must add a label under metadata.labels for each architecture supported by the operator.

    For our previous example operator, ubi-noop-go, we would claim support for amd64, ppc64le, and s390x architectures:

    You can refer to the OpenShift Documentationarrow-up-right for more info on setting these labels.

    After updating the CSV, follow Creating the Metadata Bundle to build a new bundle image. Once the new bundle image is built, you can test it in your own environment and submit the bundle for scanningarrow-up-right.

    Operator-Scorecard-Results
    ===== Test: operator-scorecard-tests =====
    
     Basic Tests:
    	Spec Block Exists: 1/1
    	Status Block Exists: 1/1
    	Writing into CRs has an effect: 0/1
    OLM Tests:
    	Provided APIs have validation: 0/0
    	Owned CRDs have resources listed: 2/2
    	CRs have at least 1 example: 1/1
    	Spec fields with descriptors: 8/8
    	Status fields with descriptors: 1/1
    
    Total Score: 74%
    $ mkdir bundle/3.0.0
    $ cp <latest csv> bundle/3.0.0
    $ cp <crd> bundle/3.0.0
    $ cd bundle
    $ opm alpha bundle generate -d ./3.0.0/ -u ./3.0.0/
    LABEL com.redhat.openshift.versions="v4.5,v4.6"
    LABEL com.redhat.delivery.backport=true
    LABEL com.redhat.delivery.operator.bundle=true
    $ mv bundle/0.0.2/manifests/example-operator.v0.0.1.clusterserviceversion.yaml \
    bundle/0.0.2/manifests/example-operator.v0.0.2.clusterserviceversion.yaml

    Scan Registry

    The scan registry is the Red Hat internal registry or staging area used during the container image scanning process. It is kept behind the scenes, and is not a point of user interaction for multi-arch images.

    Project

    A project is analogous to a container image repository and exists within the context of the Red Hat Partner Connect website. You typically need a project created for every container image that you intend to certify.

    Operator

    In this section, the term operator generally refers to the operator's container (or controller) image.

    Operand

    The operand is the operator's workload container image. There can be, and often are, multiple operand images for any particular operator.

    Bundle

    The directory containing the operator deployment manifests/ and metadata/ directories, and optionally the tests/ directory for additional custom tests. It is usually named for the release version it deploys (such as 1.0.0/). This is what gets submitted to GitHubarrow-up-right per the operator release workflow.

    Digest

    The SHA 256arrow-up-right digest of a particular container image (which is actually the digest of the manifest for that image).

    Manifest

    The metadata or descriptor file of a container image. This contains the labels applied when the image was created, a list of the layers within the image along with their respective digests, and other descriptive information. This file should conform to the OCI specificationarrow-up-right.

    Manifest List

    This is a type of manifest which is nothing more than a list of other manifests (by their respective digest and architecture). The list usually contains one manifest for each supported CPU architecture. The intent is to support a single multi-arch image that could match the corresponding architecture of the machine it's being run on. This is also defined in the OCI specarrow-up-right.



    metadata:
      labels:
        operatorframework.io/arch.s390x: supported
        operatorframework.io/arch.ppc64le: supported 
        operatorframework.io/arch.amd64: supported

    Scanning and Publishing

    circle-exclamation

    Please note that it is no longer strictly required to scan every single architecture image individually when scanning a multi-arch container image. You may opt to scan only the multi-arch manifest list instead, but this will prevent the architectures from being listed properly on catalog.redhat.comarrow-up-right once published.

    Each architecture-specific image for each component (operator or operand) must be scanned within the respective project and tagged according to the architecture. Take, for example, an operator image named example-operator:v1.0.0, which has two associated architectures: one for Intel/AMD64 and another for IBM Z/s390x:

    • Both the amd64 and s390x images should be scanned separately in the Operator Controller project

    • Since the images can't share a tag name, they should be tagged according to the architecture.

    • The amd64 image could be tagged as v1.0.0-amd64 or simply left as v1.0.0, assuming it's already been published

    • Therefore, the Z/s390x image could be tagged as v1.0.0-s390x to distinguish it from the Intel/AMD64 image

    Let's go through each of these steps to scan the s390x image for an externally-hosted operator controller.

    circle-info

    If you don't have an operator image built for Z (s390x) or Power (ppc64le) systems, please refer to Building a Multi-Arch Operator Image.

    First, log in to Red Hat Partner Connectarrow-up-right as a Technology Partner by clicking Log in for technology partners on the left side of the login page:

    Go to your projects list by clicking the header drop-down for Product Certification > Manage projects, or via this direct linkarrow-up-right:

    Select the operator project (not the bundle project) from the list of projects, and click the Images header to view the associated images. Click Scan new image and you should see the following prompt:

    Enter the pull spec containing the sha256 digest (not the tag) of the specific image that you wish to scan, along with the destination repository name and tag:
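    For example, a hypothetical entry for the s390x image might look like the following, using the tagging scheme described above (the digest is a placeholder):

    Pull spec:  quay.io/exampleco/example-operator@sha256:<digest of the s390x image>
    Tag:        v1.0.0-s390x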

    Wait for the scan to complete. Once completed, Publish the image by clicking the chevron > on the left side and then click Publish beside the corresponding tag.

    Repeat the above steps for each architecture image (one image for each CPU architecture) that you wish to be listed in the Red Hat Ecosystem Catalogarrow-up-right under this particular project/listing. Then perform the same process for any other projects associated with the operator (excluding the bundle project).


    Requirements and Limitations

    hashtag
    Supported Architectures

    The full list of supported hardware platforms and associated architecture names is in the following table:

    Platform      Architecture
    Intel/AMD     amd64
    IBM Z         s390x
    IBM Power     ppc64le
    ARM*          arm64

    *The ARM platform is currently in Tech Preview for OpenShift, as recently announcedarrow-up-right.

    hashtag
    Core Requirements

    There are a number of technical and non-technical requirements that must be met before you can certify or publish a non-Intel/AMD operator. The requirements are as follows:

    • A Certified Intel/AMD architecture operator and any applicable operands (container workloads deployed/managed by the operator controller)

    • At least three (3) projects created and published in Red Hat Partner Connect:

      • An Operator / Bundle project

        • Project type: Operator Bundle

        • Distribution method: Red Hat

      • An Operator Controller image project

        • Project type: Container

        • Distribution method: External

      • One or more Operand images (as applicable)

        • Project type: Container

        • Distribution method: External

    • All container images related to the operator must be hosted in an external registry

    • Your operator and operands must both support the CPU architectures that you intend to run on. In other words, you can't build an operator for another platform (such as IBM Z) if your operand only runs on AMD64 / Intel x86. The product (operands) must also support the new platform.

    • Each architecture image must be contained within a manifest list, with one manifest list for the operator and one for each of the operands. For example, a minimal multi-arch image layout might look like the following (image and platform names are hypothetical):

      • An operator manifest list named/tagged as operator-image:v1.0 would be a manifest referring to these images:

        • operator-image:v1.0-amd64

        • operator-image:v1.0-othercpu

      • A single operand manifest list named/tagged as example-app:v1.0, which would refer to two other images matching the same architectures:

        • example-app:v1.0-amd64

        • example-app:v1.0-othercpu

    • Non-Intel/AMD operators cannot be certified or published on their own, and must be published along with an Intel/AMD operator

    Below is an illustration to help explain the relationship between the operator components and the various registries (RHCC, external, and scan). Please keep in mind that there could be more than one operand project/image whereas the diagram below only shows a single operand:

    hashtag
    Current Limitations

    There are a few limitations as well, which dictate the requirements:

    • Manifest lists are not yet fully supported by Red Hat Partner Connect. You can specify a manifest list image by digest for scanning and mark it published in the project (should the image pass and become certified), but currently only architecture-specific images are supported by catalog.redhat.comarrow-up-right for platform identification. Automatic platform identification by parsing the manifest list contents is not supported, so you may still wish to scan each architecture image individually if you'd like the platform to be listed correctly on catalog.redhat.comarrow-up-right.

    • Using the RHCC (Red Hat Container Catalog) registry, and in turn the Red Hat distribution method, will not work due to its lack of support for manifest lists

    • There is no automated certification pipeline or test infrastructure in place for non-Intel/AMD architectures at this time




    Building a Multi-Arch Operator Image

    This section will guide you through all of the steps involved in building and hosting a multi-arch operator image.

    triangle-exclamation

    Please note that the following build process is for demonstration purposes only and is NOT supported by Red Hat due to the use of emulation software to enable cross-compilation and building.

    You should always build containers and application binaries on the native host platform that you intend to support. Emulation software should only be used to create non-production builds for testing.

    It should also be noted that this procedure will NOT work in RHEL because the required emulation software is intentionally unavailable. Please see https://access.redhat.com/solutions/5654221arrow-up-right.

    Here we showcase how to build an operator controller image targeting other (non-Intel/AMD) CPU architectures for hosting in an external registry. It also covers assembling the various architecture images into a single manifest list and tag. This section assumes that you've already made the required changes to the operator's Dockerfile (labels and licenses) when certifying for Intel/AMD.

    circle-exclamation

    Building an operand image for other CPU architectures is not covered in this guide. However, you can use the same --arch flag with podmanarrow-up-right to force the use of a specific architecture, and the GOARCH variable would also still apply for container applications written in Go.

    If you already have a multi-arch operator and operands hosted in Quay or Docker Hub, please proceed with Scanning and Publishing.

    hashtag
    Building with Legacy Operator SDK Releases (pre-1.0)

    circle-info

    If you don't have an Operator SDK project to work from, you can clone this git repository and follow along: https://github.com/jsm84/rhm-env-examplearrow-up-right. The operator project is located within the ubi-noop-go/ directory of the git repo.

    For older releases of the Operator SDK, where image building is handled by the operator-sdk binary, the process is fairly simple. No specific changes need to be made to the Dockerfile source for the operator controller; all modifications are made as flags or variables when calling operator-sdk build.

    Specifically, you must set the GOARCH variable and --arch flag to match the CPU architecture that you're targeting. According to the CPU architecture table, the correct architecture (arch) for IBM Z would be s390x.

    For cross compilation and container builds to work, you must install the qemu-user-static package, which is not available in RHEL (Fedora was used here):

    Let's build an image locally for the s390x architecture. Keep in mind that we're using podmanarrow-up-right in docker emulation mode as the image build tool:

    Similarly, to build an operator image for IBM Power:

    OPTIONAL: If you do not yet have an Intel/AMD operator image (a requirement for multi-arch operator certification) and want to follow along, then build the amd64 image. Regardless, you must set the arch flag to amd64 or podman will default to using the most recent image/architecture (ppc64le) for future commands:

    With all of the images built, the next step is to create a manifest list and push it to an external registry.

    hashtag
    Build and Push the Manifest List

    In this section, we use podman exclusively for creating and pushing the manifest list. The syntax for docker is similar, with the exception of using a single argument when pushing the manifest to a registry.

    Create a new manifest list, which will reference each of the underlying architecture-specific images:

    circle-info

    You could also populate the index by listing each per-architecture image as an additional argument when creating the manifest list. However, if anything goes wrong (such as a typo/misspelling), you're left with a partially populated manifest list.
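    For reference, that all-in-one form would look something like this, using the same local image names as the individual podman manifest add commands shown for this section:

    $ podman manifest create ubi-noop-go:v0.0.1 \
        containers-storage:localhost/ubi-noop-go:v0.0.1-amd64 \
        containers-storage:localhost/ubi-noop-go:v0.0.1-ppc64le \
        containers-storage:localhost/ubi-noop-go:v0.0.1-s390x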

    Add each of the architecture-specific images to the manifest list, in no particular order. To make sure that podman pulls the images from local container storage, we use the containers-storage:localhost/ prefix. Let's start with the Intel/AMD image:

    Next, add the Power image to the manifest list:

    Add the Z image to complete the manifest list:

    The last step is to push the new manifest list to an external registry for hosting. Quay.io is suggested here, but you can use any registry which supports manifest lists, such as Docker Hub:

    Once you have the manifest list hosted for your operator and its operands (which aren't covered here), you can proceed with Scanning and Publishing.
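    If you'd like to sanity-check the result, podman can display the pushed manifest list; each entry in the manifests array should report a digest and a platform.architecture value (the repository below matches the push example for this section):

    $ podman manifest inspect quay.io/exampleco/ubi-noop-go:v0.0.1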

    $ sudo dnf install -y qemu-user-static
    $ GOARCH=s390x operator-sdk build ubi-noop-go:v0.0.1-s390x --image-build-args="--arch=s390x"
    $ GOARCH=ppc64le operator-sdk build ubi-noop-go:v0.0.1-ppc64le --image-build-args="--arch=ppc64le"
    $ operator-sdk build ubi-noop-go:v0.0.1-amd64 --image-build-args="--arch=amd64"
    $ podman manifest create ubi-noop-go:v0.0.1
    $ podman manifest add ubi-noop-go:v0.0.1 containers-storage:localhost/ubi-noop-go:v0.0.1-amd64
    $ podman manifest add ubi-noop-go:v0.0.1 containers-storage:localhost/ubi-noop-go:v0.0.1-ppc64le
    $ podman manifest add ubi-noop-go:v0.0.1 containers-storage:localhost/ubi-noop-go:v0.0.1-s390x
    $ podman manifest push ubi-noop-go:v0.0.1 quay.io/exampleco/ubi-noop-go:v0.0.1