
Updating the Bundle Image


To publish your operator so that non-Intel/AMD architectures can consume it, the bundle image must be updated with the externally hosted manifest lists for all images. We'll assume you're updating an existing bundle: either regenerate a new bundle directory from your kustomize templates (applicable to post-1.0 SDK projects), or simply copy your latest bundle directory and work from that.
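If your project uses the post-1.0 SDK layout, the bundle can typically be regenerated from the kustomize templates via the scaffolded Makefile target. A minimal sketch, assuming the default operator-sdk scaffolding and placeholder image names:

# Regenerate the bundle manifests from the kustomize templates (post-1.0 SDK scaffolding).
# VERSION and IMG are placeholders for your operator release and image.
$ make bundle VERSION=0.0.2 IMG=registry.example.com/example-operator:v0.0.2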

Using the latter method as an example, let's say we have a bundle directory named 0.0.1/ and we want to bump this to 0.0.2/. A simple recursive copy will get us started:

$ cp -r bundle/0.0.1 bundle/0.0.2

In the above case, we would work from the newly created bundle/0.0.2 directory, and make the necessary changes to the ClusterServiceVersion (CSV) yaml file within the manifests/ subdirectory of the bundle.

As part of the version bump, we need to rename the CSV, since the version number is part of the file name:

$ mv bundle/0.0.2/manifests/example-operator.v0.0.1.clusterserviceversion.yaml \
bundle/0.0.2/manifests/example-operator.v0.0.2.clusterserviceversion.yaml

Now we can proceed with updating the CSV. The following field paths will need to be updated:

  • metadata.annotations.alm-examples[] (as applicable, if image pull specs are used in the CRs)

  • metadata.annotations.containerImage

  • metadata.name

  • spec.install.spec.deployments[].spec.template.spec.containers[].image

  • spec.install.spec.deployments[].spec.template.spec.containers[].env[] (update RELATED_IMAGE environment variables for Red Hat Marketplace)

  • spec.replaces

  • spec.version

These are the same fields that would be updated during a typical release/upgrade cycle, for example when an image tag or digest changes and you must update those references and bump the operator (CSV) version accordingly. The main point here is to ensure that the image manifest lists are referenced instead of the single-architecture Intel/AMD64 images used previously, as illustrated in the excerpt below.
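The following is a hypothetical excerpt of the 0.0.2 CSV showing those fields after the bump; the operator name, image references, and digests are placeholders, and your own manifest-list references would go in their place:

# Abbreviated CSV excerpt (hypothetical); only the fields discussed above are shown.
metadata:
  name: example-operator.v0.0.2
  annotations:
    containerImage: registry.example.com/example-operator@sha256:<manifest-list-digest>
spec:
  version: 0.0.2
  replaces: example-operator.v0.0.1
  install:
    spec:
      deployments:
        - name: example-operator-controller-manager
          spec:
            template:
              spec:
                containers:
                  - name: manager
                    image: registry.example.com/example-operator@sha256:<manifest-list-digest>
                    env:
                      - name: RELATED_IMAGE_EXAMPLE_APP
                        value: registry.example.com/example-app@sha256:<manifest-list-digest>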

To support offline or disconnected environments, which is also a requirement for the Red Hat Marketplace, you must use digests in place of tags for all image references. During certification, however, any image tags found in the bundle CSV are converted to their respective digests automatically.

Please see the official OpenShift documentation for more info.
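If you need to resolve a manifest-list digest yourself, one possible approach, assuming skopeo is available and using a placeholder image reference, is to fetch the raw manifest list and compute its digest:

# Fetch the raw manifest list for the tag, then compute its digest (placeholder image).
$ skopeo inspect --raw docker://registry.example.com/example-operator:v0.0.2 > manifest-list.json
$ skopeo manifest-digest manifest-list.json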

Lastly, one final edit must be made to enable publishing your operator to other hardware platforms: you must add a label under metadata.labels for each architecture supported by the operator.

For our previous example operator, ubi-noop-go, we would claim support for amd64, ppc64le, and s390x architectures:

metadata:
  labels:
    operatorframework.io/arch.s390x: supported
    operatorframework.io/arch.ppc64le: supported 
    operatorframework.io/arch.amd64: supported

You can refer to the OpenShift documentation for more info on setting these labels.

After updating the CSV, follow Creating the Metadata Bundle to build a new bundle image. Once the new bundle image is built, you can test it in your own environment and submit the bundle for scanning.
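As a rough sketch only, assuming a bundle.Dockerfile at the project root and placeholder registry names, the rebuild and push might look like:

# Build and push the updated bundle image (names and paths are placeholders).
$ podman build -f bundle.Dockerfile -t registry.example.com/example-operator-bundle:0.0.2 .
$ podman push registry.example.com/example-operator-bundle:0.0.2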
