Table of contents

CloudBees Core on modern cloud platforms installation guide

Installing CloudBees Core on Azure Kubernetes Service (AKS)


Helm is the recommended method for installing and managing CloudBees Core. If your current CloudBees Core installation wasn’t installed using Helm, you can migrate your existing CloudBees Core installation to use Helm. For Helm installation instructions, see Installing CloudBees Core on Kubernetes using Helm.

Please be aware that the CloudBees Core upgrade process has changed in recent CloudBees Core versions.

This document explains the cluster requirements, points you to the Microsoft Azure documentation you will need to create a cluster, and explains how to install CloudBees Core in your Kubernetes cluster.

The Azure Kubernetes Service (AKS) cluster requirements must be satisfied before CloudBees Core can be installed.

AKS Cluster Requirements

The CloudBees Core installer requires:

  • On your local computer or a bastion host:

    • Kubernetes client 1.x, starting with 1.10, installed and configured (kubectl)

  • An AKS cluster running Kubernetes 1.x, starting with 1.10, as long as it is actively supported by the Kubernetes distribution provider and generally available

    • With nodes that have at least 2 CPUs and 4 GiB of memory (so nodes have 1 full CPU / 1 GiB available after running a master with default settings)

    • Must use an instance type that allows premium disks (for example, Standard_D4s_v3)

    • Must have network access to container images (public Docker Hub or a private Docker Registry)

  • The NGINX Ingress Controller installed in the cluster.

    • Load balancer configured and pointing to the NGINX Ingress Controller

    • A DNS record that points to the Azure Load balancer

    • TLS certificates (needed when you deploy CloudBees Core)

  • A namespace in the cluster (provided by your admin) with permissions to create Role and RoleBinding objects

  • Kubernetes cluster Default Storage Class defined and ready to use

Creating your AKS cluster

To create a Kubernetes cluster using Azure Kubernetes Service (AKS), refer to Create an Azure Container Service (AKS) cluster on the Microsoft Azure website.

More information on administering an AKS cluster is available from the full documentation.

More information on Kubernetes concepts is available from the Kubernetes site.

Installing Ingress Controller

CloudBees Core requires the use of the NGINX Ingress Controller. This section walks through the installation using Helm.

If you are not able to use Helm, you can install the Ingress Controller manually (see Manual Installation of Ingress Controller under Additional topics), then skip to the Creating DNS Record section.

Installing Helm

This section illustrates setting up your cluster’s ingress using Helm. Helm is a package manager for Kubernetes. Helm has two parts: the helm client and the Tiller server-side component. Before we can proceed, both need to be installed. If you already have the Helm client and Tiller installed, you can skip to Creating the Ingress Controller.

Installing the Helm client

The Helm project README provides detailed instructions on installing the Helm client on various operating systems in its installation section. Refer to that documentation to install the Helm client on your workstation.

Installing Tiller

Tiller is the Kubernetes cluster’s server-side component of Helm. Perform the following to install Tiller:

  • Create a service account for Tiller

    kubectl create serviceaccount --namespace kube-system tiller
  • Give that service account admin capabilities

    kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
  • Install Tiller on Kubernetes cluster

    helm init --service-account tiller

Creating the Ingress Controller

  • Create the Ingress Controller

    helm install --namespace ingress-nginx --name nginx-ingress stable/nginx-ingress \
                 --set controller.service.externalTrafficPolicy=Local \
                 --set controller.scope.enabled=true \
                 --set rbac.create=true \
                 --set controller.scope.namespace=cloudbees-core

Creating DNS Record

Creating the Ingress Controller results in the creation of the corresponding service, along with its corresponding Load Balancer, both of which will take a few moments. You may then execute the following command to retrieve the external IP address to be used for the CloudBees Core cluster domain name. The IP address that you want to note appears in the EXTERNAL-IP column.

$ kubectl get services -n ingress-nginx
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
nginx-ingress-controller        LoadBalancer   80:30396/TCP,443:31290/TCP   3m

Create a DNS record, for the domain you want to use for CloudBees Core, pointing to the external IP address retrieved above. The A record should map your chosen CloudBees Core domain name to that external IP address.
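For example, with hypothetical values (cloudbees.example.com as the domain and 203.0.113.10, a documentation-range address, standing in for your load balancer IP), the A record in BIND zone-file syntax would look like:

```
cloudbees.example.com.   300   IN   A   203.0.113.10
```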

To continue with the instructions in this document, at this point create the environment variable DOMAIN_NAME, set to your CloudBees Core domain name.

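The variable can be set as follows (cloudbees.example.com is a hypothetical domain; substitute the name of the DNS record you created above):

```shell
# Hypothetical domain name; replace with the DNS record created above.
export DOMAIN_NAME=cloudbees.example.com
echo "CloudBees Core will be served at https://$DOMAIN_NAME/cjoc"
```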

Run installer

CloudBees Core runs on a Kubernetes cluster. Kubernetes cluster installations are configured with YAML files. The CloudBees Core installer provides a cloudbees-core.yml file that is modified for each installation.

  • Download installer

  • Unpack installer

    $ export INSTALLER=cloudbees-core_2.176.1.4_kubernetes.tgz
    $ sha256sum -c $INSTALLER.sha256
    $ tar xzvf $INSTALLER

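The sha256sum -c step verifies the archive against the .sha256 file shipped alongside the installer. A self-contained illustration of how that verification works, with a dummy file standing in for the real archive:

```shell
# Create a dummy archive standing in for the real installer download.
echo "example installer content" > example.tgz
# The downloads site ships a matching .sha256 file; here we generate one ourselves.
sha256sum example.tgz > example.tgz.sha256
# Verification prints "example.tgz: OK" and exits 0 when the file is intact.
sha256sum -c example.tgz.sha256
```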

  • Prepare shell variables for your installation, replacing the placeholder domain in the commands below with your own domain name.

  • Edit the cloudbees-core.yml file for your installation

    $ cd cloudbees-core_2.176.1.4_kubernetes
    $ sed -e "s,,$DOMAIN_NAME,g" \
          -e "s/# storageClassName:.*/storageClassName: managed-premium/" \
          -e "s#-Dcb.IMProp.warProfiles.cje=kubernetes.json#-Dcb.IMProp.warProfiles.cje=kubernetes.json -Dcom.cloudbees.masterprovisioning.kubernetes.KubernetesMasterProvisioning.storageClassName=managed-premium#" \
     < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml

These commands also set the storage class to use managed-premium for both CJOC and master provisioning.
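To see what the second sed expression does, here it is applied to a sample commented line (the sample input is illustrative; the exact line in cloudbees-core.yml may differ):

```shell
# Uncomments the storage class setting and pins it to managed-premium.
echo "      # storageClassName: default" \
  | sed -e "s/# storageClassName:.*/storageClassName: managed-premium/"
# → "      storageClassName: managed-premium"
```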

  • Run the installer

    $ kubectl apply -f cloudbees-core.yml
    serviceaccount "cjoc" created
    role "master-management" created
    rolebinding "cjoc" created
    configmap "cjoc-config" created
    configmap "cjoc-configure-jenkins-groovy" created
    statefulset "cjoc" created
    service "cjoc" created
    ingress "cjoc" created
    ingress "default" created
    serviceaccount "jenkins" created
    role "pods-all" created
    rolebinding "jenkins" created
    configmap "jenkins-agent" created
  • Wait until CJOC is rolled out

    $ kubectl rollout status sts cjoc
  • Read the admin password

    $ kubectl exec cjoc-0 -- cat /var/jenkins_home/secrets/initialAdminPassword

Open Operations Center

CloudBees Core is now installed, configured, and ready to run. Open the CloudBees Core URL and log in with the initial admin password. Install the CloudBees Core license and the recommended plugins.

See Administering CloudBees Core for further information about the operation and features of CloudBees Core.

Adding Client Masters

Occasionally administrators need to connect existing masters to a CloudBees Core cluster. Existing masters connected to a CloudBees Core cluster are called "Client Masters" to distinguish them from Managed Masters. A master running on Windows is one example that requires a Client Master.

Configure Ports

  1. Confirm Operations Center is ready to answer internal JNLP requests

    $ kubectl exec -ti cjoc-0 -- curl localhost:50000
    Jenkins-Agent-Protocols: Diagnostic-Ping, JNLP4-connect, OperationsCenter2, Ping
    Jenkins-Session: 3fa70d75
    Client: 0:0:0:0:0:0:0:1
    Server: 0:0:0:0:0:0:0:1
    Remoting-Minimum-Version: 3.4
  2. Open the JNLP port (50000) in the Kubernetes cluster

  3. Copy/paste the following to a new file, replace cloudbees-core with your namespace, and save it as tcp-services.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
    data:
      50000: "cloudbees-core/cjoc:50000"
  4. Copy/paste the following to a new file and save it as deployment-patch.yaml

    spec:
      template:
        spec:
          containers:
            - name: nginx-ingress-controller
              ports:
                - containerPort: 50000
                  name: 50000-tcp
                  protocol: TCP
  5. Copy/paste the following to a new file and save it as service-patch.yaml

    spec:
      ports:
        - name: 50000-tcp
          port: 50000
          protocol: TCP
          targetPort: 50000-tcp
  6. Last, apply these fragments using the following commands

    kubectl apply -f tcp-services.yaml
    kubectl patch deployment nginx-ingress-controller -n ingress-nginx -p "$(cat deployment-patch.yaml)"
    kubectl patch service ingress-nginx -n ingress-nginx -p "$(cat service-patch.yaml)"

Test connection

You can confirm that Operations Center is ready to receive external JNLP requests with the following command:

$ curl $DOMAIN_NAME:50000
Jenkins-Agent-Protocols: Diagnostic-Ping, JNLP4-connect, MultiMaster, OperationsCenter2, Ping
Jenkins-Session: b02dc475
Remoting-Minimum-Version: 2.60

Continue installation

Once ports and security are correctly configured in your cloud and on your Client Master, continue the instructions in Adding Client Masters.

Upgrading CloudBees Core

This section describes the upgrade process for CloudBees Core on Kubernetes installations.

Before you upgrade

  1. Helm is the preferred method of managing CloudBees Core. CloudBees provides a Helm Chart for CloudBees Core, and recommends migrating existing installations to use the Helm Chart. For instructions on how to migrate your non-Helm installation to Helm, see Migrating existing Kubernetes CloudBees Core installations to Helm.

  2. The CloudBees Core YAML has changed, because it is now generated from the CloudBees Core Helm Chart. Both the format and the order of the file have changed. If you intend to use the YAML file as your basis for the upgrade (instead of Helm), you must manually update the field values in cloudbees-core.yml to match the values in your previous installation.

  3. As a best practice, CloudBees recommends backing up your Operations Center and JENKINS_HOME prior to upgrading. For more information, see the Backup and Restore guide.

Upgrading CloudBees Core

If you previously attempted to upgrade CloudBees Core and got an error message reading The StatefulSet "cjoc" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden, follow this upgrade process to resolve the issue.

To upgrade CloudBees Core:

  1. Download the installer from the Downloads site.

  2. In a local directory on your workstation, unpack the installer:

    $ tar xvf name-of-installer.tgz
  3. Modify the extracted cloudbees-core.yml to match the values of your previous installation. There are two options for this step:

    1. Use Helm to create a custom YAML file for your installation, or

    2. Manually edit cloudbees-core.yml values to match the values from the previous installation (IMPORTANT: the YAML file format has changed).

  4. Delete the cjoc statefulset (this does not remove the associated volume or data):

    $ kubectl delete sts cjoc
  5. Confirm that the CJOC statefulset has been deleted:

    $ kubectl get sts cjoc
    Error from server (NotFound): statefulsets.apps "cjoc" not found (1)
    1. This error confirms that the cjoc statefulset has been deleted.

  6. Run the installer:

    $ kubectl apply -f cloudbees-core.yml
  7. Check the status of the Operations Center roll-out:

    $ kubectl rollout status sts cjoc
  8. Once the new Operations Center version has been rolled out, log into Operations Center and upgrade your Managed Masters.

Removing CloudBees Core

If you need to remove CloudBees Core from Kubernetes, use the following steps:

  • Delete all masters from Operations Center

  • Stop Operations Center

    kubectl scale statefulsets/cjoc --replicas=0
  • Delete CloudBees Core

    kubectl delete -f cloudbees-core.yml
  • Delete remaining pods and data

    kubectl delete pod,statefulset,pvc,ingress,service -l com.cloudbees.cje.tenant
  • Delete services, pods, persistent volume claims, etc.

    kubectl delete svc --all
    kubectl delete statefulset --all
    kubectl delete pod --all
    kubectl delete ingress --all
    kubectl delete pvc --all

Additional topics

Manual Installation of Ingress Controller

If you are not able to use Helm, you will need to manually install the NGINX Ingress Controller. See the NGINX controller install guide for installation instructions.

These instructions will deploy the NGINX Ingress Controller to a namespace named ingress-nginx.

Go to the Creating DNS Record section to continue with the CloudBees Core installation.

More information is available in the NGINX Ingress Controller documentation.

Using Kaniko with CloudBees Core

Introducing Kaniko

Kaniko is a utility that creates container images from a Dockerfile. The image is created inside a container or Kubernetes cluster, which allows users to develop Docker images without using Docker or requiring a privileged container.

Since Kaniko doesn’t depend on the Docker daemon and executes each command in the Dockerfile entirely in the userspace, it enables building container images in environments that can’t run the Docker daemon, such as a standard Kubernetes cluster.

The remainder of this chapter provides a brief overview of Kaniko and illustrates using it in CloudBees Core with a Declarative Pipeline.

How does Kaniko work?

Kaniko looks for the Dockerfile in the Kaniko context. The Kaniko context can be a GCS storage bucket, an S3 storage bucket, or a local directory. In the case of either a GCS or S3 storage bucket, the Kaniko context must be a compressed tar file, which Kaniko expands before reading the Dockerfile. With a local directory, Kaniko reads the Dockerfile directly.

Kaniko then extracts the filesystem of the base image using the FROM statement in the Dockerfile. It then executes each command in the Dockerfile. After each command completes, Kaniko captures filesystem differences. Next, it applies these differences, if there are any, to the base image and updates image metadata. Lastly, Kaniko publishes the newly created image to the desired Docker registry.
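As a concrete illustration of that process, consider a minimal Dockerfile (hypothetical contents; any valid Dockerfile works the same way). Kaniko would extract the alpine base filesystem, execute the RUN and COPY commands, snapshot the filesystem difference after each one, and push the resulting image:

```dockerfile
FROM alpine:3.9
# Each command below produces a filesystem diff that Kaniko layers on the base.
RUN apk add --no-cache curl
COPY hello.sh /usr/local/bin/hello.sh
ENTRYPOINT ["/usr/local/bin/hello.sh"]
```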


Although Kaniko runs as an unprivileged container, it still needs to run as root inside that container to unpack the Docker base image or to execute RUN Dockerfile commands that require root privileges.

Primarily, Kaniko offers a way to build Docker images without requiring a container running with the privileged flag or mounting the Docker socket directly.

Additional security information can be found under the Security section of the Kaniko documentation. Also, this blog article on unprivileged container builds provides a deep dive on why Docker build needs root access.

Kaniko parameters

Kaniko has two key parameters: the Kaniko context and the image destination. The Kaniko context is the same as the Docker build context: the path in which Kaniko expects to find the Dockerfile and any supporting files used in the creation of the image. The destination parameter is the Docker registry to which Kaniko publishes the images. Currently, Kaniko supports Docker Hub, GCR, and ECR as Docker registries.

In addition to these parameters, Kaniko also needs a secret containing the authorization details required to push the newly created image to the Docker registry.
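For reference, the docker-registry secret created later in this chapter stores a Docker config.json roughly like the following (illustrative field values; the exact fields depend on your kubectl version and registry):

```json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "username": "<username>",
      "password": "<password>",
      "auth": "<base64 of username:password>"
    }
  }
}
```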

Kaniko debug image

The Kaniko executor image is built from scratch and doesn’t contain a shell. The Kaniko project also provides a debug image, which consists of the Kaniko executor image plus a busybox shell.

For more details on using the debug image, see the Debug Image section of the Kaniko documentation.

Pipeline example

This example illustrates using Kaniko to build a Docker image from a Git repository and pushing the resulting image to a private Docker registry.


To run this example, you need the following:

  • A Kubernetes cluster with an installation of CloudBees Core

  • A Docker Hub account or another private Docker registry account

  • Your Docker registry credentials

  • Ability to run kubectl against your cluster

  • CloudBees Core account with permission to create the new pipeline


These are the high-level steps for this example:

  1. Create a new Kubernetes Secret.

  2. Create the Pipeline.

  3. Run the Pipeline.

Create a new Kubernetes secret

The first step is to provide the credentials that Kaniko uses to publish the new image to the Docker registry. This example uses kubectl and a Docker Hub account.

If you are using a private Docker registry, you can use it instead of Docker Hub. Just create the Kubernetes secret with the proper credentials for the private registry.

Kubernetes has a create secret command to store the credentials for private Docker registries.

Use the create secret docker-registry kubectl command to create this secret:

Kubernetes create secret command
 $ kubectl create secret docker-registry docker-credentials \ (1)
    --docker-username=<username> \
    --docker-password=<password>
  1. The name of the new Kubernetes secret.

Create the Pipeline

Create a new Pipeline job in CloudBees Core. In the pipeline field, paste the following Scripted Pipeline:

Sample Scripted Pipeline
def label = "kaniko-${UUID.randomUUID().toString()}"

podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
      - name: jenkins-docker-cfg
        mountPath: /kaniko/.docker
  volumes:
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: docker-credentials (1)
          items:
            - key: .dockerconfigjson
              path: config.json
""") {
  node(label) {
    stage('Build with Kaniko') {
      git ''
      container(name: 'kaniko', shell: '/busybox/sh') {
        withEnv(['PATH+EXTRA=/busybox']) {
          sh '''#!/busybox/sh
          /kaniko/executor --context `pwd` --destination <docker-username>/hello-kaniko:latest (2)
          '''
        }
      }
    }
  }
}
  1. This is where the docker-credentials secret, created in the previous step, is mounted into the Kaniko Pod under /kaniko/.docker/config.json.

  2. Replace <docker-username> in the destination with your own Docker username; the image in this example is named hello-kaniko.

Save the new Pipeline job.

Run the new Pipeline

The sample Pipeline is complete. Run the Pipeline to build the Docker image. When the pipeline is successful, a new Docker image should exist in your Docker registry. The new Docker image can be accessed via standard Docker commands such as docker pull and docker run.


Because Kaniko does not use Docker to build the image, there is no guarantee that it will produce the same image as Docker would. In some cases, the number of layers may also differ.

Kaniko supports most Dockerfile commands, including multi-stage builds, but not all of them. See the list of Kaniko issues to determine whether a specific Dockerfile command is affected. Some rare edge cases are discussed in the Limitations section of the Kaniko documentation.


There are many tools similar to Kaniko. These tools build container images using a variety of different approaches.

There is a summary of these tools and others in the comparison with other tools section of the Kaniko documentation.


This chapter is only a brief introduction to using Kaniko. In addition to the Kaniko documentation, many helpful articles and tutorials are available online.

Using self-signed certificates in CloudBees Core

This optional component of CloudBees Core allows you to use self-signed certificates or a custom root CA (Certificate Authority). It works by injecting a given set of files (certificate bundles) into all containers of all scheduled pods.


Cluster Requirement

Kubernetes 1.10 or later, with admission controller MutatingAdmissionWebhook enabled.

In order to check whether it is enabled for your cluster, you can run the following command:

kubectl api-versions | grep admissionregistration

The result should be:

admissionregistration.k8s.io/v1beta1

In addition, the MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers should be added and listed in the correct order in the admission-control flag of kube-apiserver.

Network Requirements

The sidecar injector listens for HTTPS requests on port 443. The firewall rules for that port must be configured accordingly:

From                   To                     Port   Description
Kubernetes Master(s)   Kubernetes Nodes       443    Allow incoming requests from sidecar-injector pod(s)

From                   To                     Port   Description
Kubernetes Nodes       Kubernetes Master(s)   443    Allow kubernetes master to communicate with sidecar-injector pod(s)


This procedure requires a context with cluster-admin privilege in order to create the MutatingWebhookConfiguration.

In the CloudBees Core binary bundle, you will find a directory named sidecar-injector. The following instructions assume this is the working directory.

Create a certificate bundle

In the following instructions, we assume you are working in the namespace where CloudBees Core is installed, and the certificate you want to install is named mycertificate.pem.

For a self-signed certificate, add the certificate itself. If the certificate has been issued from a custom root CA, add the root CA itself.

# Copy reference files locally
kubectl cp cjoc-0:/etc/ssl/certs/ca-certificates.crt ./ca-certificates.crt
kubectl cp cjoc-0:/etc/ssl/certs/java/cacerts ./cacerts
# Add root CA to system certificate bundle
cat mycertificate.pem >> ca-certificates.crt
# Add root CA to java cacerts
keytool -import -noprompt -keystore cacerts -file mycertificate.pem -storepass changeit -alias service-mycertificate;
# Create a configmap with the two files above
kubectl create configmap ca-bundles --from-file=ca-certificates.crt --from-file=cacerts

Setup injector

  1. Browse to the directory where CloudBees Core archive has been unpacked, then go to sidecar-injector folder.

  2. Create a namespace to deploy the sidecar injector.

    kubectl create namespace sidecar-injector
    The following instructions assume the deployment is performed in the sidecar-injector namespace. If the target namespace has a different name, a global replacement needs to be done in the sidecar-injector.yaml file before proceeding.
  3. Create a signed cert/key pair and store it in a Kubernetes secret that will be consumed by sidecar deployment.

    ./ \
     --service sidecar-injector-webhook-svc \
     --secret sidecar-injector-webhook-certs \
     --namespace sidecar-injector
  4. Patch the MutatingWebhookConfiguration by setting caBundle to the correct value from the Kubernetes cluster

    cat sidecar-injector.yaml | \
        ./ > sidecar-injector-ca-bundle.yaml
    In some Kubernetes deployments, it is possible that the resulting caBundle is an empty string. This means the deployment doesn’t support certificate-based authentication, but it won’t prevent using this feature.
  5. Switch to sidecar-injector namespace

    kubectl config set-context $(kubectl config current-context) --namespace=sidecar-injector
  6. Deploy resources

    kubectl create -f sidecar-injector-ca-bundle.yaml
  7. Verify everything is running

    The sidecar-inject-webhook pod should be running

    # kubectl get pods
    NAME                                                  READY     STATUS    RESTARTS   AGE
    sidecar-injector-webhook-deployment-bbb689d69-882dd   1/1       Running   0          5m
    # kubectl get deployment
    NAME                                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    sidecar-injector-webhook-deployment   1         1         1            1           5m

Configure namespace

  1. Label the namespace where CloudBees Core is installed with sidecar-injector=enabled

    kubectl label namespace mynamespace sidecar-injector=enabled
  2. Check

    # kubectl get namespace -L sidecar-injector
    NAME          STATUS    AGE       SIDECAR-INJECTOR
    default       Active    18h
    mynamespace   Active    18h       enabled
    kube-public   Active    18h
    kube-system   Active    18h


  1. Deploy an app in the Kubernetes cluster, taking the sleep app as an example

    # cat <<EOF | kubectl create -f -
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: sleep
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: sleep
        spec:
          containers:
          - name: sleep
            image: tutum/curl
            command: ["/bin/sleep","infinity"]
    EOF
  2. Verify injection has happened

    # kubectl get pods -o 'go-template={{range .items}}{{.metadata.name}}{{"\n"}}{{range $key,$value := .metadata.annotations}}* {{$key}}: {{$value}}{{"\n"}}{{end}}{{"\n"}}{{end}}'
    * com.cloudbees.sidecar-injector/status: injected


You are now all set to use your custom CA across your Kubernetes cluster.

To pick up the new certificate bundle, restart Operations Center and any running Managed Masters. Newly scheduled build agents will also pick up the certificate bundle, allowing connections to remote endpoints that use your certificates.