EKS Cluster

To create an Amazon EKS cluster, refer to the official Amazon EKS Documentation.

More information on Kubernetes concepts is available from the Kubernetes site.

The Kubernetes cluster requirements must be satisfied before CloudBees Core can be installed.

CloudBees Core Prerequisites

The CloudBees Core installer requires:

  • On your local computer or a bastion host:

    • Kubernetes client 1.10 (or newer) installed and configured (kubectl)

  • An AWS EKS cluster running Kubernetes 1.10 (or newer) installed and configured

    • With nodes that have at least 2 CPUs and 4 GB of memory (so that each node has 1 full CPU / 1 GB available after running a master with default settings)

    • N+1 worker nodes, where N is the number of Masters and the +1 is for Operations Center (see the guide on estimating your requirements for Masters to determine how many Masters you need)

    • Must have network access to container images (public Docker Hub or a private Docker Registry)

  • The NGINX Ingress Controller installed in the cluster (v0.9.0 minimum)

    • Load balancer configured and pointing to the NGINX Ingress Controller

    • A DNS record that points to the NGINX Ingress Controllers Load balancer

    • SSL certificates (needed when you deploy CloudBees Core)

  • A namespace in the cluster (provided by your admin) with permissions to create Role and RoleBinding objects

  • Kubernetes cluster Default Storage Class defined and ready to use
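The kubectl 1.10-or-newer requirement can be checked with `kubectl version --client`; the comparison itself is easy to script. A minimal sketch (the `version_ge` helper is our own illustration, not part of kubectl):

```shell
# Sketch: compare dotted version strings in version order using sort -V
version_ge() {
  # True when $1 >= $2
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

version_ge "1.11.3" "1.10" && echo "client version OK"
# prints: client version OK
```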

NGINX Ingress Controller

CloudBees Core requires an NGINX Ingress Controller. Once the Kubernetes cluster is up and running, your next step should be to install the NGINX Ingress Controller. Read the NGINX Ingress Controller Installation Guide for instructions on how to install the controller. Choose the RBAC option, because your newly created Kubernetes cluster uses RBAC.

You must use either L4 or L7 networking for the NGINX Ingress Controller. If you plan to use external masters or agents, L4 networking is recommended because of a Kubernetes limitation that prevents mixed L4/L7 listeners on ELBs.

Kubectl Install

The following commands install the NGINX Ingress Controller for AWS with an L4 Load Balancer:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml

kubectl patch service ingress-nginx -p '{"spec":{"externalTrafficPolicy":"Local"}}' -n ingress-nginx
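After the final patch, the relevant fragment of the ingress-nginx Service spec looks roughly like this (illustrative; `Local` routes traffic only to node-local endpoints, which preserves client source IPs):

```yaml
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # preserve client source IPs
```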

DNS Record

Create a DNS record for the domain you want to use for CloudBees Core, pointing to your NGINX ingress controller load balancer.

As the AWS ELB has a redundant set-up with multiple IPs, it is recommended that you create a CNAME record pointing to the ELB DNS.
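For example, in BIND zone-file syntax (both names below are placeholders; substitute your domain and the ELB DNS name reported by kubectl or the AWS console):

```
cloudbees-core.example.com.  300  IN  CNAME  a1b2c3d4e5f6-1234567890.us-west-2.elb.amazonaws.com.
```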


TLS Termination at NGINX level

To configure the NGINX ingress controller to support SSL termination, see the TLS Termination at Ingress chapter of the EKS Reference Architecture.

TLS Termination at ELB level

As an alternative, SSL termination can be set up at the AWS Load Balancer level.

This can be done by adding the following annotations to the NGINX ingress controller service. The ARN reference must be replaced by the actual ARN of the ACM certificate used to do the TLS offloading.

# Apply the SSL settings to the port named 'https'
kubectl annotate service ingress-nginx service.beta.kubernetes.io/aws-load-balancer-ssl-ports="https"
# Reference of the ACM certificate to apply to the listener
kubectl annotate service ingress-nginx service.beta.kubernetes.io/aws-load-balancer-ssl-cert="arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"
# Use only modern TLS ciphers - https://aws.amazon.com/about-aws/whats-new/2017/02/elastic-load-balancing-support-for-tls-1-1-and-tls-1-2-pre-defined-security-policies/
kubectl annotate service ingress-nginx service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy="ELBSecurityPolicy-TLS-1-2-2017-01"

Redirect HTTP to HTTPS

If you only want to serve traffic through HTTPS (which you should), any incoming traffic through HTTP should be redirected to HTTPS. This is a convenience for users, who often don’t type the protocol when entering a hostname in their browser.

By following the previous procedure, CloudBees Core is accessible either through HTTP or HTTPS. The following steps redirect HTTP traffic to HTTPS.

  1. Copy/paste the following to a new file and save it as nginx-configuration-configmap-patch.yaml

      data:
        use-proxy-protocol: "true"
        http-snippet: |
          map '' $pass_access_scheme {
              default          https;
          }
          map '' $pass_port {
              default          443;
          }
          server {
            listen 8080 proxy_protocol;
            return 301 https://$host$request_uri;
          }
  2. Apply this patch using the command:

    kubectl patch configmap nginx-configuration -n ingress-nginx -p "$(cat nginx-configuration-configmap-patch.yaml)"

    Then, patch the service to forward the traffic to port 8080.

  3. Copy/paste the following to a new file and save it as nginx-service-patch.yaml

      spec:
        ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: 8080
        - name: https
          port: 443
          targetPort: http

    Then, patch the service using the following command:

    kubectl patch service ingress-nginx -n ingress-nginx -p "$(cat nginx-service-patch.yaml)"

    This procedure does several things:

    • It makes NGINX listen on port 8080, in addition to its existing ports. Any traffic coming to port 8080 is redirected to the https protocol.

    • It overrides several computed variables to instruct upstream that the traffic is being served in https on port 443.

    • The load balancer is reconfigured so that incoming traffic on port 80 is forwarded on port 8080 for redirection.
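The redirect rule in the http-snippet simply reassembles the request as an https URL. Its effect can be previewed in plain shell (the hostname and path below are made-up stand-ins for what nginx takes from the incoming request):

```shell
# Stand-in values; the real ones come from the incoming request
host="cloudbees-core.example.com"
request_uri="/cjoc/login?from=%2F"

# This is the URL nginx's `return 301 https://$host$request_uri;` redirects to
echo "https://${host}${request_uri}"
# prints: https://cloudbees-core.example.com/cjoc/login?from=%2F
```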

CloudBees Core Namespace

By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, Services, and Deployments used by the cluster.

Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following:

$ kubectl get namespaces
NAME            STATUS    AGE
default         Active    13m
ingress-nginx   Active    8m
kube-public     Active    13m
kube-system     Active    13m

It is recommended to use a CloudBees Core-specific namespace in the cluster, with permissions to create Role and RoleBinding objects. For example, to create a 'cje' namespace:

echo "apiVersion: v1
kind: Namespace
metadata:
  labels:
    name: cje
  name: cje" > cje-namespace.yaml

Create the 'cje' namespace using kubectl:

$ kubectl create -f cje-namespace.yaml

Switch to the newly created namespace.

$ kubectl config set-context $(kubectl config current-context) --namespace=cje

Run installer

CloudBees Core runs on a Kubernetes cluster. Kubernetes cluster installations are configured with YAML files. The CloudBees Core installer provides a cloudbees-core.yml file that is modified for each installation.

  • Download installer

  • Unpack installer

    $ export INSTALLER=cloudbees-core_2.121.3.1_kubernetes.tgz
    $ sha256sum -c $INSTALLER.sha256
    $ tar xzvf $INSTALLER
  • Prepare shell variables for your installation. Replace cloudbees-core.example.com with your domain name.


    If you do not have an available domain, you can use the ELB name directly. However, you won’t be able to deploy CloudBees Core in separate namespaces unless you use domain names. The example below shows how to get the domain name of your ELB via kubectl:

    $ DOMAIN_NAME=$(kubectl -n ingress-nginx get svc ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
  • Edit the cloudbees-core.yml file for your installation to replace the hostname cloudbees-core.example.com with the domain that you will use for CloudBees Core. You can do this with your favorite text editor, or you can use sed. The example below shows how you can use the sed command to edit the configuration and change the domain name to $DOMAIN_NAME.

    $ cd cloudbees-core_2.121.3.1_kubernetes
    $ sed -e s,cloudbees-core.example.com,$DOMAIN_NAME,g < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml
  • Disable SSL redirection if you do not have SSL certificates (the terms SSL and TLS are interchangeable in this document). If you are not going to be running on HTTPS, you must change the URLs from https to http. The example sed commands below show how you can edit the configuration file to make these changes.

    $ sed -e s,https://$DOMAIN_NAME,http://$DOMAIN_NAME,g < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml
    $ sed -e s,ssl-redirect:\ \"true\",ssl-redirect:\ \"false\",g < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml
If you wish to terminate TLS at the Ingress, you must uncomment the lines in cloudbees-core.yml that specify tls and provide the name of the Kubernetes secret that holds the TLS certificates. For more information, refer to the Ingress TLS Termination section of the CloudBees Core Reference Architecture - Kubernetes on AWS, or CloudBees Core Reference Architecture - Kubernetes on AWS EKS if you are using EKS.
  • Run the installer

    $ kubectl apply -f cloudbees-core.yml
    serviceaccount "cjoc" created
    role "master-management" created
    rolebinding "cjoc" created
    configmap "cjoc-config" created
    configmap "cjoc-configure-jenkins-groovy" created
    statefulset "cjoc" created
    service "cjoc" created
    ingress "cjoc" created
    ingress "default" created
    serviceaccount "jenkins" created
    role "pods-all" created
    rolebinding "jenkins" created
    configmap "jenkins-agent" created
  • Wait until CJOC is rolled out

    $ kubectl rollout status sts cjoc
  • Read the admin password

    $ kubectl exec cjoc-0 -- cat /var/jenkins_home/secrets/initialAdminPassword
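The `sha256sum -c` step in the download instructions verifies the archive against a companion checksum file; the mechanics can be tried on any scratch file:

```shell
# Scratch file standing in for the real installer archive
printf 'example payload\n' > installer.tgz
sha256sum installer.tgz > installer.tgz.sha256

# Verification succeeds only while the file matches its recorded checksum
sha256sum -c installer.tgz.sha256
# prints: installer.tgz: OK
```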

Open Operations Center

CloudBees Core is now installed, configured, and ready to run. Open the CloudBees Core URL and log in with the initial admin password. Install the CloudBees Core license and the recommended plugins.

See Administering CloudBees Core for further information.

Adding Client Masters

Occasionally administrators need to connect existing masters to a CloudBees Core cluster. Existing masters connected to a CloudBees Core cluster are called "Client Masters" to distinguish them from Managed Masters. A master running on Windows is one example that requires a Client Master.


The Kubernetes cluster requirements must be satisfied before a Client Master can be successfully added.

Before adding a Client Master, the following are required:

  • Install the NGINX Ingress Controller in the cluster as stated in the CloudBees Core Prerequisites.

  • Configure the NGINX Ingress Controller to use tcp-services ConfigMap for exposing TCP ports as detailed in the Ingress Nginx’s user guide on exposing tcp and udp services.

  • Operations Center must be able to accept JNLP requests.

    • The TCP port for JNLP on Operations Center must be enabled (the examples below use 50000; if your port differs, substitute your port number).

    • At least one JNLP protocol must be enabled (JNLP4 is recommended).

Configure ports

  1. Confirm Operations Center is ready to answer internal JNLP requests

    $ kubectl exec -ti cjoc-0 -- curl localhost:50000
    Jenkins-Agent-Protocols: Diagnostic-Ping, JNLP4-connect, OperationsCenter2, Ping
    Jenkins-Session: 3fa70d75
    Client: 0:0:0:0:0:0:0:1
    Server: 0:0:0:0:0:0:0:1
    Remoting-Minimum-Version: 3.4
  2. Open the JNLP port (50000) in the Kubernetes cluster

  3. Copy/paste the following to a new file and save it as tcp-services.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
    data:
      50000: "cje/cjoc:50000:PROXY"
  4. Copy/paste the following to a new file and save it as deployment-patch.yaml

    spec:
      template:
        spec:
          containers:
            - name: nginx-ingress-controller
              ports:
                - containerPort: 50000
                  name: 50000-tcp
                  protocol: TCP
  5. Copy/paste the following to a new file and save it as service-patch.yaml

    spec:
      ports:
      - name: 50000-tcp
        port: 50000
        protocol: TCP
        targetPort: 50000-tcp
  6. Apply these fragments using the following commands

    kubectl apply -f tcp-services.yaml
    kubectl patch deployment ingress-nginx -n ingress-nginx -p "$(cat deployment-patch.yaml)"
    kubectl patch service ingress-nginx -n ingress-nginx -p "$(cat service-patch.yaml)"
  7. Last, annotate the service using the following command to increase default ELB timeouts.

    kubectl annotate -n ingress-nginx ingress-nginx service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout="3600"
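After these patches, the ingress-nginx Service exposes the JNLP port alongside HTTP and HTTPS; its ports section would look roughly like this (illustrative):

```yaml
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: 50000-tcp
    port: 50000
    protocol: TCP
    targetPort: 50000-tcp
```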

Test Connection

You can confirm that Operations Center is ready to receive external JNLP requests with the following command:

$ curl $DOMAIN_NAME:50000
Jenkins-Agent-Protocols: Diagnostic-Ping, JNLP4-connect, MultiMaster, OperationsCenter2, Ping
Jenkins-Session: b02dc475
Remoting-Minimum-Version: 2.60

Continue installation

Once ports and security are correctly configured in your cloud and on your Client Master, continue the instructions in Adding Client Masters.

Adding JNLP Agents

The procedure is similar to the one allowing connection of external masters to Operations Center.

  1. Begin with this command:

    curl -I https://$DOMAIN_NAME/$MASTER_NAME/ 2>/dev/null | grep X-Jenkins-CLI-Port

    This returns the advertised port for JNLP traffic.

  2. If you already configured tcp-services before, you will need to retrieve the current configmap using kubectl get configmap tcp-services -n ingress-nginx -o yaml > tcp-services.yaml and edit it accordingly.

    Example of a new file: replace occurrences of $JNLP_MASTER_PORT by the port determined just before, and $MASTER_NAME by the master name, then save it to tcp-services.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
    data:
      $JNLP_MASTER_PORT: "cje/$MASTER_NAME:$JNLP_MASTER_PORT:PROXY"
  3. Copy/paste the following to a new file and save it as deployment-patch.yaml:

    spec:
      template:
        spec:
          containers:
            - name: nginx-ingress-controller
              ports:
                - containerPort: $JNLP_MASTER_PORT
                  name: $JNLP_MASTER_PORT-tcp
                  protocol: TCP
  4. Copy/paste the following to a new file and save it as service-patch.yaml:

    spec:
      ports:
      - name: $JNLP_MASTER_PORT-tcp
        port: $JNLP_MASTER_PORT
        protocol: TCP
        targetPort: $JNLP_MASTER_PORT-tcp
  5. Apply these fragments using the following commands:

    kubectl apply -f tcp-services.yaml
    kubectl patch deployment ingress-nginx -n ingress-nginx -p "$(cat deployment-patch.yaml)"
    kubectl patch service ingress-nginx -n ingress-nginx -p "$(cat service-patch.yaml)"
  6. Finally, annotate the service using the following command to increase default ELB timeouts

    kubectl annotate -n ingress-nginx ingress-nginx service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout="3600"
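Step 1 greps the X-Jenkins-CLI-Port response header; extracting the port number into `$JNLP_MASTER_PORT` can be done the same way. Shown here against a canned response so no network is needed (the header value is made up):

```shell
# Canned response; in practice this comes from the curl command in step 1
response='HTTP/1.1 200 OK
X-Jenkins-CLI-Port: 50004
Content-Type: text/html;charset=utf-8'

JNLP_MASTER_PORT=$(printf '%s\n' "$response" \
  | grep -i '^X-Jenkins-CLI-Port:' | awk '{print $2}' | tr -d '\r')
echo "$JNLP_MASTER_PORT"
# prints: 50004
```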

Test Connection

You can confirm that the Master is ready to receive external JNLP requests with the following command (using the JNLP port retrieved earlier):

$ curl $DOMAIN_NAME:$JNLP_MASTER_PORT
Jenkins-Agent-Protocols: Diagnostic-Ping, JNLP4-connect, OperationsCenter2, Ping
Jenkins-Session: f4e6410a
Client: 0:0:0:0:0:0:0:1
Server: 0:0:0:0:0:0:0:1
Remoting-Minimum-Version: 3.4

Continue installation

Once the JNLP port is correctly configured in your cloud, you can create a new 'node' in your master under 'Manage Jenkins → Manage Nodes'.

NOTE: the node should be configured with Launch method: 'Launch agent via Web Start'.

Auto-scaling nodes

Auto-scaling of nodes can be achieved by installing the Kubernetes Cluster Autoscaler.

Auto-scaling considerations

While scaling up is straightforward, scaling down is potentially more problematic. Scaling down involves moving workloads to different nodes when the node to be reclaimed still has some utilization but is below the reclamation threshold. Moving agent workloads can interrupt builds (failed builds), and moving Operations Center or Managed Master workloads means downtime.

Distinct node pools

One way to deal with scaling down is to treat each workload differently by using separate node pools and thus apply different logic to control the scaling down.

Managed Master and Operations Center workload

By assigning Managed Master and Operations Center workload to a dedicated pool, the scaling down of nodes can be prevented by restricting eviction of Managed Master or Operations Center deployments. Scale up happens normally when resources must be increased to deploy additional Managed Masters, but scale down only happens when the nodes are free of Operations Center or Managed Master workloads. This is usually acceptable, since masters are meant to be stable and long-lived rather than ephemeral.

This is achieved by adding the following annotation to Operations Center and Managed Masters: "cluster-autoscaler.kubernetes.io/safe-to-evict": "false"

For Operations Center, the annotation is added to cloudbees-core.yml in the CJOC "StatefulSet" definition under "spec - template - metadata - annotations":

apiVersion: "apps/v1beta1"
kind: "StatefulSet"
metadata:
  name: cjoc
  labels:
    com.cloudbees.cje.type: cjoc
    com.cloudbees.cje.tenant: cjoc
spec:
  serviceName: cjoc
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"

For Managed Master, the annotation is added in the configuration page under the 'Advanced Configuration - YAML' parameter. The YAML snippet to add would look like:

kind: StatefulSet
spec:
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
Agent workload

By assigning Jenkins agent workload to a dedicated pool, scaling down can be handled by the default logic. Since agents are Pods that are not backed by a Kubernetes controller, the autoscaler will not scale down a node while such Pods are running on it. This prevents nodes from being reclaimed, and agents from being interrupted, even when the autoscaler is below its reclamation threshold.

In order to create a dedicated pool for agent workload, we need to prevent other types of workload from being deployed on the dedicated pool nodes. This is accomplished by tainting the dedicated pool nodes. Then, to allow scheduling of agent workload on the dedicated pool nodes, the agent pod uses a corresponding taint toleration and a node selector.

When nodes are created dynamically by the Kubernetes autoscaler, they need to be created with the proper taint and label.

With EKS, the taint and label can be specified in the Kubernetes kubelet service defined in the UserData section of the AWS autoscaling group LaunchConfiguration.

Following the AWS EKS documentation, the nodes are created by a CloudFormation template. Download the worker node template (see the EKS documentation on 'launch your worker nodes') and, in the UserData section, add the node-labels and register-with-taints flags to the kubelet service:

      "sed -i '/bin\\/kubelet/a --node-labels=workload=build \\\\'  /etc/systemd/system/kubelet.service" , "\n",
      "sed -i '/bin\\/kubelet/a --register-with-taints=nodeType=build:NoSchedule \\\\'  /etc/systemd/system/kubelet.service" , "\n",
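The two sed one-liners append extra kubelet flags after the line that launches the binary. Their effect can be previewed on a scratch copy of a unit file (the contents below are a simplified stand-in for the real kubelet.service):

```shell
# Scratch unit file standing in for /etc/systemd/system/kubelet.service
cat > kubelet.service <<'EOF'
ExecStart=/usr/bin/kubelet --cloud-provider aws \
  --kubeconfig /var/lib/kubelet/kubeconfig
EOF

# Append the extra flags after the line that launches the kubelet binary
sed -i '/bin\/kubelet/a --node-labels=workload=build \\' kubelet.service
sed -i '/bin\/kubelet/a --register-with-taints=nodeType=build:NoSchedule \\' kubelet.service
cat kubelet.service
```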

The autoscaling group LaunchConfiguration will look something like:

  NodeLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      AssociatePublicIpAddress: 'true'
      IamInstanceProfile: !Ref NodeInstanceProfile
      ImageId: !Ref NodeImageId
      InstanceType: !Ref NodeInstanceType
      KeyName: !Ref KeyName
      SecurityGroups:
      - !Ref NodeSecurityGroup
      UserData:
        Fn::Base64:
          Fn::Join: [
            "",
            [
              "#!/bin/bash -xe\n",
              "CA_CERTIFICATE_DIRECTORY=/etc/kubernetes/pki", "\n",
              "CA_CERTIFICATE_FILE_PATH=$CA_CERTIFICATE_DIRECTORY/ca.crt", "\n",
              "MODEL_DIRECTORY_PATH=~/.aws/eks", "\n",
              "MODEL_FILE_PATH=$MODEL_DIRECTORY_PATH/eks-2017-11-01.normal.json", "\n",
              "mkdir -p $CA_CERTIFICATE_DIRECTORY", "\n",
              "mkdir -p $MODEL_DIRECTORY_PATH", "\n",
              "curl -o $MODEL_FILE_PATH https://s3-us-west-2.amazonaws.com/amazon-eks/1.10.3/2018-06-05/eks-2017-11-01.normal.json", "\n",
              "aws configure add-model --service-model file://$MODEL_FILE_PATH --service-name eks", "\n",
              "aws eks describe-cluster --region=", { Ref: "AWS::Region" }," --name=", { Ref: ClusterName }," --query 'cluster.{certificateAuthorityData: certificateAuthority.data, endpoint: endpoint}' > /tmp/describe_cluster_result.json", "\n",
              "cat /tmp/describe_cluster_result.json | grep certificateAuthorityData | awk '{print $2}' | sed 's/[,\"]//g' | base64 -d >  $CA_CERTIFICATE_FILE_PATH", "\n",
              "MASTER_ENDPOINT=$(cat /tmp/describe_cluster_result.json | grep endpoint | awk '{print $2}' | sed 's/[,\"]//g')", "\n",
              "INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)", "\n",
              "sed -i s,MASTER_ENDPOINT,$MASTER_ENDPOINT,g /var/lib/kubelet/kubeconfig", "\n",
              "sed -i s,CLUSTER_NAME,", { Ref: ClusterName }, ",g /var/lib/kubelet/kubeconfig", "\n",
              "sed -i s,REGION,", { Ref: "AWS::Region" }, ",g /etc/systemd/system/kubelet.service", "\n",
              "sed -i s,MAX_PODS,", { "Fn::FindInMap": [ MaxPodsPerNode, { Ref: NodeInstanceType }, MaxPods ] }, ",g /etc/systemd/system/kubelet.service", "\n",
              "sed -i s,MASTER_ENDPOINT,$MASTER_ENDPOINT,g /etc/systemd/system/kubelet.service", "\n",
              "sed -i s,INTERNAL_IP,$INTERNAL_IP,g /etc/systemd/system/kubelet.service", "\n",
              "DNS_CLUSTER_IP=10.100.0.10", "\n",
              "if [[ $INTERNAL_IP == 10.* ]] ; then DNS_CLUSTER_IP=172.20.0.10; fi", "\n",
              "sed -i s,DNS_CLUSTER_IP,$DNS_CLUSTER_IP,g  /etc/systemd/system/kubelet.service", "\n",
              "sed -i s,CERTIFICATE_AUTHORITY_FILE,$CA_CERTIFICATE_FILE_PATH,g /var/lib/kubelet/kubeconfig" , "\n",
              "sed -i s,CLIENT_CA_FILE,$CA_CERTIFICATE_FILE_PATH,g  /etc/systemd/system/kubelet.service" , "\n",
              "sed -i '/bin\\/kubelet/a --node-labels=workload=build \\\\'  /etc/systemd/system/kubelet.service" , "\n",
              "sed -i '/bin\\/kubelet/a --register-with-taints=nodeType=build:NoSchedule \\\\'  /etc/systemd/system/kubelet.service" , "\n",
              "systemctl daemon-reload", "\n",
              "systemctl restart kubelet", "\n",
              "/opt/aws/bin/cfn-signal -e $? ",
              "         --stack ", { Ref: "AWS::StackName" },
              "         --resource NodeGroup ",
              "         --region ", { Ref: "AWS::Region" }, "\n"
            ]
          ]

The first parameter node-labels will automatically add the label workload=build to the newly created nodes. This label will then be used as the NodeSelector for the agent. The second parameter register-with-taints will automatically add the nodeType=build:NoSchedule taint to the node.

Follow the 'launch your worker nodes' EKS documentation but use the modified template to create the agent pool.

Security group Ingress settings

The security group of the default worker node pool needs to be modified to allow ingress traffic from the newly created pool's security group, so that agents can communicate with Managed Masters running in the default pool.

The agent pod template then needs the corresponding 'toleration' to allow the scheduling of agent workload on those nodes.

agent toleration selector

For Pipelines, a 'toleration' can be added to the podTemplate using the yaml parameter as follows:

    def label = "mypodtemplate-${UUID.randomUUID().toString()}"
    def nodeSelector = "workload=build"
    podTemplate(label: label, yaml: """
    apiVersion: v1
    kind: Pod
    spec:
      tolerations:
      - key: nodeType
        operator: Equal
        value: build
        effect: NoSchedule
    """, nodeSelector: nodeSelector, containers: [
      containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat')
    ]) {
      node(label) {
        stage('Run maven') {
          container('maven') {
            sh 'mvn --version'
          }
        }
      }
    }

IAM policy

The worker running the cluster autoscaler needs access to certain resources and actions.

A minimum IAM policy, matching the example in the Cluster Autoscaler on AWS documentation, would look like:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "autoscaling:DescribeAutoScalingGroups",
                    "autoscaling:DescribeAutoScalingInstances",
                    "autoscaling:DescribeTags",
                    "autoscaling:SetDesiredCapacity",
                    "autoscaling:TerminateInstanceInAutoScalingGroup"
                ],
                "Resource": "*"
            }
        ]
    }

If the current NodeInstanceRole defined for the EKS cluster nodes does not have the policy actions required for the autoscaler, create a new 'eks-auto-scaling' policy as outlined above and then attach this policy to the NodeInstanceRole.

Install cluster autoscaler

Examples for deployment of the cluster autoscaler in AWS can be found here: AWS cluster autoscaler

As an example let’s use the single auto-scaling group example.

A few things need to be modified to match your EKS cluster setup. Here is a sample extract of the autoscaler deployment section:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/cluster-autoscaler:v1.1.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --nodes=1:10:acme-eks-worker-nodes-NodeGroup-FD1OD4CZ0J77
          env:
            - name: AWS_REGION
              value: us-west-2
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-bundle.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-bundle.crt"
  1. If EKS is running Kubernetes v1.9.2 or above, use version 1.1.0 of the autoscaler.

  2. Update the '--nodes=' command parameter. The syntax is 'ASG_MIN_SIZE:ASG_MAX_SIZE:ASG_NAME'. Multiple '--nodes' parameters can be defined so the autoscaler manages multiple AWS auto-scaling groups.

  3. Update the AWS_REGION environment variable to match the EKS cluster region.

  4. If using AWS Linux 2 AMIs for the nodes, set the SSL certificate path to '/etc/ssl/certs/ca-bundle.crt'.
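For example, per point 2, the autoscaler can manage several auto-scaling groups at once by carrying one '--nodes' flag per group in its command section (the group names below are hypothetical):

```yaml
command:
  - ./cluster-autoscaler
  - --v=4
  - --cloud-provider=aws
  - --nodes=1:10:acme-eks-default-NodeGroup-XXXXXXXX
  - --nodes=0:20:acme-eks-build-NodeGroup-YYYYYYYY
```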

To install the autoscaler:

$ kubectl create -f cluster-autoscaler-one-asg.yaml

Cloud ready Artifact Manager for AWS

Jenkins has historically provided multiple ways to save build products, otherwise known as artifacts.

Some plugins, such as Artifactory and Nexus Artifact Uploader, permit you to upload artifact files to repository managers, while other plugins, such as Publish Over FTP, Publish Over CIFS, and Publish Over SSH, send artifacts to remote shared filesystems. Jenkins itself stores artifact files in the Jenkins home filesystem. In 2012, CloudBees released the Fast Archiver Plugin, which optimizes the default artifact transmission but uses the same storage location.

Unfortunately, a number of these solutions are not cloud-ready, and it is awkward and difficult to use them with CloudBees Core on modern cloud platforms. Some solutions, like the S3 publisher, are well suited for use in a cloud environment but require special build steps within Pipelines.

CloudBees is developing a series of cloud-ready artifact manager plugins. The first of these is Artifact Manager on S3 plugin. This plugin permits you to archive artifacts in an S3 Bucket, where there is less need to be concerned about the disk space used by artifacts.

Easy to configure

To configure Artifact Manager on S3:

  1. Go to Manage Jenkins/Configure System.

  2. In the Artifact Management for Builds section, select the Cloud Provider Amazon S3.
  3. Return to Manage Jenkins/Amazon Web Services Configuration to configure your AWS credentials for access to the S3 Bucket.

  4. For your AWS credentials, use the IAM Profile configured for the Jenkins instance, or configure a regular key/secret AWS credential in Jenkins. Note that your AWS account must have permissions to access the S3 Bucket, and must be able to list, get, put, and delete objects in the S3 Bucket.

  5. Save or apply the credentials configuration, and move on to configure your S3 bucket settings.

  6. We recommend validating your configuration. If the validation succeeds, you’ve completed the configuration for Artifact Manager on S3.

  7. For more details, see the Artifact Manager on S3 plugin documentation.

Uploading and downloading artifacts

The Artifact Manager on S3 plugin is compatible with both Pipeline and FreeStyle jobs. To archive, unarchive, stash or unstash, use the default Pipeline steps.

FreeStyle jobs

For FreeStyle jobs, use a post-build action of Archive the Artifacts to store your Artifacts into the S3 Bucket.


To copy artifacts between projects:

  1. Make sure the Copy Artifact Plugin is installed.

  2. Use a build step to copy artifacts from the other project:


Pipeline jobs

For Pipeline jobs, use an archiveArtifacts step to archive artifacts into the S3 Bucket:

node() {
    //you build stuff
    stage('Archive') {
        archiveArtifacts "my-artifacts-pattern/*"
    }
}

To retrieve artifacts that were previously saved in the same build, use an unarchive step that retrieves the artifacts from S3 Bucket. Set the mapping parameter to a list of pairs of source-filename and destination-filename:

node() {
    //you build stuff
    stage('Unarchive') {
        unarchive mapping: ["my-artifacts-pattern/": '.']
    }
}

To save a set of files for use later in the same build (generally on another node/workspace) use a stash step to store those files on the S3 Bucket:

node() {
    //you build stuff
    stash name: 'stuff', includes: '*'
}

To retrieve files saved with a stash step, use an unstash step, which retrieves previously stashed files from the S3 Bucket and copies them to the local workspace:

node() {
    //you build stuff
    unstash 'stuff'
}

To copy artifacts between projects:

  1. Make sure the Copy Artifact Plugin is installed.

  2. Use a copyArtifacts step to copy artifacts from the other project:

      node() {
          //you build stuff
          copyArtifacts(projectName: 'downstream', selector: specific("${built.number}"))
      }


Artifact Manager on S3 manages security using Jenkins permissions. This means that unless users or jobs have permission to read the job in Jenkins, the user or job cannot retrieve the download URL.

Download URLs are temporary URLs linked to the S3 Bucket, with a duration of one hour. Once that hour has expired, you’ll need to request a new temporary URL to download the artifact.

Agents use HTTPS (of the form https://my-bucket.s3.xx-xxxx-x.amazonaws.com/*) and temporary URLs to archive, unarchive, stash, unstash and copy artifacts. Agents do not have access to either the AWS credentials or the whole S3 Bucket, and are limited to get and put operations.


A major distinction between the Artifact Manager for S3 plugin and other plugins is in the load on the master and the responsiveness of the master-agent network connection. Every upload/download action is executed by the agent, which means that the master spends only the time necessary to generate the temporary URL: the remainder of the time is allocated to the agent.

The performance tests detailed below compare the CloudBees Fast Archiving Plugin and the Artifact Manager on S3 plugin.

Performance tests were executed in a Jenkins 2.121 environment running on Amazon EC2, with JENKINS_HOME configured on an EBS volume. Three different kinds of tests were executed from the GitHub repository at Performance Test, with samples taken after the tests had been running for one hour:

  • Archive/Unarchive big files: Store a 1GB file and restore it from the Artifact Manager System.

  • Archive/Unarchive small files: Store 100 small files and restore them from the Artifact Manager System. Small files are approximately 10 bytes in size, with 100 files stored and times averaged

  • Stash/Unstash on a pipeline: Execute stash and unstash steps. The Fast Archive Plugin stash/unstash operations used the default stash/unstash implementation.

As can be seen from the results, the Artifact Manager on S3 Plugin provides a measurable performance improvement on both big and small files, with the improvement measured in minutes for big files and in seconds for small files.

Artifact Manager on S3 plugin performance

Plugin link: Artifact Manager on S3

[Charts: archive and unarchive times in milliseconds for the Artifact Manager on S3 plugin (big files, small files, and stash/unstash).]

CloudBees Fast Archiving Plugin performance

[Charts: archive and unarchive times in milliseconds for the CloudBees Fast Archiving Plugin (big files, small files, and stash/unstash).]

Upgrading CloudBees Core

To upgrade to a newer version of CloudBees Core, follow the same steps as the initial installation:

  • Download installer

  • Unpack installer

  • Edit the cloudbees-core.yml file, reapplying any changes you made during the initial installation

  • Run the installer

    $ kubectl apply -f cloudbees-core.yml
  • Wait until CJOC is rolled out

    $ kubectl rollout status sts cjoc

Once the new version of Operations Center is rolled out, you can log in to Operations Center again and upgrade the managed masters. See Upgrading Managed Masters for further information.

Removing CloudBees Core

If you need to remove CloudBees Core from Kubernetes, use the following steps:

  • Delete all masters from Operations Center

  • Stop Operations Center

    kubectl scale statefulsets/cjoc --replicas=0
  • Delete CloudBees Core

    kubectl delete -f cloudbees-core.yml
  • Delete remaining pods and data

    kubectl delete pod,statefulset,pvc,ingress,service -l com.cloudbees.cje.tenant
  • Delete services, pods, persistent volume claims, etc.

    kubectl delete svc --all
    kubectl delete statefulset --all
    kubectl delete pod --all
    kubectl delete ingress --all
    kubectl delete pvc --all

Additional topics

Using Kaniko with CloudBees Core

Introducing Kaniko

Kaniko is a utility that creates container images from a Dockerfile. The image is created inside a container or Kubernetes cluster, which allows users to develop Docker images without using Docker or requiring a privileged container.

Since Kaniko doesn’t depend on the Docker daemon and executes each command in the Dockerfile entirely in the userspace, it enables building container images in environments that can’t run the Docker daemon, such as a standard Kubernetes cluster.

The remainder of this chapter provides a brief overview of Kaniko and illustrates using it in CloudBees Core with a Scripted Pipeline.

How does Kaniko work?

Kaniko looks for the Dockerfile in the Kaniko context. The Kaniko context can be a GCS storage bucket, an S3 storage bucket, or a local directory. A bucket context must be a compressed tar file, which Kaniko expands before reading the Dockerfile; a local directory context is read directly.
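That lookup can be sketched as follows. This is a simplification with hypothetical helper names, assuming the context is either a local directory or a gzipped tar already downloaded from the bucket:

```python
import pathlib
import tarfile
import tempfile

def resolve_context(context: str) -> str:
    """Return a directory containing the build context.

    Bucket-style contexts (GCS/S3) arrive as a compressed tar, so those are
    expanded into a temporary directory; a local directory is used as-is.
    """
    path = pathlib.Path(context)
    if path.is_file() and tarfile.is_tarfile(context):
        workdir = tempfile.mkdtemp(prefix="kaniko-context-")
        with tarfile.open(context, "r:*") as tar:
            tar.extractall(workdir)
        return workdir
    return context

def read_dockerfile(context: str) -> str:
    """Locate and read the Dockerfile inside the resolved context."""
    return (pathlib.Path(resolve_context(context)) / "Dockerfile").read_text()

# Demo: a local directory context containing only a Dockerfile.
ctx = tempfile.mkdtemp()
(pathlib.Path(ctx) / "Dockerfile").write_text("FROM alpine\n")
print(read_dockerfile(ctx))   # prints: FROM alpine
```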

Kaniko then extracts the filesystem of the base image using the FROM statement in the Dockerfile. It then executes each command in the Dockerfile. After each command completes, Kaniko captures filesystem differences. Next, it applies these differences, if there are any, to the base image and updates image metadata. Lastly, Kaniko publishes the newly created image to the desired Docker registry.
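This snapshot-and-diff loop can be modeled with a toy sketch (an illustration of the idea, not Kaniko's actual snapshotting code), representing the filesystem as a dict of path to content:

```python
def diff_snapshot(before: dict, after: dict) -> dict:
    """Files added or changed since the previous snapshot form the new layer."""
    return {p: c for p, c in after.items() if before.get(p) != c}

def build(base_fs: dict, commands: list) -> list:
    """Run each 'command' (a function mutating the fs) and collect one layer per step."""
    fs = dict(base_fs)          # start from the base image's extracted filesystem
    layers = []
    for cmd in commands:
        before = dict(fs)       # snapshot before executing the command
        cmd(fs)                 # execute the Dockerfile command in userspace
        layer = diff_snapshot(before, fs)
        if layer:               # only non-empty diffs become layers
            layers.append(layer)
    return layers

base = {"/bin/sh": "busybox"}
layers = build(base, [
    lambda fs: fs.update({"/app/hello.txt": "hello"}),   # e.g. a COPY step
    lambda fs: None,                                     # e.g. a metadata-only step
])
print(layers)   # [{'/app/hello.txt': 'hello'}]
```

The key property this illustrates is that no privileged filesystem driver is needed: comparing two snapshots in userspace is enough to derive each layer.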


Kaniko runs in an unprivileged container, although it still runs as root inside that container so it can unpack the Docker base image into its filesystem and execute RUN Dockerfile commands that require root privileges.

Primarily, Kaniko offers a way to build Docker images without requiring a container running with the privileged flag, or by mounting the Docker socket directly.

Additional security information can be found under the Security section of the Kaniko documentation. Also, this blog article on unprivileged container builds provides a deep dive on why Docker build needs root access.

Kaniko parameters

Kaniko has two key parameters: the Kaniko context and the image destination. The Kaniko context is the same as the Docker build context: the path where Kaniko expects to find the Dockerfile and any supporting files used to build the image. The destination parameter is the Docker registry to which Kaniko publishes the resulting image. Currently, Kaniko supports Docker Hub (hub.docker.com), GCR, and ECR as Docker registries.

In addition to these parameters, Kaniko also needs a secret containing the authorization details required to push the newly created image to the Docker registry.

Kaniko debug image

The Kaniko executor image is built from scratch and doesn’t contain a shell. The Kaniko project also provides a debug image, gcr.io/kaniko-project/executor:debug, which consists of the executor image plus a busybox shell.

For more details on using the debug image, see the Debug Image section of the Kaniko documentation.

Pipeline example

This example illustrates using Kaniko to build a Docker image from a Git repository and pushing the resulting image to a private Docker registry.


To run this example, you need the following:

  • A Kubernetes cluster with an installation of CloudBees Core

  • A Docker account or another private Docker registry account

  • Your Docker registry credentials

  • Ability to run kubectl against your cluster

  • CloudBees Core account with permission to create the new pipeline


These are the high-level steps for this example:

  1. Create a new Kubernetes Secret.

  2. Create the Pipeline.

  3. Run the Pipeline.

Create a new Kubernetes secret

The first step is to provide credentials that Kaniko uses to publish the new image to the Docker registry. This example uses kubectl and a docker.com account.

If you are using a private Docker registry, you can use it instead of docker.com. Just create the Kubernetes secret with the proper credentials for the private registry.

Kubernetes has a create secret command to store the credentials for private Docker registries.

Use the create secret docker-registry kubectl command to create this secret:

Kubernetes create secret command
 $ kubectl create secret docker-registry docker-credentials \ (1)
    --docker-username=<username> \
    --docker-password=<password>
  1. The name of the new Kubernetes secret.

Create the Pipeline

Create a new Pipeline job in CloudBees Core. In the pipeline field, paste the following Scripted Pipeline:

Sample Scripted Pipeline
def label = "kaniko-${UUID.randomUUID().toString()}"

podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
      - name: jenkins-docker-cfg
        mountPath: /kaniko/.docker
  volumes:
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: docker-credentials (1)
          items:
            - key: .dockerconfigjson
              path: config.json
""") {
  node(label) {
    stage('Build with Kaniko') {
      git 'https://github.com/cb-jeffduska/simple-docker-example.git'
      container(name: 'kaniko', shell: '/busybox/sh') {
        withEnv(['PATH+EXTRA=/busybox']) {
          sh '''#!/busybox/sh
          /kaniko/executor --context `pwd` --destination <docker-username>/hello-kaniko:latest (2)
          '''
        }
      }
    }
  }
}
  1. This is where the docker-credentials secret, created in the previous step, is mounted into the Kaniko Pod under /kaniko/.docker/config.json.

  2. Replace <docker-username> in the destination with your Docker registry username; hello-kaniko is the example image name.

Save the new Pipeline job.

Run the new Pipeline

The sample Pipeline is complete. Run the Pipeline to build the Docker image. When the pipeline is successful, a new Docker image should exist in your Docker registry. The new Docker image can be accessed via standard Docker commands such as docker pull and docker run.


Kaniko does not use Docker to build the image, thus there is no guarantee that it will produce the same image as Docker would. In some cases, the number of layers could also be different.

Kaniko supports most Dockerfile commands, even multistage builds, but does not support all commands. See the list of Kaniko Issues to determine if there is an issue with a specific Dockerfile command. Some rare edge cases are discussed in the Limitations section of the Kaniko documentation.


There are many tools similar to Kaniko. These tools build container images using a variety of different approaches.

There is a summary of these tools and others in the comparison with other tools section of the Kaniko documentation.


This chapter is only a brief introduction into using Kaniko. In addition to the Kaniko documentation, the following is a list of helpful articles and tutorials:

Using self-signed certificates in CloudBees Core

This optional component of CloudBees Core allows you to use self-signed certificates or a custom root CA (Certificate Authority). It works by injecting a given set of files (certificate bundles) into all containers of all scheduled pods.


When using Amazon EKS, the required admission controllers are enabled by default. Please refer to the Amazon documentation to get the full list of admission controllers available on your platform.


This procedure requires a context with cluster-admin privilege in order to create the MutatingWebhookConfiguration.

In the CloudBees Core binary bundle, you will find a directory named sidecar-injector. The following instructions assume this is the working directory.

Create a certificate bundle

In the following instructions, we assume you are working in the namespace where CloudBees Core is installed, and the certificate you want to install is named mycertificate.pem.

For a self-signed certificate, add the certificate itself. If the certificate has been issued from a custom root CA, add the root CA itself.

# Copy reference files locally
kubectl cp cjoc-0:etc/ssl/certs/ca-certificates.crt .
kubectl cp cjoc-0:etc/ssl/certs/java/cacerts .
# Add root CA to system certificate bundle
cat mycertificate.pem >> ca-certificates.crt
# Add root CA to java cacerts
keytool -import -noprompt -keystore cacerts -file mycertificate.pem -storepass changeit -alias service-mycertificate;
# Create a configmap with the two files above
kubectl create configmap ca-bundles --from-file=ca-certificates.crt --from-file=cacerts

Setup injector

  1. Browse to the directory where CloudBees Core archive has been unpacked, then go to sidecar-injector folder.

  2. Create a namespace to deploy the sidecar injector.

    kubectl create namespace sidecar-injector
    The following instructions assume the deployment is performed in the sidecar-injector namespace. If the target namespace has a different name, perform a global replacement in the sidecar-injector.yaml file before proceeding.
  3. Create a signed cert/key pair and store it in a Kubernetes secret that will be consumed by sidecar deployment.

    ./webhook-create-signed-cert.sh \
     --service sidecar-injector-webhook-svc \
     --secret sidecar-injector-webhook-certs \
     --namespace sidecar-injector
  4. Patch the MutatingWebhookConfiguration by setting caBundle to the correct value from the Kubernetes cluster

    cat sidecar-injector.yaml | \
        ./webhook-patch-ca-bundle.sh > \
        sidecar-injector-ca-bundle.yaml
    In some Kubernetes deployments, it is possible that the resulting caBundle is an empty string. This means the deployment doesn’t support certificate-based authentication, but it won’t prevent using this feature.
  5. Switch to sidecar-injector namespace

    kubectl config set-context $(kubectl config current-context) --namespace=sidecar-injector
  6. Deploy resources

    kubectl create -f sidecar-injector-ca-bundle.yaml
  7. Verify everything is running

    The sidecar-injector-webhook pod should be running:

    # kubectl get pods
    NAME                                                  READY     STATUS    RESTARTS   AGE
    sidecar-injector-webhook-deployment-bbb689d69-882dd   1/1       Running   0          5m
    # kubectl get deployment
    NAME                                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    sidecar-injector-webhook-deployment   1         1         1            1           5m
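Conceptually, the caBundle patch in step 4 substitutes the cluster's CA certificate, base64-encoded, into the webhook manifest so the API server can authenticate the injector. A minimal Python sketch of that substitution, assuming the manifest uses a hypothetical ${CA_BUNDLE} placeholder (the actual webhook-patch-ca-bundle.sh script reads the CA from your kubeconfig):

```python
import base64

def patch_ca_bundle(manifest: str, ca_pem: bytes) -> str:
    """Substitute the base64-encoded CA certificate into the caBundle field."""
    return manifest.replace("${CA_BUNDLE}", base64.b64encode(ca_pem).decode())

# Hypothetical manifest fragment with the placeholder to be filled in.
template = "webhooks:\n- clientConfig:\n    caBundle: ${CA_BUNDLE}\n"
patched = patch_ca_bundle(template, b"-----BEGIN CERTIFICATE-----...")
print(patched)
```

If the resulting caBundle is empty (as noted in step 4), the substitution simply produces an empty field, which disables certificate-based authentication of the webhook but does not prevent injection from working.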

Configure namespace

  1. Label the namespace where CloudBees Core is installed with sidecar-injector=enabled

    kubectl label namespace mynamespace sidecar-injector=enabled
  2. Check

    # kubectl get namespace -L sidecar-injector
    NAME          STATUS    AGE       SIDECAR-INJECTOR
    default       Active    18h
    mynamespace   Active    18h       enabled
    kube-public   Active    18h
    kube-system   Active    18h


  1. Deploy an app in the Kubernetes cluster, using a sleep app as an example

    # cat <<EOF | kubectl create -f -
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: sleep
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: sleep
        spec:
          containers:
          - name: sleep
            image: tutum/curl
            command: ["/bin/sleep","infinity"]
    EOF
  2. Verify injection has happened

    # kubectl get pods -o 'go-template={{range .items}}{{.metadata.name}}{{"\n"}}{{range $key,$value := .metadata.annotations}}* {{$key}}: {{$value}}{{"\n"}}{{end}}{{"\n"}}{{end}}'
    * com.cloudbees.sidecar-injector/status: injected


You are now all set to use your custom CA across your Kubernetes cluster.

To pick up the new certificate bundle, restart Operations Center and any running Managed Masters. Newly scheduled build agents also pick up the certificate bundle, allowing connections to remote endpoints that use your certificates.