Installing CloudBees Core on Google Kubernetes Engine (GKE)
This document explains the cluster requirements, points you to the Google Cloud Platform documentation you will need to create a cluster and explains how to install CloudBees Core in your Kubernetes cluster.
Important: The GKE cluster requirements must be satisfied before CloudBees Core can be installed.
GKE Cluster Requirements
The CloudBees Core installer requires:
- On your local computer or a bastion host:
  - Kubernetes client 1.10 (or newer) installed and configured (kubectl)
  - gcloud (see Installing Google Cloud SDK for instructions)
- A GKE cluster running Kubernetes 1.10 (or newer) installed and configured:
  - With nodes that have 1 full CPU and 1 GB available, so nodes need at least 2 CPUs and 4 GB of memory
  - With network access to container images (public Docker Hub or a private Docker registry)
- The NGINX Ingress Controller installed in the cluster (v0.9.0 minimum)
- A load balancer configured and pointing to the NGINX Ingress Controller
- A DNS record that points to the NGINX Ingress Controller's load balancer
- SSL certificates (needed when you deploy CloudBees Core)
- A namespace in the cluster (provided by your admin) with permissions to create Role and RoleBinding objects
- A Kubernetes cluster default Storage Class defined and ready to use (refer to the Reference Architecture for GKE - Storage Requirements for more information)
Important: Kubernetes beta releases are not supported. Use production releases.
Creating your GKE Cluster
To create a Google Kubernetes Engine (GKE) cluster, refer to the official Google documentation, Create a GKE cluster.
More information on administering a Google Kubernetes cluster is available from the Kubernetes Engine How-to Guides.
More information on Kubernetes concepts is available from the Kubernetes site.
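For reference, a cluster that satisfies the requirements above can be created with gcloud. The following is a minimal sketch; the cluster name, zone, machine type, node count, and cluster version shown here are placeholder values to adapt to your environment.
$ gcloud container clusters create cloudbees-core-cluster \
    --zone us-east1-b \
    --machine-type n1-standard-2 \
    --num-nodes 3 \
    --cluster-version 1.10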
Cluster Admin Permissions
If the kubeconfig was created automatically by gcloud, then the user lacks the required permissions.
Bind the user account to the 'cluster-admin' role using:
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)
Cluster-admin (full) permission is only needed during installation; services will run using the created roles, which have limited privileges.
Installing Ingress Controller
CloudBees Core does not currently support the GKE ingress controller; it requires the NGINX Ingress Controller instead. This section walks through the installation using Helm.
If you are not able to use Helm, you may install the Ingress Controller manually (see Manual Installation of Ingress Controller under Additional Topics), then skip to the Creating DNS Record section.
Installing Tiller
Tiller is the Kubernetes cluster’s server-side component of Helm. Perform the following to install Tiller:
- Create a service account for Tiller:
  kubectl create serviceaccount --namespace kube-system tiller
- Give that service account admin capabilities:
  kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- Install Tiller on the Kubernetes cluster:
  helm init --service-account tiller
Creating Ingress Controller
- Create the Ingress Controller:
  helm install --namespace ingress-nginx --name nginx-ingress stable/nginx-ingress \
    --set controller.service.externalTrafficPolicy=Local \
    --set controller.scope.enabled=true \
    --set controller.scope.namespace=cje
Creating DNS Record
Creating the Ingress Controller creates the corresponding service, along with its load balancer; both take a few moments to provision. You can then execute the following command to retrieve the external IP address to be used for the CloudBees Core cluster domain name. In the following example, the IP address to note is 35.203.153.152.
$ kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-controller LoadBalancer 10.3.244.187 35.203.153.152 80:30396/TCP,443:31290/TCP 3m
Create a DNS record for the domain you want to use for CloudBees Core, pointing to the external IP address. In our example, it is 35.203.153.152.
For example, if the CloudBees Core domain name is cloudbees-core.example.com, then its A record entry should be 35.203.153.152.
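If your zone is managed by Google Cloud DNS, the A record can be created with gcloud. This is a sketch that assumes a managed zone named example-zone; substitute your own zone, domain, and IP address.
$ gcloud dns record-sets transaction start --zone=example-zone
$ gcloud dns record-sets transaction add 35.203.153.152 \
    --name=cloudbees-core.example.com. --ttl=300 --type=A --zone=example-zone
$ gcloud dns record-sets transaction execute --zone=example-zone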
To continue with the instructions in this document, create the environment variable DOMAIN_NAME at this time:
export DOMAIN_NAME=cloudbees-core.example.com
CloudBees Core Namespace
When a Kubernetes cluster is provisioned, it instantiates a default namespace to hold the default set of Pods, Services, and Deployments used by the cluster.
Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following:
$ kubectl get namespaces
NAME STATUS AGE
default Active 13m
ingress-nginx Active 8m
kube-public Active 13m
kube-system Active 13m
It is recommended to use a CloudBees Core-specific namespace in the cluster, with permissions to create Role and RoleBinding objects. For example, to create a 'cje' namespace, perform the following:
- Create the namespace cje:
  kubectl create namespace cje
- Attach a label to that namespace:
  kubectl label namespace cje name=cje
- Make namespace cje the default namespace for the kubectl context:
  kubectl config set-context $(kubectl config current-context) --namespace=cje
Installing CloudBees Core
SSD persistent storage considerations
Once a JENKINS_HOME volume is created, its storage type cannot be changed. To use the 'ssd' storage class for Operations Center, you will need to uncomment and set the storageClassName definition under 'volumeClaimTemplates' to 'ssd' in the cloudbees-core.yml file prior to installation.
volumeClaimTemplates:
- metadata:
    name: jenkins-home
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 20Gi
    storageClassName: ssd
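The 'ssd' storage class referenced above is not created by default on GKE. A minimal sketch of defining one with the GCE persistent disk provisioner follows; the name 'ssd' here simply matches the claim template above.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOF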
To configure Managed Masters to use SSD disks by default, update the storage class in the cloudbees-core.yml file. Search for the commented-out section
# To allocate masters using a non-default storage class, add the following
# -Dcom.cloudbees.masterprovisioning.kubernetes.KubernetesMasterProvisioning.storageClassName=some-storage-class
Change it so that the storage class is now ssd:
-Dcom.cloudbees.masterprovisioning.kubernetes.KubernetesMasterProvisioning.storageClassName=ssd
Regional persistent disks
Google Kubernetes Engine running Kubernetes v1.10 provides beta access to regional persistent disks.
Regional persistent disks replicate data between two zones in the same region, and can be used similarly to regular persistent disks. In the event of a zonal outage, Kubernetes can fail over workloads using the volume in the other zone. You can use regional persistent disks to build highly available solutions for Operations Center and Managed Masters' stateful workloads. Users must ensure that both the primary and failover zones are configured with enough resource capacity to run the workload.
See Google Persistent Volumes documentation for more information on how to configure a StorageClass to use regional persistent disks.
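As a sketch of what such a StorageClass might look like on Kubernetes 1.10, the GCE persistent disk provisioner accepts a replication-type parameter and a list of zones; the class name and zones below are placeholder values, so verify the exact parameters against the Google documentation linked above.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
  zones: us-central1-a, us-central1-b
EOF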
CloudBees Core installation
CloudBees Core runs on a Kubernetes cluster.
Kubernetes cluster installations are configured with YAML files.
The CloudBees Core installer provides a cloudbees-core.yml file that is modified for each installation.
- Unpack the installer:
  $ export INSTALLER=cloudbees-core_2.121.3.1_kubernetes.tgz
  $ sha256sum -c $INSTALLER.sha256
  $ tar xzvf $INSTALLER
- Prepare shell variables for your installation. Replace cloudbees-core.example.com with your domain name.
  $ DOMAIN_NAME=<YOUR_DOMAIN_NAME_FOR_CLOUDBEES_CORE>
  If you do not have an available domain, you can use xip.io combined with the IP of the Ingress controller.
  $ CLOUDBEES_CORE_IP=$(kubectl -n ingress-nginx get svc ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
  $ DOMAIN_NAME="jenkins.$CLOUDBEES_CORE_IP.xip.io"
- Edit the cloudbees-core.yml file for your installation:
  $ cd cloudbees-core_2.121.3.1_kubernetes
  $ sed -e s,cloudbees-core.example.com,$DOMAIN_NAME,g < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml
- Disable SSL redirection if you do not have SSL certificates:
  $ sed -e s,https://$DOMAIN_NAME,http://$DOMAIN_NAME,g < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml
  $ sed -e s,ssl-redirect:\ \"true\",ssl-redirect:\ \"false\",g < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml
- Run the installer:
  $ kubectl apply -f cloudbees-core.yml
  serviceaccount "cjoc" created
  role "master-management" created
  rolebinding "cjoc" created
  configmap "cjoc-config" created
  configmap "cjoc-configure-jenkins-groovy" created
  statefulset "cjoc" created
  service "cjoc" created
  ingress "cjoc" created
  ingress "default" created
  serviceaccount "jenkins" created
  role "pods-all" created
  rolebinding "jenkins" created
  configmap "jenkins-agent" created
- Wait until CJOC is rolled out:
  $ kubectl rollout status sts cjoc
- Read the admin password:
  $ kubectl exec cjoc-0 -- cat /var/jenkins_home/secrets/initialAdminPassword
  h95pSNDaaMJzz7r2GxxCjrGQ3t
Open Operations Center
CloudBees Core is now installed, configured, and ready to run. Open the CloudBees Core URL and log in with the initial admin password. Install the CloudBees Core license and the recommended plugins.
See Administering CloudBees Core for further information.
Auto-scaling nodes
Google Kubernetes Engine supports node auto-scaling by enabling the option in the GKE console.
Go to the GKE console:
- Select your cluster
- Click on 'EDIT'
- Under "Node Pools", set "Autoscaling" to "On"
- Adjust autoscaling limits by setting "Minimum size" and "Maximum size"
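The same settings can also be applied from the command line. The following is a sketch using gcloud; the cluster name, zone, node pool name, and size limits are placeholder values.
$ gcloud container clusters update <CLUSTER_NAME> \
    --zone <ZONE> --node-pool default-pool \
    --enable-autoscaling --min-nodes 1 --max-nodes 5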
Auto-scaling considerations
While scaling up is straightforward, scaling down is potentially more problematic. Scaling down involves moving workloads to different nodes when the node to reclaim still has some utilization but is below the reclamation threshold. Moving agent workloads could interrupt builds (failed builds), and moving Operations Center or Managed Master workloads would mean downtime.
Distinct node pools
One way to deal with scaling down is to treat each workload differently by using separate node pools, applying different logic to control scale-down in each.
Managed Master and Operations Center workload
By assigning Managed Master and Operations Center workloads to a dedicated pool, scale-down of nodes can be prevented by restricting eviction of Managed Master or Operations Center deployments. Scale-up happens normally when resources need to be increased to deploy additional Managed Masters, but scale-down only happens when the nodes are free of Operations Center or Managed Master workloads. This might be acceptable, since masters are meant to be stable and permanent: they are long lived rather than ephemeral.
This is achieved by adding the following annotation to Operations Center and Managed Masters:
"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"
For Operations Center, the annotation is added to cloudbees-core.yml in the CJOC "StatefulSet" definition under spec - template - metadata - annotations:
apiVersion: "apps/v1beta1"
kind: "StatefulSet"
metadata:
  name: cjoc
  labels:
    com.cloudbees.cje.type: cjoc
    com.cloudbees.cje.tenant: cjoc
spec:
  serviceName: cjoc
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
For a Managed Master, the annotation is added on the configuration page under the 'Advanced Configuration - YAML' parameter. The YAML snippet to add looks like:
kind: StatefulSet
spec:
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
Agent workload
By assigning Jenkins agent workloads to a dedicated pool, scaling can be handled by the default logic. Since agents are Pods that are not backed by a Kubernetes controller, they block scale-down of a node until no agent pods are running on it. This prevents nodes from being reclaimed, and agents from being interrupted, while agents are running, even when the autoscaler is below its reclamation threshold.
In order to create a dedicated pool for agent workloads, we need to prevent other types of workloads from being deployed on the dedicated pool nodes. This is accomplished by tainting the dedicated pool nodes. To allow scheduling of agent workloads on the dedicated pool nodes, the agent pod then uses a corresponding taint toleration and a node selector.
When nodes are created dynamically by the Kubernetes autoscaler, they need to be created with the proper taint and label.
In the Google console, the taint and label can be specified when creating the NodePool.

The first parameter will automatically add the label workload=build to the newly created nodes. This label will then be used as the NodeSelector for the agent. The second parameter will automatically add the nodeType=build:NoSchedule taint to the node.
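Equivalently, a dedicated build node pool with this label and taint can be created from the command line. The following gcloud sketch uses placeholder values for the pool name, cluster, zone, and autoscaling limits.
$ gcloud container node-pools create build-pool \
    --cluster <CLUSTER_NAME> --zone <ZONE> \
    --node-labels workload=build \
    --node-taints nodeType=build:NoSchedule \
    --enable-autoscaling --min-nodes 0 --max-nodes 5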
The agent template will then need to add the corresponding 'toleration' to allow the scheduling of agent workload on those nodes.

For Pipelines, the 'toleration' can be added to the podTemplate using the yaml parameter as follows:
def label = "mypodtemplate-${UUID.randomUUID().toString()}"
def nodeSelector = "workload=build"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
spec:
  tolerations:
  - key: nodeType
    operator: Equal
    value: build
    effect: NoSchedule
""",
nodeSelector: nodeSelector,
containers: [
  containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat')
]) {
  node(label) {
    stage('Run maven') {
      container('maven') {
        sh 'mvn --version'
      }
    }
  }
}
Upgrading CloudBees Core
To upgrade to a newer version of CloudBees Core, follow the same process as the initial installation:
- Download the installer
- Unpack the installer
- Edit the cloudbees-core.yml file for your installation to match the previous changes made during initial installation
- Run the installer:
  $ kubectl apply -f cloudbees-core.yml
- Wait until CJOC is rolled out:
  $ kubectl rollout status sts cjoc
Once the new version of Operations Center is rolled out, you can log in to Operations Center again and upgrade the managed masters. See Upgrading Managed Masters for further information.
Removing CloudBees Core
If you need to remove CloudBees Core from Kubernetes, use the following steps:
- Delete all masters from Operations Center
- Stop Operations Center:
  kubectl scale statefulsets/cjoc --replicas=0
- Delete CloudBees Core:
  kubectl delete -f cloudbees-core.yml
- Delete remaining pods and data:
  kubectl delete pod,statefulset,pvc,ingress,service -l com.cloudbees.cje.tenant
- Delete services, pods, persistent volume claims, etc.:
  kubectl delete svc --all
  kubectl delete statefulset --all
  kubectl delete pod --all
  kubectl delete ingress --all
  kubectl delete pvc --all
Additional Topics
Manual Installation of Ingress Controller
If you are not able to use Helm, you will need to manually install the NGINX Ingress Controller. See the NGINX controller install guide for installation instructions.
These instructions will deploy the NGINX Ingress Controller to a namespace named ingress-nginx.
- Deploy the NGINX Ingress Controller:
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
- Deploy the service creating the Load Balancer:
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
Go to the Creating DNS Record section to continue with the CloudBees Core installation.
Tip: More information is available on the NGINX Controller installation.
HTTPS Setup
To setup the NGINX ingress controller to support SSL termination, see the GKE Reference Architecture TLS Termination at Ingress chapter.
HTTPS Load Balancer
As an alternative, SSL termination can be setup at the Google Load Balancer level. In order to do that, a new/additional load balancer needs to be created since the load balancer created during the installation of the NGINX ingress controller is a TCP load balancer and does not support HTTPS termination.
Information about setting up HTTP(S) load balancing can be found at HTTP(S) Load Balancing.
1) Get the NGINX controller service port
The new load balancer will be connected to the NGINX controller. We need first to get information about the current controller.
Get the service port number for TCP port 80. (nginx_service_port_80 = 30622 in the example below)
$ kubectl get svc -n ingress-nginx ingress-nginx
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
ingress-nginx   LoadBalancer   10.23.243.53   35.196.134.177   80:30622/TCP,443:30216/TCP,50000:31462/TCP   27d
2) Create a new load balancer
To create a new load balancer, go to the GCE Network services console:
- Click on 'Create a load balancer'
- Select 'HTTPS Load Balancer'
- Give it a name
- Select 'Backend configuration' → 'Backend services' → 'Create a backend service'
  - Give it a name
  - Select the instance group of your cluster
  - Set the port number to the ingress controller service port (nginx_service_port_80 we got previously)
  - Under 'Healthchecks', select the healthcheck for the ingress controller service port (nginx_service_port_80)
  - Click on 'Create'
- Select 'Frontend configuration'
  - Give it a name
  - Select Protocol HTTPS
  - Under 'IP Address', select 'Create IP Address' to create a new static IP for the load balancer
  - Under 'Certificates', select your domain certificate if already uploaded, or create a new certificate
  - If creating a new certificate, upload the various parts of the certificate (information on how to create SSL certificates)
  - Click 'Done'
- Click 'Create'
3) Remove load balancing for ports 80 and 443 from the NGINX load balancer (optional)
Now that HTTPS access has been configured, you can remove access to the CJE cluster for ports 80 and 443 via the NGINX load balancer.
Go to the GCE Network services console
- Select the NGINX load balancer
- Click 'Edit'
- Select 'Frontend configuration'
  - Delete the frontend configurations for ports 80 and 443
- Click 'Update'
Adding Client Masters
Occasionally administrators need to connect existing masters to a CloudBees Core cluster. Existing masters connected to a CloudBees Core cluster are called "Client Masters" to distinguish them from Managed Masters. A master running on Windows is one example that requires a Client Master.
Configure Load Balancer
The load balancer routes traffic from the public internet into the Kubernetes cluster. The standard installation opens the http port (80) and the https port (443). Port 50000 must be opened and must route traffic to the Kubernetes internal port.
Go to the GCE console under Network services - Load balancing:
- Select the load balancer
- Click on 'EDIT'
- Select 'Frontend configuration'
- Add a mapping for port TCP:50000 to the IP of the load balancer
Then, under VPC Network - Firewall rules, edit the cluster firewall rule that currently has ports 80 and 443 opened to 0.0.0.0/0 and add port 50000 to the rule.
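If you prefer the command line, the firewall rule can be updated with gcloud. This is a sketch that assumes your cluster's rule name; substitute the actual rule shown in VPC Network - Firewall rules.
$ gcloud compute firewall-rules update <FIREWALL_RULE_NAME> \
    --allow tcp:80,tcp:443,tcp:50000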
Configure NGINX Ingress Controller
In GKE, the load balancer does not allow forwarding a port to a different port. To overcome this, we can reconfigure NGINX to act as a TCP proxy for the JNLP port.
Modify the NGINX 'tcp-services' config map to enable NGINX to proxy port 50000 as a TCP stream.
nginx-config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  50000: "cje/cjoc:50000"
Apply the config map changes:
$ kubectl apply -f nginx-config-map.yaml
Then add port 50000 as an exposed port for the NGINX controller. Go to the GKE console under Workloads:
- Select the nginx-ingress-controller
- Click on 'EDIT'
- Under 'ports', add a containerPort 50000 named 'jnlp':
  ports:
  - containerPort: 50000
    name: jnlp
    protocol: TCP
- Save
Then add a service port for port 50000 to the 'ingress-nginx' service.
Go to the GKE console under Discovery & load balancing:
- Select the 'ingress-nginx' load balancer service
- Click on 'EDIT'
- Under 'ports', add a 'jnlp' port 50000 with targetPort 'jnlp':
  ports:
  - name: jnlp
    port: 50000
    protocol: TCP
    targetPort: jnlp
- Save
Test Connection
You can confirm that Operations Center is ready to receive external JNLP requests with the following command:
$ curl $DOMAIN_NAME:50000
Jenkins-Agent-Protocols: Diagnostic-Ping, JNLP4-connect, MultiMaster, OperationsCenter2, Ping
Jenkins-Version: 2.107.1.2
Jenkins-Session: b02dc475
Client: 10.20.4.12
Server: 10.20.5.10
Remoting-Minimum-Version: 2.60
Continue installation
Once ports and security are correctly configured in your cloud and on your Client Master, continue the instructions in Adding Client Masters.
Adding JNLP Agents
To provide connectivity for JNLP agents, the master must be configured to "Allow external agents". If the master is not configured as such, edit the master configuration, enable "Allow external agents" and then restart the master.
The "Allow external agents" option creates a Kubernetes Service of type NodePort for the JNLP port. The exposed NodePort can be retrieved by looking at the master service.
For example, if the master name is 'master-1', the NodePort service will be called 'master-1-jnlp'. In the example below, the exposed JNLP port (JNLP_NODE_PORT) for 'master-1' is 32075 and the master JNLP port (JNLP_MASTER_PORT) is 50004.
$ kubectl get svc master-1-jnlp
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
master-1-jnlp NodePort 10.23.248.32 <none> 50004:32075/TCP 2h
Configure Load Balancer
The load balancer routes traffic from the public internet into the Kubernetes cluster. In order for the jnlp agent to connect to the master, the jnlp node port must be opened and traffic routed to it on the load balancer.
Go to the GCE console under Network services - Load balancing:
- Select the load balancer
- Click on 'EDIT'
- Select 'Frontend configuration'
- Add a mapping for port TCP:<JNLP_NODE_PORT> to the IP of the load balancer
Then, under VPC Network - Firewall rules, edit the cluster firewall rule that currently has ports 80 and 443 opened to 0.0.0.0/0 and add port <JNLP_NODE_PORT> to the rule.
Test Connection
You can confirm that the master is ready to receive external JNLP requests with the following command:
$ curl $DOMAIN_NAME:$JNLP_MASTER_PORT
Jenkins-Agent-Protocols: Diagnostic-Ping, JNLP4-connect, OperationsCenter2, Ping
Jenkins-Version: 2.138.3.1
Jenkins-Session: f4e6410a
Client: 0:0:0:0:0:0:0:1
Server: 0:0:0:0:0:0:0:1
Remoting-Minimum-Version: 3.4
Continue installation
Once the jnlp port is correctly configured in your cloud, you can then create a new 'node' in your master under 'Manage Jenkins → Manage Nodes'.
Note that the node should be configured with:
- Launch method: 'Launch agent via Web Start'
- Tunnel connection through (under "Advanced"): LOAD_BALANCER_IP:JNLP_NODE_PORT
Then follow the instructions, shown after you save the node configuration, to launch the agent.
Using Kaniko with CloudBees Core
Introducing Kaniko
Kaniko is a utility that creates container images from a Dockerfile. The image is created inside a container or Kubernetes cluster, which allows users to develop Docker images without using Docker or requiring a privileged container.
Since Kaniko doesn’t depend on the Docker daemon and executes each command in the Dockerfile entirely in the userspace, it enables building container images in environments that can’t run the Docker daemon, such as a standard Kubernetes cluster.
The remainder of this chapter provides a brief overview of Kaniko and illustrates using it in CloudBees Core with a Declarative Pipeline.
How does Kaniko work?
Kaniko looks for the Dockerfile file in the Kaniko context. The Kaniko context can be a GCS storage bucket, an S3 storage bucket, or a local directory. In the case of either a GCS or S3 storage bucket, the Kaniko context must be a compressed tar file. Next, if the context contains a compressed tar file, then Kaniko expands it. Otherwise, it starts to read the Dockerfile.
Kaniko then extracts the filesystem of the base image using the FROM statement in the Dockerfile.
It then executes each command in the Dockerfile.
After each command completes, Kaniko captures filesystem differences.
Next, it applies these differences, if there are any, to the base image and updates image metadata.
Lastly, Kaniko publishes the newly created image to the desired Docker registry.
Security
Kaniko runs as an unprivileged container.
Kaniko still needs to run as root to be able to unpack the Docker base image into its container or execute RUN Dockerfile commands that require root privileges.
Primarily, Kaniko offers a way to build Docker images without requiring a container running with the privileged flag, or by mounting the Docker socket directly.
Note: Additional security information can be found under the Security section of the Kaniko documentation. Also, this blog article on unprivileged container builds provides a deep dive on why Docker build needs root access.
Kaniko parameters
Kaniko has two key parameters: the Kaniko context and the image destination. The Kaniko context is the same as the Docker build context. It is the path where Kaniko expects to find the Dockerfile and any supporting files used in the creation of the image. The destination parameter is the Docker registry where Kaniko will publish the images. Currently, Kaniko supports hub.docker.com, GCR, and ECR as the Docker registry.
In addition to these parameters, Kaniko also needs a secret containing the authorization details required to push the newly created image to the Docker registry.
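For illustration, a standalone executor invocation passing both parameters might look like the following sketch; the GCS bucket and destination image are placeholder values, and the registry credentials are expected in /kaniko/.docker/config.json as described later in this chapter.
/kaniko/executor \
  --context gs://my-kaniko-bucket/context.tar.gz \
  --destination <docker-username>/hello-kaniko:latest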
Kaniko debug image
The Kaniko executor image is built from scratch and doesn’t contain a shell. The Kaniko project also provides a debug image, gcr.io/kaniko-project/executor:debug, which consists of the Kaniko executor image with a busybox shell.
Note: For more details on using the debug image, see the Debug Image section of the Kaniko documentation.
Pipeline example
This example illustrates using Kaniko to build a Docker image from a Git repository and pushing the resulting image to a private Docker registry.
Requirements
To run this example, you need the following:
- A Kubernetes cluster with an installation of CloudBees Core
- A Docker account or another private Docker registry account
- Your Docker registry credentials
- Ability to run kubectl against your cluster
- CloudBees Core account with permission to create the new pipeline
Steps
These are the high-level steps for this example:
- Create a new Kubernetes Secret.
- Create the Pipeline.
- Run the Pipeline.
Create a new Kubernetes secret
The first step is to provide credentials that Kaniko uses to publish the new image to the Docker registry.
This example uses kubectl and a docker.com account.
Tip: If you are using a private Docker registry, you can use it instead of docker.com. Just create the Kubernetes secret with the proper credentials for the private registry.
Kubernetes has a create secret command to store the credentials for private Docker registries. Use the create secret docker-registry kubectl command to create this secret:
The create secret command:
$ kubectl create secret docker-registry docker-credentials \ (1)
    --docker-username=<username> \
    --docker-password=<password> \
    --docker-email=<email-address>
(1) The name of the new Kubernetes secret.
Create the Pipeline
Create a new pipeline job in CloudBees Core. In the pipeline field, paste the following Declarative Pipeline:
def label = "kaniko-${UUID.randomUUID().toString()}"
podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
    - name: jenkins-docker-cfg
      mountPath: /kaniko/.docker
  volumes:
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: docker-credentials (1)
          items:
          - key: .dockerconfigjson
            path: config.json
""") {
  node(label) {
    stage('Build with Kaniko') {
      git 'https://github.com/cb-jeffduska/simple-docker-example.git'
      container(name: 'kaniko', shell: '/busybox/sh') {
        withEnv(['PATH+EXTRA=/busybox']) {
          sh '''#!/busybox/sh
          /kaniko/executor --context `pwd` --destination <docker-username>/hello-kaniko:latest (2)
          '''
        }
      }
    }
  }
}
(1) This is where the docker-credentials secret, created in the previous step, is mounted into the Kaniko Pod under /kaniko/.docker/config.json.
(2) Replace <docker-username> in the destination with your Docker username, so the image is published as, for example, <docker-username>/hello-kaniko.
Save the new Pipeline job.
Run the new Pipeline
The sample Pipeline is complete. Run the Pipeline to build the Docker image. When the Pipeline is successful, a new Docker image should exist in your Docker registry. The new Docker image can be accessed via standard Docker commands such as docker pull and docker run.
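For example, assuming the destination used above was <docker-username>/hello-kaniko:latest, the image can be pulled and run like this:
$ docker pull <docker-username>/hello-kaniko:latest
$ docker run --rm <docker-username>/hello-kaniko:latest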
Limitations
Kaniko does not use Docker to build the image, thus there is no guarantee that it will produce the same image as Docker would. In some cases, the number of layers could also be different.
Important: Kaniko supports most Dockerfile commands, even multistage builds, but does not support all commands. See the list of Kaniko Issues to determine if there is an issue with a specific Dockerfile command. Some rare edge cases are discussed in the Limitations section of the Kaniko documentation.
Alternatives
There are many tools similar to Kaniko. These tools build container images using a variety of different approaches.
Tip: There is a summary of these tools and others in the comparison with other tools section of the Kaniko documentation.
Using self-signed certificates in CloudBees Core
This optional component of CloudBees Core allows you to use self-signed certificates or a custom root CA (Certificate Authority). It works by injecting a given set of files (certificate bundles) into all containers of all scheduled pods.
Prerequisites
Kubernetes 1.10 or later, with the MutatingAdmissionWebhook admission controller enabled.
In order to check whether it is enabled for your cluster, you can run the following command:
kubectl api-versions | grep admissionregistration.k8s.io/v1beta1
The result should be:
admissionregistration.k8s.io/v1beta1
In addition, the MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers should be added and listed in the correct order in the admission-control flag of kube-apiserver.
Installation
This procedure requires a context with cluster-admin privilege in order to create the MutatingWebhookConfiguration.
In the CloudBees Core binary bundle, you will find a directory named sidecar-injector. The following instructions assume this is the working directory.
Create a certificate bundle
In the following instructions, we assume you are working in the namespace where CloudBees Core is installed, and the certificate you want to install is named mycertificate.pem.
For a self-signed certificate, add the certificate itself. If the certificate has been issued from a custom root CA, add the root CA itself.
# Copy reference files locally
kubectl cp cjoc-0:etc/ssl/certs/ca-certificates.crt .
kubectl cp cjoc-0:etc/ssl/certs/java/cacerts .
# Add root CA to system certificate bundle
cat mycertificate.pem >> ca-certificates.crt
# Add root CA to java cacerts
keytool -import -noprompt -keystore cacerts -file mycertificate.pem -storepass changeit -alias service-mycertificate;
# Create a configmap with the two files above
kubectl create configmap --from-file=ca-certificates.crt,cacerts ca-bundles
Setup injector
- Browse to the directory where the CloudBees Core archive has been unpacked, then go to the sidecar-injector folder.
- Create a namespace to deploy the sidecar injector:
  kubectl create namespace sidecar-injector
  Note: The following instructions assume the deployment is performed in the sidecar-injector namespace. If the target namespace has a different name, a global replacement needs to be done in the sidecar-injector.yaml file before proceeding.
- Create a signed cert/key pair and store it in a Kubernetes secret that will be consumed by the sidecar deployment:
  ./webhook-create-signed-cert.sh \
    --service sidecar-injector-webhook-svc \
    --secret sidecar-injector-webhook-certs \
    --namespace sidecar-injector
- Patch the MutatingWebhookConfiguration by setting caBundle with the correct value from the Kubernetes cluster:
  cat sidecar-injector.yaml | \
    webhook-patch-ca-bundle.sh > \
    sidecar-injector-ca-bundle.yaml
  Note: In some Kubernetes deployments, it is possible that the resulting caBundle is an empty string. This means the deployment doesn’t support certificate-based authentication, but it won’t prevent using this feature.
- Switch to the sidecar-injector namespace:
  kubectl config set-context $(kubectl config current-context) --namespace=sidecar-injector
- Deploy resources:
  kubectl create -f sidecar-injector-ca-bundle.yaml
- Verify everything is running. The sidecar-injector-webhook pod should be running:
  # kubectl get pods
  NAME                                                  READY     STATUS    RESTARTS   AGE
  sidecar-injector-webhook-deployment-bbb689d69-882dd   1/1       Running   0          5m
  # kubectl get deployment
  NAME                                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
  sidecar-injector-webhook-deployment   1         1         1            1           5m
Configure namespace
- Label the namespace where CloudBees Core is installed with sidecar-injector=enabled:
  kubectl label namespace mynamespace sidecar-injector=enabled
- Check:
  # kubectl get namespace -L sidecar-injector
  NAME          STATUS    AGE       SIDECAR-INJECTOR
  default       Active    18h
  mynamespace   Active    18h       enabled
  kube-public   Active    18h
  kube-system   Active    18h
Verify
- Deploy an app in the Kubernetes cluster, taking the sleep app as an example:
  # cat <<EOF | kubectl create -f -
  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: sleep
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: sleep
      spec:
        containers:
        - name: sleep
          image: tutum/curl
          command: ["/bin/sleep","infinity"]
  EOF
- Verify injection has happened:
  # kubectl get pods -o 'go-template={{range .items}}{{.metadata.name}}{{"\n"}}{{range $key,$value := .metadata.annotations}}* {{$key}}: {{$value}}{{"\n"}}{{end}}{{"\n"}}{{end}}'
  sleep-d5bf9d8c9-bfglq
  * com.cloudbees.sidecar-injector/status: injected
Conclusion
You are now all set to use your custom CA across your Kubernetes cluster.
To pick up the new certificate bundle, restart Operations Center and any running Managed Masters. Newly scheduled build agents will also pick up the certificate bundle and allow connections to remote endpoints using your certificates.
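One way to restart Operations Center so that it picks up the new bundle is to scale its StatefulSet down and back up, reusing the commands shown earlier in this document; Managed Masters can instead be restarted from the Operations Center UI.
kubectl scale statefulsets/cjoc --replicas=0
kubectl scale statefulsets/cjoc --replicas=1
kubectl rollout status sts cjoc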
Online version published by CloudBees, Inc.
Oracle and Java are registered trademarks of Oracle and/or its affiliates.
Jenkins is a registered trademark of the non-profit Software in the Public Interest organization. Used with permission. See here for more info about the Jenkins project.
The registered trademark Jenkins® is used pursuant to a sublicense from the Jenkins project and Software in the Public Interest, Inc. Read more at www.cloudbees.com/jenkins/about.
Apache, Apache Ant, Apache Maven, Ant and Maven are trademarks of The Apache Software Foundation. Used with permission. No endorsement by The Apache Software Foundation is implied by the use of these marks.
Other names may be trademarks of their respective owners. Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and CloudBees was aware of a trademark claim, the designations have been printed in caps or initial caps.
While every precaution has been taken in the preparation of this book, the publisher and authors assume no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein.