This project contains setup for Jenkins that creates jobs and pipelines via Jenkins Job DSL or Jenkinsfile for your projects. The pipelines and jobs use the scripts defined in Cloud Pipelines Scripts repo.

Introduction

This section describes how Jenkins works with Cloud Pipelines.

You do not need to use all the pieces of Cloud Pipelines. You can (and should) gradually migrate your applications to use those pieces of Cloud Pipelines that you think best suit your needs.

Five-second Introduction

Cloud Pipelines Jenkins provides setup for Jenkins that creates jobs and pipelines via Jenkins Job DSL or Jenkinsfile for your projects. The pipelines and jobs use the scripts defined in Cloud Pipelines Scripts repo.

Five-minute Introduction

In these sections, you will learn exactly how Cloud Pipelines Jenkins integrates with Cloud Pipelines Scripts and how you can set up deployment pipelines for each project.

How to Use It

The suggested approach is to use the Project Crawler, which scans your organization for projects and creates a deployment pipeline for each of them.

Another approach is to pass an environment variable with a list of repositories for which you would like the pipeline to be built.

How It Works

As the following image shows, Cloud Pipelines contains logic to generate a pipeline and the runtime to execute pipeline steps.

how
Figure 1. How Cloud Pipelines works

Once a pipeline is created (for example, by using Jenkins Job DSL), when the jobs are run, they clone or download the Cloud Pipelines code to run each step. Those steps run functions that are defined in the commons module of Cloud Pipelines.

Cloud Pipelines performs steps to guess what kind of project your repository contains (e.g. JVM or PHP) and what framework it uses (Maven or Gradle), and it can deploy your application to a cloud (e.g. Cloud Foundry or Kubernetes).
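
The sourcing convention behind this detection can be sketched as follows (for illustration only; the script names follow the PROJECT_TYPE and PAAS_TYPE conventions described in the environment variable table later in this documentation):

# Illustration: the pipeline steps source the scripts matching the detected types
source "projectType/pipeline-${PROJECT_TYPE}.sh"   # e.g. a Maven or Gradle project
source "pipeline-${PAAS_TYPE}.sh"                  # e.g. CF or K8S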

Project Crawler

In Jenkins, you can generate the deployment pipelines by passing an environment variable with a comma-separated list of repositories. This, however, does not scale. We would like to automatically fetch a list of all repositories from a given organization and team.

To do so, we use the Project Crawler library, which can:

  • Fetch all projects for a given organization.

  • Fetch contents of a file for a given repository.

The following diagram depicts this situation:

crawler

Thanks to the Project Crawler, you can run the seed job, and all the new repositories are automatically picked up and pipelines are created for them. Project Crawler supports repositories stored at GitHub, GitLab, and Bitbucket. You can also register your own implementation. See the Project Crawler repository for more information.

How Scripts Work with Spinnaker

With Spinnaker, the deployment pipeline is inside of Spinnaker. No longer do we treat Jenkins as a tool that does deployments. In Jenkins, we create only the CI jobs (that is, build and test) and prepare the JSON definitions of Spinnaker pipelines.

The following diagram shows how Jenkins, the seed job for Spinnaker, and Spinnaker cooperate:

spinnaker

Customizing the Project

Cloud Pipelines offers a way to override how the pipelines are built.

Overriding Project Setup

If you want to customize the Cloud Pipelines build, you can update the contents of the gradle/custom.gradle build script. That way, your customizations do not interfere with changes to the main part of the code, so there should be no merge conflicts when pulling changes from the Cloud Pipelines repositories.

Overriding Pipelines

Currently, the best way to extend Jenkins Jenkinsfile pipelines is to make a copy of the Jenkins seed and pipeline jobs.

Overriding Jenkins Job DSL pipelines

We provide an interface (called io.cloudpipelines.common.JobCustomizer) that lets you provide customization for:

  • all jobs

  • build jobs

  • test jobs

  • stage jobs

  • prod jobs

We use the JDK’s java.util.ServiceLoader mechanism to achieve extensibility.

You can write an implementation of that interface (for example, com.example.MyJobCustomizer) and create a META-INF/services/io.cloudpipelines.common.JobCustomizer file that contains the com.example.MyJobCustomizer line.
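
As a minimal sketch, the registration file could be created from the command line as follows (the class name com.example.MyJobCustomizer is hypothetical; the file location follows the standard java.util.ServiceLoader contract):

$ mkdir -p src/main/resources/META-INF/services
$ echo 'com.example.MyJobCustomizer' > src/main/resources/META-INF/services/io.cloudpipelines.common.JobCustomizer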

If you create a JAR with your class (for example, com.example:my-customizer:1.0.0), put it on the build classpath, as the following example shows:

dependencies {
    // ...
    // puts your customizer JAR on the build classpath
    libs "com.example:my-customizer:1.0.0"
    // ...
}

If you do not want to create a separate library, you can keep the implementation in the sources and place the service registration file under src/main/resources/META-INF/services.

Regardless of which option you choose, your implementation runs for each job. You can add notifications or any other customizations of your choosing.

Jenkins Pipeline (Common)

In this section, we present the common Jenkins setup for any platform, along with answers to the most frequently asked questions.

Project setup

In the declarative-pipeline folder, you can find a definition of a declarative pipeline. It is used together with the Blue Ocean UI.

Under the job-dsl folder, you can find all the Job DSL-related setup. Its jobs subfolder contains all the seed jobs that generate pipelines. You can read the comments inside each script to understand what it does.

Under the demo folder, you can find the setup prepared for demo purposes. In its seed subfolder, you have the init.groovy file, which is executed when Jenkins starts. That way, we can configure most of the Jenkins options for you (adding credentials, a JDK, and so on). jenkins_pipeline.groovy contains the logic to build a seed job (that way, you do not even have to click that job - we generate it for you). Under the k8s folder, there are all the configuration files required for deployment to a Kubernetes cluster.

Optional customization steps

None of the steps below are necessary to run the demo. They are needed only when you want to make some custom changes.

Setup settings.xml for Maven deployment

If you want to use the default connection to the Docker version of Artifactory, you can skip this step.

So that ./mvnw deploy works with the Artifactory instance running in Docker, we already copy the missing settings.xml file for you. It looks more or less like this:

<?xml version="1.0" encoding="UTF-8"?>
<settings>
	<servers>
		<server>
			<id>${M2_SETTINGS_REPO_ID}</id>
			<username>${M2_SETTINGS_REPO_USERNAME}</username>
			<password>${M2_SETTINGS_REPO_PASSWORD}</password>
		</server>
		<server>
			<id>${DOCKER_SERVER_ID}</id>
			<username>${DOCKER_USERNAME}</username>
			<password>${DOCKER_PASSWORD}</password>
			<configuration>
				<email>${DOCKER_EMAIL}</email>
			</configuration>
		</server>
	</servers>
</settings>

As you can see, the file is parameterized. In Maven, it is enough to pass the proper system property to the ./mvnw command to override a value. For example, to pass a different Docker email, you would call ./mvnw -DDOCKER_EMAIL=foo@bar.com, and the value gets updated.
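
For example, a minimal sketch of such an invocation (the deploy goal and placeholder email are illustrative):

$ ./mvnw clean deploy -DDOCKER_EMAIL=foo@bar.com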

If you want to use your own version of Artifactory or Nexus, you have to update the file (it is located in seed/settings.xml).

Setup Jenkins env vars

If you only want to play around with the demo that we have prepared, you have to set ONE variable: the REPOS variable. That variable needs to consist of a comma-separated list of URLs to repositories containing business apps. So you should pass the URLs of your forked repos.
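
For example (the fork URLs shown are placeholders):

REPOS=https://github.com/yourGithubOrg/github-webhook,https://github.com/yourGithubOrg/github-analytics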

You can do it in the following ways:

  • globally, via Jenkins global env vars (when you run the seed, that variable is taken into consideration and the proper pipelines get built)

  • via the seed job parameters (you have to modify the seed job configuration and change the REPOS property)

  • via the repos parameter provided when running the seed job

For the sake of simplicity let’s go with the last option.

If you choose the global envs, you HAVE to remove the other approach (e.g. if you set the global env for REPOS, remove that property from the seed job).

If you’re using the Project Crawler based solution, you can also provide your own implementation to customize the created jobs.

Seed properties

Click the seed job and pick Build with parameters. Then, as presented in the screen below (you will have far more properties to set), modify the REPOS property by providing the comma-separated list of URLs to your forks. Whatever you set is parsed by the seed job and passed to the generated Jenkins jobs.

This is very useful when the repos you want to build differ. For example, they may use different JDKs. Then some seeds can set the JDK_VERSION param to one version of a Java installation and others to another one.

Example screen:

seed

In the screenshot, we can parameterize the REPOS and REPO_WITH_BINARIES params.

Global envs
This section is presented only for informational purposes. For the sake of the demo, you can skip it.

You can add env vars (go to Manage Jenkins → Configure System → Global Properties) for the following properties (example with defaults for PCF Dev):
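
For instance, a minimal set could look as follows (values taken from the PCF Dev defaults listed in the Cloud Foundry chapter):

PAAS_TEST_API_URL=api.local.pcfdev.io
PAAS_STAGE_API_URL=api.local.pcfdev.io
PAAS_PROD_API_URL=api.local.pcfdev.io
PAAS_TEST_ORG=pcfdev-org
PAAS_TEST_SPACE_PREFIX=cloudpipelines-test
GIT_EMAIL=email@example.com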

Example screen:

env vars

Set Git email / user

Since our pipeline sets the Git user and name explicitly for the build step, you have to go to the build step's Configure page and modify the Git name and email there. If you want to set them globally, you have to remove that section from the build step and follow these steps to set it globally.

You can set the Git email and user globally, as follows:

   

manage jenkins
Step 1: Click 'Manage Jenkins'

   

configure system
Step 2: Click 'Configure System'

   

git
Step 3: Fill out Git user information

   

Add Jenkins credentials for GitHub

The scripts will need to access the credential in order to tag the repo.

You have to set credentials with id: git.

Below you can find instructions on how to set a credential (the example shows the Cloud Foundry cf-test credential, but remember to provide the one with the git ID).

   

credentials system
Step 1: Click 'Credentials, System'

   

credentials global
Step 2: Click 'Global Credentials'

   

credentials add
Step 3: Click 'Add credentials'

   

credentials example
Step 4: Fill out the user / password and provide the git credential ID (in this example cf-test)

   

Testing Jenkins scripts

To run the tests against your Bash and Groovy scripts, call:

./gradlew clean build

How to work with Jenkins Job DSL plugin

Check out the tutorial. Provide the link to this repository in your Jenkins installation.

Docker Image

If you would like to run the pre-configured Jenkins image somewhere other than your local machine, we have an image you can pull from DockerHub. The latest tag corresponds to the latest snapshot build. You can also find tags corresponding to stable releases.
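
A sketch of pulling it (the image coordinates shown are an assumption; check the project's DockerHub page for the canonical name):

$ docker pull cloudpipelines/jenkins:latest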

The Jenkins Docker image is set up for demo purposes. For example, it has the -Dpermissive-script-security.enabled=no_security system property set, which disables script security. YOU SHOULD NOT USE IT IN PRODUCTION UNLESS YOU KNOW WHAT YOU ARE DOING.

All Environment Variables

Below you can find a table with all the environment variables:

Table 1. Environment variables
Environment variable name Description

GIT_CREDENTIAL_ID

ID of credentials used for GIT interaction

GIT_SSH_CREDENTIAL_ID

ID of credentials used for GIT interaction via ssh

GIT_USE_SSH

if set to true will set the SSH key for GIT interaction

GIT_USERNAME

Username used for GIT integration

GIT_PASSWORD

Password used for GIT integration

GIT_TOKEN

Token used for GIT integration

GIT_EMAIL

Email used for GIT integration

GIT_NAME

Name used for GIT integration

JDK_VERSION

Name of the used JDK installation

OUTPUT_FOLDER

Folder to which the app will output files

API_COMPATIBILITY_STEP_REQUIRED

Should API compatibility step be there? Defaults to true

DB_ROLLBACK_STEP_REQUIRED

Should DB rollback step be there? Defaults to true

DEPLOY_TO_TEST_STEP_REQUIRED

Should test steps be there? Defaults to true

DEPLOY_TO_STAGE_STEP_REQUIRED

Should stage steps be there? Defaults to true

AUTO_DEPLOY_TO_STAGE

Should deploy to stage automatically? Defaults to true

AUTO_DEPLOY_TO_PROD

Should deploy to prod automatically? Defaults to true

PROJECT_NAME

Project name that should override the default one (which is the one taken from the build)

PROJECT_TYPE

Type of the project. Depends on the used framework. Can be e.g. Maven, Gradle, etc. Depending on the value of this env variable, a proper projectType/pipeline-$PROJECT_TYPE.sh script is sourced

PAAS_TYPE

Type of the used PaaS. Can be e.g. CF, K8S, etc. Depending on the value of this env variable, a proper pipeline-$PAAS_TYPE.sh script is sourced

SCRIPTS_URL

URL of the repository containing scripts to be used within the build. Defaults to the tar.gz package of the master branch of the Cloud Pipelines repo. You can provide URL to either tar.gz or .git repository.

SCRIPTS_BRANCH

Branch of the repository containing scripts to be used within the build. Defaults to master, but when you are working on a feature in Cloud Pipelines, you may want to point it to your branch

JENKINS_SCRIPTS_URL

URL of the repository containing scripts to be used within the build. Defaults to the tar.gz package of the master branch of the Cloud Pipelines repo. You can provide URL to either tar.gz or .git repository.

JENKINS_SCRIPTS_BRANCH

Branch of the repository containing scripts to be used within the build. Defaults to master, but when you are working on a feature in Cloud Pipelines, you may want to point it to your branch

M2_SETTINGS_REPO_ID

id of the credentials that will be put to ~/.m2/settings.xml as credentials used to deploy an artifact

M2_SETTINGS_REPO_USERNAME

username put inside ~/.m2/settings.xml

M2_SETTINGS_REPO_PASSWORD

password put inside ~/.m2/settings.xml

REPO_WITH_BINARIES_CREDENTIAL_ID

id of the credentials that will be passed to your build if you need to upload artifacts to a binary storage

REPO_WITH_BINARIES

URL of the repo that contains binaries

REPO_WITH_BINARIES_FOR_UPLOAD

URL used to upload binaries. Often points to the same repository as REPO_WITH_BINARIES

PIPELINE_DESCRIPTOR

name of the pipeline descriptor. Defaults to cloud-pipelines.yml

LEGACY_PIPELINE_DESCRIPTOR

name of the legacy pipeline descriptor. Defaults to sc-pipelines.yml. Used as a fallback if the PIPELINE_DESCRIPTOR is not found

PIPELINE_VERSION

env var containing the version of the pipeline

WORKSPACE

env var containing the Jenkins workspace path

TEST_MODE_DESCRIPTOR

used for tests - descriptor to be returned for test purposes

APPLICATION_URL

used in integration tests. URL of the deployed application

STUBRUNNER_URL

used in integration tests. URL of the deployed Stub Runner application

LATEST_PROD_VERSION

used in rollback tests deployment and tests. Latest production version of the application.

LATEST_PROD_TAG

used in rollback tests deployment and tests. Latest production tag of the application.

PASSED_LATEST_PROD_TAG

used in rollback tests deployment and tests. Latest production tag of the application. Certain CI tools (e.g. Concourse) add the PASSED_ prefix before the env var.

REPO_ORGANIZATION

organization / team to be crawled by the Project Crawler

REPO_MANAGEMENT_TYPE

type of repo management used. Can be GITHUB, GITLAB, BITBUCKET or OTHER

REPO_URL_ROOT

URL of the API to reach to crawl the organization

REPO_PROJECTS_EXCLUDE_PATTERN

Pattern of projects to exclude

PAAS_TEST_API_URL

URL of the test environment

PAAS_STAGE_API_URL

URL of the stage environment

PAAS_PROD_API_URL

URL of the prod environment

PAAS_TEST_CREDENTIAL_ID

ID of credentials used to connect to test env

PAAS_TEST_USERNAME

Username used to connect to test env

PAAS_TEST_PASSWORD

Password used to connect to test env

PAAS_STAGE_USERNAME

Username used to connect to stage env

PAAS_STAGE_PASSWORD

Password used to connect to stage env

PAAS_PROD_CREDENTIAL_ID

ID of credentials used to connect to prod env

PAAS_STAGE_CREDENTIAL_ID

ID of credentials used to connect to stage env

PAAS_PROD_USERNAME

Username used to connect to prod env

PAAS_PROD_PASSWORD

Password used to connect to prod env

PAAS_TEST_ORG

Organization used for the test environment

PAAS_TEST_SPACE_PREFIX

Prefix prepended to the application name. Together forms a unique name of a test space.

PAAS_STAGE_ORG

Organization used for the stage environment

PAAS_STAGE_SPACE

Space used for the stage environment

PAAS_PROD_ORG

Organization used for the prod environment

PAAS_PROD_SPACE

Space used for the prod environment

PAAS_HOSTNAME_UUID

Hostname prepended to the route. When the name of the app is already taken, the route typically is too. That is why you can use this env var to prepend an additional value to the hostname

CF_REDOWNLOAD_CLI

defaults to true; forces a re-download of the CLI regardless of whether it has already been downloaded

CF_CLI_URL

URL from which CF should be downloaded

CF_SKIP_PREPARE_FOR_TESTS

if true, does not connect to CF to fetch info about the app host

DOCKER_REGISTRY_URL

URL of the docker registry

DOCKER_REGISTRY_ORGANIZATION

Organization where your Docker repo resides

DOCKER_REGISTRY_CREDENTIAL_ID

ID of credentials used to push Docker images

DOCKER_USERNAME

Username used to push Docker images

DOCKER_PASSWORD

Password used to push Docker images

DOCKER_SERVER_ID

Server ID of the Docker registry that can be set in ~/.m2/settings.xml so that credentials do not have to be passed explicitly

DOCKER_EMAIL

Email used for Docker repository interaction

PAAS_TEST_CA_PATH

Path to the test CA in the container

PAAS_STAGE_CA_PATH

Path to the stage CA in the container

PAAS_PROD_CA_PATH

Path to the prod CA in the container

PAAS_TEST_CLIENT_CERT_PATH

Path to the client certificate for test environment

PAAS_STAGE_CLIENT_CERT_PATH

Path to the client certificate for stage environment

PAAS_PROD_CLIENT_CERT_PATH

Path to the client certificate for prod environment

PAAS_TEST_CLIENT_KEY_PATH

Path to the client key for test environment

PAAS_STAGE_CLIENT_KEY_PATH

Path to the client key for stage environment

PAAS_PROD_CLIENT_KEY_PATH

Path to the client key for prod environment

TOKEN

Token used to login to PAAS

PAAS_TEST_CLIENT_TOKEN_PATH

Path to the file containing the token for test env

PAAS_STAGE_CLIENT_TOKEN_PATH

Path to the file containing the token for stage env

PAAS_PROD_CLIENT_TOKEN_PATH

Path to the file containing the token for prod env

PAAS_TEST_CLIENT_TOKEN_ID

ID of the token used to connect to test environment

PAAS_STAGE_CLIENT_TOKEN_ID

ID of the token used to connect to stage environment

PAAS_PROD_CLIENT_TOKEN_ID

ID of the token used to connect to prod environment

PAAS_TEST_CLUSTER_NAME

Name of the cluster for test env

PAAS_STAGE_CLUSTER_NAME

Name of the cluster for stage env

PAAS_PROD_CLUSTER_NAME

Name of the cluster for prod env

PAAS_TEST_CLUSTER_USERNAME

Name of the user to connect to test environment

PAAS_STAGE_CLUSTER_USERNAME

Name of the user to connect to stage environment

PAAS_PROD_CLUSTER_USERNAME

Name of the user to connect to prod environment

PAAS_TEST_SYSTEM_NAME

Name of the system for test env

PAAS_STAGE_SYSTEM_NAME

Name of the system for stage env

PAAS_PROD_SYSTEM_NAME

Name of the system for prod env

PAAS_TEST_NAMESPACE

Namespace used for the test env

PAAS_STAGE_NAMESPACE

Namespace used for the stage env

PAAS_PROD_NAMESPACE

Namespace used for the prod env

KUBERNETES_MINIKUBE

set to true if minikube is used

MYSQL_ROOT_CREDENTIAL_ID

ID of the MYSQL ROOT user credentials

MYSQL_ROOT_USER

Username of the MYSQL root user

MYSQL_CREDENTIAL_ID

ID of the MYSQL user credentials

MYSQL_USER

Username of the MYSQL user

SPINNAKER_TEST_DEPLOYMENT_ACCOUNT

Account used for deployment to test env

SPINNAKER_STAGE_DEPLOYMENT_ACCOUNT

Account used for deployment to stage env

SPINNAKER_PROD_DEPLOYMENT_ACCOUNT

Account used for deployment to prod env

SPINNAKER_JENKINS_ROOT_URL

root URL of the Jenkins instance used by Spinnaker

SPINNAKER_JENKINS_ACCOUNT

name of the Jenkins account used by Spinnaker

SPINNAKER_JENKINS_MASTER

name of the Jenkins master installation

SPINNAKER_TEST_HOSTNAME

the hostname appended to the routes for test envs

SPINNAKER_STAGE_HOSTNAME

the hostname appended to the routes for stage envs

SPINNAKER_PROD_HOSTNAME

the hostname appended to the routes for prod envs

Jenkins Pipeline (Cloud Foundry)

In this chapter, we assume that you deploy your Java application to the Cloud Foundry PaaS. The chosen language is just an example; you could perform similar tasks with another language.

The Cloud Pipelines repository contains job definitions and the opinionated setup pipeline, which uses the Jenkins Job DSL plugin. Those jobs form an empty pipeline and an opinionated sample pipeline that you can use in your company.

The following projects take part in the microservice setup for this demo.

  • Github Analytics: The app that has a REST endpoint and uses messaging — part of our business application.

  • Github Webhook: Project that emits messages that are used by Github Analytics — part of our business application.

  • Eureka: Simple Eureka Server. This is an infrastructure application.

  • Github Analytics Stub Runner Boot: Stub Runner Boot server to be used for tests with Github Analytics and using Eureka and Messaging. This is an infrastructure application.

Step-by-step

This is a guide for the Jenkins Job DSL based pipeline.

If you want only to run the demo as far as possible using PCF Dev and Docker Compose, do the following:

Fork Repositories

Four applications compose the pipeline:

You need to fork only the following repositories, because only then can you tag and push the tag to your repository:

Start Jenkins and Artifactory

Jenkins and Artifactory can be run locally. To do so, run the start.sh script from this repository. The following listing shows the script:

git clone https://github.com/CloudPipelines/jenkins
cd jenkins/demo
./start.sh yourGitUsername yourGitPassword yourForkedGithubOrg

Then Jenkins runs on port 8080, and Artifactory runs on port 8081. The parameters are passed as environment variables to the Jenkins VM, and credentials are set. That way, you need not do any manual work on the Jenkins side. In the above command, the third parameter could be yourForkedGithubOrg or yourGithubUsername. Also, the REPOS environment variable contains your GitHub org (in which you have the forked repos).

Instead of the Git username and password parameters, you could pass -key <path_to_private_key> (if you prefer to use key-based authentication with your Git repositories).

Deploy the Infra JARs to Artifactory

When Artifactory is running, run the tools/deploy-infra.sh script from this repo. The following listing shows the script:

git clone https://github.com/CloudPipelines/jenkins
cd jenkins/
./tools/deploy-infra.sh

As a result, both the eureka and stub runner repositories are cloned, built, and uploaded to Artifactory.

Start PCF Dev

You can skip this step if you have CF installed and do not want to use PCF Dev. In that case, the only thing you have to do is to set up spaces.
Servers often run out of resources at the stage step. If that happens, clear some apps from PCF Dev and continue.

You have to download and start PCF Dev, as described here.

The default credentials when using PCF Dev are as follows:

username: user
password: pass
email: user
org: pcfdev-org
space: pcfdev-space
api: api.local.pcfdev.io

You can start PCF Dev as follows:

cf dev start

You must create three separate spaces, as follows:

cf login -a https://api.local.pcfdev.io --skip-ssl-validation -u admin -p admin -o pcfdev-org

cf create-space pcfdev-test
cf set-space-role user pcfdev-org pcfdev-test SpaceDeveloper
cf create-space pcfdev-stage
cf set-space-role user pcfdev-org pcfdev-stage SpaceDeveloper
cf create-space pcfdev-prod
cf set-space-role user pcfdev-org pcfdev-prod SpaceDeveloper

You can also run the ./tools/cf-helper.sh setup-spaces script to do this.

Run the Seed Job

We created the seed job for you, but you have to run it. When you run it, you have to provide some properties. By default, we create a seed that exposes all the property options, but you can delete most of them. If you set the properties as global environment variables, you have to remove them from the seed.

To run the demo, provide a comma-separated list of the URLs of the two aforementioned forks (github-webhook and github-analytics) in the REPOS variable.

The following images show the steps involved:

   

seed click
Step 1: Click the 'jenkins-pipeline-seed-cf' job for Cloud Foundry and jenkins-pipeline-seed-k8s for Kubernetes

   

seed run
Step 2: Click the 'Build with parameters'

   

seed
Step 3: The REPOS parameter should already contain your forked repos (you’ll have more properties than the ones in the screenshot)

   

seed built
Step 4: This is what the results of the seed should look like

Run the github-webhook Pipeline

We already created the seed job for you, but you have to run it. When you run it, you have to provide some properties. By default, we create a seed that exposes all the property options, but you can delete most of them. If you set the properties as global environment variables, you have to remove them from the seed.

To run the demo, provide a comma-separated list of URLs of the two aforementioned forks (github-webhook and github-analytics) in the REPOS variable.

The following images show the steps involved:

   

seed views
Step 1: Click the 'github-webhook' view

   

pipeline run
Step 2: Run the pipeline

   

If your build fails on the "deploy previous version to stage" step due to a missing jar, that means that you forgot to clear the tags in your repository. Typically, that happens because you removed the Artifactory volume with a deployed jar while a tag in the repository still points there. See here for how to remove the tag.

   

pipeline manual
Step 3: Click the manual step to go to stage (remember to kill the apps on the test env). To do this, click the ARROW next to the job name

   

Servers often run out of resources at the stage step. For that reason, we suggest killing all applications on test. See the FAQ for more detail.

   

pipeline finished
Step 4: The full pipeline should look like this

   

Declarative Pipeline & Blue Ocean

You can also use the declarative pipeline approach with the Blue Ocean UI.

The Blue Ocean UI is available under the blue/ URL (for example, for Docker Machine-based setup: https://192.168.99.100:8080/blue).

The following images show the various steps involved:

   

blue 1
Step 1: Open Blue Ocean UI and click on github-webhook-declarative-pipeline

   

blue 2
Step 2: Your first run will look like this. Click Run button

   

blue 3
Step 3: Enter parameters required for the build and click run

   

blue 4
Step 4: A list of pipelines will be shown. Click your first run.

   

blue 5
Step 5: State if you want to go to production or not and click Proceed

   

blue 6
Step 6: The build is in progress…

   

blue 7
Step 7: The pipeline is done!

   

There is no possibility of restarting a pipeline from a specific stage after failure. See this issue for more information.
Currently, there is no way to introduce manual steps in a performant way. Jenkins blocks an executor when a manual step is required. That means that you run out of executors pretty quickly. See this issue and this StackOverflow question for more information.

Jenkins Cloud Foundry Customization

You can customize Jenkins for Cloud Foundry by setting a variety of environment variables.

You need not set all the environment variables described in this section to run the demo. They are needed only when you want to make custom changes.

Environment Variable Summary

The environment variables that are used in all of the jobs are as follows:

Property Name Property Description Default value

PAAS_TEST_API_URL

The URL to the CF API for the TEST environment

api.local.pcfdev.io

PAAS_STAGE_API_URL

The URL to the CF API for the STAGE environment

api.local.pcfdev.io

PAAS_PROD_API_URL

The URL to the CF API for the PROD environment

api.local.pcfdev.io

PAAS_TEST_ORG

Name of the org for the test env

pcfdev-org

PAAS_TEST_SPACE_PREFIX

Prefix of the name of the CF space for the test environment to which the app name is appended

cloudpipelines-test

PAAS_STAGE_ORG

Name of the org for the stage environment

pcfdev-org

PAAS_STAGE_SPACE

Name of the space for the stage environment

cloudpipelines-stage

PAAS_PROD_ORG

Name of the org for the prod environment

pcfdev-org

PAAS_PROD_SPACE

Name of the space for the prod environment

cloudpipelines-prod

REPO_WITH_BINARIES_FOR_UPLOAD

URL of the repository with the deployed jars

https://artifactory:8081/artifactory/libs-release-local

M2_SETTINGS_REPO_ID

The ID of server from Maven settings.xml

artifactory-local

JDK_VERSION

The name of the JDK installation

jdk8

PIPELINE_VERSION

The version of the pipeline (ultimately, also the version of the jar)

1.0.0.M1-${GROOVY,script ="new Date().format('yyMMdd_HHmmss')"}-VERSION

GIT_EMAIL

The email used by Git to tag the repository

email@example.com

GIT_NAME

The name used by Git to tag the repository

Pivo Tal

PAAS_HOSTNAME_UUID

Additional suffix for the route. In a shared environment, the default routes can be already taken

AUTO_DEPLOY_TO_STAGE

Whether deployment to stage should be automatic

false

AUTO_DEPLOY_TO_PROD

Whether deployment to prod should be automatic

false

API_COMPATIBILITY_STEP_REQUIRED

Whether the API compatibility step is required

true

DB_ROLLBACK_STEP_REQUIRED

Whether the DB rollback step is present

true

DEPLOY_TO_STAGE_STEP_REQUIRED

Whether the deploy-to-stage step should be present

true

BUILD_OPTIONS

Additional options you would like to pass to the Maven / Gradle build

BINARY_EXTENSION

Extension of the binary uploaded to Artifactory / Nexus. Example: war for WAR artifacts

jar

Jenkins Credentials

Our scripts reference the credentials by IDs. The following table describes the defaults for the credentials:

Property Name Property Description Default value

GIT_CREDENTIAL_ID

Credential ID used to tag a Git repo

git

GIT_SSH_CREDENTIAL_ID

SSH credential ID used to tag a Git repo

gitSsh

GIT_USE_SSH_KEY

If true, pick the SSH credential id to use

false

REPO_WITH_BINARIES_CREDENTIAL_ID

Credential ID used for the repository with jars

repo-with-binaries

PAAS_TEST_CREDENTIAL_ID

Credential ID for CF Test environment access

cf-test

PAAS_STAGE_CREDENTIAL_ID

Credential ID for CF Stage environment access

cf-stage

PAAS_PROD_CREDENTIAL_ID

Credential ID for CF Prod environment access

cf-prod

If you already have a credential in your system to (for example) tag a repository, you can use it by passing its ID as the value of the GIT_CREDENTIAL_ID property.

See the cf-helper script for all the configuration options.

Jenkins Pipeline (Kubernetes)

In this chapter, we assume that you deploy your application to Kubernetes PaaS.

The Cloud Pipelines repository contains job definitions and the opinionated setup pipeline that uses Jenkins Job DSL plugin. Those jobs form an empty pipeline and an opinionated sample pipeline that you can use in your company.

The following projects take part in the microservice setup for this demo.

  • Github Analytics: The app that has a REST endpoint and uses messaging — part of our business application.

  • Github Webhook: Project that emits messages that are used by Github Analytics — part of our business application.

  • Eureka: Simple Eureka Server. This is an infrastructure application.

  • Github Analytics Stub Runner Boot: Stub Runner Boot server to be used for tests with Github Analytics and using Eureka and Messaging. This is an infrastructure application.

Step-by-step

This is a guide for a Jenkins Job DSL based pipeline.

If you want only to run the demo as far as possible by using PCF Dev and Docker Compose, do the following:

Fork Repositories

Four applications compose the pipeline:

You need to fork only the following repositories, because only then can you tag and push the tag to your repository:

Start Jenkins and Artifactory

Jenkins and Artifactory can be run locally. To do so, run the start.sh script from this repo. The following listing shows the script:

git clone https://github.com/CloudPipelines/jenkins
cd jenkins/demo
./start.sh yourGitUsername yourGitPassword yourForkedGithubOrg yourDockerRegistryOrganization yourDockerRegistryUsername yourDockerRegistryPassword yourDockerRegistryEmail

Then Jenkins runs on port 8080, and Artifactory runs on port 8081. The provided parameters are passed as environment variables to the Jenkins VM and credentials are set. That way, you need not do any manual work on the Jenkins side. In the preceding script, the third parameter could be yourForkedGithubOrg or yourGithubUsername. Also the REPOS environment variable contains your GitHub org in which you have the forked repositories.

Instead of the Git username and password parameters, you could pass -key <path_to_private_key> if you prefer to use the key-based authentication with your Git repositories.

You need to pass the credentials for the Docker organization (by default, we search for the Docker images at Docker Hub) so that the pipeline can push images to your org.

Deploy the Infra JARs to Artifactory

When Artifactory is running, run the tools/deploy-infra.sh script from this repo. The following listing shows the script:

git clone https://github.com/CloudPipelines/jenkins
cd jenkins
./tools/deploy-infra-k8s.sh

As a result, both the eureka and stub runner repos are cloned, built, and uploaded to Artifactory and their docker images are built.

Your local Docker process is reused by the Jenkins instance running in Docker. That is why you do not have to push these images to Docker Hub. On the other hand, if you run this sample in a remote Kubernetes cluster, the driver is not shared by the Jenkins workers, so you can consider pushing these Docker images to Docker Hub too.

Run the seed job

We created the seed job for you, but you have to run it. When you run it, you have to provide some properties. By default, we create a seed that exposes all the property options, but you can delete most of them. If you set the properties as global environment variables, you have to remove them from the seed.

To run the demo, provide a comma-separated list of the URLs of the two aforementioned forks (github-webhook and github-analytics) in the REPOS variable.

The following images show the steps involved:

   

seed click
Step 1: Click the 'jenkins-pipeline-seed-cf' job for Cloud Foundry and jenkins-pipeline-seed-k8s for Kubernetes

   

seed run
Step 2: Click the 'Build with parameters'

   

seed
Step 3: The REPOS parameter should already contain your forked repos (you’ll have more properties than the ones in the screenshot)

   

seed built
Step 4: This is what the results of the seed should look like

Run the github-webhook pipeline

We already created the seed job for you, but you have to run it. When you run it, you have to provide some properties. By default, we create a seed that exposes all the property options, but you can delete most of them. If you set the properties as global environment variables, you have to remove them from the seed.

To run the demo, provide a comma-separated list of URLs of the two aforementioned forks (github-webhook and github-analytics) in the REPOS variable.

The following images show the steps involved:

   

seed views
Step 1: Click the 'github-webhook' view

   

pipeline run
Step 2: Run the pipeline

   

If your build fails on the "deploy previous version to stage" step due to a missing jar, that means that you forgot to clear the tags in your repository. Typically, that happens because you removed the Artifactory volume with a deployed jar while a tag in the repository still points there. See here for how to remove the tag.

   

pipeline manual
Step 3: Click the manual step to go to stage (remember to kill the apps on the test env). To do this, click the ARROW next to the job name

   

Servers often run out of resources at the stage step. For that reason, we suggest killing all applications on test. See the FAQ for more detail.

   

pipeline finished
Step 4: The full pipeline should look like this

   

Declarative pipeline & Blue Ocean

You can also use the declarative pipeline approach with the Blue Ocean UI.

The Blue Ocean UI is available under the blue/ URL (for example, for Docker Machine-based setup: https://192.168.99.100:8080/blue).

The following images show the various steps involved:

   

blue 1
Step 1: Open Blue Ocean UI and click on github-webhook-declarative-pipeline

   

blue 2
Step 2: Your first run will look like this. Click Run button

   

blue 3
Step 3: Enter parameters required for the build and click run

   

blue 4
Step 4: A list of pipelines will be shown. Click your first run.

   

blue 5
Step 5: State if you want to go to production or not and click Proceed

   

blue 6
Step 6: The build is in progress…

   

blue 7
Step 7: The pipeline is done!

   

There is no possibility of restarting a pipeline from a specific stage after failure. See this issue for more information.
Currently, there is no way to introduce manual steps in a performant way. Jenkins blocks an executor when a manual step is required. That means that you run out of executors pretty quickly. See this issue and this StackOverflow question for more information.

Jenkins Kubernetes customization

You can customize Jenkins for Kubernetes by setting a variety of environment variables.

You need not set all the environment variables described in this section to run the demo. They are needed only when you want to make custom changes.

All env vars

The environment variables that are used in all of the jobs are as follows:

Property Name Property Description Default value

DOCKER_REGISTRY_ORGANIZATION

Name of the docker organization to which Docker images should be deployed

cloudpipelines

DOCKER_REGISTRY_CREDENTIAL_ID

Credential ID used to push Docker images

docker-registry

DOCKER_SERVER_ID

Server ID in settings.xml and Maven builds

docker-repo

DOCKER_EMAIL

Email used to connect to Docker registry and Maven builds

change@me.com

DOCKER_REGISTRY_URL

URL of the docker registry

https://index.docker.io/v1/

PAAS_TEST_API_URL

URL of the API of the Kubernetes cluster for the test environment

192.168.99.100:8443

PAAS_STAGE_API_URL

URL of the API of the Kubernetes cluster for the stage environment

192.168.99.100:8443

PAAS_PROD_API_URL

URL of the API of the Kubernetes cluster for the prod environment

192.168.99.100:8443

PAAS_TEST_CA_PATH

Path to the certificate authority for the test environment

/usr/share/jenkins/cert/ca.crt

PAAS_STAGE_CA_PATH

Path to the certificate authority for the stage environment

/usr/share/jenkins/cert/ca.crt

PAAS_PROD_CA_PATH

Path to the certificate authority for the prod environment

/usr/share/jenkins/cert/ca.crt

PAAS_TEST_CLIENT_CERT_PATH

Path to the client certificate for the test environment

/usr/share/jenkins/cert/apiserver.crt

PAAS_STAGE_CLIENT_CERT_PATH

Path to the client certificate for the stage environment

/usr/share/jenkins/cert/apiserver.crt

PAAS_PROD_CLIENT_CERT_PATH

Path to the client certificate for the prod environment

/usr/share/jenkins/cert/apiserver.crt

PAAS_TEST_CLIENT_KEY_PATH

Path to the client key for the test environment

/usr/share/jenkins/cert/apiserver.key

PAAS_STAGE_CLIENT_KEY_PATH

Path to the client key for the stage environment

/usr/share/jenkins/cert/apiserver.key

PAAS_PROD_CLIENT_KEY_PATH

Path to the client key for the prod environment

/usr/share/jenkins/cert/apiserver.key

PAAS_TEST_CLIENT_TOKEN_PATH

Path to the file containing the token for the test environment

PAAS_STAGE_CLIENT_TOKEN_PATH

Path to the file containing the token for the stage environment

PAAS_PROD_CLIENT_TOKEN_PATH

Path to the file containing the token for the prod environment

PAAS_TEST_CLIENT_TOKEN_ID

ID of the credential containing the access token for the test environment

PAAS_STAGE_CLIENT_TOKEN_ID

ID of the credential containing access token for the stage environment

PAAS_PROD_CLIENT_TOKEN_ID

ID of the credential containing access token for the prod environment

PAAS_TEST_CLUSTER_NAME

Name of the cluster for the test environment

minikube

PAAS_STAGE_CLUSTER_NAME

Name of the cluster for the stage environment

minikube

PAAS_PROD_CLUSTER_NAME

Name of the cluster for the prod environment

minikube

PAAS_TEST_CLUSTER_USERNAME

Name of the user for the test environment

minikube

PAAS_STAGE_CLUSTER_USERNAME

Name of the user for the stage environment

minikube

PAAS_PROD_CLUSTER_USERNAME

Name of the user for the prod environment

minikube

PAAS_TEST_SYSTEM_NAME

Name of the system for the test environment

minikube

PAAS_STAGE_SYSTEM_NAME

Name of the system for the stage environment

minikube

PAAS_PROD_SYSTEM_NAME

Name of the system for the prod environment

minikube

PAAS_TEST_NAMESPACE

Namespace for the test environment

cloudpipelines-test

PAAS_STAGE_NAMESPACE

Namespace for the stage environment

cloudpipelines-stage

PAAS_PROD_NAMESPACE

Namespace for the prod environment

cloudpipelines-prod

KUBERNETES_MINIKUBE

Whether to connect to Minikube

true

REPO_WITH_BINARIES_FOR_UPLOAD

URL of the repository with the deployed jars

https://artifactory:8081/artifactory/libs-release-local

REPO_WITH_BINARIES_CREDENTIAL_ID

Credential ID used for the repository with jars

repo-with-binaries

M2_SETTINGS_REPO_ID

The ID of server from Maven settings.xml

artifactory-local

JDK_VERSION

The name of the JDK installation

jdk8

PIPELINE_VERSION

The version of the pipeline (ultimately, also the version of the jar)

1.0.0.M1-${GROOVY,script ="new Date().format('yyMMdd_HHmmss')"}-VERSION

GIT_EMAIL

The email used by Git to tag the repository

email@example.com

GIT_NAME

The name used by Git to tag the repository

Pivo Tal

AUTO_DEPLOY_TO_STAGE

Whether deployment to stage should be automatic

false

AUTO_DEPLOY_TO_PROD

Whether deployment to prod should be automatic

false

API_COMPATIBILITY_STEP_REQUIRED

Whether the API compatibility step is required

true

DB_ROLLBACK_STEP_REQUIRED

Whether the DB rollback step is present

true

DEPLOY_TO_STAGE_STEP_REQUIRED

Whether the deploy-to-stage step is present

true

BUILD_OPTIONS

Additional options you would like to pass to the Maven / Gradle build

Preparing to Connect to GCE

Skip this step if you do not use GCE

In order to use GCE, we need to have gcloud running. If you already have the CLI installed, skip this step. If not, run the following command to download the CLI and start an installer:

$ ./tools/k8s-helper.sh download-gcloud

Next, configure gcloud. Run gcloud init and log in to your cluster. You are redirected to a login page. Pick the proper Google account and log in.

Pick an existing project or create a new one.

Go to your platform page (click on Container Engine) in GCP and connect to your cluster with the following values:

$ CLUSTER_NAME=...
$ ZONE=us-east1-b
$ PROJECT_NAME=...
$ gcloud container clusters get-credentials ${CLUSTER_NAME} --zone ${ZONE} --project ${PROJECT_NAME}
$ kubectl proxy

The Kubernetes dashboard runs at http://localhost:8001/ui/.

We need a Persistent Disk for our Jenkins installation. Create it as follows:

$ ZONE=us-east1-b
$ gcloud compute disks create --size=200GB --zone=${ZONE} cloudpipelines-jenkins-disk

Once the disk has been created, you need to format it. See the instructions at https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting

Connecting to a Kubo or GCE Cluster

Skip this step if you do not use Kubo or GCE

This section describes how to deploy Jenkins and Artifactory to a Kubernetes cluster deployed with Kubo.

To see the dashboard, run kubectl proxy and access localhost:8001/ui.
  1. Log in to the cluster.

  2. Deploy Jenkins and Artifactory to the cluster:

    • ./tools/k8s-helper.sh setup-tools-infra-vsphere for a cluster deployed on VSphere

    • ./tools/k8s-helper.sh setup-tools-infra-gce for a cluster deployed to GCE

  3. Forward the ports so that you can access the Jenkins UI from your local machine, by using the following settings

$ NAMESPACE=default
$ JENKINS_POD=jenkins-1430785859-nfhx4
$ LOCAL_PORT=32044
$ CONTAINER_PORT=8080
$ kubectl port-forward --namespace=${NAMESPACE} ${JENKINS_POD} ${LOCAL_PORT}:${CONTAINER_PORT}
  4. Go to Credentials, click System and Global credentials, as the following image shows:

    kubo credentials
  5. Update the git, repo-with-binaries, and docker-registry credentials.

  6. Run the jenkins-pipeline-k8s-seed seed job and fill it out with the following data:

  7. Put kubernetes.default:443 here (or KUBERNETES_API:KUBERNETES_PORT):

    • PAAS_TEST_API_URL

    • PAAS_STAGE_API_URL

    • PAAS_PROD_API_URL

  8. Put the /var/run/secrets/kubernetes.io/serviceaccount/ca.crt data here:

    • PAAS_TEST_CA_PATH

    • PAAS_STAGE_CA_PATH

    • PAAS_PROD_CA_PATH

  9. Uncheck the Kubernetes Minikube value.

    • Clear the following variables:

      • PAAS_TEST_CLIENT_CERT_PATH

      • PAAS_STAGE_CLIENT_CERT_PATH

      • PAAS_PROD_CLIENT_CERT_PATH

      • PAAS_TEST_CLIENT_KEY_PATH

      • PAAS_STAGE_CLIENT_KEY_PATH

      • PAAS_PROD_CLIENT_KEY_PATH

  10. Set the /var/run/secrets/kubernetes.io/serviceaccount/token value for these variables:

    • PAAS_TEST_CLIENT_TOKEN_PATH

    • PAAS_STAGE_CLIENT_TOKEN_PATH

    • PAAS_PROD_CLIENT_TOKEN_PATH

  11. Set the cluster name for these variables (you can get the cluster name by calling kubectl config current-context):

    • PAAS_TEST_CLUSTER_NAME

    • PAAS_STAGE_CLUSTER_NAME

    • PAAS_PROD_CLUSTER_NAME

  12. Set the system name for these variables (you can get the system name by calling kubectl config current-context):

    • PAAS_TEST_SYSTEM_NAME

    • PAAS_STAGE_SYSTEM_NAME

    • PAAS_PROD_SYSTEM_NAME

  13. Update the DOCKER_EMAIL property with your email address.

  14. Update the DOCKER_REGISTRY_ORGANIZATION with your Docker organization name.

  15. If you do not want to upload the images to DockerHub, update DOCKER_REGISTRY_URL.

    pks seed
  16. Run the pipeline.

Setup examples

You can go to the setup subpage to read more about the different setup scenarios.

Building the Project

This section covers how to build the project. It covers:

Project Setup

The declarative-pipeline folder contains all logic related to the Declarative Pipeline based approach of deploying software.

Under the demo folder, you can find the setup prepared to start a demo instance of Jenkins and Artifactory.

In docs, you can find the HTML documentation built from the contents of docs-sources.

The job-dsl folder contains all logic related to the Jenkins Job DSL based approach of deploying software.

The tools folder contains tools required for building the project and tools that ease working with the demo setup.

Prerequisites

As prerequisites, you need to have shellcheck, bats, jq and ruby installed. If you use a Linux machine, bats and shellcheck are installed for you.

To install the required software on Linux, type the following command:

$ sudo apt-get install -y ruby jq

If you use a Mac, run the following commands to install the missing software:

$ brew install jq
$ brew install ruby
$ brew install bats
$ brew install shellcheck

Bats Submodules

To make bats work properly, we needed to attach Git submodules. To have them initialized, either clone the project or (if you have already cloned the project) pull to update it. The following command clones the project:

$ git clone --recursive https://github.com/CloudPipelines/jenkins.git

If you have already cloned the project, the following commands initialize and update the submodules:

$ git submodule init
$ git submodule update

If you forget about this step, Gradle runs these steps for you.

Build and test

Once you have installed all the prerequisites, you can run the following command to build and test the project:

$ ./gradlew clean build

Generate Documentation

To generate the documentation, run the following command:

$ ./gradlew generateDocs

Releasing the Project

This section covers how to release the project by publishing a Docker image.

Publishing A Docker Image

Gradle is fully set up to build and release the project. Pass the -PreleaseDocker property to the build to also upload the Docker images:

$ ./gradlew clean build -PreleaseDocker

You need to set up the following environment variables / system properties / build properties (a usage sketch follows this list):

  • DOCKER_REGISTRY_URL - defaults to https://index.docker.io/v1/

  • DOCKER_HUB_USERNAME - defaults to changeme

  • DOCKER_HUB_PASSWORD - defaults to changeme

  • DOCKER_HUB_EMAIL - defaults to change@me.com
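
A minimal sketch of a release invocation, assuming you use the environment variable option (the credential values are placeholders):

$ export DOCKER_REGISTRY_URL=https://index.docker.io/v1/
$ export DOCKER_HUB_USERNAME=yourUser
$ export DOCKER_HUB_PASSWORD=yourPassword
$ export DOCKER_HUB_EMAIL=you@example.com
$ ./gradlew clean build -PreleaseDocker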

CI Server Worker Prerequisites

Cloud Pipelines uses Bash scripts extensively. The following list shows the software that needs to be installed on a CI server worker for the build to pass:

 apt-get -y install \
    bash \
    git \
    tar \
    zip \
    curl \
    ruby \
    wget \
    unzip \
    python \
    jq
In the demo setup, all of these libraries are already installed.

Jenkins FAQ

This section provides answers to the most frequently asked questions about using Jenkins with Cloud Pipelines.

Pipeline version contains ${PIPELINE_VERSION}

You can check the Jenkins logs and see the following warning:

WARNING: Skipped parameter `PIPELINE_VERSION` as it is undefined on `jenkins-pipeline-sample-build`.
	Set `-Dhudson.model.ParametersAction.keepUndefinedParameters`=true to allow undefined parameters
	to be injected as environment variables or
	`-Dhudson.model.ParametersAction.safeParameters=[comma-separated list]`
	to whitelist specific parameter names, even though it represents a security breach

To fix it, you have to do exactly what the warning suggests. Also, ensure that the Groovy token macro processing checkbox is checked.
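
As a minimal sketch, assuming you control the Jenkins startup (e.g. through JAVA_OPTS in a Docker-based setup):

JAVA_OPTS="${JAVA_OPTS} -Dhudson.model.ParametersAction.safeParameters=PIPELINE_VERSION"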

Pipeline version is not passed to the build

You can see that the version in Jenkins is properly set. However, the build version is still snapshot, and the echo "${PIPELINE_VERSION}" does not print anything.

You can check the Jenkins logs and see the following warning:

WARNING: Skipped parameter `PIPELINE_VERSION` as it is undefined on `jenkins-pipeline-sample-build`.
	Set `-Dhudson.model.ParametersAction.keepUndefinedParameters`=true to allow undefined parameters
	to be injected as environment variables or
	`-Dhudson.model.ParametersAction.safeParameters=[comma-separated list]`
	to whitelist specific parameter names, even though it represents a security breach

To fix it, you have to do exactly what the warning suggests.

The build times out with pipeline.sh information

This is a Docker compose issue. The problem is that for some reason, only in Docker, the execution of Java hangs. However, it hangs randomly and only the first time you try to run the pipeline.

The solution to this issue is to run the pipeline again. If it passes once, it will pass for any subsequent build.

Another thing that you can try is to run it with plain Docker. That helps sometimes.

Can I use the pipeline for some other repositories?

Yes. You can pass the REPOS variable with a comma-separated list of entries in project_name$project_url format. If you do not provide the project_name prefix, the repository name is extracted and used as the name of the project.

For example, a REPOS value equal to https://github.com/spring-cloud-samples/github-analytics,https://github.com/spring-cloud-samples/github-webhook results in the creation of pipelines with root names github-analytics and github-webhook.

For example, a REPOS equal to myanalytics$https://github.com/spring-cloud-samples/github-analytics,myfeed$https://github.com/spring-cloud-samples/atom-feed results in the creation of pipelines with root names myanalytics for github-analytics and myfeed for atom-feed.

Can this work for ANY project out of the box?

Not really. This is an “opinionated pipeline”. That is why we made some opinionated decisions.

Can I modify this to reuse in my project?

Yes. It is open source. The important thing is that the core part of the logic is written in Bash scripts. That way, in the majority of cases, you can change only the Bash scripts without changing the whole pipeline.

The rollback step fails due to a missing JAR

You must have pushed some tags and have removed the Artifactory volume that contained them. To fix this, remove the tags by using the following command:

$ git tag -l | xargs -n 1 git push --delete origin

I want to provide a different JDK version.
  • By default, we assume that you have configured a JDK with an ID of jdk8.

  • If you want a different one, override the JDK_VERSION environment variable to point to the proper one.

    The Docker image comes with Java installed at /usr/lib/jvm/java-8-openjdk-amd64. You can go to Global Tools, create a JDK with an ID of jdk8, and set JAVA_HOME to /usr/lib/jvm/java-8-openjdk-amd64.

To change the default settings, follow the steps shown in the following images:

manage jenkins
Step 1: Click 'Manage Jenkins'
global tool
Step 2: Click 'Global Tool'
jdk installation
Step 3: Click 'JDK Installations'
jdk
Step 4: Fill out JDK Installation with path to your JDK
How can I enable groovy token macro processing?

We scripted that. However, if you need to do this manually, follow the steps shown in the following images:

manage jenkins
Step 1: Click 'Manage Jenkins'
configure system
Step 2: Click 'Configure System'
groovy token
Step 3: Click 'Allow token macro processing'
How can I make deployment to stage and prod be automatic?

Set the relevant property or environment variable to true:

  • AUTO_DEPLOY_TO_STAGE to automatically deploy to stage.

  • AUTO_DEPLOY_TO_PROD to automatically deploy to prod.

How can I skip testing API compatibility?

Set the API_COMPATIBILITY_STEP_REQUIRED environment variable to false and re-run the seed (you can pick it from the seed job’s properties, too).

I can’t tag the repo.

You may get an error similar to the following:

19:01:44 stderr: remote: Invalid username or password.
19:01:44 fatal: Authentication failed for 'https://github.com/marcingrzejszczak/github-webhook/'
19:01:44
19:01:44 	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1740)
19:01:44 	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1476)
19:01:44 	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:63)
19:01:44 	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$8.execute(CliGitAPIImpl.java:1816)
19:01:44 	at hudson.plugins.git.GitPublisher.perform(GitPublisher.java:295)
19:01:44 	at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:45)
19:01:44 	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:779)
19:01:44 	at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:720)
19:01:44 	at hudson.model.Build$BuildExecution.post2(Build.java:185)
19:01:44 	at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:665)
19:01:44 	at hudson.model.Run.execute(Run.java:1745)
19:01:44 	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
19:01:44 	at hudson.model.ResourceController.execute(ResourceController.java:98)
19:01:44 	at hudson.model.Executor.run(Executor.java:404)

Most likely, you passed a wrong password. Check the credentials section for how to update your credentials.

I am unauthorized to deploy infrastructure jars.

Most likely, you forgot to update your local settings.xml file with the Artifactory’s setup. Check out this section of the docs and update your settings.xml file.

Signing Artifacts

In some cases, when you perform a release, it may be required that the artifacts be signed before you push them to the repository. To do this, you need to import your GPG keys into the Docker image that runs Jenkins. You can do so by placing a file called public.key that contains your public key and a file called private.key that contains your private key in the seed directory. These keys are imported by the init.groovy script, which runs when Jenkins starts.
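
A sketch of exporting such keys with standard GPG commands (KEY_ID is a placeholder for your key):

$ gpg --armor --export KEY_ID > seed/public.key
$ gpg --armor --export-secret-keys KEY_ID > seed/private.key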

Using SSH keys for Git

The seed job checks whether an environment variable called GIT_USE_SSH_KEY is set to true. If so, the environment variable called GIT_SSH_CREDENTIAL_ID is chosen as the one that contains the ID of the credential holding the SSH private key. Otherwise, GIT_CREDENTIAL_ID is picked as the one that contains the username and password used to connect to Git.
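
As an illustration only (the actual selection happens inside the seed job, not in a shell script), the decision boils down to:

if [[ "${GIT_USE_SSH_KEY}" == "true" ]]; then
  CREDENTIAL_ID="${GIT_SSH_CREDENTIAL_ID}"   # defaults to gitSsh
else
  CREDENTIAL_ID="${GIT_CREDENTIAL_ID}"       # defaults to git
fi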

You can set these values in the seed job by filling out the form and toggling a checkbox.

Deploy to stage fails and does not redeploy a service (Kubernetes).

There can be a number of reasons for this issue. Remember, though, that for stage we assume that a sequence of manual steps needs to be performed. We do not redeploy any existing services, because, most likely, you deliberately set it up that way. If, in the logs of your application, you see that you cannot connect to a service, first ensure that the service is forwarding traffic to a pod. If that is not the case, delete the service and re-run the step in the pipeline. That way, Cloud Pipelines redeploys the service and the underlying pods.
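
A sketch of the manual cleanup (the service name is a placeholder; the namespace shown is the stage default listed earlier in this chapter):

$ kubectl delete service github-eureka --namespace=cloudpipelines-stage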

I ran out of resources. (Cloud Foundry)

When deploying the application to stage or prod, you can get an Insufficient resources exception. The way to solve it is to kill some applications from the test and stage environments. To do so, run the following commands:

$ cf target -o pcfdev-org -s pcfdev-test
$ cf stop github-webhook
$ cf stop github-eureka
$ cf stop stubrunner

You can also run ./tools/cf-helper.sh kill-all-apps to remove all demo-related apps deployed to PCF Dev.

Deploying to test or stage or prod fails with an error about finding space (Cloud Foundry)

You receive an exception similar to the following:

20:26:18 API endpoint:   https://api.local.pcfdev.io (API version: 2.58.0)
20:26:18 User:           user
20:26:18 Org:            pcfdev-org
20:26:18 Space:          No space targeted, use 'cf target -s SPACE'
20:26:18 FAILED
20:26:18 Error finding space pcfdev-test
20:26:18 Space pcfdev-test not found

It means that you forgot to create the spaces in your PCF Dev installation.

The route is already in use (Cloud Foundry).

If you play around with Jenkins, you can end up with routes that are already occupied, as identified by errors similar to the following:

Using route github-webhook-test.local.pcfdev.io
Binding github-webhook-test.local.pcfdev.io to github-webhook...
FAILED
The route github-webhook-test.local.pcfdev.io is already in use.

To resolve the issue, delete the offending routes, by using commands similar to the following:

$ yes | cf delete-route local.pcfdev.io -n github-webhook-test
$ yes | cf delete-route local.pcfdev.io -n github-eureka-test
$ yes | cf delete-route local.pcfdev.io -n stubrunner-test
$ yes | cf delete-route local.pcfdev.io -n github-webhook-stage
$ yes | cf delete-route local.pcfdev.io -n github-eureka-stage
$ yes | cf delete-route local.pcfdev.io -n github-webhook-prod
$ yes | cf delete-route local.pcfdev.io -n github-eureka-prod

You can also run the ./tools/cf-helper.sh delete-routes script.

How can I run helper scripts against a real Cloud Foundry instance that I’m logged into?

Assuming that you are already logged into the cluster, you can run the helper script with the REUSE_CF_LOGIN=true env variable, as shown in the following example:

REUSE_CF_LOGIN=true ./tools/cf-helper.sh setup-prod-infra

This script creates the MySQL database and the RabbitMQ service, and downloads and deploys Eureka to the space and organization you are logged into.