Jx: Jenkins-x YAML configuration for monorepo across internal folders

Created on 11 Aug 2019  ·  13 Comments  ·  Source: jenkins-x/jx

Summary

We have a monorepo with multiple microservices. It is currently built using classic Jenkins, via a Jenkinsfile in each of the sub-folders containing the respective microservice and a simple Jenkins job configured through the UI. Concretely, the repository looks something like:

monorepo/
- microservice1:
    - Jenkinsfile
    - ...other files
- microservice2:
    - Jenkinsfile
    - ...other files

We are currently exploring having this project built with Jenkins X, and the idea would be to have a jenkins-x.yml file in each of the respective sub-folders containing the microservices.

I have seen that functionality was added to handle a "push folder to new repo" step via jx step split monorepo (see https://github.com/jenkins-x/jx/issues/822); however, at least in the way we've attempted to use it, it does not seem to be implemented as a core step that can be triggered from a jenkins-x.yml file.

The ideal approach, at the moment, could be a monorepo-level jenkins-x.yml file with contents similar to the following:

buildPack: monorepo
pipelineConfig:
  pipelines:
    release:
      build:
        steps:
          split: 
          - microservice1
          - microservice2
          - ...

Edit: [Added extra thoughts]

It's worth mentioning that one of the main benefits we see in using a monorepo is the centralisation and alignment of PRs and Issues, which means we would want to make sure that all the Prow notifications are delivered in the monorepo instead of the sub-repos.

Given this constraint, ideally it would be possible to handle the steps without having to create a whole new repository for each of these microservices.

Is there a way to do this through the YAML? Alternatively how could it be triggered?

kind/enhancement priority/important-longterm

Most helpful comment

I succeeded in building & deploying everything using the monorepo approach (~14 services).

The from-0-to-100 CI/CD flow takes ~30 minutes, including all builds, unit tests and integration tests.

Before going monorepo with jx, consider:

  • Lots of shell scripts working with git for changed-service discovery, tagging and versioning; this is ugly...
  • Skaffold is not able to cache from remote repositories, so builds could take less time - issue here
  • You will be forced to use classic Jenkins instead of Tekton; I feel it would be better to just use Tekton directly, along with all the custom GitOps

It took me 80 hours to build everything.

All 13 comments


A monorepo is simply a pattern that commonly exists in the wider world, so support for it seems valuable, as it opens up many additional potential users.

Splitting the repo into many smaller repos is certainly a nice and quick way to handle it, but one of the benefits of a monorepo is that it is easier/cheaper to split and merge microservices as the code base grows, which happens often. It seems to me that over time one quickly ends up with a ton of similarly named repos where it is unclear which ones are still used, which are relics, etc. That seems hard to maintain.

Initial support might be as simple as allowing a subdirectory of a monorepo to be specified and treating that subdirectory as the root.
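To illustrate, such subdirectory support might look something like this (purely hypothetical syntax - a `subDirs` key like this does not exist in jx today; it just sketches the idea of treating a sub-folder as the project root):

```yaml
# Hypothetical jenkins-x.yml at the monorepo root.
# Each listed subdirectory would be built as if it were
# the root of its own repository.
buildPack: monorepo
subDirs:
  - microservice1
  - microservice2
```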

Can we make this happen? Monorepos are very important in the microservices world.

I think we should keep this topic alive and publish more approaches to monorepos using jx. As much as I hate the ideology of one service per repository, I am still willing to use Jenkins X, and I still believe it might be a very good tool.

It would be great to get help from the Jenkins X developers on how to build more or less generic jenkins-x.yml and skaffold.yaml files for our monorepos.

I am starting by sharing my approaches and the issues I am having right now; correct me if my approaches are nonsense.

Approach #1

  1. My monorepo structure (Node.js/TypeScript) looks something like this:
├── OWNERS
├── OWNERS_ALIASES
├── README.md
├── aws-ecr.sh
├── charts
├── docker-compose.integration.yml
├── docker-compose.yml
├── docker-entrypoint-initdb.d
|  └── init.sh
├── identify_services.sh
├── jenkins-x.yml
├── jest.config.base.js
├── jest.config.js
├── lerna.json
├── microservices
|  ├── api-gateway
|  ├── api-gateway-deprecated
|  ├── api-gateway-reverse-proxy
|  ├── api-testsuite
|  ├── api-workers
|  ├── auth
|  ├── buyback
|  ├── devices
|  ├── frontend-admin
|  ├── integrations
|  ├── subscriptions
|  └── workflow-engine
├── package.json
├── skaffold.yaml
├── tmp
|  ├── cache
|  ├── localstack
|  └── postgres-data
├── tsconfig.json
├── tsconfig.spec.json
├── wait-for-service.sh
└── yarn.lock
  2. Each service/subfolder has a multi-stage Dockerfile that looks something like this:
# First stage
# - installing dependencies and devDependencies
# - building dist
# ----------------------------------------------------------
FROM node:10-alpine as build

LABEL authors="Valdas Mazrimas <[email protected]>"

WORKDIR /srv/apigateway

COPY microservices/api-gateway ./microservices/api-gateway
COPY package.json yarn.lock tsconfig*.json jest.config*.js .eslintrc ./

RUN yarn install --pure-lockfile --non-interactive --cache-folder ./ycache; rm -rf ./ycache

WORKDIR /srv/apigateway/microservices/api-gateway

RUN yarn prod:build


# Second stage - unit tests
# - copying files from build stage
# - running unit tests against it
# - running lint
# ----------------------------------------------------------
FROM node:10-alpine as validate

WORKDIR /srv/apigateway

COPY --from=build /srv/apigateway .
WORKDIR /srv/apigateway/microservices/api-gateway

RUN yarn validate:test


# Third stage - start
# - install only dependencies
# - start app process
# ----------------------------------------------------------
FROM node:10-alpine as start

WORKDIR /srv/apigateway

COPY --from=build /root/.npmrc /root/.npmrc 
COPY --from=build /srv/apigateway/package.json /srv/apigateway/package.json
COPY --from=build /srv/apigateway/yarn.lock /srv/apigateway/yarn.lock
COPY --from=build /srv/apigateway/microservices/api-gateway/dist /srv/apigateway/microservices/api-gateway/dist
COPY --from=build /srv/apigateway/microservices/api-gateway/package.json /srv/apigateway/microservices/api-gateway/package.json
COPY --from=build /srv/apigateway/microservices/api-gateway/yarn.lock /srv/apigateway/microservices/api-gateway/yarn.lock

RUN yarn install --prod --pure-lockfile --non-interactive --cache-folder ./ycache; rm -rf ./ycache

WORKDIR /srv/apigateway/microservices/api-gateway

USER node

EXPOSE 3001

CMD ["yarn", "prod:run"]

Build command: docker build . -f microservices/api-gateway/Dockerfile -t api-gateway
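One caveat with this multi-stage layout: the final start stage only copies from the build stage, so a builder that skips stages the target does not depend on (BuildKit does this) may never execute the validate stage - and thus never run the unit tests. Explicitly targeting the test stage forces them to run. A sketch of the two commands (they require a Docker daemon, so here they are only assembled and printed):

```shell
set -e
DOCKERFILE=microservices/api-gateway/Dockerfile

# Build up to the 'validate' stage so the unit tests actually execute,
# then build the final image from the 'start' stage.
test_cmd="docker build . -f $DOCKERFILE --target validate -t api-gateway:test"
build_cmd="docker build . -f $DOCKERFILE --target start -t api-gateway"

echo "$test_cmd"
echo "$build_cmd"
```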

  3. jenkins-x.yml looks something like this:
buildPack: typescript
pipelineConfig:
  pipelines:
    pullRequest:
      build:
        replace: true
        steps:
          - name: container-build
            sh: export VERSION=$PREVIEW_VERSION && skaffold build -f skaffold.yaml
          - name: npmrc
            sh: echo 0
          - name: npm-install
            sh: echo 0
          - name: npm-test
            sh: echo 0
  4. skaffold.yaml looks like this:
apiVersion: skaffold/v1beta9
kind: Config
build:
  artifacts: 
    - image: api-gateway
      context: .
      kaniko:
        dockerfile: microservices/api-gateway/Dockerfile
        buildArgs:
           ... destination ecr repo
deploy: 
  kubectl: {}
  5. identify_services.sh:
#!/bin/bash
set -e

SERVICES_DIR="microservices"

changed_folders=`git diff --name-only $SERVICES_DIR | grep / | awk 'BEGIN {FS="/"} {print $2}' | uniq`

for folder in $changed_folders
do
  if [ "$folder" == "$1" ]; then
    echo "found changes in $1"
    exit 0
  fi
done

echo "not found change in $1"
exit 1

I would use it as ./identify_services.sh microservice-name, which exits 0 or 1.
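One hedged sketch of how the script could be wired into the pipeline: gate each per-service build step on its exit code, with `|| true` so an unchanged service does not fail the whole pipeline (the step names and per-service `skaffold build -b` invocations are illustrative, not an existing convention):

```yaml
pipelineConfig:
  pipelines:
    pullRequest:
      build:
        steps:
          - name: maybe-build-microservice1
            sh: ./identify_services.sh microservice1 && skaffold build -b microservice1 || true
          - name: maybe-build-microservice2
            sh: ./identify_services.sh microservice2 && skaffold build -b microservice2 || true
```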

Problems:

  1. The Kaniko executor is unable to run a child Dockerfile with the repository root as its context.
  2. I have the identify_services.sh bash script, which reports which services have changed - how do I use it?
  3. I could use jenkins-x.yml to build images with the Kaniko executor and use skaffold for deployment only, but then I do not know how to identify the buildable services.

Problem with skaffold: https://github.com/GoogleContainerTools/skaffold/issues/3822

In your approach you're building everything via Docker instead of Tekton; I'm not sure it's the best way.

I intend to create a custom pipeline that loops over all the modified projects in the monorepo and executes the different steps on each of them, but I'm just starting - nothing to show yet.

@valdestron can you share the content of your identify_services.sh script?

In your approach you're building everything via Docker instead of Tekton; I'm not sure it's the best way.

I intend to create a custom pipeline that loops over all the modified projects in the monorepo and executes the different steps on each of them, but I'm just starting - nothing to show yet.

Hmm, that's because I am not very familiar with Tekton; I'll look at it. I completely agree that my approach is not correct - it looks too hard.

OK, I succeeded in building CI/CD for the monorepo using nothing more than jx. I'll prepare an example repository later, after I finish the manifests part with Helm.

Infra:

  • AWS EKS
  • 3 x.large nodes
  • jx version 2.0.1243
  • git provider: github
  • file and folder structure the same as in previous comments
  1. jenkins-x.yml:
pipelineConfig:
  pipelines:
    pullRequest:
      pipeline:
        options:
          distributeParallelAcrossNodes: true
        agent:
          image: gcr.io/jenkinsxio/builder-nodejs12x:latest
        stages:
          - name: test-and-build-services
            options:
                volumes:
                  - name: docker-config
                    secret:
                      secretName: jenkins-docker-cfg
                  - name: aws-credentials
                    secret:
                      secretName: aws-creds
                containerOptions:
                  volumeMounts:
                    - name: docker-config
                      mountPath: /builder/home/.docker/
                    - name: aws-credentials
                      mountPath: /builder/home/.aws/
            parallel:
              - name: batch-1-test-and-build
                steps:
                  - name: version
                    sh: export VERSION=$PREVIEW_VERSION
                  - name: setup-ecr
                    sh: ./scripts/aws-ecr.sh
                  - name: npmrc
                    sh: printf "@bit:registry=https://node.bit.dev%s\n//node.bit.dev/:_authToken=xxx" >> /builder/home/.npmrc
                  - name: npm-install
                    # sh: yarn install
                    sh: echo 0
                  - name: npm-test
                    # sh: CI=true DISPLAY=:99 yarn test
                    sh: echo 0
                  - name: build-images
                    sh: ./scripts/build.sh

I do not know if I need these parallels; I might stick with just Tekton tasks.

  2. aws-ecr.sh - a script that creates the ECR repositories for my monorepo services, since Kaniko is unable to create them dynamically:
#!/bin/bash

set -e

function setup_ecr() {
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip
    ./aws/install
    aws --version
}

function create_repos() {
  for D in `find microservices -maxdepth 1 -not -path microservices`
  do
    subdir=$(basename $D)
    region=eu-central-1
    access_key_id=$(cat ~/.aws/credentials | grep aws_access_key_id | awk 'BEGIN {FS="= "} {print $2}')
    secret_access_key=$(cat ~/.aws/credentials | grep aws_secret_access_key | awk 'BEGIN {FS="= "} {print $2}')
    AWS_ACCESS_KEY_ID=$access_key_id AWS_SECRET_ACCESS_KEY=$secret_access_key AWS_DEFAULT_REGION=$region aws ecr create-repository --repository-name xxx/xx/${subdir} || true
  done
}


setup_ecr
create_repos
  3. build.sh - a script that identifies changed services and builds them:
#!/bin/bash
set -e

service_name=${1:-'null'}

function upgrade_skaffold() {
    curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/v1.6.0/skaffold-linux-amd64
    chmod +x skaffold
    mv skaffold /usr/local/bin
    skaffold version
}

function ok_to_build_service() {
    #$PULL_BASE_SHA
    changed_folders=`git diff --name-only $PULL_BASE_SHA..HEAD | grep microservices/ | awk 'BEGIN {FS="/"} {print $2}' | uniq`
    echo "$changed_folders"
}

function run_skaffold() {
    result=$(ok_to_build_service)
    services=""
    if [ ! -z "$result" ]; then
        echo $result | while read line; do
            for service in $line; do
                services+="$service,"
            done
            skaffold build -f skaffold.yaml --build-image=${services%,} --cache-artifacts=false
        done
    fi
}

upgrade_skaffold
run_skaffold
  4. skaffold.yaml - the builds and deployments manifest; I do not have the deployments yet, as that is the easy part:
apiVersion: skaffold/v2beta1
kind: Config
build:
  artifacts:
  - image: xxx.dkr.ecr.eu-central-1.amazonaws.com/xxx/xxxx/api-gateway
    context: .
    kaniko: 
      dockerfile: microservices/api-gateway/Dockerfile
  - image: xxx.dkr.ecr.eu-central-1.amazonaws.com/xxx/xxxx/api-gateway-deprecated
    context: .
    kaniko: 
      dockerfile: microservices/api-gateway-deprecated/Dockerfile
  cluster:
    dockerConfig:
      secretName: jenkins-docker-cfg
    namespace: jx
  tagPolicy:
    envTemplate:
      template: '{{.IMAGE_NAME}}:{{.VERSION}}'
deploy:
  kubectl: {}

With this setup and file/folder structure, I am successfully running builds for my microservices. Later I will implement the deployment manifests with Helm.
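The change-detection core shared by identify_services.sh and build.sh is just a text pipeline over git diff --name-only output: keep paths under microservices/ and extract the unique second path component. A self-contained sketch on sample input (using sort -u rather than plain uniq, so non-adjacent duplicates are also collapsed):

```shell
# Sample output of `git diff --name-only $PULL_BASE_SHA..HEAD`
diff_output="microservices/auth/src/index.ts
microservices/auth/package.json
microservices/devices/src/server.ts
README.md"

# Keep only paths under microservices/ and print each changed
# service name once.
changed=$(printf '%s\n' "$diff_output" \
  | grep '^microservices/' \
  | awk 'BEGIN {FS="/"} {print $2}' \
  | sort -u)

echo "$changed"
```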

I succeeded in building & deploying everything using the monorepo approach (~14 services).

The from-0-to-100 CI/CD flow takes ~30 minutes, including all builds, unit tests and integration tests.

Before going monorepo with jx, consider:

  • Lots of shell scripts working with git for changed-service discovery, tagging and versioning; this is ugly...
  • Skaffold is not able to cache from remote repositories, so builds could take less time - issue here
  • You will be forced to use classic Jenkins instead of Tekton; I feel it would be better to just use Tekton directly, along with all the custom GitOps

It took me 80 hours to build everything.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://jenkins-x.io/community.
/lifecycle stale

/remove-lifecycle stale


Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://jenkins-x.io/community.
/lifecycle rotten

/remove-lifecycle rotten
