Nx promotes the monorepo and adds great tooling on top of the Angular CLI to make it easier for developers to manage. The dependency graph, tags, custom lint rules, and custom schematics are working out great in our projects, and developers are happy. The area where Nx is less opinionated is Continuous Delivery/Deployment. It would be great to see some opinionated best practices, either via docs or via schematics, that provide guidance and/or starting points.
Note: I know that every company and dev team is different. What is successful for one team might not be for another, but I would guess that most users of Nx share a similar end goal: getting their apps pushed to production. The differences are probably in how they get there (how many environments, load testing, infrastructure, etc.), but the ultimate goal is to get the app pushed to production.
Some assumptions:
For example, a pull request to master currently provides an opportunity to run affected:test and affected:build. This is also when our code review is done. Finally, the change is merged into master (aka trunk), where we build our containers and tag them. This gets automatically released to dev servers for testing. I think the current features of Nx with affected:build (or even if Bazel were being used) solve this part of CI and make sure our builds stay fast and we do not do extra work.
Where it starts to get a little harder to sell the idea of the monorepo is when each application owner wants to be able to cut a release of their application at a different interval. And what if a hotfix then needs to be applied to that individual application?
I have scoured the web looking for ideas, and it seems like the following four approaches come up:
Feature Request
Recognizing that everyone's situation is different, it would be great to have either documentation that gives guidance or schematics that provide an opinionated starting point for how to solve this in a monorepo. Not meaning "use xyz vendor", but rather "here is what we have seen be fairly successful and a good starting point; tweak as necessary."
This is probably borderline question/feature request, so if you do not feel it is appropriate here I can move it to StackOverflow.
Thanks,
Lance
Hi Nx Team,
It would be great if you could share your input on this.
Thanks.
Would indeed be awesome to get some examples
Hey folks! We are thinking about writing a guide on setting up CI for a monorepo. Not everything can be made concrete given that CI solutions differ so much, but the basic ideas are the same regardless if you use Circle, Azure, or Jenkins.
@vsavkin - Thanks that would be great!
We have been successfully following pattern 4 (mentioned above) on CircleCI, where we run yarn run affected:apps --base=origin/master~1 on each check-in to master. We then just execute a shell script per affected app. It is then up to the app owner to determine how they want to build and deploy their application. In practice, most of the apps have a very similar shell script to build the app, build the container, and deploy to the dev server.
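A minimal sketch of that per-app dispatch, with the app list stubbed (in a real pipeline it would come from the affected:apps output, and the apps/<app>/deploy.sh path is hypothetical):

```shell
# Stubbed list of affected apps; in CI this would come from the affected command.
affected_apps="admin storefront"

# Run each app's own deploy script, if it has one (path is illustrative).
for app in $affected_apps; do
  script="apps/$app/deploy.sh"
  if [ -f "$script" ]; then
    sh "$script"
  else
    echo "no deploy script for $app, skipping"
  fi
done
```

Each app owner then controls their own deploy script without touching the shared pipeline.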
Then releasing to other environments is manual with help from a custom cli we built to lookup available containers and choose servers for deployment.
Note - we do enforce squashing PR commits, but I'm guessing that if we wanted we could enhance it to look at Bitbucket/GitHub for the number of commits in a PR and set the --base appropriately to make sure we are only building the affected apps.
As for hotfixes, we just do hotfix branches, where we first merge to master and then cherry-pick the commit into the hotfix branch. Then we release the hotfix to production via the custom release CLI we have built.
I would say the most annoying part of how we have our monorepo set up is that we have angular.json, tslint.json, etc. as dependencies of the apps, which triggers all apps to be rebuilt when they change. On one hand it makes sense to rebuild everything to make sure nothing has been broken by the commit, but sometimes it is as simple as adding an asset to a single application or changing one flag (baseUrl or something) for a single application in angular.json. As we have added more applications and tests, the builds have become slower, so it would be nice if in those cases it would just build the single application rather than everything. Not sure if Nx can do anything here because of the way angular.json is set up, or maybe this can be addressed in the future by Bazel?
Either way look forward to seeing your guide and let me know if you need any help reviewing or contributing to it!
Cheers,
Lance
Also do not forget about cleaning up environments - otherwise it will become a mess pretty quickly
In our environment (Bitbucket Pipelines -> AWS S3 -> Cloudflare) we can do something like scripted creation of a bucket if needed, syncing the dist folder into it, and even creating a subdomain in Cloudflare. That is easy for one app, but in the case of a monorepo with multiple apps it becomes a not-so-easy task.
What is even worse: if you have some simple SPAs, some apps with SSR, and Angular Elements, all hosted on different platforms, it seems like the only possible solution is for each app to have its own deploy script defined.
@vsavkin Hi, any progress with the guide? Where can we find it? 😄
@gperdomor sorry, haven't written it yet.
Configuration should also be covered somehow if we are talking about continuous delivery systems that live on their own, separated from continuous integration, e.g. OctopusDeploy, which is responsible for deploying artifacts (zipped dist apps). There is no environment.ts in dist, so that also needs to be managed somehow.
I'm really interested in getting some guidance around this.
We're using Nx for our project (NG client-side app + Nest back-end app) and I'm now busy implementing a first iteration of our CI/CD pipeline on Gitlab.
A first pitfall I just fell into is the fact that Cypress is apparently not working under Alpine; bummer since I chose that as the base node image variant for our Dockerfile and Gitlab CI config ;-)
For us, at the moment, we are doing the following (in Bitbucket Pipelines):
step 1: determine the base
- determine the commit of the previous successful build for the current branch
- if nothing is found and we are in a pull request, take the last build for the pull request's target
- if still nothing, take master
step 2: build affected against the determined base
step 3: for each project found in angular.json that has a dist folder, sync it to S3
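The step-1 cascade could be sketched roughly like this (all lookups stubbed; the real versions would query the Bitbucket Pipelines API):

```shell
# Stubbed lookups; real versions would hit the Bitbucket Pipelines API.
last_build_sha=""        # previous successful build on the current branch
pr_target_sha=""         # last build on the pull request's target branch
in_pull_request="true"

# Cascade: branch's last good build -> PR target's last build -> master.
if [ -n "$last_build_sha" ]; then
  base="$last_build_sha"
elif [ "$in_pull_request" = "true" ] && [ -n "$pr_target_sha" ]; then
  base="$pr_target_sha"
else
  base="master"
fi

echo "nx affected:build --base=$base"
```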
pitfalls:
wishes:
What about an affected:deploy, with the just-released Angular CLI deploy feature (ng deploy yourApp)?
@dianjuar not sure if I understand what you mean - is it this commit: https://github.com/angular/angular-cli/commit/5df50bacbe11f029e7d841395f16c02d804f07db? It might be a good starting point, but I'm still not sure how it would wire up with affected from Nx.
I'm talking about the ng cli deploy feature. So maybe the Nx team can do an affected:deploy and start the deploy of those apps, just like affected:test.
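In the absence of a built-in affected:deploy, the idea could be approximated by looping over the affected apps and calling ng deploy per app (app list stubbed below; nothing here is an actual Nx command):

```shell
# Stubbed; in CI this list would come from the affected-apps output.
affected="app-one app-two"

for app in $affected; do
  # in a real pipeline: ng deploy "$app"
  echo "would run: ng deploy $app"
done
```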
Folks, a quick update for you:
We are working on the updated Nx video course, which will cover not just the basics but also things like CI and org management. The first part of the course should be released next week, with other parts coming soon after.
@vsavkin I hope it also includes dockerization of affected apps :D
@vsavkin yo - I'm working on this right now as well and I'm wondering if there's an easy way to even just get an array of the affected app names. I think that would be enough since I'm writing custom scripts for the Nest + Docker stuff. I've looked through the src but can't seem to figure out how to tap into that command so I can use the output.
@mcblum you can read the angular.json file, which contains the list of all available apps and their corresponding dist paths, so after an affected build you will know which apps were affected.
@mcblum I wrote a script to do that for our prehook tests/linting.
// Calls Nx's internal affected implementation directly (internal API, may break between releases).
// `command` and `environment` come from the surrounding script.
const nrwl = require('@nrwl/schematics/src/command-line/affected');

nrwl.affected({
  _: [`affected:${command} --base=${environment} --head=HEAD --coverage=no`],
  target: command,
  base: `origin/${environment}`,
  head: 'HEAD',
  exclude: [],
  parallel: false,
  maxParallel: 3,
});
I was having issues using child processes so I'm just calling what they're using under the hood. It's broken a couple of times during new Nx releases, but never took more than an hour to fix.
@mac2000 totally, I guess what I didn't totally realize is that if some of the builds fail it will throw a non-zero exit code, right? What about the case where some builds fail? Are people deploying, say, just one app in the monorepo, or is it always all or nothing? I assume if user-service fails, there will be no user-service dir in the dist/apps folder, correct?
@mcblum It's all or nothing for me. I probably could figure out something that would allow one app to fail and one to succeed if they didn't share a dependency that required rebuilding, but I don't know if I like that. A build failure should be a signal that something is wrong and you need to fix it, not "let me push this through".
Agreed - if something is broken in one app, there is a chance it might hurt others. Just imagine a case where one app builds with --aot and another without, both using a shared component which has problems.
Our script at moment looks something like this one:
# build affected
yarn affected:build --prod --base=$base --no-progress
# foreach outputPath in angular.json
for dist in $(cat angular.json | grep '"outputPath": "dist/apps/' | sed 's|"outputPath": ||' | sed 's|,||' | sed 's|"||g')
do
# if the outputPath directory exists (i.e. the app was affected and rebuilt)
if [ -d "$dist" ]
then
# extract app name
app=$(echo $dist | sed 's|dist/apps/||' | sed 's|/|-|g')
echo "$app: $dist" # myapp: dist/apps/myapp
# todo: sync s3 or zip and send to your continuous integration
fi
done
Hi all,
Currently, I am trying to determine the --base option (or maybe the --head option) when running affected:* command(s). When running multiple times against a branch, we will get the following warning message:
> nx affected:build --base=origin/develop
> NX No affected projects to run target "build" on
The above warning occurs when we rerun the pipeline; the reason is that the previous execution got an error when deploying the affected apps to a target environment. So using the --base option will not work, since the code has already been merged to origin/$base. We trigger deployment after a PR is merged to our main branch, which is develop.
Instead of using the --base option, I think we need to use the --head option. This article shows an approach for Jenkins, but I am not sure whether it can be done with an Azure DevOps pipeline as well. I am new to Azure DevOps.
Rather than using master as a base, use a SHA. In one pipeline I have, the very end writes an artifacts.json file which contains the SHA of the last successful build. Since we know that, we can compare --base=${sha} --head=HEAD and know which apps have changed between two arbitrary points in time. This works on code in any state, merged or not.
Give that a try, see if it works!
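A rough sketch of that artifacts.json round trip (the file path and sed-based JSON parsing are only illustrative):

```shell
# After a successful build, record the SHA that was built
# (in CI: sha=$(git rev-parse HEAD)).
sha="abc123"
echo "{\"lastSuccessCommit\": \"$sha\"}" > /tmp/artifacts.json

# On the next run, read it back and use it as the affected base.
base=$(sed -n 's/.*"lastSuccessCommit": "\([^"]*\)".*/\1/p' /tmp/artifacts.json)
echo "nx affected:build --base=$base --head=HEAD"
```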
I also thought about that, but I'm still looking for an example of creating and retrieving that artifacts.json file when executing the pipeline. Does Azure Pipelines support publishing and retrieving it, or do we need to upload it somewhere such as AWS S3?
@hoang-innomizetech all good man -- we use AWS but I'm sure Azure has some concept of S3, right? Here's what we do, it's quite simple but it works for us. Keep in mind, we're on Gitlab and we deploy to our own Docker Swarm:
.gitlab-ci.yml: https://paste.nationalmachine.io/akifotekef.http
build.ts: https://paste.nationalmachine.io/uzaweyexaw.js
buildAngularDocker: https://paste.nationalmachine.io/behejelowa.js
artifacts.ts: https://paste.nationalmachine.io/orudalahuy.js
You'll need to handle whatever you're doing with S3 / Azure by setting your env variables and assigning the correct profile, but this works. If you don't want to do that, another way is just to run something like Redis or another simple kv store where you can push and pull data. All you really need to know is when the last time ${x} happened so you can run your affected commands.
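The kv-store variant could look something like this (the functions below stand in for what would be redis-cli SET/GET calls):

```shell
# Stand-ins for a real key-value store, e.g.:
#   redis-cli SET last-build-sha "$(git rev-parse HEAD)"
#   redis-cli GET last-build-sha
kv_set() { echo "$2" > "/tmp/kv_$1"; }
kv_get() { cat "/tmp/kv_$1"; }

# After a successful build, store the SHA; before the next one, read it back.
kv_set last-build-sha "abc123"
base=$(kv_get last-build-sha)
echo "nx affected:build --base=$base --head=HEAD"
```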
For us, once things are built they are all configured externally, so we only build them once and ship the same code to all envs. Clients make a pre-Angular-bootstrap API call to a config service; Nest apps run on env variables which are set by our deploy scripts.
Hope this helps, and good luck!
Thank you @mcblum. Great to see it works with TypeScript :)
@hoang-innomizetech sure thing man -- yeah, I don't like Bash so I'd prefer to just write everything in TS and use execSync when needed. Laziness for the win!
@vsavkin any progress on the tutorial? Thanks
For my CI/CD pipeline, at the end of a successful build, I push a git tag (**) named ${branchName}-last-good-build on the SHA of the last commit for which I did the build.
Then, when I build again, I look up the SHA of the same tag ${branchName}-last-good-build and use that.
This avoids any external S3.
Also, once a PR is merged, I go out and delete the corresponding PR branch's successful tag ${branchName}-last-good-build. But I only do this for feature branches, not for master, staging, or production branches, because I don't want tags from feature branches sticking around forever.
(** Using the Github API -- but you can just do it with regular git tag and git push --tags.)
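The plain-git variant mentioned in the footnote might look like this (branch name stubbed; the commands that mutate the repo/remote are shown as comments):

```shell
branch="feature-1"               # in CI, taken from the pipeline's branch variable
tag="${branch}-last-good-build"

# After a successful build (commented out: these push to the remote):
#   git tag -f "$tag" HEAD
#   git push -f origin "refs/tags/$tag"

# On the next build, resolve the tag, falling back to master if it is absent:
#   base=$(git rev-parse "$tag" 2>/dev/null || echo "origin/master")
echo "$tag"
```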
Yep, it will definitely work, and what is good is that it will not depend on which CI is used.
But still, to reduce build time we need to somehow figure out how to reuse Ivy caches between builds; in our case the build time is already 20+ minutes.
Wow, 20 mins just for building the release? Right now we have 7 services/apps and it takes around 20 mins to build and deploy.
@atifsyedali Your idea is a good suggestion, but sometimes we will need to record extra information about the last successful build, for example the last affected apps, or we need a way to force redeploy of an app:
export interface Artifacts {
  /**
   * Commit SHA of the previous successful build
   */
  lastSuccessCommit: string;
  /**
   * Change log of the last successful build, containing author and subject from the git log
   */
  lastSuccessChangeLogs?: string;
  /**
   * List of apps that need a forced redeploy
   */
  forceRedeployApps?: string[];
  /**
   * List of apps affected by the previous successful build
   */
  lastAffectedApps?: string[];
  /**
   * Branch name of the previous successful build
   */
  branchName?: string;
  /**
   * Time of the previous successful build
   */
  lastRunAt?: Date;
}
@hoang-innomizetech Good point, but each of those things is implied in my private repo that uses Github Actions (your use case may be different if your repo is public):
lastSuccessCommit: this is the SHA against which I tag the successful build.
lastSuccessChangeLogs: these are the names of the commits that are part of the PR.
forceRedeployApps: I have a package.json in each nx-app whose semantic version I can artificially increase to force a redeploy of that app, since it then gets included as part of the nx affected list.
lastAffectedApps: (1) I print out the affected apps in the CI/CD pipeline, (2) I add the affected apps that are built as artifacts for the CI/CD job, (3) I (should be soon) create a comment on the PR for the affected apps that were deployed.
branchName: I have a husky rule that takes the branch name and prefixes every commit message with it. So I know exactly which branches are included as part of PRs that roll over to master, staging, and production.
lastRunAt: this is already included as part of the checks on the PR via the Github Actions that I use.
This is a good start: https://blog.nrwl.io/blazing-fast-distributed-ci-with-nx-a1f5974f7393
Would love to see adding Docker to the mix, and how to deal with deriving separate package.json files with the minimum packages needed to run a given container.
Just found nx-semantic-release late last night. It's a new plugin for semantic-release with only one contributor, but it addresses a lot of issues brought up here, and could be a good step to automating release cycles.
Hey folks. Just wanted to give you an update.
We have two repos talking through the distributed build setup:
This one talks about using distributed caching:
We are going to create another repo showing the setup for CircleCI. And we are working on more structured docs about CI/CD.
All of them focus on the CI part of the CI/CD story. The deployment part tends to be a lot more org-specific, so it's harder to write about general recommendations, but we are thinking about it as well.
Please add GitLab CI to the mix. It's used by quite many these days.
@demisx this is a POC with gitlab support https://github.com/nrwl/nx-azure-build/pull/2
@gperdomor Great. Thank you. In the back of my mind I was expecting something like nx-gitlabci-build. Or maybe nx-aws-gitlabci-build. GitLab CI with AWS is quite a popular mix.
@vsavkin Can you guys please review this PR nrwl/nx-azure-build#2 and reward man's hard work?
@demisx at the moment I sent that PR I didn't know about the intention of having multiple repos, so I just updated the scripts to work with Azure and GitLab, but it's a good starting point.
For those working in Gitlab CI/CD, you can utilize some of the built-in variables that Gitlab provides. When using a centralized Git workflow, you will often make several commits for each git push to the remote. By using these variables, you can lint, test, and build only the affected apps and libs even when there are multiple commits between each push.
In the nx documentation, you will often see a command such as:
nx affected:dep-graph --base=master~1 --head=master
This works well in CI/CD if you make 1 commit for every 1 push. But what if you make 10 commits locally and push once? Then you are only getting what was affected between the current state of the repo and the most recent commit.
By using Gitlab CI/CD built-in variables, you can write something like this:
nx affected:lint --base=${CI_COMMIT_BEFORE_SHA} --head=${CI_COMMIT_SHA}
This will give you everything affected since the last time you pushed to your remote repository, which is really useful in Gitlab's CI/CD pipeline.
@zachgoll Thanks, it works, but only somewhat. Often on merge requests CI_COMMIT_BEFORE_SHA is all zeroes:
$ nx affected:lint --base=${CI_COMMIT_BEFORE_SHA} --head=${CI_COMMIT_SHA}
fatal: Not a valid commit name 0000000000000000000000000000000000000000
/builds/timesheetsapp/timesheet-frontend/node_modules/yargs/yargs.js:1109
else throw err
^
Error: Command failed: git merge-base 0000000000000000000000000000000000000000 a6b03ae5cc6f4fd7f39dba3b863dccb1dcb54575
fatal: Not a valid commit name 0000000000000000000000000000000000000000
Or, if it's not zeroes, it detects nothing as affected, even though there are affected parts that should be linted (e.g. an entire new ui lib).
Do you have any solution for this?
@Frotty That is a fair point, and as I mentioned, that solution is mostly for those working with a centralized Git workflow where all commits are happening on the master branch. I do not readily have a solution for those dealing with multiple branches.
@zachgoll Thanks for the response. I was able to pinpoint the issues.
The hash is all zeroes if you push to a branch which doesn't have a MR attached to it, which I suppose is fine.
The affected issue was due to merging a different MR branch into the MR branch. After the one branch was merged into master and the remaining one was updated, it worked.
Avoiding these two issues, your suggested command seems okay.
Hi, sorry about this.
This was mislabeled as stale. We are testing ways to mark _not reproducible_ issues as stale so that we can focus on actionable items but our initial experiment was too broad and unintentionally labeled this issue as stale.
BTW, in our case we had the following scenario: on release day there was a really bad merge into master, and our lead decided it would be better to delete it and force-push. That broke the build, because the previous commit did not exist anymore, so we added a kind of workaround for such cases: always fall back to master as the base.
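A guard for that force-push case might look like this (the recorded base SHA is stubbed with an obviously bogus value; git cat-file -e checks whether the object still exists):

```shell
base="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"   # stubbed: recorded SHA of the previous successful build

# If the recorded commit no longer exists (e.g. history was force-pushed away),
# fall back to master as the base.
if git cat-file -e "$base" 2>/dev/null; then
  echo "using recorded base $base"
else
  base="origin/master"
fi
echo "$base"
```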
Any approaches for getting this to work in TeamCity?
For anyone using TeamCity, I was able to get the affected scripts working by writing a shell script that leverages the %system.teamcity.build.changedFiles.file% agent build property, which gives you the path to a file containing the list of files that were changed. The content of the file can then be parsed into a comma-delimited list of files and passed to the affected command's --files flag.
changedFilesPath=%system.teamcity.build.changedFiles.file%

# parse the changedFiles file into a comma-delimited list
changedFiles=$(cut -d: -f1 "$changedFilesPath" | paste -sd "," -)

# no-op if no files changed, otherwise the affected command would fail
if [ -z "$changedFiles" ]; then
  echo "No files changed... Skipping npm run affected script."
else
  # pass the changed files to the affected command with the --files option
  nx affected:test --files="$changedFiles"
fi
This issue has been automatically marked as stale because it hasn't had any recent activity. It will be closed in 14 days if no further activity occurs.
If we missed this issue please reply to keep it active.
Thanks for being a part of the Nx community! 🙏
Has anyone gotten this working using GitLab with a multibranch strategy?
Hey for gitlab-ci I use this:
cache:
key: ${CI_COMMIT_REF_SLUG}
paths:
- node_modules/
# Define variables
variables:
AFFECTED_BASE: '(if [ "$${CI_COMMIT_REF_SLUG}" == "master" ]; then echo "HEAD~1"; else echo "origin/master"; fi);'
.get_affected_base: &get_affected_base
before_script:
- BASE=$(eval $AFFECTED_BASE)
- echo "Affected Base ${BASE}"
stages:
- build
- quality
- test
- release
- deploy
build:
stage: build
image: @myorg/docker-env:node14-nx
after_script:
- nx report
script:
# Install all project dependencies
- yarn install
lint:
stage: quality
image: @myorg/docker-env:node14-nx
<<: *get_affected_base
script:
- nx affected:lint --base=$BASE --head=HEAD
format:
stage: quality
image: @myorg/docker-env:node14-nx
<<: *get_affected_base
script:
- nx format:check --base=$BASE --head=HEAD
test:
stage: test
image: @myorg/docker-env:node14-nx
<<: *get_affected_base
script:
- nx affected:test --base=$BASE --head=HEAD
I'm still working on figuring out how I will handle deploy (perhaps adding a deploy target with run-commands in the architect section of each project).
@Nightbr take note that you are always building against master, which might be a waste of time.
Imagine the following:
- you created feature-1 and made the first commit; CI builds against the master branch, which is ok, and, for example, builds 5 apps
- you then push a second commit touching a single app; against master, all 5 apps are considered affected again
In our CI we retrieve the previous successful build for the same branch and use its hash as the base, which in the example above will build a single app for the second commit.
PS: if you have a build cache configured, on the other hand, it won't hurt much.
@mac2000 Nice! I couldn't manage to retrieve the last previous successful build on the same branch with Gitlab CI, so I prefer to rerun all CI stages on the same Merge Request (branch) to avoid failing commits.
For example, branch feature-1: commit-1 CI passed, commit-2 CI failed, commit-3 CI passed.
But if you have a way to get the latest previous successful build on the same branch in order to get the commit hash and use it as the base for affected it could be a nice optimization :+1:
EDIT: Seems not in predefined environment variables of Gitlab CI - https://docs.gitlab.com/ee/ci/variables/predefined_variables.html
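It doesn't seem to be a predefined variable, but GitLab's pipelines API can be queried for it; a hedged sketch (the curl call is commented out since it needs a token, and the sed-based JSON parsing is only illustrative):

```shell
# A real pipeline would do something like:
#   sha=$(curl -s --header "PRIVATE-TOKEN: $API_TOKEN" \
#     "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines?ref=$CI_COMMIT_REF_NAME&status=success&per_page=1" \
#     | sed -n 's/.*"sha":"\([^"]*\)".*/\1/p')
sha=""                       # stubbed: no previous successful pipeline found

# Fall back to origin/master when nothing was found.
base=${sha:-origin/master}
echo "nx affected:build --base=$base --head=HEAD"
```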
Thanks!