Generator-jhipster: Travis Continuous Integration

Created on 4 Nov 2017 · 15 comments · Source: jhipster/generator-jhipster

As discussed in private, I am opening this issue to publicly discuss the current continuous integration with Travis and how it can be improved.

Here is an old ticket: https://github.com/jhipster/generator-jhipster/issues/2182

What is done today

When there is a pull request or a commit, Travis launches a lot of builds here: https://travis-ci.org/jhipster/generator-jhipster/
It's a build matrix, which tests all of these 20 configurations.

There are 5 concurrent builds, and testing these 20 configurations takes a lot of time.
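For readers less familiar with Travis, the matrix is driven by environment variables in .travis.yml; its shape is roughly the following (a sketch only: the profile names below are illustrative, not the repo's actual entries):

```yaml
# Sketch of a Travis build matrix: one JHIPSTER profile per tested configuration.
env:
  matrix:
    - JHIPSTER=app-default
    - JHIPSTER=app-mysql-gradle
    - JHIPSTER=app-cassandra
    # ... 20 entries in total, one per configuration
```

Each entry spawns one job, and Travis runs at most 5 of them concurrently.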

To help that, there is a new project: https://github.com/hipster-labs/jhipster-travis-build
This project will launch a daily build for some configurations:

  • docker: 1 build
  • ng1: 9 configurations
  • ngx: 10 configurations
  • ngx-gradle: 10 configurations
  • microservice: 8 configurations
  • react: 1 configuration

The problem with the daily build is that it can't test pull requests, only the code on the master branch.

What can be improved

I would suggest creating a new repo containing all the .yo-rc.json configuration files, along with the .json files for entities.
This repo could be used by generator-jhipster during CI, by jhipster-travis-build, and by other modules like generator-jhipster-entity-audit, etc.
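To make the idea concrete, here is a minimal sketch of how a CI job could consume such a shared configuration repo. The repo layout (`<name>/.yo-rc.json`) is an assumption, so the config is faked locally here instead of cloned:

```shell
# Sketch: how CI could consume a shared configuration repo.
# The layout configs/<name>/.yo-rc.json is an assumption, not an existing repo.
mkdir -p configs/ngx-mysql
cat > configs/ngx-mysql/.yo-rc.json <<'EOF'
{ "generator-jhipster": { "clientFramework": "angularX", "buildTool": "maven" } }
EOF

# A CI job would pick one configuration and generate the app from it:
APP=ngx-mysql
mkdir -p build
cp "configs/$APP/.yo-rc.json" build/
# (in a real job: cd build && jhipster --force --no-insight --skip-checks)
grep -o '"buildTool": "[a-z]*"' build/.yo-rc.json
```

The same checkout could then be reused by jhipster-travis-build and by module CI jobs, so the list of tested configurations lives in one place.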

We could change the current 20 configurations to test the most used options.
There are some options I'm not sure we should test in the main generator-jhipster:

  • infinispan
  • Cassandra
  • AngularJS
    There are already tests for these in hipster-labs.

And I have a problem with UAA + Protractor: it fails randomly every day, and I have to restart it. So I'm not sure we should keep the Protractor tests.

Other types of tests can be added, like Kubernetes, as we can launch Minikube inside Travis.
Other configurations too: MongoDB + Elasticsearch, Couchbase, etc.

Ping @acherm: this is the ticket for you :-) Let me know if you have any questions, new ideas, etc.

needs-discussion


All 15 comments

A few ideas/comments:

  1. We can also parallelize by using CircleCI, so we do some builds with Travis and some with CircleCI. I believe it won't be hard to reuse the scripts for that as well, and it would reduce the overall build time. I would suggest doing half of the items in Travis and the other half in CircleCI.
  2. We will replace the AngularJS builds with React anyway
  3. Yes, Protractor for UAA can be disabled, I guess

I have noticed that generating the application takes a significant amount of time: 153 seconds. However, if I regenerate the app, it takes only 83 seconds. So if, after each commit on master, we bake Docker images for all the configurations we are testing, and then regenerate the application based on the PR changes, this would save us 70 × 20 = 1400 seconds (over 23 minutes).

The caveat I see with this approach is that we would have to push Docker images after each commit on master. Also, we need to investigate the side effects of a PR that removes some files.

Wasn't the goal to make specific JHipster yo templates/modules?
I suggest turning some of the oldest techs (Angular 1, for example) into templates, as they are not so hipsterish anymore: it would speed up the Travis build ;-).

Concerning Protractor, I'm pretty extremist: no deactivating tests, and no tolerating randomly failing ones. I would stop all other development and concentrate every effort on fixing this. An open-source project cannot live without a well-baked CI running its tests.

Another repo for testing is IMHO a bad idea: every PR should be testable easily, without any hand-made work by a committer, because their time is precious.

About microservices with UAA:

  • I don't have stats, but I'm not sure it is used a lot
  • I don't want to deactivate all Protractor tests, only Protractor+UAA. The others work fine
  • Protractor tests for UAA are only run in the repo https://github.com/hipster-labs/jhipster-travis-build, not in the main generator-jhipster
  • Currently, I'm the only maintainer of this repo, and I'm really bored of restarting this randomly failing build every day. That's why I said I would deactivate Protractor for UAA only.

JHipster is a special kind of project: you can't test all configurations. There are more than 26,000 of them.

So we try to test the most used ones in the main repo, and the repo https://github.com/hipster-labs/jhipster-travis-build simply tries some other configurations.

@pascalgrimaud WDYT about trying out CircleCI?

OK, let's try. I will do it on my fork.

@deepu105 @pascalgrimaud I'm currently using CircleCI: it's pretty neat (compared to Travis), and it's really cool that they have their open-source offer :)

Let me contact Travis CI - maybe they would be ready to give us more containers?

It would be awesome!

Dear all,

Thanks for initiating the discussions!

We are currently trying to replicate our research effort for testing all configurations of JHipster (see our past experience and results for version 3.6.1 https://arxiv.org/abs/1710.07980).
It's taking time and effort. There are technical barriers/difficulties I want to share with you ;)

Obviously, in practice, we cannot test all configurations of JHipster (according to our estimation, the number is now around 200K configurations for version 4.8.2). Our research goal is mostly to understand how to efficiently _sample_ configurations.

The present initiatives for increasing the number of tested configurations are great and necessary, but perhaps the chosen sample does not cover some (combinations of) options that lead to failures.

Ideally speaking, we would like to have a procedure that automatically chooses and executes a sample of configurations. For doing so, we need:

  1. a so-called variability model encoding all possible configurations of JHipster, on top of which we can execute some sampling algorithms with logic solvers. A manual elaboration of the variability model is possible, but you would have to reproduce the effort for every version of JHipster. We are currently trying two alternative solutions. The first is to parse "prompts.js" and translate the JavaScript code into a variability model. Problem/barrier: it boils down to programming a JS interpreter, and there are subtle cases to handle. The other is to emulate the behavior of a user trying every option with the textual configurator (and stop once we have the .yo-rc.json). Problem/barrier: a bit radical and brute-force; let's see if it scales.
    If you have any hint for handling "prompts.js", don't hesitate ;)

  2. an "all-inclusive environment" in which we have all the tools/packages needed to execute _any_ possible configuration. We had a look at DevBox https://github.com/jhipster/jhipster-devbox It is a great initiative, but it does not contain all the tools, meaning that some configurations cannot be built or executed. We are wondering how you deal with tools/packages in Travis CI: did you install everything on Travis? Or did you select (on purpose/incidentally) configurations that eventually work on Travis machines?

  3. a "pre-caching" of dependencies (e.g. Maven), since it can dramatically speed up the compilation/building process. Do you rely on the Travis "cache" for this task?
    Ideally, we would like to pre-download every possible dependency and install them on the testing machines (to avoid downloading dependencies again and again). Is the repo https://github.com/jhipster/jhipster-dependencies used for that purpose?
    Our basic strategy is to include every dependency by activating all options in _pom.xml https://github.com/jhipster/generator-jhipster/blob/master/generators/server/templates/_pom.xml (normally there are no conflicts) and then download everything.

  4. custom building/testing procedures aware of the specificities of a JHipster configuration. Let's take a very simple example: when we select "maven" as an option, we launch ./mvnw, but when it's "gradle" we call ./gradlew.
    The same observation applies to database options: we need to use a "custom entity" for testing the JHipster application.
    The work initiated here is great: https://github.com/hipster-labs/jhipster-travis-build/blob/master/travis/scripts/01-generate-entities.sh
    but we suspect more custom commands are needed to handle any configuration.
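A minimal sketch of the dispatch described in point 4, assuming only the standard .yo-rc.json layout (the `buildTool` key is JHipster's real config field; the config content and commands here are illustrative):

```shell
# Sketch of point 4: pick the build command from the generated .yo-rc.json.
# The fake config below stands in for a real generated project.
cat > .yo-rc.json <<'EOF'
{ "generator-jhipster": { "buildTool": "gradle" } }
EOF

if grep -q '"buildTool": "maven"' .yo-rc.json; then
  BUILD_CMD="./mvnw verify"     # Maven wrapper generated by JHipster
else
  BUILD_CMD="./gradlew test"    # Gradle wrapper generated by JHipster
fi
echo "$BUILD_CMD"
```

A fully automatic procedure would need one such branch per option that changes the build or test commands (database startup, client tests, Protractor, etc.).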

To sum up, we would like to extend the wonderful work realized here https://github.com/hipster-labs/jhipster-travis-build by bringing further automation. By making it work for _any_ configuration, we can breathe more diversity/randomness into the choice of configurations and thus cover more cases.

Any help/clarification on the four mentioned points much appreciated!

Best regards,

Mathieu Acher on behalf of my colleagues at University of Rennes and University of Namur

1) It's the hardest part, IMO. Currently, it's done manually: we generated some projects, took the .yo-rc.json files, and put them in tests.
We tried to test the most used configurations.
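One possible first step toward automating this is to mine the option names straight out of the generator's prompt definitions. A crude sketch (the file content below is a tiny stand-in, not the real generator source):

```shell
# Crude sketch: extract prompt option names from a prompts.js-style file.
# The sample file is a stand-in, not real generator-jhipster code.
cat > prompts-sample.js <<'EOF'
const prompts = [
  { type: 'list', name: 'buildTool', choices: ['maven', 'gradle'] },
  { type: 'list', name: 'databaseType', choices: ['sql', 'mongodb', 'cassandra'] }
];
EOF

# Pull out every name: '...' occurrence and strip the surrounding syntax.
NAMES=$(grep -o "name: '[a-zA-Z]*'" prompts-sample.js | sed "s/name: '//; s/'//")
echo "$NAMES"
```

This obviously misses conditional prompts (the hard part mentioned above), but it gives a quick inventory of options to seed a variability model with.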

2) About DevBox, you should have all the tools to test every configuration. Maybe you mean Oracle and MSSQL? We now have docker-compose files for them, but they are really slow to start.

You can test the configurations inside our Docker image jhipster/jhipster, but you can't launch Protractor tests, because Protractor needs Chrome, and we can't install Chrome inside this image. Also, for testing with Protractor, you need to launch a database, and you can't easily launch Docker inside Docker.

About Travis, yes, we install all the tools that are needed: Java 8, Git, Yarn, NPM, Gulp, Bower, etc. It's pretty fast, and some of them are already pre-installed.

3) Travis has a very nice cache system. See these lines of code: https://github.com/jhipster/generator-jhipster/blob/master/.travis.yml#L16-L19
This cache is not global but specific to each build in the matrix, and it is updated automatically after every build. Here is the official doc.
And every PR will use these caches, so it's really nice.
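For reference, such a cache section typically has the following shape (a sketch of the usual pattern, not the exact content of the linked file):

```yaml
# Typical Travis cache configuration for a JVM project.
cache:
  directories:
    - $HOME/.m2       # Maven local repository
    - $HOME/.gradle   # Gradle caches
```

Travis restores these directories at the start of each job and uploads them again after a successful build.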

We could cache node_modules too, but we recently encountered some issues with JS libraries, especially because we used yarn link, so I removed it.

4) Sure, it can be improved, especially for entities and relationships.

Currently, we use Travis for testing a lot of configurations, and testing all of them would be impossible. Travis is totally free for open-source projects, and it would not be nice of us to launch builds 24/7.
So we should choose which configurations we want to test:

  • Maven
  • Gradle
  • Ng1
  • NgX
  • React
  • MicroServices

And for each group, we should use fewer than 10 builds.
That would be 60 daily builds, which is already a lot. Of course, these groups can be changed/improved :)

Something you didn't mention is testing the other sub-generators: Heroku, CloudFoundry, AWS, Kubernetes, etc. It's really not easy ;-)

Another idea would be to launch intelligent builds for pull requests:

  • if it's a typo fix: skip the build
  • if it concerns Cassandra files only: Travis should launch only the Cassandra builds
  • if there are a lot of modified files: launch the default builds
  • etc.
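A sketch of such a dispatcher in shell. On Travis the file list would come from `git diff --name-only "$TRAVIS_COMMIT_RANGE"` (a real Travis variable); here it is hard-coded, and the routing rules are illustrative:

```shell
# Sketch: decide which builds a PR needs from its list of changed files.
# On Travis: CHANGED=$(git diff --name-only "$TRAVIS_COMMIT_RANGE")
CHANGED="README.md generators/server/templates/src/main/resources/config/cql/create-keyspace.cql"

# Files that are not documentation.
NON_MD=$(echo "$CHANGED" | tr ' ' '\n' | grep -v '\.md$' || true)

if [ -z "$NON_MD" ]; then
  DECISION="skip-build"               # typo/docs-only change
elif ! echo "$NON_MD" | grep -Eqv 'cql|cassandra'; then
  DECISION="cassandra-builds-only"    # every non-doc file is Cassandra-related
else
  DECISION="default-builds"
fi
echo "$DECISION"
```

The same pattern extends to other option-specific paths (gradle files, ngx templates, etc.), each mapping to a subset of the matrix.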

@jdubois did you hear back from Travis? Otherwise, we could still use CircleCI for half the tests; it's free as well

@deepu105 yes, they answered me, but they can't help us (of course, they already provide so much to us; I totally understand)

I was working with CircleCI a few days ago.
It is on standby because I have been a little busy, but I will post the results here as soon as it's done.

I'm closing this, as Travis + CircleCI are in continuous improvement, so I can't leave this ticket open indefinitely.
Anyway, don't hesitate to comment, add new ideas, etc.
