Generator-jhipster: Microservice Cassandra: status DOWN

Created on 11 May 2016  ·  4 comments  ·  Source: jhipster/generator-jhipster

Overview of the issue

When generating a microservice with a Cassandra database, the microservice starts but its status is DOWN.

JHipster Version(s)

Current master (and probably since 3.0.0 but I'm not sure)

JHipster configuration, a .yo-rc.json file generated in the root folder
{
  "generator-jhipster": {
    "jhipsterVersion": "3.2.1",
    "baseName": "mscassandra",
    "packageName": "com.mycompany.myapp",
    "packageFolder": "com/mycompany/myapp",
    "serverPort": "8081",
    "authenticationType": "jwt",
    "hibernateCache": "no",
    "databaseType": "cassandra",
    "devDatabaseType": "cassandra",
    "prodDatabaseType": "cassandra",
    "searchEngine": "no",
    "buildTool": "maven",
    "jwtSecretKey": "fc9dbd0c2435ef0b0b08ba229829b84e3e6757fe",
    "enableTranslation": true,
    "applicationType": "microservice",
    "testFrameworks": [
      "gatling"
    ],
    "jhiPrefix": "jhi",
    "skipClient": true,
    "skipUserManagement": true,
    "nativeLanguage": "en",
    "languages": [
      "en",
      "fr"
    ],
    "clusteredHttpSession": "no",
    "websocket": "no",
    "enableSocialSignIn": false
  }
}
Reproduce the error

1) generate a microservice project with a Cassandra database (see the .yo-rc.json above)
2) start registry
3) start database cassandra
4) start microservice
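The steps above can be sketched as shell commands. This is a sketch under assumptions: it assumes a JHipster 3.x project with the Maven wrapper, the generated Docker Compose file for Cassandra, and a locally cloned jhipster-registry; your setup may differ.

```shell
# 1) Generate the microservice (answers matching the .yo-rc.json above)
yo jhipster

# 2) Start the JHipster Registry (assuming a local clone of jhipster-registry)
cd jhipster-registry && ./mvnw

# 3) Start Cassandra (assuming the Docker Compose file generated by JHipster 3.x)
docker-compose -f src/main/docker/cassandra.yml up -d

# 4) Start the microservice with the prod profile
./mvnw -Pprod
```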

[Screenshot: mscassandra]

Here is the log:

2016-05-11 15:11:53.853  INFO 9326 --- [           main] com.mycompany.myapp.MscassandraApp       : The following profiles are active: prod
2016-05-11 15:11:57.806  INFO 9326 --- [ost-startStop-1] c.mycompany.myapp.config.WebConfigurer   : Web application configuration, using profiles: [prod]
2016-05-11 15:11:57.813  INFO 9326 --- [ost-startStop-1] c.mycompany.myapp.config.WebConfigurer   : Web application fully configured
2016-05-11 15:11:58.200  INFO 9326 --- [ost-startStop-1] com.mycompany.myapp.MscassandraApp       : Running with Spring profile(s) : [prod]
2016-05-11 15:12:02.046  WARN 9326 --- [           main] c.n.c.sources.URLConfigurationSource     : No URLs will be polled as dynamic configuration sources.
2016-05-11 15:12:02.054  WARN 9326 --- [           main] c.n.c.sources.URLConfigurationSource     : No URLs will be polled as dynamic configuration sources.
2016-05-11 15:12:03.072  INFO 9326 --- [           main] c.n.d.provider.DiscoveryJerseyProvider   : Using JSON encoding codec LegacyJacksonJson
2016-05-11 15:12:03.072  INFO 9326 --- [           main] c.n.d.provider.DiscoveryJerseyProvider   : Using JSON decoding codec LegacyJacksonJson
2016-05-11 15:12:03.224  INFO 9326 --- [           main] c.n.d.provider.DiscoveryJerseyProvider   : Using XML encoding codec XStreamXml
2016-05-11 15:12:03.224  INFO 9326 --- [           main] c.n.d.provider.DiscoveryJerseyProvider   : Using XML decoding codec XStreamXml
2016-05-11 15:12:03.511  INFO 9326 --- [           main] c.n.d.s.r.aws.ConfigClusterResolver      : Resolving eureka endpoints via configuration
2016-05-11 15:12:03.552  INFO 9326 --- [           main] com.netflix.discovery.DiscoveryClient    : Disable delta property : false
2016-05-11 15:12:03.553  INFO 9326 --- [           main] com.netflix.discovery.DiscoveryClient    : Single vip registry refresh property : null
2016-05-11 15:12:03.555  INFO 9326 --- [           main] com.netflix.discovery.DiscoveryClient    : Force full registry fetch : false
2016-05-11 15:12:03.556  INFO 9326 --- [           main] com.netflix.discovery.DiscoveryClient    : Application is null : false
2016-05-11 15:12:03.557  INFO 9326 --- [           main] com.netflix.discovery.DiscoveryClient    : Registered Applications size is zero : true
2016-05-11 15:12:03.558  INFO 9326 --- [           main] com.netflix.discovery.DiscoveryClient    : Application version is -1: true
2016-05-11 15:12:03.559  INFO 9326 --- [           main] com.netflix.discovery.DiscoveryClient    : Getting all instance registry info from the eureka server
2016-05-11 15:12:03.795  INFO 9326 --- [           main] com.netflix.discovery.DiscoveryClient    : The response status is 200
2016-05-11 15:12:03.796  INFO 9326 --- [           main] com.netflix.discovery.DiscoveryClient    : Starting heartbeat executor: renew interval is: 30
2016-05-11 15:12:03.800  INFO 9326 --- [           main] c.n.discovery.InstanceInfoReplicator     : InstanceInfoReplicator onDemand update allowed rate per min is 4
2016-05-11 15:12:03.848  INFO 9326 --- [           main] com.netflix.discovery.DiscoveryClient    : Saw local status change event StatusChangeEvent [timestamp=1462972323848, current=UP, previous=STARTING]
2016-05-11 15:12:04.211  WARN 9326 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : Saw local status change event StatusChangeEvent [timestamp=1462972324211, current=DOWN, previous=UP]
2016-05-11 15:12:04.212  WARN 9326 --- [nfoReplicator-0] c.n.discovery.InstanceInfoReplicator     : Ignoring onDemand update due to rate limiter
2016-05-11 15:12:04.212  INFO 9326 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_MSCASSANDRA/mscassandra:b1b2ae1c30519cd0281ed823200695bc: registering service...
2016-05-11 15:12:04.224  INFO 9326 --- [           main] com.mycompany.myapp.MscassandraApp       : Started MscassandraApp in 13.478 seconds (JVM running for 21.277)
2016-05-11 15:12:04.226  INFO 9326 --- [           main] com.mycompany.myapp.MscassandraApp       : 
----------------------------------------------------------
    Application 'mscassandra' is running! Access URLs:
    Local:      http://127.0.0.1:8081
    External:   http://127.0.1.1:8081
----------------------------------------------------------
2016-05-11 15:12:04.227  INFO 9326 --- [           main] com.mycompany.myapp.MscassandraApp       : 
----------------------------------------------------------
    Config Server:  Not found or not setup for this application
----------------------------------------------------------
2016-05-11 15:12:04.698  INFO 9326 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_MSCASSANDRA/mscassandra:b1b2ae1c30519cd0281ed823200695bc - registration status: 204
2016-05-11 15:12:33.797  INFO 9326 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient    : Disable delta property : false
2016-05-11 15:12:33.797  INFO 9326 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient    : Single vip registry refresh property : null
2016-05-11 15:12:33.797  INFO 9326 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient    : Force full registry fetch : false
2016-05-11 15:12:33.797  INFO 9326 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient    : Application is null : false
2016-05-11 15:12:33.797  INFO 9326 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient    : Registered Applications size is zero : true
2016-05-11 15:12:33.798  INFO 9326 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient    : Application version is -1: false
2016-05-11 15:12:33.798  INFO 9326 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient    : Getting all instance registry info from the eureka server
2016-05-11 15:12:33.858  INFO 9326 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient    : The response status is 200

All 4 comments

This error comes from the mail health check:

  • The mail health check runs on all apps, and usually fails because people don't have an SMTP server configured
  • That means all applications are usually seen as "DOWN" globally, as this email service is "DOWN"
  • That has never caused any issue!
  • When using Cassandra, there is a Cassandra health check that we coded: this also has never caused any issue before

-> but strangely, the combination of the "DOWN" mail health check and our specific Cassandra health check (which is "UP") makes Eureka think the application is "DOWN".

The solution I have for the moment is to remove the mail health check. This seems quite logical, as most people are not using it anyway, so it's better this way. Of course, people who do use it just need to switch it on again.
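The removal described above can also be approximated by hand, by disabling the mail health indicator in the application's YAML configuration. This is a sketch using the standard Spring Boot `management.health.mail.enabled` property, not necessarily the exact change that was merged into the generator:

```yaml
# e.g. in src/main/resources/config/application.yml
management:
    health:
        mail:
            enabled: false
```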

Hi, we have disabled the mail health check (of course we use a newer JHipster and have already made that change), but we still have a "DOWN" status, also with registration status 204.

@piaofudeyu: if you have an issue, please open a new ticket with all the required information

@piaofudeyu
Jdubois is right. Please check the yml configuration under the spring section: if a dependency such as MongoDB, Redis, or RabbitMQ is configured but not connected, the status will stay DOWN forever.
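To find which indicator is dragging the aggregate status to DOWN, you can inspect the actuator `/health` endpoint and filter its indicators. A minimal sketch in Python, run against a sample of the Spring Boot 1.x health payload (the payload below is illustrative, not taken from this issue):

```python
import json

# Hypothetical /health response: top-level "status" is the aggregate,
# each other key is an individual health indicator.
sample = """
{
  "status": "DOWN",
  "mail": {"status": "DOWN", "error": "MailSendException: connection failed"},
  "cassandra": {"status": "UP"},
  "diskSpace": {"status": "UP"}
}
"""

def down_indicators(health_json):
    """Return the names of health indicators reporting DOWN."""
    health = json.loads(health_json)
    return [name for name, detail in health.items()
            if isinstance(detail, dict) and detail.get("status") == "DOWN"]

print(down_indicators(sample))  # -> ['mail']
```

In practice you would fetch the payload with `curl http://localhost:8081/health` (the port is this issue's `serverPort`) and feed it to a filter like this one.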
