On our main application, we have been noticing TIMEOUT errors (the red alerts) when running the entire stack through Docker; running just the registry, gateway, and service via a ./gradlew command doesn't seem to trigger the timeout so far. The error occurs only intermittently when the gateway attempts to call a separate service; we do not see the error when attempting to sign in, for instance.
Prior to submitting this issue, I created a new application (still using the microservices configuration) consisting of a new gateway (called gateway) and a microservice (called service1); their respective configurations are provided below.
Docker logs for the gateway and service are here (note that this was started using the prod profile): http://pastebin.com/LPGfg78U
On the client side, the browser prints an error to the console, e.g. in Chrome:
http://localhost:8080/service1/api/stuffs?cacheBuster=1469041883439&page=0&size=20&sort=id,asc 500 (Internal Server Error)
While searching through past issues here and on SO, I found a couple of other reports of a similar issue. I'm hoping to confirm whether this is truly a bug or whether it's due to something I'm overlooking or misconfiguring.
JHipster version: 3.4.2
.yo-rc.json file generated in the root folder

Gateway:
{
  "generator-jhipster": {
    "jhipsterVersion": "3.4.2",
    "baseName": "gateway",
    "packageName": "some.sample",
    "packageFolder": "some/sample",
    "serverPort": "8080",
    "authenticationType": "jwt",
    "hibernateCache": "hazelcast",
    "clusteredHttpSession": "hazelcast",
    "websocket": "no",
    "databaseType": "sql",
    "devDatabaseType": "postgresql",
    "prodDatabaseType": "postgresql",
    "searchEngine": "no",
    "buildTool": "gradle",
    "jwtSecretKey": "d4d86ce6490f6214302b2c628db4aa01dd86dbdd",
    "useSass": true,
    "applicationType": "gateway",
    "testFrameworks": [],
    "jhiPrefix": "jhi",
    "enableTranslation": false
  }
}
Service:
{
  "generator-jhipster": {
    "jhipsterVersion": "3.4.2",
    "baseName": "service1",
    "packageName": "some.sample",
    "packageFolder": "some/sample",
    "serverPort": "8081",
    "authenticationType": "jwt",
    "hibernateCache": "hazelcast",
    "databaseType": "sql",
    "devDatabaseType": "postgresql",
    "prodDatabaseType": "postgresql",
    "searchEngine": "elasticsearch",
    "buildTool": "gradle",
    "jwtSecretKey": "2515e9c0c90ac006134365126452884edf76cfe9",
    "enableTranslation": true,
    "applicationType": "microservice",
    "testFrameworks": [],
    "jhiPrefix": "jhi",
    "skipClient": true,
    "skipUserManagement": true,
    "clusteredHttpSession": "no",
    "websocket": "no",
    "enableSocialSignIn": false,
    "nativeLanguage": "en",
    "languages": [
      "en",
      "fr"
    ]
  }
}
entityName.json files generated in the .jhipster directory

The JDL was not used; the entity was created using yo jhipster:entity, with the appropriate skip flags for the service and gateway.
Gateway:
{
  "relationships": [],
  "fields": [
    {
      "fieldName": "field1",
      "fieldType": "String"
    }
  ],
  "changelogDate": "20160720190010",
  "dto": "no",
  "service": "no",
  "entityTableName": "stuff",
  "pagination": "pagination",
  "microserviceName": "service1",
  "searchEngine": "no"
}
Service:
{
  "relationships": [],
  "fields": [
    {
      "fieldName": "field1",
      "fieldType": "String"
    }
  ],
  "changelogDate": "20160720190010",
  "dto": "no",
  "service": "no",
  "entityTableName": "stuff",
  "pagination": "pagination",
  "microserviceName": "service1",
  "searchEngine": "elasticsearch"
}
OS: OSX 10.11.5
Browsers: Firefox (45.2.0) and Chrome (51.0.2704.106)
Start up using the usual docker-compose up -d command. While logged in, attempt to access an entity's page: for example, trigger the default GET request to list the entity, or a POST when saving a new item. The request still goes through; creating a new item is actually persisted to the DB and displays on the site after a refresh.
The most similar issue previously reported is #3771, but it sounds like it only occurs on the first call for that reporter. As in that issue, waiting a few minutes doesn't prevent the problem; I was also able to trigger it again when I attempted to create a new item (for the Stuff entity). I wasn't able to find any issues that also involve Docker.
I haven't been able to pinpoint the exact problem. As mentioned in the linked issue, increasing the timeout may be a band-aid solution, but it doesn't answer why the requests can occasionally time out.
running just the registry, gateway, and service via a ./gradlew command doesn't seem to trigger the timeout so far
With docker-compose, everything runs in the prod profile.
Can you try to package in the prod profile, then use java -jar target/project.war --spring.profiles.active=prod and try to reproduce the timeout error?
Gateway (this is immediately before the exception):
2016-07-20 18:22:32.196 INFO 42830 --- [nio-8080-exec-1] c.h.partition.InternalPartitionService : [192.168.1.21]:5702 [dev] [3.6.1] Initializing cluster partition table arrangement...
2016-07-20 18:22:32.247 WARN 42830 --- [nio-8080-exec-1] o.s.c.n.zuul.web.ZuulHandlerMapping : No routes found from RouteLocator
2016-07-20 18:22:59.615 WARN 42830 --- [rixMetricPoller] com.netflix.spectator.api.Spectator : no config impl found in classpath, using default
2016-07-20 18:23:04.179 WARN 42830 --- [nio-8080-exec-7] o.s.c.n.z.filters.post.SendErrorFilter : Error during filtering
Registry:
2016-07-20 18:22:01.085 INFO 42331 --- [nio-8761-exec-3] c.n.e.registry.AbstractInstanceRegistry : Registered instance SERVICE1/service1:41dcbd210229b34252a852501f7aa016 with status UP (replication=false)
2016-07-20 18:22:01.600 INFO 42331 --- [io-8761-exec-10] c.n.e.registry.AbstractInstanceRegistry : Registered instance SERVICE1/service1:41dcbd210229b34252a852501f7aa016 with status UP (replication=true)
(Edited to remove the service1 snippet, which was a duplicate of the registry's. There wasn't anything eventful in the service1 logs.)
Noticed that there was a 3.5 release, so I created a new project and repeated what Pascal suggested. It looks like the issue is reproducible there as well, with the same errors/output.
Additional notes:
1) I can make a request directly to the service using Postman without any timeout, yet a subsequent request for the same resource via the actual site resulted in a timeout.
2) When starting up the gateway/service using the java -jar method, I see the timeout error during pretty much any request to the service. Previously, using ./gradlew, I would only see the error during the first request and not subsequent ones.
We are having this same problem in our project. We "solved" it by increasing the timeout; I think that's a dirty solution. We haven't found the source of the problem yet. Following.
Same problem for me, though in my case it occurs whether the registry, services, and gateway are launched using Docker or using the WAR files directly.
@evelknievel can you indicate which timeouts you modified to make the errors disappear? I can't find anything related in the config files.
Some settings I use on gateway:
zuul:
  host:
    connect-timeout-millis: 5000
    socket-timeout-millis: 10000
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 10000
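To see why raising timeoutInMilliseconds makes the intermittent errors go away, here is a purely illustrative sketch (Python, not JHipster code; call_with_timeout and backend are hypothetical names) of the mechanism: the gateway runs each proxied call under a deadline, the way Hystrix's thread isolation does, so a backend that is slow only on its first, cold call still produces a TIMEOUT even though the request itself eventually succeeds.

```python
import concurrent.futures
import time

def call_with_timeout(fn, timeout_s):
    """Run fn on a worker thread and give up after timeout_s seconds,
    roughly how Hystrix enforces execution.isolation.thread.timeoutInMilliseconds."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            # Zuul surfaces this to the browser as a 500 with message "TIMEOUT";
            # note the backend thread keeps running and may still complete.
            raise TimeoutError("TIMEOUT") from None

# Toy backend: slow on the first (cold) call, fast afterwards.
calls = {"n": 0}
def backend():
    calls["n"] += 1
    time.sleep(0.3 if calls["n"] == 1 else 0.01)
    return "ok"
```

With a 0.1 s deadline the first call raises TIMEOUT while the work still completes in the background (matching the report above that a "failed" POST is nevertheless persisted); raising the deadline to 0.5 s, analogous to bumping timeoutInMilliseconds, lets the same call succeed.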
Well, now I don't have TIMEOUTs anymore, thanks @gmarziou :)
That doesn't look very clean, but if it indeed solves the issue, we should have those parameters by default.
@gmarziou would you like to do the PR? Otherwise I could do it, of course.
I will do it by the end of the week.
Hello, I have the same problem:
{
"timestamp" : "2018-05-12T21:21:37.326+0000",
"status" : 500,
"error" : "Internal Server Error",
"exception" : "com.netflix.zuul.exception.ZuulException",
"message" : "TIMEOUT"
}
My gateway can't connect to the microservice, even though both are correctly registered on the registry server.
Is there a real solution to this problem?
That was fixed more than 2 years ago... so unless you haven't upgraded, you don't have the same issue.
@jdubois I just downloaded jhipster and jhipster-registry from GitHub.
Do I have to update jhipster-registry myself? I see that my JHipster microservice and gateway have the latest version, 4.14.13, but jhipster-registry has 4.3.0 in its .yo-rc.json file.
@csu6 please follow the project guidelines, if you have an issue, open a ticket with the specific details - I can't help you if I don't have the full details.
And concerning the version numbers: JHipster Registry is a separate project, so yes, it has a different version number, but the Docker Compose configuration is already set up with the correct version.
OK, I'll try again, but I don't use Docker.
@csu6 the most interesting part of the Docker Compose configuration (see jhipster-registry.yml) is that it gives you the recommended JHipster Registry version number for your application.
I just use this tutorial : http://www.baeldung.com/jhipster-microservices