Hello everyone,
I'm using Testcontainers for E2E testing of our Spring application. Everything was working, but one day it started crashing on my PC (my colleague doesn't have any problems). docker-compose up works, I cleaned the whole Docker cache, volumes, images, containers, and so on, and I cleaned the Gradle build output, but I'm still getting the following error:
2018-11-05 16:30:10.479 DEBUG 12682 --- [ main] o.t.utility.TestcontainersConfiguration : Testcontainers configuration overrides will be loaded from file:/home/peter/.testcontainers.properties
2018-11-05 16:30:10.479 DEBUG 12682 --- [ main] o.t.utility.TestcontainersConfiguration : Testcontainers configuration overrides loaded from TestcontainersConfiguration(properties={docker.client.strategy=org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy})
2018-11-05 16:30:10.497 INFO 12682 --- [ main] o.t.d.DockerClientProviderStrategy : Loaded org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy from ~/.testcontainers.properties, will try it first
2018-11-05 16:30:10.514 INFO 12682 --- [ main] o.t.d.DockerClientProviderStrategy : Will use 'okhttp' transport
2018-11-05 16:30:10.637 DEBUG 12682 --- [ ducttape-0] o.t.d.DockerClientProviderStrategy : Pinging docker daemon...
2018-11-05 16:30:10.807 INFO 12682 --- [ main] tAndSystemPropertyClientProviderStrategy : Found docker client settings from environment
2018-11-05 16:30:10.819 INFO 12682 --- [ main] o.t.d.DockerClientProviderStrategy : Found Docker environment with Environment variables, system properties and defaults. Resolved:
dockerHost=unix:///var/run/docker.sock
apiVersion='{UNKNOWN_VERSION}'
registryUrl='https://index.docker.io/v1/'
registryUsername='peter'
registryPassword='null'
registryEmail='null'
dockerConfig='DefaultDockerClientConfig[dockerHost=unix:///var/run/docker.sock,registryUsername=peter,registryPassword=<null>,registryEmail=<null>,registryUrl=https://index.docker.io/v1/,dockerConfigPath=/home/peter/.docker,sslConfig=<null>,apiVersion={UNKNOWN_VERSION},dockerConfig=<null>]'
2018-11-05 16:30:10.820 INFO 12682 --- [ main] org.testcontainers.DockerClientFactory : Docker host IP address is localhost
2018-11-05 16:30:10.932 INFO 12682 --- [ main] org.testcontainers.DockerClientFactory : Connected to docker:
Server Version: 18.06.1-ce
API Version: 1.38
Operating System: Arch Linux
Total Memory: 7850 MB
2018-11-05 16:30:10.950 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : Looking up auth config for image: quay.io/testcontainers/ryuk:0.2.2
2018-11-05 16:30:10.950 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : RegistryAuthLocator has configFile: /home/peter/.docker/config.json (exists) and commandPathPrefix:
2018-11-05 16:30:10.953 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : registryName [quay.io] for dockerImageName [quay.io/testcontainers/ryuk:0.2.2]
2018-11-05 16:30:10.953 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : no matching Auth Configs - falling back to defaultAuthConfig [null]
2018-11-05 16:30:10.953 DEBUG 12682 --- [ main] o.t.d.a.AuthDelegatingDockerClientConfig : Effective auth config [null]
2018-11-05 16:30:14.422 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : Looking up auth config for image: quay.io/testcontainers/ryuk:0.2.2
2018-11-05 16:30:14.422 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : RegistryAuthLocator has configFile: /home/peter/.docker/config.json (exists) and commandPathPrefix:
2018-11-05 16:30:14.423 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : registryName [quay.io] for dockerImageName [quay.io/testcontainers/ryuk:0.2.2]
2018-11-05 16:30:14.423 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : no matching Auth Configs - falling back to defaultAuthConfig [null]
2018-11-05 16:30:14.423 DEBUG 12682 --- [ main] o.t.d.a.AuthDelegatingDockerClientConfig : Effective auth config [null]
2018-11-05 16:30:14.977 DEBUG 12682 --- [containers-ryuk] o.testcontainers.utility.ResourceReaper : Sending 'label=org.testcontainers%3Dtrue&label=org.testcontainers.sessionId%3D164abc25-0a92-4e4d-b6d4-c448d98e9a29' to Ryuk
2018-11-05 16:30:14.977 DEBUG 12682 --- [containers-ryuk] o.testcontainers.utility.ResourceReaper : Received 'ACK' from Ryuk
2018-11-05 16:30:14.977 INFO 12682 --- [ main] org.testcontainers.DockerClientFactory : Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
ℹ️ Checking the system...
✔ Docker version should be at least 1.6.0
✔ Docker environment should have more than 2GB free disk space
2018-11-05 16:30:15.039 INFO 12682 --- [ main] 🐳 [alpine/socat:latest] : Pulling docker image: alpine/socat:latest. Please be patient; this may take some time but only needs to be done once.
2018-11-05 16:30:15.039 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : Looking up auth config for image: alpine/socat:latest
2018-11-05 16:30:15.039 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : RegistryAuthLocator has configFile: /home/peter/.docker/config.json (exists) and commandPathPrefix:
2018-11-05 16:30:15.039 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : registryName [index.docker.io] for dockerImageName [alpine/socat:latest]
2018-11-05 16:30:15.039 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : no matching Auth Configs - falling back to defaultAuthConfig [null]
2018-11-05 16:30:15.039 DEBUG 12682 --- [ main] o.t.d.a.AuthDelegatingDockerClientConfig : Effective auth config [null]
2018-11-05 16:30:17.356 DEBUG 12682 --- [containers-ryuk] o.testcontainers.utility.ResourceReaper : Sending 'label=com.docker.compose.project%3Dyt7kd71uwxpq' to Ryuk
2018-11-05 16:30:17.358 INFO 12682 --- [ main] 🐳 [docker-compose] : Local Docker Compose is running command: pull
2018-11-05 16:30:17.369 DEBUG 12682 --- [ main] o.t.s.o.z.exec.ProcessExecutor : Executing [docker-compose, pull] in /home/peter/Workspace/rds/rds-backend/out/test/resources with environment {COMPOSE_PROJECT_NAME=yt7kd71uwxpq, COMPOSE_FILE=/home/peter/Workspace/rds/rds-backend/out/test/resources/docker-compose.yml}.
2018-11-05 16:30:17.371 DEBUG 12682 --- [ main] o.t.s.o.z.exec.ProcessExecutor : Started java.lang.UNIXProcess@48b2dbc4
2018-11-05 16:30:17.398 DEBUG 12682 --- [containers-ryuk] o.testcontainers.utility.ResourceReaper : Received 'ACK' from Ryuk
2018-11-05 16:30:17.771 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] : Pulling db ...
2018-11-05 16:30:17.771 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] : Pulling uptime-db ...
2018-11-05 16:30:19.149 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] :
2018-11-05 16:30:19.150 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] : Pulling uptime-db ... pulling from library/mysql
2018-11-05 16:30:19.182 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] :
2018-11-05 16:30:19.183 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] : Pulling db ... pulling from timescale/timescaledb
2018-11-05 16:30:19.183 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] :
2018-11-05 16:30:19.183 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] : Pulling db ... digest: sha256:48eac4ba55f6338015...
2018-11-05 16:30:19.183 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] :
2018-11-05 16:30:19.183 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] : Pulling db ... status: image is up to date for t...
2018-11-05 16:30:19.184 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] :
2018-11-05 16:30:19.185 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] : Pulling db ... done
2018-11-05 16:30:19.634 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] :
2018-11-05 16:30:19.635 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] : Pulling uptime-db ... digest: sha256:42bab37eda993e417c...
2018-11-05 16:30:19.635 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] :
2018-11-05 16:30:19.635 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] : Pulling uptime-db ... status: image is up to date for m...
2018-11-05 16:30:19.636 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] :
2018-11-05 16:30:19.636 ERROR 12682 --- [ Thread-6] 🐳 [docker-compose] : Pulling uptime-db ... done
2018-11-05 16:30:19.685 DEBUG 12682 --- [ main] o.t.s.o.z.exec.WaitForProcess : java.lang.UNIXProcess@48b2dbc4 stopped with exit code 0
2018-11-05 16:30:19.686 ERROR 12682 --- [ main] 🐳 [docker-compose] :
2018-11-05 16:30:19.687 INFO 12682 --- [ main] 🐳 [docker-compose] : Docker Compose has finished running
2018-11-05 16:30:19.687 INFO 12682 --- [ main] 🐳 [docker-compose] : Local Docker Compose is running command: up -d
2018-11-05 16:30:19.687 DEBUG 12682 --- [ main] o.t.s.o.z.exec.ProcessExecutor : Executing [docker-compose, up, -d] in /home/peter/Workspace/rds/rds-backend/out/test/resources with environment {COMPOSE_PROJECT_NAME=yt7kd71uwxpq, COMPOSE_FILE=/home/peter/Workspace/rds/rds-backend/out/test/resources/docker-compose.yml}.
2018-11-05 16:30:19.688 DEBUG 12682 --- [ main] o.t.s.o.z.exec.ProcessExecutor : Started java.lang.UNIXProcess@66e1b2a
2018-11-05 16:30:19.943 ERROR 12682 --- [ Thread-8] 🐳 [docker-compose] : Creating network "yt7kd71uwxpq_default" with the default driver
2018-11-05 16:30:20.050 ERROR 12682 --- [ Thread-8] 🐳 [docker-compose] : Creating yt7kd71uwxpq_uptime-db_1_bee56e916398 ...
2018-11-05 16:30:20.050 ERROR 12682 --- [ Thread-8] 🐳 [docker-compose] : Creating yt7kd71uwxpq_db_1_ed9e0d2e72c6 ...
2018-11-05 16:30:20.488 ERROR 12682 --- [ Thread-8] 🐳 [docker-compose] :
2018-11-05 16:30:20.488 ERROR 12682 --- [ Thread-8] 🐳 [docker-compose] : Creating yt7kd71uwxpq_uptime-db_1_bee56e916398 ... done
2018-11-05 16:30:20.521 ERROR 12682 --- [ Thread-8] 🐳 [docker-compose] :
2018-11-05 16:30:20.521 ERROR 12682 --- [ Thread-8] 🐳 [docker-compose] : Creating yt7kd71uwxpq_db_1_ed9e0d2e72c6 ... done
2018-11-05 16:30:20.560 DEBUG 12682 --- [ main] o.t.s.o.z.exec.WaitForProcess : java.lang.UNIXProcess@66e1b2a stopped with exit code 0
2018-11-05 16:30:20.561 ERROR 12682 --- [ main] 🐳 [docker-compose] :
2018-11-05 16:30:20.561 INFO 12682 --- [ main] 🐳 [docker-compose] : Docker Compose has finished running
2018-11-05 16:30:20.563 INFO 12682 --- [ main] 🐳 [alpine/socat:latest] : Creating container for image: alpine/socat:latest
2018-11-05 16:30:20.563 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : Looking up auth config for image: alpine/socat:latest
2018-11-05 16:30:20.563 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : RegistryAuthLocator has configFile: /home/peter/.docker/config.json (exists) and commandPathPrefix:
2018-11-05 16:30:20.563 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : registryName [index.docker.io] for dockerImageName [alpine/socat:latest]
2018-11-05 16:30:20.563 DEBUG 12682 --- [ main] o.t.utility.RegistryAuthLocator : no matching Auth Configs - falling back to defaultAuthConfig [null]
2018-11-05 16:30:20.563 DEBUG 12682 --- [ main] o.t.d.a.AuthDelegatingDockerClientConfig : Effective auth config [null]
2018-11-05 16:30:20.598 ERROR 12682 --- [ main] 🐳 [alpine/socat:latest] : Could not start container
org.testcontainers.containers.ContainerLaunchException: Aborting attempt to link to container yt7kd71uwxpq_uptime-db_1 as it is not running
at org.testcontainers.containers.GenericContainer.applyConfiguration(GenericContainer.java:492) [testcontainers-1.9.1.jar:na]
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:256) [testcontainers-1.9.1.jar:na]
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:237) [testcontainers-1.9.1.jar:na]
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:76) ~[duct-tape-1.0.7.jar:na]
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:235) [testcontainers-1.9.1.jar:na]
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:220) [testcontainers-1.9.1.jar:na]
at org.testcontainers.containers.DockerComposeContainer.startAmbassadorContainers(DockerComposeContainer.java:248) ~[testcontainers-1.9.1.jar:na]
at org.testcontainers.containers.DockerComposeContainer.start(DockerComposeContainer.java:158) ~[testcontainers-1.9.1.jar:na]
at com.rieter.rds.util.DockerEnv.start(DockerUtil.kt:33) ~[classes/:na]
at com.rieter.rds.util.DockerExtension.beforeAll(DockerUtil.kt:13) ~[classes/:na]
at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.lambda$invokeBeforeAllCallbacks$7(ClassTestDescriptor.java:358) ~[junit-jupiter-engine-5.3.1.jar:5.3.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.invokeBeforeAllCallbacks(ClassTestDescriptor.java:358) ~[junit-jupiter-engine-5.3.1.jar:5.3.1]
at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.before(ClassTestDescriptor.java:197) ~[junit-jupiter-engine-5.3.1.jar:5.3.1]
at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.before(ClassTestDescriptor.java:74) ~[junit-jupiter-engine-5.3.1.jar:5.3.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:102) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:95) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:71) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_192]
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:110) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:95) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:71) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) ~[junit-platform-engine-1.3.1.jar:1.3.1]
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220) ~[junit-platform-launcher-1.3.1.jar:1.3.1]
at org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188) ~[junit-platform-launcher-1.3.1.jar:1.3.1]
at org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202) ~[junit-platform-launcher-1.3.1.jar:1.3.1]
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181) ~[junit-platform-launcher-1.3.1.jar:1.3.1]
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128) ~[junit-platform-launcher-1.3.1.jar:1.3.1]
at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:74) ~[junit5-rt.jar:na]
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47) ~[junit-rt.jar:na]
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242) ~[junit-rt.jar:na]
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) ~[junit-rt.jar:na]
Can anybody help me?
Thanks
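For reference, the override file the log mentions (`~/.testcontainers.properties`) would contain just the client-strategy line shown in the DEBUG output above. This is a sketch reconstructed from that log line, not the actual file:

```properties
# Reconstructed from the TestcontainersConfiguration DEBUG line above;
# it pins the Docker client discovery strategy.
docker.client.strategy=org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy
```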
The problem seems to be in this line:
2018-11-05 16:30:20.598 ERROR 12682 --- [ main] 🐳 [alpine/socat:latest] : Could not start container
Can you please run this image manually with docker run and see what happens?
@kiview Thank you for the quick response. I tried the example from Docker Hub:
docker run --restart=always -p 127.0.0.1:2376:2375 -v /var/run/docker.sock:/var/run/docker.sock alpine/socat tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock
Unable to find image 'alpine/socat:latest' locally
latest: Pulling from alpine/socat
ff3a5c916c92: Pull complete
abb964a97c4c: Pull complete
Digest: sha256:5f245d7a2d63fccdb098834d00d9fb04c404a3f1423eb2f84045fc00e93d7c32
Status: Downloaded newer image for alpine/socat:latest
^C
Sorry, I don't have any real clue here.
Is your colleague also using Arch?
My best debugging idea would be to look at the container logs while the containers are starting, before the exception appears (and maybe the Docker daemon logs as well).
They are using Mint. For now I ended up using multiple GenericContainers. Thank you for the help, I will try to investigate the Docker daemon :slightly_smiling_face:
I actually prefer using multiple container objects over docker-compose for most Testcontainers use cases, so no objection here ;)
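A minimal sketch of that multiple-container approach, assuming the `org.testcontainers:testcontainers` dependency and a running Docker daemon. The image tags, aliases, and ports are illustrative assumptions, loosely mirroring the `db`/`uptime-db` services from the logs:

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.Network;

// Sketch only: requires a Docker daemon; image tags and ports are assumptions.
public class MultiContainerSetup {
    public static void main(String[] args) {
        // Shared network replaces the compose-managed default network.
        Network network = Network.newNetwork();

        GenericContainer<?> db = new GenericContainer<>("timescale/timescaledb:latest")
                .withNetwork(network)
                .withNetworkAliases("db")
                .withExposedPorts(5432);

        GenericContainer<?> uptimeDb = new GenericContainer<>("mysql:5.7")
                .withNetwork(network)
                .withNetworkAliases("uptime-db")
                .withExposedPorts(3306);

        db.start();
        uptimeDb.start();
        try {
            // Tests talk to the databases via the mapped ports,
            // e.g. db.getMappedPort(5432).
        } finally {
            uptimeDb.stop();
            db.stop();
            network.close();
        }
    }
}
```

With this setup Ryuk cleans up the containers and the network on JVM exit even if `stop()` is never reached.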
Hi,
It seems I'm having a similar problem: alpine/socat:latest could not be started. Internally, KafkaContainer uses a SocatContainer. My integration test uses KafkaContainer, and while loading the ApplicationContext it ends up in:
2018-11-15 15:38:46.098 INFO 24220 --- [ main] 🐳 [alpine/socat:latest] : Creating container for image: alpine/socat:latest
2018-11-15 15:38:46.098 WARN 24220 --- [ main] o.t.utility.RegistryAuthLocator : Failure when attempting to lookup auth config (dockerImageName: alpine/socat:latest, configFile: /home/fabdul/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /home/fabdul/.docker/config.json (No such file or directory)
2018-11-15 15:38:46.113 ERROR 24220 --- [ main] 🐳 [alpine/socat:latest] : Could not start container
java.lang.reflect.UndeclaredThrowableException: null
at com.sun.proxy.$Proxy95.exec(Unknown Source) ~[na:na]
at org.testcontainers.containers.Network$NetworkImpl.create(Network.java:88) ~[testcontainers-1.10.0.jar:na]
at org.testcontainers.containers.Network$NetworkImpl.getId(Network.java:59) ~[testcontainers-1.10.0.jar:na]
at org.testcontainers.containers.GenericContainer.applyConfiguration(GenericContainer.java:526) ~[testcontainers-1.10.0.jar:na]
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:256) ~[testcontainers-1.10.0.jar:na]
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:237) ~[testcontainers-1.10.0.jar:na]
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:76) ~[duct-tape-1.0.7.jar:na]
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:235) ~[testcontainers-1.10.0.jar:na]
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:220) ~[testcontainers-1.10.0.jar:na]
at org.testcontainers.containers.KafkaContainer.doStart(KafkaContainer.java:68) ~[kafka-1.10.0.jar:na]
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:220) ~[testcontainers-1.10.0.jar:na]
at org.testcontainers.lifecycle.Startable$start.call(Unknown Source) ~[na:na]
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47) ~[groovy-2.5.4.jar:2.5.4]
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:115) ~[groovy-2.5.4.jar:2.5.4]
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:119) ~[groovy-2.5.4.jar:2.5.4]
at ...................................MyTestClass.groovy.
at org.springframework.boot.SpringApplication.applyInitializers(SpringApplication.java:649) ~[spring-boot-2.1.0.RELEASE.jar:2.1.0.RELEASE]
at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:373) ~[spring-boot-2.1.0.RELEASE.jar:2.1.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:314) ~[spring-boot-2.1.0.RELEASE.jar:2.1.0.RELEASE]
at org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:127) ~[spring-boot-test-2.1.0.RELEASE.jar:2.1.0.RELEASE]
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:99) ~[spring-test-5.1.2.RELEASE.jar:5.1.2.RELEASE]
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:117) ~[spring-test-5.1.2.RELEASE.jar:5.1.2.RELEASE]
at org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:108) ~[spring-test-5.1.2.RELEASE.jar:5.1.2.RELEASE]
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:118) ~[spring-test-5.1.2.RELEASE.jar:5.1.2.RELEASE]
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:83) ~[spring-test-5.1.2.RELEASE.jar:5.1.2.RELEASE]
at org.springframework.boot.test.autoconfigure.SpringBootDependencyInjectionTestExecutionListener.prepareTestInstance(SpringBootDependencyInjectionTestExecutionListener.java:44) ~[spring-boot-test-autoconfigure-2.1.0.RELEASE.jar:2.1.0.RELEASE]
at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:246) ~[spring-test-5.1.2.RELEASE.jar:5.1.2.RELEASE]
at org.spockframework.spring.SpringTestContextManager.prepareTestInstance(SpringTestContextManager.java:56) ~[spock-spring-1.2-groovy-2.5.jar:1.2]
at org.spockframework.spring.SpringInterceptor.interceptInitializerMethod(SpringInterceptor.java:43) ~[spock-spring-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.extension.AbstractMethodInterceptor.intercept(AbstractMethodInterceptor.java:24) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.extension.MethodInvocation.proceed(MethodInvocation.java:97) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.invoke(BaseSpecRunner.java:475) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.runInitializer(BaseSpecRunner.java:341) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.runInitializer(BaseSpecRunner.java:336) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.initializeAndRunIteration(BaseSpecRunner.java:274) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.runSimpleFeature(BaseSpecRunner.java:266) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.doRunFeature(BaseSpecRunner.java:260) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner$5.invoke(BaseSpecRunner.java:243) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.invokeRaw(BaseSpecRunner.java:484) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.invoke(BaseSpecRunner.java:467) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.runFeature(BaseSpecRunner.java:235) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.runFeatures(BaseSpecRunner.java:185) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.doRunSpec(BaseSpecRunner.java:95) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner$1.invoke(BaseSpecRunner.java:81) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.invokeRaw(BaseSpecRunner.java:484) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.invoke(BaseSpecRunner.java:467) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.runSpec(BaseSpecRunner.java:73) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.BaseSpecRunner.run(BaseSpecRunner.java:64) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.spockframework.runtime.Sputnik.run(Sputnik.java:63) ~[spock-core-1.2-groovy-2.5.jar:1.2]
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) ~[surefire-junit4-2.22.1.jar:2.22.1]
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) ~[surefire-junit4-2.22.1.jar:2.22.1]
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) ~[surefire-junit4-2.22.1.jar:2.22.1]
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) ~[surefire-junit4-2.22.1.jar:2.22.1]
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) ~[surefire-booter-2.22.1.jar:2.22.1]
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) ~[surefire-booter-2.22.1.jar:2.22.1]
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) ~[surefire-booter-2.22.1.jar:2.22.1]
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) ~[surefire-booter-2.22.1.jar:2.22.1]
Caused by: java.lang.reflect.InvocationTargetException: null
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.testcontainers.dockerclient.AuditLoggingDockerClient.lambda$wrappedCommand$14(AuditLoggingDockerClient.java:98) ~[testcontainers-1.10.0.jar:na]
... 57 common frames omitted
Caused by: com.github.dockerjava.api.exception.NotFoundException: {"message":"could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network"}
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.execute(OkHttpInvocationBuilder.java:273) ~[testcontainers-1.10.0.jar:na]
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.execute(OkHttpInvocationBuilder.java:257) ~[testcontainers-1.10.0.jar:na]
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.post(OkHttpInvocationBuilder.java:128) ~[testcontainers-1.10.0.jar:na]
at com.github.dockerjava.core.exec.CreateNetworkCmdExec.execute(CreateNetworkCmdExec.java:27) ~[testcontainers-1.10.0.jar:na]
at com.github.dockerjava.core.exec.CreateNetworkCmdExec.execute(CreateNetworkCmdExec.java:12) ~[testcontainers-1.10.0.jar:na]
at com.github.dockerjava.core.exec.AbstrSyncDockerCmdExec.exec(AbstrSyncDockerCmdExec.java:21) ~[testcontainers-1.10.0.jar:na]
at com.github.dockerjava.core.command.AbstrDockerCmd.exec(AbstrDockerCmd.java:35) ~[testcontainers-1.10.0.jar:na]
... 62 common frames omitted
Versions used:
Java 11
testcontainers/kafka: 1.10.1
Ubuntu 16.04
The problem I posted yesterday was solved by cleaning up the iptables rules, bridge network devices, and routing table entries created by Docker containers:
docker network prune --filter "until=24h"
But I still noticed that containers are not stopped properly when there are test failures. I have an application with around 100 integration tests, all of which use testcontainers-postgresql; about 50 tests failed for unrelated reasons, and afterwards `docker ps` showed about 50 postgres:x.x.x containers still running on my local system. This looks like a bug in Testcontainers. Or does it have to be handled manually in our tests with postgresContainer.stop()?
Having the same issue as @faris-git - any progress on this?
I pruned the networks, stopped and removed all Docker containers, restarted the Docker engine as well as macOS, upgraded testcontainers-mysql to the latest version and then downgraded it again, and magically my test works now.
@faris-git the cleanup of containers (and networks) should be tackled automatically, so something is not right here... Do you see any logs from ResourceReaper or "Ryuk" during test runs? In particular, any errors from these components?
Sent with GitHawk
I ran into this issue when running many tests in one job, and when several tests got interrupted (e.g. because the ApplicationContext could not start). When tests are interrupted, Testcontainers can't stop the containers properly (especially the socat container), as in the error stack above.
While there might be scenarios in which the containers can't be shut down properly during the test execution, they should always be removed shortly after JVM exit.
So when you checked that the containers were still running, was the JVM still running as well?
You are right, the containers are stopped one by one after JVM exit. But I found that the opened networks are not killed/pruned properly. That's why, in my case, Kafka couldn't start: there was no free socat (ZooKeeper) available somehow. Maybe it is system-specific.
After manually clearing the Docker networks, everything went fine :) `docker network prune`
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If you believe this is a mistake, please reply to this comment to keep it open. If there isn't one already, a PR to fix or at least reproduce the problem in a test case will always help us get back on track to tackle this.
This issue has been automatically closed due to inactivity. We apologise if this is still an active problem for you, and would ask you to re-open the issue if this is the case.
Encountering this issue with networks not being pruned properly, eventually leading to the error. Pruning the networks does resolve it as a workaround, but we could definitely use a permanent fix. Some co-workers are running into this about once a day.
Are you sure this is the same issue?
Which version of Testcontainers are you using?
Networks should be cleaned up by the Ryuk container, at worst after JVM exit.
Fairly certain it's the same issue: I can prune the networks, run tests that invoke Testcontainers, and the network is still showing when I run `docker network ls`. I'm running 1.11.1.
@nateha1984 are you running Docker Compose or "normal" containers?
We're using Docker Compose
Just hit what I believe is the same issue - and docker network prune _(Deleted Networks: ... x21)_ fixed it.
There seems to be a bug in Docker Compose, I reported it here:
https://github.com/docker/compose/issues/6636
It looks like the workaround suggested in the Docker Compose issue won't work with Testcontainers at the moment, as Testcontainers uses Docker Compose 1.8.0, and v2.1 compose files require Docker Compose 1.9.0+. Any plans to upgrade the Docker Compose version?
@nateha1984 can you use the local compose mode?
@bsideup said:
There seems to be a bug in Docker Compose, I reported it here:
docker/compose#6636
Then @shin-, the maintainer from Docker Compose, said in that thread:
@shin- said:
Hey @chris-crone @bsideup -- The reason for this is the v2.0 file format is mapped to an API version that didn't support labels on networks and volumes (see the code here). Using 2.1 or above will solve this issue.
The solution to this issue is to use Docker Compose file version 2.1, whereas Testcontainers only supports version 2.0 (I just tried). So the networks aren't going to get automatically pruned, because they aren't being labelled properly.
Either Docker Compose will have to support labelling of networks with file version 2.0, or Testcontainers will have to support file version 2.1.
@rnorth, @bsideup: Are you guys going to support Compose file version 2.1?
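For reference, the only change needed in the compose file itself would be the version line, so that networks and volumes get labels. A minimal sketch (the service names and image tags below are illustrative, not from this project):

```yaml
# docker-compose.yml - version 2.1 so networks/volumes are labelled
# (services shown are placeholders for illustration)
version: '2.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
```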
In my case the issue went the same scenario as in the original report:
Everything was working, but one day on my PC (my colleague don't have any problem) it started crashing.
I realized that this _one day_ was exactly the day I added some lightweight unit tests alongside the integration tests that use Testcontainers' docker-compose support. I am using the static singleton container pattern.
Plus: running docker-compose manually (not through Testcontainers) works reliably.
Plus: it's not a local issue; it reproduces on my build server, with a fresh Docker instance every time.
Given all that (and the logs), my suspicion is that the container is started _twice_: once in the Maven Surefire plugin execution (unit tests), and once in the Failsafe execution (integration tests). The second run then fails because the default network is already occupied.
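A sketch of the split I mean, assuming the conventional Surefire/Failsafe naming scheme (`*Test` for unit tests, `*IT` for integration tests); the fragment is illustrative, not our actual `pom.xml`:

```xml
<!-- pom.xml fragment: keep Testcontainers-based tests out of the
     Surefire (unit test) phase so docker-compose only starts once,
     during the Failsafe (integration test) phase -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/*IT.java</exclude>
    </excludes>
  </configuration>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```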
I'm going to experiment with some options; maybe this gives others some ideas on how to resolve it.
UPD: the issue was solved for me by introducing lazy instantiation of the compose container in my tests.
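For anyone wondering what I mean by lazy instantiation, here is a minimal sketch. The names are placeholders, not the Testcontainers API; in real code `startCompose()` would build and start a `DockerComposeContainer`:

```java
// Sketch of lazy, on-demand instantiation. ComposeEnvironment and
// startCompose() are placeholders; in real code startCompose() would
// create and start a DockerComposeContainer. Unit tests that never
// call get() never touch Docker, so the compose network is created
// at most once, only when an integration test actually needs it.
public class ComposeEnvironment {
    private static Object container; // stand-in for DockerComposeContainer

    // First caller pays the startup cost; later callers reuse the instance.
    public static synchronized Object get() {
        if (container == null) {
            container = startCompose();
        }
        return container;
    }

    private static Object startCompose() {
        return new Object(); // placeholder for the expensive docker-compose up
    }
}
```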
We have a very similar issue with docker-compose based tests (using Testcontainers) hanging on Jenkins (Ubuntu) but running fine locally (Mac, Windows). The same test sometimes runs successfully on Jenkins, then fails 10 times in a row (sporadic). The network seems OK: the slave is a freshly installed Ubuntu machine with nothing else running (a physical PC, not a VM or CircleCI environment, etc.). Pruning everything and restarting the whole machine has little effect.
The PC running the Jenkins slave has some strange DNS issues, though: sometimes it's not pingable by name. The problem might be unrelated, but I thought I'd mention it as it could be input for someone trying to fix this.
No obvious errors can be seen; it just hangs there and fails after waiting for at least 10 minutes, which is also strange because the timeout is actually 5 minutes. Anyway, it would be nice to have the docker-compose support running correctly. Normal Testcontainers tests run perfectly. For now we just convert all docker-compose based tests to multiple individual containers; so far this is working for us. But maybe you could deprecate docker-compose tests until this is fixed in a stable manner, as it is time-consuming to switch testing styles.
@Schwaller we do _recommend_ using GenericContainers rather than Docker Compose, as they are more reliable and provide better features. That said, there are no known issues with Docker Compose, let alone blockers that would make us deprecate it any time soon. But you should always remember that the Docker Compose integration delegates most of the work to the Docker Compose binary, which may behave weirdly. Last but not least, the DNS issue you mentioned suggests there is some problem with the network on that machine, which may affect Docker Compose.
@bsideup thx for the quick response! I understand that it's not ideal that DNS seems imperfect, but this has never stopped any other process from working correctly. I can run the docker-compose file on that machine manually without any issues. But as soon as Testcontainers and docker-compose are involved, chances are around 80% that within a month it will develop sporadic issues.
If you think it works for others I guess I have to take your word for it ;-)
BTW thx for creating/maintaining testcontainers! Great stuff!