Testcontainers-java: Podman support

Created on 20 Nov 2019 · 88 comments · Source: testcontainers/testcontainers-java

It would be really nice if Testcontainers supported Podman.
It would be a real game changer to no longer depend on a Docker daemon.


All 88 comments

Thanks for asking! Since this is not the first time somebody has asked for Podman, I would like to post a (most probably incomplete) list of requirements we need to do Testcontainers' magic:

  • [ ] Cross-platform support. It should work exactly the same on Linux, Mac and Windows, without having us do the VM management
  • [ ] Docker networks, ability to start/connect/disconnect containers with/to Networks
  • [ ] Image management - pulling (including the various authorisation mechanisms), listing, building
  • [ ] Long running processes that survive the test run
  • [ ] Access to /var/run/docker.sock from inside the container, ability to do regular Docker commands with it
  • [ ] Log access/streaming
  • [ ] Stdin/Stdout/Stderr streaming
  • [ ] There are many Docker images that require root user, these should work too
  • [ ] Volumes management

I am not a Podman expert; if somebody from the community can help clarify these, I would be happy to learn, and, as long as every point is clarified (the list is not 100% complete and we may need to add more in future), we can consider adding support for Podman.

and BTW, "daemonless" here makes it harder for Testcontainers :)

From my Podman experience:

  • Access to /var/run/docker.sock: I don't think so, since there is no daemon;
  • Cross-platform support: Podman only works on Linux. I haven't checked the internals of Testcontainers yet, but Podman offers an alias to replace the docker command, so Testcontainers could use Podman on Linux with the same CLI syntax as Docker (if the CLI is used);
  • Image building: performed by Buildah

I wish @rhatdan could chime in on the discussion to clarify or correct my statements :smiley:

if the CLI is used

We're not using the CLI, but the REST API.

It seems to me that we should rather focus on waiting for rootless Docker (an ongoing effort upstream); Podman as it is (plus Buildah, plus whatever else is needed) cannot provide the same functionality we need to do what we're doing in Testcontainers.

Dropping this here because I found it useful and relevant to the discussion :)

https://medium.com/nttlabs/cgroup-v2-596d035be4d7

We're not using the CLI, but the REST API.

Podman recently started offering a REST API: https://podman.io/blogs/2020/01/17/podman-new-api.html

@ppalaga nice!

Do you have a compatibility table or something, to see which of Docker's API endpoints are supported? Thanks!

@bsideup
I would help put that table together. Is there a list of Docker API endpoints (and options) Testcontainers uses, or a test suite we can run against Podman's Docker-compatibility API?

@crunchtime-ali I believe running Testcontainers' test suite is the best way to check whether all APIs are supported or not. Our tests should cover all types of Docker features we are using.

You can also start a TCP proxy between Testcontainers and Podman to collect the list of endpoints that are used.

Some more information that might be helpful:

https://github.com/containers/libpod/blob/master/docs/source/markdown/podman-system-service.1.md

https://github.com/containers/libpod/blob/master/API.md

The Podman API supposedly has a Docker-API-compatible endpoint.

And it can also expose a socket file, which should meet the requirement of having a /var/run/docker.sock (or equivalent) inside the container.

Correct, we are under heavy development and are looking at Testcontainers to prove that we handle the Docker API correctly. We have an extended version to support Podman's advanced features as well.

I'm also looking at doing the reverse: running Testcontainers tests in Podman CI. Solving the challenges here will help me get there. (subscribed)

How does one run the test suite? Hints on how to specify the socket to connect to would be a bonus. I'll do the work.

@baude

Thanks for working on it!

we read the same environment variables as the Docker CLI does (as one of the strategies we support), e.g. DOCKER_HOST

By default, we also search for a Docker environment in well-known locations (like /var/run/docker.sock)

Our primary CI is Azure Pipelines:
https://github.com/testcontainers/testcontainers-java/blob/1.12.5/azure-pipelines.yml

@baude @rhatdan
Great to see you get involved and that we can hope for real progress towards feature parity and Podman compatibility in Testcontainers. I know a lot of Testcontainers users in stricter enterprise environments who would be very happy about Podman support 🙂

If there is any way I can support you, feel free to get in touch or join our Slack, if you want to have more extensive discussions.

Thanks @baude for taking this up; the Java stuff is out of my depth, but poke me on IRC if/when I can help with any automation things.

Recent update here: our compatibility layer requires the use of versioned paths (per API version). The Testcontainers suite does not appear to deal with that. I wrote a small Java app using docker-java, which Testcontainers also uses, and by default the versioned paths are not used there, but it does provide the capability to do so with:

.withApiVersion("1.24")

So we need some way to configure that in Testcontainers, and perhaps things will progress.

Update! A contributor has fixed the above, so now it's just a matter of understanding how to dissect the test failures.

Defining DOCKER_HOST with /var/run/user/1000/podman/podman.sock generates this error: java.io.IOException: [111] Connection refused, even though the socket is created by the same Linux user account (running the Podman API with systemctl --user start podman).

Defining DOCKER_HOST with /var/run/user/1000/podman/podman.sock generates this error: java.io.IOException: [111] Connection refused, even though the socket is created by the same Linux user account (running the Podman API with systemctl --user start podman).

Podman != Docker.

I'd just mention that I came to this ticket after installing Fedora 32 and realising I couldn't get docker-ce to run without lots of kernel/firewall changes: https://github.com/docker/for-linux/issues/955

Looks like Red Hat is pushing Podman:
https://developers.redhat.com/blog/2019/02/21/podman-and-buildah-for-docker-users/

So until/unless the Docker ticket is fixed, future Fedora installations of Docker are out of reach for layman users.
However, I'm wiping and rebuilding with 31; can't live without Testcontainers ;-)

Looks like Red Hat is pushing Podman

It is so indeed, although Podman does not support all of Docker's features and does not work with tools like Docker Compose, Red Hat's own Fabric8 Docker Maven Plugin (https://github.com/fabric8io/docker-maven-plugin/issues/1330), Testcontainers and others.

It is a bit sad to see so much push with statements like "just replace the Docker CLI with the Podman CLI and _everything_ will keep working" while, in reality, there is a ton of things missing.
There were some attempts at adding a Docker compatibility layer to Podman, but the last information I heard was that it does not pass Testcontainers' test suite.

However, I'm wiping and rebuilding with 31; can't live without Testcontainers ;-)

Thank you! FYI @kiview is running Fedora 32 with Docker; he just had to apply a couple of modifications (cgroups v1, firewall settings).


Also FYI, I did some experiments with rootless Docker and apparently it is working! (in PoC mode)
I will explore further because I believe this is the way to go - good old Docker, just without the root requirement.

It is a bit sad to see so much push with statements like "just replace the Docker CLI with the Podman CLI and _everything_ will keep working" while, in reality, there is a ton of things missing.

Could you please list which specific features Podman is missing to be able to serve as a replacement for Docker in Testcontainers?

@ppalaga sorry, but I don't know (I am on a Mac and can't even test Podman)

perhaps ask @baude:
https://github.com/testcontainers/testcontainers-java/issues/2088#issuecomment-595960775

Also, if Podman serves as a drop-in replacement for Docker (as claimed), it should be trivial to test - just run Testcontainers' and docker-java's test suites with it.

The original goal was Podman as a drop-in replacement for the Docker CLI, which it has done a pretty damn good job at. The next level is to have Podman implement the Docker API, which is ongoing and heavily being worked on right now. We call this APIV2. You should be seeing release candidates for Podman 2.0 right now, where we are beginning the testing on it. If people could run test suites like Testcontainers' and docker-java's, we would like to see the results.

Potential issues we have seen so far are:

  • Test suites like docker-py's, which rely on Docker Swarm, which we don't intend to implement.
  • Docker Compose has issues, since Podman networking and Docker networking are very different.

We are attempting to work around these issues, to see where we stand on these test suites.

Bottom line: we need help from the community with tests and fixes to get to the point where we can support the Docker.sock API.

@rhatdan Thanks for the update!

If someone from your side can help me configure Podman on GitHub Actions, I could add a branch to docker-java and run the test suite (excluding the Swarm Mode tests).
Once done, I can do the same for Testcontainers 👍

@cevich PTAL
@bsideup Thanks.

@bsideup what do you need? A GitHub notification set up?

@baude an example of running GitHub Actions with Podman configured

After restarting the Linux system, I get this error:

May 26, 2020 11:44:25 PM org.testcontainers.dockerclient.DockerClientProviderStrategy lambda$getFirstValidStrategy$2
INFO: Found Docker environment with Environment variables, system properties and defaults. Resolved dockerHost=unix:///run/user/1000/podman/podman.sock
...
Caused by: org.testcontainers.containers.ContainerFetchException: Can't get Docker image: RemoteDockerImage(imageNameFuture=java.util.concurrent.CompletableFuture@74ad8d05[Completed normally], imagePullPolicy=DefaultPullPolicy(), dockerClient=LazyDockerClient.INSTANCE)
    at org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1265)
    at org.testcontainers.containers.GenericContainer.logger(GenericContainer.java:600)
    at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:311)
    ... 45 more
Caused by: com.github.dockerjava.api.exception.InternalServerErrorException: {"cause":"no such image","message":"NewFromLocal(): unable to find 'quay.io/testcontainers/ryuk:0.2.3' in local storage: no such image","response":500}

It looks better now, but it might be a problem with the Podman config or something else.

I will do a new test with Podman 1.9.3-2 (Arch Linux), and maybe 2.0.

Is there a reason to use rootless Podman? I would figure the equivalent is rootful?

Using the system Podman API socket fails with
Caused by: java.io.IOException: com.sun.jna.LastErrorException: [13] Permission denied
even after changing the socket ACL to 0666.

@ruddy32 Does the API socket respond to curl? See https://liquidat.wordpress.com/2020/04/20/howto-using-the-new-podman-api/amp/

Your previous "no such image" error was probably due to misconfigured registries; Podman doesn't default to docker.io. I guess you already figured that one out, though.

Using the API with curl works fine.

The repository configuration is set up with registries = ['docker.io', 'registry.fedoraproject.org', 'quay.io', 'registry.access.redhat.com', 'registry.centos.org'].

I understand this configuration is OK for quay.io.

Finally had some time to test this. There seem to be a few issues with Podman's current implementation:

  • In Podman the created timestamp for an image is a string ( 2020-06-23T11:06:02.096204727Z ); in Docker it's an integer (long)
  • Podman doesn't seem to support filters for images ( e.g. http://localhost/images/json?filter=testcontainersofficial%2Fryuk%3A0.3.0 )

I'll report these issues to containers/libpod. (A quick way to compare the raw compat API responses is sketched below.)

containers/libpod#6796
containers/libpod#6797

docker-java/docker-java#1424

Patched docker-java locally and now it succeeds in pulling the ryuk image, but it crashes when it starts to launch it. There's something wrong with the Podman service when creating rootless containers (could be OS-specific).

Issue documented here: containers/libpod#6798

Unfortunately there's yet another issue: containers/libpod#6799

My next thought was to disable ryuk for now. However, that didn't get me much further. There also seems to be an issue related to polling of image pull status. I won't bother reporting that before the container create endpoint works as expected.

Something is also broken in Testcontainers' startup checks :-) While the checks can be bypassed with a configuration option, it makes sense to fix them.

While there are multiple issues, I don't think the situation looks hopeless at all. Red Hat is doing an awesome job coordinating the Podman effort. If I read the statistics correctly, there's more activity on podman than there is on moby. And then there are buildah, fuse-overlayfs and CRI-O too :-)

Finally had time to set up an environment to work with a debugger. This speeds things up from my perspective.

Podman doesn't currently expose port information the same way that Docker does: containers/libpod#6803

@ricardozanini The REST API is the one that I'm testing against 🙂

There are still a few blockers; the most notable are issues with the container creation and exec endpoints. I was told that the first one will be resolved by a major refactoring that unifies code paths for the libpod (Podman) and compat (Docker) API handlers for container creation. The issue with the exec API was only recently posted, so that might take a while to be resolved.

I also believe there might be a compatibility issue with image pulling, related to the way that pull progress is monitored by docker-java. I didn't file an issue for that yet, since the other issues are of higher priority (documenting an issue takes quite a bit of time).

I might be moving on to testing the GitLab runner while these issues are being resolved 🙂

Exec issues were fixed with a simple patch. There is still some kind of issue with either connection management or connection tracking (Podman's connection counts show that there are active connections while netstat shows none).

Here's a first passing "test":

import java.util.Collections;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import ch.qos.logback.classic.Level;
import redis.clients.jedis.Jedis;

@Testcontainers
public class NotATest {
    public static class SomeContainer extends GenericContainer<SomeContainer> {
        public SomeContainer() {
            super("redis:5.0.3-alpine");
            withExposedPorts(6379);
            setPortBindings(Collections.singletonList("6379:6379/tcp"));
            setCommand("docker-entrypoint.sh", "redis-server");
        }
    }

    @Container
    public GenericContainer<?> redis = new SomeContainer();


    @BeforeEach
    public void setUp() {
        ch.qos.logback.classic.Logger tc = (ch.qos.logback.classic.Logger) org.slf4j.LoggerFactory.getLogger("org.testcontainers");
        tc.setLevel(Level.ALL);
        ch.qos.logback.classic.Logger dj = (ch.qos.logback.classic.Logger) org.slf4j.LoggerFactory.getLogger("com.github.dockerjava");
        dj.setLevel(Level.ALL);
        ch.qos.logback.classic.Logger ap = (ch.qos.logback.classic.Logger) org.slf4j.LoggerFactory.getLogger("org.apache.http");
        ap.setLevel(Level.ALL);

        String address = redis.getHost();
        Integer port = redis.getFirstMappedPort();

        System.out.println(address + ":" + port);
    }

    @Test
    public void testSimpleSetAndGet() {
        Jedis jedis = new Jedis("localhost");

        jedis.set("foo", "yes");
        System.out.println("Redis is a live: " + jedis.get("foo"));

        jedis.close();
    }
}

There is a pull request pending that is supposed to fix the setCommand part, but for now it's mandatory to define a command to prevent Podman from creating an invalid container. The publish-all flag wasn't working correctly last time I checked, which explains why I'm calling both withExposedPorts and setPortBindings. There might be a fair bit of work to be done to fix publish-all, considering that the libpod and Podman v2 (Docker) APIs currently use different code paths. The libpod API seems far more reliable than v2 at the moment.

For now I'm running with TESTCONTAINERS_RYUK_DISABLED set to true.

containers/podman#6835 fixed the requirement to use setCommand. So now you need to look for containers/podman#6918, which fixes some leaks related to the use of exec. There's currently no issue open for exposing ports, and unfortunately I will soon be off for summer holidays (4 weeks). I did ping the person who fixed setCommand and he might be able to fix the issue with ports.

https://github.com/containers/podman/pull/6835#issuecomment-656194612
@skorhone Could you show the steps to reproduce the error?

@zhangguanzhang If you run the test case I posted a few comments back and remove setPortBindings, you should be able to replicate the behaviour that I see. Be sure to disable Ryuk.

I'm running Podman in my Testcontainers tests just like I've done in: https://github.com/skorhone/libpod-gitlab-it/blob/cirrus/.cirrus.yml . TCP binding makes it easier to capture the API calls. docker-java supports reading the target from DOCKER_HOST, so it's trivial to use TCP with it.

If you want to build a test case manually, you need a container with a service that binds to a TCP port, e.g. netcat (nc). Then expose the port and set publish-all-ports to true. The exposed port should be published to a randomized port, as sketched below.

I see this in CI:

panic: Local repo not found, please run `make development_setup`

Could you retry the test after running that make target?

@zhangguanzhang It wouldn't help. CI is currently running tasks in a container (Docker) and it seems that Podman doesn't behave too well there. I have plans to move this CI build to a VM instead; I just need to wait until I get back to my workstation (~4 weeks from now).

The bits that you can use for debugging Testcontainers are the way that I start Podman and use ngrep. Network captures help a lot when trying to understand the cause and building test cases. If I had my workstation available, I would have provided the exact calls Testcontainers makes 🙂

If you have time, you could make the Podman API log all requests and responses at trace level. This would remove the need for an additional tool to capture API calls. I was about to do it yesterday, but was forced to do something else.

I've tested Podman 2.0.3 + commits from https://github.com/containers/podman/pull/6878 and https://github.com/containers/podman/pull/6815 ( + TESTCONTAINERS_RYUK_DISABLED=true )

The container is starting, but it seems like logs are not available (error: "There are no stdout/stderr logs available for the failed container").
Caused by a problem with container creation - the container stops right after start. Checking ...

Given that the work is being done on Podman's side, I am wondering whether it makes sense to keep this issue open.

Perhaps "Testcontainers support" in Podman's issue tracker is a better idea?

I think it would be nice to keep this one open until Podman's related issues are fixed, but I'll leave it for you to decide :wink:

The problem is that there is no visibility in this issue other than the issues that @kamkie posted. So it is unclear what the status of Podman's Docker compatibility (as it affects Testcontainers) is. And we can't make Podman's team always refer to this (Testcontainers') issue every time they make progress towards full support :)

Yep, I agree with @bsideup.

There's no action for the Testcontainers team to take - we're all waiting on Podman to support a sufficiently broad portion of the Docker API. When that's available, it should just work.

I'll close, and we'll hope that GitHub points people to this issue if they're looking to find out about using Testcontainers with Podman.

The two issues mentioned seem to have Podman PRs merged. How do we (Podman) know if there are more issues?

@baude Issue https://github.com/containers/podman/issues/7235 is definitely not fixed, although it is marked as such. You can test it yourself - there is a reproducer described in the description.
I've opened a new issue: https://github.com/containers/podman/issues/7923

It would be great if someone on the Podman side created a "tracking" issue for Testcontainers support, because otherwise, as users, we have no easy way to know whether Testcontainers will work on Podman without trawling through tens of issues.

I'll do it in the Podman issue tracker and reference it in this issue.

@gastaldi thank you! 👍

Are there still any blockers on the Podman side? I see that containers/podman#7934 (a duplicate of containers/podman#7235 and containers/podman#7923) got a PR merged.

Is it possible to configure Testcontainers, and docker-java, with a different socket (for example unix:///run/user/1000/podman/podman.sock)?
Setting DOCKER_HOST, as described in the docker-java README, to 'unix:///run/user/1000/podman/podman.sock' does not work. Is there any other solution?

@ruddy32 setting DOCKER_HOST as an environment variable works. Make sure that you set it correctly.

The Maven Surefire plugin is configured with:

          <configuration>
              ...
              <environmentVariables>
                  <DOCKER_HOST>unix:///run/user/1000/podman/podman.sock</DOCKER_HOST>
                  <TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE>unix:///run/user/1000/podman/podman.sock</TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE>
                  <TESTCONTAINERS_RYUK_DISABLED>false</TESTCONTAINERS_RYUK_DISABLED>
                  <api.version>v2</api.version>
                  <KEYCLOAK_SERVICE>http://localhost:8180</KEYCLOAK_SERVICE>
              </environmentVariables>
          </configuration>

The Maven build shows:

Caused by: com.github.dockerjava.api.exception.InternalServerErrorException: {"cause":"stat //var/run/docker.sock: permission denied","message":"CreateContainerFromCreateConfig(): error checking path \"//var/run/docker.sock\": stat //var/run/docker.sock: permission denied","response":500}

@ruddy32

InternalServerErrorException

this is an error you get from the server (aka the Docker daemon), meaning that setting DOCKER_HOST works.
As for Podman itself, it may or may not work; see https://github.com/containers/podman/issues/7927

"//var/run/docker.sock" is handled by podman system service.
I would like to make TestContainer use "//run/user/1000/podman/podman.sock" instead of "//var/run/docker.sock".
"//run/user/1000/podman/podman.sock" is handled by podman user service.

@ruddy32 once again - InternalServerErrorException suggests that Testcontainers was able to communicate with Docker/Podman/whatever, and it replied with an error. If you share the full logs, I can show you that. The error itself is Podman's behaviour, probably some incompatibility with Docker or something - see the issue I linked.

As I understand it, DOCKER_HOST is not used if "//var/run/docker.sock" exists.
Podman should manage the "//var/run/docker.sock" link dynamically when the API starts.
I'm surprised that DOCKER_HOST is not used first, before checking the existence and status of "//var/run/docker.sock".

@ruddy32 please share the logs.

@ruddy32 Is your Surefire configured to use forking? If not, then you can't expect environment variable definitions to work properly.

Also, when reporting this kind of issue, you should provide a minimal sample that demonstrates the behaviour. Otherwise it's going to be an endless "but it works for me" debate.

The Surefire Maven plugin configuration is above.
Same problem with forkCount = 3.
Here is the full log:

2020-10-27 08:38:59,227 ERROR [com.tim.ria.sec.tes.KeycloakOIDCTestResource] (main) Failed to run Keycloak container: org.testcontainers.containers.ContainerLaunchException: Container startup failed
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:330)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:311)
...
Caused by: org.testcontainers.containers.ContainerFetchException: Can't get Docker image: RemoteDockerImage(imageName=quay.io/keycloak/keycloak:10.0.2, imagePullPolicy=DefaultPullPolicy())
at org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1279)
at org.testcontainers.containers.GenericContainer.logger(GenericContainer.java:613)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:320)
... 47 more
Caused by: com.github.dockerjava.api.exception.InternalServerErrorException: {"cause":"stat //var/run/docker.sock: permission denied","message":"CreateContainerFromCreateConfig(): error checking path \"//var/run/docker.sock\": stat //var/run/docker.sock: permission denied","response":500}
at com.github.dockerjava.okhttp.OkHttpInvocationBuilder.execute(OkHttpInvocationBuilder.java:293)
at com.github.dockerjava.okhttp.OkHttpInvocationBuilder.execute(OkHttpInvocationBuilder.java:271)
at com.github.dockerjava.okhttp.OkHttpInvocationBuilder.post(OkHttpInvocationBuilder.java:129)
at com.github.dockerjava.core.exec.CreateContainerCmdExec.execute(CreateContainerCmdExec.java:33)
at com.github.dockerjava.core.exec.CreateContainerCmdExec.execute(CreateContainerCmdExec.java:13)
at com.github.dockerjava.core.exec.AbstrSyncDockerCmdExec.exec(AbstrSyncDockerCmdExec.java:21)
at com.github.dockerjava.core.command.AbstrDockerCmd.exec(AbstrDockerCmd.java:35)
at com.github.dockerjava.core.command.CreateContainerCmdImpl.exec(CreateContainerCmdImpl.java:595)
at org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:94)
at org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:168)
at org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
at org.testcontainers.LazyDockerClient.listImagesCmd(LazyDockerClient.java:12)
at org.testcontainers.images.LocalImagesCache.maybeInitCache(LocalImagesCache.java:68)
...

@ruddy32 this is not the full log, just the exception. Consider configuring the logging as per https://www.testcontainers.org/supported_docker_environment/logging_config/ and sharing the full log, ideally at DEBUG level.

Before the exception, the log provides the following information:

2020-10-27 21:38:09,964 INFO  [org.tes.doc.DockerClientProviderStrategy] (main) Loaded org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy from ~/.testcontainers.properties, will try it first
2020-10-27 21:38:10,428 INFO  [org.tes.doc.EnvironmentAndSystemPropertyClientProviderStrategy] (main) Found docker client settings from environment
2020-10-27 21:38:10,428 INFO  [org.tes.doc.DockerClientProviderStrategy] (main) Found Docker environment with Environment variables, system properties and defaults. Resolved dockerHost=unix:///run/user/1000/podman/podman.sock
2020-10-27 21:38:10,611 INFO  [org.tes.DockerClientFactory] (main) Docker host IP address is localhost
2020-10-27 21:38:10,663 INFO  [org.tes.DockerClientFactory] (main) Connected to docker: 
  Server Version: 2.1.1
  API Version: 1.40
  Operating System: arch
  Total Memory: 15865 MB

As I understand it, docker-java does not take the DOCKER_HOST variable definition into account.

Found Docker environment with Environment variables, system properties and defaults. Resolved dockerHost=unix:///run/user/1000/podman/podman.sock
Connected to docker:

it successfully connected to Podman, it seems (Server Version: 2.1.1 is definitely not Docker's version), so I am not sure what you are talking about here, sorry. I suggest you move the conversation to Podman's issue tracker, since it does not look like an issue with Testcontainers and its connectivity.

Caused by: com.github.dockerjava.api.exception.InternalServerErrorException: {"cause":"stat //var/run/docker.sock: permission denied","message":"CreateContainerFromCreateConfig(): error checking path "//var/run/docker.sock": stat //var/run/docker.sock: permission denied","response":500}

Not sure that the version is the problem.
docker-java uses "//var/run/docker.sock" instead of "//run/user/1000/podman/podman.sock".
docker-java does not find the DOCKER_HOST configuration, even though Testcontainers finds it.
I'm trying to understand why this environment variable is not picked up by docker-java.
I will post an issue to docker-java.

@ruddy32 please re-read @bsideup's last comment.

Your logs show that Testcontainers is finding the right unix socket.

Please take this issue to podman's issue tracker, because this is not a Testcontainers or Docker-Java issue.

Your logs show that Testcontainers is finding the right unix socket.

This is right, and Testcontainers is able to retrieve information from the API.

But one part of the log shows that docker-java is not using the right socket. Do you mean that's normal?

@ruddy32 there is zero evidence that docker-java is not using the right socket. It makes no sense, especially given that docker-java is what Testcontainers uses to communicate with Docker.

I am kindly asking you to move the conversation to Podman's issue tracker because this is not an issue with Testcontainers. Any future comments without a reproducer that uses just Docker will be considered offtopic and marked accordingly.

Correct, Podman will not listen at /var/run/docker.sock by default, which is probably where the clients are attempting to talk.
We have a package, podman-docker, which installs a symlink from /var/run/docker.sock -> /var/run/podman.sock.
That should fix the problems you are seeing.
Alternatively, you could configure Podman to listen at /var/run/docker.sock.

We have a package, podman-docker, which installs a symlink from /var/run/docker.sock -> /var/run/podman.sock.

This package is already installed. "/var/run/podman.sock" is not the active socket.
The system is configured with the DOCKER_HOST environment variable set to "unix:///run/user/1000/podman/podman.sock".
The problem is that Testcontainers is using the right socket, "/run/user/1000/podman/podman.sock", while docker-java is using the wrong socket, "/var/run/podman.sock".

Alternatively, you could configure Podman to listen at /var/run/docker.sock.

I understand that docker-java supports this socket only. That might be the solution, but it makes no sense to use Podman with the root user, even on a developer workstation.

Using the latest Testcontainers version, I get the following log:

[INFO] Running com.security.test.KeycloakServerTest
2020-11-06 08:05:36,073 INFO  [org.tes.doc.DockerClientProviderStrategy] (main) Loaded org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy from ~/.testcontainers.properties, will try it first
2020-11-06 08:05:36,463 INFO  [org.tes.doc.DockerClientProviderStrategy] (main) Found Docker environment with Environment variables, system properties and defaults. Resolved dockerHost=unix:///run/user/1000/podman/podman.sock
2020-11-06 08:05:36,464 INFO  [org.tes.DockerClientFactory] (main) Docker host IP address is localhost
2020-11-06 08:05:36,517 INFO  [org.tes.DockerClientFactory] (main) Connected to docker: 
  Server Version: 2.1.1
  API Version: 1.40
  Operating System: arch
  Total Memory: 15865 MB
2020-11-06 08:05:36,625 ERROR [com.tim.ria.sec.tes.KeycloakServer] (main) Failed to run Keycloak container: org.testcontainers.containers.ContainerLaunchException: Container startup failed
    at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:331)
    at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:312)
    at com.security.test.KeycloakServer.beforeAll(KeycloakServer.java:74)
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllCallbacks$8(ClassBasedTestDescriptor.java:368)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllCallbacks(ClassBasedTestDescriptor.java:368)
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:192)
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:78)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:136)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
    at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
    at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
    at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
    at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220)
    at org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188)
    at org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202)
    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181)
    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128)
    at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150)
    at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:116)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.testcontainers.containers.ContainerFetchException: Can't get Docker image: RemoteDockerImage(imageName=quay.io/keycloak/keycloak:10.0.2, imagePullPolicy=DefaultPullPolicy())
    at org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1282)
    at org.testcontainers.containers.GenericContainer.logger(GenericContainer.java:616)
    at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:321)
    ... 39 more
Caused by: com.github.dockerjava.api.exception.InternalServerErrorException: Status 500: {"cause":"incorrect volume format, should be [host-dir:]ctr-dir[:option]","message":"CreateContainerFromCreateConfig(): unix:///run/user/1000/podman/podman.sock:/var/run/docker.sock:rw: incorrect volume format, should be [host-dir:]ctr-dir[:option]","response":500}
    at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.execute(DefaultInvocationBuilder.java:247)
    at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.post(DefaultInvocationBuilder.java:125)
    at org.testcontainers.shaded.com.github.dockerjava.core.exec.CreateContainerCmdExec.execute(CreateContainerCmdExec.java:33)
    at org.testcontainers.shaded.com.github.dockerjava.core.exec.CreateContainerCmdExec.execute(CreateContainerCmdExec.java:13)
    at org.testcontainers.shaded.com.github.dockerjava.core.exec.AbstrSyncDockerCmdExec.exec(AbstrSyncDockerCmdExec.java:21)
    at org.testcontainers.shaded.com.github.dockerjava.core.command.AbstrDockerCmd.exec(AbstrDockerCmd.java:35)
    at org.testcontainers.shaded.com.github.dockerjava.core.command.CreateContainerCmdImpl.exec(CreateContainerCmdImpl.java:595)
    at org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:89)
    at org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:201)
    at org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
    at org.testcontainers.LazyDockerClient.listImagesCmd(LazyDockerClient.java:12)
    at org.testcontainers.images.LocalImagesCache.maybeInitCache(LocalImagesCache.java:68)
    at org.testcontainers.images.LocalImagesCache.get(LocalImagesCache.java:32)
    at org.testcontainers.images.AbstractImagePullPolicy.shouldPull(AbstractImagePullPolicy.java:18)
    at org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:65)
    at org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:26)
    at org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:17)
    at org.testcontainers.utility.LazyFuture.get(LazyFuture.java:39)
    at org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1280)
    ... 41 more

I cannot explain the message "CreateContainerFromCreateConfig(): unix:///run/user/1000/podman/podman.sock:/var/run/docker.sock:rw: incorrect volume format, should be [host-dir:]ctr-dir[:option]".
I guess that something is missing in the system configuration that makes docker-java use DOCKER_HOST in the wrong way. Or I do not understand how Testcontainers works :(

The problem is that Testcontainers is using the right socket, "/run/user/1000/podman/podman.sock", while docker-java is using the wrong socket, "/var/run/podman.sock".
I understand that docker-java supports this socket only

This is wrong on many levels. docker-java supports setting a custom socket location just fine.

I am kindly asking you to do a bit of research on your setup.

Also note that Podman is an advanced tool. If you don't know how to use Podman (especially with tools that expect Docker), I would recommend using Docker instead, as it just works.

From the logs above: is the mount correctly configured? Maybe the : at the end is wrong, or is it part of the error log format? unix:///run/user/1000/podman/podman.sock:/var/run/docker.sock:rw:

Then: does the container try to use the default /var/run/docker.sock, and the issue is a problem inside that container? Maybe it's neither Testcontainers, docker-java nor Podman, but a simple dangling :?

Edit: and why is unix:// part of the host-dir in that volume mount? The protocol shouldn't appear there, should it?

simple test

context

testcontainers 1.15.2
ryuk disabled
podman version
Version:      3.0.1
API Version:  3.0.0
Go Version:   go1.15.8
Built:        Fri Feb 19 13:56:17 2021
OS/Arch:      linux/amd64
@Test
void testAlpine() {
   try (GenericContainer container = new GenericContainer("docker.io/library/alpine:3.5")) {
       container.withCommand("echo ok"); // configure the command before start()
       container.start();
   }
}

check

podman run -it --rm -v $PWD:$PWD -w $PWD -v /run/user/1000/podman/podman.sock:/run/user/1000/podman/podman.sock docker.io/alpine:3.5 echo ok
ok

trace

22:52:25.449 [main] DEBUG o.t.u.TestcontainersConfiguration - Testcontainers configuration overrides will be loaded from file:/home/msa/.testcontainers.properties
22:52:25.469 [main] INFO  o.t.d.DockerClientProviderStrategy - Loaded org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy from ~/.testcontainers.properties, will try it first
22:52:25.856 [ducttape-0] DEBUG o.t.d.DockerClientProviderStrategy - Pinging docker daemon...
22:52:25.877 [ducttape-0] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: 
22:52:26.168 [main] INFO  o.t.d.DockerClientProviderStrategy - Found Docker environment with Environment variables, system properties and defaults. Resolved dockerHost=unix:///run/user/1000/podman/podman.sock
22:52:26.168 [main] DEBUG o.t.d.DockerClientProviderStrategy - Transport type: 'okhttp', Docker host: 'unix:///run/user/1000/podman/podman.sock'
22:52:26.168 [main] DEBUG o.t.d.DockerClientProviderStrategy - Checking Docker OS type for Environment variables, system properties and defaults. Resolved dockerHost=unix:///run/user/1000/podman/podman.sock
22:52:26.169 [main] INFO  o.testcontainers.DockerClientFactory - Docker host IP address is localhost
22:52:26.170 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: 
22:52:26.221 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: 
22:52:26.275 [main] INFO  o.testcontainers.DockerClientFactory - Connected to docker: 
  Server Version: 3.0.1
  API Version: 1.40
  Operating System: fedora
  Total Memory: 11862 MB
22:52:26.275 [main] DEBUG o.testcontainers.DockerClientFactory - Ryuk is disabled
22:52:26.275 [main] DEBUG o.testcontainers.DockerClientFactory - Checks are enabled
22:52:26.275 [main] INFO  o.testcontainers.DockerClientFactory - Checking the system...
22:52:26.276 [main] INFO  o.testcontainers.DockerClientFactory - ✔︎ Docker server version should be at least 1.6.0
22:52:26.279 [main] INFO  o.t.utility.ImageNameSubstitutor - Image name substitution will be performed by: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor')
22:52:26.284 [main] DEBUG o.t.u.PrefixingImageNameSubstitutor - No prefix is configured
22:52:26.284 [main] DEBUG o.t.utility.ImageNameSubstitutor - Did not find a substitute image for alpine:3.5 (using image substitutor: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor'))
22:52:26.285 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: alpine:3.5
22:52:26.332 [main] DEBUG o.t.utility.RegistryAuthLocator - Looking up auth config for image: alpine:3.5 at registry: index.docker.io
22:52:26.332 [main] DEBUG o.t.utility.RegistryAuthLocator - RegistryAuthLocator has configFile: /home/msa/.docker/config.json (exists) and commandPathPrefix: 
22:52:26.335 [main] DEBUG o.t.utility.RegistryAuthLocator - registryName [index.docker.io] for dockerImageName [alpine:3.5]
22:52:26.336 [main] DEBUG o.t.utility.RegistryAuthLocator - No matching Auth Configs - falling back to defaultAuthConfig [null]
22:52:26.336 [main] DEBUG o.t.d.AuthDelegatingDockerClientConfig - Effective auth config [null]
22:52:26.338 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: org.testcontainers.shaded.com.github.dockerjava.core.command.CreateContainerCmdImpl@2cc3ad05[name=testcontainers-checks-5b62a74f-86f8-4894-8d74-ccbbdd53ef1c,hostName=<null>,domainName=<null>,user=<null>,attachStdin=<null>,attachStdout=<null>,attachStderr=<null>,portSpecs=<null>,tty=<null>,stdinOpen=<null>,stdInOnce=<null>,env=<null>,cmd={tail,-f,/dev/null},healthcheck=<null>,argsEscaped=<null>,entrypoint=<null>,image=alpine:3.5,volumes=com.github.dockerjava.api.model.Volumes@35e5d0e5,workingDir=<null>,macAddress=<null>,onBuild=<null>,networkDisabled=<null>,exposedPorts=com.github.dockerjava.api.model.ExposedPorts@73173f63,stopSignal=<null>,stopTimeout=<null>,hostConfig=HostConfig(binds=[], blkioWeight=null, blkioWeightDevice=null, blkioDeviceReadBps=null, blkioDeviceWriteBps=null, blkioDeviceReadIOps=null, blkioDeviceWriteIOps=null, memorySwappiness=null, nanoCPUs=null, capAdd=null, capDrop=null, containerIDFile=null, cpuPeriod=null, cpuRealtimePeriod=null, cpuRealtimeRuntime=null, cpuShares=null, cpuQuota=null, cpusetCpus=null, cpusetMems=null, devices=null, deviceCgroupRules=null, deviceRequests=null, diskQuota=null, dns=null, dnsOptions=null, dnsSearch=null, extraHosts=null, groupAdd=null, ipcMode=null, cgroup=null, links=[], logConfig=LogConfig(type=null, config=null), lxcConf=null, memory=null, memorySwap=null, memoryReservation=null, kernelMemory=null, networkMode=null, oomKillDisable=null, init=null, autoRemove=true, oomScoreAdj=null, portBindings=null, privileged=null, publishAllPorts=null, readonlyRootfs=null, restartPolicy=null, ulimits=null, cpuCount=null, cpuPercent=null, ioMaximumIOps=null, ioMaximumBandwidth=null, volumesFrom=null, mounts=null, pidMode=null, isolation=null, securityOpts=null, storageOpt=null, cgroupParent=null, volumeDriver=null, shmSize=null, pidsLimit=null, runtime=null, tmpFs=null, utSMode=null, usernsMode=null, sysctls=null, consoleSize=null),labels={org.testcontainers=true, org.testcontainers.sessionId=5b62a74f-86f8-4894-8d74-ccbbdd53ef1c},shell=<null>,networkingConfig=<null>,ipv4Address=<null>,ipv6Address=<null>,aliases=<null>,authConfig=<null>]
22:52:26.563 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: c34c5c618ec938189acac116019c4ba64bc87613e163626759ee13f56b1d8d18
22:52:26.908 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: c34c5c618ec938189acac116019c4ba64bc87613e163626759ee13f56b1d8d18,<null>,true,<null>,<null>,<null>,<null>,{df,-P},<null>,<null>
22:52:27.286 [main] INFO  o.testcontainers.DockerClientFactory - ✔︎ Docker environment should have more than 2GB free disk space
22:52:27.290 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: c34c5c618ec938189acac116019c4ba64bc87613e163626759ee13f56b1d8d18,true,true
22:52:27.894 [main] DEBUG o.testcontainers.DockerClientFactory - Swallowed exception while removing container
com.github.dockerjava.api.exception.InternalServerErrorException: Status 500: {"cause":"container has already been removed","message":"error saving container c34c5c618ec938189acac116019c4ba64bc87613e163626759ee13f56b1d8d18 state: container has already been removed","response":500}

    at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.execute(DefaultInvocationBuilder.java:247)
    at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.delete(DefaultInvocationBuilder.java:56)
    at org.testcontainers.shaded.com.github.dockerjava.core.exec.RemoveContainerCmdExec.execute(RemoveContainerCmdExec.java:28)
    at org.testcontainers.shaded.com.github.dockerjava.core.exec.RemoveContainerCmdExec.execute(RemoveContainerCmdExec.java:11)
    at org.testcontainers.shaded.com.github.dockerjava.core.exec.AbstrSyncDockerCmdExec.exec(AbstrSyncDockerCmdExec.java:21)
    at org.testcontainers.shaded.com.github.dockerjava.core.command.AbstrDockerCmd.exec(AbstrDockerCmd.java:35)
    at org.testcontainers.shaded.com.github.dockerjava.core.command.RemoveContainerCmdImpl.exec(RemoveContainerCmdImpl.java:67)
    at org.testcontainers.DockerClientFactory.runInsideDocker(DockerClientFactory.java:365)
    at org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:226)
    at org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
    at org.testcontainers.LazyDockerClient.authConfig(LazyDockerClient.java:12)
    at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:310)
    at sa.mp.test.testcontainers.ContainersTest.testAlpine(ContainersTest.java:60)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:133)
    at org.testng.internal.TestInvoker.invokeMethod(TestInvoker.java:598)
    at org.testng.internal.TestInvoker.invokeTestMethod(TestInvoker.java:173)
    at org.testng.internal.MethodRunner.runInSequence(MethodRunner.java:46)
    at org.testng.internal.TestInvoker$MethodInvocationAgent.invoke(TestInvoker.java:824)
    at org.testng.internal.TestInvoker.invokeTestMethods(TestInvoker.java:146)
    at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:146)
    at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:128)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
    at org.testng.TestRunner.privateRun(TestRunner.java:794)
    at org.testng.TestRunner.run(TestRunner.java:596)
    at org.testng.SuiteRunner.runTest(SuiteRunner.java:377)
    at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:371)
    at org.testng.SuiteRunner.privateRun(SuiteRunner.java:332)
    at org.testng.SuiteRunner.run(SuiteRunner.java:276)
    at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:53)
    at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:96)
    at org.testng.TestNG.runSuitesSequentially(TestNG.java:1212)
    at org.testng.TestNG.runSuitesLocally(TestNG.java:1134)
    at org.testng.TestNG.runSuites(TestNG.java:1063)
    at org.testng.TestNG.run(TestNG.java:1031)
    at org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:135)
    at org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.executeSingleClass(TestNGDirectoryTestSuite.java:112)
    at org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.execute(TestNGDirectoryTestSuite.java:99)
    at org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:146)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
22:52:27.895 [main] DEBUG o.t.u.PrefixingImageNameSubstitutor - No prefix is configured
22:52:27.896 [main] DEBUG o.t.utility.ImageNameSubstitutor - Did not find a substitute image for docker.io/library/alpine:3.5 (using image substitutor: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor'))
22:52:27.909 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: ListImagesCmdImpl[imageNameFilter=<null>,showAll=false,filters=org.testcontainers.shaded.com.github.dockerjava.core.util.FiltersBuilder@0]
22:52:28.030 [main] DEBUG o.t.images.AbstractImagePullPolicy - Using locally available and not pulling image: docker.io/library/alpine:3.5
22:52:28.031 [main] DEBUG 🐳 [docker.io/library/alpine:3.5] - Starting container: docker.io/library/alpine:3.5
22:52:28.032 [main] DEBUG 🐳 [docker.io/library/alpine:3.5] - Trying to start container: docker.io/library/alpine:3.5 (attempt 1/1)
22:52:28.032 [main] DEBUG 🐳 [docker.io/library/alpine:3.5] - Starting container: docker.io/library/alpine:3.5
22:52:28.033 [main] INFO  🐳 [docker.io/library/alpine:3.5] - Creating container for image: docker.io/library/alpine:3.5
22:52:28.033 [main] DEBUG o.t.utility.RegistryAuthLocator - Looking up auth config for image: docker.io/library/alpine:3.5 at registry: docker.io
22:52:28.034 [main] DEBUG o.t.utility.RegistryAuthLocator - RegistryAuthLocator has configFile: /home/msa/.docker/config.json (exists) and commandPathPrefix: 
22:52:28.034 [main] DEBUG o.t.utility.RegistryAuthLocator - registryName [docker.io] for dockerImageName [docker.io/library/alpine:3.5]
22:52:28.047 [main] DEBUG o.t.utility.RegistryAuthLocator - found existing auth config [AuthConfig{username=xxx, password=hidden non-blank value, auth=hidden non-blank value, email=null, registryAddress=docker.io, registryToken=blank}]
22:52:28.047 [main] DEBUG o.t.utility.RegistryAuthLocator - Cached auth found: [AuthConfig{username=xxx, password=hidden non-blank value, auth=hidden non-blank value, email=null, registryAddress=docker.io, registryToken=blank}]
22:52:28.047 [main] DEBUG o.t.d.AuthDelegatingDockerClientConfig - Effective auth config [AuthConfig{username=xxx, password=hidden non-blank value, auth=hidden non-blank value, email=null, registryAddress=docker.io, registryToken=blank}]
22:52:28.055 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: org.testcontainers.shaded.com.github.dockerjava.core.command.CreateContainerCmdImpl@4bff1903[name=<null>,hostName=<null>,domainName=<null>,user=<null>,attachStdin=<null>,attachStdout=<null>,attachStderr=<null>,portSpecs=<null>,tty=<null>,stdinOpen=<null>,stdInOnce=<null>,env={},cmd={},healthcheck=<null>,argsEscaped=<null>,entrypoint=<null>,image=docker.io/library/alpine:3.5,volumes=com.github.dockerjava.api.model.Volumes@62dae540,workingDir=<null>,macAddress=<null>,onBuild=<null>,networkDisabled=<null>,exposedPorts=com.github.dockerjava.api.model.ExposedPorts@5827af16,stopSignal=<null>,stopTimeout=<null>,hostConfig=HostConfig(binds=[], blkioWeight=null, blkioWeightDevice=null, blkioDeviceReadBps=null, blkioDeviceWriteBps=null, blkioDeviceReadIOps=null, blkioDeviceWriteIOps=null, memorySwappiness=null, nanoCPUs=null, capAdd=null, capDrop=null, containerIDFile=null, cpuPeriod=null, cpuRealtimePeriod=null, cpuRealtimeRuntime=null, cpuShares=null, cpuQuota=null, cpusetCpus=null, cpusetMems=null, devices=null, deviceCgroupRules=null, deviceRequests=null, diskQuota=null, dns=null, dnsOptions=null, dnsSearch=null, extraHosts=[], groupAdd=null, ipcMode=null, cgroup=null, links=[], logConfig=LogConfig(type=null, config=null), lxcConf=null, memory=null, memorySwap=null, memoryReservation=null, kernelMemory=null, networkMode=null, oomKillDisable=null, init=null, autoRemove=null, oomScoreAdj=null, portBindings={}, privileged=null, publishAllPorts=true, readonlyRootfs=null, restartPolicy=null, ulimits=null, cpuCount=null, cpuPercent=null, ioMaximumIOps=null, ioMaximumBandwidth=null, volumesFrom=[], mounts=null, pidMode=null, isolation=null, securityOpts=null, storageOpt=null, cgroupParent=null, volumeDriver=null, shmSize=null, pidsLimit=null, runtime=null, tmpFs=null, utSMode=null, usernsMode=null, sysctls=null, consoleSize=null),labels={org.testcontainers=true, org.testcontainers.sessionId=5b62a74f-86f8-4894-8d74-ccbbdd53ef1c},shell=<null>,networkingConfig=<null>,ipv4Address=<null>,ipv6Address=<null>,aliases=<null>,authConfig=AuthConfig(username=xxx, email=null, registryAddress=docker.io, stackOrchestrator=null)]
22:52:28.215 [main] INFO  🐳 [docker.io/library/alpine:3.5] - Starting container with ID: be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178
22:52:28.215 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178
22:52:28.484 [main] INFO  馃惓 [docker.io/library/alpine:3.5] - Container docker.io/library/alpine:3.5 is starting: be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178
22:52:28.491 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178,false
22:52:28.492 [main] DEBUG o.t.s.c.g.d.c.e.InspectContainerCmdExec - GET: DefaultWebTarget{path=[/containers/be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178/json], queryParams={}}
22:52:28.814 [ducttape-0] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178,false
22:52:28.814 [ducttape-0] DEBUG o.t.s.c.g.d.c.e.InspectContainerCmdExec - GET: DefaultWebTarget{path=[/containers/be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178/json], queryParams={}}
22:52:29.819 [ducttape-0] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178,false
22:52:29.820 [ducttape-0] DEBUG o.t.s.c.g.d.c.e.InspectContainerCmdExec - GET: DefaultWebTarget{path=[/containers/be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178/json], queryParams={}}
[... identical once-per-second Cmd / GET /containers/be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178/json inspect polls omitted (22:52:30 – 22:52:57) ...]
22:52:58.134 [ducttape-0] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178,false
22:52:58.135 [ducttape-0] DEBUG o.t.s.c.g.d.c.e.InspectContainerCmdExec - GET: DefaultWebTarget{path=[/containers/be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178/json], queryParams={}}
22:52:58.815 [main] ERROR 🐳 [docker.io/library/alpine:3.5] - Could not start container
org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
    at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:54)
    at org.rnorth.ducttape.unreliables.Unreliables.retryUntilTrue(Unreliables.java:100)
    at org.testcontainers.containers.startupcheck.StartupCheckStrategy.waitUntilStartupSuccessful(StartupCheckStrategy.java:35)
    at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:432)
    at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:325)
    at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
    at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:323)
    at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:311)
    at sa.mp.test.testcontainers.ContainersTest.testAlpine(ContainersTest.java:60)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:133)
    at org.testng.internal.TestInvoker.invokeMethod(TestInvoker.java:598)
    at org.testng.internal.TestInvoker.invokeTestMethod(TestInvoker.java:173)
    at org.testng.internal.MethodRunner.runInSequence(MethodRunner.java:46)
    at org.testng.internal.TestInvoker$MethodInvocationAgent.invoke(TestInvoker.java:824)
    at org.testng.internal.TestInvoker.invokeTestMethods(TestInvoker.java:146)
    at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:146)
    at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:128)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
    at org.testng.TestRunner.privateRun(TestRunner.java:794)
    at org.testng.TestRunner.run(TestRunner.java:596)
    at org.testng.SuiteRunner.runTest(SuiteRunner.java:377)
    at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:371)
    at org.testng.SuiteRunner.privateRun(SuiteRunner.java:332)
    at org.testng.SuiteRunner.run(SuiteRunner.java:276)
    at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:53)
    at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:96)
    at org.testng.TestNG.runSuitesSequentially(TestNG.java:1212)
    at org.testng.TestNG.runSuitesLocally(TestNG.java:1134)
    at org.testng.TestNG.runSuites(TestNG.java:1063)
    at org.testng.TestNG.run(TestNG.java:1031)
    at org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:135)
    at org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.executeSingleClass(TestNGDirectoryTestSuite.java:112)
    at org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.execute(TestNGDirectoryTestSuite.java:99)
    at org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:146)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: java.lang.RuntimeException: Not ready yet
    at org.rnorth.ducttape.unreliables.Unreliables.lambda$retryUntilTrue$1(Unreliables.java:102)
    at org.rnorth.ducttape.unreliables.Unreliables.lambda$retryUntilSuccess$0(Unreliables.java:43)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
    at java.base/java.lang.Thread.run(Thread.java:832)
22:52:58.844 [main] ERROR 🐳 [docker.io/library/alpine:3.5] - There are no stdout/stderr logs available for the failed container
22:52:58.847 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178,false
22:52:58.847 [main] DEBUG o.t.s.c.g.d.c.e.InspectContainerCmdExec - GET: DefaultWebTarget{path=[/containers/be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178/json], queryParams={}}
22:52:58.852 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178,false
22:52:58.852 [main] DEBUG o.t.s.c.g.d.c.e.InspectContainerCmdExec - GET: DefaultWebTarget{path=[/containers/be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178/json], queryParams={}}
22:52:58.858 [main] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178,true,true
22:52:59.143 [ducttape-0] DEBUG o.t.s.c.g.d.c.command.AbstrDockerCmd - Cmd: be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178,false
22:52:59.143 [ducttape-0] DEBUG o.t.s.c.g.d.c.e.InspectContainerCmdExec - GET: DefaultWebTarget{path=[/containers/be0538230466b9dd2f34e4538af04bdaa5b4351f711e1410a4df188563e7c178/json], queryParams={}}
22:52:59.188 [main] DEBUG o.t.utility.ResourceReaper - Removed container and associated volume(s): docker.io/library/alpine:3.5
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 34.417 s <<< FAILURE! - in sa.mp.test.testcontainers.ContainersTest
testAlpine(sa.mp.test.testcontainers.ContainersTest)  Time elapsed: 33.968 s  <<< FAILURE!
org.testcontainers.containers.ContainerLaunchException: Container startup failed
    at sa.mp.test.testcontainers.ContainersTest.testAlpine(ContainersTest.java:60)
Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
    at sa.mp.test.testcontainers.ContainersTest.testAlpine(ContainersTest.java:60)
Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
    at sa.mp.test.testcontainers.ContainersTest.testAlpine(ContainersTest.java:60)
Caused by: org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
    at sa.mp.test.testcontainers.ContainersTest.testAlpine(ContainersTest.java:60)
Caused by: java.lang.RuntimeException: Not ready yet

@msausu please report this to Podman's issue tracker. Also, you seem to be starting the container before setting the command.
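
For illustration, a minimal sketch of that ordering (the image, command, and method name are only placeholders): configuration calls such as withCommand() only affect how the container is created, so they have to come before start().

@Test
public void testAlpineOrdering() {
    // set the command first ...
    try (GenericContainer<?> container = new GenericContainer<>("docker.io/library/alpine:3.5")
            .withCommand("echo", "ok")) {
        container.start(); // ... then start; withCommand() after start() has no effect
    }
}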

I don't see any difference whether the test is run with or without the command:

@Test
public void testAlpine() {
    try (GenericContainer<?> container = new GenericContainer<>("docker.io/library/alpine:3.5")) {
        container.start();
    }
}

and

@Test
public void testAlpine() {
    try (GenericContainer<?> container = new GenericContainer<>("docker.io/library/alpine:3.5")) {
        container.withCommand("echo ok").start();
    }
}

Both return the same trace. By the way, the line

INFO  o.testcontainers.DockerClientFactory - ✔️ Docker environment should have more than 2GB

indicates that the memory test succeeded.
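
A likely explanation for the trace itself (an assumption based on the log, not verified against Podman): alpine's default /bin/sh, like "echo ok", exits immediately, so the default is-running startup check keeps polling /containers/<id>/json until it times out. Below is a sketch of two standard Testcontainers workarounds (class and method names are illustrative):

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.startupcheck.OneShotStartupCheckStrategy;
import org.testng.annotations.Test;

public class AlpineStartupTest {

    @Test
    public void testAlpineStaysRunning() {
        // keep the container alive long enough for the default
        // "is running" startup check to observe it running
        try (GenericContainer<?> container = new GenericContainer<>("docker.io/library/alpine:3.5")
                .withCommand("sleep", "60")) {
            container.start();
        }
    }

    @Test
    public void testAlpineOneShot() {
        // declare the container one-shot, so a prompt clean exit
        // (exit code 0) counts as a successful startup
        try (GenericContainer<?> container = new GenericContainer<>("docker.io/library/alpine:3.5")
                .withCommand("echo", "ok")
                .withStartupCheckStrategy(new OneShotStartupCheckStrategy())) {
            container.start();
        }
    }
}

Whether either variant behaves differently against Podman's compatibility API would still need to be verified.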
