Prisma1: Database - Connection is not available

Created on 3 Mar 2019 · 4 comments · Source: prisma/prisma1

Describe the bug
I am following this Get Started guide verbatim.

Prisma is not reachable at http://localhost:4466/ after running docker-compose up -d.

Running docker ps gives me this (no port is allocated to the mysql container, which is stuck restarting):

CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS                          PORTS                    NAMES
8289d0817837        mysql:5.7                   "docker-entrypoint.s…"   14 minutes ago      Restarting (2) 43 seconds ago                            backend_mysql_1
76aba55bffd4        prismagraphql/prisma:1.27   "/bin/sh -c /app/sta…"   14 minutes ago      Up 6 seconds                    0.0.0.0:4466->4466/tcp   backend_prisma_1

Running docker-compose ps gives me this:

The system cannot find the path specified.
      Name                   Command               State              Ports
------------------------------------------------------------------------------------
backend_mysql_1    docker-entrypoint.sh mysqld   Restarting
backend_prisma_1   /bin/sh -c /app/start.sh      Up           0.0.0.0:4466->4466/tcp

Running docker logs on the prisma container gives me this:

Exception in thread "main" java.sql.SQLTransientConnectionException: database - Connection is not available, request timed out after 5000ms.
        at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:548)
        at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:186)
        at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:145)
        at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:83)
        at slick.jdbc.hikaricp.HikariCPJdbcDataSource.createConnection(HikariCPJdbcDataSource.scala:14)
        at slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:453)
        at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:46)
        at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:37)
        at slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:249)
        at slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:248)
        at slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:37)
        at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=mysql)(port=3306)(type=master) : Connection refused (Connection refused)
        at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:161)
        at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.connException(ExceptionMapper.java:79)
        at org.mariadb.jdbc.internal.protocol.AbstractConnectProtocol.connectWithoutProxy(AbstractConnectProtocol.java:1040)
        at org.mariadb.jdbc.internal.util.Utils.retrieveProxy(Utils.java:490)
        at org.mariadb.jdbc.MariaDbConnection.newConnection(MariaDbConnection.java:144)
        at org.mariadb.jdbc.Driver.connect(Driver.java:90)
        at slick.jdbc.DriverDataSource.getConnection(DriverDataSource.scala:101)
        at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:341)
        at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:193)
        at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:430)
        at com.zaxxer.hikari.pool.HikariPool.access$500(HikariPool.java:64)
        at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:570)
        at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:563)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        ... 3 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at java.net.Socket.connect(Socket.java:538)
        at org.mariadb.jdbc.internal.protocol.AbstractConnectProtocol.connect(AbstractConnectProtocol.java:398)
        at org.mariadb.jdbc.internal.protocol.AbstractConnectProtocol.connectWithoutProxy(AbstractConnectProtocol.java:1032)
        ... 14 more
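
The Caused by chain shows the actual failure: Prisma resolves the mysql service name on the compose network, but the connection to port 3306 is refused because mysqld keeps crashing, which matches the "Restarting" status in docker ps above. One way to watch the loop, as a sketch (the container name backend_mysql_1 is taken from the docker ps output above):

    docker inspect --format '{{.State.Status}}, {{.RestartCount}} restarts' backend_mysql_1
    # prints e.g. "restarting, 7 restarts" while the container is crash-looping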

Running docker logs on the mysql container gives me this:

    […] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
    2019-03-03T05:55:34.059073Z 0 [Note] InnoDB: Completed initialization of buffer pool
    2019-03-03T05:55:34.061007Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
    2019-03-03T05:55:34.072210Z 0 [ERROR] [FATAL] InnoDB: Table flags are 0 in the data dictionary but the flags in file ./ibdata1 are 0x4800!
    2019-03-03 05:55:34 0x7f25c2567740  InnoDB: Assertion failure in thread 139800150964032 in file ut0ut.cc line 942
    InnoDB: We intentionally generate a memory trap.
    InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
    InnoDB: If you get repeated assertion failures or crashes, even
    InnoDB: immediately after the mysqld startup, there may be
    InnoDB: corruption in the InnoDB tablespace. Please refer to
    InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
    InnoDB: about forcing recovery.
    05:55:34 UTC - mysqld got signal 6 ;
    This could be because you hit a bug. It is also possible that this binary
    or one of the libraries it was linked against is corrupt, improperly built,
    or misconfigured. This error can also be caused by malfunctioning hardware.
    Attempting to collect some information that could help diagnose the problem.
    As this is a crash and something is definitely wrong, the information
    collection process might fail.

    key_buffer_size=8388608
    read_buffer_size=131072
    max_used_connections=0
    max_threads=151
    thread_count=0
    connection_count=0
    It is possible that mysqld could use up to
    key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 68196 K  bytes of memory
    Hope that's ok; if not, decrease some variables in the equation.

    Thread pointer: 0x0
    Attempting backtrace. You can use the following information to find out
    where mysqld died. If you see no messages after this, something went
    terribly wrong...
    stack_bottom = 0 thread_stack 0x40000
    mysqld(my_print_stacktrace+0x2c)[0x55b550a9381c]
    mysqld(handle_fatal_signal+0x479)[0x55b5503be879]
    /lib/x86_64-linux-gnu/libpthread.so.0(+0x110c0)[0x7f25c21450c0]
    /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf)[0x7f25c08d1fff]
    /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7f25c08d342a]
    mysqld(+0x628efb)[0x55b550394efb]
    mysqld(_ZN2ib5fatalD1Ev+0x12d)[0x55b550c63b9d]
    mysqld(+0xfa4901)[0x55b550d10901]
    mysqld(+0xfa4f38)[0x55b550d10f38]
    mysqld(_Z6fil_ioRK9IORequestbRK9page_id_tRK11page_size_tmmPvS8_+0x2b0)[0x55b550d1a0f0]
    mysqld(_Z13buf_read_pageRK9page_id_tRK11page_size_t+0xce)[0x55b550ccf16e]
    mysqld(_Z16buf_page_get_genRK9page_id_tRK11page_size_tmP11buf_block_tmPKcmP5mtr_tb+0x4aa)[0x55b550c9e3ba]
    mysqld(_Z31trx_rseg_get_n_undo_tablespacesPm+0x143)[0x55b550c41d33]
    mysqld(+0x62806f)[0x55b55039406f]
    mysqld(_Z34innobase_start_or_create_for_mysqlv+0x2f3d)[0x55b550c0ebed]
    mysqld(+0xd6e963)[0x55b550ada963]
    mysqld(_Z24ha_initialize_handlertonP13st_plugin_int+0x4f)[0x55b55040959f]
    mysqld(+0xb14e16)[0x55b550880e16]
    mysqld(_Z40plugin_register_builtin_and_init_core_sePiPPc+0x2f0)[0x55b550884000]
    mysqld(+0x64af7e)[0x55b5503b6f7e]
    mysqld(_Z11mysqld_mainiPPc+0xc71)[0x55b5503b8b41]
    /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f25c08bf2e1]
    mysqld(_start+0x2a)[0x55b5503af21a]
    The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
    information that should help you find out what is causing the crash.
    2019-03-03T05:56:35.267390Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
    2019-03-03T05:56:35.268644Z 0 [Note] mysqld (mysqld 5.7.25) starting as process 1 ...
    2019-03-03T05:56:35.271475Z 0 [Note] InnoDB: PUNCH HOLE support available
    2019-03-03T05:56:35.271519Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
    2019-03-03T05:56:35.271525Z 0 [Note] InnoDB: Uses event mutexes
    2019-03-03T05:56:35.271529Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
    2019-03-03T05:56:35.271533Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
    2019-03-03T05:56:35.271536Z 0 [Note] InnoDB: Using Linux native AIO
    2019-03-03T05:56:35.271709Z 0 [Note] InnoDB: Number of pools: 1
    2019-03-03T05:56:35.271820Z 0 [Note] InnoDB: Using CPU crc32 instructions
    2019-03-03T05:56:35.273305Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
    2019-03-03T05:56:35.279100Z 0 [Note] InnoDB: Completed initialization of buffer pool
    2019-03-03T05:56:35.280747Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
    2019-03-03T05:56:35.291980Z 0 [ERROR] [FATAL] InnoDB: Table flags are 0 in the data dictionary but the flags in file ./ibdata1 are 0x4800!
    2019-03-03 05:56:35 0x7f1afdaee740  InnoDB: Assertion failure in thread 139753901975360 in file ut0ut.cc line 942
    InnoDB: We intentionally generate a memory trap.
    InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
    InnoDB: If you get repeated assertion failures or crashes, even
    InnoDB: immediately after the mysqld startup, there may be
    InnoDB: corruption in the InnoDB tablespace. Please refer to
    InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
    InnoDB: about forcing recovery.
    05:56:35 UTC - mysqld got signal 6 ;
    This could be because you hit a bug. It is also possible that this binary
    or one of the libraries it was linked against is corrupt, improperly built,
    or misconfigured. This error can also be caused by malfunctioning hardware.
    Attempting to collect some information that could help diagnose the problem.
    As this is a crash and something is definitely wrong, the information
    collection process might fail.
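
The [FATAL] line is the root cause: InnoDB reads table flags 0 from its data dictionary but 0x4800 from ./ibdata1, meaning the data files persisted in the mysql volume no longer match what this mysqld build expects (typically leftovers from a different MySQL version or a corrupted initialization), so the server aborts and the container restart-loops. If the data needed to be salvaged first, the recovery procedure linked in the log could be attempted by starting mysqld with forced recovery; a minimal sketch of a compose override (the mysql image passes command through as mysqld arguments, and level 1 is the mildest setting):

    mysql:
      image: mysql:5.7
      command: --innodb-force-recovery=1

Otherwise the simpler fix, discussed in the comments below, is to discard the volume entirely.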

To Reproduce
Follow this guide exactly on a Windows 10 machine.

My docker-compose.yml file:

version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.27
    restart: always
    ports:
    - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: mysql
            host: mysql
            port: 3306
            user: root
            password: prisma
            migrations: true
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: prisma
    volumes:
      - mysql:/var/lib/mysql
volumes:
  mysql:
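
Once the mysql container stays up, database connectivity can be verified on the compose network independently of Prisma; a quick check using the service name and root password from the file above:

    docker-compose exec mysql mysql -uroot -pprisma -e "SELECT 1"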

Expected behavior
I would expect Prisma to connect to the local MySQL database and be reachable at http://localhost:4466.

Versions (please complete the following information):

  • Connector: MySQL
  • prisma CLI: prisma/1.27.4
  • docker-compose: 1.23.2
  • node: 10.15.1
  • npm: 6.8.0
  • OS: Windows 10 Pro

All 4 comments

prisma:1.28.0 has now fixed the issue.

I removed all the containers, images and the mysql volume and reinstalled everything.
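
For anyone else hitting the InnoDB flags mismatch above: recreating the containers alone is not enough, because the corrupted data files live in the named mysql volume and survive restarts. A sketch of the full reset (destructive, this wipes all Prisma data; the volume name backend_mysql assumes the compose project directory is called backend, as in the output above):

    docker-compose down -v    # -v also removes named volumes, including backend_mysql
    docker-compose up -d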

I've tried 1.24, 1.28, and 1.29, and I always get this error, even on 1.28: database - Connection is not available, request timed out after 5000ms. The error happens after letting Docker run for a random amount of time, anywhere from a couple of hours to a week. Can we reopen this issue?

I just started seeing this as well, not sure what happened differently.

I have the same error.
My environment:
OS: Centos-7 SE (Security Enabled)
prisma:1.34
docker-compose: 1.23.2
node: 10.16.0
npm: 6.9.0
mysql: 5.7
Percona XtraDB Cluster: Release rel29, Revision 03540a3, WSREP version 31.37, wsrep_31.3
