Go version: 1.6.4
MySQL 5.7
I noticed that if you set the MySQL connection count to a low number (say 4) and then launch 4 goroutines and open transactions in all of them, the program seems to just lock up.
I'm not sure what the expected behavior should be in this case, but if I set the connection count to at least the number of goroutines, it runs fine, since the connections are freed and reused as the goroutines finish (a rough sketch of this setup is below).
Has anyone else run into this issue?
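For reference, a minimal sketch of how a connection cap like the one described above is usually set with the gorm v1 API; the DSN and the helper name openDB are illustrative, not taken from the report:

package example

import (
	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/mysql"
)

// openDB is a hypothetical helper: with SetMaxOpenConns(4), at most four
// connections ever exist, so four goroutines each holding an open
// transaction use up the whole pool.
func openDB() (*gorm.DB, error) {
	db, err := gorm.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/test?parseTime=true")
	if err != nil {
		return nil, err
	}
	db.DB().SetMaxOpenConns(4)
	db.DB().SetMaxIdleConns(4)
	return db, nil
}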
Same here :(
Postgres 9.6
Go 1.9.2


So the reason was that we executed a query outside of the transaction while the current transaction was still active (not committed). That query goes to the pool and asks for a new connection. If you have a pool of X connections, and X goroutines each open a transaction (each goroutine then holds its own connection) and then try to run a query outside of the transaction, each of them needs a new connection, so they all wait on the pool forever (deadlock).
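A minimal sketch of that scenario using plain database/sql with go-sql-driver/mysql (the DSN and pool size are illustrative); every goroutine blocks at the query outside the transaction once the pool is exhausted:

package main

import (
	"database/sql"
	"log"
	"sync"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/test")
	if err != nil {
		log.Fatal(err)
	}
	db.SetMaxOpenConns(4) // pool of X = 4 connections

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // X goroutines
		wg.Add(1)
		go func() {
			defer wg.Done()
			tx, err := db.Begin() // each goroutine now holds one pooled connection
			if err != nil {
				log.Println(err)
				return
			}
			defer tx.Rollback()

			// This query runs on db, not tx, so it needs a second connection
			// from the already exhausted pool: all four goroutines wait here.
			var n int
			if err := db.QueryRow("SELECT 1").Scan(&n); err != nil {
				log.Println(err)
				return
			}
			tx.Commit()
		}()
	}
	wg.Wait() // never returns
}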
Hi, could anyone post runnable code that reproduces the issue?
This problem really exists; it shows up whenever transactions are used and concurrency is fairly high. It happens all the time in my project, but I have not been able to find a minimal piece of code that reproduces the bug. I suggest adding Prometheus metrics to gorm to monitor the connection count and watch how it changes. What I see is that the connection count only ever increases and never decreases; it feels like connections are not being reused, or are being leaked. @jinzhu
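A minimal sketch of the monitoring idea from the comment above, just logging the database/sql pool stats instead of exporting Prometheus metrics; the helper name and interval are illustrative:

package example

import (
	"log"
	"time"

	"github.com/jinzhu/gorm"
)

// logPoolStats is a hypothetical helper that periodically prints the number
// of open connections; a count that only ever grows suggests connections are
// not being returned to the pool.
func logPoolStats(db *gorm.DB) {
	for range time.Tick(10 * time.Second) {
		log.Printf("open connections: %d", db.DB().Stats().OpenConnections)
	}
}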
@duguying Hi, 你所遇到的问题最终解决了吗?这个问题是由什么引起的?
@DearMadMan Not solved yet. I haven't had time to dig into gorm; my guess is that something somewhere is leaking connections from the pool.
@DearMadMan I ran into this problem recently too. The cause: after starting a transaction there was no Commit or Rollback, so the connection just kept waiting. The fix was simply to add the Commit or Rollback.
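A minimal sketch of the fix described in the comment above, assuming the gorm v1 transaction API; the User model and the createUser helper are illustrative:

package example

import "github.com/jinzhu/gorm"

type User struct {
	ID   uint
	Name string
}

// createUser is a hypothetical helper: every Begin is paired with exactly one
// Commit or Rollback, so the pooled connection is always released.
func createUser(db *gorm.DB, name string) error {
	tx := db.Begin()
	if tx.Error != nil {
		return tx.Error
	}
	if err := tx.Create(&User{Name: name}).Error; err != nil {
		tx.Rollback() // release the connection on failure
		return err
	}
	return tx.Commit().Error // release the connection on success
}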
@guoyucheng No, that's not it. In my case the code definitely has Commit and Rollback. The connection count grows even when transactions are committed normally. You may not have hit it because it hasn't been triggered or your concurrency isn't high enough; what I see is that the connection count only increases and never decreases.
I suspect it may be related to the MySQL driver, so here is my MySQL driver version for reference. The project is no longer maintained by me.
{
    "ImportPath": "----",
    "GoVersion": "go1.10",
    "GodepVersion": "v79",
    "Deps": [
        {
            "ImportPath": "github.com/go-sql-driver/mysql",
            "Comment": "v1.3-2-g2e00b5c",
            "Rev": "2e00b5cd70399450106cec6431c2e2ce3cae5034"
        },
        {
            "ImportPath": "github.com/jinzhu/gorm",
            "Comment": "v1.9.1",
            "Rev": "6ed508ec6a4ecb3531899a69cbc746ccf65a4166"
        },
        {
            "ImportPath": "github.com/jinzhu/gorm/dialects/mysql",
            "Comment": "v1.9.1",
            "Rev": "6ed508ec6a4ecb3531899a69cbc746ccf65a4166"
        },
        {
            "ImportPath": "github.com/jinzhu/inflection",
            "Rev": "74387dc39a75e970e7a3ae6a3386b5bd2e5c5cff"
        },
        ....
    ]
}
Could this be a problem introduced by the chained API design? Are there any plans to drop chained calls?
This issue has been automatically marked as stale because it is missing a playground pull request link; check out https://github.com/go-gorm/playground for details. It will be closed in 2 days if no further activity occurs.