We seem to have found a race condition.
When deploying a contract, the transactionReceipt is (sometimes) available before the contract code of the freshly deployed contract; Parity returns empty code for it.
This causes an issue with web3 when it checks for the presence of contract code as a sanity check after deployment.
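The failing sanity check can be sketched as follows (a minimal illustration with a hypothetical helper; web3's actual internals differ):

```python
def is_contract_deployed(receipt, code):
    """Mimics the sanity check a client like web3 performs after deployment:
    the receipt must exist AND the address must hold non-empty code."""
    if receipt is None or receipt.get("contractAddress") is None:
        return False
    # Parity may briefly return "0x" (empty code) even though the
    # receipt is already available -- the race described above.
    return code not in (None, "0x")

# Receipt is present, but the node still reports empty code.
receipt = {"contractAddress": "0x1a728d62831f24035b6c03b7915a7dcf6794f8e3"}
print(is_contract_deployed(receipt, "0x"))      # -> False (looks like a failed deploy)
print(is_contract_deployed(receipt, "0x6060"))  # -> True  (once the code is visible)
```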
transactionReceipt request:
```json
{
  "id": 2433484711880063,
  "jsonrpc": "2.0",
  "params": [
    "0xd82074ac2c0accc2f594cf7654c492f4e4372074cad841cd16302585e43691d3"
  ],
  "method": "eth_getTransactionReceipt"
}
```
Response:
```json
{
  "jsonrpc": "2.0",
  "result": {
    "blockHash": null,
    "blockNumber": null,
    "contractAddress": "0x1a728d62831f24035b6c03b7915a7dcf6794f8e3",
    "cumulativeGasUsed": "0x5caa7",
    "gasUsed": "0x5caa7",
    "logs": [],
    "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "root": "0x8b7d71e62f7ca0ce0a08f7114130c199a2bc6195b9a51338ad0947be435b7f39",
    "transactionHash": "0xd82074ac2c0accc2f594cf7654c492f4e4372074cad841cd16302585e43691d3",
    "transactionIndex": "0x0"
  },
  "id": 2433484711880063
}
```
eth_getCode response for the freshly deployed contract:
```json
{
"jsonrpc": "2.0",
"result": "0x",
"id": 2433484711880065
}
```
I'd guess this is because some pending changes end up in the BlockChain before the StateDB changes are written. The workaround should be the same as for light-client import: have BlockChain return a Pending struct with all pending changes, so nothing goes into the cache before disk, and then commit(pending) to the caches after things are written to the database.
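The proposed ordering can be sketched abstractly (illustrative names only, not Parity's actual types):

```python
class PendingImport:
    """Buffers all changes from a block import so nothing reaches
    the in-memory caches before the state DB write has completed."""
    def __init__(self):
        self.cache_updates = {}

    def record(self, key, value):
        self.cache_updates[key] = value


class BlockChain:
    def __init__(self):
        self.cache = {}  # fast in-memory view
        self.db = {}     # stands in for the on-disk state DB

    def import_block(self, changes):
        # 1. Stage everything in a Pending struct -- caches untouched.
        pending = PendingImport()
        for key, value in changes.items():
            pending.record(key, value)
        # 2. Write state to disk first.
        self.db.update(pending.cache_updates)
        # 3. Only then commit the pending changes to the caches, so
        #    readers never see cached data that isn't on disk yet.
        self.commit(pending)

    def commit(self, pending):
        self.cache.update(pending.cache_updates)


bc = BlockChain()
bc.import_block({"0x1a72...": "contract code"})
```

The key invariant is that the cache is only ever a subset of what has already been persisted, so a receipt lookup can never race ahead of the code lookup.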
I'm not familiar with your priority system, but to give you an idea of the impact, this issue causes all contract deploys made through web3 to appear to fail.
@kumavis Thanks for reporting; the priority is really only an internal thing to organize tickets. I'm pretty sure @rphmeier already has a fix in the pipe. I have to admit, the term _annoyance_ keeps confusing people.
(_F3-annoyance: The client behaves within expectations, however this "expected behaviour" itself is at issue._ / _F2-bug: The client fails to follow expected behavior._ wiki:Labeling)
@5chdn duly noted :)
I don't have a fix yet, but I'll try and get to this before the next release.
Adding this to the 1.7 milestone as @rphmeier promised to get this into the next release :)
I'm also encountering this bug. Since it still isn't fixed, is there an older version of parity I can run where it's known this problem doesn't occur ? (I can run it just long enough to get my contracts deployed, and then I can switch back to the latest.)
Actually, it might just be a chain re-organization, imho; the library shouldn't assume that if the receipt is OK then the code will be there (because a re-org can happen in the middle).
That's a good point. But my impression is that, at this juncture, there appears to be an incompatibility between Parity and web3. And Parity is the most popular server. And web3 is the most popular client. And an awful lot of people are deploying their contracts using web3 (either directly or indirectly via a tool like Truffle).
So if the Parity team claims it's a problem with web3, and they're not going to fix it, and the web3 team claims it's a problem with Parity, and they're not going to fix it, then the community, as a whole, has a problem.
I'm not experienced enough (within the world of Ethereum) to know if all the statements above are indeed 100% accurate. But that's the impression I'm currently under. Does anybody know if web3 has addressed this problem? Is there a corresponding issue in their repo? (I searched, but didn't find one.)
I'm not suggesting that there are no issues in Parity. It's still open and marked as a bug and needs further investigation. Just pointed out that the current web3 approach might not work reliably as well.
Actually, I just looked at the initial report, and it seems that the receipt is indeed returned but the transaction is still pending (note that `blockNumber` is null in the receipt). This can be addressed by running in `--geth` compatibility mode.
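In other words, a client can treat a receipt whose blockNumber is null as still pending and keep polling until the transaction is actually mined. A minimal sketch (hypothetical helper, not web3's actual code):

```python
def wait_for_mined_receipt(get_receipt, max_attempts=50):
    """Polls for a receipt and only accepts it once blockNumber is set,
    i.e. the transaction has actually been included in a block."""
    for _ in range(max_attempts):
        receipt = get_receipt()
        if receipt is not None and receipt.get("blockNumber") is not None:
            return receipt
    raise TimeoutError("transaction still pending after %d polls" % max_attempts)


# Simulate a node first answering with a pending receipt (blockNumber
# null, as in the report above), then with a mined one.
responses = iter([
    {"contractAddress": "0xabc", "blockNumber": None},
    {"contractAddress": "0xabc", "blockNumber": "0x10"},
])
receipt = wait_for_mined_receipt(lambda: next(responses))
print(receipt["blockNumber"])  # -> 0x10
```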
Yeah, I completely misdiagnosed this issue initially. @tomusdrw another solution would be to fetch contract code with block pending instead of latest, right? But that's probably not useful if you are waiting for a contract to actually be deployed.
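That alternative amounts to changing the default block parameter in the eth_getCode call; a sketch of building the two payloads (illustrative helper, not web3's API):

```python
def get_code_request(address, block="latest", request_id=1):
    """Builds an eth_getCode JSON-RPC payload. eth_getCode takes the
    contract address plus a block parameter ("latest", "pending",
    or a block number)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "eth_getCode",
        "params": [address, block],
    }


addr = "0x1a728d62831f24035b6c03b7915a7dcf6794f8e3"
print(get_code_request(addr, "latest")["params"])   # what web3 queries today
print(get_code_request(addr, "pending")["params"])  # would see pending state
```

As noted, querying `"pending"` would sidestep the empty-code response, but defeats the purpose if the caller is waiting for the contract to actually be deployed.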
I too have run into this issue. Weirdly I was able to deploy a contract at one point but with the current truffle code I have it's just dying every time. When checking the docker logs, I see the transaction has been mined and has gone through...not sure why this is happening even with the comments here. Has anyone stewed further on this and figured out WHY it's happening?
To add: the `--geth` compatibility mode helps only slightly; rather than dying right away, it hangs instead.
I can verify this is an issue as well. I am using Parity and Truffle to deploy to Kovan. I switched to Geth and was able to deploy with Truffle to rinkeby with exactly the same settings (gas, etc.). The error I was getting was:
```
Error encountered, bailing. Network state unknown. Review successful transactions manually.
Error: Contract transaction couldn't be found after 50 blocks
```
I ran into this issue as well with the Parity client running on Ropsten. The `--geth` option solves it for me with the Parity 1.8.1-beta client. Tested several times, and Truffle can migrate without errors now.
Closing as it's not really a race condition in the traditional sense and has been fixed on the web3 side.