Web3.js: Duplicate contract events on event.watch?

Created on 25 Feb 2016 · 9 comments · Source: ChainSafe/web3.js

I sent a single transaction to a contract function that fires one event, which I use for notification. But when I put a watch on the contract event (without a filter) in my JavaScript, I get multiple duplicate events. I have a private network with multiple nodes.

Does somebody have an explanation for this? Happy to share more details that may assist in troubleshooting.

support

All 9 comments

How do you put a watch without a filter? Please send code :)

Apart from that it could be a chain reorg in your small testnet, which fires the events multiple times.

Do they have the exact same block number, etc.?

I just put a watch on the contract event without specifying a filter.
The contract code is something like this:

    struct Document {
        bytes32 docType;
        string docName;
        string docURL;
        string docHash;
        string docDescription;
        uint docVersion;
        string docDateAdded;
    }

    Document placingSlipDocument;

    event newPlacingDocumentVersion(string name, string url, uint docVersion, string docDescription, string dateAdded);

    function addRiskDocument(string name, bytes32 docType, string url, string docHash, string docDescription, string dateAdded) public returns (bool ack) {
        // upload of placing slip
        if (docType == "Placing Slip") {
            placingSlipDocument.docVersion++;
            placingSlipDocument.docName = name;
            placingSlipDocument.docURL = url;
            placingSlipDocument.docHash = docHash;
            placingSlipDocument.docDescription = docDescription;
            placingSlipDocument.docDateAdded = dateAdded;
            // event logging (pre-0.4.21 syntax, before `emit` was required)
            newPlacingDocumentVersion(placingSlipDocument.docName, placingSlipDocument.docURL, placingSlipDocument.docVersion, placingSlipDocument.docDescription, placingSlipDocument.docDateAdded);
            ack = true; // signal success to the caller
        }
    }

In my Meteor JavaScript, I am doing something like this:

    var event = placingContractInstance.newPlacingDocumentVersion();
    event.watch(function (error, result) {
        if (error) {
            console.log("Error=" + error);
        }
        if (result) {
            console.log("event address=" + result.address);
            console.log("event block Number=" + result.blockNumber);
            var placingVersionDoc = {
                "riskRef": selectedUMR,
                "docName": result.args.name,
                "docURL": result.args.url,
                "docDescription": result.args.docDescription,
                "docDateAdded": result.args.dateAdded,
                "docVersion": new BigNumber(result.args.docVersion).toString(10)
            };
            console.log("placing doc version" + JSON.stringify(placingVersionDoc));
            placingDocVersions.insert(placingVersionDoc);
        }
    });

The block number, the event address, etc. remain the same. Do I need to put a filter here? I am just watching for an event entry and don't really need one. Sometimes these entries arrive in the watcher twice, sometimes even 3 or 4 times. We have 3 or 4 peers in our private network. Is this related?

Like I wrote, it can be chain reorganisations; because your private network is small, they happen more often. This means that one peer mines a block and creates the event, then another peer mines the same block, your node suddenly accepts this new block as valid and makes the other an uncle, and so it fires the event again.

We will add something which tells you about reverts in logs in the future.

In the real network you won't see that happen as often.
The best is to insert with an id, so you can overwrite the entry without creating duplicates, e.g.:

    var id = makeUniqueIDPerLog(); // e.g. tx hash + block hash or so
    placingDocVersions.upsert({_id: id}, placingVersionDoc);
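
For illustration, here is a minimal sketch of that idea applied to the watcher from the earlier snippet. Instead of the tx hash + block hash suggested above, it keys on transactionHash + logIndex (the pair a later comment in this thread also uses), since that pair stays the same when a reorg replays a log, which is exactly what lets the upsert overwrite the duplicate:

    // Sketch: same watcher as above, but with a deterministic per-log _id so a
    // replayed event overwrites its earlier copy instead of inserting a duplicate.
    event.watch(function (error, result) {
        if (error || !result) return;
        // transactionHash + logIndex stays stable when a reorg replays a log
        var id = result.transactionHash + ":" + result.logIndex;
        var placingVersionDoc = {
            "riskRef": selectedUMR,
            "docName": result.args.name,
            "docURL": result.args.url,
            "docVersion": new BigNumber(result.args.docVersion).toString(10)
        };
        // Meteor upsert: inserts on first delivery, overwrites on duplicates
        placingDocVersions.upsert({_id: id}, {$set: placingVersionDoc});
    });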

@frozeman thanks for the clarification. I experienced a similar case of many doubled logs on Morden and was wondering what was happening.

However, this creates a bit of uncertainty when a DApp starts relying on the logs. Imagine the app needs to respond to a payment action by a third party and does this by watching the DepositPaid() event. Having the event fired several times may then trigger the event-handling routine multiple times. Is it then true that a DApp should not rely on the events mechanism to perform "reactive" actions?

Thanks in advance!

In the latest geth we have a feature which adds removed: true to a log if a chain reorg happens; you would then get another log, which is the new one.

So you should anyway only act on logs after you have waited at least 12 blocks, otherwise you might screw yourself due to a chain reorg.
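
A minimal sketch of that confirmation wait in the web3 0.x style used above (the 12-block depth follows the advice here; handleConfirmedLog is a hypothetical application callback, and the removed flag is assumed to be present on your client version):

    var CONFIRMATIONS = 12; // only act on logs at least this deep
    var pending = [];       // logs seen but not yet confirmed

    event.watch(function (error, result) {
        if (error || !result) return;
        if (result.removed) {
            // chain reorg: discard the now-invalid log instead of acting on it
            pending = pending.filter(function (log) {
                return !(log.transactionHash === result.transactionHash &&
                         log.logIndex === result.logIndex);
            });
            return;
        }
        pending.push(result);
    });

    // on each new block, release logs that have reached the required depth
    web3.eth.filter('latest').watch(function (error) {
        if (error) return;
        var current = web3.eth.blockNumber;
        pending = pending.filter(function (log) {
            if (current - log.blockNumber >= CONFIRMATIONS) {
                handleConfirmedLog(log); // hypothetical app callback
                return false;            // drop from the pending list
            }
            return true;
        });
    });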

Good to know. Thanks!

In this context, does a chain reorg affect constant function results? For example, if the web UI polls for status using a contract constant function:

    function isDepositPaid(address depositor) constant returns (bool paid)

Is it possible that before the chain reorg the function returns true, but after the reorg it returns false, if paid is populated via a storage variable?

@frozeman Thanks for the clarifications. One more thing: say a redundant log is there due to a chain reorg, and I then wait 12-20 blocks so that uncles/forked blocks are kept out. Will the redundant logs from those kept-out blocks then be removed, to give some guarantee?

I couldn't do anything at the web3 level, so in vue.js I simply store the transactionHash and logIndex of each event. Before I handle any event, I make sure it is not one I already have in my list. I guess it is better than nothing.
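
A sketch of that client-side guard in plain JavaScript (the key scheme just mirrors the transactionHash + logIndex pair described above):

    var seen = {}; // "txHash:logIndex" -> true for events already handled

    function handleOnce(result) {
        var key = result.transactionHash + ":" + result.logIndex;
        if (seen[key]) return; // duplicate delivery, e.g. a reorg replay
        seen[key] = true;
        // ...handle the event exactly once...
    }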
