Cosmos-sdk: Double-check our gas prices

Created on 16 May 2018 · 22 comments · Source: cosmos/cosmos-sdk

Figure out a moderately coherent strategy for pricing compute & storage operations, answering the following questions:

  • [ ] Relative price of compute / storage? Do we need an underlying hardware model?
  • [ ] Should modules building on the SDK calculate their own gas prices or just use the GasKVStore / crypto wrapper functions (is that enough)?
  • [ ] How much gas does a one-sender, one-recipient, one-coin transfer transaction need (as a relative benchmark)?
  • [ ] Gas limit should be increased before transfers are enabled (current block gas limit allows for ~10 txs/block). What should be the new gas limit?

Put all gas prices in governance-controlled parameters.

Write this up in a spec doc (=> "Core SDK" spec doc).

Labels: docs, spec

All 22 comments

I think we should construct a basic benchmark (store reads/writes/iteration, expensive cryptographic operations), run it on a node-representative VM (midrange dedicated server?), and use the result of the benchmark to choose gas prices, adding in an extra cost for storage writes due to the persistent disk space requirement.

This should provide a reasonable-enough initial configuration, which governance can vote to change later as necessary with ParameterChangeProposals.
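To make that concrete, here is a minimal sketch of the kind of benchmark suite this would be, using Go's standard testing benchmarks. The in-memory map and ed25519 are stand-ins for the real KVStore and signature scheme, and all names here are illustrative rather than existing SDK code:

```go
// gas_bench_test.go - a minimal sketch of the proposed benchmark suite.
// Run with `go test -bench=. -benchmem` on each candidate machine and
// compare ns/op to derive gas prices.
package gasbench

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"testing"
)

// BenchmarkStoreWrite measures the cost of a single key/value write
// (an in-memory map stands in for the real store backend).
func BenchmarkStoreWrite(b *testing.B) {
	store := make(map[string][]byte)
	value := make([]byte, 128)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		store[fmt.Sprintf("key/%d", i)] = value
	}
}

// BenchmarkStoreRead measures the cost of a single key/value read.
func BenchmarkStoreRead(b *testing.B) {
	store := map[string][]byte{"key": make([]byte, 128)}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = store["key"]
	}
}

// BenchmarkSigVerify measures one signature verification, the dominant
// per-transaction compute cost.
func BenchmarkSigVerify(b *testing.B) {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	msg := []byte("benchmark transaction bytes")
	sig := ed25519.Sign(priv, msg)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		ed25519.Verify(pub, msg, sig)
	}
}
```

Running this on the agreed-upon hardware would give the raw numbers to turn into gas prices, plus the extra markup for storage writes due to persistent disk usage.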

Typical node: MacBook 13 inch latest gen. Run on multiple hardware setups, average somehow.

Make sure benchmark is easily extensible.

The benchmark should include overlapping execution, and we should best-fit the results.

Also see https://github.com/cosmos/cosmos-sdk/issues/1013#issuecomment-393007025, I wonder if we need a scratch space / storage space separation a la memory / storage in the EVM.

We need to do a somewhat more rigorous analysis, but from plain benchmarks it seems that the only things that show up as significant amounts of time in block production are goleveldb compaction, the way we do reverse iterators, and governance slashing for proposals. (Signature verification would also show up.)

My current thought is that for launch we can just have a constant gas cost based on signatures, and then just make the deposit for governance proposals very large. We should still set all the relevant gas parameters so that they can be changed via governance, though. We still need to improve the simulation time metrics a bit and do a bit more benchmarking (i.e. several orders of magnitude larger block heights, less frequent gov proposals), but I do feel relatively safe with the above for launch.
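For illustration, a rough sketch of what a constant per-signature charge could look like; `sigVerifyCost`, `gasMeter`, and `chargeSignatureGas` are hypothetical names, not the SDK's actual ante handler:

```go
package sigsketch

// sigVerifyCost is a placeholder value; the real number would come out of the
// benchmarks and be changeable via governance.
const sigVerifyCost uint64 = 100

// gasMeter stands in for the gas meter attached to the transaction context.
type gasMeter interface {
	ConsumeGas(amount uint64, descriptor string)
}

// chargeSignatureGas charges the same flat cost for every signature on the
// transaction, matching the "constant gas based on signatures" idea above.
func chargeSignatureGas(gm gasMeter, numSignatures int) {
	for i := 0; i < numSignatures; i++ {
		gm.ConsumeGas(sigVerifyCost, "signature verification")
	}
}
```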

Per #2286 we probably should charge a lot of gas for iterators (unless we get around to speeding that up pre-launch). My view at the moment is: proper gas for signatures, proper gas for iterator creation (e.g. n log n on the dirtyItems size), and [significant] gas for submitProposal.
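To make the iterator pricing concrete, a hedged sketch of charging roughly n log n gas on iterator creation, where n is the number of dirty (unsorted, not-yet-flushed) items the cache store has to sort; `chargeIteratorGas` and the constant are hypothetical, not existing store code:

```go
package itersketch

import "math"

// iterCreateCostPerItem is a placeholder; the real value would come out of the
// benchmarks and live in the params store.
const iterCreateCostPerItem uint64 = 10

// gasMeter mirrors the interface from the previous sketch so this snippet is
// self-contained.
type gasMeter interface {
	ConsumeGas(amount uint64, descriptor string)
}

// chargeIteratorGas charges gas roughly proportional to n*log2(n), where n is
// the number of dirty items that must be sorted before iteration can begin.
func chargeIteratorGas(gm gasMeter, numDirtyItems int) {
	if numDirtyItems < 2 {
		gm.ConsumeGas(iterCreateCostPerItem, "iterator creation")
		return
	}
	n := float64(numDirtyItems)
	cost := uint64(n*math.Log2(n)) * iterCreateCostPerItem
	gm.ConsumeGas(cost, "iterator creation")
}
```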

I propose that we benchmark computation with 1 gas = 0.1 ns, on some expected hardware specs. I think we should run this on cloud instances (e.g. AWS, DigitalOcean), as we expect most sentries to be cloud instances, so any strange performance behaviour that environment introduces is accounted for.

With 1 gas = 0.1 ns, using only 52 of the 63 bits allocated for gas, we can already account for computation that takes longer than 24 hours. I don't think setting gas below 0.1 ns makes sense, since we can't benchmark that precisely in Go.
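For reference, the arithmetic behind that claim:

2^52 gas × 0.1 ns/gas ≈ 4.5 × 10^15 × 10^-10 s ≈ 4.5 × 10^5 s ≈ 125 hours, i.e. comfortably more than 24 hours of continuous computation.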

@ValarDragon What needs to be done here?

> My current thought is that for launch we can just have a constant gas cost based on signatures, and then just make the deposit for governance proposals very large. We should still set all the relevant gas parameters so that they can be changed via governance, though. We still need to improve the simulation time metrics a bit and do a bit more benchmarking (i.e. several orders of magnitude larger block heights, less frequent gov proposals), but I do feel relatively safe with the above for launch.

> Per #2286 we probably should charge a lot of gas for iterators as well.

> I propose that we benchmark computation with 1 gas = 0.1 ns, on some expected hardware specs.

These still need to be in the params store.
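As a rough illustration of what grouping them could look like, a minimal sketch; `GasParams`, its fields, and the default values are hypothetical placeholders, and wiring it into the actual params store is omitted:

```go
package gasparams

// GasParams is a hypothetical grouping of the gas-related constants discussed
// above, so that they can live in the params store and be changed by
// ParameterChangeProposals instead of being hard-coded.
type GasParams struct {
	SigVerifyCost      uint64 `json:"sig_verify_cost"`       // flat cost per signature
	WriteCostPerByte   uint64 `json:"write_cost_per_byte"`   // storage write, per byte
	ReadCostPerByte    uint64 `json:"read_cost_per_byte"`    // storage read, per byte
	IterCreateCostFlat uint64 `json:"iter_create_cost_flat"` // iterator creation
	SubmitProposalCost uint64 `json:"submit_proposal_cost"`  // extra cost for MsgSubmitProposal
}

// DefaultGasParams returns placeholder defaults; the real numbers should come
// out of the benchmarking exercise described earlier in this thread.
func DefaultGasParams() GasParams {
	return GasParams{
		SigVerifyCost:      100,
		WriteCostPerByte:   30,
		ReadCostPerByte:    3,
		IterCreateCostFlat: 30,
		SubmitProposalCost: 10000,
	}
}
```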

Related to #3248

Is it? How? How we construct the fee UX shouldn't really depend on particular gas costs.

Correct, not really related.

Will we have parameter change proposals at launch @sunnya97?

If not, that part of this isn't pre-launch, but we should probably still benchmark...

Possible concerns:

  • Raw cost to write data to disk (which must be stored) - WriteCostPerByte - currently 30, should be much higher (see the sketch after this list).
  • Unbounded integer / decimal arithmetic - we don't really charge for this at all; it would be limited by a size-proportional transaction gas cost (if the integers are inputs).
  • Disk I/O DoS with lots of reads/writes - determined by the store.Get / store.Set / store.Has flat costs.
  • Cost to store large transactions (which Tendermint does by default) - will be helped by https://github.com/cosmos/cosmos-sdk/pull/3447
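For the flat and per-byte store costs specifically, the knobs already exist in the store's GasConfig; below is a sketch of overriding them with higher values, assuming the store/types GasConfig shape. The numbers are illustrative placeholders, not benchmark results:

```go
package gasconfig

import (
	storetypes "github.com/cosmos/cosmos-sdk/store/types"
)

// customKVGasConfig returns a GasConfig with deliberately higher write costs
// than the current defaults. The exact numbers below are placeholders that
// would be replaced by benchmark-derived, governance-controlled values.
func customKVGasConfig() storetypes.GasConfig {
	return storetypes.GasConfig{
		HasCost:          1000,
		ReadCostFlat:     1000,
		ReadCostPerByte:  3,
		WriteCostFlat:    2000,
		WriteCostPerByte: 300, // raised from the current 30, per the concern above
		DeleteCost:       1000,
		IterNextCostFlat: 30,
	}
}
```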

To calculate actual resource cost values we might find it most convenient to assume a minimum gas price.

Possible messages of concern:

  • x/bank: MsgSend, unbounded number of inputs / outputs with arbitrary-precision integers, iteration over them
  • x/gov: MsgDeposit, arbitrary-precision integer user input (sdk.Coins)
  • x/staking: MsgCreateValidator / MsgEditValidator - unbounded user input for descriptions; MsgDelegate / MsgUndelegate / MsgBeginRedelegate - user-input arbitrary-precision integers

Some suggestions:

  • Add a ValidateBasic to sdk.Coins. For now, this can check exactly one denom - since we only have one - and also put a maximum bound on the size of the integers (see the sketch after this list).
  • Put specific length bounds on validator descriptions
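A hedged sketch of the sdk.Coins check suggested above; `validateCoins` and `maxCoinBitLen` are illustrative, not an existing SDK function, and the single-denom assumption would need to be relaxed once more denoms exist:

```go
package coincheck

import (
	"fmt"

	sdk "github.com/cosmos/cosmos-sdk/types"
)

// maxCoinBitLen is an illustrative upper bound on the size of coin amounts;
// the real bound would be chosen alongside the gas parameters.
const maxCoinBitLen = 256

// validateCoins enforces the two suggestions above: exactly one denomination
// (since only one exists today) and a cap on the size of each amount.
func validateCoins(coins sdk.Coins) error {
	if len(coins) != 1 {
		return fmt.Errorf("expected exactly one denom, got %d", len(coins))
	}
	for _, coin := range coins {
		if coin.Amount.BigInt().BitLen() > maxCoinBitLen {
			return fmt.Errorf("amount for denom %s exceeds %d bits", coin.Denom, maxCoinBitLen)
		}
	}
	return nil
}
```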

Note that we want to include this in 0.34, as we should increase the gas limit.

Maybe along with that we should add a few more benchmarks.

Hopefully the gas prices are reasonable, given that the network is live. :)

Wait, unless I missed something, we definitely didn't benchmark all the things we charge gas for and equalize them with time taken on some standard hardware.

> Wait, unless I missed something, we definitely didn't benchmark all the things we charge gas for and equalize them with time taken on some standard hardware.

I think the transaction-associated compute costs pale in comparison to the cost of writing & storing data (it would still be useful to benchmark though, sure).

Have we done this? I'm not convinced our gas prices have really been stress-tested.

I also don't think that they are normalized. (1 unit of CPU time = x gas)
