In the current protocol some votes are weighted by the share days destroyed by the minter. That means a vote cast by an old output has more weight than one cast by a recent output, and a vote cast by an output containing many shares has more weight than one from a smaller output.
There are two problems:
- Shareholders who (intentionally or not) do not mint for a while will have a bigger impact on the vote when they start minting again. This can cause sudden changes in the result of some votes (notably those calculated as medians).
- Shareholders can trade reward for influence. If they choose to have large outputs they will find fewer blocks, but each vote will have more weight. In the long term the difference will be low, but over the interval of a single vote the impact may be significant.
See the discussion here: Using share days destroyed as vote weight
So I suggest we adopt the following motion:
Starting from Nu protocol V06 all votes will have the same weight, regardless of the amount of shares involved or their age.
It’s generic enough to apply to all votes, including the planned and unplanned ones. Of course a future motion can always override it.
The actual protocol switch time will be decided just before the release of version 0.6.0.
To comply with this new rule the following existing votes will be modified:
- Park rate calculation. Currently the effective park rate is the median of the voted park rates weighted by the share days destroyed. It will be changed to just be the median of the voted park rates.
- Custodian grants. To be elected a custodian must have both 50% of the vote counts and 50% of the share days destroyed. After protocol V06 they will only require 50% of the vote counts.
- Motions. The result of motions is not actually evaluated by the client because it doesn’t have any direct impact, but the RPC command returns the share days destroyed for each motion. This value will be removed, and only the percentage of blocks will be returned for each voted motion.
And the following upcoming vote will be changed:
Minimum fee. The fee was planned to be the median of the fee votes weighted by the share days destroyed; it will instead simply be the median of the voted fees.
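To make the change concrete, here is a rough Python sketch (illustrative only, not the actual client code; the function names are made up) of the difference between the old weighted median and the new plain median:

```python
# Illustrative sketch: pre-V06, the effective rate was a median weighted
# by share days destroyed; post-V06, every block's vote counts equally.

def weighted_median(votes):
    """votes: list of (rate, weight) pairs. Pre-V06 behaviour (sketch):
    the median position is found along the cumulative weight."""
    votes = sorted(votes)
    total = sum(w for _, w in votes)
    cumulative = 0
    for rate, w in votes:
        cumulative += w
        if cumulative * 2 >= total:
            return rate

def plain_median(votes):
    """votes: list of (rate, weight) pairs. Post-V06 behaviour (sketch):
    weights are ignored; one block, one vote."""
    rates = sorted(rate for rate, _ in votes)
    return rates[len(rates) // 2]  # lower/upper middle is a detail choice

# One old, large output (high share days destroyed) dominates the
# weighted median but counts as a single vote in the plain median:
votes = [(0.0, 5), (0.0, 3), (1.0, 90)]
```

With these example votes, the weighted median is 1.0 while the plain median is 0.0, which is exactly the kind of sudden swing the motion is meant to remove.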
I did not include all these details in the motion itself because I think the general intent is clear enough; that way, if I missed anything in these details, we won’t have to pass another motion.
I think this will improve the validity of decisions made by the network and I don’t see any way the motion could be improved.
The RIPEMD-160 hash of the motion that people need to enter in their client to vote for this is:
This will also simplify our protocol, which means unintended effects are minimized.
My concern was always the risk of some shareholders keeping more than 50% of the shares and never voting on anything! What then?
I think this change can be a “fix” for the above danger.
But I was thinking: now we will need more than half of the shareholder addresses to vote. Isn’t this a little difficult if a lot of shareholders are only in it for speculation and not involved in the voting mechanism? I mean, shareholders with larger amounts will have the incentive to be active. Perhaps we need a mechanism to scan which addresses are voting, so that PoS is only activated for them.
"regardless of the amount of shares involved or their age"
Did I understand correctly that the amount of shares involved would no longer matter?
Then if I divide my shares into many addresses I get more votes?
If it’s so then I’m against this motion.
Coin age of course should not change the weight of a vote, but the number of shares held by the voting address must definitely correlate positively with the influence of the vote. This needs clarifying.
Currently, by default, the client divides NuShares into outputs of 10,000 NSR in all send transactions. So everyone should have their shares stored in their wallets in 10,000 NSR “blocks” (unspent transaction outputs, UTXOs). If you split them into, say, 5,000 NSR outputs, then since the smallest UTXO that can find a block (and therefore vote) is 10,000 NSR, your shares won’t be able to find blocks.
What’s more, if in the same address you have a UTXO of 10,000 NSR or more that finds a block, the CoinStake will combine the outputs that are smaller than 10,000 NSR and split them into 10,000 NSR outputs (if there is a leftover of less than 10,000, it will be combined with one of the 10,000 NSR outputs).
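For illustration, here is a small Python sketch of the splitting rule just described (a simplification, not the actual client logic; the function name is made up):

```python
# Sketch of the default splitting rule: amounts in NSR.
CHUNK = 10_000

def split_outputs(total):
    """Split `total` NSR into 10,000 NSR outputs; a leftover smaller
    than 10,000 is merged with one of the 10,000 NSR outputs."""
    if total < CHUNK:
        return [total]  # too small to split; cannot find a block on its own
    chunks = [CHUNK] * (total // CHUNK)
    leftover = total % CHUNK
    if leftover:
        chunks[-1] += leftover  # e.g. 25,000 -> [10,000, 15,000]
    return chunks
```

So a 50k send would arrive as five 10,000 NSR outputs, while a 25k send would arrive as a 10,000 NSR output plus a 15,000 NSR one.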
Those who receive their shares in a big lump (more than 20k in one output) will lose opportunities to find blocks and should complain to the sender, who must have changed the default behavior of the client.
So, essentially, the voter is responsible for making sure that their NuShares are divided into 10,000 NSR chunks per address? In that case I don’t see a problem with this motion. Would I have to use an RPC command such as listunspent to see if any of my addresses contains far more than 10,000 NuShares?
This may be a bit off-topic now, but since the client generates new addresses in order to maintain the 10,000 shares per address distribution, does that mean wallets will get out of synchronization at some point? In my case I use duplicate wallets concurrently: one wallet runs in a safe place and is unlocked for minting only, while the other runs on my PC and is locked, but shows me the blocks I mint in real time. The problem would arise when I receive new funds. Which wallet gets to generate the new addresses needed for the distribution of NuShares? Things may get messy there…
Better: use the coin control function of the 0.5.2 wallet.
see related discussions here.
If both wallets own the same receiving addresses, both will see the same funds. When not sending, clients are just blockchain browsers.
If the addresses are generated by one wallet and the wallet.dat files are not shared between the two clients, you will need to use dumpprivkey / importprivkey.
So the inner workings of the client regarding the maintenance of 10,000 NSR chunks are deterministic? But the generated change addresses are not deterministic?
The amount of shares a shareholder owns still influences the global vote, because with more shares they are able to find more blocks, and each block still allows casting exactly one vote. What changes is that these votes will all have the same weight.
They may do that for good reasons. For example, splitting shares implies large fees. An exchange would have to deduct this fee from withdrawals, and traders may not want that (because they don’t intend to mint, or not immediately).
You can still split the outputs yourself later by paying the fees.
It doesn’t generate new addresses. It just splits the outputs. It means when you receive 50k it’s like you received 10k five times to the same address at the same time.
If the wallets are just receiving and minting you don’t have to worry, they’ll never generate addresses by themselves.
When you send shares there’s no address generated for the change either, because we enabled avatar mode by default in the NSR wallet. That means the change is sent back to the first sending address. Note that this is not the case in the NBT wallet. We enabled avatar mode by default to solve another problem, though, so it may change in the future if we find a better solution; you probably should not rely on it.
I guess I answered that question above but if I didn’t let me know.
Ok thanks for that clarification. This motion has earned my trust now.
Or wait for 7 days until the first block is found by the “lump”, and in its output all shares will be sent in 10k blocks without paying fees, right? Not being sent in 10k blocks will cause the receiver to lose the 8th to 14th days’ minting reward. After the 14th day, all shares are in 10k outputs (plus some rewards) and minting, regardless of how the shares were received. Is this correct?
Except there are some limits on the number of outputs generated in this situation. It is currently arbitrarily set to a maximum of 5 outputs when the CoinStake outputs are split. The reason is that there’s a protocol limit of 1 kB for the CoinStake transaction (which includes the vote), so this limit was added to avoid overly large CoinStakes. We could take some time to make this more optimal and actually use the maximum available size, but we had more urgent developments to do.
So your outputs will eventually be split into 10k chunks, but it may take some time.
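A rough Python sketch of that cap (illustrative only; the real limit comes from the 1 kB CoinStake size rather than a fixed count, and how the remainder beyond the cap is carried is my assumption):

```python
# Sketch of the CoinStake split with the 5-output cap: amounts in NSR.
CHUNK = 10_000
MAX_SPLIT_OUTPUTS = 5  # current arbitrary limit mentioned above

def coinstake_split(total):
    """Split a minting output into at most 5 outputs of ~10,000 NSR;
    whatever does not fit under the cap stays in the last output
    (assumed behaviour, to be split again on a later block)."""
    n = min(total // CHUNK, MAX_SPLIT_OUTPUTS)
    if n <= 1:
        return [total]
    outputs = [CHUNK] * n
    outputs[-1] += total - CHUNK * n  # remainder rides along in the last output
    return outputs
```

Under this sketch, an 80k lump would come out of its first found block as four 10k outputs plus a 40k one, and the 40k output would keep getting split as it finds further blocks.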