[Passed] Motion proposal to give all votes the same weight

The motion passed: https://blockexplorer.nu/motions/success


The “voting PoS” of NuShares is very interesting; it’s basically a proof-of-utxo. I know this was decided long ago, but I still have some questions:

Couldn’t the wallet say that transactions from an address to itself, with a single input > 10k and n outputs of which at least n - 1 have a value of exactly 10k, should be processed without a tx fee? You can hardly spam the network with this rule, and it would speed up the splitting process.
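
A rough sketch of the rule I mean, in Python (the transaction layout and the helper name are just for illustration, not the actual wallet code):

```python
# Sketch of the proposed fee exemption for self-splitting transactions.
# The transaction layout is a simplification for illustration only.

SPLIT_SIZE = 10_000  # the 10k chunk size discussed in this thread

def qualifies_for_free_split(tx):
    """True if the tx spends a single input > 10k from an address back to
    itself, and all but at most one of its outputs are exactly 10k."""
    if len(tx["inputs"]) != 1:
        return False
    inp = tx["inputs"][0]
    if inp["amount"] <= SPLIT_SIZE:
        return False
    if any(out["address"] != inp["address"] for out in tx["outputs"]):
        return False
    non_chunk_outputs = sum(1 for out in tx["outputs"] if out["amount"] != SPLIT_SIZE)
    return non_chunk_outputs <= 1

tx = {
    "inputs": [{"address": "SomeNuAddress", "amount": 27_000}],
    "outputs": [
        {"address": "SomeNuAddress", "amount": 10_000},
        {"address": "SomeNuAddress", "amount": 10_000},
        {"address": "SomeNuAddress", "amount": 7_000},
    ],
}
print(qualifies_for_free_split(tx))  # True -> process without a tx fee
```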

With the new rule of the proposal, shouldn’t the leftover be put into its own output? That way you avoid wasting the coin age of the 10k output containing the leftover once you have enough shares to form another 10k output together with it.


What new rule? Why is there a 10k block containing the leftover? I am confused.

Sorry if I was unclear. With the new rule I mean that the amount of shares used in the staking transaction doesn’t influence the voting weight. Then you said

and I was referring to the edge case you mentioned in the brackets. So if I have a utxo of 27k, your method will create one new output with 10k and another one with 17k. If I now get 3k more, I need to split the 17k output in order to form two new 10k outputs. If we had instead divided the original 27k output into 10k / 10k / 7k, we could keep the existing 10k outputs untouched and therefore avoid wasting their coin age.
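
To make the edge case concrete, here is a rough Python sketch of the two strategies (the helper names and the exact rules are just my reading of the thread, not the actual wallet code):

```python
# Two ways of splitting a utxo into 10k voting chunks (sketch only).

CHUNK = 10_000

def split_leftover_attached(amount):
    """My understanding of the current behaviour: carve off full 10k chunks,
    keeping the remainder merged into the last output (27k -> 10k + 17k)."""
    n, rem = divmod(amount, CHUNK)
    if rem == 0:
        return [CHUNK] * n
    if n == 0:
        return [amount]
    return [CHUNK] * (n - 1) + [CHUNK + rem]

def split_leftover_separate(amount):
    """The variant I am suggesting: put the remainder into its own output
    (27k -> 10k + 10k + 7k), so the existing 10k outputs keep their age
    when more shares arrive later."""
    n, rem = divmod(amount, CHUNK)
    return [CHUNK] * n + ([rem] if rem else [])

print(split_leftover_attached(27_000))  # [10000, 17000]
print(split_leftover_separate(27_000))  # [10000, 10000, 7000]
```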

If you later have a 3k utxo in the same address, this combination only happens when a utxo in that address finds a block. Usually it is one of the many 10k blocks that finds a kernel and combines with the 3k to form an output of original_block_size + 40 + 3k. I don’t know what happens if it is the 17k block that finds a block next.

So you are saying that the 3k has to wait for the 17k utxo to find a block before it can be recombined into a new 10k utxo? But then we’re wasting the age of the 7k and the 3k, since they never contribute to the voting process.

This may all sound negligible, but the thing is that if the source is opened one day, people will start optimizing the staking process (within the limitations of the protocol). The default wallet should therefore also make the staking process as efficient as possible, so that a small group doesn’t gain an advantage.

The 3k one will be combined in the next block found by any utxo in the same address.
A higher coin amount lets a utxo find a block proportionally quicker.

But if there are only the 3k, several 10k’s and the 17k, and one of the 10k chunks finds a block, then it either has to split the 17k (wasting its age) or leave the 3k uncombined … that’s what I am trying to say.

Why does it have to split the 17k one when it combines with the 3k one?

How do you want to make two 10k outputs out of one 17k input and one 3k input without splitting the 17k input?

0 somehow you get a 3k output in the address that has 10k 10k 10k 17k outputs
1 a 10k output finds a kernel
2 it looks for outputs smaller than 10k
3 it finds the 3k one
4 it combines with the 3k one to make a 13040 output

now you have 10k 10k 13040 17k outputs
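
In rough Python terms it looks like this (just a sketch of the steps above, not the real minting code; the 40 is the block reward):

```python
# Sketch of the combine step described above (not the real minting code).

CHUNK = 10_000
BLOCK_REWARD = 40  # the +40 added when an output mints a block

def mint(outputs, kernel_index):
    """The output at kernel_index found a kernel: it absorbs every other
    output in the address that is smaller than 10k, plus the reward."""
    kernel = outputs[kernel_index]
    small = [o for i, o in enumerate(outputs) if i != kernel_index and o < CHUNK]
    rest = [o for i, o in enumerate(outputs) if i != kernel_index and o >= CHUNK]
    return rest + [kernel + sum(small) + BLOCK_REWARD]

outputs = [10_000, 10_000, 10_000, 17_000, 3_000]
print(sorted(mint(outputs, kernel_index=0)))  # [10000, 10000, 13040, 17000]
```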

What? But that’s horrible, because both the 13040 output and the 17k output will only cast one vote each with the new rule, although the shareholder has enough shares to cast three votes per week. Then everyone who is aware of this and knows what coin control is has a clear advantage.

But the 13040 and 17k blocks have a proportionally bigger chance to find a block in the same period. So on average your voting power is proportional to the total amount of shares you have in blocks >= 10k.

Why is that? If this is true, why are we splitting the outputs in the first place? If a 20k chunk produces twice as many blocks in expectation, then I don’t see why I would want it divided into two 10k chunks if I want to improve my voting power.

There is a monetary incentive of course, as in any PoS system, to split the balance into minimal chunks in order to maximize the number of candidates when searching for a kernel, but I thought the idea behind the splitting here is to optimize voting, not expected reward.

Even if the 20k chunk has twice the probability of finding a block, two 10k chunks will still produce more blocks as long as each of them needs less than 3.5 days to find a kernel, which will basically always be the case considering the coin supply and the block time. And by the same argument the 17k should be divided into 10k and 7k if 3k are available :blush:

So you always have many outputs minting at any time. In the extreme case, if you could put all your shares in one output, then after finding a block none of your shares would be able to mint for 7 days.

I estimate that under the current diff a 10k output needs about 30 days to find a block on average.
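
Under a simplified model (expected kernel search time inversely proportional to output size, plus the 7-day lock after a found block, using the ~30-day figure above), a rough Python sketch shows why splitting still gives more blocks overall:

```python
# Back-of-the-envelope block rates under a simplified model: search time
# inversely proportional to output size, plus a 7-day lock after each block.
# The 30-day figure for a 10k output is the rough estimate given above.

LOCK_DAYS = 7.0
DAYS_FOR_10K = 30.0

def blocks_per_day(chunk_sizes):
    """Long-run expected blocks per day for a set of output sizes."""
    total = 0.0
    for size in chunk_sizes:
        search_days = DAYS_FOR_10K * 10_000 / size
        total += 1.0 / (search_days + LOCK_DAYS)
    return total

print(blocks_per_day([20_000]))          # one 20k output  (~0.045 blocks/day)
print(blocks_per_day([10_000, 10_000]))  # two 10k outputs (~0.054 blocks/day)
```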

I see, all this is just done to smooth voting. Thanks so much for all the info.

Oh yeah, I just missed a factor of 10 in the supply -.- OK, now I understand why there is a global incentive to do that.

Sorry, one last question: I think the shareholder has an incentive to be able to use their full voting power for any particular motion. To achieve this at any time, all outputs should (in expectation) produce a block within each 10000-block window, which would correspond to an optimal chunk size of ~45k at the current difficulty. Is this correct?
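
The rough arithmetic I have in mind (using your ~30-day estimate for a 10k output and taking the 10000-block window to be about a week):

```python
# Rough arithmetic behind the chunk size I mean (estimates from this thread).

DAYS_FOR_10K = 30.0   # ~30 days for a 10k output to find a block (estimate above)
WINDOW_DAYS = 7.0     # a 10000-block window is roughly the 7-day min age

# If an output of size s needs about DAYS_FOR_10K * 10_000 / s days on average,
# then to expect one block per window we need roughly:
optimal_chunk = 10_000 * DAYS_FOR_10K / WINDOW_DAYS
print(round(optimal_chunk))  # ~43000, i.e. in the ballpark of the ~45k figure
```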

I think it depends on the diff and how many shares you have in total, the same as in Asking for a low minting NuShares minimal.

Yes exactly; since 10000 blocks is also the min age (coincidence?), the optimal chunk size is the smallest amount of shares that will meet the target within 1 block (in expectation). Thanks, I’ll play with the splitshareoutputs parameter to see how it behaves.