[Draft] Frequency Voting

@dysconnect explore the convergence properties to your heart’s content. I’m a little put off that I was not able to find a full algebraic solution, but a Monte Carlo method isn’t terrible either. It is certainly illuminating for me.

Example: The idea behind these parameters is to make a big wave (all ‘no’ votes, then all ‘yes’ votes on a motion for 1000 blocks straight) and see how the autovoting handles it with a large apathy percentage (80%) and only modest shareholder support (20%).


I love the ripples after the fact.

2 Likes

If we take the voter apathy to 0%, we can recover the simulation of the protocol as it exists presently. The following example compares a 0% apathy case to a 50% apathy case. If 50% apathy existed in the network as it currently stands, the entire system would shut down and no motion would be passable. Also, please remember that these are inherently stochastic (i.e. random) processes and that the curve representing the 50% apathy case was intentionally selected such that the motion passed, to show contrast. I should probably be averaging many runs, but the program is slow. Anyway:

The intent of frequency voting is to simulate the 0% apathy case with only a few voters. There is no direct analog to the current situation because, as things stand, we would be totally frozen out at a 50% apathy rate.
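The per-block rule being simulated can be sketched like this (a minimal sketch, assuming a 1000-block trailing window; the function name and parameters are illustrative, not from the actual program):

```python
import random

# Sketch of the default-voting rule: an apathetic minter votes 'yes' with
# probability equal to the fraction of 'yes' votes in the trailing
# 1000-block window; engaged minters vote their own preference.
WINDOW = 1000

def next_block_vote(history, apathy, consent, rng=random):
    """Return 1 if the next minted block carries a 'yes' vote, else 0."""
    recent = history[-WINDOW:]
    avg = sum(recent) / len(recent) if recent else 0.0
    if rng.random() < apathy:
        return 1 if rng.random() < avg else 0   # echo the recent frequency
    return 1 if rng.random() < consent else 0   # vote own preference

# 80% apathy; the engaged 20% unanimously support the motion.
rng = random.Random(42)
history = [0] * WINDOW
for _ in range(20000):
    history.append(next_block_vote(history, 0.8, 1.0, rng))
```

With these numbers the trailing ‘yes’ frequency drifts toward the engaged voters’ consent level, which is the convergence the graphs illustrate.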

Also, there are tons of other benefits to using the frequency voting mechanism. Default voting is just the lowest-hanging fruit.

1 Like

And, finally, the attack vector. If someone were to hijack the network for 10 consecutive blocks (which would require ~1% of minting shares if we say they’re spread out across 1000 blocks, a weaker form of the attack), even a 99.99% complacent network achieves convergence:

1 Like

Do I interpret the graphs right if I understand

  • the second one as showing that the results with 0% and 50% voter apathy are on a similar or almost identical level
  • the third one as showing the resiliency of the frequency voting system against attack?

If my understanding is right, this looks like a very nice way to keep the Nu network able to make decisions even if some or many voters don’t care about voting and are fine with following the lead of those who do.

edit: damn I didn’t scroll far enough up. Corrected the references to the graphs…

Yah, I’m basically showing that with reasonable parameters the risk of frequency voting diverging is very, very small, like hyper small. What I’m getting at is that frequency voting has almost no downsides, is superior to the current voting method, and is easy to implement. With all the talk in the Bitcoin community about how to achieve consensus, implementing frequency voting could make some serious waves.

Graph 1. Even at 80% apathy the network converges efficiently
Graph 2. 50% apathy simulates 0% apathy well, as intended
Graph 3. Convergence is achieved even with 99.99% apathy, which is of course crazy big. Like, that would mean for every minter that cares there are 10,000 other minters who don’t.

1 Like

If you ask the feed providers to slightly alter the motion hash, you can tell what percentage of minters configures votes manually and which feed provider is responsible for how many votes.

Like an all-zeroes hash for manual configuration, a 1 at the beginning or end for @Cybnate’s feed, a 2 at the beginning or end for @cryptog’s feed.
And all with no extra bloating of the blockchain!

Assuming that the feed providers did support this test and the minters didn’t adjust their behaviour, this could quite accurately measure the percentages.
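The tallying side of this idea could look something like the sketch below (hypothetical: the marker characters, the `MARKERS` mapping, and checking only the leading character are illustrative assumptions, not the actual scheme):

```python
from collections import Counter

# Hypothetical marker -> source mapping for the hash-tagging idea above.
MARKERS = {"0": "manual", "1": "Cybnate feed", "2": "cryptog feed"}

def tally_sources(vote_hashes):
    """Count votes per source based on each motion hash's first character."""
    counts = Counter()
    for h in vote_hashes:
        counts[MARKERS.get(h[0], "unknown")] += 1
    return counts

print(tally_sources(["0ab3", "1cd4", "1ef5", "2gh6"]))
```

Scanning the hashes in the last few thousand blocks this way would give the percentages without adding anything to the chain.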

That’s great. Basically, within 10000 blocks something with even just a 30% approval rate could pass quite easily. Given your graphs and a few runs on my computer, I guess the voting window has to be set somewhere over 20000 if we were to implement this feature with 80% apathetic voting shares. Equivalently, one could use a somewhat slower or biased averaging process.

Data feeds aren’t bad, we just need to develop transparency mechanisms or stuff like this in the long term for robustness.

1 Like

The following is a simulation with 80% of minters being apathetic; of the remaining 20%, 30% show support for the motion. It was performed using the parameters I am suggesting (keep the protocol the same and just institute a 1000-block average for apathetic voters). As you can see, it does not really even come close to passing. To pass, a motion would still require very close to 50% support. Note that as it stands, if 49% of minters are voting for a motion it could potentially pass by random chance. Frequency voting does not change the protocol rules at all, so that is still true. With 30% support, however, a motion really doesn’t have any chance of passing, with or without frequency voting.

1 Like

@dysconnect I could really use your consensus here, so I’d like to understand more about your last comment. Can you send me the parameters of your run and your text file where you saw a motion pass with 30% support?

I didn’t change anything apart from consent:

# Parameters

# Defined by protocol
votingwindow = 10000
# Averaging window
window = 1000
# What % of voters are apathetic?
apathetic = 0.8
# What % of non-apathetic voters consent to this motion?
consent = 0.3
# How many 'yes' blocks in a row to start? (you can make your own initial array further down in the code if you want)
initial = 1000
# How many data points after initialization?
iterations = 100000

It returns “passed? True” quite often, on the order of half the time.
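For anyone who wants to reproduce this, here is a self-contained re-implementation of the experiment under these parameters (a sketch, not the author’s actual script; the pass rule assumed is a simple majority of the trailing voting window):

```python
import random

def simulate(votingwindow, window, apathetic, consent, initial, iterations, seed=None):
    """One Monte-Carlo run. Returns True if the motion ever holds a
    majority of the trailing `votingwindow` blocks."""
    rng = random.Random(seed)
    votes = [1] * initial                    # 'yes' blocks in a row to start
    win_sum = sum(votes[-window:])           # running averaging-window sum
    vote_sum = sum(votes[-votingwindow:])    # running voting-window sum
    for _ in range(iterations):
        n = len(votes)
        avg = win_sum / min(n, window) if n else 0.0
        # apathetic minters echo the trailing frequency; the rest vote consent
        p = apathetic * avg + (1 - apathetic) * consent
        v = 1 if rng.random() < p else 0
        votes.append(v)
        win_sum += v
        if len(votes) > window:
            win_sum -= votes[-window - 1]
        vote_sum += v
        if len(votes) > votingwindow:
            vote_sum -= votes[-votingwindow - 1]
        if len(votes) >= votingwindow and vote_sum > votingwindow // 2:
            return True
    return False

print("passed?", simulate(10000, 1000, 0.8, 0.3, initial=1000, iterations=100000, seed=0))
```

Setting initial=0 removes the artificial 1000-block ‘yes’ wave at the start, which is what makes this configuration pass so often.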

1 Like

Yah, so your initial is 1000, which means you’re starting out with 1000 yes blocks. That’s a highly improbable setup that I was just using as a proof of concept. Try it with initial=0 to get a much more realistic simulation.

You were simulating what happens when someone takes control over the network for 1000 blocks and votes for a motion 100%, then the shareholders proceed to also support it 30% with 80% apathetic voters. It’s understandable that a motion would pass under those extremely artificial conditions.

If you are running it a bunch, I’d be curious to know at what level of consent a motion starts passing with initial=0. I’m guessing 45%. It totally depends on your apathy level and random luck, though.

I see. Setting initial=0 but apathetic=.9 still gave some high pass rates for consent = 0.45. It’s debatable whether that’s significant but I guess there may be cheap ways to patch up this small amount of loss of robustness.

1 Like

But is it a significant loss of robustness at all? At 0% apathy, which is the non-frequency-voting case, I think you’ll find a motion will often pass at 48% consent. Even at 1% shareholder support in the current system, given infinite time, everything has a statistical certainty to pass. If we really want, we can just raise the bar to 60% for passage. I don’t think we need to worry ourselves over a 3% difference at 90% apathy, however.

Thanks for verifying my plots and algorithm, I appreciate it.

It’s not that often that a 48% motion passes with 0% apathy. The gap is 200 votes, which is extremely hard to reach for a binomial distribution with n = 10000 and p = 0.48 (200 is about 4 standard deviations). I don’t want to analyse the random process formally, but my gut feeling tells me it’s going to take millions of blocks for it to pass with at least 1% probability. Whereas with 90% apathy this kind of probability might be seen with as low as 40% support.
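The binomial figures here can be sanity-checked with a normal approximation (the threshold of 5000 ‘yes’ blocks out of 10000 is the protocol’s majority rule):

```python
from math import sqrt, erfc

# A motion needs more than 5000 'yes' blocks out of n = 10000
# when true support is p = 0.48.
n, p, threshold = 10000, 0.48, 5000
mean = n * p                    # 4800 expected 'yes' blocks
sd = sqrt(n * p * (1 - p))      # about 50 blocks
z = (threshold - mean) / sd     # about 4 standard deviations
tail = 0.5 * erfc(z / sqrt(2))  # one-sided normal tail approximation
print(f"z = {z:.2f}, tail probability ~ {tail:.1e}")
```

A per-window tail probability on the order of 1e-5 is consistent with the “millions of blocks” intuition above.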

As for setting a higher threshold, I agree it’s a matter of trade-off, but if we can make the gap between consent and motion passage as small as possible, it is a good thing to think about, just not top priority.

1 Like

You’re a Jedi now. Your analysis is spot on. So, would you vote to implement frequency voting with a 1000 block size averaging window?

I can definitely vote and rally for frequency voting, but I prefer to also add a few (cheap) safeguards. For example, an apathetic client would not vote for a motion for the first 1000 blocks after the motion was seen, which has a small impact on convergence time but gives some robustness without bloating the chain.

Although the formal 50% threshold for motion success is quite rigid, the consensus establishment and conflict resolution behind voting is somewhat softer: it is more flexible, and the hazards extend beyond whether a motion succeeds at exactly 50%. So I won’t worry too much about these protocol-level drawbacks as long as they are kept small. I also feel it is a good price to pay for avoiding the centralization of data feeds.

2 Likes

We can also implement frequency voting on data feeds, such that someone is 30% cryptog, 30% Cybnate and 40% apathetic. However, that would be a later implementation.

Glad you’re on board! I’m not sure what you mean by a 1000-block wait to start, so let’s spell it out:

Block 0 is the first block to vote for a motion. Clearly, you don’t want apathetic voting to start until block 1000. However, when apathetic voting starts, does it average blocks 0 through 999, in which there is some % support, or blocks -999 through 0, which contain the minimum support: exactly 0.1%?

I think apathetic voters should not vote in blocks 0-999, but should then cast a vote depending on the average votes over blocks 0-999.

So let’s say there are 90% apathetic voters and consider a motion with 50% consent. For blocks 0-999, the percentage of votes would be 5%. Then at block 1000, apathetic voters will start voting, and the probability that a vote is cast is going to be about 0.05 × 0.9 + 0.5 × 0.1 = 9.5%.
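Written out, the arithmetic of that step is (percentages as given above):

```python
# 90% of minters echo the 5% trailing average; the 10% engaged minters
# vote with 50% consent.
apathetic, consent = 0.9, 0.5
trailing_avg = (1 - apathetic) * consent  # 5% vote rate during blocks 0-999
p_vote = apathetic * trailing_avg + (1 - apathetic) * consent
print(round(p_vote, 4))  # 0.095
```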

1 Like

Totally. Like you show, it will cause a discontinuity in apathetic voter rates, but that’s not really an issue. I’ll put it in the code tomorrow and post some updated images. However, just thinking about it, I think it should strictly decrease the voting rate until convergence is reached. Interestingly enough, such a modification would make the apathetic voting simulation insensitive to the ‘initial’ parameter, and therefore to hostile takeovers of the system lasting less than 1000 blocks.

1 Like

@dysconnect here’s the issue: all that is required with the original idea is for the wallet to look at the past 1000 blocks. If we want wallets to ignore a motion for a period after first seeing it, they have to remember all motions that have ever been on the blockchain. I’m not sure how much memory that would take, but it doesn’t seem like the best idea to me.

Can you think of a way to feasibly implement such a feature? Would we need to just say ‘if you see it on the blockchain while running, remember it but don’t act on it until 1,000 blocks later’? That would reduce the apathetic participation rate by an unknown amount, especially for a longer block time like Peercoin’s.

Oh! We could say something like: if there is a stretch of 1000 blocks without the motion, then wait 1000 blocks after seeing the motion to begin counting. That would require tracking 3000 blocks, I think, which is pretty reasonable.
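One way that rule might be sketched (illustrative only; the `seen` list and the 3 × 1000-block memory bound are assumptions, not a spec):

```python
def default_vote_active(seen, gap=1000):
    """`seen` lists, per block (newest last), whether the motion appeared.
    Apathetic counting is active only once the motion has been present
    for `gap` blocks after any `gap`-long absence, judged from the most
    recent 3 * gap blocks."""
    recent = seen[-3 * gap:]
    run, last_gap_end = 0, None
    for i, present in enumerate(recent):
        run = 0 if present else run + 1
        if run >= gap:
            last_gap_end = i          # end index of a full absence stretch
    if last_gap_end is None:
        return True                   # no long absence in memory: keep counting
    for j in range(last_gap_end + 1, len(recent)):
        if recent[j]:                 # first reappearance after the gap
            return len(recent) - j > gap
    return False                      # not seen again since the gap

print(default_vote_active([False] * 1000 + [True] * 1001))  # True
print(default_vote_active([False] * 1000 + [True] * 500))   # False
```

Because only the last 3000 blocks are consulted, the wallet never needs to remember every motion that has ever appeared on the chain.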