[draft] Increase Nu network block size to 2 megabytes

As a continuation of the discussion on governance, here is a draft of a motion that bumps Nu's maximum block size from 1 megabyte to 2.

— begin —

This motion increases the maximum block size allowed on the Nu network from 1 megabyte to 2 megabytes.

This will be a revision to the Nu network protocol.

Following the best practice that emerged in previous protocol changes, the new 2 megabyte limit will go into effect 20,160 blocks (approximately 14 days) after the first time that 90% of the blocks minted in a trailing 10,080-block window (approximately 7 days) indicate support for the new protocol version.

Should the 90% threshold (“soft deployment”) not be reached within 6 months of a production-class client operating on the network, then, by passage of this motion, the Nu organization will at that point recognize the blockchain that supports or contains a 2 megabyte block and that has at least a 10% minting rate (“hard fork”).

— end —
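For illustration, here is a rough sketch of how a client could check the rollout condition above. The function name, the version-signalling check, and the data structure are my own assumptions, not taken from the Nu client.

```python
# Hypothetical sketch of the activation rule described in the motion above.
# Names and the exact version-signalling mechanism are assumptions for
# illustration, not the actual Nu client implementation.

WINDOW = 10_080        # ~7 days of 1-minute blocks
THRESHOLD = 0.90       # 90% of minted blocks must signal the new version
GRACE_PERIOD = 20_160  # ~14 days of blocks before the new limit takes effect

def activation_height(block_versions, new_version):
    """Return the height at which the 2 MB limit would take effect,
    or None if the 90% threshold has not been reached yet."""
    signalling = 0
    for height, version in enumerate(block_versions):
        if version >= new_version:
            signalling += 1
        # Drop the block that falls out of the trailing 10,080-block window.
        if height >= WINDOW and block_versions[height - WINDOW] >= new_version:
            signalling -= 1
        if height >= WINDOW - 1 and signalling / WINDOW >= THRESHOLD:
            return height + GRACE_PERIOD
    return None
```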

It is interesting to think about a way for the organization to dictate which blockchain is the official one, even if there is a gap in the enforcement by the Nu client. It’s kind of a chicken-and-egg problem.

2 Likes

I am in support of this idea, because actions are stronger than intentions or capabilities.

Something else: this is something that could be bumped incrementally. I think we should be conservative in how much the block size is increased, so that we can pass a motion for it with some regularity. No need to jump to 10 megabytes right away.

And, ostensibly, this means that the Nu network would have capacity for 20x as many transactions as Bitcoin, due to the faster minting interval (2 MB per 1-minute block versus Bitcoin’s 1 MB per 10-minute block).

I see no immediate need for that, because most blocks are rather empty whenever I look at a block explorer.

Then again, it wouldn’t hurt to have empty blocks that could contain even more transactions.
It would help to show how Nu can handle the type of decision that is about to tear Bitcoin apart.
It fits perfectly with the possible agenda for 2016 of showcasing the strengths of Nu governance!

I’d like to bring the discussion @Nagalim started regarding dynamic fees into this topic (maybe even tie it to this motion), because it would help prevent (cheap) spam.

Filling blocks costs a spammer at least 10 NBT per block at 1 MB, or 20 NBT per block at 2 MB.
But 60 MB or 120 MB per hour is growth I’d rather not have added to the blockchain if the intention is just to spam it cheaply.
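To make the figures explicit, here is the arithmetic, assuming a minimum fee of 0.01 NBT per kilobyte and one block per minute (both of which are my assumptions here, chosen to match the numbers above):

```python
# Back-of-the-envelope check of the spam cost and chain growth figures above,
# assuming a 0.01 NBT/kB minimum fee and a 1-minute block interval.

FEE_PER_KB_NBT = 0.01
BLOCKS_PER_HOUR = 60  # one block per minute

for block_size_mb in (1, 2):
    cost_per_block = block_size_mb * 1000 * FEE_PER_KB_NBT  # NBT to fill one block
    growth_per_hour = block_size_mb * BLOCKS_PER_HOUR       # MB added per hour if full
    print(f"{block_size_mb} MB blocks: {cost_per_block:.0f} NBT per full block, "
          f"{growth_per_hour} MB/hour of chain growth")
# 1 MB blocks: 10 NBT per full block, 60 MB/hour of chain growth
# 2 MB blocks: 20 NBT per full block, 120 MB/hour of chain growth
```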

1 Like

If the block size is raised to 2MB, the network would stop being Nu and therefore has no place being discussed on this forum. I will start banning anyone who discusses this here.

Just to avoid any ambiguity, the paragraph above is intended as a humorous dig at certain elements of the Bitcoin community.

I’d agree with @masterOfDisaster: there isn’t really any technical need for the size increase, but it can’t be denied that having the conversation and vote would be a powerful message about governance, especially against the backdrop of the current state of Bitcoin (as alluded to above).

5 Likes

I don’t see a need to increase the block size at this stage. If anything, we should halve it and with that slow the growth of the blockchain. But I suggest just keeping the status quo until we see multiple NBT transactions in each block consistently for weeks in a row.

Like the stints though.

there isn’t a need but it is a smart advertisement :wink:
We can decrease it again at any time.

What should impress us is that here in this forum you wouldn’t have needed to write the next paragraph to make people grasp your intention :wink:

1 Like

I was 99% sure, just wanted to be absolutely sure :slight_smile:

2 Likes

You could have banned the 1% of people complaining about that until they had proved to have parked 10 NBT for 4 weeks at 0% :smiley:

People, besides making the Bitcoin community look like a bunch of amateurs, the OP wasn’t about raising the block size now. It is a strategy for raising it, which is perfectly fine to discuss now.

Great. A complete blocksize strategy includes both raising and decreasing aspects.

Variable block size in Nu 3.0! Vote it in now! Way better than just snubbing bitcoin!

3 Likes

And variable fee based on block fill level!

2 Likes

I like the idea of variable block size. Further, we can tell people that Nu has solved this issue every time the block size debate is mentioned.

One convenient aspect of a bounded block size is that it helps establish a model around a node’s compute, storage, and network needs. It gives a developer something to design for and a node operator something to procure for.

As an example: with a 2 MB block, a node with 100 connections could need about 27 Mbit/s of sustained bandwidth. This is out of the realm of most consumer broadband connections, especially on the upstream side. Unbounded demands could easily drive centralization. In a worst case, an unbounded block size could open a potential point of attack.

If my math is right, at 2 MB per block a node needs to process 140 transactions per second, or 12 million per day. Storage would grow by 1 terabyte per year (though the B&C launch has proven a way to ‘garbage-collect’ the blockchain).
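For what it’s worth, here is the arithmetic behind those estimates. The 1-minute block interval, a ~240-byte average transaction, and relaying every block to all 100 peers are assumptions I used to reproduce the numbers, not measured values:

```python
# Rough reproduction of the capacity estimates above. The block interval,
# average transaction size, and naive full relay to every peer are assumptions.

BLOCK_SIZE_MB = 2
BLOCK_INTERVAL_S = 60   # one block per minute
AVG_TX_BYTES = 240
CONNECTIONS = 100

# Sustained upstream bandwidth if every block is relayed to every peer.
mbit_per_s = BLOCK_SIZE_MB * 8 * CONNECTIONS / BLOCK_INTERVAL_S
print(f"bandwidth: ~{mbit_per_s:.0f} Mbit/s")  # ~27 Mbit/s

# Transaction throughput and daily volume with full blocks.
tx_per_s = BLOCK_SIZE_MB * 1_000_000 / AVG_TX_BYTES / BLOCK_INTERVAL_S
print(f"throughput: ~{tx_per_s:.0f} tx/s, ~{tx_per_s * 86_400 / 1e6:.0f} million tx/day")

# Blockchain growth per year if every block is full.
tb_per_year = BLOCK_SIZE_MB * 365 * 24 * 60 / 1_000_000
print(f"storage: ~{tb_per_year:.2f} TB/year")  # ~1 TB/year
```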

What would these design and operations parameters be in a variable block size?

But surely you aren’t suggesting that hardware is made specifically with running the Nu network in mind? If we vote on block size, you can be assured that these concerns are precisely what we will be thinking about. These concerns are indeed the only thing shareholders should think about when voting on block size.

A fee like I’ve laid out would basically ensure that blocks are never full except during an ongoing spam attempt. That way, increased demand for transaction volume would directly increase the fees, rather than pushing us to increase the block size. This allows shareholders to concern themselves solely with hardware capabilities when voting on the block size.

By voting on a block size, I think you’ll find that the block size (and therefore the hardware demands) will decrease in the short term because we all know the blocks aren’t filling right now.

Perhaps the hardware isn’t created specifically for Nu, but Nu shareholders do identify the hardware to put to use for the purpose of running the network.

In other words, should the Nu network run on Raspberry Pi-level hardware, commodity desktop hardware, or enterprise-class servers? What amount of connectivity should we require of a Nu shareholder?

Presently some Nu shareholders are minting off a Raspberry Pi. At scale, even with a 1 megabyte block (with 10x the ‘transaction’ capacity of Bitcoin per day due to the 1-minute interval vs 10 minutes), I do not think the Raspberry Pis would keep up, and these shareholders would be excluded from participating in the network. This would eliminate their ability to mint and therefore vote.

Also, from the standpoint of architecture, such as database-layer tuning and implementation, these are things that the devs would have to engineer for.

Nu client memory usage and database performance have already shown themselves to be a problem and have required developer work. The impact has been significant: it has delayed development effort on B&C and other features that the shareholders have already approved.

This is assuredly not a question for the devs, but a question for the shareholders. Why would we ask the devs to answer this question for us? Clearly the shareholders are the only ones who can make a statement regarding this.

A 1 megabyte block size does not mean that every block is 1 megabyte. On the contrary, I’ve been speaking of a fill target of 25%.

We currently have 1 megabyte blocks. Are these shareholders excluded now? In that case, why would voting on an increase to 2 megabytes not steamroll them just as much as voting on literally any number? If your statement is that we cannot vote on a block size, then there is no mechanism by which we can change the block size other than dev dictatorship. On the other hand, if we allow shareholders to continuously vote on block size they will intelligently look at the number of nodes and pick block sizes that will keep the system healthy.

What I’m saying is that things like SegWit, better hardware, and other methods of dealing with higher transaction volumes will happen independently of our network. We can respond to them intelligently, so that as hardware capacity grows in its staggered, unpredictable fashion we can lift the block size at will without an additional hard fork.

In my opinion, continuously voting on block size is in fact the only fair thing we can do with regards to shareholder hardware capabilities.

It would be great to get developer input based on professional experience and any testing that has been done, e.g. hardware of capacity X can perform to a level Y. I suspect that a RasPi would be CPU-limited first, then disk, then network. A desktop-class machine with a spinning disk could be disk-limited first, then network, then CPU. With an SSD, it may be network-limited (assuming a residential connection).

There are other factors, too, such as simply downloading and validating the blockchain. For example, if the lowest common denominator is a RasPi and it can only do 10 tps, perhaps we can only run the network at 5, so that a node actually has the ability to “catch up” and be able to mint in the first place.

I feel that the risk of them being excluded exists, yes. Shareholders running on a RasPi should be aware. I will be prepared to bring sufficient hardware and connectivity to keep minting up to the design capacity of the protocol. I do foresee the need to invest in an array of SSDs to keep up with the I/O needs, as well as needing to get business-grade connectivity.

Can you educate me on this 25% fill idea further?

I concur that this is another responsibility that the shareholders have, and perhaps this is bringing larger awareness to that.

I think the difference is in how it is implemented. It sounds as if you are advocating a new type of voting that is part of the protocol, whereas I was advocating more of a ‘step’ function implemented through code. Once a change of this kind has been implemented by a senior developer, I would think it would be straightforward for a junior developer to see the code diff and compile a new client.

You make a good point that hard forks are easy for Nu. However, what about client adoption? Are we to be known as the project where you have to update your client every time a new piece of hardware comes out?

The idea is that the shareholders pick the block size, most likely forming consensus around developer opinion, then make the fee sensitive to block fill. If the blocks fill up, increase the fee; if they’re empty, decrease the fee down to the minimum. Aim for 25% fill, which is good for DoS protection.
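To make that concrete, here is a minimal sketch of the kind of fill-sensitive fee adjustment I have in mind. The target, step size, and minimum fee are illustrative assumptions, not a finished proposal:

```python
# Minimal sketch of a fill-sensitive fee rule. Target, step size, and minimum
# fee are illustrative assumptions only.

TARGET_FILL = 0.25   # aim for blocks ~25% full
MIN_FEE_NBT = 0.01   # never drop below the minimum fee
ADJUST_STEP = 0.10   # move the fee by 10% per adjustment period

def adjust_fee(current_fee, recent_fill_ratios):
    """Nudge the fee toward the fill target based on recent block fill levels."""
    avg_fill = sum(recent_fill_ratios) / len(recent_fill_ratios)
    if avg_fill > TARGET_FILL:
        current_fee *= 1 + ADJUST_STEP  # blocks filling up: raise the fee
    else:
        current_fee *= 1 - ADJUST_STEP  # blocks mostly empty: lower the fee
    return max(current_fee, MIN_FEE_NBT)

# Example: blocks averaging 60% full push the fee up by one step (~0.011 NBT).
print(adjust_fee(0.01, [0.55, 0.65, 0.60]))
```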

How long until the voting starts?