My Raspberry Pi has become very slow lately. I took a look at debug.log and saw lines like this:
Flushed wallet.dat 58432ms
Does that mean the Pi takes almost a minute to flush the wallet? It seems this happens every time the client accepts a new block. So the client is spending all its time flushing instead of minting?
On my laptop it takes 80ms to do the flushing.
Can other people check? (Type less ~/.nu/debug.log, then press “>” to go to the end; it’s easy to spot the line.)
Can I configure the wallet so it flushes the wallet less often?
Mine also seems to be quite slow… It’s a Model B with a fresh setup (no NSR on it yet); it just finished catching up with the blockchain a few hours ago. It’s very slow to execute commands such as nud getinfo…
I tried to grep the log for the flushing lines; here is the output (execution of this command took more than 10 minutes and didn’t finish).
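For anyone who wants to check their own log, here is a sketch of a one-liner that summarizes the flush durations. It assumes the same `Flushed wallet.dat NNNms` format as the line quoted above; the sample file written here is fabricated so the command can be shown end to end — point the awk at ~/.nu/debug.log instead.

```shell
# Fabricated sample lines in the format quoted above, only so the command
# below has something to run against; use ~/.nu/debug.log on a real node.
printf 'Flushed wallet.dat 58432ms\nFlushed wallet.dat 61200ms\n' > /tmp/debug.sample

# Print each flush duration in seconds, and the worst case at the end.
awk '/Flushed wallet.dat/ { ms = $3 + 0; if (ms > max) max = ms; print ms / 1000 "s" }
     END { print "worst: " max / 1000 "s" }' /tmp/debug.sample
```

On a large log this is much faster than grepping and eyeballing, since it reads the file once and only prints the numbers.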
nud getinfo is very slow on my Pi, too. It takes like 30 sec. My Pi is a Model B. It’s a somewhat slow model, but it wasn’t like this a couple of months ago.
If you run top, you will see that “wa” in the CPU line is very high, which means the system spends most of its time waiting for I/O (maybe the file system flushing).
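To put a number on that without watching top interactively, here is a rough sketch that reads the cumulative counters from /proc/stat. It assumes the standard Linux field order on the aggregate "cpu" line (user, nice, system, idle, iowait, …) and ignores the smaller trailing fields, so treat the percentage as approximate.

```shell
# Read the aggregate "cpu" line of /proc/stat; the fifth number after the
# label is the cumulative iowait time (in clock ticks) since boot.
read -r _ user nice system idle iowait rest < /proc/stat
total=$((user + nice + system + idle + iowait))
echo "iowait: $((100 * iowait / total))% of CPU time since boot"
```

Sampling it twice a few seconds apart and diffing the counters gives the current rate rather than the boot-to-now average.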
If I use an empty S wallet the flushing time is 500-2000ms
The system could mint at a healthy rate up until about a day ago, judging from how many blocks the wallet should get per day. So the slowness might not be a severe issue for minting (does minting have its own thread?). It became VERY slow in the last 30 hours or so and stopped finding blocks.
I don’t quite like groko’s tinkering with the minting parameters. He doesn’t regard PoS as important, and he doesn’t seem interested in the economic aspect of the minting parameters.
His fix of the inefficient code is good. I’m not sure whether Nu or Peercoin has the problem he fixed, though.
I blamed (intentional) excessive use for that: I had lots of different applications running on it that all write blockchains (e.g. Peercoin, Emercoin, Slimcoin, Nu).
Maybe it was not only the intentional heavy use, but unintentional use as well, with those applications creating more I/O on the SD card than expected.
The symptoms varied: not being able to log in via SSH (I run the RaPi only in headless mode), commands that couldn’t be executed when logged in, etc.
The reason for the symptoms was always the same: a corrupt file system that couldn’t be fixed by fsck. And the reason for the corrupt file system became obvious after dd’ing a backup image to the SD card: the file system was corrupt right after writing it; the SD card was beyond its wear-leveling capabilities.
I thought about switching from ext4 to another file system, or disabling the journal. But as I play with the RaPi mostly for fun and to learn something (compiling stuff, efficiently operating a headless device with shell scripting and tools, using the RPC interface of the running programs), I simply grabbed another SD card and started all over.
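For what it’s worth, one tweak short of disabling the journal is raising ext4’s commit interval, which reduces how often the journal is flushed to the card, at the cost of losing up to that many seconds of data on power loss. A sketch of what the fstab line might look like; the device name here is an assumption, check yours with mount:

```
# /etc/fstab (example only; adjust the device/UUID to your setup)
/dev/mmcblk0p2  /  ext4  defaults,noatime,commit=60  0  1
```

noatime also helps on SD cards, since it stops every file read from generating a metadata write.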
The only things running on my RaPi that are not just for fun are the TLLP bots for liquidbits, nupond and nupool. They run on a dedicated RaPi, which doesn’t suffer from broken SD cards every few weeks to months. That was the lesson I learned from the last SD card crash.
Minting is still a computationally intensive job when there are many outputs. There is certainly room for more improvements, but a Raspberry Pi may not be enough when there are a lot of outputs to handle. A solution could be to split the shares across multiple nodes.
If I send shares that have generated a lot of transactions over the last 10 months to a new address in an empty wallet, will minting on the new wallet be computationally less intensive?
Another solution for now is getting a Raspberry Pi 2, which has a fast 4-core CPU and 1 GB of RAM, for the same price and lower power consumption.
If you use the same split amount, the difference will be negligible because you’ll have the same number of unspent outputs. The only difference is that the new outputs will all have the same stake modifier, so at startup you’ll have only one cache entry to calculate. Once the cache is filled for all the outputs, there should be no difference.
Will the wallet be easier to flush because there is less tx history associated with the new address?
The performance of the Pi seems to be degrading, so I suspect the tx history stored in the wallet makes updating/flushing it increasingly slow. According to top, the CPU is only 40-60% busy.
I’m not sure. As I understand it, flushing should only take time if many changes were made to the database. I suspect the long time reported in the log is actually spent waiting for some locks, not doing actual computation or I/O. You can try disabling minting to see whether it improves the flushing delay. If it does, then changing your wallet history will probably not change much.
Judging from the links in my post above, the flushing is unlikely to be just waiting; it severely degrades performance. The flushing strategy was designed for small Bitcoin wallets and a 10-minute block time.
Anyway, I have moved my shares to two new addresses in a new wallet. The flushing time has been dramatically reduced, from 50-80 sec to 200-600 ms.
I hope it takes several months to degrade again as the outputs find new blocks.
I see a lot of activity in debug.log about receiving liquidity info.
Liquidity info seems useless for a headless minting node on a Raspberry Pi. Do you think ignoring liquidity info would reduce CPU and memory usage much? I could tinker with the code. Where would you suggest I look?
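As a starting point for tinkering, grepping the source tree for the message name usually finds the handler quickly. Everything below is a sketch: the directory layout and the function name are fabricated stand-ins so the command can be demonstrated end to end, not the actual Nu code.

```shell
# Fake a minimal source tree so the grep has something to find; on a real
# checkout you would point this at the actual src/ directory instead.
mkdir -p /tmp/nu/src
printf 'void RelayLiquidityInfo() {}\n' > /tmp/nu/src/main.cpp

# Case-insensitive, recursive search for liquidity handling in C++ sources.
grep -rni 'liquidity' /tmp/nu/src --include='*.cpp'
```

In Bitcoin-derived codebases, network message dispatch tends to live in the main message-processing code, so whatever that grep turns up there is a reasonable place to start reading.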
It may be useless for you, but not for the health of the network. If many people stop propagating liquidity info, it will become less reliable.
CPU: probably a little, as these messages are signed and verifying a signature is costly. A client could probably skip the signature verification, though.
Memory: very little, unless we start having lots of custodians with lots of long identifiers. A liquidity info message is less than 1 kB.