NuBits v5.0.1 Release

In several cases I’ve gathered detailed data as well.
I’m grateful for your feedback, and I’ll grant that you provided different points of view.
Whether statements are true or false is often a matter of interpretation. We can agree to disagree for the most part, if you like.

No, I’m not agreeing that people in important roles should be left alone when there’s a lot going on that requires clearing up.
But instead of clearing things up, Phoenix hides and only crawls out of his hole to accuse people.

Agreed. It would be hard to deny that.
And what does that make him?
My interpretation is that this makes him unreliable in more than just one way.
Others may interpret that differently.

Really? That must have slipped my attention, and I’m good at reading and remembering. The easiest thing would have been for Phoenix to tell us himself, because he should remember best what he did, why and when.
Sadly he remains silent and leaves it to you to provide us with possible explanations of what might have been.
That’s the way to run a business and lead! Let others do the dirty work. Always transparent.

Thank you for your insights.
You want to whitewash @Phoenix by saying he can do as he pleases as long as one may assume it could be in the interest of B&C shareholders? That’s funny. Or naive.
If he had decided to trade the NBT to BTC, would that have been fine as well?
Oh, how wonderful it would be for BKS holders if the B&C dev fund were still holding BTC…
Some might see him as a leader. Others might see him as something different.

And that inherits the “in the interest of” part of the motion that made Phoenix fund keeper?
Strangely the motion to trade NBT for NSR doesn’t have such an “in the interest of” clause.

Regarding the trade of NBT to NSR.
BKS holders were caught between a rock and a hard place and decided to trade NBT for NSR.
That was a game well played.

They received $0.70 per NBT. They paid a part of the price of Nu’s failure. What a compelling benefit. If there’s so much confidence in the inner workings of Nu, why trade NBT in the first place? Phoenix is felicitous, but he’s a snake in the grass.

Nu’s business scheme hasn’t been adjusted, although both the need for that and ways to do it have been laid out.
It’s still based on selling NSR to cover operational expenses like the losses from trading NBT/BTC, all in the hope of a future in which there will be miraculous revenues. For the foreseeable future this makes Nu a Ponzi scheme.
Nu doesn’t have any kind of useful accounting; that lack is necessary to keep investors and customers blindfolded.
Phoenix hides behind a cloak of transparency when things aren’t transparent at all, and he is silent as the grave.
He’s apparently in control of the minting majority at Nu, holds B&C ransom with the NBT and NSR games he plays, and at the same time he’s super anonymous.
He’s the dictator that’s out of reach.

I spent a lot of time researching Nu. I was impressed by the potential. I came close to losing money at Nu, but I was lucky enough to find smelly things before I put money into NSR. I realized that the only real potential to make money is for those who can see behind the curtain. That was never the NSR holders.

Both NSR holders and customers (NBT holders like B&C) paid the price for Nu’s flawed business scheme.
With all I’ve read and understood, I can’t stay silent and keep my peace of conscience. Future investors and customers have the chance to know better. It’s up to them what they make of this information.
Please try to understand that.

Wallet is unstable. Keeps crashing.

Shit! Is there an Android wallet for holding NSR?

If the wallet keeps crashing, Cryptopia will delist NSR.

From my experience the wallet crashes due to lack of resources (memory?) after a while when it has to process multiple requests. I haven’t been able to pin down which requests cause the issue. I’m not a coder, just sharing experience.
When used as a ‘home’ wallet for minting there is no problem and it is stable.
A simple autorestart using e.g. supervisor works around this, although I agree it is not ideal.
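In case it helps others, this is roughly what such a supervisord program entry can look like; the binary path, data directory and user are just examples for illustration, adjust them to your own setup, and make sure the client is not started with -daemon so supervisor can manage it:

[program:nud]
; restart the Nu daemon automatically whenever it exits or is killed
command=/usr/local/bin/nud -datadir=/home/nu/.nu
directory=/home/nu
user=nu
autostart=true
autorestart=true
startretries=10
; give the daemon some time to flush its database on shutdown
stopwaitsecs=120
stdout_logfile=/var/log/nud.out.log
stderr_logfile=/var/log/nud.err.log

After saving this under /etc/supervisor/conf.d/nud.conf, supervisorctl reread followed by supervisorctl update picks it up.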

it does not crash here.

it runs out of memory on the server twice a day on average.

The NSR wallet is not resource efficient at all. Out of the 40 wallets I run, this is the only one that constantly crashes. I would not be surprised if Cryptopia delists NSR. I just hope this is fixed soon. At this stage I can’t even use this wallet without having to put it on another server by itself.

@sigmike ?

The way the wallet works is that it makes a ton of UTXOs, ruining the output table for the chain and making it very resource intensive. There is no fix at this point aside from scrapping the chain/protocol and starting again.

I have 2 nodes, one minting on a VPS with 3 GB memory, and one full node on a 2 GB VPS. They have both been running without interruption since the last client upgrade on May 27. They are both on Linux, one 32-bit and the other 64-bit. So it can be stable. I haven’t done any transfer for a long time, though.

So what exactly is happening to those who get crashes (how does it crash, the end of debug.log, etc.) and in what situation (what you were doing, what platform, etc.)?

Here is the end of debug.log from the client I’m running for the svr1 block explorer. Not sure it is relevant. It crashes between twice a day and twice a week. I can’t figure it out; I only see that it slowly takes more resources and then just exits. Please let me know what other logs you would like to see, as I would greatly appreciate getting this long-standing issue sorted.

BTW, as said before, I have no problems with my non-minting node or my minting node. They are indeed stable. It is only where the client is actively used through its APIs that the trouble starts. I saw something similar when running Liquidbits.

received getdata for: block ee1686da461de8267b8b
received getdata for: block 061902b00cb142f26905
received getdata for: block 1279da63a57e61cc7b01
received getdata for: block 5c5f0d68a8ddd9aed175
received getdata for: block 4dcab0376dc43e30f28f
received getdata for: block f6cd3ec5b63c59a5f78b
received getdata for: block b913b2048bee2033280b
received getdata for: block 5b8f6ee20b420c4de200
received getdata for: block 4f9dd836fa0137a18840
getblocks 1163520 to 4f9dd836fa0137a18840 limit 1021
getblocks stopping at limit 1164540 cb49a5019f27275122bc (725704 bytes)
received getdata for: block 7d953c1ef34d94d42301
getblocks 1163520 to 7d953c1ef34d94d42301 limit 1021
getblocks stopping at limit 1164540 cb49a5019f27275122bc (725704 bytes)
ResendWalletTransactions()
received getdata for: block 53c129646abcab434f1d
getblocks 1163279 to 53c129646abcab434f1d limit 1533
getblocks stopping at limit 1164811 c3dec2a16f7145c020ff (1038470 bytes)
ResendWalletTransactions()
ThreadRPCServer S method=getblockhash
ThreadRPCServer S method=getblockhash
ThreadRPCServer S method=getblock
ThreadRPCServer S method=getrawtransaction
ThreadRPCServer S method=getrawtransaction
ThreadRPCServer S method=getrawmempool
received getdata for: block 0b98e3a90136891e8972
askfor block c44950df1fd2236d7bc7 0
sending getdata: block c44950df1fd2236d7bc7
getblocks 1163520 to 0b98e3a90136891e8972 limit 1021
getblocks stopping at limit 1164540 cb49a5019f27275122bc (725704 bytes)
received block c44950df1fd2236d7bc702e4be31cc7d1f307ff0bb27ed076212c27579c0bcec

The blockexplorer code is open source and can be found here: https://github.com/Cybnate/NuBits-Abe-explorer

Do you have swap space enabled?
It can make the client slow and sluggish, but it can prevent it from crashing.
The easiest way to do that on Ubuntu is sudo apt-get install dphys-swapfile, as all the setup is then done for you.
A look at /var/log/syslog for the same time period might be helpful too. If the process uses too much memory and is killed by the system, I doubt anything would register in the client debug.log, but syslog might show the system killing it.
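For what it’s worth, on systems where dphys-swapfile isn’t packaged, a swap file can also be set up by hand, roughly like this (the 2 GB size is just an example):

# create and enable a 2 GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# keep it across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# look for the OOM killer around the crash time
grep -i 'out of memory' /var/log/syslog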

Ya, I think Bitcoin should be the minimum bar for wallets. If a behemoth like Bitcoin can run on an absolutely minimum-spec system without issues, so should other wallets. Just sayin’.

Running dmesg -T will tell you the exact time the process was killed by the out-of-memory killer.
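For example (the exact wording of the kernel message can differ between kernel versions):

dmesg -T | grep -i 'out of memory'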

Yes

Here is my latest kill event:

[Wed Jul 5 13:54:55 2017] Out of memory: Kill process 612 (nud500) score 391 or sacrifice child
[Wed Jul 5 13:54:55 2017] Killed process 612 (nud500) total-vm:3725308kB, anon-rss:2428568kB, file-rss:0kB

Output from syslog:

13:54:55 svr1 kernel: [2096969.096228] python invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_ad

13:54:55 svr1 kernel: [2096969.096745] python cpuset=/ mems_allowed=0
13:54:55 svr1 kernel: [2096969.096980] Pid: 6035, comm: python Not tainted 3.2.0-4-amd64 #1 Debian 3.2.54-2
13:54:55 svr1 kernel: [2096969.097417] Call Trace:
13:54:55 svr1 kernel: [2096969.097618] [] ? dump_header+0x78/0x1bd
13:54:55 svr1 kernel: [2096969.097890] [] ? _raw_spin_unlock_irqrestore+0xe/0xf
13:54:55 svr1 kernel: [2096969.098186] [] ? delayacct_end+0x72/0x7d
13:54:55 svr1 kernel: [2096969.098451] [] ? security_real_capable_noaudit+0x40/0x4f
13:54:55 svr1 kernel: [2096969.098737] [] ? _raw_spin_unlock_irqrestore+0xe/0xf
13:54:55 svr1 kernel: [2096969.099018] [] ? oom_kill_process+0x49/0x271
13:54:55 svr1 kernel: [2096969.099296] [] ? out_of_memory+0x2ea/0x337
13:54:55 svr1 kernel: [2096969.099642] [] ? __alloc_pages_nodemask+0x629/0x7aa
13:54:55 svr1 kernel: [2096969.100057] [] ? alloc_pages_current+0xc7/0xe4
13:54:55 svr1 kernel: [2096969.100421] [] ? filemap_fault+0x24f/0x33e
13:54:55 svr1 kernel: [2096969.106779] [] ? page_add_file_rmap+0x1/0x30
13:54:55 svr1 kernel: [2096969.107229] [] ? __do_fault+0xc8/0x3ac
13:54:55 svr1 kernel: [2096969.107657] [] ? handle_pte_fault+0x298/0x79f
13:54:55 svr1 kernel: [2096969.108140] [] ? pte_offset_kernel+0x16/0x35
13:54:55 svr1 kernel: [2096969.108533] [] ? do_page_fault+0x320/0x345
13:54:55 svr1 kernel: [2096969.108874] [] ? kvm_clock_read+0x17/0x1a
13:54:55 svr1 kernel: [2096969.109227] [] ? timekeeping_get_ns+0xd/0x2a
13:54:55 svr1 kernel: [2096969.109495] [] ? ktime_get_ts+0x5c/0x82
13:54:55 svr1 kernel: [2096969.109753] [] ? should_resched+0x5/0x23
13:54:55 svr1 kernel: [2096969.110022] [] ? _cond_resched+0x7/0x1c
13:54:55 svr1 kernel: [2096969.110284] [] ? poll_select_copy_remaining+0xda/0xf9
13:54:55 svr1 kernel: [2096969.110570] [] ? async_page_fault+0x25/0x30
13:54:55 svr1 kernel: [2096969.110837] Mem-Info:
13:54:55 svr1 kernel: [2096969.111040] Node 0 DMA per-cpu:
13:54:55 svr1 kernel: [2096969.111259] CPU 0: hi: 0, btch: 1 usd: 0
13:54:55 svr1 kernel: [2096969.111506] CPU 1: hi: 0, btch: 1 usd: 0
13:54:55 svr1 kernel: [2096969.111751] Node 0 DMA32 per-cpu:
13:54:55 svr1 kernel: [2096969.111975] CPU 0: hi: 186, btch: 31 usd: 0
13:54:55 svr1 kernel: [2096969.112285] CPU 1: hi: 186, btch: 31 usd: 0
13:54:55 svr1 kernel: [2096969.112535] Node 0 Normal per-cpu:
13:54:55 svr1 kernel: [2096969.112761] CPU 0: hi: 186, btch: 31 usd: 0
13:54:55 svr1 kernel: [2096969.113003] CPU 1: hi: 186, btch: 31 usd: 21
13:54:55 svr1 kernel: [2096969.113250] active_anon:776473 inactive_anon:196951 isolated_anon:0
13:54:55 svr1 kernel: [2096969.113251] active_file:34 inactive_file:249 isolated_file:0
13:54:55 svr1 kernel: [2096969.113252] unevictable:20 dirty:0 writeback:95 unstable:0
13:54:55 svr1 kernel: [2096969.113253] free:21412 slab_reclaimable:1590 slab_unreclaimable:6287
13:54:55 svr1 kernel: [2096969.113254] mapped:84 shmem:19 pagetables:6597 bounce:0
13:54:55 svr1 kernel: [2096969.114580] Node 0 DMA free:15912kB min:256kB low:320kB high:384kB active_anon:0kB inacti
0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15688kB mlocked
ty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0
ble:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
13:54:55 svr1 kernel: [2096969.118336] lowmem_reserve[]: 0 3512 4017 4017
13:54:55 svr1 kernel: [2096969.118621] Node 0 DMA32 free:61172kB min:58860kB low:73572kB high:88288kB active_anon:29
inactive_anon:580344kB active_file:116kB inactive_file:724kB unevictable:80kB isolated(anon):0kB isolated(file):0kB
:3596504kB mlocked:80kB dirty:0kB writeback:304kB mapped:208kB shmem:32kB slab_reclaimable:2860kB slab_unreclaimabl
B kernel_stack:328kB pagetables:19844kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable

13:54:55 svr1 kernel: [2096969.120629] lowmem_reserve[]: 0 0 505 505
13:54:55 svr1 kernel: [2096969.120980] Node 0 Normal free:8564kB min:8460kB low:10572kB high:12688kB active_anon:204
active_anon:207460kB active_file:20kB inactive_file:272kB unevictable:0kB isolated(anon):0kB isolated(file):0kB pre
120kB mlocked:0kB dirty:0kB writeback:76kB mapped:128kB shmem:44kB slab_reclaimable:3500kB slab_unreclaimable:11120
l_stack:1176kB pagetables:6544kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
13:54:55 svr1 kernel: [2096969.123407] lowmem_reserve[]: 0 0 0 0
13:54:55 svr1 kernel: [2096969.123820] Node 0 DMA: 0*4kB 1*8kB 0*16kB 1*32kB 2*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15912kB
13:54:55 svr1 kernel: [2096969.124756] Node 0 DMA32: 561*4kB 378*8kB 206*16kB 96*32kB 22*64kB 52*128kB 6*256kB 8*512kB 3*1024kB 10*2048kB 3*4096kB = 61172kB
13:54:55 svr1 kernel: [2096969.125464] Node 0 Normal: 189*4kB 150*8kB 55*16kB 25*32kB 9*64kB 2*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 8564kB
13:54:55 svr1 kernel: [2096969.126196] 49367 total pagecache pages
13:54:55 svr1 kernel: [2096969.126464] 49081 pages in swap cache
13:54:55 svr1 kernel: [2096969.126782] Swap cache stats: add 17918106, delete 17869025, find 6009859/7417563
13:54:55 svr1 kernel: [2096969.127380] Free swap = 0kB
13:54:55 svr1 kernel: [2096969.127596] Total swap = 2097148kB
13:54:55 svr1 kernel: [2096969.139671] 1048560 pages RAM
13:54:55 svr1 kernel: [2096969.140135] 33169 pages reserved
13:54:55 svr1 kernel: [2096969.140647] 681 pages shared
13:54:55 svr1 kernel: [2096969.140917] 992837 pages non-shared
13:54:55 svr1 kernel: [2096969.141199] [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name
13:54:55 svr1 kernel: [2096969.141646] [ 292] 0 292 5386 1 0 -17 -1000 udevd
13:54:55 svr1 kernel: [2096969.142112] [ 406] 0 406 5381 1 0 -17 -1000 udevd
13:54:55 svr1 kernel: [2096969.142577] [ 407] 0 407 5385 0 1 -17 -1000 udevd
13:54:55 svr1 kernel: [2096969.143030] [ 1907] 0 1907 62343 69 0 0 0 rsyslogd
13:54:55 svr1 kernel: [2096969.143492] [ 1986] 0 1986 1059 0 0 0 0 acpid
13:54:55 svr1 kernel: [2096969.143940] [ 2024] 0 2024 15468 236 0 0 0 supervisord
13:54:55 svr1 kernel: [2096969.144422] [ 2049] 0 2049 25083 19 0 0 0 apache2
13:54:55 svr1 kernel: [2096969.144877] [ 2108] 0 2108 4198 5 1 0 0 atd
13:54:55 svr1 kernel: [2096969.145320] [ 2177] 102 2177 7482 0 1 0 0 dbus-daemon
13:54:55 svr1 kernel: [2096969.145785] [ 2243] 0 2243 12518 9 0 -17 -1000 sshd
13:54:55 svr1 kernel: [2096969.146239] [ 2246] 0 2246 33494 19 0 0 0 cron
etc.
Jul 5 13:54:55 svr1 kernel: [2096969.163044] Out of memory: Kill process 612 (nud500) score 391 or sacrifice child
Jul 5 13:54:55 svr1 kernel: [2096969.163494] Killed process 612 (nud500) total-vm:3725308kB, anon-rss:2428568kB, file-rss:
0kB
Jul 5 14:17:01 svr1 /USR/SBIN/CRON[6108]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)

I had to copy/paste the log together in a GUI, so it is a bit messy, but I think it provides all the information about the kill event. I’ve removed most of the processes from the list for security reasons.

Yep, that’s what happens on the nuexplorer server all the time as well; it seems like some memory is never returned after running certain RPC calls.

If the daemon is running on Linux, can you provide the result of the following command, which will list which RPCs are used?

grep ThreadRPCServer ~/.nu/debug.log | sort | uniq -c

21100 ThreadRPCServer S method=getblock
211 ThreadRPCServer S method=getblockcount
65863 ThreadRPCServer S method=getblockhash
19544 ThreadRPCServer S method=getcustodianvotes
210 ThreadRPCServer S method=getdifficulty
423 ThreadRPCServer S method=getinfo
20955 ThreadRPCServer S method=getmotions
210 ThreadRPCServer S method=getnetworkghps
210 ThreadRPCServer S method=getparkrates
210 ThreadRPCServer S method=getpeerinfo
10 ThreadRPCServer started

and here’s dmesg:

dmesg -T | grep Killed
[Fri Jul 7 01:22:29 2017] Killed process 24180 (nud) total-vm:3835104kB, anon-rss:2687612kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 01:32:20 2017] Killed process 5093 (nud) total-vm:2443472kB, anon-rss:2377400kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 01:40:19 2017] Killed process 5277 (nud) total-vm:2434888kB, anon-rss:2368540kB, file-rss:184kB, shmem-rss:0kB
[Fri Jul 7 01:47:46 2017] Killed process 5389 (nud) total-vm:2425088kB, anon-rss:2358824kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 01:54:06 2017] Killed process 5515 (nud) total-vm:2428652kB, anon-rss:2362400kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 02:02:21 2017] Killed process 5618 (nud) total-vm:2430100kB, anon-rss:2363744kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 02:09:15 2017] Killed process 5756 (nud) total-vm:2427332kB, anon-rss:2360984kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 02:15:42 2017] Killed process 5964 (nud) total-vm:2418352kB, anon-rss:2352048kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 02:24:02 2017] Killed process 6061 (nud) total-vm:2428652kB, anon-rss:2362336kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 02:30:50 2017] Killed process 6223 (nud) total-vm:2420200kB, anon-rss:2353908kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 02:40:08 2017] Killed process 6331 (nud) total-vm:2422708kB, anon-rss:2356392kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 02:46:33 2017] Killed process 6475 (nud) total-vm:2421132kB, anon-rss:2354820kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 02:55:09 2017] Killed process 6610 (nud) total-vm:2420868kB, anon-rss:2354528kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 03:03:12 2017] Killed process 6721 (nud) total-vm:2419148kB, anon-rss:2352832kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 03:10:49 2017] Killed process 6876 (nud) total-vm:2414924kB, anon-rss:2348552kB, file-rss:164kB, shmem-rss:0kB
[Fri Jul 7 03:18:07 2017] Killed process 6997 (nud) total-vm:2416908kB, anon-rss:2350636kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 03:24:41 2017] Killed process 7166 (nud) total-vm:2412548kB, anon-rss:2346296kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 03:34:18 2017] Killed process 7248 (nud) total-vm:2419940kB, anon-rss:2353644kB, file-rss:380kB, shmem-rss:0kB
[Fri Jul 7 03:43:05 2017] Killed process 7431 (nud) total-vm:2413864kB, anon-rss:2347568kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 03:50:27 2017] Killed process 7554 (nud) total-vm:2404492kB, anon-rss:2338180kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 03:58:08 2017] Killed process 7680 (nud) total-vm:2412416kB, anon-rss:2346084kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 04:05:17 2017] Killed process 7795 (nud) total-vm:2408192kB, anon-rss:2341852kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 04:14:26 2017] Killed process 7936 (nud) total-vm:2409248kB, anon-rss:2342972kB, file-rss:268kB, shmem-rss:0kB
[Fri Jul 7 04:21:58 2017] Killed process 8056 (nud) total-vm:2405292kB, anon-rss:2338984kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 04:29:24 2017] Killed process 8199 (nud) total-vm:2405024kB, anon-rss:2338720kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 04:50:13 2017] Killed process 8309 (nud) total-vm:3703496kB, anon-rss:2607880kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 04:50:15 2017] Killed process 8449 (nud) total-vm:3703496kB, anon-rss:2608212kB, file-rss:0kB, shmem-rss:0kB
[Fri Jul 7 04:57:51 2017] Killed process 16483 (nud) total-vm:2672920kB, anon-rss:2561884kB, file-rss:16kB, shmem-rss:0kB
[Fri Jul 7 05:05:20 2017] Killed process 16632 (nud) total-vm:2606724kB, anon-rss:2540468kB, file-rss:0kB, shmem-rss:0kB

I tried all these RPCs repeatedly and none of them made the memory used by the daemon grow.
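For reference, this is roughly how such a test can be repeated; a rough shell loop, assuming nud accepts RPC commands on the command line the way bitcoind does (the block heights are arbitrary), printing the daemon’s resident memory after each round:

# hammer the RPCs seen in the explorer log and watch the daemon's RSS
for round in $(seq 1 50); do
  for h in $(seq 1163000 1163050); do
    hash=$(nud getblockhash $h)
    nud getblock $hash > /dev/null
  done
  nud getinfo > /dev/null
  nud getpeerinfo > /dev/null
  nud getrawmempool > /dev/null
  nud getdifficulty > /dev/null
  echo "round $round: RSS $(ps -o rss= -C nud) kB"
done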

Except for a few of them, these lines suggest that when it’s killed the process has quite regular memory usage (virtual around 2.42 GB and RSS around 2.36 GB). And the lines with higher usage show growth of 1.3 GB virtual but only 0.3 GB RSS. These amounts do not seem extravagant, although they are higher than on my nodes (2.0 GB virtual and 1.75 GB RSS, but that’s on 32-bit so it’s expected to be lower). What’s the total memory of the system? And is it 64-bit?

Linux doesn’t necessarily kill the process that is leaking memory. I think it usually kills the one that uses the most memory, but some processes may be protected. Are you sure it’s not another process that is filling the memory? It could also be many smaller processes.
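To help rule that out, a quick look at which processes actually hold the memory shortly before a kill would be useful; nothing Nu-specific, for example:

# processes sorted by resident memory, biggest first
ps aux --sort=-rss | head -n 15
# overall memory and swap situation
free -m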