
Transmission with extremely high memory usage


joegreat

Very Senior Member
I tried to refresh my torrents (mainly Linux distributions) on my main router, but now Transmission is going ballistic. It eats up all memory (including the 1 GByte swap) and then crashes with an out-of-memory error - see the log details below:
Code:
Jan  7 16:38:35 kernel: transmission-da invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
Jan  7 16:38:36 kernel: [<c0045ee4>] (unwind_backtrace+0x0/0xf8) from [<c00b0d84>] (oom_kill_process+0x1ec/0x21c)
Jan  7 16:38:36 kernel: [<c00b0d84>] (oom_kill_process+0x1ec/0x21c) from [<c00b1284>] (out_of_memory+0x160/0x2ec)
Jan  7 16:38:36 kernel: [<c00b1284>] (out_of_memory+0x160/0x2ec) from [<c00b5638>] (__alloc_pages_nodemask+0x6dc/0x708)
Jan  7 16:38:36 kernel: [<c00b5638>] (__alloc_pages_nodemask+0x6dc/0x708) from [<c00b876c>] (__do_page_cache_readahead+0x13c/0x2b4)
Jan  7 16:38:36 kernel: [<c00b876c>] (__do_page_cache_readahead+0x13c/0x2b4) from [<c00b900c>] (ra_submit+0x28/0x30)
Jan  7 16:38:36 kernel: [<c00b900c>] (ra_submit+0x28/0x30) from [<c00ad24c>] (filemap_fault+0x3c8/0x3f8)
Jan  7 16:38:36 kernel: [<c00ad24c>] (filemap_fault+0x3c8/0x3f8) from [<c00c84b4>] (__do_fault+0x54/0x434)
Jan  7 16:38:36 kernel: [<c00c84b4>] (__do_fault+0x54/0x434) from [<c00cb87c>] (handle_mm_fault+0xdc/0x704)
Jan  7 16:38:36 kernel: [<c00cb87c>] (handle_mm_fault+0xdc/0x704) from [<c0047530>] (do_page_fault+0x174/0x250)
Jan  7 16:38:36 kernel: [<c0047530>] (do_page_fault+0x174/0x250) from [<c003e59c>] (do_PrefetchAbort+0x30/0x9c)
Jan  7 16:38:36 kernel: [<c003e59c>] (do_PrefetchAbort+0x30/0x9c) from [<c040f760>] (ret_from_exception+0x0/0x10)
Jan  7 16:38:36 kernel: Exception stack(0xcbafbfb0 to 0xcbafbff8)
Jan  7 16:38:36 kernel: bfa0:                                     ffffffff 00000001 00000000 00000000
Jan  7 16:38:36 kernel: bfc0: 00000000 00015a38 ffffffff 4022027c 00000000 00000001 00000001 00087240
Jan  7 16:38:36 kernel: bfe0: 00000000 bea008a0 401e4fbc 00015a38 600f0010 ffffffff
Jan  7 16:38:36 kernel: Mem-info:
Jan  7 16:38:36 kernel: DMA per-cpu:
Jan  7 16:38:36 kernel: CPU    0: hi:   42, btch:   7 usd:  35
Jan  7 16:38:36 kernel: CPU    1: hi:   42, btch:   7 usd:   0
Jan  7 16:38:36 kernel: Normal per-cpu:
Jan  7 16:38:36 kernel: CPU    0: hi:   42, btch:   7 usd:  16
Jan  7 16:38:36 kernel: CPU    1: hi:   42, btch:   7 usd:   0
Jan  7 16:38:36 kernel: active_anon:18355 inactive_anon:18409 isolated_anon:0
Jan  7 16:38:36 kernel:  active_file:168 inactive_file:581 isolated_file:0
Jan  7 16:38:36 kernel:  unevictable:3 dirty:0 writeback:15 unstable:0
Jan  7 16:38:36 kernel:  free:12453 slab_reclaimable:259 slab_unreclaimable:9215
Jan  7 16:38:36 kernel:  mapped:92 shmem:1 pagetables:835 bounce:0
Jan  7 16:38:36 kernel: DMA free:27092kB min:26448kB low:33060kB high:39672kB active_anon:34480kB inactive_anon:34564kB active_file:148kB inactive_file:540kB unevictable:4kB isolated(anon):0kB isolated(file):0kB present:130048kB mlocked:4kB dirty:0kB writeback:56kB mapped:144kB shmem:0kB slab_reclaimable:168kB slab_unreclaimable:27644kB kernel_stack:56kB pagetables:1268kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jan  7 16:38:36 kernel: lowmem_reserve[]: 0 109 109
Jan  7 16:38:36 kernel: Normal free:22944kB min:22700kB low:28372kB high:34048kB active_anon:38964kB inactive_anon:39052kB active_file:524kB inactive_file:1504kB unevictable:8kB isolated(anon):0kB isolated(file):0kB present:111616kB mlocked:8kB dirty:0kB writeback:4kB mapped:224kB shmem:4kB slab_reclaimable:868kB slab_unreclaimable:9216kB kernel_stack:680kB pagetables:2072kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jan  7 16:38:36 kernel: lowmem_reserve[]: 0 0 0
Jan  7 16:38:36 kernel: DMA: 63*4kB 25*8kB 487*16kB 179*32kB 23*64kB 7*128kB 2*256kB 4*512kB 0*1024kB 0*2048kB 2*4096kB = 27092kB
Jan  7 16:38:36 kernel: Normal: 108*4kB 65*8kB 47*16kB 28*32kB 9*64kB 10*128kB 6*256kB 5*512kB 2*1024kB 2*2048kB 2*4096kB = 22888kB
Jan  7 16:38:36 kernel: 905 total pagecache pages
Jan  7 16:38:36 kernel: 136 pages in swap cache
Jan  7 16:38:36 kernel: Swap cache stats: add 6616007, delete 6615871, find 8378814/8865155
Jan  7 16:38:36 kernel: Free swap  = 0kB
Jan  7 16:38:36 kernel: Total swap = 1048572kB
Jan  7 16:38:36 kernel: 65536 pages of RAM
Jan  7 16:38:36 kernel: 12778 free pages
Jan  7 16:38:36 kernel: 1748 reserved pages
Jan  7 16:38:36 kernel: 3009 slab pages
Jan  7 16:38:36 kernel: 461 pages shared
Jan  7 16:38:36 kernel: 136 pages swap cached

It looks like the issue is already discussed on GitHub - a solution is also outlined in the Debian bug report - and a recompile with different options could be the fix... :cool:

I already posted a small comment in the Entware-NG thread, but nobody reacted to it (maybe due to the holiday season).

Any chance that Transmission gets updated/re-compiled in the Entware-NG repository?
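
For reference, once an updated package is published, checking and pulling it with opkg should be as simple as this (the exact package name is an assumption and may differ per Entware feed):
Code:
# Show which Transmission packages/versions are currently installed
opkg list-installed | grep -i transmission
# Refresh the package lists and upgrade once a newer build is available
opkg update
opkg upgrade transmission-daemon-openssl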
 
You can try to compile the latest version yourself until it is available in the Entware repo:
https://www.hqt.ro/compile-and-install-latest-transmissionbt-through-debian/
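
In rough outline, the from-source route looks like this (the configure flags shown are assumptions; the linked guide does the build inside a full Debian chroot instead):
Code:
# Fetch and unpack a release tarball from the Transmission GitHub releases page
tar xf transmission-2.xx.tar.xz && cd transmission-2.xx
# Configure for a small system, installing under the Entware prefix
./configure --prefix=/opt --enable-lightweight
make && make install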
 
Entware-ng & Transmission were updated today. Please check if it was fixed.
Thanks a lot for the fast update!

First the good news: Transmission is running again! No oom crashes anymore (after watching for 4 hours).

But: The memory usage is still really high - compared to the previous version!

Normally I run around 30-35 active seeding torrents and 5-10 downloads in parallel.
Originally the swap usage was around ~25 MBytes and a 256 MBytes swap file never ran out of space.

Now I still have the 1 GByte swap file in place, and the usage is as follows:
- In the startup phase the swap file is filled up to 350 MBytes.
- Afterwards, with only seeding, it drops to ~40 MBytes of usage with periodic peaks towards 150 MBytes - but only for a short time, then it falls back to the 40 MBytes level.
- With five torrents downloading in parallel the swap usage stays around 120 MBytes (I did not see any peaks as long as I watched).

So I could reduce to a 512 MByte swap file and still be safe.

Update: Too early for a success message!
After a reboot with the 512 MByte swap file, Transmission again went ballistic and ate up all the available swap space before the OOM kill happened! Now back to the 1 GByte swap - let's wait and see...
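
For anyone redoing the swap file on the USB drive, the usual sequence is something like this (the mount point and file name are assumptions):
Code:
swapoff /tmp/mnt/sda1/myswap.swp                                 # release the old file
dd if=/dev/zero of=/tmp/mnt/sda1/myswap.swp bs=1024k count=1024  # 1 GByte
mkswap /tmp/mnt/sda1/myswap.swp
swapon /tmp/mnt/sda1/myswap.swp
free                                                             # verify the new size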

Update 2: 24 hours running Transmission.
Transmission survived for 24h running at low swap usage (39 MBytes).
So far so good - it seems the high memory usage only happens during startup (~140 torrents) and could be SSL related.
 
High memory usage can be because of SSL-enabled trackers and OpenSSL - https://github.com/transmission/transmission/issues/313
Here is Transmission built with mbedTLS (armv7) - http://pkg.entware.net/binaries/armv7/test/
I removed the OpenSSL version and installed the mbedTLS one from the test link above.
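
To double-check which crypto library the installed daemon is actually linked against, something like this should tell (the binary path is an assumption):
Code:
ldd /opt/bin/transmission-daemon | grep -Ei 'ssl|crypto|mbed'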

This version seems more lightweight with respect to CPU and also memory usage. During startup there is no crazy high memory usage as before. It starts with a moderate 40 MBytes of swap usage and mostly stays in this area, but I also saw a peak swap usage of up to 250 MBytes. :rolleyes:

Here are the latest details from when the swap usage went up to 170 MBytes:
Code:
--total-cpu-usage-- ------memory-usage----- ----swap--- --io/total- -dsk/total- -net/total-
usr sys idl wai stl| used  free  buff  cach| used  free| read  writ| read  writ| recv  send
  9   3  85   3   0|84.9M 62.3M 2280k 97.6M|  40M  984M|11.9  11.7 | 710k  610k|1606k  673k
  8   8  41  43   0|86.2M 75.8M 2024k 83.1M|  39M  985M|16.1  8.60 | 976k  666k|1678k  652k
  7   4  76  13   0|86.0M 81.7M 2168k 77.2M|  39M  985M|18.5  7.00 | 663k  671k|1499k  690k
 14   3  67  16   0| 112M 52.1M 1924k 81.4M|  38M  986M|21.9  10.5 | 842k  899k|7257k  782k
 19  10  27  43   0| 166M 49.5M 1332k 30.2M|  37M  987M|14.0  11.3 | 761k 1198k|  13M  879k
  8   3  21  68   0| 185M 49.0M  596k 13.0M|  68M  956M|53.3  18.0 |1543k 4002k|3584k  446k
  1   5  32  62   0| 177M 65.3M  800k 4220k| 128M  896M|97.3  27.1 |2242k 6306k| 117k   18k
  1   9  15  75   0| 171M 72.1M  952k 3940k| 170M  854M|77.2  23.6 |2158k 4415k|  21k 4520B
  7  15  35  43   0| 114M  113M 2488k 17.8M| 119M  905M| 222  14.5 |6883k 3690k| 180k  308k
 10   2  80   8   0| 116M 92.3M 2608k 37.0M| 116M  908M|31.5  0.40 |1741k 3277B|1625k  865k
  8   3  85   4   0| 116M 78.2M 2768k 50.4M| 114M  910M|13.6  11.4 | 842k 1332k|1354k  717k
  8   2  88   2   0| 117M 66.8M 2796k 60.6M| 112M  912M|9.10  3.10 | 377k  346k|1613k  685k
  9   4  62  26   0| 116M 67.1M 2380k 61.7M| 110M  914M|12.2  1.50 | 535k  115k|1428k  612k
  8   2  51  39   0| 117M 84.2M 1328k 45.1M| 109M  915M|14.2  13.7 | 808k 1652k|1661k  718k
  8   2  86   4   0| 117M 69.3M 1360k 59.4M| 107M  917M|12.3  2.40 | 807k  242k|1671k  644k
  7   6  72  15   0| 114M 68.7M 1524k 63.1M| 107M  917M|14.0  3.40 | 472k  417k|1567k  635k
  8   2  82   8   0| 114M 67.8M 1512k 64.1M| 106M  918M|13.4  12.4 | 616k 1339k|1738k  678k
  8   3  73  16   0| 114M 64.8M 1472k 66.9M| 105M  919M|10.9  3.60 | 477k  425k|1583k  612k
  9   2  79  10   0| 114M 70.0M 1096k 62.2M| 105M  919M|11.0  7.30 | 622k  873k|1649k  687k
  9   6  79   5   0|78.7M 90.2M 1940k 76.0M|  45M  979M|17.1  2.90 | 796k  343k|1420k  784k
 10   4  85   2   0|80.3M 77.7M 2124k 87.1M|  45M  979M|12.2  11.3 | 322k 1444k|1654k  711k
  9   3  86   2   0|79.1M 69.8M 2144k 96.1M|  44M  980M|7.38  2.38 | 310k  164k|1710k  665k
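
For reference, the table above is dstat output; an invocation along these lines reproduces the same columns (the column selection and sampling interval are assumptions):
Code:
dstat -c -m -s -r -d -n 60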

The good thing is that the swap usage no longer goes ballistic and swings back to the 40 MBytes area after some time.

So far so good for me! :cool:

Update: The mbedTLS version has been running fine for 24 hours with lower CPU and memory/swap usage.
The only issue I saw is during startup of Transmission: after a few minutes of running it needs a high amount of memory/swap - and as this is above 512 MBytes, I need to stay with a 1 GByte swap file to be on the safe side.
Also during normal seeding (~40 torrents with max. 10 connections per torrent and overall max. 150) the swap usage goes up to 250/400 MBytes from time to time - not sure why or how often, but it seems to happen regularly.
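
For what it's worth, those peer limits (and the size of the disk cache) live in Transmission's settings.json and can be tightened there; the file path and init script name below are assumptions for an Entware install:
Code:
/opt/etc/init.d/S88transmission stop      # stop first, otherwise edits are overwritten on exit
vi /opt/etc/transmission/settings.json
#   "cache-size-mb": 4,                   # in-memory block cache
#   "peer-limit-global": 150,
#   "peer-limit-per-torrent": 10,
/opt/etc/init.d/S88transmission start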

Update 2: This week I was on a business trip, and when I came back I found Transmission not running anymore. The syslog.log shows why:
Code:
Jan 16 22:44:08 kernel: transmission-da invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
Jan 16 22:44:09 kernel: Free swap  = 0kB
Jan 16 22:44:09 kernel: Total swap = 1048572kB
The 1 GByte swap file is full again... :(
 
Hi Joe, if you want to send the torrent traffic through the VPN, is it just a case of adding the router's IP to policy-based routing? Would non-VPN clients' DNS requests made by the router go down the VPN in this scenario?
 
May I google that for you?

AND: Please do not hijack threads with off-topic questions - open your own thread if you do not find the answer by searching... :eek:
 
Hi Joe and Zyxmon,

Did you ever get this problem resolved? I just upgraded to the latest Entware repo and it updated my Transmission as well. It's gone ballistic like yours did and crashes after a short while. After every reboot it wants to do a data verification, and I only have 9 torrents going. Anyway, any more info would be helpful. Thanks very much.
 
Well, with a 1 GByte swap file it finally worked; after I re-downloaded all the torrents, the swap usage never went over ~500 MBytes.
The only problem was that periodically the swap usage rose to several hundred MBytes and then dropped back below 100 MBytes - but during the high usage the router became unresponsive (all CPU cycles used up), and this was not acceptable.

So I switched over to Deluge, which again had issues (different ones), but its memory usage is low (swap never goes above 100 MBytes).
After some trial and error with the config and init files it now works OK - I'm not fully satisfied, but it's good enough (bandwidth limits are not respected well).

The Entware packages are:
deluge - 1.3.15-2 - BitTorrent client with a client/server model.
deluge-ui-web - 1.3.15-2 - A lightweight BitTorrent client (Web UI)

The init script needs some attention, as the default web UI port is oddly set to 888 but should be 8112:
Code:
nano /opt/etc/init.d/S81deluge-web

start() {
    echo "starting deluge-web"
    deluge-web -f -p 8112    # 8112 instead of the odd default 888
}


The config files are stored in /opt/etc/deluge and need some editing for file locations, etc.
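
Most of the relevant settings live in /opt/etc/deluge/core.conf; the key names below are standard Deluge 1.3 core.conf settings, the values are just examples, and the init script name is an assumption:
Code:
/opt/etc/init.d/S80deluged stop      # stop the daemon first so the edit is not overwritten
vi /opt/etc/deluge/core.conf
#   "download_location": "/opt/downloads",
#   "max_connections_global": 95,
#   "max_upload_speed": 64.0,        # in KiB/s, roughly 512 kBit/s
/opt/etc/init.d/S80deluged start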

With Deluge running (39 torrents shared with ~95 connections) and max. 512 kBit bandwidth (typically much lower), the router is OK (CPU idle >30% and swap usage always below 100 MBytes):
Code:
--total-cpu-usage-- ------memory-usage----- ----swap--- --io/total- -dsk/total- -net/total-
usr sys idl wai stl| used  free  buff  cach| used  free| read  writ| read  writ| recv  send
  7   5  58  30   0| 142M 48.2M 1848k 60.3M|  79M  433M|82.0  0.33 |1333k   18k|   0     0
  8  13  30  49   0| 142M 52.6M 2192k 55.5M|  79M  433M| 169  0.50 |2870k   31k|  44k  614k
 10   8  28  53   0| 143M 52.0M 1728k 56.3M|  79M  433M| 184  2.20 |3254k   96k|  49k  614k
  7  12  32  49   0| 142M 53.6M 2132k 54.6M|  79M  433M| 157  0.30 |2567k 1638B|  49k  688k
  9  14  30  48   0| 143M 51.5M 1984k 56.6M|  79M  433M| 162  0.10 |2733k  410B|  52k  663k
  6  11  33  50   0| 142M 55.8M 2336k 52.1M|  79M  433M| 141  0.50 |2262k 2458B|  46k  633k
  6  13  32  49   0| 142M 55.7M 2388k 52.2M|  79M  433M| 158  0.70 |2584k 2867B|  45k  610k
  7  13  32  48   0| 143M 54.5M 2776k 52.9M|  79M  433M| 157     0 |2467k    0 |  46k  639k
  6   9  32  53   0| 143M 54.2M 2340k 53.6M|  79M  433M| 157     0 |2572k    0 |  46k  620k
  6  11  32  52   0| 142M 51.2M 2444k 56.7M|  79M  433M| 170     0 |2722k    0 |  47k  656k
  7   9  34  50   0| 142M 53.7M 2092k 54.5M|  79M  433M| 154     0 |2425k    0 |  47k  620k
  6   8  32  53   0| 142M 53.3M 1944k 55.0M|  79M  433M| 160     0 |2592k    0 |  43k  663k
  6  12  31  51   0| 143M 52.3M 2424k 55.3M|  79M  433M| 158     0 |2531k    0 |  43k  619k
  7   7  33  54   0| 143M 53.6M 1508k 55.0M|  79M  433M| 155     0 |2522k    0 |  48k  615k
  6  11  33  50   0| 143M 53.9M 1964k 54.1M|  79M  433M| 147     0 |2372k    0 |  47k  599k
  7  12  34  47   0| 143M 52.8M 2444k 54.9M|  79M  433M| 163     0 |2605k    0 |  41k  548k
 
Thank you very much Joe. Great info here for everyone with this issue.
 
Hi,

I've had this problem on a Raspberry Pi 2. I use it as a torrent/scan/print server, and it now runs Alpine Linux in swapless ramdisk mode (it's currently the default and simplest install option for a Pi). Previously, I ran Arch Linux with the boot partition on SD and the rest of the system on a USB drive.

I decided to try a simpler OpenRC Linux variant when Transmission began to exhibit the problem you've all encountered, because I felt that the solution might involve controlling the amount of memory Transmission had to play with, the Systemd docs for memory management looked painfully sparse, and Cgroups were deprecated because Systemd.

I installed it yesterday. I allowed it to run for a while until the system broke, in order to confirm that the problem persisted.

I set the OpenRC daemon launch options to hard-limit memory to 300 MB, soft-limit it to 280 MB, and limit the daemon to a single process.

It's been running overnight with a number of active torrents. This is the system's current usage, with Transmission still active and working:

Code:
hermiod:~# free -m
             total       used       free     shared    buffers     cached
Mem:           939        928         10         71          0        863
-/+ buffers/cache:         65        873
Swap:            0          0          0
hermiod:~#

I expected Transmission to eat up the entirety of the allowed memory quota, but it seems to be happily using a small percentage of that.

The following is the content of /etc/conf.d/transmission-daemon. The relevant entry is 'rc_cgroup_settings'.

Code:
# This is the transmission-daemon configuration file. For other options and a
# better explanation, take a look at the transmission-daemon manual page.
# Note: it's better to configure some settings (like username/password) in
# /var/transmission/config/settings.json to avoid other users seeing them with `ps`

TRANSMISSION_OPTIONS="--encryption-required"

# Run daemon as another user (username or username:groupname)
# If you change this setting, chown -R /var/transmission/config <and download directory, check web settings>
runas_user=master

# Location of logfile (should be writeable for runas_user user)
# Set logfile=syslog to use syslog for logging
#logfile=/var/log/transmission/transmission.log

rc_cgroup_settings="
memory.high 293601280
memory.max 314572800
pids.max 1
"

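The two byte values are simply 280 MiB and 300 MiB expressed in bytes:
Code:
echo $((280 * 1024 * 1024))   # 293601280 -> memory.high
echo $((300 * 1024 * 1024))   # 314572800 -> memory.max
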
I like Transmission for its set-up-and-forget nature and small footprint. I've used it for years, and the memory annexation behaviour was an unpleasant change. I don't understand the behaviour, but it seems to be a relatively minor problem as long as the daemon can be launched with a hard quota.
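
For comparison, a systemd-based host could presumably apply the same kind of hard quota with a drop-in; the directive names come from systemd.resource-control(5), and the unit name is an assumption:
Code:
systemctl edit transmission-daemon
# then add:
#   [Service]
#   MemoryHigh=280M
#   MemoryMax=300M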

For the record, I'm not anti-Systemd - it seems to work well enough for my desktop and laptop - but it does seem overcomplex, I dislike the fact that it seems to be gradually eating its way into the rest of the OS, and Poettering's attitude to bugs looks a little worrying.

Overall, I'm uncomfortable using a Systemd-based host for a net-facing device.
 
