[OSM-dev] postgresql/postgis leaking memory in tirex rendering setup

Stephan Knauss osm at stephans-server.de
Sat Mar 27 13:45:17 UTC 2021


I have recently updated my rendering stack to the latest software releases.
Since then I have been observing a memory leak of about 2 GB of available memory per hour.

The memory seems to stay allocated by the PostgreSQL backend processes. These
hold the open connections from the tirex backends doing the rendering, and the
connections are never idle because the rendering queue is always full.
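To see which backends are involved, something like this lists the PIDs behind
the rendering connections (a minimal sketch; the connection parameters and the
"gis" database name are placeholders for my setup):

# Minimal sketch: list the PostgreSQL backend PIDs behind the long-lived
# rendering connections so their memory can be tracked individually.
# Connection parameters and the database name are placeholders.
import psycopg2

conn = psycopg2.connect(dbname="gis", user="postgres", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT pid, usename, application_name, state, backend_start
        FROM pg_stat_activity
        WHERE datname = current_database()
          AND pid <> pg_backend_pid()
        ORDER BY backend_start
    """)
    for pid, user, app, state, started in cur.fetchall():
        print(f"pid={pid} user={user} app={app} state={state} since={started}")
conn.close()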

Is this something others are seeing as well? I do not remember such memory
leaks from my previous setup; only the versions of the software stack changed.

Stopping the rendering queue and restarting the tirex backends releases the
memory. I could use this as a work-around, but I suspect this is not normal
behavior.

Are there ways to debug what causes PostgreSQL to keep the memory?
Did I miss a configuration option?
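In case it is useful for debugging, this is a sketch of what I could run
against those backend PIDs to see whether the growth is in private anonymous
memory (which would point at a leak inside the backend) or in shared memory
(shared_buffers filling up, which would be expected). The PID list is a
placeholder taken from pg_stat_activity as above, and it has to run where the
postgres PIDs are visible, i.e. inside the postgis/postgis container or on the
host with the host-side PIDs:

# Minimal sketch: sample per-backend memory from /proc to see whether
# private anonymous memory (RssAnon) or shared memory (RssShmem) keeps growing.
# Requires a kernel >= 4.5 for the RssAnon/RssFile/RssShmem breakdown.
import time

BACKEND_PIDS = [2453393]      # placeholder: PIDs of the rendering backends
FIELDS = ("VmRSS", "RssAnon", "RssFile", "RssShmem")

def read_status(pid):
    """Return the RSS breakdown (in kB) for one process from /proc."""
    values = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in FIELDS:
                values[key] = int(rest.split()[0])   # value is reported in kB
    return values

while True:
    stamp = time.strftime("%H:%M:%S")
    for pid in BACKEND_PIDS:
        try:
            v = read_status(pid)
        except FileNotFoundError:
            print(f"{stamp} pid={pid} gone (backend exited)")
            continue
        print(f"{stamp} pid={pid} " +
              " ".join(f"{k}={v.get(k, 0)}kB" for k in FIELDS))
    time.sleep(60)   # one sample per minute is enough to see a 2 GB/h trend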

Postgres/PostGIS is from their latest Docker image postgis/postgis (hash
74a85c5bd6ac).

PostgreSQL 13.2 (Debian 13.2-1.pgdg100+1) on x86_64-pc-linux-gnu, 
compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit

POSTGIS=""3.1.1 aaf4c79"" [EXTENSION] PGSQL=""130"" 
GEOS=""3.7.1-CAPI-1.11.1 27a5e771"" PROJ=""Rel. 5.2.0, September 15th, 
2018"" LIBXML=""2.9.4"" LIBJSON=""0.12.1"" LIBPROTOBUF=""1.3.1"" 
WAGYU=""0.5.0 (Internal)""


Relevant syslog excerpt of the OOM kill:

postgres invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), 
order=0, oom_score_adj=0
CPU: 9 PID: 3322620 Comm: postgres Not tainted 5.4.0-67-generic #75-Ubuntu
Hardware name: Gigabyte Technology Co., Ltd. B360 HD3P-LM/B360HD3PLM-CF, 
BIOS F7 HZ 07/24/2020
Call Trace:
  dump_stack+0x6d/0x8b
  dump_header+0x4f/0x1eb
  oom_kill_process.cold+0xb/0x10
  out_of_memory.part.0+0x1df/0x3d0
  out_of_memory+0x6d/0xd0
  __alloc_pages_slowpath+0xd5e/0xe50
  __alloc_pages_nodemask+0x2d0/0x320
  alloc_pages_current+0x87/0xe0
  __page_cache_alloc+0x72/0x90
  pagecache_get_page+0xbf/0x300
  filemap_fault+0x6b2/0xa50
  ? unlock_page_memcg+0x12/0x20
  ? page_add_file_rmap+0xff/0x1a0
  ? xas_load+0xd/0x80
  ? xas_find+0x17f/0x1c0
  ? filemap_map_pages+0x24c/0x380
  ext4_filemap_fault+0x32/0x50
  __do_fault+0x3c/0x130
  do_fault+0x24b/0x640
  ? __switch_to_asm+0x34/0x70
  __handle_mm_fault+0x4c5/0x7a0
  handle_mm_fault+0xca/0x200
  do_user_addr_fault+0x1f9/0x450
  __do_page_fault+0x58/0x90
  do_page_fault+0x2c/0xe0
  page_fault+0x34/0x40
RIP: 0033:0x558ddf0beded
Code: Bad RIP value.
RSP: 002b:00007ffe5214a020 EFLAGS: 00010202
RAX: 00007fea26b16b28 RBX: 0000000000000028 RCX: 00007fea26b16b68
RDX: 0000000000000028 RSI: 0000000000000000 RDI: 00007fea26b16b28
RBP: 0000000000000010 R08: 00007fea26b16b28 R09: 0000000000000019
R10: 0000000000000001 R11: 0000000000000001 R12: 00000000ffffffff
R13: 00007fea26af76d8 R14: 00007fea26af7728 R15: 0000000000000000
Mem-Info:
active_anon:29797121 inactive_anon:2721653 isolated_anon:32
  active_file:323 inactive_file:83 isolated_file:0
  unevictable:16 dirty:14 writeback:0 unstable:0
  slab_reclaimable:85925 slab_unreclaimable:106003
  mapped:1108591 shmem:14943591 pagetables:69567 bounce:0
  free:148637 free_pcp:1619 free_cma:0
Node 0 active_anon:119188484kB inactive_anon:10886612kB 
active_file:1292kB inactive_file:332kB unevictable:64kB 
isolated(anon):128kB isolated(file):0kB mapped:4434364kB dirty:56kB 
writeback:0kB shmem:59774364kB shmem_thp: 0kB shmem_pmdmapped: 0kB 
anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
Node 0 DMA free:15904kB min:8kB low:20kB high:32kB active_anon:0kB 
inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB 
writepending:0kB present:15988kB managed:15904kB mlocked:0kB 
kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB 
free_cma:0kB
lowmem_reserve[]: 0 809 128670 128670 128670
Node 0 DMA32 free:511716kB min:424kB low:1252kB high:2080kB 
active_anon:328544kB inactive_anon:8540kB active_file:204kB 
inactive_file:0kB unevictable:0kB writepending:0kB present:947448kB 
managed:881912kB mlocked:0kB kernel_stack:0kB pagetables:416kB 
bounce:0kB free_pcp:1412kB local_pcp:288kB free_cma:0kB
lowmem_reserve[]: 0 0 127860 127860 127860
Node 0 Normal free:66928kB min:67148kB low:198076kB high:329004kB 
active_anon:118859812kB inactive_anon:10877988kB active_file:1812kB 
inactive_file:1480kB unevictable:64kB writepending:56kB 
present:133160960kB managed:130937320kB mlocked:64kB 
kernel_stack:18336kB pagetables:277852kB bounce:0kB free_pcp:5064kB 
local_pcp:384kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0 0
Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB 
(U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
Node 0 DMA32: 1388*4kB (UME) 1274*8kB (UME) 1772*16kB (UE) 1564*32kB 
(UME) 1082*64kB (UE) 514*128kB (UME) 160*256kB (UE) 46*512kB (UME) 
23*1024kB (UE) 11*2048kB (UME) 42*4096kB (UME) = 511808kB
Node 0 Normal: 715*4kB (UEH) 108*8kB (UEH) 1400*16kB (UMEH) 1156*32kB 
(UMEH) 58*64kB (UMEH) 7*128kB (UMEH) 1*256kB (U) 0*512kB 0*1024kB 
0*2048kB 0*4096kB = 67980kB
Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 
hugepages_size=1048576kB
Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 
hugepages_size=2048kB
14982418 total pagecache pages
38157 pages in swap cache
Swap cache stats: add 14086008, delete 14047525, find 102734363/105748939
Free swap  = 0kB
Total swap = 4189180kB
33531099 pages RAM
0 pages HighMem/MovableOnly
572315 pages reserved
0 pages cma reserved
0 pages hwpoisoned
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=fae7f79d710f4449fd87c58f38eb164a470e3f837b33630c6c10a9fbca10a82b,mems_allowed=0,global_oom,task_memcg=/docker/e88b65d20ef39588c6bf9c00e7aa2946f134a61a6195c210f7081d7ed4d9a5fa,task=postgres,pid=2453393,uid=999
Out of memory: Killed process 2453393 (postgres) total-vm:10625568kB, 
anon-rss:6768188kB, file-rss:4kB, shmem-rss:3592772kB, UID:999 
pgtables:20756kB oom_score_adj:0
oom_reaper: reaped process 2453393 (postgres), now anon-rss:0kB, 
file-rss:0kB, shmem-rss:3592772kB

Stephan


