Bug 109718 - [CI][DRMTIP] igt@gem_mmap_gtt@fault-concurrent - dmesg-warn - page allocation failure
Summary: [CI][DRMTIP] igt@gem_mmap_gtt@fault-concurrent - dmesg-warn - page allocation failure
Status: RESOLVED WONTFIX
Alias: None
Product: DRI
Classification: Unclassified
Component: DRM/Intel
Version: DRI git
Hardware: Other
OS: All
Importance: medium normal
Assignee: Intel GFX Bugs mailing list
QA Contact: Intel GFX Bugs mailing list
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-02-21 12:33 UTC by Lakshmi
Modified: 2019-02-21 12:38 UTC (History)
CC List: 1 user

See Also:
i915 platform: PNV
i915 features: GEM/Other


Attachments

Description Lakshmi 2019-02-21 12:33:00 UTC
https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_221/fi-pnv-d510/igt@gem_mmap_gtt@fault-concurrent.html

<6> [121.736873] Console: switching to colour dummy device 80x25
<6> [121.737490] [IGT] gem_mmap_gtt: executing
<6> [121.757149] [IGT] gem_mmap_gtt: starting subtest fault-concurrent
<4> [129.196522] gem_mmap_gtt: page allocation failure: order:0, mode:0x40d0(__GFP_IO|__GFP_FS|__GFP_COMP|__GFP_RECLAIMABLE), nodemask=(null)
<4> [129.196770] CPU: 2 PID: 1246 Comm: gem_mmap_gtt Tainted: G     U            5.0.0-rc6-gbf979a24473a-drmtip_221+ #1
<4> [129.196783] Hardware name:  /D510MO, BIOS MOPNV10J.86A.0311.2010.0802.2346 08/02/2010
<4> [129.196795] Call Trace:
<4> [129.196820]  dump_stack+0x67/0x9b
<4> [129.196843]  warn_alloc+0xfa/0x180
<4> [129.196909]  __alloc_pages_nodemask+0xda7/0x1110
<4> [129.197000]  new_slab+0x3ba/0x560
<4> [129.197013]  ___slab_alloc.constprop.34+0x2d3/0x380
<4> [129.197013]  ? xas_nomem+0x38/0x50
<4> [129.197013]  ? trace_hardirqs_on_thunk+0x1a/0x1c
<4> [129.197013]  ? xas_nomem+0x38/0x50
<4> [129.197013]  ? __slab_alloc.isra.27.constprop.33+0x3d/0x70
<4> [129.197013]  __slab_alloc.isra.27.constprop.33+0x3d/0x70
<4> [129.197013]  ? xas_nomem+0x38/0x50
<4> [129.197013]  kmem_cache_alloc+0x21c/0x280
<4> [129.197013]  xas_nomem+0x38/0x50
<4> [129.197013]  add_to_swap_cache+0x2b2/0x370
<4> [129.197013]  __read_swap_cache_async+0xed/0x1d0
<4> [129.197013]  swap_cluster_readahead+0x16d/0x240
<4> [129.197013]  ? shmem_swapin+0x7b/0xa0
<4> [129.197013]  shmem_swapin+0x7b/0xa0
<4> [129.197013]  ? find_get_entry+0x1b5/0x2f0
<4> [129.197013]  shmem_getpage_gfp.isra.8+0x76b/0xd10
<4> [129.197013]  shmem_read_mapping_page_gfp+0x3e/0x70
<4> [129.197013]  i915_gem_object_get_pages_gtt+0x203/0x680 [i915]
<4> [129.197013]  ? __i915_gem_object_get_pages+0x18/0xb0 [i915]
<4> [129.197013]  ? lock_acquire+0xa6/0x1c0
<4> [129.197013]  ____i915_gem_object_get_pages+0x1d/0xa0 [i915]
<4> [129.197013]  __i915_gem_object_get_pages+0x59/0xb0 [i915]
<4> [129.197013]  i915_gem_fault+0x335/0x860 [i915]
<4> [129.197013]  __do_fault+0x2c/0xb0
<4> [129.197013]  ? rwsem_down_read_failed+0xf7/0x1b0
<4> [129.197013]  __handle_mm_fault+0x98c/0xfa0
<4> [129.197013]  handle_mm_fault+0x196/0x3a0
<4> [129.197013]  __do_page_fault+0x246/0x500
<4> [129.197013]  ? page_fault+0x8/0x30
<4> [129.197013]  page_fault+0x1e/0x30
<4> [129.197013] RIP: 0033:0x55b965da5651
<4> [129.197013] Code: 10 48 8b 45 f8 8b 50 08 8b 45 f4 01 c2 89 d0 c1 f8 1f c1 e8 1b 01 c2 83 e2 1f 29 c2 89 d0 48 98 48 c1 e0 03 48 01 c8 48 8b 00 <8b> 00 89 45 f0 83 45 f4 01 83 7d f4 1f 0f 8e 7a ff ff ff b8 00 00
<4> [129.197013] RSP: 002b:00007f41b0e04cb0 EFLAGS: 00010202
<4> [129.197013] RAX: 00007f41d2e0c000 RBX: 0000000000000000 RCX: 00007fff68b232d0
<4> [129.197013] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007fff68b23460
<4> [129.197013] RBP: 00007f41b0e04cb0 R08: 00007f41b0e05700 R09: 00007f41b0e05700
<4> [129.197013] R10: 00007f41b0e059d0 R11: 0000000000000202 R12: 00007f41b0e04d80
<4> [129.197013] R13: 0000000000000000 R14: 00007fff68b23460 R15: 00007fff68b23230
<4> [129.197013] Mem-Info:
<4> [129.197013] active_anon:28086 inactive_anon:108062 isolated_anon:65
 active_file:24956 inactive_file:23843 isolated_file:0
 unevictable:4083 dirty:28 writeback:146 unstable:0
 slab_reclaimable:9616 slab_unreclaimable:11879
 mapped:28287 shmem:109102 pagetables:1507 bounce:0
 free:23304 free_pcp:483 free_cma:0
<4> [129.197013] Node 0 active_anon:112344kB inactive_anon:432248kB active_file:99824kB inactive_file:95372kB unevictable:16332kB isolated(anon):260kB isolated(file):0kB mapped:113148kB dirty:112kB writeback:584kB shmem:436408kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 55296kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
<4> [129.197013] DMA free:4404kB min:744kB low:928kB high:1112kB active_anon:8kB inactive_anon:11412kB active_file:0kB inactive_file:4kB unevictable:0kB writepending:4kB present:15984kB managed:15900kB mlocked:0kB kernel_stack:16kB pagetables:8kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
<4> [129.197013] lowmem_reserve[]: 0 921 921 921
<4> [129.197013] DMA32 free:88812kB min:89364kB low:100440kB high:111516kB active_anon:112336kB inactive_anon:420944kB active_file:99824kB inactive_file:95368kB unevictable:16332kB writepending:344kB present:1015244kB managed:947640kB mlocked:0kB kernel_stack:3072kB pagetables:6020kB bounce:0kB free_pcp:1932kB local_pcp:620kB free_cma:0kB
<4> [129.197013] lowmem_reserve[]: 0 0 0 0
<4> [129.197013] DMA: 3*4kB (UME) 3*8kB (UE) 3*16kB (UME) 3*32kB (UME) 2*64kB (UM) 2*128kB (UM) 3*256kB (UME) 2*512kB (ME) 2*1024kB (UM) 0*2048kB 0*4096kB = 4404kB
<4> [129.197013] DMA32: 3060*4kB (MH) 1529*8kB (UME) 687*16kB (M) 330*32kB (UM) 140*64kB (UME) 53*128kB (UM) 38*256kB (UME) 31*512kB (UME) 1*1024kB (M) 0*2048kB 0*4096kB = 88392kB
<6> [129.197013] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
<4> [129.197013] 161322 total pagecache pages
<4> [129.197013] 3430 pages in swap cache
<4> [129.197013] Swap cache stats: add 23104, delete 19674, find 0/1
<4> [129.197013] Free swap  = 937212kB
<4> [129.197013] Total swap = 1030140kB
<4> [129.197013] 257807 pages RAM
<4> [129.197013] 0 pages HighMem/MovableOnly
<4> [129.197013] 16922 pages reserved
<4> [129.197013] SLUB: Unable to allocate memory on node -1, gfp=0xc0(__GFP_IO|__GFP_FS)
<4> [129.197013]   cache: radix_tree_node, object size: 576, buffer size: 912, default order: 2, min order: 0
<4> [129.197013]   node 0: slabs: 312, objs: 4940, free: 2
<6> [131.631690] gem_mmap_gtt (1241) used greatest stack depth: 12016 bytes left
<6> [133.152614] gem_mmap_gtt (1243) used greatest stack depth: 11512 bytes left
<6> [135.973501] gem_mmap_gtt (1246) used greatest stack depth: 11256 bytes left
<6> [151.077716] perf: interrupt took too long (3958 > 3943), lowering kernel.perf_event_max_sample_rate to 50000
<6> [151.958095] gem_mmap_gtt (1262) used greatest stack depth: 11016 bytes left
<6> [185.464741] [IGT] gem_mmap_gtt: exiting, ret=0
<6> [185.895361] Console: switching to colour frame buffer device 128x48
Comment 1 Lakshmi 2019-02-21 12:35:33 UTC
It's a different test failure, similar to Bug 107753.
Comment 2 CI Bug Log 2019-02-21 12:36:48 UTC
The CI Bug Log issue associated to this bug has been updated.

### New filters associated

* PNV: igt@gem_mmap_gtt@fault-concurrent - dmesg-warn - page allocation failure
  - https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_221/fi-pnv-d510/igt@gem_mmap_gtt@fault-concurrent.html
Comment 3 Chris Wilson 2019-02-21 12:38:05 UTC
There really isn't much we can do to stop those warnings; but note that no information was lost and no error was detected.

In the far distant future, maybe gemfs will have finer control.
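
For context on why the warning is both harmless and hard to suppress: the failing order-0 allocation in the trace is a radix_tree_node allocated inside the swap-cache path (add_to_swap_cache → xas_nomem), several layers below the driver, so the GFP flags i915 passes down cannot silence it. The -ENOMEM simply propagates back up the fault path and is handled there, which is consistent with the test exiting with ret=0. The sketch below is a simplified, illustrative kernel-style example of the usual driver-side "try quietly, then retry with the full mask" pattern for populating a shmem-backed object; it is not the actual i915 code path, and get_backing_page() is a hypothetical helper name.

/*
 * Illustrative sketch only -- not the actual i915 code path.  Shows the
 * common pattern: a quiet, fail-fast allocation first, then a fallback to
 * the mapping's full GFP mask, with a final failure reported to the caller
 * (e.g. turned into a fault error) rather than being lost.
 */
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>

static struct page *get_backing_page(struct address_space *mapping,
				     pgoff_t index)
{
	gfp_t gfp = mapping_gfp_mask(mapping);
	struct page *page;

	/* First attempt: don't retry hard and don't spam dmesg. */
	page = shmem_read_mapping_page_gfp(mapping, index,
					   gfp | __GFP_NORETRY | __GFP_NOWARN);
	if (!IS_ERR(page))
		return page;

	/*
	 * Second attempt with the unmodified mask, letting reclaim make
	 * progress.  Allocations made internally by the swap-cache/XArray
	 * code use their own GFP flags, so a "page allocation failure"
	 * warning like the one above can still be printed from inside
	 * this call even though the outer error handling is graceful.
	 */
	return shmem_read_mapping_page_gfp(mapping, index, gfp);
}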

