Bug 106739 - [CI] igt@gem_exec_schedule@wide-bsd1 - dmesg-fail - Failed assertion: __gem_context_create(fd, &ctx_id) == 0 / gem_exec_schedu: page allocation failure: order:0, mode:0x8402(__GFP_HIGHMEM|__GFP_RETRY_MAYFAIL|__GFP_ZERO), nodemask=(null)
Status: CLOSED WORKSFORME
Alias: None
Product: DRI
Classification: Unclassified
Component: DRM/Intel
Version: XOrg git
Hardware: Other
OS: All
Priority: medium
Severity: normal
Assignee: Intel GFX Bugs mailing list
QA Contact: Intel GFX Bugs mailing list
URL:
Whiteboard: ReadyForDev
Keywords:
Depends on:
Blocks:
 
Reported: 2018-05-30 21:25 UTC by Martin Peres
Modified: 2018-11-01 15:39 UTC
CC: 1 user

See Also:
i915 platform: SKL
i915 features: GEM/Other


Attachments

Description Martin Peres 2018-05-30 21:25:24 UTC
https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_49/fi-skl-gvtdvm/igt@gem_exec_schedule@wide-bsd1.html

(gem_exec_schedule:1217) i915/gem_context-CRITICAL: Test assertion failure function gem_context_create, file ../lib/i915/gem_context.c:106:
(gem_exec_schedule:1217) i915/gem_context-CRITICAL: Failed assertion: __gem_context_create(fd, &ctx_id) == 0
(gem_exec_schedule:1217) i915/gem_context-CRITICAL: error: -12 != 0
Subtest wide-bsd1 failed.

[  111.727309] gem_exec_schedu: page allocation failure: order:0, mode:0x8402(__GFP_HIGHMEM|__GFP_RETRY_MAYFAIL|__GFP_ZERO), nodemask=(null)
[  111.727333] CPU: 0 PID: 1217 Comm: gem_exec_schedu Tainted: G     U            4.17.0-rc6-ga8727d3fe037-drmtip_49+ #1
[  111.727335] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.10.1-0-g8891697-prebuilt.qemu-project.org 04/01/2014
[  111.727337] Call Trace:
[  111.727342]  dump_stack+0x67/0x9b
[  111.727346]  warn_alloc+0xee/0x170
[  111.727367]  __alloc_pages_nodemask+0xe80/0x1250
[  111.727371]  ? ___slab_alloc.constprop.34+0x232/0x3e0
[  111.727416]  setup_scratch_page+0x173/0x200 [i915]
[  111.727448]  gen8_ppgtt_init+0x5e/0x4f0 [i915]
[  111.727452]  ? kmem_cache_alloc_trace+0x282/0x2e0
[  111.727481]  i915_ppgtt_create+0x65/0x1d0 [i915]
[  111.727507]  i915_gem_create_context+0x129/0x2b0 [i915]
[  111.727533]  i915_gem_context_create_ioctl+0x5f/0x150 [i915]
[  111.727559]  ? i915_gem_switch_to_kernel_context+0x1b0/0x1b0 [i915]
[  111.727562]  drm_ioctl_kernel+0x7c/0xf0
[  111.727566]  drm_ioctl+0x2e6/0x3a0
[  111.727591]  ? i915_gem_switch_to_kernel_context+0x1b0/0x1b0 [i915]
[  111.727597]  ? finish_task_switch+0x96/0x270
[  111.727600]  ? _raw_spin_unlock_irq+0x24/0x50
[  111.727603]  ? trace_hardirqs_on_caller+0xe0/0x1b0
[  111.727615]  do_vfs_ioctl+0xa0/0x6c0
[  111.727619]  ? __schedule+0x351/0xbe0
[  111.727623]  ksys_ioctl+0x35/0x60
[  111.727627]  __x64_sys_ioctl+0x11/0x20
[  111.727630]  do_syscall_64+0x55/0x190
[  111.727633]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
[  111.727635] RIP: 0033:0x7f79c461a5d7
[  111.727637] RSP: 002b:00007ffd33a58528 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[  111.727640] RAX: ffffffffffffffda RBX: 00005632553f23a8 RCX: 00007f79c461a5d7
[  111.727642] RDX: 00007ffd33a58580 RSI: 00000000c008646d RDI: 0000000000000003
[  111.727643] RBP: 00007ffd33a58580 R08: 0000000000000077 R09: 0000000000000000
[  111.727645] R10: 00005632553c6010 R11: 0000000000000246 R12: 00000000c008646d
[  111.727646] R13: 0000000000000003 R14: 0000000000000000 R15: 0000000000000000
[  111.727696] Mem-Info:
[  111.727700] active_anon:43235 inactive_anon:22297 isolated_anon:0
                active_file:33900 inactive_file:289953 isolated_file:32
                unevictable:0 dirty:34 writeback:0 unstable:0
                slab_reclaimable:67384 slab_unreclaimable:20366
                mapped:16533 shmem:2238 pagetables:1448 bounce:0
                free:13205 free_pcp:192 free_cma:0
[  111.727704] Node 0 active_anon:172940kB inactive_anon:89188kB active_file:135600kB inactive_file:1159812kB unevictable:0kB isolated(anon):0kB isolated(file):128kB mapped:66132kB dirty:136kB writeback:0kB shmem:8952kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 57344kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[  111.727707] DMA free:8148kB min:356kB low:444kB high:532kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:6564kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  111.727708] lowmem_reserve[]: 0 1948 1948 1948
[  111.727718] DMA32 free:44672kB min:44696kB low:55868kB high:67040kB active_anon:172940kB inactive_anon:89188kB active_file:135600kB inactive_file:1153248kB unevictable:0kB writepending:136kB present:2080640kB managed:1999028kB mlocked:0kB kernel_stack:2672kB pagetables:5792kB bounce:0kB free_pcp:768kB local_pcp:768kB free_cma:0kB
[  111.727719] lowmem_reserve[]: 0 0 0 0
[  111.727726] DMA: 9*4kB (UME) 8*8kB (UM) 5*16kB (ME) 5*32kB (E) 2*64kB (ME) 2*128kB (UE) 1*256kB (E) 2*512kB (ME) 2*1024kB (UM) 2*2048kB (ME) 0*4096kB = 8148kB
[  111.727755] DMA32: 232*4kB (ME) 532*8kB (UME) 300*16kB (UME) 322*32kB (ME) 201*64kB (UME) 80*128kB (UME) 5*256kB (M) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 44672kB
[  111.727782] 326119 total pagecache pages
[  111.727786] 0 pages in swap cache
[  111.727788] Swap cache stats: add 0, delete 0, find 0/0
[  111.727789] Free swap  = 4193276kB
[  111.727790] Total swap = 4193276kB
[  111.727792] 524158 pages RAM
[  111.727793] 0 pages HighMem/MovableOnly
[  111.727794] 20424 pages reserved
Comment 1 Chris Wilson 2018-05-31 08:47:43 UTC
What this is saying is that it has over 10,000 free pages, some free in each zone, and yet it doesn't want to let us have one of them.

The issue here is that we switched to __GFP_RETRY_MAYFAIL to let these allocations fail under memory pressure; but it's meant to at least try direct reclaim first.
Comment 2 Martin Peres 2018-11-01 15:39:17 UTC
Seen twice within 6 runs, then nothing since drmtip_55 (5 months / 80 runs ago). Closing!