Summary: | [BAT] igt@drv_selftest@mock_hugepages - dmesg-fail - Failed assertion: err == 0 | |
---|---|---|---
Product: | DRI | Reporter: | Martin Peres <martin.peres>
Component: | DRM/Intel | Assignee: | Intel GFX Bugs mailing list <intel-gfx-bugs>
Status: | CLOSED FIXED | QA Contact: | Intel GFX Bugs mailing list <intel-gfx-bugs>
Severity: | normal | |
Priority: | medium | CC: | intel-gfx-bugs
Version: | XOrg git | |
Hardware: | Other | |
OS: | All | |
Whiteboard: | ReadyForDev | |
i915 platform: | G33 | i915 features: | GEM/Other
Description
Martin Peres
2018-07-17 07:59:51 UTC
If my guess is correct that this is self-inflicted (as opposed to external fragmentation), this should be prevented by:

commit d778847208c016f66a44d4c40baa74ca3bf724fd (HEAD -> drm-intel-next-queued, drm-intel/for-linux-next, drm-intel/drm-intel-next-queued)
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Tue Jul 17 09:23:34 2018 +0100

    drm/i915/selftests: Free the backing store between iterations

    In the huge pages tests, we may have lots of objects being trapped on
    the freelist as we hold the struct_mutex, allowing the free worker no
    opportunity to recover the backing store. We also have stricter
    requirements and the desire for large contiguous pages, further
    increasing the allocation pressure. To reduce the chance of running
    out of memory, we could either drop the mutex and flush the free
    worker, or we could release the backing store directly. We do the
    latter in this patch for simplicity.

    References: https://bugs.freedesktop.org/show_bug.cgi?id=107254
    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Matthew Auld <matthew.william.auld@gmail.com>
    Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
    Link: https://patchwork.freedesktop.org/patch/msgid/20180717082334.18774-1-chris@chris-wilson.co.uk

This failure hasn't been seen since CI_DRM_4473. Martin, time to close this?

Closing as this is not seen anymore.
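For illustration only: the commit message above contrasts two ways of handling freed objects during the selftest loop, parking them on a deferred freelist that a worker drains later, or releasing the backing store immediately each iteration. The following is a minimal user-space C sketch of that trade-off; it is not the i915 code, and every name in it (mock_obj, put_deferred, put_direct, OBJ_SIZE) is made up for this example.

```c
/*
 * Sketch of deferred vs. immediate release of an object's backing store.
 * Not i915 code; all names here are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ITERATIONS 8
#define OBJ_SIZE   (16UL << 20)   /* 16 MiB per "object" */

struct mock_obj {
	void *backing;             /* stands in for the object's backing store */
	struct mock_obj *next;     /* freelist link */
};

static struct mock_obj *freelist;  /* deferred-free list, drained "later" */

/* Deferred style: the object lingers on the freelist, memory stays pinned. */
static void put_deferred(struct mock_obj *obj)
{
	obj->next = freelist;
	freelist = obj;
}

/* Direct style: release the backing store right away. */
static void put_direct(struct mock_obj *obj)
{
	free(obj->backing);
	free(obj);
}

static struct mock_obj *alloc_obj(void)
{
	struct mock_obj *obj = malloc(sizeof(*obj));

	if (!obj)
		return NULL;
	obj->backing = malloc(OBJ_SIZE);
	if (!obj->backing) {
		free(obj);
		return NULL;
	}
	memset(obj->backing, 0, OBJ_SIZE);  /* actually touch the pages */
	return obj;
}

int main(int argc, char **argv)
{
	int deferred = argc > 1 && !strcmp(argv[1], "deferred");

	for (int i = 0; i < ITERATIONS; i++) {
		struct mock_obj *obj = alloc_obj();

		if (!obj) {
			fprintf(stderr, "iteration %d: allocation failed\n", i);
			break;
		}
		if (deferred)
			put_deferred(obj);  /* memory piles up across iterations */
		else
			put_direct(obj);    /* memory returned before the next pass */
	}

	/* What the free worker would eventually do, once it gets to run. */
	while (freelist) {
		struct mock_obj *obj = freelist;

		freelist = obj->next;
		put_direct(obj);
	}
	printf("done after %d iterations\n", ITERATIONS);
	return 0;
}
```

Run without arguments, each iteration's buffer is returned before the next allocation; run as `./a.out deferred`, every buffer stays allocated until the drain loop at the end, which is roughly the accumulation the selftest suffered while holding struct_mutex.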