Created attachment 137021 [details]
dmesg log

Running 32-bit OpenGL applications on my AMD card with DRI_PRIME=1 produces the following error; running 64-bit applications works just fine. Somehow I suspect virtual memory is the cause.

libGL: pci id for fd 5: 1002:6823, driver radeonsi
libGL: OpenDriver: trying /usr/lib32/dri/tls/radeonsi_dri.so
libGL: OpenDriver: trying /usr/lib32/dri/radeonsi_dri.so
/usr/share/libdrm/amdgpu.ids version: 1.0.0
radeonsi: Failed to create a context.
libGL: Using DRI3 for screen 0
radeonsi: Failed to create a context.
X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  154 (GLX)
  Minor opcode of failed request:  3 (X_GLXCreateContext)
  Value in failed request:  0x0
  Serial number of failed request:  31
  Current serial number in output stream:  33

At the same time, the following messages are logged in dmesg (see a similar but apparently harmless case: https://bugs.freedesktop.org/show_bug.cgi?id=104082):

[433611.485600] amdgpu 0000:03:00.0: swiotlb buffer is full (sz: 2097152 bytes)
[433611.485602] amdgpu 0000:03:00.0: swiotlb: coherent allocation failed, size=2097152

The full relevant dmesg log of the incident is attached.

Various VRAM/GTT info from boot:

[   10.485945] [drm] amdgpu kernel modesetting enabled.
[   10.486189] amdgpu 0000:03:00.0: enabling device (0000 -> 0003)
[   10.669296] amdgpu 0000:03:00.0: VRAM: 2048M 0x000000F400000000 - 0x000000F47FFFFFFF (2048M used)
[   10.669298] amdgpu 0000:03:00.0: GTT: 1024M 0x0000000000000000 - 0x000000003FFFFFFF
[   10.669387] [drm] amdgpu: 2048M of VRAM memory ready
[   10.669388] [drm] amdgpu: 3072M of GTT memory ready.
[   10.670190] amdgpu 0000:03:00.0: PCIE GART of 1024M enabled (table at 0x000000F400040000).
[   10.685934] [drm] amdgpu: dpm initialized
[   11.043323] [drm] Initialized amdgpu 3.23.0 20150101 for 0000:03:00.0 on minor 1

Sysinfo:
- Gentoo x86_64
- llvm/clang/mesa/libdrm and friends, including drivers, all pulled from git as of ~2 days ago
- Intel HD 4400 Gen7.5 Haswell - i915
- AMD Radeon HD 8850M - amdgpu
- Both drivers are using DRI3 for PRIME
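For reference, a minimal GLX test program, roughly what glxgears does before it starts rendering, reproduces the failure without a full application. This is only a sketch: the -m32 build (which assumes a multilib toolchain with 32-bit libX11/libGL installed) exercises the same X_GLXCreateContext request that fails with BadValue above.

/* glxctx.c - minimal GLX context creation test.
 * Build 32-bit (assumes multilib libs): gcc -m32 glxctx.c -o glxctx -lX11 -lGL
 * Run with: DRI_PRIME=1 ./glxctx
 */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    /* Pick a double-buffered RGBA visual, as glxgears/glxinfo do. */
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) {
        fprintf(stderr, "no suitable GLX visual\n");
        return 1;
    }

    /* This is the request that fails with BadValue / X_GLXCreateContext
     * when the 32-bit radeonsi driver cannot create a context. */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    XSync(dpy, False); /* force delivery of any asynchronous X error */

    if (ctx) {
        printf("GLX context created successfully\n");
        glXDestroyContext(dpy, ctx);
    }
    XCloseDisplay(dpy);
    return 0;
}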
This is a Mesa or xserver (or maybe GLVND?) issue; the dmesg messages are unrelated and harmless.

Please attach the glxinfo output and the Xorg log file.

Did it work with an older version of Mesa / xserver / GLVND?
Created attachment 137024 [details] Output of DRI_PRIME=1 glxinfo
Created attachment 137025 [details] Xorg.0.log
(In reply to Michel Dänzer from comment #1)
> This is a Mesa or xserver (or maybe GLVND?) issue; the dmesg messages are
> unrelated and harmless.
>
> Please attach the glxinfo output and the Xorg log file.
>
> Did it work with an older version of Mesa / xserver / GLVND?

The last time I tested this (and it worked fine) was maybe over a year ago, so I'm guessing downgrading packages that far back would be infeasible. Sadly, due to the nature of my setup, where all graphics packages/drivers are built from git, I'm pretty sure bisecting something like this would be next to impossible, not to mention the time it would take, which I sadly do not have right now.
What version of GCC are you using?
(In reply to Mike Lothian from comment #5)
> What version of GCC are you using?

Testing was done with GCC 7 and GCC 8.0 alpha.
I just rebuilt libdrm from git. It seems it was the only package I missed while rebuilding the whole stack two days ago, which means my libdrm version was missing the following commit by one day:

1cc17744b988106b4fe71ee9d3d17b651d6adb40 - "amdgpu: fix high VA mask"

This commit appears to fix the issue, and I can confirm that 32-bit glxgears now works again.
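For context, the class of bug a "fix high VA mask" change typically addresses is a mask computed with 32-bit integer arithmetic, which silently truncates (or invokes undefined behaviour for) virtual addresses above 4 GiB. The snippet below is only an illustration of that pattern with assumed names (va_bits, mask_buggy, mask_fixed); it is not the actual libdrm change.

/* Illustration only: how a "high VA mask" can be computed wrongly.
 * Not the real libdrm code. */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical number of virtual address bits reported by the kernel;
     * anything >= 32 breaks the buggy variant below. */
    unsigned va_bits = 48;

    /* Buggy: the shift is performed in 32-bit int, so the result is
     * truncated/undefined once va_bits >= 32. */
    uint64_t mask_buggy = (1 << va_bits) - 1;

    /* Fixed: force 64-bit arithmetic so the high VA bits survive. */
    uint64_t mask_fixed = (1ULL << va_bits) - 1;

    printf("buggy: 0x%016" PRIx64 "\nfixed: 0x%016" PRIx64 "\n",
           mask_buggy, mask_fixed);
    return 0;
}

Promoting the shift to a 64-bit type (1ULL << va_bits) is the usual fix for this pattern; whether that is exactly what the commit above does is best checked against the commit itself.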
Resolving per comment 7. Thanks for the follow-up; glad it's working now.