Created attachment 70163 [details]
backtrace of the segfault
Due to the lack of documentation I don't really know what I'm doing, but I think the following should work...
Software: Linux 3.7-rc5, libdrm from git, mesa from git, xorg from git (here with -O0 and -g3), xf86-video-ati from git and xf86-video-intel from git.
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI WIMBLEDON XT [Radeon HD 7970M] (rev ff)
This is what xrandr says:
$ xrandr --listproviders
Providers: number : 2
Provider 0: id: 112 cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 3 outputs: 8 associated providers: 0 name:Intel
Provider 1: id: 69 cap: 0xd, Source Output, Source Offload, Sink Offload crtcs: 6 outputs: 0 associated providers: 0 name:radeon
I'm trying to make the radeon card the one that rendering is offloaded to:
$ xrandr --setprovideroffloadsink 69 112
Then I run
$ DRI_PRIME=1 glxinfo
and X segfaults. The backtrace is attached. Please tell me if you need more variable values etc.
I'm not sure how well the HD 7970M is supported yet but glxinfo should run...
With DRI_PRIME=0 it works fine on the intel card.
Again, this is the fairly new AMD Enduro stuff, so maybe it behaves differently from PowerXpress.
First of all, please (always) attach the full Xorg.0.log file.
> I'm not sure how well the HD 7970M is supported yet but glxinfo should run...
For it to say anything other than software rendering though, you need to build xf86-video-ati with glamor support and enable it with
Option "AccelMethod" "glamor"
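For reference, a minimal xorg.conf Device section for this might look like the following sketch; the Identifier is arbitrary and the BusID line is an example that needs adjusting to the machine (or omitting entirely):

```
Section "Device"
    Identifier "Radeon"
    Driver     "radeon"
    Option     "AccelMethod" "glamor"
    # BusID    "PCI:1:0:0"   # example only; adjust or omit
EndSection
```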
Created attachment 70282 [details]
Created attachment 70283 [details]
backtrace that fits the Xorg.0.log attachment
I don't think Prime can work without acceleration being enabled for the Radeon card. The catch is that glamor doesn't work with xserver 1.13 or newer yet... So I'm afraid this really cannot work at this point.
Obviously, this should be handled more gracefully than by crashing, but I'm not sure offhand if that's the responsibility of the server or the driver.
Thank you for looking into this.
I experimented with creating a config containing both the intel and radeon ddx so I could load glamor for radeon, but I either ended up with radeon not appearing in xrandr --listproviders or with X not starting. Starting X with no Device section (i.e. no glamor setting) was the only way I found to try Prime.
So I guess everybody who wants to try Prime with a Southern Islands card will end up in the same place, and considering that xorg-server 1.13 was released on 2012-09-05, there are probably quite a few distributions and people already using it.
When the segfault gets fixed and an error message is logged or displayed somewhere I would appreciate it if it could be specific and give a hint that the missing glamor acceleration is the reason.
Created attachment 76394 [details]
new Xorg.0.log with x.org 1.14
So considering the recent discussion on the x.org mailing list, should this be "solved"?
I still get a very similar segfault with git mesa, xorg, xf86-video-intel with uxa, radeon with glamor.
Is it a problem when radeon says this?
[ 3798.558] (II) RADEON(G0): EXA: Driver will allow EXA pixmaps in VRAM
I.e. is there a way to make radeon use glamor by default (and not via configuration) like intel has --with-default-accel=...?
(In reply to comment #6)
> I.e. is there a way to make radeon use glamor by default (and not via
> configuration) like intel has --with-default-accel=...?
Not yet. You can try these patches: http://lists.x.org/archives/xorg-driver-ati/2013-March/024523.html
Created attachment 76444 [details]
Xorg.0.log with patches
I have a similar setup, an intel integrated chip and a Radeon HD 7970M, and had the same problem while trying to use Prime.
I have now tried applying the patches; the result is that X refuses to start if the radeon kernel module is loaded. Here is the log.
Created attachment 76445 [details]
I had the same. This was bothersome to me:
[ 1235.513] (WW) RADEON(G0): Glamor is using GLES2 but GLX needs GL. Indirect GLX may not work correctly.
So I compiled glamor with this:
--disable-static --disable-glamor-gles2 --disable-glx-tls
and still got the same crash.
Can you run Xorg with the environment variable EGL_LOG_LEVEL=debug and attach its stderr output? Are libEGL and radeonsi_dri.so installed?
(In reply to comment #9)
> This was bothersome to me:
> [ 1235.513] (WW) RADEON(G0): Glamor is using GLES2 but GLX needs GL.
> Indirect GLX may not work correctly.
> So I compiled glamor with this:
> --disable-static --disable-glamor-gles2 --disable-glx-tls
FWIW, --disable-glamor-gles2 should be the default and should perform better than --enable-glamor-gles2.
Created attachment 76459 [details]
libEGL is installed.
radeonsi_dri.so is installed.
I ran Xorg like this:
xmg tmp # modprobe radeon
xmg tmp # export EGL_LOG_LEVEL=debug
xmg tmp # Xorg 2> log
xmg tmp #
Attached is the log.
The EGL debug output looks the same as for me up to that point...
Can you run Xorg in gdb and attach the output of bt full after the crash? (With debugging symbols in libglamor(egl).so)
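For anyone following along, such a session might look roughly like the following sketch; the display number is an assumption, and it should be run from a text console or a second machine so gdb isn't stuck behind the frozen X session:

```
gdb --args Xorg :1
(gdb) handle SIGUSR1 nostop noprint
(gdb) handle SIGUSR2 nostop noprint
(gdb) run
# ... reproduce the crash, e.g. run DRI_PRIME=1 glxinfo from another terminal ...
(gdb) bt full
```

The handle commands keep gdb from stopping on the signals the server uses internally; 'bt full' (as opposed to plain 'bt') includes the local variables of each frame.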
Created attachment 76466 [details]
Xorg gdb session
Here's the whole gdb session with the backtrace at the end
Created attachment 76467 [details]
backtrace with patches from comment #7 and debug symbols in glamor
This is glamor from git master.
(Damn, you beat me by a few seconds!)
Created attachment 76472 [details] [review]
glamoregl: Use xf86ScreenToScrn
This glamor patch should get you across this particular cliff.
P.S. For next time, I did mean 'bt full' when I wrote that. :)
I really have no idea what I am doing, but I felt adventurous and tried to debug X, and saw that something is wrong with the screens.
In glamor_gl_dispatch_init there is this line:
ScrnInfoPtr scrn = xf86Screens[screen->myNum];
at this point screen->myNum is 256 and xf86Screens[screen->myNum] is zero. I took a look at xf86Screens as an array and saw that it contained the intel screen at index 0 and some garbage address at index 1; everything else was zero. I assumed that scrn is supposed to point to the ScrnInfo struct for the radeon screen, so I noted its address earlier in RADEONScreenInit_KMS, replaced the garbage address in xf86Screens with it, and changed screen->myNum to 1. As a consequence, X didn't segfault where it segfaulted before. Instead I got this:
Mesa: User error: GL_INVALID_OPERATION in glAttachShader
Failed to link: error: fragment shader lacks `main'
Fatal server error:
GLSL link failure
whoops, didn't refresh before posting!
I used the patch instead of changing the values with gdb; same result as with my... uninformed solution x)
(In reply to comment #16)
> Mesa: User error: GL_INVALID_OPERATION in glAttachShader
Please set a breakpoint on _mesa_error and attach the output of 'bt full' when it hits for AttachShader.
Created attachment 76479 [details]
bt full till src/glamor_gradient.c:615
The breakpoint doesn't trigger. Maybe there is something wrong with my mesa debug symbols; I'll check that.
I stepped through manually and the Mesa error appears at src/glamor_gradient.c:615.
Attached is bt full up to there. I'll try to fix my mesa debug symbols.
Created attachment 76480 [details]
bt full on _mesa_error
I managed to get into _mesa_error with stepi; I don't really know why it won't let me just break there.
Attached is bt full at the end of _mesa_error.
(In reply to comment #20)
> attached is bt full at the end of _mesa_error
Looks like your glamor is still built with --enable-glamor-gles2, does it work better without that?
Created attachment 76482 [details]
Xorg log after crash
Yes, Xorg starts. However, if I try to use glxinfo, with or without Prime, it crashes without any error in the log.
(In reply to comment #22)
> it crashes without any error in the log.
Anything in stderr? If yes, and it's not an unresolved symbol, please get a backtrace from gdb.
Created attachment 76483 [details]
Xorg log after glxinfo
I forgot to remove "LIBGL_DRIVERS_PATH=/opt/xorg/lib/dri/" from /etc/environment, left over from some earlier attempts at compiling radeon. Removing it fixed glxinfo without DRI_PRIME.
With DRI_PRIME=1 I am getting a valid glxinfo output and then
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0.0"
after 33 requests (33 known processed) with 0 events remaining.
Attached is the Xorg log.
I'll run it through gdb now.
Created attachment 76484 [details] [review]
glamoregl: Use xf86ScreenToScrn, take 2
Does it work better with this patch? Turns out it helps if I actually test my patch. :\
Created attachment 76487 [details]
backtrace for X crash after xrandr
I applied the second patch instead of the first. Now it crashes even after xrandr.
I hooked up another machine to remote-debug the laptop, ran gdb on X in one session and xrandr --listproviders in another, and got a segfault. Attached is a backtrace for the X crash after xrandr.
Created attachment 76488 [details]
segfault when enabling compositing while radeon "renders"
I applied the patch on top of the first two patches you posted and got pretty good results.
~ % xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x70 cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 3 outputs: 8 associated providers: 1 name:Intel
Provider 1: id: 0x45 cap: 0xd, Source Output, Source Offload, Sink Offload crtcs: 6 outputs: 0 associated providers: 1 name:radeon
~ % xrandr --setprovideroffloadsink 1 0
~ % DRI_PRIME=0 glxinfo | grep OpenGL
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
OpenGL core profile version string: 3.1 (Core Profile) Mesa 9.2-devel (git-4dca602)
OpenGL core profile shading language version string: 1.40
OpenGL core profile context flags: (none)
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 9.2-devel (git-4dca602)
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
~ % DRI_PRIME=1 glxinfo | grep OpenGL
OpenGL vendor string: X.Org
OpenGL renderer string: Gallium 0.4 on AMD PITCAIRN
OpenGL version string: 2.1 Mesa 9.2-devel (git-4dca602)
OpenGL shading language version string: 1.20
Seems like it is pretty close to working now. :)
I was not able to actually render with it though.
I tried glxgears: the window was there, but its contents were black under openbox. I think I read somewhere that it might work better or worse with compositing enabled, so I also tried kwin.
As soon as compositing is enabled while the radeon card "renders", X segfaults (backtrace attached), although compositing worked fine before I started glxgears on the radeon card.
Looks like the radeon glamor code needs to grow hooks like RADEONEXASharePixmapBacking / RADEONEXASetSharedPixmapBacking . I won't get around to looking into that before Friday, anyone please feel free to beat me to it. :)
Ok, with a proper X session I get the same: X runs, and if I try to render something there is a crash; the two functions are missing...
I'd gladly contribute, but I have no idea how any of this works. Is there a proper way to learn?
I took a look at the EXA code and tried doing the same with the glamor functions, i.e. set the function pointers in radeon_glamor_init, but obviously it doesn't work: radeon_get_pixmap_private() returns NULL and I have no idea what to do with it.
Wrote stubs that always return FALSE - now Xorg doesn't crash, but also nothing is rendered.
Is there some hidden documentation I can't find, or do you guys just know everything by heart? :D
Created attachment 76582 [details] [review]
Initial glamor pixmap sharing hooks
Does this patch get you further? Only compile tested.
(In reply to comment #29)
> I'd gladly contribute but I have no Idea how anything of this works, is
> there a proper way to learn?
> Is there some hidden documentation I can't find, or do you guys just know
> everything by heart? :D
Neither, it just takes time and/or experience I'm afraid... Hang in there, everybody had to start once. :) BTW, if you post the actual changes you're trying, others can point out mistakes or suggest improvements.
Created attachment 76589 [details] [review]
non-functioning pixmap sharing hooks
Ok. This is the final state of what I have been trying to do...
I'll try your patch now.
Created attachment 76591 [details]
backtrace for new patch
What is the correct way to apply this patch? If I try to use epatch in a gentoo ebuild,
PATCH COMMAND: patch -p1 -g0 -E --no-backup-if-mismatch <
it fails with:
checking file src/radeon_exa.c
Hunk #1 FAILED at 315.
1 out of 1 hunk FAILED
I applied it manually with patch -p1; it failed at radeon_exa.c too, so I edited radeon_exa.c by hand. The parts that did apply all have wrong indentation. Did I break something?
Your patch gives me a SIGSEGV, because uh... no private parts have been allocated for the pixmap, so radeon_get_pixmap_private returns NULL and then &priv->surface is passed to RADEONSetSharedPixmapBacking.
Attached is bt full
Question: How do you debug? I have another machine and run ddd over ssh with X forwarding and attach to the process when the session is running on the laptop.
Created attachment 76595 [details] [review]
glamor pixmap sharing hooks and private allocation
I've added an allocation of a radeon_pixmap during pixmap creation.
Now the server just crashes in random places - mostly in some copy functions, but I also had a sigsegv in some evdev code.
Created attachment 76682 [details] [review]
Initial glamor pixmap sharing hooks v2
This version should handle the lack of a pixmap private in the hooks.
(In reply to comment #33)
> checking file src/radeon_exa.c
> Hunk #1 FAILED at 315.
Which version of xf86-video-ati are you using? My patches are against current Git master.
> Question: How do you debug? I have another machine and run ddd over ssh with
> X forwarding and attach to the process when the session is running on the
That (or starting Xorg from the debugger in the first place) is basically the way to do it.
(In reply to comment #34)
> Now the server just crashes in random places - mostly in some copy
> functions, but I also had a sigsegv in some evdev code.
We'd need to see actual backtraces to make sense of that, but does it work better if you just take the same paths for CREATE_PIXMAP_USAGE_SHARED as for RADEON_CREATE_PIXMAP_DRI2 in radeon_glamor_create_pixmap()?
Created attachment 76688 [details]
backtrace for v2 patch
I have applied your patch. Xorg still crashes in random places. I don't know how much help a backtrace is in this case, but I attached the most recent one.
I also use the current git master. I'm not sure why I have problems with patches; I have now switched to installing the driver from the source directory and applying the patches there, without putting it through portage. That usually works.
Created attachment 76690 [details]
3 Backtraces for crashes after glxgears
Here are 3 more backtraces.
Xorg crashes every time I try to start glxgears with DRI_PRIME=1, and almost every time in different parts of code.
(In reply to comment #38)
> I have applied your patch. Xorg still crashes in random places.
That usually indicates memory corruption. Can you try running Xorg in valgrind and see if that gives any hints?
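A rough invocation for that might be the following; the log path and display number are assumptions, and expect the server to run extremely slowly under valgrind:

```
valgrind --track-origins=yes --log-file=/tmp/valgrind-xorg.log Xorg :1
```

--track-origins=yes makes valgrind report where uninitialised values came from, which helps when chasing memory corruption.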
Created attachment 76712 [details]
valgrind Xorg log
I ran Xorg with valgrind, then started xfwm4 and xterm, and on the laptop ran:
xrandr --setprovideroffloadsink 1 0
This is the Valgrind log of Xorg. There are invalid writes.
Created attachment 76745 [details] [review]
Initial glamor pixmap sharing hooks v3
* Use the same paths for PRIME as for DRI2 in radeon_glamor_create_pixmap.
* Flesh out fallback path in radeon_glamor_share_pixmap_backing.
* Adapt radeon_glamor_set_shared_pixmap_backing to radeon_set_pixmap_bo creating
a new private.
Created attachment 76868 [details]
some backtraces of the random crashes with patch from comment #42
I don't know the code of the X.org drivers at all, so I doubt I could do anything useful in there, unfortunately.
The first patch "[PATCH 1/2] glamor: Bail if the glamoregl module wasn't loaded early" is already in xf86-video-ati master, right?
So I currently run only with "[PATCH 2/2] glamor: Enable by default on SI" and with the latest patch from your comment #42.
With this, I also get random segfaults and aborts. For your viewing pleasure I have attached some more or less different backtraces, with full backtraces where the crash happened in code with debugging symbols enabled. Maybe you can make sense of it.
Created attachment 76869 [details]
another valgrind log from v3 of the patch
Also, here's a valgrind log from me with the latest patch applied.
Created attachment 76870 [details] [review]
Initial glamor pixmap sharing hooks v4
Try creating a glamor textured pixmap in the other hook as well, and bail if that fails.
Created attachment 76871 [details]
Created attachment 76876 [details]
glxgears working with DRI_PRIME=1
Can confirm - this works!
The hooks are in xf86-video-ati Git now. The question is what to do with this report, as the original crash is probably still there without acceleration...
Since http://cgit.freedesktop.org/mesa/mesa/commit/?id=9320c8fea947fd0f6eb723c67f0bdb947e45c4c3 neither glamor nor xf86-video-ati is needed for offloading anymore when using DRI3 (and render nodes, I think).
In x.org 1.16 glamor will be built in, but can be disabled/enabled at compile time.
If there are to be meaningful error messages, this should probably go in. Or maybe just add a catchall that prevents the segfault and gives a generic error message. :)
But now that it's at this point, let's see if I can lower the importance of this bug.
Created attachment 108225 [details]
Xorg 1.16.1 with xf86-video-ati 7.5.0 still fails in the same manner as described above, while trying to run glxinfo with DRI_PRIME=1
(In reply to Barvinok from comment #51)
> Xorg 1.16.1 with xf86-video-ati 7.5.0 still fails in the same manner as
> described above, while trying to run glxinfo with DRI_PRIME=1
If it's really the same crash, you need to find out why hardware acceleration isn't enabled for you, and fix that.
Created attachment 108226 [details] [review]
radeon: Don't advertise PRIME offload capabilities when acceleration is disabled
This patch should prevent the crash by not advertising the corresponding PRIME capabilities when acceleration is disabled. Does it work?
> This patch should prevent the crash by not advertising the corresponding
> PRIME capabilities when acceleration is disabled. Does it work?
Not quite. With this patch it is no longer possible to call
# DISPLAY=:0 xrandr --setprovideroffloadsink radeon Intel
to configure Radeon as offload sink for Intel, as it replies:
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 139 (RANDR)
Minor opcode of failed request: 34 ()
Value in failed request: 0x41
Serial number of failed request: 16
Current serial number in output stream: 17
Hence, DRI_PRIME=1 has no effect and glxinfo still reports Intel as renderer.
That is not what I expected. I'd like radeon to work.
(In reply to Michel Dänzer from comment #52)
> (In reply to Barvinok from comment #51)
> > Xorg 1.16.1 with xf86-video-ati 7.5.0 still fails in the same manner as
> > described above, while trying to run glxinfo with DRI_PRIME=1
> If it's really the same crash, you need to find out why hardware
> acceleration isn't enabled for you, and fix that.
How do I tell if it is enabled or not?
(In reply to Barvinok from comment #55)
> > If it's really the same crash, you need to find out why hardware
> > acceleration isn't enabled for you, and fix that.
> How do I tell if it is enabled or not?
By looking at the Xorg.0.log file.
(In reply to Barvinok from comment #54)
> Hence, DRI_PRIME=1 has no effect and glxinfo still reports Intel as renderer.
Anyway, from this it seems clear that the radeon driver fails to enable acceleration for you, and that the patch is working as intended to prevent the crash in that case.
Author: Michel Dänzer <firstname.lastname@example.org>
Date: Wed Aug 6 11:08:00 2014 +0900
PRIME: Don't advertise offload capabilities when acceleration is disabled
Xorg tends to crash if the user tries to actually use the offload
capabilities with acceleration disabled.