LD_LIBRARY_PATH="." ./EoCApp
[S_API FAIL] SteamAPI_Init() failed; no appID found. Either launch the game from Steam, or put the file steam_appid.txt containing the correct appID in your game folder.
Thread "EoCApp" (3053090816) received signal 11
Call stack:
(0) /usr/lib/libpthread.so.0 : +0x10d60 [0x7f1fb1a53d60]
Segmentation fault (core dumped)

Radeon HD 7950, Arch Linux 64-bit, kernel 4.3.1, Mesa/LLVM from git.

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) bt
#0  0x0000000000000000 in ?? ()
#1  0x00007ffff47fa563 in api::OpenGLRenderer::ChangeShader(ls::ObjectHandle, bool) () from ./libOGLBinding.so
#2  0x00007ffff44a01f5 in rf::Renderer::Apply(bool) () from ./libRenderFramework.so
#3  0x00007ffff47ec7bd in api::OpenGLRenderer::OpenGLRenderer(api::IAPI*, void*) () from ./libOGLBinding.so
#4  0x00007ffff47ebe19 in api::OpenGLAPI::CreateRenderer() () from ./libOGLBinding.so
#5  0x00007ffff47eba38 in api::OpenGLAPI::Init() () from ./libOGLBinding.so
#6  0x00007ffff45fa28a in BaseApp::InitAPI() () from ./libGameEngine.so
#7  0x00007ffff45f8e58 in BaseApp::Start(ls::InitStruct*) () from ./libGameEngine.so
#8  0x00000000006d4410 in main ()
The same crash occurs with llvmpipe:

LIBGL_ALWAYS_SOFTWARE=1 LD_LIBRARY_PATH="." ./EoCApp
[S_API FAIL] SteamAPI_Init() failed; no appID found. Either launch the game from Steam, or put the file steam_appid.txt containing the correct appID in your game folder.
Thread "EoCApp" (2800719872) received signal 11
Call stack:
(0) /usr/lib/libpthread.so.0 : +0x10d60 [0x7fd9a29a3d60]
Segmentation fault (core dumped)

(gdb) bt
#0  0x0000000000000000 in ?? ()
#1  0x00007ffff47fa563 in api::OpenGLRenderer::ChangeShader(ls::ObjectHandle, bool) () from ./libOGLBinding.so
#2  0x00007ffff44a01f5 in rf::Renderer::Apply(bool) () from ./libRenderFramework.so
#3  0x00007ffff47ec7bd in api::OpenGLRenderer::OpenGLRenderer(api::IAPI*, void*) () from ./libOGLBinding.so
#4  0x00007ffff47ebe19 in api::OpenGLAPI::CreateRenderer() () from ./libOGLBinding.so
#5  0x00007ffff47eba38 in api::OpenGLAPI::Init() () from ./libOGLBinding.so
#6  0x00007ffff45fa28a in BaseApp::InitAPI() () from ./libGameEngine.so
#7  0x00007ffff45f8e58 in BaseApp::Start(ls::InitStruct*) () from ./libGameEngine.so
#8  0x00000000006d4410 in main ()
Thanks for the report, but to be honest, the backtrace looks like this is a game bug. The last function on the stack belongs to the game, and then it jumps to 0. This is what you would see if a game doesn't properly check extension support, for example.
I ran with MESA_GL_VERSION_OVERRIDE=4.2 MESA_GLSL_VERSION_OVERRIDE=420 and received this call stack:

(0) /usr/lib/libpthread.so.0 : +0x10d60 [0x7f360f099d60]
(1) ./libOGLBinding.so : api::OpenGLRenderer::ApplyConstants()+0x65 [0x7f360ffd57d5]
(2) ./libRenderFramework.so : rf::Renderer::Apply(bool)+0x57 [0x7f360fc74207]
(3) ./EoCApp : ig::IggyBinding::Swap(rf::Renderer*)+0xfc [0xebf16c]
(4) ./libGameEngine.so : BaseApp::EndDrawGUI(rf::Renderer*)+0x9b [0x7f360fdd288b]
(5) ./libGameEngine.so : BaseApp::MakeFrame()+0x3a4 [0x7f360fdd2db4]
(6) ./libGameEngine.so : BaseApp::OnIdle()+0xe0 [0x7f360fdd1590]
(7) ./EoCApp : main+0x170 [0x6d4430]
(8) /usr/lib/libc.so.6 : __libc_start_main+0xf0 [0x7f360ed01610]
(9) ./EoCApp : _start+0x29 [0x6d41a9]
(0) /lib64/libpthread.so.0 : +0x109f0 [0x7f7dde77b9f0]
(1) ./libOGLBinding.so : api::OpenGLRenderer::ApplyConstants()+0x65 [0x7f7ddf6bf7d5]
(2) ./libRenderFramework.so : rf::Renderer::Apply(bool)+0x57 [0x7f7ddf35e207]
(3) ./EoCApp : ig::IggyBinding::Swap(rf::Renderer*)+0xfc [0xebf16c]
(4) ./libGameEngine.so : BaseApp::EndDrawGUI(rf::Renderer*)+0x9b [0x7f7ddf4bc88b]
(5) ./libGameEngine.so : BaseApp::MakeFrame()+0x3a4 [0x7f7ddf4bcdb4]
(6) ./libGameEngine.so : BaseApp::OnIdle()+0xe0 [0x7f7ddf4bb590]
(7) ./EoCApp : main+0x170 [0x6d4430]
(8) /lib64/libc.so.6 : __libc_start_main+0xf0 [0x7f7dde3c3580]
(9) ./EoCApp : _start+0x29 [0x6d41a9]

I get the above in a window when trying to start the game with MESA_GL_VERSION_OVERRIDE=4.2 MESA_GLSL_VERSION_OVERRIDE=420. After clicking OK, I get a black screen with the game's mouse cursor and a bit of sound, and then the game exits.
Thanks for the backtrace, but again, without further evidence to the contrary I'd say that this is a game bug. The crash happens in pthread, which is called directly by the game. I suspect that the game isn't using the OpenGL API correctly.
I disassembled ApplyConstants(), where the game crashes when using the OpenGL override to 4.2. It is indeed a game bug, caused by a possible NULL dereference. The crash occurs at the line marked by =>:

   0x00007ffff48467e0 <+0>:   push   r15
   0x00007ffff48467e2 <+2>:   push   r14
   0x00007ffff48467e4 <+4>:   push   r12
   0x00007ffff48467e6 <+6>:   push   rbx
   0x00007ffff48467e7 <+7>:   push   rax
   0x00007ffff48467e8 <+8>:   mov    r14,rdi
   0x00007ffff48467eb <+11>:  mov    eax,DWORD PTR [r14+0x6f8]
   0x00007ffff48467f2 <+18>:  mov    rcx,QWORD PTR [rip+0xd297]        # 0x7ffff4853a90
   0x00007ffff48467f9 <+25>:  cmp    eax,DWORD PTR [rcx]
   0x00007ffff48467fb <+27>:  je     0x7ffff4846959 <_ZN3api14OpenGLRenderer14ApplyConstantsEv+377>
   0x00007ffff4846801 <+33>:  xor    r15d,r15d
   0x00007ffff4846804 <+36>:  test   eax,0x3ff0000
   0x00007ffff4846809 <+41>:  je     0x7ffff4846845 <_ZN3api14OpenGLRenderer14ApplyConstantsEv+101>
   0x00007ffff484680b <+43>:  movzx  ecx,ax
   0x00007ffff484680e <+46>:  mov    edx,DWORD PTR [r14+0xc4]
   0x00007ffff4846815 <+53>:  xor    r15d,r15d
   0x00007ffff4846818 <+56>:  cmp    rcx,rdx
   0x00007ffff484681b <+59>:  jae    0x7ffff4846845 <_ZN3api14OpenGLRenderer14ApplyConstantsEv+101>
   0x00007ffff484681d <+61>:  shr    eax,0x10
   0x00007ffff4846820 <+64>:  mov    rdx,QWORD PTR [r14+0xe0]
   0x00007ffff4846827 <+71>:  xor    r15d,r15d
   0x00007ffff484682a <+74>:  movzx  edx,WORD PTR [rdx+rcx*2]
   0x00007ffff484682e <+78>:  and    eax,0x3ff
   0x00007ffff4846833 <+83>:  cmp    eax,edx
   0x00007ffff4846835 <+85>:  jne    0x7ffff4846845 <_ZN3api14OpenGLRenderer14ApplyConstantsEv+101>
   0x00007ffff4846837 <+87>:  imul   r15,rcx,0x110
   0x00007ffff484683e <+94>:  add    r15,QWORD PTR [r14+0xb8]
=> 0x00007ffff4846845 <+101>: mov    rcx,QWORD PTR [r15+0x10]
   ......

Reconstructed as pseudo-C:

// 0x00007ffff48467eb: eax = this->variable_at_offset_0x6f8
// 0x00007ffff48467f2: rcx = some_related_global_or_static_variable
// 0x00007ffff48467fb:
if (eax != rcx) {
    // 0x00007ffff4846801:
    r15 = NULL;
    if ((eax & 0x3ff0000) != 0) {
        // ...
        // r15 is set in this block to a valid value
        // ...
    }
    // 0x00007ffff4846845: rcx = r15->variable_at_offset_0x10
    // crash here because r15 can be NULL
}
// function end
See https://lists.freedesktop.org/archives/mesa-dev/2016-March/109789.html
Would somebody care to contact the game developers and describe the issue in detail? And if you did, do you have a link (to a forum topic, public bug report, etc)? Thanks.
(In reply to Kamil Páral from comment #8)
> Would somebody care to contact the game developers and describe the issue
> in detail? And if you did, do you have a link (to a forum topic, public bug
> report, etc)? Thanks.

I posted it as a reply to the nouveau issue topic in the forum. Maybe someone will pick it up there.
http://larian.com/forums/ubbthreads.php?ubb=showflat&Number=580880&#Post580880
(In reply to smidjar2.reg from comment #6)
> I disassembled ApplyConstants() where the game crashes when using OpenGL
> override to 4.2.

I spent a while poking at this crash in gdb, and I was definitely seeing the same segfault at the same instruction and call stack.

I've sent a (one-line!) patch to mesa-dev that fixes this segfault on startup:
https://lists.freedesktop.org/archives/mesa-dev/2016-April/114614.html

And a Piglit patch that tests for the non-conforming behavior that led to this crash:
https://lists.freedesktop.org/archives/mesa-dev/2016-April/114613.html

Thanks to Karol Herbst's mesa-dev post, linked from comment #7, for pointing me in the right direction to find this Mesa bug.

Granted, the game developers ought to check for errors returned from glLinkProgram and fail more gracefully than a segfault, but I doubt we're going to get them to do *that*...

I can now play this game somewhat successfully on i965 with MESA_GL_VERSION_OVERRIDE=4.2. There are still plenty of rendering bugs I haven't dug into yet, but I played for an hour without crashes, at least!

I don't have (or particularly want) a commit bit on Mesa or Piglit, so now we need somebody to review and hopefully merge these patches.
If I apply the "change shader" fix with RadeonSI, I get a bunch of these glNamedStringARB errors when running the game with apitrace:

2833: message: major api error 1: GL_INVALID_OPERATION in unsupported function called (unsupported extension or deprecated function?)
2833 @0 glNamedStringARB(type = GL_SHADER_INCLUDE_ARB, namelen = -1, name = "/Shaders/GlobalConstants_OGL.shdh", stringlen = 1553, string = "#define HD4000 0

glNamedStringARB is from ARB_shading_language_include, which is not implemented in Mesa AFAICT? Probably because the extension is not part of any GL specification?
(In reply to Ernst Sjöstrand from comment #11)
> If I apply the "change shader" fix with RadeonSI I get a bunch of these
> glNamedStringARB errors when running the game with APITrace:

But you can confirm that my patch fixes the crash on startup, right?

I've opened bug #95215 for implementing ARB_shading_language_include. I think *this* bug can be resolved once my patch is merged in Mesa.
Fixed by:

commit 595d56cc866638f371626cc1d0137a6a54a7d0f8
Author:    Jamey Sharp <jamey@minilop.net>
Committer: Timothy Arceri <timothy.arceri@collabora.com>

    glShaderSource must not change compile status.

    OpenGL 4.5 Core Profile section 7.1, in the documentation for
    CompileShader, says:

        "Changing the source code of a shader object with ShaderSource
        does not change its compile status or the compiled shader code."

    According to Karol Herbst, the game "Divinity: Original Sin -
    Enhanced Edition" depends on this odd quirk of the spec.

    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93551
    Signed-off-by: Jamey Sharp <jamey@minilop.net>
    Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
I still get the ApplyConstants() crash, both with and without your fix it seems like. Is your fix for the ChangeShader() crash?
(In reply to Ernst Sjöstrand from comment #14)
> I still get the ApplyConstants() crash, both with and without your fix it
> seems like.
> Is your fix for the ChangeShader() crash?

Hmm. No, I'm still just working around the ChangeShader crash with MESA_GL_VERSION_OVERRIDE=4.2. That patch fixed the ApplyConstants crash for me. I hate to ask, but are you sure you're testing with the patched Mesa?
Yes, I know, because when I merged in https://github.com/karolherbst/mesa/commits/ARB_shading_language_include I suddenly got a loading progress bar and a splash screen. However, at 95% it crashed like this instead:

shader 778 count 1 string 103C70B511F7679091DC56132979D359 length -1
shader 652 count 1 string 624FCDF076807F156DBD30FB6C3F4668 length -1
shader 780 count 1 string CC5F8835705A6A4CFBBA629BE97C3F72 length -1
shader 225 count 1 string DF99049D80233C85B78294C6A86F5E82 length -1
shader 265 count 1 string 19EAE983A5FB81AC8E9D4B850BB124B4 length -1

Thread "EoCApp" (4215990208) received signal 11
Call stack:
(0) /lib/x86_64-linux-gnu/libpthread.so.0 : +0x113d0 [0x7f8afe63c3d0]
(1) ./libOGLBinding.so : api::OpenGLRenderer::ApplyConstants()+0x65 [0x7f8aff570845]
(2) ./libRenderFramework.so : rf::Renderer::Apply(bool)+0x57 [0x7f8aff216437]
(3) ./libRenderFramework.so : rf::RCB_ApplyCommand::Execute(rf::Renderer*, void const*)+0xd [0x7f8aff231f4d]
(4) ./libRenderFramework.so : rf::RendererCommandBuffer::ExecuteCommandBuffer(bool)+0x37 [0x7f8aff21ec77]
(5) ./libGameEngine.so : ls::PostProcessStage::Execute(rf::RenderView const*)+0x44 [0x7f8aff37cde4]
(6) ./libRenderFramework.so : rf::StageGroup::Execute(rf::RenderView const*) const+0x31 [0x7f8aff2284d1]
(7) ./EoCApp : ecl::EoCRenderView::Execute()+0x114 [0x95b664]
(8) ./libRenderFramework.so : rf::RenderFrame::Execute()+0x60 [0x7f8aff21d570]
(9) ./libGameEngine.so : BaseApp::ExecuteFrame(rf::Renderer*)+0x1c [0x7f8aff370d9c]
(10) ./libGameEngine.so : BaseApp::MakeFrame()+0x33b [0x7f8aff37146b]
(11) ./libGameEngine.so : BaseApp::OnIdle()+0xe0 [0x7f8aff36fcb0]
(12) ./EoCApp : main+0x170 [0x6d5180]
(13) /lib/x86_64-linux-gnu/libc.so.6 : __libc_start_main+0xf0 [0x7f8afe282830]
(14) ./EoCApp : _start+0x29 [0x6d4ef9]
Segmentation fault
(In reply to Ernst Sjöstrand from comment #16)
> Yes I know because when I merged in
> https://github.com/karolherbst/mesa/commits/ARB_shading_language_include I
> suddenly got a loading progress bar and a splash screen. However at 95% it
> crashed like this instead:

Oh, I never tested Karol's patches and didn't entirely understand them, to be honest. I don't think those patches actually tried to fix the ApplyConstants crash, just the lack of ARB_shading_language_include. Would you try my patch instead? It's merged to Mesa git master now (thanks Timothy!), so you can just build stock Mesa.

I've just re-tested that Mesa commit cf6dadb00b93b828e8cc95e8136d4c2013d76e40, which was the newest when I checked a few minutes ago, works with this game for me on Intel graphics.

I'm using two workarounds in addition to the Mesa fix. One is MESA_GL_VERSION_OVERRIDE=4.2. The other is to configure the game to cap its frame rate to 15 fps, because if the game renders frames too quickly, I get all sorts of wrong drawing. (Sarah glanced at it and thought it might be missing sync fences or some such thing, leading to some rendering finishing after the buffer swap.)

Even then I have some textures getting sprayed on the screen in places they shouldn't be and UI icons often failing to show up at all, but the game is fairly playable for me.

If you still get the ApplyConstants crash with the git master version of Mesa, I'm afraid you're going to have to do more debugging to find out why, since I can't reproduce the problem any more.
Jamey: I have had your patch applied the whole time; now I tested with your patch + Karol's patches. It seems highly unlikely to me that the game would have any chance of working without ARB_shading_language_include if those features are actually used, which they seem to be in my case at least. Radeon exposes 4.2 by default; I tried with 4.3 as well, just to experiment.
mesa git 5541e11

[behem0th@ArchLinux Divinity Original Sin Enhanced Edition]$ LANG=C ./start.sh
Running Divinity: Original Sin - Enhanced Edition
Language detected: English
[S_API FAIL] SteamAPI_Init() failed; no appID found. Either launch the game from Steam, or put the file steam_appid.txt containing the correct appID in your game folder.
Thread "EoCApp" (1988364224) received signal 11
Call stack:
(0) /usr/lib/libpthread.so.0 : +0x10e80 [0x7f7b722fbe80]
(1) ./libOGLBinding.so : api::OpenGLRenderer::ApplyConstants()+0x65 [0x7f7b73247845]
(2) ./libRenderFramework.so : rf::Renderer::Apply(bool)+0x57 [0x7f7b72ee6437]
(3) ./EoCApp : ig::IggyBinding::Swap(rf::Renderer*)+0xfc [0xececfc]
(4) ./libGameEngine.so : BaseApp::EndDrawGUI(rf::Renderer*)+0x9b [0x7f7b73043dcb]
(5) ./libGameEngine.so : BaseApp::MakeFrame()+0x3a4 [0x7f7b730442f4]
(6) ./libGameEngine.so : BaseApp::OnIdle()+0xe0 [0x7f7b73042ad0]
(7) ./EoCApp : main+0x170 [0x6d4dc0]
(8) /usr/lib/libc.so.6 : __libc_start_main+0xf0 [0x7f7b71f63710]
(9) ./EoCApp : _start+0x29 [0x6d4b39]
./runner.sh: line 3:  6081 Segmentation fault      (core dumped) LD_LIBRARY_PATH="." ./EoCApp
I just tried with latest Mesash->CompileStatus = GL_FALSE;
(In reply to kilobug from comment #20)
> I just tried with latest Mesash->CompileStatus = GL_FALSE;

Hrm, sorry, I didn't finish my comment; sorry for the noise :/

So, I just tried with the latest Mesa from Oibaf's PPA on my Radeon R9 380X, which contains the patch for "sh->CompileStatus = GL_FALSE;", and I still get:

(kilobug@drizzt) ~/gog/Divinity Original Sin Enhanced Edition $ ALSOFT_DRIVERS=pulse MESA_GL_VERSION_OVERRIDE=4.2 ./start.sh
Running Divinity: Original Sin - Enhanced Edition
Language detected: English
Thread "EoCApp" (2444249152) received signal 11
Call stack:
(0) /lib/x86_64-linux-gnu/libpthread.so.0 : +0x10d30 [0x7f8d8d2ebd30]
(1) ./libOGLBinding.so : api::OpenGLRenderer::ApplyConstants()+0x65 [0x7f8d91b87845]
(2) ./libRenderFramework.so : rf::Renderer::Apply(bool)+0x57 [0x7f8d91b3e437]
(3) ./EoCApp : ig::IggyBinding::Swap(rf::Renderer*)+0xfc [0xed032c]
(4) ./libGameEngine.so : BaseApp::EndDrawGUI(rf::Renderer*)+0x9b [0x7f8d8e01bfab]
(5) ./libGameEngine.so : BaseApp::MakeFrame()+0x3a4 [0x7f8d8e01c4d4]
(6) ./libGameEngine.so : BaseApp::OnIdle()+0xe0 [0x7f8d8e01acb0]
(7) ./EoCApp : main+0x170 [0x6d5180]
(8) /lib/x86_64-linux-gnu/libc.so.6 : __libc_start_main+0xf0 [0x7f8d8cf53610]
(9) ./EoCApp : _start+0x29 [0x6d4ef9]
Segmentation fault
This is due to the game requiring a 4.2 context. They asked for it, and then it crashes because they don't error-check. Then the game crashes because there is no ARB_shading_language_include.

Try out https://github.com/karolherbst/mesa.git, branch ARB_shading_language_include.

But you will get graphical glitches, because you need to spoof the GL vendor to ATI to force the ATI rendering path.
(In reply to Jamey Sharp from comment #17)
> (In reply to Ernst Sjöstrand from comment #16)
> > Yes I know because when I merged in
> > https://github.com/karolherbst/mesa/commits/ARB_shading_language_include I
> > suddenly got a loading progress bar and a splash screen. However at 95% it
> > crashed like this instead:
>
> Oh, I never tested Karol's patches and didn't entirely understand them, to
> be honest. I don't think those patches actually tried to fix the
> ApplyConstants crash, just the lack of ARB_shading_language_include. Would
> you try my patch instead? It's merged to Mesa git master now (thanks
> Timothy!), so you can just build stock Mesa.
>
> I've just re-tested that Mesa commit
> cf6dadb00b93b828e8cc95e8136d4c2013d76e40, which was the newest when I
> checked a few minutes ago, works with this game for me on Intel graphics.
>
> I'm using two workarounds in addition to the Mesa fix. One is
> MESA_GL_VERSION_OVERRIDE=4.2. The other is to configure the game to cap its
> frame-rate to 15 fps, because if the game renders frames too quickly, I get
> all sorts of wrong drawing. (Sarah glanced at it and thought it might be
> missing sync fences or some such thing, leading to some rendering finishing
> after the buffer swap.)
>
> Even then I have some textures getting sprayed on the screen in places they
> shouldn't be and UI icons often failing to show up at all, but the game is
> fairly playable for me.
>
> If you still get the ApplyConstants crash with the git master version of
> Mesa, I'm afraid you're going to have to do more debugging to find out why,
> since I can't reproduce the problem any more.

Not entirely true. Yes, they partly implement ARB_shading_language_include, but I also hacked around the compile-status thing. And then you need a 4.2 context.

Anyhow, their engine requires ARB_shading_language_include, GL 4.2, and 595d56cc866638f371626cc1d0137a6a54a7d0f8.
(In reply to Karol Herbst from comment #23)
> Anyhow, their engine requires ARB_shading_language_include, GL4.2 and
> 595d56cc866638f371626cc1d0137a6a54a7d0f8

I'm still super confused about this. I'm playing the game more or less successfully on Intel graphics, without ARB_shading_language_include, and while disassembling I found code paths in the game to handle the case where that extension isn't supported. Also, all the extensions between OpenGL 4.0 and 4.2 that are implemented on Intel are also implemented on radeonsi, according to mesamatrix.net. So what's different when the game runs on radeonsi? Why does it crash there but work on Intel?

I can provide a short apitrace from a successful run on Intel if you want to try replaying it on your drivers; e-mail me if you want it.

BTW, the game developers tell me that all Linux development has stopped and we shouldn't expect any new patches from them. :-(
Perhaps the Intel vs radeonsi difference is due to the LLVM version? I tried with LLVM 3.8, and some of the OpenGL 4.2 extensions require LLVM 3.9... but I have to admit it scares me a bit to recompile LLVM and all of Mesa with an experimental LLVM... I'll see what I can do.

And yeah, I'm a bit disappointed by how Larian handled the Linux version... they made us wait for years (despite us backing the game on Kickstarter because it advertised Linux support), and then they delivered this buggy thing that uses non-standard OpenGL extensions and stopped support :/ But well... I have full faith that radeonsi will one day support it; Mesa devs are amazing :)
(In reply to kilobug from comment #25)
> Perhaps it's due to LLVM version, for the Intel vs radeonsi ? I tried with
> llvm 3.8, and some of OpenGL 4.2 extensions require llvm 3.9... but I've to
> admit it scares me a bit to recompile llvm and all of mesa with an
> experimental llvm... I'll see what I can do.

Well, there are simple hacks now to get Divinity running:

https://github.com/karolherbst/mesa/commit/aad2543bf6cfbd7df795d836e5ff4ec8686e4fdf
(by Laurent Carlier)

and

https://gist.github.com/karolherbst/b279233f8b13c9db1f3e1e57c6ecfbd2
(by Kenneth Graunke)

Recent Mesa master + both patches + forcing GL 4.2 + forcing GLSL 420 should work for you.
As it turns out, Divinity also needs allow_glsl_extension_directive_midshader.
(In reply to Karol Herbst from comment #26)
> Well there are simple hacks now to get divinity running.
>
> https://github.com/karolherbst/mesa/commit/aad2543bf6cfbd7df795d836e5ff4ec8686e4fdf
> (by Laurent Carlier)
>
> and
>
> https://gist.github.com/karolherbst/b279233f8b13c9db1f3e1e57c6ecfbd2
> (by Kenneth Graunke)
>
> recent mesa master + both patches + forcing gl 4.2 + forcing glsl 420 should
> work for you.

It works on radeonsi with those two patches and https://gist.github.com/karolherbst/56a548cf74b514baf0889b9ad5d7cf48

With a recent enough Mesa, forcing OpenGL 4.2 and GLSL 420 is not required.
> With recent enough Mesa, forcing OpenGL 4.2 and GLSL 420 is not required.

Did you use LLVM 3.8 or 3.9?
(In reply to kilobug from comment #29)
> > With recent enough Mesa, forcing OpenGL 4.2 and GLSL 420 is not required.
>
> Did you use llvm 3.8 or 3.9 ?

3.9.
Seeing a crash on start as well with the following stack (Debian testing as a base):

GPU: Hawaii PRO [Radeon R9 290] (ChipID = 0x67b1)
Mesa: Git:master/cee459d84d
libdrm: 2.4.68-1
LLVM: SVN:trunk/r271192 (3.9 devel)
X.Org: 2:1.18.3-1
Linux: 4.5.4
Firmware: firmware-amd-graphics/20160110-1
libclc: Git:master/20d977a3e6
DDX: 1:7.7.0-1
Created attachment 125302 [details] Simple LD_PRELOAD shim to apply necessary patches for divos Game works great for me with the above patches (thanks to those who figured this out!). However, since they are not likely to be incorporated into Mesa, and patching my system Mesa just for one poorly written game is a bad idea, I think one of two alternate solutions needs to be provided. My preference would be to patch the game binaries. I don't really want to mess with that right now, though (especially since I can't easily locate where it does the vendor check). The other would be to provide the patches in the form of an LD_PRELOAD shim. I have attached the source code for one that seems to work for me.
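The attached shim itself isn't inlined in this thread, but for readers who want the general shape of such a workaround, here is a hypothetical sketch of an LD_PRELOAD interposer that spoofs the GL_VENDOR string to force the game's ATI rendering path. This is NOT the attached code; the file name and details are illustrative, and it omits the other overrides the attachment provides:

```c
/* vendor-shim.c (hypothetical): build with
 *   gcc -shared -fPIC -o vendor-shim.so vendor-shim.c -ldl
 * then run the game with LD_PRELOAD=/path/to/vendor-shim.so */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>

typedef unsigned char GLubyte;
typedef unsigned int GLenum;
#define GL_VENDOR 0x1F00

const GLubyte *glGetString(GLenum name) {
    /* Resolve the real glGetString once, lazily, via RTLD_NEXT so the
     * lookup skips this interposer and finds the driver's symbol. */
    static const GLubyte *(*real)(GLenum) = NULL;
    if (real == NULL)
        real = (const GLubyte *(*)(GLenum))dlsym(RTLD_NEXT, "glGetString");

    if (name == GL_VENDOR)
        return (const GLubyte *)"ATI Technologies Inc.";  /* spoofed */
    return real ? real(name) : NULL;  /* everything else passes through */
}
```

The same pattern (lazy dlsym(RTLD_NEXT, ...) plus a targeted override) extends to any other GL entry point the game misuses.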
Created attachment 125311 [details] [review]
divos-hack.patch

(In reply to Thomas J. Moore from comment #32)
> Created attachment 125302 [details]
> Simple LD_PRELOAD shim to apply necessary patches for divos
>
> Game works great for me with the above patches (thanks to those who figured
> this out!). However, since they are not likely to be incorporated into
> Mesa, and patching my system Mesa just for one poorly written game is a bad
> idea, I think one of two alternate solutions needs to be provided. My
> preference would be to patch the game binaries. I don't really want to mess
> with that right now, though (especially since I can't easily locate where
> it does the vendor check). The other would be to provide the patches in the
> form of an LD_PRELOAD shim. I have attached the source code for one that
> seems to work for me.

Your code calls dlsym at every call of glGetString/glXGetProcAddressARB instead of calling it only once at startup. A fix is attached.
(In reply to Thomas J. Moore from comment #32)
> Created attachment 125302 [details]
> Simple LD_PRELOAD shim to apply necessary patches for divos
>
> Game works great for me with the above patches (thanks to those who figured
> this out!). However, since they are not likely to be incorporated into
> Mesa, and patching my system Mesa just for one poorly written game is a bad
> idea, I think one of two alternate solutions needs to be provided. My
> preference would be to patch the game binaries. I don't really want to mess
> with that right now, though (especially since I can't easily locate where
> it does the vendor check). The other would be to provide the patches in the
> form of an LD_PRELOAD shim. I have attached the source code for one that
> seems to work for me.

As for the vendor check location:

[ /media/Storage/Games/GOG/Linux/DivinityOriginalSinEnhancedEdition/game ] $ GAME_DIR="/media/Storage/Games/GOG/Linux/DivinityOriginalSinEnhancedEdition" MESA_GL_VERSION_OVERRIDE=4.2 MESA_GLSL_VERSION_OVERRIDE=420 LD_PRELOAD="${GAME_DIR}/workaround/divos-hack-f.so" LD_LIBRARY_PATH="${GAME_DIR}/game" gdb -q ${GAME_DIR}/game/EoCApp
Reading symbols from /media/Storage/Games/GOG/Linux/DivinityOriginalSinEnhancedEdition/game/EoCApp...(no debugging symbols found)...done.
(gdb) b divos-hack-f.c:38
No symbol table is loaded.  Use the "file" command.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (divos-hack-f.c:38) pending.
(gdb) ru
Starting program: /media/Storage/Games/GOG/Linux/DivinityOriginalSinEnhancedEdition/game/EoCApp
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7fffe4827700 (LWP 1452)]

Thread 1 "EoCApp" hit Breakpoint 1, glGetString (name=7936) at divos-hack-f.c:38
38          return (const GLubyte *)vendor;
(gdb) bt
#0  glGetString (name=7936) at divos-hack-f.c:38
#1  0x00007ffff4624dce in api::OpenGLRenderer::OpenGLRenderer(api::IAPI*, void*) () from /media/Storage/Games/GOG/Linux/DivinityOriginalSinEnhancedEdition/game/libOGLBinding.so
#2  0x00007ffff4623f79 in api::OpenGLAPI::CreateRenderer() () from /media/Storage/Games/GOG/Linux/DivinityOriginalSinEnhancedEdition/game/libOGLBinding.so
#3  0x00007ffff4623b23 in api::OpenGLAPI::Init() () from /media/Storage/Games/GOG/Linux/DivinityOriginalSinEnhancedEdition/game/libOGLBinding.so
#4  0x00007ffff44309aa in BaseApp::InitAPI() () from /media/Storage/Games/GOG/Linux/DivinityOriginalSinEnhancedEdition/game/libGameEngine.so
#5  0x00007ffff442f578 in BaseApp::Start(ls::InitStruct*) () from /media/Storage/Games/GOG/Linux/DivinityOriginalSinEnhancedEdition/game/libGameEngine.so
#6  0x00000000006d5160 in main ()
(gdb)
(In reply to Mikhail Korolev from comment #33)
> Your code calls dlsym at every call of glGetString/glXGetProcAddressARB
> instead of call it only at startup.

If so, your version of gcc is broken. Just in case my gcc is broken as well, I checked with gdb, and the lines of code executing dlsym are executed exactly once, while the wrappers themselves are called more than once. The only error I can see on reviewing my code is that I misspelled Enhanced, which is not worth correcting. That's not to say my code is perfect, though.

> Fix in attachments.

If you say so. Thanks for trying, at least. It seems more complex to me, with little benefit (the benefit being no longer executing the NULL check on every call, which probably takes a few nanoseconds). In fact, using _init like that makes me uncomfortable, since the time of symbol resolution is less obvious. If it works, it works, though.

(In reply to Mikhail Korolev from comment #33)
> As for vendor check location:

Thanks. I guess what I really meant to say is that I no longer have the patience and dedication needed to properly reverse engineer and provide patches for the code. The last time I did that was over 20 years ago (http://aminet.net/package/disk/misc/cdfix).
(In reply to Thomas J. Moore from comment #35)
> If so, your version of gcc is broken. Just in case my gcc is broken as
> well, I checked with gdb and the lines of code executing dlsym are executed
> exactly once, while the wrappers themselves are called more than once. The
> only error I can see on reviewing my code is that I misspelled Enhanced,
> which is not worth correcting. That's not to say my code is perfect, though.

My fault. I missed the `static` part. Sorry for the distraction.
(In reply to Thomas J. Moore from comment #32)
> Created attachment 125302 [details]
> Simple LD_PRELOAD shim to apply necessary patches for divos
>
> Game works great for me with the above patches (thanks to those who figured
> this out!). However, since they are not likely to be incorporated into
> Mesa, and patching my system Mesa just for one poorly written game is a bad
> idea, I think one of two alternate solutions needs to be provided. My
> preference would be to patch the game binaries. I don't really want to mess
> with that right now, though (especially since I can't easily locate where
> it does the vendor check). The other would be to provide the patches in the
> form of an LD_PRELOAD shim. I have attached the source code for one that
> seems to work for me.

Thanks for the LD_PRELOAD shim. I just tried it on my R9 380X using padoka's PPA (so git Mesa 12.1~git1600727202100.29d70cc~x~padoka0 + svn LLVM 1:4.0~svn276446-0~x~padoka0), and there is a significant improvement: the game starts and displays the loading screen. But when the loading bar is full, it crashes with:

(0) /lib/x86_64-linux-gnu/libpthread.so.0 : +0x10ed0 [0x7f209566ded0]
(1) ./libOGLBinding.so : api::OpenGLRenderer::ApplyConstants()+0x65 [0x7f209a129845]
(2) ./libRenderFramework.so : rf::Renderer::Apply(bool)+0x57 [0x7f209a0e0437]
(3) ./libRenderFramework.so : rf::RCB_ApplyCommand::Execute(rf::Renderer*, void const*)+0xd [0x7f209a0fbf4d]
(4) ./libRenderFramework.so : rf::RendererCommandBuffer::ExecuteCommandBuffer(bool)+0x37 [0x7f209a0e8c77]
(5) ./libGameEngine.so : ls::PostProcessStage::Execute(rf::RenderView const*)+0x44 [0x7f20963b1de4]
(6) ./libRenderFramework.so : rf::StageGroup::Execute(rf::RenderView const*) const+0x31 [0x7f209a0f24d1]
(7) ./EoCApp : ecl::EoCRenderView::Execute()+0x114 [0x95b664]
(8) ./libRenderFramework.so : rf::RenderFrame::Execute()+0x60 [0x7f209a0e7570]
(9) ./libGameEngine.so : BaseApp::ExecuteFrame(rf::Renderer*)+0x1c [0x7f20963a5d9c]
(10) ./libGameEngine.so : BaseApp::MakeFrame()+0x33b [0x7f20963a646b]
(11) ./libGameEngine.so : BaseApp::OnIdle()+0xe0 [0x7f20963a4cb0]
(12) ./EoCApp : main+0x170 [0x6d5180]
(13) /lib/x86_64-linux-gnu/libc.so.6 : __libc_start_main+0xf0 [0x7f20952d5730]
(14) ./EoCApp : _start+0x29 [0x6d4ef9]

I tried with and without MESA_GL_VERSION_OVERRIDE=4.2 MESA_GLSL_VERSION_OVERRIDE=420 and it doesn't change anything. Did I miss something?
> Did I miss something ?

allow_glsl_extension_directive_midshader=true ?
(In reply to Iaroslav Andrusyak from comment #38)
> > Did I miss something ?
>
> allow_glsl_extension_directive_midshader=true ?

Thanks, that was it! Here is how I can run the game:

allow_glsl_extension_directive_midshader=true ALSOFT_DRIVERS=pulse LD_PRELOAD="/usr/local/lib/divos-hack.so" ./start.sh
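As an alternative to setting the environment variable on every launch, Mesa's driconf mechanism can scope the option to the game's executable in ~/.drirc. A sketch of such an entry, assuming your Mesa build supports this option (the application name attribute is just a label; the executable match is what matters):

```xml
<driconf>
  <device>
    <application name="Divinity: Original Sin EE" executable="EoCApp">
      <option name="allow_glsl_extension_directive_midshader" value="true" />
    </application>
  </device>
</driconf>
```

With this in place, the plain LD_PRELOAD + start.sh invocation should pick up the option automatically.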
Shouldn't this bug be closed as "NOTOURBUG", since it's clearly a game bug? Or at least as "WONTFIX", since the hack is probably never going to land in Mesa anyway?

On a different note: I can confirm that the shim from attachment 125302 [details], in addition to allowing mid-shader extension directives, let me launch and play the game.
(In reply to Kai from comment #40)
> Shouldn't this bug be closed as "NOTOURBUG", since it's clearly a game bug?
> Or at least as "WONTFIX", since the hack is probably never going to land in
> Mesa anyway?

In my humble non-expert opinion, the "hack" that allows an env variable to spoof the vendor string could make it in. It's not something that should be done generally, but it seems generic enough as both a debugging tool and a workaround for a variety of quirky games/programs.
(In reply to Thomas J. Moore from comment #32)
> Created attachment 125302 [details]
> Simple LD_PRELOAD shim to apply necessary patches for divos
>
> Game works great for me with the above patches (thanks to those who figured
> this out!). However, since they are not likely to be incorporated into
> Mesa, and patching my system Mesa just for one poorly written game is a bad
> idea, I think one of two alternate solutions needs to be provided. My
> preference would be to patch the game binaries. I don't really want to mess
> with that right now, though (especially since I can't easily locate where it
> does the vendor check). The other would be to provide the patches in the
> form of an LD_PRELOAD shim. I have attached the source code for one that
> seems to work for me.

Would it be possible to add another function to that shim to override the output of sysconf(_SC_NPROCESSORS_ONLN)? I spoke to the developer, and apparently the game will only use as many "worker" threads (indicated by "[N] WT" in the app title bar) as your number of cores minus the main thread. But if you're in an environment where your single-threaded performance is heavily constrained (e.g. a Y-series Intel notebook chip like mine, where using the iGPU throttles the CPU speeds) yet you have multiple cores available, this isn't ideal. From looking at my CPU usage under Divinity, I'd really like to be able to hack it to give me 2-3 worker threads instead of 1 to see if that helps performance at all... I don't have much C experience, though.
I'm not saying it's the ideal solution, but you could force any undesired CPU cores offline before starting the game if it's affecting the performance severely. e.g. echo 0 > /sys/devices/system/cpu/cpuX/online
(In reply to Alex from comment #42)
> Would it be possible to add another function to that shim to override the
> output of sysconf (_SC_NPROCESSORS_ONLN)?

I don't think this bug report is an appropriate place to discuss things not related to Mesa. In fact, I'm all for closing this bug as fixed, since all actual bugs in Mesa have been fixed. Unfortunately, there isn't really anywhere else to go. Perhaps somewhere on the Larian forums (which would exclude me, since I hate vendor forums)? In any case, I've gone ahead and made the requested change and put it on Bitbucket:

https://bitbucket.org/darktjm/divos-hack/raw/844453d027e683c5d9830d42c6edf05f4735ddc3/divos-hack.c

This allows you to set the number of processors reported with the env variable PROCESSORS_ONLINE. You can also post issues against this at Bitbucket: https://bitbucket.org/darktjm/divos-hack/issues
As far as I know there are no Linux devs at Larian anymore, and the Linux version won't be touched at all.
Step by step guide for the GOG version:

1) Download the source for the LD_PRELOAD shim.
2) Compile it using the command given inside the downloaded source file. This will give you a divos-hack.so file:
gcc -s -O2 -shared -fPIC -o divos-hack.{so,c} -ldl
3) Copy the newly created divos-hack.so file into your Divinity: Original Sin game folder (the subfolder called "game" within the install path).
4) Now, from said game folder, run Divinity using the following command:
allow_glsl_extension_directive_midshader=true LD_PRELOAD="divos-hack.so" ./runner.sh

Step by step guide for the Steam version:

1) Download the source for the LD_PRELOAD shim.
2) Compile it using the command given inside the downloaded source file. This will give you a divos-hack.so file:
gcc -s -O2 -shared -fPIC -o divos-hack.{so,c} -ldl
3) Copy the newly created divos-hack.so file into your Divinity: Original Sin game folder (the subfolder called "game" within the install path).
4) Go to the preferences of Divinity: OS in your Steam Library (right click on the entry -> Preferences) and open the "Set Launch Options" dialogue. There, put the following:
allow_glsl_extension_directive_midshader=true LD_PRELOAD="divos-hack.so:$LD_PRELOAD" %command%

source: https://www.gamingonlinux.com/articles/divinity-original-sin-may-soon-work-with-mesa-drivers.8867/page=2#r81524
Installed the shim for Fedora 25, XFCE, and Steam. It worked. Upgraded to F26 (it no longer worked) and then to F27 (and it still no longer works). I've played with it a fair bit, reinstalled the Divinity binaries, and no matter what I try, I still get the instant crash.
Yes, the shim seems to no longer work -- I assume newer Mesa versions are no longer declaring compatibility with whatever version Divinity was hardcoded to? It's probably possible to create another workaround, but I don't have the knowledge to do so, and this is obviously quite brittle.
(In reply to Alex from comment #48)
> Yes, the shim seems to no longer work -- I assume newer Mesa versions are no
> longer declaring compatibility with whatever version Divinity was hardcoded
> to?

Or maybe it's a Fedora issue, or an Intel issue. I can't test either of those (well, I could probably test Fedora if I wanted to, but that's a lot of work). I am currently running Mesa 18-rc4 (gentoo/amdgpu), and the game still works fine (as it has in all previous versions I've tested). So it's not "newer Mesa", at least as far as I can tell.

OpenGL core profile version string: 4.5 (Core Profile) Mesa 18.0.0-rc4
OpenGL core profile shading language version string: 4.50

I set allow_glsl_extension_directive_midshader=true in my .drirc, so my startup script just adds the game dir to LD_LIBRARY_PATH, sets LD_PRELOAD, and runs the game. I don't even bother removing game-supplied libraries any more, but I used to remove libopenal*, libpng16*, libSDL2* and libXss* (I'm not sure any more why). The only thing I do differently that probably does not affect things is setting R600_DEBUG=nodccfb due to bug #102885. I might have some other magical settings in my .drirc, but I don't feel like stabbing in the dark.
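For anyone wanting the same .drirc setup, a per-user ~/.drirc entry that applies the option only to the game binary would look roughly like this (a sketch; the application name is cosmetic and matching is done on the executable, and the exact schema can vary between Mesa versions):

```xml
<driconf>
    <device>
        <application name="Divinity: Original Sin EE" executable="EoCApp">
            <option name="allow_glsl_extension_directive_midshader" value="true"/>
        </application>
    </device>
</driconf>
```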
This is definitely now an issue on Intel HD 4000. The workaround shim works fine for 14.1.x and doesn't work starting from 14.2.0. What I've tried:

* building master
* reverting 6177d60a374a3d48969fcb062ac1d82465850cb4
* returning NULL from glCompileShaderIncludeARB (part of the unsupported ARB_shading_language_include extension, see https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_shading_language_include.txt)

No luck so far. I'll probably have to bisect it, but it would be great if someone more knowledgeable on the topic of OpenGL steps in.
Just a heads up: tried the shim on Arch and it worked just fine.

1. Download the source or copy it into a file named divos-hack.c
2. Open the folder containing divos-hack.c in a terminal and type:
gcc -s -O2 -shared -fPIC -o divos-hack.{so,c} -ldl
3. Copy your new divos-hack.so
4. Open Steam, right click Divinity Original Sin > Properties > Local Files > Browse local files. Paste your new divos-hack.so into the folder.
5. While still in the Properties menu in Steam, click the General tab, click Set Launch Options, and set this for the launch options:
LD_PRELOAD=divos-hack.so:$LD_PRELOAD %command%

Good luck!
Just adding my system in case others are wondering what hardware:

Arch, kernel 4.19-rc4
Mesa-git (18.3)
llvm-svn (7+)
Vega 64
AMD TR 1950X
(In reply to Thomas Crider from comment #52)
> just adding my system in case others are wondering what hardware:
> Arch 4.19 rc4
> Mesa-git (18.3)
> llvm-svn (7+)
> vega 64
> amd tr 1950x

Edit 3: if you open runner.sh in the Divinity Original Sin folder and just prepend LD_PRELOAD=divos-hack.so:$LD_PRELOAD to the 3rd line so it looks like this:

LD_PRELOAD=divos-hack.so:$LD_PRELOAD LD_LIBRARY_PATH="." ./EoCApp

you won't have to set Steam launch options. This is such a simple fix; I don't know why they haven't done it already.
I went ahead and tested this with Ubuntu as well. Ubuntu users will need these packages to compile the shim:

sudo apt install libglu1-mesa-dev freeglut3-dev mesa-common-dev

I also tested this on a system with an NVIDIA 1050 Ti using the proprietary drivers, just to be sure the shim did not interfere: it did not prevent the NVIDIA driver from running the game. Thus it would be safe for them to ship this fix with the game.
Created attachment 143089 [details] [review]
vendor override patch for divinity original sin

I've created a patch based on the shim for Mesa. This adds a driconf option:

 <application name="Divinity: Original Sin Enhanced Edition" executable="EoCApp">
     <option name="allow_glsl_extension_directive_midshader" value="true" />
+    <option name="allow_vendor_override_ati" value="true"/>
 </application>

which then sets the vendor string to "ATI Technologies, Inc.". It also applies the glxcmds patch necessary for the game to run. With this patch, users should be able to just apply it to Mesa and run the game. No shim needed.
(In reply to Thomas Crider from comment #55) > With this patch users should be able to just apply the patch to mesa and run > the game. No shim needed. You seem to have missed the purpose of the shim. It was not to fix the problems; mesa patches were provided for that (which I guess you have successfully reinvented; I haven't actually looked at your patch). It was to avoid having to patch mesa every time it updates with patches that would likely never make it upstream. Keeping a custom patch for a distro-maintained and supplied package is a pain, even on distros that support such things directly, such as gentoo (which I use). You will also find that getting support for user-patched packages is much more difficult, even if the patch has nothing to do with the issue. Mesa updates fairly regularly, but I've been using the same shim now for over 2 years without recompile or having to modify the launch script. Unless your patch has been accepted into upstream mesa, your solution makes things harder, rather than easier.
I understand the purpose of the shim quite clearly. My problem is I've reached out to the game developers via e-mail correspondence regarding the issue, and they were unwilling to try to fix it on their end, so I wanted to fix it on Mesa's end, rather than reapplying the shim every time I install or update the game. I figured by adding it as a driconf option it would not affect other games if the patch is mainlined. I'm sure it would need more work/cleanup but it's just a rough concept. My goal wasn't to make it more difficult for the end user per different distros, but rather to get a solution mainlined so that this bug can eventually be closed.
<option name="allow_vendor_override_ati" value="true"/> is very specific. I think this would be more generic: <option name="override_vendor_string" value="ATI Technologies, Inc."/>
Axel Davy: the driconf options here are booleans, not meant to work that way, and also not meant to have users intervene, since this is supposed to be an OOTB fix. Additionally, the vendor name change is driver specific (see the patch changes in si_get). If anything, the "ati" portion of the boolean can be removed, and then a boolean added for Intel like I did for AMD in si_get, but I don't know if the game accepts an Intel vendor string.
The driconf options can be string, look at DRI_CONF_DEVICE_ID_PATH_TAG Plus you can make the driconf option driver specific (I don't remember the exact syntax, I think you have to put driver="radeonsi" in the <application ... > field). I don't get your comment about OOTB.
I still feel it's better to set it as a boolean and adjust per driver accordingly, where the X.Org string is usually emitted, rather than having it set as a string. OOTB = out of the box, meaning users would not need to mess with driconf (and shouldn't need to).
I believe the generic configuration proposed answers your OOTB concern. Mesa packs a default drirc (not visible anymore to the user): src/util/00-mesa-defaults.conf As you can see there are already radeonsi specific workarounds (and it seems my memory was rusty about the syntax). The user would never have to use .drirc to set the string. I've been contributing to mesa for several years now, and I believe my suggestion is more in line with the philosophy of the project and has higher chances of being accepted. That said, your proposal may get accepted, you can get feedback by posting your patch proposal on the mailing list or asking on irc.
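To illustrate the driver-scoped syntax described above: entries in src/util/00-mesa-defaults.conf can be restricted to one driver with a driver attribute on the device element, so a radeonsi-only vendor override along the lines of Axel's string-option suggestion would look roughly like this (the option name override_vendor_string is hypothetical, taken from the proposal in comment #58, not an existing Mesa option):

```xml
<device driver="radeonsi">
    <application name="Divinity: Original Sin Enhanced Edition" executable="EoCApp">
        <option name="override_vendor_string" value="ATI Technologies, Inc."/>
    </application>
</device>
```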
(In reply to Axel Davy from comment #62)
> I believe the generic configuration proposed answers your OOTB concern. Mesa
> packs a default drirc (not visible anymore to the user):
> src/util/00-mesa-defaults.conf
>
> As you can see there are already radeonsi specific workarounds (and it seems
> my memory was rusty about the syntax).
> The user would never have to use .drirc to set the string.
>
> I've been contributing to mesa for several years now, and I believe my
> suggestion is more in line with the philosophy of the project and has higher
> chances of being accepted. That said, your proposal may get accepted, you
> can get feedback by posting your patch proposal on the mailing list or
> asking on irc.

I'm aware of the default configuration file; my patch applies the config modification to it. What I'm trying to say is that if you set it as a boolean, you kill two birds with one stone, as the names need to be changed per driver anyway. If you take a look at the patch you'll see how it's handled. In the patch, setting the boolean makes dri_context set an env var with the same name and value 1. This then gets picked up by si_get via getenv, and the name is set. All that would need to be done for Intel is to copy/paste the si_get code into Intel's intel_context/brw_context and modify it as needed. This would provide one boolean for 3 drivers at the same time for that game, without needing to add 3 separate vendor override options in driconf.
Actually, if I move it to main/getstring I may be able to skip editing any of those. Will post back shortly with some changes.
Created attachment 143100 [details] [review]
vendor override patch for divinity original sin

This is a minor update which moves the string override to mesa/main/getstring, rather than having to edit individual driver strings, similar to how it was done in the shim. Works fine on my AMD machines. I believe the string issue only affects AMD cards; without the string change the game doesn't render correctly. The only Intel machine I had on hand had an OpenGL 3.3 core profile, and it didn't seem to be fazed by changing the string to either "ATI Technologies, Inc." or "Intel Corporation". I was able to get the game running using MESA_GL_VERSION_OVERRIDE=4.2COMPAT MESA_GLSL_VERSION_OVERRIDE=420, but with some artifacting, which I believe was due to having to use the overrides. The artifacting did not change when I tried either of the above mentioned strings. I should be able to test again on an OpenGL 4.5 capable Intel machine at the office tomorrow.
Can confirm changing the string has no effect on Intel; it's only needed for AMD. The glxcmds portion of the patch is still necessary for both. I will see if I can jump into IRC and get more input on the patch. Axel Davy: since the string change requirement is AMD specific, your change may be more viable after all.

Tested on RX 580, Vega 64, and Intel Haswell iGPU.
The actual override value shouldn't be hardcoded and should probably be passed through an environment variable.
Shmerl: driconf vars can be overridden with env vars (of the same name). So if the driconf option is a string, any user could change the vendor string with an env var.
The patch shouldn't set environment variables and shouldn't change glxcmds.c. The name of the option can be more straightforward, like force_ati_vendor_string.
Marek Olšák: this patch for glxcmds.c is necessary for the game to run at all; without it the game crashes:

+      if (strcmp((const char *) procName, "glNamedStringARB") == 0 ||
+          strcmp((const char *) procName, "glDeleteNamedStringARB") == 0 ||
+          strcmp((const char *) procName, "glCompileShaderIncludeARB") == 0 ||
+          strcmp((const char *) procName, "glIsNamedStringARB") == 0 ||
+          strcmp((const char *) procName, "glGetNamedStringARB") == 0 ||
+          strcmp((const char *) procName, "glGetNamedStringivARB") == 0)
+         return NULL;

As for not setting env vars: in my current patch I made dri_context set an env var and getstring read that env var, because I wasn't sure how to set the vendor string in dri_context, and I wasn't sure how to use driQueryOptionstr in getstring in order to do so. I have a modified patch which currently does this instead in dri_context:

   if (driQueryOptionstr(optionCache, "allow_vendor_override")) {
      ctx_config->vendor==driQueryOptionstr(optionCache, "allow_vendor_override");
   }

Obviously ctx_config->vendor is wrong (and the `==` should be a single `=`), but I don't know the correct syntax.
Created attachment 143118 [details] [review]
patch to make divinity work

Here's the modified patch that allows the vendor to be set in the driconf value. Things that need fixing:

- It still sets an env var, because I don't know how to set the vendor string directly in dri_context.c or how to make driQueryOptionstr work in getstring.c.
- It still contains the glxcmds.c patch, because without it the game crashes at launch.
(In reply to Thomas Crider from comment #71) > Created attachment 143118 [details] [review] [review] > patch to make divinity work > > Here's the modified patch that allows the vendor to be set in the driconf > value. > > Things that need fixing: > > -It still sets an envvar because I dont know how to set the vendor string > directly in dri_context.c or how to make driQueryOptionstr work in > getstring.c > > -It still contains the glxcmds.c patch because without it the game crashes > at launch. Hello Thomas, it's still crashing for me with the patch and env vars set (tried both .drirc and runtime). Anything else I have to do to make it work? Just wanna finish an old save :)
The patch works fine here; you may have better luck trying the shim if you're just trying to play the game. You don't need any env var set with my patch: you just compile Mesa with it and play the game. Additionally, without knowing what hardware you're on, I have no idea why it might crash. I do know that for Intel you need at least OpenGL 4.5 to be supported.
4.2* sorry
Tried the patch again with no luck but the shim worked fine for my rx580. Thanks for the answer.
I found the issue: I had my Mesa compiled without glvnd due to an issue with it and Dying Light. The glxcmds patch only works without glvnd. I would need to patch that portion into libglvnd for it to work, which isn't ideal.
Since the game will never be updated I've implemented the required extension in this merge request: https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1841
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/mesa/mesa/issues/999.