Created attachment 129874 [details]
All the screenshots shown on imgur, in case imgur goes down or something.

When running Mesa 17.1-git, upon entering any level the game starts showing black splotches, and lighting is missing or not rendered correctly. This can be seen in this album: http://imgur.com/a/k54kc

This appears to be a problem with git only, as Mesa 13.0.4 isn't affected by this bug...
Thanks for reporting this. Hitman was working very nicely until recently. I will have a look.
Can you provide the following information, please?

- Which mesa-git version?
- Which LLVM version?
- Which graphics settings in Hitman?

After a quick try I can't reproduce right now.
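If it helps, the Mesa and LLVM versions can be pulled straight from glxinfo; the grep pattern here is just a suggestion:

glxinfo | grep -E "Mesa|LLVM"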
(In reply to Samuel Pitoiset from comment #2)
> Can you provide the following information, please?
>
> - Which mesa-git version?
> - Which LLVM version?
> - Which graphics settings in Hitman?
>
> After a quick try I can't reproduce right now.

All screenshots were taken with Mesa git 17.1~git170218133300.ad019bf (from the Padoka PPA) or Mesa git 17.1~git1702230730.6ca434 (from the Oibaf PPA). Mesa is built against LLVM 5 git, according to Paulo Dias.

Everything has been set to the lowest setting possible (no FXAA, trilinear filtering, SSAO disabled, etc.).

If it helps, Phoronix had problems of this nature too with certain graphics cards; one card name-checked in the article was the R9 270X: http://www.phoronix.com/scan.php?page=article&item=radeon-nvidia-hitman&num=1 (though the screenshots provided in the article are corruption-free). So it could be a problem with specific GCN cards. I have a GCN 1.1 card, an R7 250, so it may be an issue with Sea Islands graphics cards.
Okay, that makes more sense. I only have an RX 480, so I definitely can't help here.

If you get a chance, can you try recording an apitrace which reproduces the issue? That might help other devs.

Thanks.
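For reference, one way to record a trace for a Steam title (a rough sketch; apitrace must be installed, and %command% is Steam's placeholder for the game's own command line) is to set the game's launch options to:

apitrace trace %command%

The resulting .trace file is typically written to the game's working directory.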
(In reply to Samuel Pitoiset from comment #4)
> Okay, that makes more sense. I only have an RX 480, so I definitely can't
> help here.
>
> If you get a chance, can you try recording an apitrace which reproduces
> the issue? That might help other devs.
>
> Thanks.

Okie dokie.
(In reply to Samuel Pitoiset from comment #4)
> Okay, that makes more sense. I only have an RX 480, so I definitely can't
> help here.
>
> If you get a chance, can you try recording an apitrace which reproduces
> the issue? That might help other devs.
>
> Thanks.

OK, I did a trace of Hitman, and I've uploaded the insanely huge 1.4 GB file. Go nuts, whoever can read it (it's compressed with bzip2 to keep the file size down): https://drive.google.com/open?id=0B4VxtGqhacqZSGl5OTV2WXYyams
FWIW, I get the same artifacts as well. My setup:

- R9 280 TAHITI, GCN 1.0 (1002:679a)
- Mesa 17.1~git1702240730.ccb70d~gd~z (Oibaf PPA)
- LLVM 4.0~+rc2-1
R600_DEBUG=sisched?
(In reply to Ernst Sjöstrand from comment #8)
> R600_DEBUG=sisched?

One of the first things I tried. It makes no difference.
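For anyone who wants to try the same thing, I set it through the game's Steam launch options (a sketch, nothing Hitman-specific; the variable only applies to that one game launch):

R600_DEBUG=sisched %command%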
I have the same issue. My GPU is a 7950. I am using Debian packages. I see the issue with Mesa 17.0.1/LLVM 4; with Mesa 13.0.5/LLVM 3.9 the issue is not present. This is also at the lowest settings possible, but it happens at "medium" settings too.

One thing that changed between Mesa 13 and 17 for radeonsi is the OpenGL version: it was 4.3 and is now 4.5. I used the GLSL override to force version 430 with Mesa 17, but the issue still persists.

Unrelated last point: while searching on Bugzilla for this issue I found another one (bug 100061); I think the Hitman issue referred to in its description is this one.
(In reply to Ernst Sjöstrand from comment #8)
> R600_DEBUG=sisched?

Also, I tried setting the R600_DEBUG variable on Steam startup again, and sisched did make a positive performance impact after all.
With bug 100061 now fixed, I tried Mesa git 72fa447d45, which contains the fix, but did not see any change with regard to this bug. Would a new apitrace be needed for further debugging?
I'm having the same problem, at least the black block artifacts. It doesn't matter if I set the shadows to low or medium quality; windowed or fullscreen also makes no difference.

I'm using Mesa 17.0.2 (Debian, experimental), LLVM 4.0 from Debian (1:4.0-1), and an AMD R9 280 (see below).

# lspci -vvvvv -s 01:00.0
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti PRO [Radeon HD 7950/8950 OEM / R9 280] (prog-if 00 [VGA controller])
	Subsystem: PC Partner Limited / Sapphire Technology Radeon R9 280
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 36
	NUMA node: 0
	Region 0: Memory at c0000000 (64-bit, prefetchable) [size=256M]
	Region 2: Memory at fea00000 (64-bit, non-prefetchable) [size=256K]
	Region 4: I/O ports at e000 [size=256]
	Expansion ROM at 000c0000 [disabled] [size=128K]
	Capabilities: <access denied>
	Kernel driver in use: radeon
	Kernel modules: radeon

Kernels I tested are 4.7 and 4.10 (both from Debian).
TL;DR: the issue is with LLVM > 3.9.

I started bisecting between the two versions I knew differed (13.0.5 and 17.0.1). I started with 17.0.1 and was using LLVM 4.0, but to compile 13.0.5 I had to switch to LLVM 3.9. This gave me the idea to test the latest version with LLVM 3.9 and, surprise, no more glitches! So I tested 17.0.1 and a recent git version (060a6434) with LLVM 3.9 (the one from Debian), and the glitches are gone. So for now I have a workaround that is OK for me.
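In case anyone wants to repeat this, a rough sketch of building Mesa of that era against a specific LLVM (the flag names vary between Mesa releases, so check ./configure --help; the LLVM path here is Debian's):

./autogen.sh --with-gallium-drivers=radeonsi --with-llvm-prefix=/usr/lib/llvm-3.9
make -j$(nproc)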
The issue is still present using mesa 17.1.0-1 (Debian/experimental) and llvm 4.0.1~+rc1 (also Debian/experimental). I also have llvm 5.0 (5.0~svn301421; Debian/experimental) installed.
I can confirm it: if I recompile the Debian sources with LLVM 4.0, the lighting bug is present; if I build against LLVM 3.9, it works correctly (the lighting bug is NOT present). I used LLVM 4.0.1 (1:4.0.1~+rc3-1) from the Debian archives to build Mesa (17.1.2-2) with the bug, and LLVM 3.9.1 (3.9.1-10) from the Debian archives to build Mesa without the bug.
For what it's worth, the issue is still present in mesa 17.2.0~rc3-1 and llvm 1:4.0.1-1 (Debian/experimental).
Unfortunately still the same...

# glxinfo | grep Mesa
client glx vendor string: Mesa Project and SGI
OpenGL core profile version string: 4.5 (Core Profile) Mesa 17.2.4
OpenGL version string: 3.0 Mesa 17.2.4
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 17.2.4

# glxinfo | grep -i llvm
Device: AMD TAHITI (DRM 2.50.0 / 4.13.0-1-amd64, LLVM 5.0.0) (0x679a)
OpenGL renderer string: AMD TAHITI (DRM 2.50.0 / 4.13.0-1-amd64, LLVM 5.0.0)
Still the same with Mesa 17.2.5.

# glxinfo | grep Mesa
client glx vendor string: Mesa Project and SGI
OpenGL core profile version string: 4.5 (Core Profile) Mesa 17.2.5
OpenGL version string: 3.0 Mesa 17.2.5
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 17.2.5

# glxinfo | grep -i llvm
Device: AMD TAHITI (DRM 2.50.0 / 4.13.0-1-amd64, LLVM 5.0.0) (0x679a)
OpenGL renderer string: AMD TAHITI (DRM 2.50.0 / 4.13.0-1-amd64, LLVM 5.0.0)

And also with 17.3.0-rc5.

# glxinfo | grep Mesa
client glx vendor string: Mesa Project and SGI
OpenGL core profile version string: 4.5 (Core Profile) Mesa 17.3.0-rc5
OpenGL version string: 3.0 Mesa 17.3.0-rc5
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 17.3.0-rc5

# glxinfo | grep -i llvm
Device: AMD TAHITI (DRM 2.50.0 / 4.13.0-1-amd64, LLVM 5.0.0) (0x679a)
OpenGL renderer string: AMD TAHITI (DRM 2.50.0 / 4.13.0-1-amd64, LLVM 5.0.0)
I recompiled Mesa with LLVM 6.0, but that didn't change anything.

# glxinfo | grep -i llvm
Device: AMD TAHITI (DRM 2.50.0 / 4.13.0-1-amd64, LLVM 6.0.0) (0x679a)
OpenGL renderer string: AMD TAHITI (DRM 2.50.0 / 4.13.0-1-amd64, LLVM 6.0.0)
Still no change with Mesa 18.0-rc2 (LLVM 5.0.1). It's now nearly a year since the game stopped working properly, and I can't play it as is.

# glxinfo | grep Mesa
client glx vendor string: Mesa Project and SGI
OpenGL core profile version string: 4.5 (Core Profile) Mesa 18.0.0-rc2
OpenGL version string: 3.0 Mesa 18.0.0-rc2
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 18.0.0-rc2

# glxinfo | grep -i llvm
Device: AMD TAHITI (DRM 2.50.0 / 4.14.0-3-amd64, LLVM 5.0.1) (0x679a)
OpenGL renderer string: AMD TAHITI (DRM 2.50.0 / 4.14.0-3-amd64, LLVM 5.0.1)
I have the exact same issue with this game.

Distribution: Solus 3.999
Kernel driver: amdgpu

glxinfo | grep OpenGL
OpenGL vendor string: X.Org
OpenGL renderer string: AMD Radeon HD 7900 Series (TAHITI / DRM 3.23.0 / 4.15.11-61.current, LLVM 5.0.1)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 17.3.6
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 17.3.6
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 17.3.6
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
OpenGL ES profile extensions:
Same problem here, Mesa git master 4a8ef1f5d4c5fc96194be65045e6a3d4f5b9f913, LLVM 7 r328326, kernel 4.15.12, TAHITI + amdgpu kernel module. It looks a bit different with R600_DEBUG=nir, but still broken.
The bug was introduced with LLVM revision 281112 (commit d5a5e9043a23bdcf0f3e4d05e007a3d67488b445 [0]). After reverting the change in lib/Target/AMDGPU/AMDGPUTargetMachine.cpp, everything seems to be working again (LLVM 6.0/Mesa 18.0; I tried current snapshots, but they didn't build or just crashed).

System:
Radeon HD 7870
Fedora 27
Steam via Flatpak and an updated org.freedesktop.Platform.GL.mesa extension

# flatpak run org.freedesktop.GlxInfo | grep Mesa
client glx vendor string: Mesa Project and SGI
OpenGL core profile version string: 4.5 (Core Profile) Mesa 18.0.0 (git-fb64913d19)
OpenGL version string: 3.0 Mesa 18.0.0 (git-fb64913d19)
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 18.0.0 (git-fb64913d19)

# flatpak run org.freedesktop.GlxInfo | grep -i llvm
Device: AMD PITCAIRN (DRM 2.50.0 / 4.15.12-301.fc27.x86_64, LLVM 6.0.0) (0x6818)
OpenGL renderer string: AMD PITCAIRN (DRM 2.50.0 / 4.15.12-301.fc27.x86_64, LLVM 6.0.0)

[0]: https://github.com/llvm-mirror/llvm/commit/d5a5e9043a23bdcf0f3e4d05e007a3d67488b445
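For anyone who wants to reproduce the revert, a sketch of the procedure (the revert may need manual conflict resolution on a 6.0 tree, and the target list here is trimmed to keep the build fast):

git clone https://github.com/llvm-mirror/llvm.git
cd llvm
git revert d5a5e9043a23bdcf0f3e4d05e007a3d67488b445
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGETS_TO_BUILD="AMDGPU;X86" ..
make -j$(nproc)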
Matt, any ideas?
Can you attach the temp dumps with/without the vectorizer enabled?
Possibly, but I actually don't know what is meant by "the temp dumps".
Thanks, Martin, for finding the issue! The vectorizer can be disabled from the Mesa side, so there is no need to recompile LLVM... See the attached patch. I tested it with Debian LLVM 7 and Mesa git, and it works as expected.
Created attachment 139672 [details] [review]
Patch to disable vectorizer
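If you want to test the patch against a Mesa git checkout, a rough sequence looks like this (a sketch only; the patch file name is illustrative, so save the attachment under whatever name you like):

cd mesa
patch -p1 < disable-vectorizer.patch
meson build/ -Dgallium-drivers=radeonsi
ninja -C build/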
(In reply to Martin Pilarski from comment #27)
> Possibly, but I actually don't know what is meant by "the temp dumps".

The output of RADEON_DEBUG=fs,vs,gs,ps,cs
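In practice that means launching the game with the variable set and capturing everything it prints to the terminal. A sketch via the Steam launch options (the output file name is arbitrary):

RADEON_DEBUG=fs,vs,gs,ps,cs %command% > shader-dump.txt 2>&1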
Created attachment 139728 [details]
Shaders dump

Here is the shaders dump:
- hitman_error is with the glitch
- hitman_ok is without the glitch

I don't know if they will be usable; it seems that the hitman_error dump caught the output of multiple dumps at the same time... Also, the files are huge!

For reference, this is with Mesa git 936cd3c87a212c28fe89a5c059fc4febd8b52ab7 and Debian LLVM 7 svn333018.
Hello, I have the same issue, also on an SI card. Is there a way to disable the vectorizer with an environment variable, or do I need to patch Mesa first? Thank you!
We've had several user reports of this through our support, and we have reproduced it internally as well (on a 270X, which is our officially supported minimum spec). It seems to be specific to SI cards.

Given that this has regressed since the original release of the game, would it be possible to get a workaround into Mesa, e.g. disabling the load/store vectorizer for SI cards as suggested (perhaps restricted to Hitman), until the issue is resolved in LLVM?
Can confirm this is still an issue with current Mesa/LLVM git on a Radeon HD 7970. The game also has huge fps drops in certain areas. The bug is now almost two years old and makes the game unplayable. It's really a pity that no dev has found time to fix this. I guess it's time for AMD or Valve to contract a company to take care of bugs like this and general driver quality.
I tried compiling with the no-vectorization patch and the problem persists. I'm not sure if I compiled it properly (I used a source package on openSUSE and modified the .spec file to include the patch, then ran `rpmbuild -bb` and force-installed the generated rpm files), but I have no reason to believe that I didn't.
I'm looking into it. Disabling the vectorizer is one way to fix it.
Currently also experiencing the issue with an AMD R9 280. Mesa and LLVM versions:

$ glxinfo | grep Mesa
client glx vendor string: Mesa Project and SGI
OpenGL core profile version string: 4.5 (Core Profile) Mesa 18.2.2
OpenGL version string: 4.4 (Compatibility Profile) Mesa 18.2.2
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 18.2.2

$ glxinfo | grep LLVM
Device: AMD TAHITI (DRM 2.50.0, 4.18.10-arch1-1-ARCH, LLVM 7.0.0) (0x679a)
OpenGL renderer string: AMD TAHITI (DRM 2.50.0, 4.18.10-arch1-1-ARCH, LLVM 7.0.0)

Sure would appreciate a fix for this; it worked fine on this same system when the game came out.
The initial fix is here: https://reviews.llvm.org/D52907
I can confirm that the issue is fixed with the initial fix.

glxinfo | grep Mesa
client glx vendor string: Mesa Project and SGI
OpenGL core profile version string: 4.5 (Core Profile) Mesa 18.3.0-devel (git-fa52ff856d)
OpenGL version string: 4.5 (Compatibility Profile) Mesa 18.3.0-devel (git-fa52ff856d)
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 18.3.0-devel (git-fa52ff856d)

glxinfo | grep LLVM
Device: AMD PITCAIRN (DRM 2.50.0, 4.18.12-arch1-1-ARCH, LLVM 8.0.0) (0x6818)
OpenGL renderer string: AMD PITCAIRN (DRM 2.50.0, 4.18.12-arch1-1-ARCH, LLVM 8.0.0)

LLVM was built from commit c179d7b.
Root-caused to a compute shader that does an out-of-bounds array access. Tsk tsk ;) The proper fix is at https://reviews.llvm.org/D53160 + related patches.
Thx for finally fixing this! Is there a chance of a backport to LLVM 6 & 7?
With the fix committed to LLVM trunk/master (and obviously working), I guess this bug can be closed? I'd like to know if it can be backported, too. LLVM 8 is quite some time away, and depending on the distribution it takes even longer to reach users.
I'd really like to say that it's fixed, but either a system or a game update has made the game crash on start.

HitmanPro.sh: line 421: 13106 Segmentation fault (core dumped) ${GAME_LAUNCH_PREFIX} ${GAME_SIGNAL_WRAPPER} "${GAMEROOT}/bin/${FERAL_GAME_NAME}" "$@"
We released a game update last week, which includes a workaround for the out-of-bounds array access in a shader that was triggering this issue (thanks to Nicolai for giving us the details). That should fix the rendering issues without having to update LLVM or wait for a backport.

Obviously the game shouldn't be crashing for you after the update; if it is, please contact support@feralinteractive.com so we can look into it.
That's very good news. Thanks for the effort! I just checked: with Vulkan the game works, but standard OpenGL crashes on start. Will contact support.
I haven't had time to look into the issue lately. Feral support advised me to downgrade Mesa and LLVM to stable, but that wasn't why the game crashed for me. I ended up deleting the HITMAN folder under .local/share/feral... and things magically started working again. So if your game also crashes on start, this might help.

In game I could confirm that the issue is gone; everything looks correctly rendered now. I'm going ahead and closing this issue. It was fixed by the game update and in LLVM, so this should really be fixed for everyone.