Created attachment 143848 [details]
output from bisected commit with INTEL_DEBUG=fs
The following tests regressed in CI:
piglit.spec.arb_gpu_shader_int64.execution.fs-ishl-then-ishr
piglit.spec.arb_gpu_shader_int64.execution.fs-ishl-then-ishr-loop
piglit.spec.arb_gpu_shader_int64.execution.fs-ishl-then-ushr
Output from bisected and previous commits run with INTEL_DEBUG=fs is attached.
Bisected to the following commit:
Author: Ian Romanick <firstname.lastname@example.org>
Date: Wed Feb 27 20:12:46 2019 -0800
nir/algebraic: Add missing 64-bit extract_[iu]8 patterns
No shader-db changes on any Intel platform.
v2: Use a loop to generate patterns. Suggested by Jason.
v3: Fix a copy-and-paste bug in the extract_[ui] of ishl loop that would
replace an extract_i8 with an extract_u8. This broke ~180 tests. This
bug was introduced in v2.
Reviewed-by: Matt Turner <email@example.com> [v1]
Reviewed-by: Dylan Baker <firstname.lastname@example.org> [v2]
Acked-by: Jason Ekstrand <email@example.com> [v2]
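For reference, the identity the optimization relies on can be modeled in a few lines of Python (this is an illustrative model of the arithmetic, not Mesa/NIR code; the function name is made up):

```python
def ishl_then_ishr(x):
    """Model of (x << 56) >> 56 on a 64-bit value with an arithmetic
    right shift: equivalent to sign-extending the low byte, i.e. the
    extract_i8(x, 0) pattern the commit adds for 64-bit sources."""
    v = (x << 56) & 0xffffffffffffffff  # 64-bit wraparound
    if v & 0x8000000000000000:          # reinterpret as two's complement
        v -= 1 << 64
    return v >> 56                      # Python's >> is arithmetic for ints
```

So `ishl_then_ishr(0x1FF)` is -1, matching a signed extract of the low byte, which is exactly what the regressed fs-ishl-then-ishr tests exercise.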
Created attachment 143849 [details]
output from previous (working) commit with INTEL_DEBUG=fs
Heh... when you said it failed, I didn't realize the failure was an assertion. :) ICL doesn't have native 64-bit integer support. We lower 64-bit shifts, but it seems there is no lowering for 64-bit extract operations.
We either need to disable this optimization for platforms that are going to lower 64-bit integer operations, add a lowering for 64-bit extract operations, or both.
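One possible shape of such a lowering, sketched in Python as plain arithmetic rather than NIR builder code (function names are illustrative, not Mesa's): split the 64-bit source into two 32-bit words, extract from the appropriate half, and sign-extend for the signed variant.

```python
def extract_u8_64(value, byte_index):
    """Extract byte `byte_index` (0..7) of a 64-bit value as an
    unsigned byte, using only 32-bit-safe operations."""
    lo = value & 0xffffffff
    hi = (value >> 32) & 0xffffffff
    word = lo if byte_index < 4 else hi        # pick the 32-bit half
    return (word >> ((byte_index & 3) * 8)) & 0xff

def extract_i8_64(value, byte_index):
    """Signed variant: sign-extend the extracted byte."""
    b = extract_u8_64(value, byte_index)
    return b - 0x100 if b & 0x80 else b
```

The real lowering would emit the equivalent 32-bit NIR ops; this just shows that no 64-bit shift survives, which is what a platform without native int64 needs.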