This is mostly a note to myself.
While looking at bug #111490, I noticed many instances of patterns like
vec1 32 ssa_127 = ior ssa_125, ssa_126
vec1 32 ssa_128 = b2i32 ssa_127
vec1 32 ssa_129 = ineg ssa_128
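For reference, the arithmetic behind that chain: with the old 32-bit boolean representation, true is ~0 (all ones) and false is 0, b2i32 maps 0/~0 to 0/1, and ineg maps that back to 0/-1, which is the original value. A quick Python sketch of why the existing rule is valid (the helper names are mine, not NIR's):

```python
MASK32 = 0xffffffff

def b2i32(x):
    # NIR b2i32: a true boolean becomes 1, false becomes 0.
    return 1 if x != 0 else 0

def ineg(x):
    # 32-bit two's-complement negation.
    return (-x) & MASK32

# With 32-bit booleans (false = 0, true = ~0), ineg(b2i32(a)) == a.
for a in (0, MASK32):
    assert ineg(b2i32(a)) == a
```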
I thought I'd add an optimization to clean this up, but it already exists:
dca6cd9ce651 src/compiler/nir/nir_opt_algebraic.py (Jason Ekstrand 2018-11-07 13:43:40 -0600 550) (('ineg', ('b2i32', 'a@32')), a),
Why doesn't the existing transformation trigger?
I haven't verified it yet, but a 'git bisect run' that I let run overnight says:
44227453ec03f5462f1cff5760909a9dba95c61a is the first bad commit
Author: Jason Ekstrand <email@example.com>
Date: Fri Oct 19 11:14:47 2018 -0500
nir: Switch to using 1-bit Booleans for almost everything
This is a squash of a few distinct changes:
glsl,spirv: Generate 1-bit Booleans
Revert "Use 32-bit opcodes in the NIR producers and optimizations"
Revert "nir/builder: Generate 32-bit bool opcodes transparently"
nir/builder: Generate 1-bit Booleans in nir_build_imm_bool
Reviewed-by: Eric Anholt <firstname.lastname@example.org>
Reviewed-by: Bas Nieuwenhuizen <email@example.com>
Tested-by: Bas Nieuwenhuizen <firstname.lastname@example.org>
:040000 040000 704b07a7770ac6639a1d7359e7f4af20becfc7d3 4376841397ec5ade287aa9dc38727206cb6efc63 M src
Looking at the commit, the reason for the regression is obvious: the nir_lower_bool_to_int32 pass runs after the last nir_opt_algebraic pass, so nir_opt_algebraic never sees a ('b2i32', 'a@32') pattern. The rule is effectively dead code.
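Before lowering, booleans are 1-bit, so the identity rule no longer applies: ineg(b2i32(a)) on a 1-bit boolean yields 0 or -1, which is a bcsel, not a itself. A possible replacement rule would be something like ('ineg', ('b2i32', 'a@1')) -> ('bcsel', a, -1, 0); to be clear, that rewrite is my own sketch, not a change that is actually upstream. A small Python check of the equivalence it relies on:

```python
def b2i32(a: bool) -> int:
    # b2i32 on a 1-bit boolean: True -> 1, False -> 0.
    return 1 if a else 0

def ineg32(x: int) -> int:
    # 32-bit two's-complement negation.
    return (-x) & 0xffffffff

def bcsel(a: bool, x: int, y: int) -> int:
    # NIR bcsel: select x when the condition is true, else y.
    return x if a else y

# With 1-bit booleans, ineg(b2i32(a)) is 0 or -1,
# i.e. exactly bcsel(a, -1, 0) in 32-bit arithmetic.
for a in (False, True):
    assert ineg32(b2i32(a)) == bcsel(a, -1 & 0xffffffff, 0)
```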