Created attachment 133257 [details]
Test case

Hi,

The following kernel computes b[0] = 127.0 when c[0] = 0.0, d = 256, and the global work size is 1x1x1:

kernel void A(global float* b, global float* c, const int d) {
  int e = get_global_id(0);
  b[e] = (char)(c[e] + d);
}

Expected result: 0.0. Actual result: 127.0.

The expected result was confirmed on a number of different OpenCL implementations, including Intel's proprietary driver for the same CPU. Interestingly, changing the assignment to 'b[e] = (char)(d);' yields the correct result, even though c[e] is 0.0.

Hardware: Intel(R) HD Graphics Haswell GT2 Desktop
Driver: OpenCL 1.2 beignet 1.3, using llvm 3.6
OS: Ubuntu Linux 16.04

Full test case attached.

Cheers,
Chris
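As a side note, here is a hypothetical pair of kernel variants (not part of the attached test case) that spell the conversion out with OpenCL C's convert_ built-ins; this might help narrow down whether the problem is in the implicit float-to-char cast or somewhere earlier in the expression:

/* Hypothetical variants for narrowing down the failing conversion;
   names and structure are mine, not from the attached test case. */
kernel void A_sat(global float* b, global float* c, const int d) {
  int e = get_global_id(0);
  float f = c[e] + d;            /* 0.0f + 256 -> 256.0f */
  b[e] = convert_char_sat(f);    /* saturating float-to-char: 256.0f clamps to 127 */
}

kernel void A_int_first(global float* b, global float* c, const int d) {
  int e = get_global_id(0);
  int i = convert_int(c[e]) + d; /* 0 + 256 = 256 */
  b[e] = convert_char(i);        /* integer narrowing keeps the low-order bits: 256 -> 0 */
}

If A_sat also produces 127 while A_int_first produces 0 (matching the '(char)(d)' observation above), that would point at the float-to-char conversion itself rather than the float addition.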
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug via this link to our GitLab instance: https://gitlab.freedesktop.org/beignet/beignet/issues/28.