Bug 100105

Summary: Make Theano OpenCL support work on Clover and RadeonSI
Product:    Mesa
Reporter:   Vedran Miletić <vedran>
Component:  Gallium/StateTracker/Clover
Assignee:   mesa-dev
Status:     RESOLVED MOVED
QA Contact: mesa-dev
Severity:   major
Priority:   medium
CC:         pbrobinson
Version:    git
Hardware:   Other
OS:         All
URL:        http://deeplearning.net/software/libgpuarray/installation.html
See Also:   https://github.com/Theano/libgpuarray/issues/491
            https://github.com/Theano/libgpuarray/issues/462
Bug Depends on: 94273, 100212
Bug Blocks:     99553

Description Vedran Miletić 2017-03-07 18:35:38 UTC
$ DEVICE="opencl0:0" python -c "import pygpu;pygpu.test()"
pygpu is installed in /usr/lib64/python2.7/site-packages/pygpu-0.6.2-py2.7-linux-x86_64.egg/pygpu
NumPy version 1.11.2
NumPy relaxed strides checking option: False
NumPy is installed in /usr/lib64/python2.7/site-packages/numpy
Python version 2.7.13 (default, Jan 12 2017, 17:59:37) [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)]
nose version 1.3.7
*** Testing for AMD FIJI (DRM 3.8.0 / 4.9.13-200.fc25.x86_64, LLVM 5.0.0)

========================================================

AN INTERNAL KERNEL BUILD ERROR OCCURRED!
device name = AMD FIJI (DRM 3.8.0 / 4.9.13-200.fc25.x86_64, LLVM 5.0.0)
error = -43
memory pattern = Register accumulation based swap, computing kernel generator
Subproblem dimensions: dims[0].itemY = 32, dims[0].itemX = 32, dims[0].y = 32, dims[0].x = 32, dims[0].bwidth = 64; ; dims[1].itemY = 4, dims[1].itemX = 4, dims[1].y = 4, dims[1].x = 4, dims[1].bwidth = 8; ; 
Parallelism granularity: pgran->wgDim = 1, pgran->wgSize[0] = 64, pgran->wgSize[1] = 1, pgran->wfSize = 64
Kernel extra flags: 369130144
Source:

#ifdef DOUBLE_PRECISION
    #ifdef cl_khr_fp64
    #pragma OPENCL EXTENSION cl_khr_fp64 : enable
    #else
    #pragma OPENCL EXTENSION cl_amd_fp64 : enable
    #endif
#endif

__kernel void Sdot_kernel( __global float *_X, __global float *_Y, __global float *scratchBuff,
                                        uint N, uint offx, int incx, uint offy, int incy, int doConj )
{
    __global float *X = _X + offx;
    __global float *Y = _Y + offy;
    float dotP = (float) 0.0;

    if ( incx < 0 ) {
        X = X + (N - 1) * abs(incx);
    }
    if ( incy < 0 ) {
        Y = Y + (N - 1) * abs(incy);
    }

    int gOffset;
    for( gOffset=(get_global_id(0) * 4); (gOffset + 4 - 1)<N; gOffset+=( get_global_size(0) * 4 ) )
    {
        float4 vReg1, vReg2, res;

        #ifdef INCX_NONUNITY
             vReg1 = (float4)(  (X + (gOffset*incx))[0 + ( incx * 0)],  (X + (gOffset*incx))[0 + ( incx * 1)],  (X + (gOffset*incx))[0 + ( incx * 2)],  (X + (gOffset*incx))[0 + ( incx * 3)]);
        #else
            vReg1 = vload4(  0, (__global float *) (X + gOffset) );
        #endif

        #ifdef INCY_NONUNITY
             vReg2 = (float4)(  (Y + (gOffset*incy))[0 + ( incy * 0)],  (Y + (gOffset*incy))[0 + ( incy * 1)],  (Y + (gOffset*incy))[0 + ( incy * 2)],  (Y + (gOffset*incy))[0 + ( incy * 3)]);
        #else
            vReg2 = vload4(  0, (__global float *) (Y + gOffset) );
        #endif

        ;
         res =  vReg1 *  vReg2 ;
        dotP +=  res .S0 +  res .S1 +  res .S2 +  res .S3;
;          // Add-up elements in the vector to give a scalar
    }

    // Loop for the last thread to handle the tail part of the vector
    // Using the same gOffset used above
    for( ; gOffset<N; gOffset++ )
    {
        float sReg1, sReg2, res;
        sReg1 = X[gOffset * incx];
        sReg2 = Y[gOffset * incy];

        ;
             res =  sReg1 *  sReg2 ;
             dotP =  dotP +  res ;
        }

    // Note: this has to be called outside any if-conditions- because REDUCTION uses barrier
    // dotP of work-item 0 will have the final reduced item of the work-group
    __local float viraW [ 64 ];
	uint kFbwL = get_local_id(0);
	 viraW [ kFbwL ] =  dotP ;
	barrier(CLK_LOCAL_MEM_FENCE);

	if( kFbwL < 32 ) {
		 viraW [ kFbwL ] = viraW [ kFbwL ] + viraW [ kFbwL + 32 ];
	}
	barrier(CLK_LOCAL_MEM_FENCE);

	if( kFbwL < 16 ) {
		 viraW [ kFbwL ] = viraW [ kFbwL ] + viraW [ kFbwL + 16 ];
	}
	barrier(CLK_LOCAL_MEM_FENCE);

	if( kFbwL < 8 ) {
		 viraW [ kFbwL ] = viraW [ kFbwL ] + viraW [ kFbwL + 8 ];
	}
	barrier(CLK_LOCAL_MEM_FENCE);

	if( kFbwL < 4 ) {
		 viraW [ kFbwL ] = viraW [ kFbwL ] + viraW [ kFbwL + 4 ];
	}
	barrier(CLK_LOCAL_MEM_FENCE);

	if( kFbwL < 2 ) {
		 viraW [ kFbwL ] = viraW [ kFbwL ] + viraW [ kFbwL + 2 ];
	}
	barrier(CLK_LOCAL_MEM_FENCE);

	if( kFbwL == 0 ) {
	 dotP  = viraW [0] + viraW [1];
	}

    if( (get_local_id(0)) == 0 ) {
        scratchBuff[ get_group_id(0) ] = dotP;
    }
}



--------------------------------------------------------

Build log:


========================================================

Segmentation fault (core dumped)
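For reference, error -43 is CL_INVALID_BUILD_OPTIONS in cl.h, and the "Build log:" section above comes back empty. The standard way to fetch a build log after a clBuildProgram failure is clGetProgramBuildInfo; a minimal sketch (plain C against the stock OpenCL host API, not libgpuarray/clBLAS code) that deliberately triggers a build failure and prints the log:

#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL) != CL_SUCCESS)
        return 1;

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    /* deliberately invalid kernel source to force a build failure */
    const char *src = "__kernel void broken(void) { not valid OpenCL C }";
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);

    cl_int err = clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    if (err != CL_SUCCESS) {
        size_t log_size = 0;
        clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size);
        char *log = malloc(log_size + 1);
        if (!log)
            return 1;
        clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, log_size, log, NULL);
        log[log_size] = '\0';
        fprintf(stderr, "clBuildProgram failed (%d), build log:\n%s\n", err, log);
        free(log);
    }
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}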
Comment 1 Jan Vesely 2017-09-18 22:55:21 UTC
*** Testing for AMD Radeon R7 Graphics (CARRIZO / DRM 3.18.0 / 4.11.0-ROC, LLVM 5.0.0)

Ran 6670 tests in 785.274s

FAILED (SKIP=12, errors=580, failures=12)

All errors are caused by:
TypeError: This is for CUDA arrays.

I haven't investigated the failures.


There are a couple of patches needed:
https://github.com/Theano/libgpuarray/pull/534
https://github.com/Theano/libgpuarray/pull/535

http://lists.llvm.org/pipermail/libclc-dev/2017-September/002449.html

and:
diff --git a/src/cluda_opencl.h b/src/cluda_opencl.h
index 6e0095c..e93aa8b 100644
--- a/src/cluda_opencl.h
+++ b/src/cluda_opencl.h
@@ -48,9 +48,9 @@ typedef struct _ga_half {
 } ga_half;
 
 #define ga_half2float(p) vload_half(0, &((p).data))
-static inline ga_half ga_float2half(ga_float f) {
+inline ga_half ga_float2half(ga_float f) {
   ga_half r;
-  vstore_half_rtn(f, 0, &r.data);
+  vstore_half(f, 0, &r.data);
   return r;
 }
diff --git a/src/gpuarray_buffer_opencl.c b/src/gpuarray_buffer_opencl.c
index 8f12811..2041ca2 100644
--- a/src/gpuarray_buffer_opencl.c
+++ b/src/gpuarray_buffer_opencl.c
@@ -146,7 +146,7 @@ cl_ctx *cl_make_ctx(cl_context ctx, gpucontext_props *p) {
   CL_CHECKN(global_err, clGetDeviceInfo(id, CL_DEVICE_VERSION,
                                         device_version_size,
                                         device_version, NULL));
-  if (device_version[7] == '1' && device_version[9] < '2') {
+  if (device_version[7] == '1' && device_version[9] < '1') {
     error_set(global_err, GA_UNSUPPORTED_ERROR,
               "We only support OpenCL 1.2 and up");
     return NULL;
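
For context on the second hunk: CL_DEVICE_VERSION strings have the mandated form "OpenCL <major>.<minor> <platform-specific info>", so device_version[7] is the major version digit and device_version[9] the minor one; relaxing '2' to '1' accepts the "OpenCL 1.1" string that Clover reports. A stand-alone sketch of that check (plain C with illustrative strings, not the actual libgpuarray code):

#include <stdio.h>

/* CL_DEVICE_VERSION looks like "OpenCL 1.1 Mesa 19.1.0-devel", so the
 * characters at fixed offsets 7 and 9 are the major and minor digits. */
static int accepts(const char *device_version) {
    /* original libgpuarray check: reject anything below OpenCL 1.2 */
    if (device_version[7] == '1' && device_version[9] < '2')
        return 0;
    return 1;
}

int main(void) {
    const char *clover = "OpenCL 1.1 Mesa 19.1.0-devel"; /* what Clover reports */
    const char *ocl12  = "OpenCL 1.2 SomeVendor";        /* hypothetical 1.2 device */
    printf("%s -> %d\n", clover, accepts(clover)); /* 0: rejected */
    printf("%s -> %d\n", ocl12,  accepts(ocl12));  /* 1: accepted */
    return 0;
}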
Comment 2 Jan Vesely 2018-04-04 23:37:57 UTC
Latest update:
diff --git a/src/cluda_opencl.h b/src/cluda_opencl.h
index 6e0095c..8ba2d14 100644
--- a/src/cluda_opencl.h
+++ b/src/cluda_opencl.h
@@ -48,7 +48,7 @@ typedef struct _ga_half {
 } ga_half;
 
 #define ga_half2float(p) vload_half(0, &((p).data))
-static inline ga_half ga_float2half(ga_float f) {
+inline ga_half ga_float2half(ga_float f) {
   ga_half r;
   vstore_half_rtn(f, 0, &r.data);
   return r;
diff --git a/src/gpuarray_buffer_opencl.c b/src/gpuarray_buffer_opencl.c
index 8f12811..2041ca2 100644
--- a/src/gpuarray_buffer_opencl.c
+++ b/src/gpuarray_buffer_opencl.c
@@ -146,7 +146,7 @@ cl_ctx *cl_make_ctx(cl_context ctx, gpucontext_props *p) {
   CL_CHECKN(global_err, clGetDeviceInfo(id, CL_DEVICE_VERSION,
                                         device_version_size,
                                         device_version, NULL));
-  if (device_version[7] == '1' && device_version[9] < '2') {
+  if (device_version[7] == '1' && device_version[9] < '1') {
     error_set(global_err, GA_UNSUPPORTED_ERROR,
               "We only support OpenCL 1.2 and up");
     return NULL;

>>> pygpu.test()
pygpu is installed in /home/jvesely/.local/lib/python3.6/site-packages/pygpu-0.7.5+12.g6f0132c.dirty-py3.6-linux-x86_64.egg/pygpu
NumPy version 1.13.3
NumPy relaxed strides checking option: True
NumPy is installed in /usr/lib64/python3.6/site-packages/numpy
Python version 3.6.4 (default, Mar 13 2018, 18:18:20) [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)]
nose version 1.3.7
*** Testing for AMD Radeon R7 Graphics (CARRIZO / DRM 3.23.0 / 4.15.14-300.fc27.x86_64, LLVM 6.0.0)

----------------------------------------------------------------------
Ran 6670 tests in 995.728s

FAILED (SKIP=12, errors=580, failures=2)

All errors are: TypeError: This is for CUDA arrays.
The two failures are:
FAIL: pygpu.tests.test_elemwise.test_elemwise_f16(<built-in function add>, 'float16', 'float16', (50,))
FAIL: pygpu.tests.test_elemwise.test_elemwise_f16(<built-in function iadd>, 'float16', 'float16', (50,))

These fail on a half-precision rounding error. For example:
7.0390625 + 7.20703125 is expected to be 14.25, but the GPU returns 14.2421875
(the fp32 result is 14.24609375).

The GPU result is rounded down (towards zero).
The CPU result is rounded up (away from zero).

It looks like our vstore_half_rtn is not working as expected, which is weird because it passes CTS.
Comment 3 Jan Vesely 2018-04-04 23:52:39 UTC
(In reply to Jan Vesely from comment #2) 
> It looks like our vstore_half_rtn is not working as expected, which is weird
> because it passes CTS.

I take this back.

vstore_half_rtn rounds to negative infinity (towards 0 for positive numbers).
Changing line 53 in cluda_opencl.h:
-  vstore_half_rtn(f, 0, &r.data);
+  vstore_half_rte(f, 0, &r.data);

fixes the two failures.
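
To see why the rounding mode matters, here is a small self-contained sketch (a plain C approximation of the two OpenCL built-ins, valid only for positive values in [8, 16) where a half ulp is 2^-7; this is not the libclc implementation):

#include <stdio.h>
#include <math.h>

/* Half precision has 10 fraction bits, so for floats in [8, 16) the
 * spacing between representable half values is 2^-7 = 0.0078125. */
static float store_half_rtn(float f) {
    const float ulp = 0.0078125f;
    return floorf(f / ulp) * ulp;   /* round toward negative infinity */
}
static float store_half_rte(float f) {
    const float ulp = 0.0078125f;
    return rintf(f / ulp) * ulp;    /* rintf rounds to nearest even by default */
}

int main(void) {
    float f = 7.0390625f + 7.20703125f;             /* exactly 14.24609375 in fp32 */
    printf("fp32 sum : %.8f\n", f);
    printf("rtn half : %.8f\n", store_half_rtn(f)); /* 14.24218750, the GPU result */
    printf("rte half : %.8f\n", store_half_rte(f)); /* 14.25000000, the expected result */
    return 0;
}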

Other than advertising OpenCL 1.2, the remaining failures are NOTOURBUG.
Comment 4 Jan Vesely 2018-04-05 19:18:46 UTC
Lowering CL requirements combined with the following pull requests:
https://github.com/Theano/libgpuarray/pull/571
https://github.com/Theano/libgpuarray/pull/570

Results in:
Ran 4970 tests in 1158.909s

OK (SKIP=12)
Comment 5 Jan Vesely 2018-04-23 20:20:08 UTC
(In reply to Jan Vesely from comment #4)
> Lowering CL requirements combined with the following pull requests:
> https://github.com/Theano/libgpuarray/pull/571
> https://github.com/Theano/libgpuarray/pull/570

Both of the above pull requests have been merged with slight modifications. Running with
CLOVER_DEVICE_VERSION_OVERRIDE=1.2 CLOVER_DEVICE_CLC_VERSION_OVERRIDE=1.2

results in:

Ran 6670 tests in 991.622s

OK (SKIP=12)
Comment 6 ben@besd.de 2019-04-22 20:25:21 UTC
Seems the error is still there:

CLOVER_DEVICE_VERSION_OVERRIDE=1.2 CLOVER_DEVICE_CLC_VERSION_OVERRIDE=1.2 DEVICE="opencl0:0" python3 -c "import pygpu;pygpu.test()"

fails with:

pygpu is installed in /usr/local/lib/python3.6/dist-packages/pygpu-0.7.6+20.g9cec614-py3.6-linux-x86_64.egg/pygpu
NumPy version 1.16.3
NumPy relaxed strides checking option: True
NumPy is installed in /home/nano/.local/lib/python3.6/site-packages/numpy
Python version 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
nose version 1.3.7
*** Testing for Radeon RX 560 Series (POLARIS11, DRM 3.30.0, 5.1.0-rc5+, LLVM 8.0.0)
mpi4py found: True
.................................................
========================================================

AN INTERNAL KERNEL BUILD ERROR OCCURRED!
device name = Radeon RX 560 Series (POLARIS11, DRM 3.30.0, 5.1.0-rc5+, LLVM 8.0.0)
error = -43
memory pattern = Register accumulation based swap, computing kernel generator
Subproblem dimensions: dims[0].itemY = 32, dims[0].itemX = 32, dims[0].y = 32, dims[0].x = 32, dims[0].bwidth = 64; ; dims[1].itemY = 4, dims[1].itemX = 4, dims[1].y = 4, dims[1].x = 4, dims[1].bwidth = 8; ; 
Parallelism granularity: pgran->wgDim = 1, pgran->wgSize[0] = 64, pgran->wgSize[1] = 1, pgran->wfSize = 64
Kernel extra flags: 369130144
Source:

#ifdef DOUBLE_PRECISION
    #ifdef cl_khr_fp64
    #pragma OPENCL EXTENSION cl_khr_fp64 : enable
    #else
    #pragma OPENCL EXTENSION cl_amd_fp64 : enable
    #endif
#endif

__kernel void Sdot_kernel( __global float *_X, __global float *_Y, __global float *scratchBuff,
                                        uint N, uint offx, int incx, uint offy, int incy, int doConj )
{
    __global float *X = _X + offx;
    __global float *Y = _Y + offy;
    float dotP = (float) 0.0;

    if ( incx < 0 ) {
        X = X + (N - 1) * abs(incx);
    }
    if ( incy < 0 ) {
        Y = Y + (N - 1) * abs(incy);
    }

    int gOffset;
    for( gOffset=(get_global_id(0) * 4); (gOffset + 4 - 1)<N; gOffset+=( get_global_size(0) * 4 ) )
    {
        float4 vReg1, vReg2, res;

        #ifdef INCX_NONUNITY
             vReg1 = (float4)(  (X + (gOffset*incx))[0 + ( incx * 0)],  (X + (gOffset*incx))[0 + ( incx * 1)],  (X + (gOffset*incx))[0 + ( incx * 2)],  (X + (gOffset*incx))[0 + ( incx * 3)]);
        #else
            vReg1 = vload4(  0, (__global float *) (X + gOffset) );
        #endif

        #ifdef INCY_NONUNITY
             vReg2 = (float4)(  (Y + (gOffset*incy))[0 + ( incy * 0)],  (Y + (gOffset*incy))[0 + ( incy * 1)],  (Y + (gOffset*incy))[0 + ( incy * 2)],  (Y + (gOffset*incy))[0 + ( incy * 3)]);
        #else
            vReg2 = vload4(  0, (__global float *) (Y + gOffset) );
        #endif

        ;
         res =  vReg1 *  vReg2 ;
        dotP +=  res .S0 +  res .S1 +  res .S2 +  res .S3;
;          // Add-up elements in the vector to give a scalar
    }

    // Loop for the last thread to handle the tail part of the vector
    // Using the same gOffset used above
    for( ; gOffset<N; gOffset++ )
    {
        float sReg1, sReg2, res;
        sReg1 = X[gOffset * incx];
        sReg2 = Y[gOffset * incy];

        ;
             res =  sReg1 *  sReg2 ;
             dotP =  dotP +  res ;
        }

    // Note: this has to be called outside any if-conditions- because REDUCTION uses barrier
    // dotP of work-item 0 will have the final reduced item of the work-group
    __local float bixzI [ 64 ];
	uint yBrfY = get_local_id(0);
	 bixzI [ yBrfY ] =  dotP ;
	barrier(CLK_LOCAL_MEM_FENCE);

	if( yBrfY < 32 ) {
		 bixzI [ yBrfY ] = bixzI [ yBrfY ] + bixzI [ yBrfY + 32 ];
	}
	barrier(CLK_LOCAL_MEM_FENCE);

	if( yBrfY < 16 ) {
		 bixzI [ yBrfY ] = bixzI [ yBrfY ] + bixzI [ yBrfY + 16 ];
	}
	barrier(CLK_LOCAL_MEM_FENCE);

	if( yBrfY < 8 ) {
		 bixzI [ yBrfY ] = bixzI [ yBrfY ] + bixzI [ yBrfY + 8 ];
	}
	barrier(CLK_LOCAL_MEM_FENCE);

	if( yBrfY < 4 ) {
		 bixzI [ yBrfY ] = bixzI [ yBrfY ] + bixzI [ yBrfY + 4 ];
	}
	barrier(CLK_LOCAL_MEM_FENCE);

	if( yBrfY < 2 ) {
		 bixzI [ yBrfY ] = bixzI [ yBrfY ] + bixzI [ yBrfY + 2 ];
	}
	barrier(CLK_LOCAL_MEM_FENCE);

	if( yBrfY == 0 ) {
	 dotP  = bixzI [0] + bixzI [1];
	}

    if( (get_local_id(0)) == 0 ) {
        scratchBuff[ get_group_id(0) ] = dotP;
    }
}



--------------------------------------------------------

Build log:


========================================================

[nano2:28210] *** Process received signal ***
[nano2:28210] Signal: Segmentation fault (11)
[nano2:28210] Signal code: Address not mapped (1)
[nano2:28210] Failing at address: (nil)
[nano2:28210] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x3ef20)[0x7fbff3a90f20]
[nano2:28210] [ 1] /usr/lib/x86_64-linux-gnu/libclBLAS.so(makeKernelCached+0x2a0)[0x7fbf9eaefcf0]
[nano2:28210] [ 2] /usr/lib/x86_64-linux-gnu/libclBLAS.so(makeSolutionSeq+0x101b)[0x7fbf9eaf445b]
[nano2:28210] [ 3] /usr/lib/x86_64-linux-gnu/libclBLAS.so(doDot+0x2b2)[0x7fbf9ead7c52]
[nano2:28210] [ 4] /usr/lib/x86_64-linux-gnu/libclBLAS.so(clblasSdot+0x98)[0x7fbf9ead7da8]
[nano2:28210] [ 5] /home/nano/.local/lib/libgpuarray.so.3(+0x32529)[0x7fbff23aa529]
[nano2:28210] [ 6] /home/nano/.local/lib/libgpuarray.so.3(GpuArray_rdot+0x393)[0x7fbff23879f3]
[nano2:28210] [ 7] /usr/local/lib/python3.6/dist-packages/pygpu-0.7.6+20.g9cec614-py3.6-linux-x86_64.egg/pygpu/blas.cpython-36m-x86_64-linux-gnu.so(+0x6032)[0x7fbf9c0fa032]
[nano2:28210] [ 8] /usr/local/lib/python3.6/dist-packages/pygpu-0.7.6+20.g9cec614-py3.6-linux-x86_64.egg/pygpu/blas.cpython-36m-x86_64-linux-gnu.so(+0x67ba)[0x7fbf9c0fa7ba]
[nano2:28210] [ 9] python3[0x5030d5]
[nano2:28210] [10] python3(_PyEval_EvalFrameDefault+0x1231)[0x507641]
[nano2:28210] [11] python3[0x504c28]
[nano2:28210] [12] python3[0x58650d]
[nano2:28210] [13] python3(PyObject_Call+0x3e)[0x59ebbe]
[nano2:28210] [14] python3(_PyEval_EvalFrameDefault+0x1807)[0x507c17]
[nano2:28210] [15] python3[0x504c28]
[nano2:28210] [16] python3[0x58644b]
[nano2:28210] [17] python3(PyObject_Call+0x3e)[0x59ebbe]
[nano2:28210] [18] python3(_PyEval_EvalFrameDefault+0x1807)[0x507c17]
[nano2:28210] [19] python3[0x502209]
[nano2:28210] [20] python3[0x502f3d]
[nano2:28210] [21] python3(_PyEval_EvalFrameDefault+0x449)[0x506859]
[nano2:28210] [22] python3[0x504c28]
[nano2:28210] [23] python3(_PyFunction_FastCallDict+0x2de)[0x501b2e]
[nano2:28210] [24] python3[0x591461]
[nano2:28210] [25] python3(PyObject_Call+0x3e)[0x59ebbe]
[nano2:28210] [26] python3(_PyEval_EvalFrameDefault+0x1807)[0x507c17]
[nano2:28210] [27] python3[0x504c28]
[nano2:28210] [28] python3(_PyFunction_FastCallDict+0x2de)[0x501b2e]
[nano2:28210] [29] python3[0x591461]
[nano2:28210] *** End of error message ***
Segmentation fault (core dumped)
Comment 7 ben@besd.de 2019-04-22 20:25:38 UTC
Running https://github.com/ZVK/sampleRNN_ICLR2017 fails with:
Traceback (most recent call last):
  File "models/two_tier/two_tier32k.py", line 429, in <module>
    on_unused_input='warn'
  File "/home/nano/rust/mesa/Theano/theano/compile/function.py", line 317, in function
    output_keys=output_keys)
  File "/home/nano/rust/mesa/Theano/theano/compile/pfunc.py", line 486, in pfunc
    output_keys=output_keys)
  File "/home/nano/rust/mesa/Theano/theano/compile/function_module.py", line 1841, in orig_function
    fn = m.create(defaults)
  File "/home/nano/rust/mesa/Theano/theano/compile/function_module.py", line 1715, in create
    input_storage=input_storage_lists, storage_map=storage_map)
  File "/home/nano/rust/mesa/Theano/theano/gof/link.py", line 699, in make_thunk
    storage_map=storage_map)[:3]
  File "/home/nano/rust/mesa/Theano/theano/gof/vm.py", line 1091, in make_all
    impl=impl))
  File "/home/nano/rust/mesa/Theano/theano/gof/op.py", line 955, in make_thunk
    no_recycling)
  File "/home/nano/rust/mesa/Theano/theano/gof/op.py", line 858, in make_c_thunk
    output_storage=node_output_storage)
  File "/home/nano/rust/mesa/Theano/theano/gof/cc.py", line 1217, in make_thunk
    keep_lock=keep_lock)
  File "/home/nano/rust/mesa/Theano/theano/gof/cc.py", line 1157, in __compile__
    keep_lock=keep_lock)
  File "/home/nano/rust/mesa/Theano/theano/gof/cc.py", line 1641, in cthunk_factory
    *(in_storage + out_storage + orphd))
RuntimeError: ('The following error happened while compiling the node', GpuCrossentropySoftmaxArgmax1HotWithBias(GpuDot22.0, SampleLevel.Output.b, GpuReshape{1}.0), '\n', 'GpuKernel_init error 3: clBuildProgram: Unknown error')
Comment 8 ben@besd.de 2019-04-22 20:27:00 UTC
I'm using Mesa and Linux master git on Ubuntu 18.04.2.
Theano and libgpuarray are installed from git as well.
The changes you made in the past are still there.

Any idea what could be wrong now?
Comment 9 ben@besd.de 2019-04-22 20:42:16 UTC
Just in case it is of any importance:
clinfo
Number of platforms                               1
  Platform Name                                   Clover
  Platform Vendor                                 Mesa
  Platform Version                                OpenCL 1.1 Mesa 19.1.0-devel (git-a6ccc4c 2019-04-21 bionic-oibaf-ppa)
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_icd
  Platform Extensions function suffix             MESA

  Platform Name                                   Clover
Number of devices                                 1
  Device Name                                     Radeon RX 560 Series (POLARIS11, DRM 3.30.0, 5.1.0-rc5+, LLVM 8.0.0)
  Device Vendor                                   AMD
  Device Vendor ID                                0x1002
  Device Version                                  OpenCL 1.1 Mesa 19.1.0-devel (git-a6ccc4c 2019-04-21 bionic-oibaf-ppa)
  Driver Version                                  19.1.0-devel
  Device OpenCL C Version                         OpenCL C 1.1 
  Device Type                                     GPU
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Max compute units                               16
  Max clock frequency                             1300MHz
  Max work item dimensions                        3
  Max work item sizes                             256x256x256
  Max work group size                             256
  Preferred work group size multiple              64
  Preferred / native vector sizes                 
    char                                                16 / 16      
    short                                                8 / 8       
    int                                                  4 / 4       
    long                                                 2 / 2       
    half                                                 8 / 8        (cl_khr_fp16)
    float                                                4 / 4       
    double                                               2 / 2        (cl_khr_fp64)
  Half-precision Floating-point support           (cl_khr_fp16)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 No
    Round to infinity                             No
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
  Single-precision Floating-point support         (core)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 No
    Round to infinity                             No
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
  Address bits                                    64, Little-Endian
  Global memory size                              4294967296 (4GiB)
  Error Correction support                        No
  Max memory allocation                           3435973836 (3.2GiB)
  Unified memory for Host and Device              No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       32768 bits (4096 bytes)
  Global Memory cache type                        None
  Image support                                   No
  Local memory type                               Local
  Local memory size                               32768 (32KiB)
  Max number of constant args                     16
  Max constant buffer size                        2147483647 (2GiB)
  Max size of kernel argument                     1024
  Queue properties                                
    Out-of-order execution                        No
    Profiling                                     Yes
  Profiling timer resolution                      0ns
  Execution capabilities                          
    Run OpenCL kernels                            Yes
    Run native kernels                            No
  Device Extensions                               cl_khr_byte_addressable_store cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_fp64 cl_khr_fp16

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  Clover
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   Success [MESA]
  clCreateContext(NULL, ...) [default]            Success [MESA]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  Success (1)
    Platform Name                                 Clover
    Device Name                                   Radeon RX 560 Series (POLARIS11, DRM 3.30.0, 5.1.0-rc5+, LLVM 8.0.0)
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  Success (1)
    Platform Name                                 Clover
    Device Name                                   Radeon RX 560 Series (POLARIS11, DRM 3.30.0, 5.1.0-rc5+, LLVM 8.0.0)
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 Clover
    Device Name                                   Radeon RX 560 Series (POLARIS11, DRM 3.30.0, 5.1.0-rc5+, LLVM 8.0.0)

ICD loader properties
  ICD loader Name                                 OpenCL ICD Loader
  ICD loader Vendor                               OCL Icd free software
  ICD loader Version                              2.2.11
  ICD loader Profile                              OpenCL 2.1
Comment 10 ben@besd.de 2019-04-22 21:42:23 UTC
Just to make extra sure it's most likely a problem with Clover, I installed the AMD legacy OpenCL driver in parallel (which works fine):

DEVICE="opencl1:0" python3 -c "import pygpu;pygpu.test()"
pygpu is installed in /usr/local/lib/python3.6/dist-packages/pygpu-0.7.6+20.g9cec614-py3.6-linux-x86_64.egg/pygpu
NumPy version 1.16.3
NumPy relaxed strides checking option: True
NumPy is installed in /home/nano/.local/lib/python3.6/site-packages/numpy
Python version 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
nose version 1.3.7
*** Testing for Baffin
mpi4py found: True
[test progress output elided: ~7,289 '.' (pass) markers and 11 'S' (skip) markers]
----------------------------------------------------------------------
Ran 7300 tests in 101.882s

OK (SKIP=11)
Comment 11 Jan Vesely 2019-04-23 17:07:46 UTC
It has been some time since I last ran Theano. Does the error happen even if it's built without clBLAS support?

clBLAS depends on CL 1.2 features which are not implemented yet (hence the dependency on bug 94273).
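
A quick way to check what version string a device actually reports (and therefore what the libgpuarray check sees, with or without CLOVER_DEVICE_VERSION_OVERRIDE) is a minimal clGetDeviceInfo query; a sketch assuming a single platform and device, as in the clinfo output above:

/* Build with: gcc query_version.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    char version[256];

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL) != CL_SUCCESS ||
        clGetDeviceInfo(device, CL_DEVICE_VERSION, sizeof(version), version, NULL) != CL_SUCCESS)
        return 1;

    /* Clover reports e.g. "OpenCL 1.1 Mesa 19.1.0-devel (...)" unless
     * CLOVER_DEVICE_VERSION_OVERRIDE is set in the environment. */
    printf("CL_DEVICE_VERSION: %s\n", version);
    return 0;
}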
Comment 12 ben@besd.de 2019-04-23 18:30:56 UTC
Seems about right:

CLOVER_DEVICE_VERSION_OVERRIDE=1.2 CLOVER_DEVICE_CLC_VERSION_OVERRIDE=1.2 DEVICE="opencl0:0" python3 -c "import pygpu;pygpu.test()"
pygpu is installed in /usr/local/lib/python3.6/dist-packages/pygpu-0.7.6+20.g9cec614-py3.6-linux-x86_64.egg/pygpu
NumPy version 1.16.3
NumPy relaxed strides checking option: True
NumPy is installed in /home/nano/.local/lib/python3.6/site-packages/numpy
Python version 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
nose version 1.3.7
*** Testing for Radeon RX 560 Series (POLARIS11, DRM 3.30.0, 4.15.0-47-generic, LLVM 8.0.0)
mpi4py found: True
[test progress output elided: runs of '.' (pass) markers, 584 'E' (error) markers, and 11 'S' (skip) markers]
======================================================================
ERROR: pygpu.tests.test_blas.test_dot(1, 'float32', True, True, True, False)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python3.6/dist-packages/pygpu-0.7.6+20.g9cec614-py3.6-linux-x86_64.egg/pygpu/tests/test_blas.py", line 22, in f
    func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/pygpu-0.7.6+20.g9cec614-py3.6-linux-x86_64.egg/pygpu/tests/test_blas.py", line 56, in dot
    gr = gblas.dot(gX, gY, gZ, overwrite_z=overwrite)
  File "pygpu/blas.pyx", line 79, in pygpu.blas.dot
  File "pygpu/blas.pyx", line 29, in pygpu.blas.pygpu_blas_rdot
pygpu.gpuarray.GpuArrayException: (b'Missing Blas library', 5)

...

Ran 7300 tests in 972.999s

FAILED (SKIP=11, errors=584)
Comment 13 ben@besd.de 2019-04-23 18:39:06 UTC
Unfortunately it turns out that even a working OpenCL (I used the closed AMD legacy driver) doesn't get the application I'm trying to run (see above) to work. At some point it complains that a data structure requires CUDA (somewhere in libgpuarray).

This could probably be fixed if Theano were still being maintained, which it sort of isn't (PyMC3 still uses it, and they only want to maintain the parts of it they use). However, PyMC4 uses TensorFlow, which AFAICT is CUDA-only.

So what I really want is probably CUDA on OpenCL (https://github.com/hughperkins/coriander), but that requires OpenCL 1.2.

Back to square one ;)
Comment 14 GitLab Migration User 2019-09-18 17:56:13 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/mesa/mesa/issues/137.
