neurotools.obsolete.gpu.cu.function module
- neurotools.obsolete.gpu.cu.function.format(code)[source]
This is a kernel source auto-formatter. It mostly just performs auto-indentation.
- neurotools.obsolete.gpu.cu.function.printKernel(code)[source]
This prints out a kernel source with line numbers
- neurotools.obsolete.gpu.cu.function.gpubin(fun)[source]
This is a small wrapper that simplifies calling binary kernels of the form r = a op b. It automates creation of the result array.
- neurotools.obsolete.gpu.cu.function.gpuscalar(fun)
This is a small wrapper that simplifies calling binary kernels of the form r = a op b. It automates creation of the result array.
- neurotools.obsolete.gpu.cu.function.gpumap(exp)[source]
This is a small wrapper to simplify creation of b[i] = f(a[i]) map kernels. The map function is passed in as a string containing a CUDA expression, in which the dollar sign $ denotes the argument variable. A return array is automatically constructed. For example, gpumap('$') creates a clone or identity kernel, so A = gpumap('$')(B) will assign a copy of B to A. As a nontrivial example, a nonlinear map could be created as gpumap('1/(1+exp(-$))').
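A minimal usage sketch (assuming, as elsewhere in this module, that the input is a float32 PyCUDA GPUArray; the exact return type is not documented here):

```python
import numpy as np
import pycuda.autoinit                      # initialise a CUDA context
import pycuda.gpuarray as gpuarray
from neurotools.obsolete.gpu.cu.function import gpumap

B = gpuarray.to_gpu(np.linspace(-3, 3, 8).astype(np.float32))

clone    = gpumap('$')               # identity kernel: b[i] = a[i]
logistic = gpumap('1/(1+exp(-$))')   # elementwise logistic nonlinearity

A = clone(B)      # A is a newly allocated copy of B
S = logistic(B)   # S[i] = 1/(1+exp(-B[i])); B is left unchanged
```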
- neurotools.obsolete.gpu.cu.function.gpuintmap(exp)[source]
This is the same as gpumap, but for integer datatypes.
- neurotools.obsolete.gpu.cu.function.expsub(exp)
- neurotools.obsolete.gpu.cu.function.gpumapeq(exp)[source]
This is a small wrapper to simplify creation of a[i] = f(a[i]) map kernels. The map function is passed in as a string containing a CUDA expression, in which the dollar sign $ denotes the argument variable. The result is assigned into the original array, so no new memory is allocated. For example, a nonlinear map can be applied in place as gpumapeq('1/(1+exp(-$))')(A).
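A sketch of the in-place variant, under the same assumptions as the gpumap example above:

```python
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from neurotools.obsolete.gpu.cu.function import gpumapeq

A = gpuarray.to_gpu(np.linspace(-3, 3, 8).astype(np.float32))

squash = gpumapeq('1/(1+exp(-$))')   # builds an a[i] = f(a[i]) kernel
squash(A)                            # A is overwritten in place; no new array is allocated
```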
- neurotools.obsolete.gpu.cu.function.gpubinaryeq(exp)[source]
This wrapper simplifies the creation of kernels that execute operators like {'+=', '-=', '*=', '/='}, that is, binary operators that assign the result to the left operand. This supplements the functionality of PyCUDA GPUArrays, which support binary operations but always allocate a new array to hold the result. This wrapper lets you efficiently execute binary operations that assign the result to one of the argument arrays. For example, the GPU equivalent of += can be implemented as gpubinaryeq('$x+$y')(x,y). The result is automatically assigned to the first argument, x.
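A sketch of the documented += example (assuming x and y are float32 PyCUDA GPUArrays of the same length):

```python
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from neurotools.obsolete.gpu.cu.function import gpubinaryeq

x = gpuarray.to_gpu(np.ones(8,   dtype=np.float32))
y = gpuarray.to_gpu(np.arange(8, dtype=np.float32))

ipadd = gpubinaryeq('$x+$y')   # GPU equivalent of x += y
ipadd(x, y)                    # result is written back into x; no temporary array is allocated
```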
- neurotools.obsolete.gpu.cu.function.guessGPUType(arg)[source]
At the moment, this returns numpy.float32 for Python floats and numpy.int32 for Python integers, and is otherwise undefined.
- neurotools.obsolete.gpu.cu.function.toGPUType(arg)
A little wrapper to auto-cast Python floats and ints to the corresponding numpy datatypes for use on the GPU. This functionality probably exists elsewhere.
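For example (a sketch based on the two descriptions above; the exact return values are assumptions):

```python
from neurotools.obsolete.gpu.cu.function import guessGPUType, toGPUType

guessGPUType(1.5)   # per the description above: numpy.float32 for a Python float
guessGPUType(7)     # per the description above: numpy.int32 for a Python int
toGPUType(1.5)      # the float auto-cast to its numpy datatype, ready to pass to a kernel
```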
- neurotools.obsolete.gpu.cu.function.ezkern(header, code, other=None)[source]
This is my easy kernel wrapper. This function accepts a header (the list of arguments), a body (the core of the loop), and optionally a block of helper function code. The core loop should reference "tid" as the thread index variable. The distribution of threads on the GPU is automatically managed.
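A hedged sketch of how such a kernel might be built; the exact header syntax and call convention are assumptions, not documented here:

```python
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from neurotools.obsolete.gpu.cu.function import ezkern

# Hypothetical scale-and-add kernel: the header lists the arguments and the
# body is the per-element statement, indexed by the thread variable "tid".
saxpy = ezkern("float *x, float *y, float a",
               "y[tid] = a*x[tid] + y[tid];")

x = gpuarray.to_gpu(np.arange(8, dtype=np.float32))
y = gpuarray.to_gpu(np.ones(8,  dtype=np.float32))
saxpy(x, y, 2.0)   # thread/block layout is handled by the wrapper
```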
- neurotools.obsolete.gpu.cu.function.kernel(header, code, other=None)
This is my easy kernel wrapper. This function accepts a header (the list of arguments), a body (the core of the loop), and optionally a block of helper function code. The core loop should reference "tid" as the thread index variable. The distribution of threads on the GPU is automatically managed.
- neurotools.obsolete.gpu.cu.function.gpupointer(gpuarr)
Returns the starting memory location of a GPUArray
- neurotools.obsolete.gpu.cu.function.cpu(v)
Casts a GPU array to the corresponding numpy array type.
- neurotools.obsolete.gpu.cu.function.gpufloat(v)
Casts a Python list to a float array on the GPU.
- neurotools.obsolete.gpu.cu.function.gpufloatmat(M)
Moves a Python list of lists of floats to a row-major packed float matrix on the GPU, simply by flattening the Python data structure and copying.
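For example (a sketch; the exact return type is not documented here):

```python
from neurotools.obsolete.gpu.cu.function import gpufloatmat

# The nested list is flattened row by row before being copied to the GPU,
# so the device buffer holds 1.0, 2.0, 3.0, 4.0 in that order.
M = gpufloatmat([[1.0, 2.0],
                 [3.0, 4.0]])
```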
- neurotools.obsolete.gpu.cu.function.gpufloatred(fun)
Wraps a GPUArray reduction function into a succinct form operating on float arrays.
- neurotools.obsolete.gpu.cu.function.gpuint(M)
Casts a Python list to an integer array on the GPU.
- neurotools.obsolete.gpu.cu.function.gpuintmat(M)
Moves a Python list of lists of integers to a row-major packed integer matrix on the GPU, simply by flattening the Python data structure and copying.
- neurotools.obsolete.gpu.cu.function.gpuintred(fun)
Wraps a GPUArray reduction function into a succinct form operating on integer arrays.