CUDA

Hardware

NVIDIA maintains a list of supported hardware. For actual hardware, you'll need the "nvidia.ko" kernel module. Download the nvidia-kernel-source and nvidia-kernel-common packages, unpack /usr/src/nvidia-kernel.tar.bz2, and run make-kpkg modules_image. Install the resulting .deb, and modprobe nvidia. You'll see something like this in dmesg output:

nvidia: module license 'NVIDIA' taints kernel.
Disabling lock debugging due to kernel taint
nvidia 0000:07:00.0: enabling device (0000 -> 0003)
nvidia 0000:07:00.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21
nvidia 0000:07:00.0: setting latency timer to 64
NVRM: loading NVIDIA UNIX x86_64 Kernel Module  190.53  Wed Dec  9 15:29:46 PST 2009

Once the module is loaded, CUDA should be able to find the device. See below for sample outputs.
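
A quick way to verify this from code: the following sketch (using the runtime API; compile with nvcc) just counts devices. With the module loaded, it ought to report at least one.

#include <stdio.h>
#include <cuda_runtime.h>

// Count the CUDA devices visible through the runtime API.
int main(void){
	int count;

	if(cudaGetDeviceCount(&count) != cudaSuccess){
		fprintf(stderr, "Couldn't enumerate CUDA devices\n");
		return 1;
	}
	printf("%d CUDA device(s) detected\n", count);
	return 0;
}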

Emulation

Otherwise, there's emulation...

[recombinator](0) $ ~/local/cuda/C/bin/linux/emurelease/deviceQuery
CUDA Device Query (Runtime API) version (CUDART static linking)
There is no device supporting CUDA.

Device 0: "Device Emulation (CPU)"
  CUDA Driver Version:                           2.30
  CUDA Runtime Version:                          2.30
  CUDA Capability Major revision number:         9999
  CUDA Capability Minor revision number:         9999
  Total amount of global memory:                 4294967295 bytes
  Number of multiprocessors:                     16
  Number of cores:                               128
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       16384 bytes
  Total number of registers available per block: 8192
  Warp size:                                     1
  Maximum number of threads per block:           512
  Maximum sizes of each dimension of a block:    512 x 512 x 64
  Maximum sizes of each dimension of a grid:     65535 x 65535 x 1
  Maximum memory pitch:                          262144 bytes
  Texture alignment:                             256 bytes
  Clock rate:                                    1.35 GHz
  Concurrent copy and execution:                 No
  Run time limit on kernels:                     No
  Integrated:                                    Yes
  Support host page-locked memory mapping:       Yes
  Compute mode:                                  Default (multiple host threads can use this device simultaneously)

Test PASSED

Each device has a compute capability, though this doesn't encompass all differentiated capabilities (see also the deviceOverlap and canMapHostMemory members of struct cudaDeviceProp, among others); a sketch of querying them follows.
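
This is a minimal sketch, assuming the 2.3-era runtime API; it prints device 0's compute capability along with two of those extra cudaDeviceProp fields.

#include <stdio.h>
#include <cuda_runtime.h>

// Dump the compute capability, plus two capabilities it doesn't cover.
int main(void){
	struct cudaDeviceProp prop;

	if(cudaGetDeviceProperties(&prop, 0) != cudaSuccess){
		fprintf(stderr, "Couldn't get properties of device 0\n");
		return 1;
	}
	printf("Compute capability:            %d.%d\n", prop.major, prop.minor);
	printf("Concurrent copy and execution: %s\n", prop.deviceOverlap ? "Yes" : "No");
	printf("Map host page-locked memory:   %s\n", prop.canMapHostMemory ? "Yes" : "No");
	return 0;
}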

Installation on Debian

libcuda-dev packages exist in the non-free archive area, and supply the core library libcuda.so. Together with the upstream toolkit and SDK from NVIDIA, this provides a full CUDA development environment for 64-bit Debian Unstable systems. I installed CUDA 2.3 on 2010-01-25 (hand-rolled 2.6.32.6 kernel, built with gcc-4.4). This machine did not have CUDA-compatible hardware (it uses Intel 965).

  • Download the Ubuntu 9.04 files from NVIDIA's "CUDA Zone".
  • Run the toolkit installer (sh cudatoolkit_2.3_linux_64_ubuntu9.04.run)
    • For a user-mode install, supply $HOME/local or somesuch
* Please make sure your PATH includes /home/dank/local/cuda/bin
* Please make sure your LD_LIBRARY_PATH
*   for 32-bit Linux distributions includes /home/dank/local/cuda/lib
*   for 64-bit Linux distributions includes /home/dank/local/cuda/lib64
* OR
*   for 32-bit Linux distributions add /home/dank/local/cuda/lib
*   for 64-bit Linux distributions add /home/dank/local/cuda/lib64
* to /etc/ld.so.conf and run ldconfig as root

* Please read the release notes in /home/dank/local/cuda/doc/

* To uninstall CUDA, delete /home/dank/local/cuda
* Installation Complete
  • Run the SDK installer (sh cudasdk_2.3_linux.run)
    • I just installed it to the same directory as the toolkit, which seems to work fine.
========================================

Configuring SDK Makefile (/home/dank/local/cuda/shared/common.mk)...

========================================

* Please make sure your PATH includes /home/dank/local/cuda/bin
* Please make sure your LD_LIBRARY_PATH includes /home/dank/local/cuda/lib

* To uninstall the NVIDIA GPU Computing SDK, please delete /home/dank/local/cuda
* Installation Complete

Building CUDA Apps

nvcc flags

  • --ptxas-options=-v passes -v through to ptxas, displaying per-thread register usage

SDK's common.mk

This assumes use of the SDK's common.mk, as recommended by the documentation.

  • Add the library path to LD_LIBRARY_PATH, assuming CUDA's been installed to a non-standard directory.
  • Set CUDA_INSTALL_PATH and ROOTDIR (yeargh!) if building outside the SDK tree.
  • I keep the following in bin/cudasetup of my home directory. Source it, using sh's . cudasetup syntax:
CUDA="$HOME/local/cuda/"

export CUDA_INSTALL_PATH="$CUDA"
export ROOTDIR="$CUDA/C/common/"
if [ -n "$LD_LIBRARY_PATH" ] ; then
	export "LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA/lib64"
else
	export "LD_LIBRARY_PATH=$CUDA/lib64"
fi

unset CUDA
  • Set EXECUTABLE in your Makefile, and include $(CUDA_INSTALL_PATH)/C/common/common.mk

Unit testing

The .DEFAULT_GOAL special variable of GNU Make can be used to make testing the default target:

.PHONY: test
.DEFAULT_GOAL:=test

include $(CUDA_INSTALL_PATH)/C/common/common.mk

# common.mk supplies TARGET; note that the recipe line must begin with a tab
test: $(TARGET)
	$(TARGET)

Libraries

Two mutually exclusive means of driving CUDA are available: the "Driver API" (libcuda) and "C for CUDA" with its accompanying nvcc compiler and runtime (libcudart). The latter is built atop the former, and thus still requires the libcuda library.
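
For flavor, here's a minimal "C for CUDA" program, a sketch rather than anything from the SDK: the <<<grid, block>>> launch syntax and the memory shuffling are what libcudart provides; the driver API would spell the same thing out with cuInit(), cuModuleLoad(), and friends.

#include <stdio.h>
#include <cuda_runtime.h>

// Trivial kernel: each thread increments one element.
__global__ void inc(int *v){
	v[threadIdx.x] += 1;
}

int main(void){
	int host[32] = { 0 }, *dev;

	cudaMalloc((void **)&dev, sizeof(host));
	cudaMemcpy(dev, host, sizeof(host), cudaMemcpyHostToDevice);
	inc<<<1, 32>>>(dev);	// one block of 32 threads (one warp)
	cudaMemcpy(host, dev, sizeof(host), cudaMemcpyDeviceToHost);
	cudaFree(dev);
	printf("host[0] == %d\n", host[0]);	// 1 on success
	return 0;
}

Build it with nvcc (nvcc -o inc inc.cu), which compiles the device code itself and hands the host code off to gcc.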

CUDA model

  • A given host thread can execute code on only one device at once (but multiple host threads can execute code on the same device)
  • Each multiprocessor has a register file:
    • 8192 registers for compute capability <= 1.1,
    • 16384 for compute capabilities 1.2 and 1.3, and
    • 32768 for compute capability 2.0.
  • A group of threads which share a memory and can "synchronize their execution to coördinate accesses to memory" (use a barrier) form a block. Each thread has a threadId within its (three-dimensional) block.
    • For a block of dimensions <Dx, Dy, Dz>, the threadId of the thread having index <x, y, z> is (x + y * Dx + z * Dy * Dx).
  • Register allocation is performed per-block, and rounded up to the nearest
    • 256 registers per block for compute capability <= 1.1,
    • 512 registers per block for compute capabilities 1.2 and 1.3.
  • A group of blocks which share a kernel form a grid. Each block (and each thread within that block) has a blockId within its (two-dimensional) grid.
    • For a grid of dimensions <Dx, Dy>, the blockId of the block having index <x, y> is (x + y * Dx).
  • Thus, a given thread's <blockId X threadId> dyad is unique across the device. All the threads of a block share a blockId, and corresponding threads of various blocks share a threadId (see the kernel sketch following this list).
  • Each time the kernel is instantiated, new grid and block dimensions may be provided
  • A block's threads, starting from threadId 0, are broken up into contiguous warps having some warp size number of threads.
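
Here are those flattening rules as they'd appear in device code, a sketch (the kernel name and output array are illustrative, not from any SDK sample):

// Compute the intra-block threadId and intra-grid blockId per the
// formulas above (three-dimensional blocks, two-dimensional grid).
__global__ void whoami(unsigned *tids){
	unsigned tid = threadIdx.x + threadIdx.y * blockDim.x +
		threadIdx.z * blockDim.y * blockDim.x;
	unsigned bid = blockIdx.x + blockIdx.y * gridDim.x;
	unsigned threadsper = blockDim.x * blockDim.y * blockDim.z;

	// the <bid, tid> dyad uniquely identifies this thread's slot
	tids[bid * threadsper + tid] = tid;
}
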
Memory type      Replication  Access      Host access
Registers        Per-thread   Read-write  None
Local memory     Per-thread   Read-write  None
Shared memory    Per-block    Read-write  None
Global memory    Per-grid     Read-write  Read-write
Constant memory  Per-grid     Read        Read-write
Texture memory   Per-grid     Read        Read-write

Compute Capabilities

The original public CUDA revision was 1.0, implemented on the NV50 chipset corresponding to the GeForce 8 series. Compute capability, formed of a non-negative major and minor revision number, can be queried on CUDA-capable cards. All revisions thus far have been backwards-compatible.

Revision  Changes
1.1       Atomic ops on 32-bit global integers. Breakpoints and other debugging support.
1.2       Atomic ops on 64-bit global integers and 32-bit shared integers. 32 warps (1024 threads) and 16K registers per MP. Vote instructions.
1.3       Double-precision floating point.
2.0       Atomic addition on 32-bit global and shared FP. 48 warps (1536 threads), 48K shared memory banked 32 ways, and 32K registers per MP. 512K local memory per thread. __syncthreads_{count,and,or}(), __threadfence_system(), and __ballot(). 1024 threads per block and threadIdx.{x,y} values ranging through 1024. Larger texture references.
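
These revisions gate language features as well as hardware limits: nvcc won't emit global atomics, for instance, unless told to target at least 1.1. A sketch (build with nvcc -arch=sm_11; the names are illustrative):

// Histogram-style tally using the 32-bit global atomics introduced
// in compute capability 1.1.
__global__ void tally(int *bins, const int *keys){
	atomicAdd(&bins[keys[threadIdx.x]], 1);
}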

deviceQuery info

Compute capability 1.3

Compute capability 1.2

GeForce 310 (PCIe x16)

Device 0: "GeForce 310"
  CUDA Driver Version:                           3.0
  CUDA Runtime Version:                          2.30
  CUDA Capability Major revision number:         1
  CUDA Capability Minor revision number:         2
  Total amount of global memory:                 536084480 bytes
  Number of multiprocessors:                     2
  Number of cores:                               16
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       16384 bytes
  Total number of registers available per block: 16384
  Warp size:                                     32
  Maximum number of threads per block:           512
  Maximum sizes of each dimension of a block:    512 x 512 x 64
  Maximum sizes of each dimension of a grid:     65535 x 65535 x 1
  Maximum memory pitch:                          262144 bytes
  Texture alignment:                             256 bytes
  Clock rate:                                    1.40 GHz
  Concurrent copy and execution:                 Yes
  Run time limit on kernels:                     No
  Integrated:                                    No
  Support host page-locked memory mapping:       Yes
  Compute mode:                                  Default (multiple host threads can use this device simultaneously)

Compute capability 1.1

GeForce 8400 GS (PCI)

Device 0: "GeForce 8400 GS"
  CUDA Driver Version:                           2.30
  CUDA Runtime Version:                          2.30
  CUDA Capability Major revision number:         1
  CUDA Capability Minor revision number:         1
  Total amount of global memory:                 536608768 bytes
  Number of multiprocessors:                     1
  Number of cores:                               8
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       16384 bytes
  Total number of registers available per block: 8192
  Warp size:                                     32
  Maximum number of threads per block:           512
  Maximum sizes of each dimension of a block:    512 x 512 x 64
  Maximum sizes of each dimension of a grid:     65535 x 65535 x 1
  Maximum memory pitch:                          262144 bytes
  Texture alignment:                             256 bytes
  Clock rate:                                    1.40 GHz
  Concurrent copy and execution:                 No
  Run time limit on kernels:                     No
  Integrated:                                    No
  Support host page-locked memory mapping:       No
  Compute mode:                                  Default (multiple host threads can use this device simultaneously)

See Also