[[File:Gt200die-big.jpg|right|thumb|A "Fermi" GT200 die]]
==Hardware==
NVIDIA maintains a list of [http://www.nvidia.com/object/cuda_learn_products.html supported hardware]. You'll need the "nvidia.ko" kernel module. On [[Debian]], use the <tt>nvidia-kernel-dkms</tt> package to build a module appropriate for your kernel (and automatically rebuild it upon kernel upgrades). You can also download the <tt>nvidia-kernel-source</tt> and <tt>nvidia-kernel-common</tt> packages, unpack <tt>/usr/src/nvidia-kernel.tar.bz2</tt>, and run <tt>make-kpkg modules_image</tt>. Install the resulting .deb, and modprobe nvidia. You'll see something like this in dmesg output:<pre>nvidia: module license 'NVIDIA' taints kernel.
Disabling lock debugging due to kernel taint
nvidia 0000:07:00.0: enabling device (0000 -> 0003)
nvidia 0000:07:00.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21
nvidia 0000:07:00.0: setting latency timer to 64
NVRM: loading NVIDIA UNIX x86_64 Kernel Module  190.53  Wed Dec  9 15:29:46 PST 2009</pre>
Once the module is loaded, CUDA should be able to find the device. See [[CUDA#deviceQuery_Output|below]] for sample outputs. Each device has a [[CUDA#Compute_Capabilities|compute capability]], though this does not encompass all differentiated capabilities (see also <tt>deviceOverlap</tt> and <tt>canMapHostMemory</tt>...). Note that "emulation mode" has been removed as of CUDA Toolkit Version 3.1.
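A quick way to confirm what the runtime sees is to query each device's properties yourself. The following is a minimal sketch using the CUDA runtime API (it is not the SDK's <tt>deviceQuery</tt>; the file name and output format are arbitrary):
<pre>
// query.cu -- enumerate CUDA devices and print the properties discussed above.
#include <cstdio>
#include <cuda_runtime.h>

int main(void){
    int count = 0;
    if(cudaGetDeviceCount(&count) != cudaSuccess || count == 0){
        fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }
    for(int d = 0 ; d < count ; ++d){
        cudaDeviceProp prop;
        if(cudaGetDeviceProperties(&prop, d) != cudaSuccess){
            continue;
        }
        printf("%d: %s (compute capability %d.%d) deviceOverlap=%d canMapHostMemory=%d\n",
               d, prop.name, prop.major, prop.minor,
               prop.deviceOverlap, prop.canMapHostMemory);
    }
    return 0;
}
</pre>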
==CUDA model==
===Host===
* A host contains zero or more CUDA-capable devices (emulation must be used if zero devices are available).
* It can run multiple CUDA processes, each composed of one or more host threads.
* A given host thread can execute code on only one device at once.
* Multiple host threads can execute code on the same device.
===Device===
* A device packages a streaming processor array (SPA), a memory interface, and possibly memory (global memory, a.k.a. device memory).
** In CUDA terminology, an integrated (vs discrete) device does not have its own global memory.
** Specially-prepared global memory is designated constant memory, and can be cached.
* Pinned (locked) host memory avoids a bounce buffer, accelerating transfers.
** Larger one-time setup cost due to device register programming for DMA transfers.
** This memory will be unswappable -- allocate only as much as is needed.
* Pinned memory can be mapped directly into CUDAspace on ''integrated'' devices or in the presence of some [[IOMMU|IOMMUs]] (see the sketch after this list).
** "Zero (explicit)-copy" interface (can never hide all bus delays)
* Write-combining memory (configured via [[MTRR|MTRRs]] or [[Page Attribute Tables|PATs]]) avoids PCI snoop requirements and maximizes linear throughput
** Subtle side-effects; not to be used glibly or carelessly!
* Distributes work at block granularity to Texture Processing Clusters (TPCs).
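For illustration, here is a hedged sketch of mapped pinned allocation (the function name is invented; error handling elided; assumes a device which reports <tt>canMapHostMemory</tt>):
<pre>
// Allocate mapped, pinned host memory and obtain the device-side alias for
// zero-(explicit-)copy access. On older toolkits/devices,
// cudaSetDeviceFlags(cudaDeviceMapHost) must have been called beforehand.
#include <cuda_runtime.h>

float *alloc_mapped(size_t n, float **devptr){
    float *host = NULL;
    // cudaHostAllocMapped maps the allocation into the device address space;
    // cudaHostAllocWriteCombined opts into write-combining (host reads become slow).
    cudaHostAlloc((void **)&host, n * sizeof(float),
                  cudaHostAllocMapped | cudaHostAllocWriteCombined);
    // Device-visible pointer aliasing the pinned host allocation:
    cudaHostGetDevicePointer((void **)devptr, host, 0);
    return host;
}
</pre>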
===Texture Processing Cluster===
Streaming Multiprocessors (SMs) are grouped into TPCs. Each TPC contains some number of SMs and a single texture processing unit, including a few filters and a cache for texture memory. The details of these texture caches have not generally been publicized, but NVIDIA optimization guides confirm 1- and 2-dimensional spatial caching to be in effect.
===Streaming Multiprocessor===
* Each SM has a register file, fast local (''shared'') memory, a cache for constant memory, an instruction cache (ROP), a multithreaded instruction dispatcher, and some number of [[#Stream Processor|Stream Processors]] (SPs).
** 8K registers for compute capability <= 1.1, otherwise
** 16K for compute capability <= 1.3, otherwise
** 32K for compute capability <= 2.1, otherwise
** 64K through at least compute capability 3.5
* A group of threads which share a memory and can "synchronize their execution to coördinate accesses to memory" (use a [[barrier]]) form a '''block'''. Each thread has a ''threadId'' within its (three-dimensional) block.
** For a block of dimensions &lt;D<sub>x</sub>, D<sub>y</sub>, D<sub>z</sub>&gt;, the threadId of the thread having index &lt;x, y, z&gt; is (x + y * D<sub>x</sub> + z * D<sub>y</sub> * D<sub>x</sub>).
* Register allocation is performed per-block, and rounded up to the nearest
** 256 registers per block for compute capability <= 1.1, otherwise
** 512 registers per block for compute capability <= 1.3
* A group of blocks which share a kernel form a '''grid'''. Each block (and each thread within that block) has a ''blockId'' within its (two-dimensional) grid.
** For a grid of dimensions &lt;D<sub>x</sub>, D<sub>y</sub>&gt;, the blockId of the block having index &lt;x, y&gt; is (x + y * D<sub>x</sub>) (both linearizations are illustrated in the sketch after this list).
* Thus, a given thread's &lt;blockId X threadId&gt; dyad is unique across the grid. All the threads of a block share a blockId, and corresponding threads of various blocks share a threadId.
* Each time the kernel is instantiated, new grid and block dimensions may be provided.
* A block's threads, starting from threadId 0, are broken up into contiguous warps having some ''warp size'' number of threads.
* Distributes out-of-order work at warp granularity across SPs.
** One program counter per warp -- divergence within warp leads to serialization.
** Divergence is trivially supported with a per-warp stack; warps reconverge at immediate post-dominators of branches.
* Supports some maximum number of blocks and threads (~8 and ~768 on G80).
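To make the two linearizations concrete, here is a hedged sketch (the kernel name is invented) computing them from the built-in index and dimension variables:
<pre>
// Compute the linearized threadId, blockId, and a grid-unique global index.
__global__ void show_ids(unsigned *out){
    // threadId of index <x, y, z> within a block of dimensions <Dx, Dy, Dz>:
    unsigned tid = threadIdx.x
                 + threadIdx.y * blockDim.x
                 + threadIdx.z * blockDim.y * blockDim.x;
    // blockId of index <x, y> within a grid of dimensions <Dx, Dy>:
    unsigned bid = blockIdx.x + blockIdx.y * gridDim.x;
    // The <blockId, threadId> dyad is unique across the grid:
    unsigned threads_per_block = blockDim.x * blockDim.y * blockDim.z;
    out[bid * threads_per_block + tid] = tid;
}
</pre>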
===Block sizing===
FIXME: review/verify this!

How tightly can we bound the optimal block size '''T''', given a warp size ''w''? The number of threads per block ought almost always be a multiple of ''w'', both to:
* facilitate coalescing (coalescing requirements are related to ''w''/2), and
* maximize utilization of SPs within warp-granular scheduling.
A SM has ''r'' registers and ''s'' words of shared memory, allocated per-block (see above). Assuming that ''w'' threads can be supported (i.e., that none requires more than ''r''/''w'' registers or ''s''/''w'' words of shared memory), the most obvious lower bound is ''w'' itself. The most obvious upper bound, assuming arbitrary available work, is the greatest multiple of ''w'' supported by hardware (and, obviously, the SDK). A block must be scheduled to an SM, which requires:
* registers sufficient to support the block,
* shared memory sufficient to support the block,
* that the total number of threads not exceed some limit ''t'' (likely bounding the divergence-tracking stacks), and
* that the total number of blocks not exceed some limit ''b'' (likely bounding the warp-scheduling complexity).
A given SM, then, supports '''T''' values through the minimum of {''r''/'''Thr<sub>reg</sub>''', ''s''/'''Blk<sub>shmem</sub>''', and ''t''}; as the block requires fewer registers and less shared memory, the upper bound converges to ''t''.

Motivations for larger blocks include:
* freedom in the ''b'' dimension exposes parallelism until ''t'' <= ''b'' * '''T'''
* larger maximum possible kernels (an absolute limit exists on grid dimensions)
* better if data can be reused among threads (e.g. in tiled matrix multiply)
Motivations for smaller blocks include:
* freedom in the ''t'' dimension exposes parallelism until ''t'' >= ''b'' * '''T'''
* freedom in the ''r'' and ''s'' dimensions exposes parallelism until ''r'' >= ''b'' * '''T''' * '''Thr<sub>reg</sub>''' or ''s'' >= ''b'' * '''Blk<sub>shmem</sub>'''.
* cheaper per-block operations(?) (<tt>__syncthreads()</tt>, voting, etc)
* support for older hardware and SDKs
* fairer distribution among SMs and thus possibly better utilization, lower latency
** relative speedup tends to 0 as work grows arbitrarily on finite SMs
** relative speedup tends to 1/'''Frac<sub>par</sub>''' on infinitely many SMs
We can now optimize occupancy for a specific {''t'', ''b'', ''r'' and ''s''}, assuming ''t'' to be a multiple of both ''w'' and ''b'':
* Let '''T''' = ''t'' / ''b''. '''T''' is thus guaranteed to be the smallest multiple of ''w'' such that ''t'' == ''b'' * '''T'''.
* Check the ''r'' and ''w'' conditions. FIXME: handle reduction
* FIXME: handle very large (external) kernels
Optimizing for ranges of hardware values is left as an exercise for the reader. Occupancy is only worth optimizing if the number of warps is insufficient to hide latencies. It might be possible to eliminate latencies altogether by reusing data throughout a block via shared memory; if the algorithm permits, this is almost certainly a net win. In that case, we likely want to maximize '''Blk<sub>shmem</sub>'''. A more advanced theory would incorporate the arithmetic intensity of a kernel...FIXME
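These days the runtime will do the register/shared-memory arithmetic for a compiled kernel itself. A hedged sketch (the kernel <tt>my_kernel</tt> is a placeholder, not from this article):
<pre>
// Report how many blocks of a given size fit per SM for my_kernel, given its
// actual register and shared memory consumption, and the implied occupancy.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void my_kernel(float *out){ if(out){ out[threadIdx.x] = 0.0f; } }

void report_occupancy(int block_size, size_t dynamic_shmem){
    int blocks_per_sm = 0;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocks_per_sm, my_kernel,
                                                  block_size, dynamic_shmem);
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    int active_warps = blocks_per_sm * block_size / prop.warpSize;
    int max_warps = prop.maxThreadsPerMultiProcessor / prop.warpSize;
    printf("%d blocks/SM -> occupancy %.0f%%\n", blocks_per_sm,
           100.0 * active_warps / max_warps);
}
</pre>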
===Stream Processor===
* In-order, multithreaded processor: memory latencies can be hidden only by TLP, not ILP.
** '''UPDATE''' Vasily Volkov's awesome GTC 2010 paper, "[http://www.cs.berkeley.edu/~volkov/volkov10-GTC.pdf Better Performance at Lower Occupancy]", ''destroys'' this notion.
*** Really. Go read Vasily's paper. It's better than anything you'll find here.
** Arithmetic intensity and parallelism are paramount!
** Memory-bound kernels require sufficiently high ''occupancy'' (the ratio of concurrently-running warps to maximum possible concurrent warps (as applied, usually, to [[#Streaming Multiprocessor|SMs]])) to hide latency.
* No branch prediction or speculation. Full predication.
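The serialization point is easy to see in a toy kernel (a hedged sketch, not from the article): when lanes of one warp take different paths, the warp runs both paths with the inactive lanes predicated off.
<pre>
// Lanes diverge on parity, so each warp executes both branches serially
// under predication; there is no branch predictor to mispredict.
__global__ void divergent(int *out){
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if(threadIdx.x % 2 == 0){
        out[idx] = idx * 2;     // even lanes active here
    }else{
        out[idx] = idx * 3 + 1; // odd lanes active here
    }
}
</pre>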
{| border="1"
! Memory type
! PTX name
! Sharing
! Kernel access
! Host access
! Cache location
! Addressable
|-
| Registers
| .reg
| Per-thread
| Read-write
| None
| None
| No
|-
| Special registers
| .sreg
| varies
| Read-only
| None
| None
| No
|-
| Local memory
| .local
| Per-thread
| Read-write
| None
| None
| Yes
|-
| Shared memory
| .shared
| Per-block
| Read-write
| None
| None
| Yes
|-
| Global memory
| .global
| Global
| Read-write
| Read-write
| '''1.x''': None<br/>'''2.0+''': L1 on SM, L2 on TPC(?)
| Yes
|-
| Constant memory
| .const
| Per-grid
| Read
| Read-write
| Streaming multiprocessor
| Yes
|-
| Texture memory
| .tex
| Global
| Read
| Read-write
| Texture processing cluster
| texture API
|-
| Parameters (to grids or functions)
| .param
| Per-grid (or per-thread)
| Read-only (or read-write)
| None
| None
| Yes (or restricted)
|}
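In CUDA C these state spaces correspond roughly to the storage qualifiers below; a hedged sketch (names invented, assumes blocks of 128 threads), whose generated PTX can be inspected with <tt>nvcc -ptx</tt>:
<pre>
// One variable per memory type from the table above.
__constant__ float coeff[16];          // .const: per-grid, cached, written by the host

__global__ void spaces(const float *gin, float *gout){ // gin/gout point into .global
    __shared__ float tile[128];        // .shared: per-block
    float r = gin[threadIdx.x];        // scalar locals live in .reg registers
    float spill[64];                   // dynamically-indexed locals typically land in .local
    spill[threadIdx.x % 64] = r;
    tile[threadIdx.x] = r * coeff[threadIdx.x % 16];
    __syncthreads();
    gout[threadIdx.x] = tile[(threadIdx.x + 1) % 128] + spill[threadIdx.x % 64];
}
</pre>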


===Compute Capabilities===
The original public CUDA revision was 1.0, implemented on the NV50 chipset corresponding to the GeForce 8 series. Compute capability, formed of a non-negative major and minor revision number, can be queried on CUDA-capable cards. All revisions thus far have been forwards-compatible, though recent CUDA toolkits will not generate code for compute capability 1.x or 2.x.
 
{| border="1" class="wikitable"
! Resource
! 1.0 SM
! 1.1 SM
! 1.2 SM
! 1.3 SM
! 2.0 SM
! 2.1 SM
! 3.0 SMX
! 3.5 SMX
! 7.0 SM
! 7.5 SM
|-
|CUDA cores
|8
|8
|8
|8
|32
|48
|192
|192
|64/32<br/>64/8
|64/2<br/>64/8
|-
|Schedulers
|1
|1
|1
|1
|2
|2
|4
|4
|4
|4
|-
|Insts/sched
|1
|1
|1
|1
|1
|2
|2
|2
|1
|1
|-
|Threads
|768
|768
|1K
|1K
|1536
|1536
|2K
|2K
|2K
|1K
|-
|Warps
|24
|24
|32
|32
|48
|48
|64
|64
|64
|32
|-
|Blocks
|8
|8
|8
|8
|8
|8
|16
|16
|32
|16
|-
|32-bit regs
|8K
|8K
|16K
|16K
|32K
|32K
|64K
|64K
|64K
|64K
|-
|Examples
|G80
|G9x
|GT21x
|GT200
|GF110
|GF10x
|GK104
|GK110
|GV100
|TU10x
|-
|}
{| border="1"
! Revision
! Changes
|-
| 1.1
|
* Atomic ops on 32-bit global integers.
* Breakpoints and other debugging support.
|-
| 1.2
|
* Atomic ops on 64-bit global integers and 32-bit shared integers.
* 32 warps (1024 threads) and 16K registers per multiprocessor (MP).
* Vote instructions.
* Three MPs per Texture Processing Cluster (TPC).
* Relaxed memory coalescing constraints.
|-
| 1.3
|
* Double-precision floating point at 32 cycles per operation.
|-
| 2.0
|
* 32 cores per SM
* 4 SFUs
* Atomic addition on 32-bit global and shared FP.
* 48 warps (1536 threads), 48K shared memory banked 32 ways, and 32K registers per MP.
* 512K local memory per thread.
* <tt>__syncthreads_{count,and,or}()</tt>, <tt>__threadfence_system()</tt>, and <tt>__ballot()</tt>.
* 1024 threads per block and <tt>blockIdx.{x,y}</tt> values ranging through 1024.
* Larger texture references.
* ''PTX 2.0''
** Efficient uniform addressing (<tt>ldu</tt>)
** Unified address space: <tt>isspacep</tt>/<tt>cvta</tt>
** Prefetching: <tt>prefetch</tt>/<tt>prefetchu</tt>
** Cache modifiers on loads and stores: <tt>.ca</tt>, <tt>.cg</tt>, <tt>.cs</tt>, <tt>.lu</tt>, <tt>.cv</tt>
** New integer ops: <tt>popc</tt>/<tt>clz</tt>/<tt>bfind</tt>/<tt>brev</tt>/<tt>bfe</tt>/<tt>bfi</tt>
** Video ops: <tt>vadd</tt>, <tt>vsub</tt>, <tt>vabsdiff</tt>, <tt>vmin</tt>, <tt>vmax</tt>, <tt>vshl</tt>, <tt>vshr</tt>, <tt>vmad</tt>, <tt>vset</tt>
** New special registers: <tt>nsmid</tt>, <tt>clock64</tt>, ...
|-
| 2.1
|
* 48 cores per SM
* 8 SFUs per SM, 8 TFUs per ROP
* 2 warp schedulers per SM, capable of issuing two instructions per clock
|-
| 3.0
|
* 192 cores per SMX
* 32 SFUs per SMX, 32 TFUs per ROP
* 4 warp schedulers per SMX, capable of issuing two instructions per clock
* Double-precision instructions can be paired with non-DP
** Previously, double-precision instructions couldn't be paired with anything
* ''PTX 3.0''
** <tt>madc</tt> and <tt>mad.cc</tt> instructions
** Cubemaps and cubearrays for the <tt>tex</tt> instruction
** 3D surfaces via the <tt>suld.b.3d</tt> and <tt>sust.b.3d</tt> instructions
** <tt>pmevent.mask</tt> to trigger multiple performance counters
** 64-bit grid IDs
** 4 more performance counters, for a total of 8
** DWARF debugging symbols support
|-
| 3.5
|
* 255 registers per thread
* "CUDA Dynamic Parallelism", the ability to spawn threads from within device code
* ''PTX 3.1''
** A funnel shift instruction, <tt>shf</tt>
** Loading read-only global data through the non-coherent texture cache, <tt>ld.global.nc</tt>
** 64-bit atomic/reduction operators extended to {or, xor, and, integer min, integer max}
** Mipmap type support
** Indirect texture/surface support
** Extends generic addressing to include the const state space
|-
| 7.0
|
* ''PTX 6.3''
* Tensor cores
* Independent thread scheduling
|-
| 7.5
|
* ''PTX 6.4''
* Integer matrix multiplication in tensor cores
|-
|}
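As a taste of the compute-capability-2.0 intrinsics in the table above, here is a hedged sketch (kernel and parameter names invented) using <tt>__syncthreads_count()</tt>:
<pre>
// Each block counts how many of its elements exceed a threshold. The barrier
// returns the number of threads whose predicate was non-zero (CC >= 2.0).
__global__ void count_over_threshold(const float *in, int *block_counts,
                                     int n, float thresh){
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int pred = (idx < n) && (in[idx] > thresh);
    int count = __syncthreads_count(pred);
    if(threadIdx.x == 0){
        block_counts[blockIdx.x] = count;
    }
}
</pre>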


==PTX==
===Syntax Coloring===
[[File:ptxcolor.png|thumb|right|PTX with syntax coloring]]
I've got a [[vim]] syntax coloring file for PTX/NVIR/SASS at https://raw.github.com/dankamongmen/dankhome/master/.vim/syntax/nvir.vim. It operates by coloring all registers congruent to some integer mod 10 the same color:
<pre>syn match asmReg0 "v\?R[0-9]*0\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg1 "v\?R[0-9]*1\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg2 "v\?R[0-9]*2\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg3 "v\?R[0-9]*3\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg4 "v\?R[0-9]*4\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg5 "v\?R[0-9]*5\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg6 "v\?R[0-9]*6\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg7 "v\?R[0-9]*7\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg8 "v\?R[0-9]*8\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg9 "v\?R[0-9]*9\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmPReg "P[0-9]\([0-9]*\)\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmBB "BB[0-9][0-9]*\(_\d\d*\)\?"
syn match asmBBNew "BB-\d\d*"
syn match nvirNT ".NEXT_TRUE.*"
syn match nvirNF ".NEXT_FALSE.*"
syn match hexconst "0x\x\+\(\.F\|\.U\?\(I\|L\)\)\?"
syn match spreg "\(ctaid\|ntid\|tid\|nctaid\).\(x\|y\|z\)"</pre>


==Building CUDA Apps==
===nvcc flags===
Pass flags to <tt>ptxas</tt> via -X:
* <tt>-X -v</tt> displays per-thread register usage
* <tt>-X -abi=no</tt> disables the PTX ABI, saving registers but taking away your stack
* <tt>-dlcm={cg,cs,ca}</tt> modifies cache behavior for loads
* <tt>-dscm={cw,cs}</tt> modifies cache behavior for stores
===SDK's common.mk===
This assumes use of the SDK's common.mk, as recommended by the documentation.


==deviceQuery info==
* Memory shown is that amount which is free; I've substituted total VRAM.
* Most CUDA devices can switch between multiple frequencies; the "Clock rate" output ought be considered accurate only at a given moment, and the outputs listed here are merely illustrative.
* Three device modes are currently supported:
** 0: Default (multiple applications can use the device)
** 1: Exclusive (only one application may use the device; other calls to <tt>cuCtxCreate</tt> will fail)
** 2: Disabled (no applications may use the device; all calls to <tt>cuCtxCreate</tt> will fail)
* The mode can be set using <tt>nvidia-smi</tt>'s -c option, specifying the device number via -g.
* A run time limit is activated by default if the device is being used to drive a display.
* Please feel free to [mailto:nickblack@acm.org send me output!]
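If you'd like to contribute a row, the table's columns map roughly onto <tt>cudaDeviceProp</tt> fields; a hedged sketch (not the SDK's deviceQuery, and it reports total rather than free memory):
<pre>
// Print (for device 0) the columns used in the table below.
#include <cstdio>
#include <cuda_runtime.h>

int main(void){
    cudaDeviceProp p;
    if(cudaGetDeviceProperties(&p, 0) != cudaSuccess){
        return 1;
    }
    printf("%s: %zuMB, %d MPs, shmem/block %zub, reg/block %d, warp %d, "
           "thr/block %d, texalign %zub, clock %.2fGHz, C+E %d, integrated %d, "
           "shared maps %d\n",
           p.name, (size_t)(p.totalGlobalMem >> 20), p.multiProcessorCount,
           p.sharedMemPerBlock, p.regsPerBlock, p.warpSize,
           p.maxThreadsPerBlock, p.textureAlignment,
           p.clockRate / 1e6, p.deviceOverlap, p.integrated,
           p.canMapHostMemory);
    return 0;
}
</pre>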




{| border="1"
! Device name !! Memory !! MP's !! Cores !! Shmem/block !! Reg/block !! Warp size !! Thr/block !! Texalign !! Clock !! C+E? !! Integrated? !! Shared maps?
|-
! COLSPAN="13" style="background:#eebeb6;" | Compute capability 7.0
|-
| Tesla V100 || 16GB || 84 || 5376/2688/672 || || || || || || 1.53GHz || Yes || No || Yes
|-
! COLSPAN="13" style="background:#8070D8;" | Compute capability 3.0
|-
| GeForce GTX 680 || 1.5GB || 8 || 1536 || || || || || || || Yes || No || Yes
|-
! COLSPAN="13" style="background:#ffdead;" | Compute capability 2.1
|-
| GeForce GTX 560 Ti || || || || || || || || || || || ||
|-
| GeForce GTX 550 Ti || || || || || || || || || || || ||
|-
| GeForce GTX 460 || 1GB || 7 || 224 || 48k || 32k || 32 || 1024 || 512b || 1.35GHz || Yes || No || Yes
|-
| GeForce GTS 450 || || || || || || || || || || || ||
|-
! COLSPAN="13" style="background:#ffdead;" | Compute capability 2.0
|-
| GeForce GTX 580 || 1.5GB || 16 || 512 || || || 32 || 1024 || || 1.544GHz || Yes || No || Yes
|-
| Tesla C2050 (*CB) || 3GB || 14 || 448 || 48k || 32k || 32 || 1024 || 512b || 1.15GHz || Yes || No || Yes
|-
| Tesla C2070 (*CB) || 6GB || 14 || 448 || 48k || 32k || 32 || 1024 || 512b || 1.15GHz || Yes || No || Yes
|-
| GeForce GTX 480 || 1536MB || 15 || 480 || || || || || || || || ||
|-
| GeForce GTX 470 || 1280MB || 14 || 448 || || || || || || || || ||
|-
! COLSPAN="13" style="background:#efefef;" | Compute capability 1.3
|-
| Tesla C1060 || 4GB || 30 || 240 || 16384b || 16384 || 32 || 512 || 256b || 1.30GHz || Yes || No || Yes
|-
| GeForce GTX 295 || 1GB || 30 || 240 || 16384b || 16384 || 32 || 512 || 256b || 1.24GHz || Yes || No || Yes
|-
| GeForce GTX 285 || 1GB || 30 || 240 || 16384b || 16384 || 32 || 512 || 256b || 1.48GHz || Yes || No || Yes
|-
| GeForce GTX 280 || 1GB || 30 || 240 || 16384b || 16384 || 32 || 512 || 256b || 1.30GHz || Yes || No || Yes
|-
| GeForce GTX 260 || 1GB || 27 || 216 || 16384b || 16384 || 32 || 512 || 256b || 1.47GHz || Yes || No || Yes
|-
! COLSPAN="13" style="background:#efefef;" | Compute capability 1.2
|-
| GeForce GT 360M || 1GB || 12 || 96 || 16384b || 16384 || 32 || 512 || 256b || 1.32GHz || Yes || No || Yes
|-
| GeForce 310 || 512MB || 2 || 16 || 16384b || 16384 || 32 || 512 || 256b || 1.40GHz || Yes || No || Yes
|-
| GeForce 240 GT || 1GB || 12 || 96 || 16384b || 16384 || 32 || 512 || 256b || 1.424GHz || Yes || No || Yes
|-
! COLSPAN="13" style="background:#efefef;" | Compute capability 1.1
|-
| ION || 256MB || 2 || 16 || 16384b || 8192 || 32 || 512 || 256b || 1.1GHz || No || Yes || Yes
|-
| Quadro FX 570 || 256MB || 2 || 16 || 16384b || 8192 || 32 || 512 || 256b || 0.92GHz || Yes || No || No
|-
| GeForce GTS 250 (*JR) || 1G || 16 || 128 || 16384b || 8192 || 32 || 512 || 256b || 1.84GHz || Yes || No || No
|-
| GeForce 9800 GTX || 512MB || 16 || 128 || 16384b || 8192 || 32 || 512 || 256b || 1.67GHz || Yes || Yes || Yes
|-
| GeForce 9600 GT || 512MB || 8 || 64 || 16384b || 8192 || 32 || 512 || 256b || 1.62GHz, 1.50GHz || Yes || No || No
|-
| GeForce 9400M || 256MB || 2 || 16 || 16384b || 8192 || 32 || 512 || 256b || 0.88GHz || No || No || No
|-
| GeForce 8800 GTS 512 || 512MB || 16 || 128 || 16384b || 8192 || 32 || 512 || 256b || 1.62GHz || Yes || No || No
|-
| GeForce 8600 GT || 256MB || 4 || 32 || 16384b || 8192 || 32 || 512 || 256b || 0.95GHz || Yes || No || No
|-
| GeForce 9400M || 512MB || 1 || 8 || 16384b || 8192 || 32 || 512 || 256b || 1.40GHz || No || No || No
|-
|}
(*CB) Thanks to Cameron Black for this submission!
(*JR) Thanks to Javier Ruiz for this submission!


==See Also==
* The [http://code.google.com/p/gpuocelot/ gpuocelot] project, hosted on Google Code.
* The NVIDIA [http://developer.nvidia.com/object/gpucomputing.html GPU Developer Zone]
* My [[CUBAR]] tools and reverse-engineered [[libcudest]]
[[CATEGORY: GPGPU]]

Latest revision as of 01:33, 15 August 2019

A "Fermi" GT200 die

Hardware

NVIDIA maintains a list of supported hardware. You'll need the "nvidia.ko" kernel module. On Debian, use the nvidia-kernel-dkms package to build a module appropriate for your kernel (and automatically rebuild it upon kernel upgrades). You can also download the nvidia-kernel-source and nvidia-kernel-common packages, unpack /usr/src/nvidia-kernel.tar.bz2, and run make-kpkg modules_image. Install the resulting .deb, and modprobe nvidia. You'll see something like this in dmesg output:

nvidia: module license 'NVIDIA' taints kernel.
Disabling lock debugging due to kernel taint
nvidia 0000:07:00.0: enabling device (0000 -> 0003)
nvidia 0000:07:00.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21
nvidia 0000:07:00.0: setting latency timer to 64
NVRM: loading NVIDIA UNIX x86_64 Kernel Module  190.53  Wed Dec  9 15:29:46 PST 2009

Once the module is loaded, CUDA should be able to find the device. See below for sample outputs. Each device has a compute capability, though this does not encompass all differentiated capabilities (see also deviceOverlap and canMapHostMemory...). Note that "emulation mode" has been removed as of CUDA Toolkit Version 3.1.

CUDA model

Host

  • A host contains zero or more CUDA-capable devices (emulation must be used if zero devices are available).
  • It can run multiple CUDA processes, each composed of one or more host threads.
  • A given host thread can execute code on only one device at once.
  • Multiple host threads can execute code on the same device.

Device

  • A device packages a streaming processor array (SPA), a memory interface, and possibly memory (global memory. device memory).
    • In CUDA terminology, an integrated (vs discrete) device does not have its own global memory.
    • Specially-prepared global memory is designated constant memory, and can be cached.
  • Pinned (locked) host memory avoids a bounce buffer, accelerating transfers.
    • Larger one-time setup cost due to device register programming for DMA transfers.
    • This memory will be unswappable -- allocate only as much as is needed.
  • Pinned memory can be mapped directly into CUDAspace on integrated devices or in the presence of some IOMMUs.
    • "Zero (explicit)-copy" interface (can never hide all bus delays)
  • Write-combining memory (configured via MTRRs or PATs) avoids PCI snoop requirements and maximizes linear throughput
    • Subtle side-effects; not to be used glibly or carelessly!
  • Distributes work at block granularity to Texture Processing Clusters (TPCs).

Texture Processing Cluster

Streaming Multiprocessors (SMs) are grouped into TPCs. Each TPC contains some number of SMs and a single texture processing unit, including a few filters and a cache for texture memory. The details of these texture caches have not generally been publicized, but NVIDIA optimization guides confirm 1- and 2-dimensional spatial caching to be in effect.

Streaming Multiprocessor

  • Each SM has a register file, fast local (shared) memory, a cache for constant memory, an instruction cache (ROP), a multithreaded instruction dispatcher, and some number of Stream Processors (SPs).
    • 8K registers for compute capability <= 1.1, otherwise
    • 16K for compute capability <= 1.3, otherwise
    • 32K for compute capability <= 2.1, otherwise
    • 64K through at least compute capability 3.5
  • A group of threads which share a memory and can "synchronize their execution to coördinate accesses to memory" (use a barrier) form a block. Each thread has a threadId within its (three-dimensional) block.
    • For a block of dimensions <Dx, Dy, Dz>, the threadId of the thread having index <x, y, z> is (x + y * Dx + z * Dy * Dx).
  • Register allocation is performed per-block, and rounded up to the nearest
    • 256 registers per block for compute capability <= 1.1, otherwise
    • 512 registers per block for compute capability <= 1.3
  • A group of blocks which share a kernel form a grid. Each block (and each thread within that block) has a blockId within its (two-dimensional) grid.
    • For a grid of dimensions <Dx, Dy>, the blockId of the block having index <x, y> is (x + y * Dx).
  • Thus, a given thread's <blockId X threadId> dyad is unique across the grid. All the threads of a block share a blockId, and corresponding threads of various blocks share a threadId.
  • Each time the kernel is instantiated, new grid and block dimensions may be provided.
  • A block's threads, starting from threadId 0, are broken up into contiguous warps having some warp size number of threads.
  • Distributes out-of-order work at warp granularity across SPs.
    • One program counter per warp -- divergence within warp leads to serialization.
    • Divergence is trivially supported with a per-warp stack; warps reconverge at immediate post-dominators of branches
  • Supports some maximum number of blocks and threads (~8 and ~768 on G80).

Block sizing

FIXME: review/verify this!

How tightly can we bound the optimal block size T, given a warp size w? The number of threads per block ought almost always be a multiple of w, both to:

  • facilitate coalescing (coalescing requirements are related to w/2), and
  • maximize utilization of SPs within warp-granular scheduling.

A SM has r registers and s words of shared memory, allocated per-block (see above). Assuming that w threads can be supported (i.e., that none requires more than r/w registers or s/w words of shared memory), the most obvious lower bound is w itself. The most obvious upper bound, assuming arbitrary available work, is the greatest multiple of w supported by hardware (and, obviously, the SDK). A block must be scheduled to an SM, which requires:

  • registers sufficient to support the block,
  • shared memory sufficient to support the block,
  • that the total number of threads not exceed some limit t (likely bounding the divergence-tracking stacks), and
  • that the total number of blocks not exceed some limit b (likely bounding the warp-scheduling complexity).

A given SM, then, supports T values through the minimum of {r/Thrreg, s/Blkshmem, and t}; as the block requires fewer registers and less shared memory, the upper bound converges to t.

Motivations for larger blocks include:

  • freedom in the b dimension exposes parallelism until t <= b * T
  • larger maximum possible kernels (an absolute limit exists on grid dimensions)
  • better if data can be reused among threads (e.g. in tiled matrix multiply)

Motivations for smaller blocks include:

  • freedom in the t dimension exposes parallelism until t >= b * T
  • freedom in the r and s dimensions exposes parallelism until r >= b * T * Thrreg or s >= b * Blkshmem.
  • cheaper per-block operations(?) (__syncthreads(), voting, etc)
  • support for older hardware and SDKs
  • fairer distribution among SMs and thus possibly better utilization, lower latency
    • relative speedup tends to 0 as work grows arbitrarily on finite SMs
    • relative speedup tends to 1/Fracpar on infinitely many SMs

We can now optimize occupancy for a specific {t, b, r and s}, assuming t to be a multiple of both w and b:

  • Let T = t / b. T is thus guaranteed to be the smallest multiple of w such that t == b * T.
  • Check the r and w conditions. FIXME: handle reduction
  • FIXME: handle very large (external) kernels

Optimizing for ranges of hardware values is left as an exercise for the reader. Occupancy is only worth optimizing if the number of warps are insufficient to hide latencies. It might be possible to eliminate latencies altogether by reusing data throughout a block via shared memory; if the algorithm permits, this is almost certainly a net win. In that case, we likely want to maximize Blkshmem. A more advanced theory would incorporate the arithmetic intensity of a kernel...FIXME

Stream Processor

  • In-order, multithreaded processor: memory latencies can be hidden only by TLP, not ILP.
    • UPDATE Vasily Volkov's awesome GTC 2010 paper, "Better Performance at Lower Occupancy", destroys this notion.
      • Really. Go read Vasily's paper. It's better than anything you'll find here.
    • Arithmetic intensity and parallelism are paramount!
    • Memory-bound kernels require sufficiently high occupancy (the ratio of concurrently-running warps to maximum possible concurrent warps (as applied, usually, to SMs)) to hide latency.
  • No branch prediction or speculation. Full predication.
Memory type PTX name Sharing Kernel access Host access Cache location Adddressable
Registers .reg Per-thread Read-write None None No
Special registers .sreg varies Read-only None None No
Local memory .local Per-thread Read-write None None Yes
Shared memory .shared Per-block Read-write None None Yes
Global memory .global Global Read-write Read-write 1.x: None

2.0+: L1 on SM, L2 on TPC(?)

Yes
Constant memory .const Per-grid Read Read-write Stream multiprocessor Yes
Texture memory .tex Global Read Read-write Texture processing cluster texture API
Parameters (to grids or functions) .param Per-grid (or per-thread) Read-only (or read-write) None None Yes (or restricted)

Compute Capabilities

The original public CUDA revision was 1.0, implemented on the NV50 chipset corresponding to the GeForce 8 series. Compute capability, formed of a non-negative major and minor revision number, can be queried on CUDA-capable cards. All revisions thus far have been fowards-compatible, though recent CUDA toolkits will not generate code for CC1 or 2.

Resource | 1.0 SM | 1.1 SM | 1.2 SM | 1.3 SM | 2.0 SM | 2.1 SM | 3.0 SMX | 3.5 SMX | 7.0 SM | 7.5 SM
CUDA cores | 8 | 8 | 8 | 8 | 32 | 48 | 192 | 192 | 64/32, 64/8 | 64/2, 64/8
Schedulers | 1 | 1 | 1 | 1 | 2 | 2 | 4 | 4 | 4 | 4
Insts/sched | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 1 | 1
Threads | 768 | 768 | 1K | 1K | 1536 | 1536 | 2K | 2K | 2K | 1K
Warps | 24 | 24 | 32 | 32 | 48 | 48 | 64 | 64 | 64 | 32
Blocks | 8 | 8 | 8 | 8 | 8 | 8 | 16 | 16 | 32 | 16
32-bit regs | 8K | 8K | 16K | 16K | 32K | 32K | 64K | 64K | 64K | 64K
Examples | G80 | G9x | GT21x | GT200 | GF110 | GF10x | GK104 | GK110 | GV100 | TU10x
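
A minimal sketch of querying the compute capability (and multiprocessor count) through the runtime API; the major/minor pair indexes into the table above:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void){
	int count;
	if(cudaGetDeviceCount(&count) != cudaSuccess){
		return 1;
	}
	for(int d = 0 ; d < count ; ++d){
		struct cudaDeviceProp prop;
		if(cudaGetDeviceProperties(&prop, d) == cudaSuccess){
			printf("%d: %s CC %d.%d (%d MPs)\n", d, prop.name,
				prop.major, prop.minor, prop.multiProcessorCount);
		}
	}
	return 0;
}
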
Revision Changes
1.1
  • Atomic ops on 32-bit global integers.
  • Breakpoints and other debugging support.
1.2
  • Atomic ops on 64-bit global integers and 32-bit shared integers.
  • 32 warps (1024 threads) and 16K registers per multiprocessor (MP).
  • Vote instructions.
  • Three MPs per Texture Processing Cluster (TPC).
  • Relaxed memory coalescing constraints.
1.3
  • Double-precision floating point at 32 cycles per operation.
2.0
  • 32 cores per SM
  • 4 SFUs
  • Atomic addition on 32-bit global and shared FP.
  • 48 warps (1536 threads), 48K shared memory banked 32 ways, and 32K registers per MP.
  • 512K local memory per thread.
  • __syncthreads_{count,and,or}(), __threadfence_system(), and __ballot() (see the sketch after this list).
  • 1024 threads per block and blockDim.{x,y} values ranging through 1024.
  • Larger texture references.
  • PTX 2.0
    • Efficient uniform addressing (ldu)
    • Unified address space: isspacep/cvta
    • Prefetching: prefetch/prefetchu
    • Cache modifiers on loads and stores: .ca, .cg, .cs, .lu, .cv
    • New integer ops: popc/clz/bfind/brev/bfe/bfi
    • Video ops: vadd, vsub, vabsdiff, vmin, vmax, vshl, vshr, vmad, vset
    • New special registers: nsmid, clock64, ...
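
A minimal sketch of the vote and barrier additions above. It assumes a one-dimensional launch that exactly covers the input, blockDim.x a multiple of the warp size, and warp_masks/block_count sized by the caller; on CC 7.0+ toolkits the warp vote is spelled __ballot_sync() with an explicit mask:

__global__ void count_positive(const float *in, unsigned *warp_masks, int *block_count){
	int i = blockIdx.x * blockDim.x + threadIdx.x;
	int pred = in[i] > 0.0f;
	unsigned mask = __ballot(pred);            /* one bit per thread in the warp */
	if(threadIdx.x % warpSize == 0){
		warp_masks[i / warpSize] = mask;   /* lane 0 records its warp's mask */
	}
	int total = __syncthreads_count(pred);     /* barrier plus block-wide count */
	if(threadIdx.x == 0){
		block_count[blockIdx.x] = total;
	}
}
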
2.1
  • 48 cores per SM
  • 8 SFUs per SM, 8 TFUs per ROP
  • 2 warp schedulers per SM, capable of issuing two instructions per clock
3.0
  • 192 cores per SMX
  • 32 SFUs per SMX, 32 TFUs per ROP
  • 4 warp schedulers per SMX, capable of issuing two instructions per clock
  • Double-precision instructions can be paired with non-DP
    • Previously, double-precision instructions couldn't be paired with anything
  • PTX 3.0
    • madc and mad.cc instructions
    • Cubemaps and cubearrays for the tex instruction
    • 3D surfaces via the suld.b.3d and sust.b.3d instructions
    • pmevent.mask to trigger multiple performance counters
    • 64-bit grid IDs
    • 4 more performance counters, for a total of 8
    • DWARF debugging symbols support
3.5
  • 255 registers per thread
  • "CUDA Dynamic Parallelism", the ability to spawn threads from within device code
  • PTX 3.1
    • A funnel shift instruction, shf
    • Loading read-only global data through the non-coherent texture cache, ld.global.nc
    • 64-bit atomic/reduction operators extended to {or, xor, and, integer min, integer max}
    • Mipmap type support
    • Indirect texture/surface support
    • Extends generic addressing to include the const state space
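
A minimal sketch of a device-side launch; the kernel names are illustrative, and the program must be built with -rdc=true and linked against cudadevrt:

__global__ void child(int *data, int n){
	int i = blockIdx.x * blockDim.x + threadIdx.x;
	if(i < n){
		data[i] *= 2;
	}
}

__global__ void parent(int *data, int n){
	if(blockIdx.x == 0 && threadIdx.x == 0){
		child<<<(n + 255) / 256, 256>>>(data, n);  /* launched from device code */
		cudaDeviceSynchronize();                   /* wait on the child grid (CC 3.5-era idiom) */
	}
}
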
7.0
  • PTX 6.3
  • Tensor cores
  • Independent thread scheduling
7.5
  • PTX 6.4
  • Integer matrix multiplication in tensor cores

PTX

Syntax Coloring

PTX with syntax coloring

I've got a vim syntax coloring file for PTX/NVIR/SASS at https://raw.github.com/dankamongmen/dankhome/master/.vim/syntax/nvir.vim. It operates by giving all registers whose numbers are congruent mod 10 the same color:

syn match asmReg0	"v\?R[0-9]*0\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg1	"v\?R[0-9]*1\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg2	"v\?R[0-9]*2\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg3	"v\?R[0-9]*3\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg4	"v\?R[0-9]*4\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg5	"v\?R[0-9]*5\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg6	"v\?R[0-9]*6\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg7	"v\?R[0-9]*7\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg8	"v\?R[0-9]*8\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmReg9	"v\?R[0-9]*9\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmPReg	"P[0-9]\([0-9]*\)\(\.B\|\.F\|\.U\?\(I\|L\)\|\([^0-9]\)\@=\)"
syn match asmBB		"BB[0-9][0-9]*\(_\d\d*\)\?"
syn match asmBBNew	"BB-\d\d*"
syn match nvirNT	".NEXT_TRUE.*"
syn match nvirNF	".NEXT_FALSE.*"
syn match hexconst	"0x\x\+\(\.F\|\.U\?\(I\|L\)\)\?"
syn match spreg		"\(ctaid\|ntid\|tid\|nctaid\).\(x\|y\|z\)"

Building CUDA Apps

nvcc flags

Pass flags through to ptxas via -Xptxas (equivalently, --ptxas-options); see the invocation after this list:

  • -Xptxas -v displays per-thread register usage
  • -Xptxas -abi=no disables the PTX ABI, saving registers but taking away your stack
  • -Xptxas -dlcm={cg,cs,ca} modifies cache behavior for loads
  • -Xptxas -dscm={cw,cs} modifies cache behavior for stores
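
An illustrative invocation (the target architecture and file names are placeholders):

nvcc -O3 -arch=sm_20 -Xptxas -v -Xptxas -dlcm=cg -o kernel kernel.cu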

SDK's common.mk

This assumes use of the SDK's common.mk, as recommended by the documentation.

  • Add the library path to LD_LIBRARY_PATH, assuming CUDA's been installed to a non-standard directory.
  • Set the CUDA_INSTALL_PATH and ROOTDIR (yeargh!) if outside the SDK.
  • I keep the following in bin/cudasetup of my home directory. Source it, using sh's . cudasetup syntax:
CUDA="$HOME/local/cuda/"

export CUDA_INSTALL_PATH="$CUDA"
export ROOTDIR="$CUDA/C/common/"
if [ -n "$LD_LIBRARY_PATH" ] ; then
	export "LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA/lib64"
else
	export "LD_LIBRARY_PATH=$CUDA/lib64"
fi

unset CUDA
  • Set EXECUTABLE in your Makefile, and include $(CUDA_INSTALL_PATH)/C/common/common.mk

Unit testing

The DEFAULT_GOAL special variable of GNU Make can be used:

.PHONY: test
.DEFAULT_GOAL:=test

include $(CUDA_INSTALL_PATH)/C/common/common.mk

test: $(TARGET)
        $(TARGET)

Libraries

Two mutually exclusive means of driving CUDA are available: the "Driver API" and "C for CUDA" with its accompanying nvcc compiler and runtime. The latter (libcudart) is built atop the former, and requires the driver's libcuda library.
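
A minimal sketch of the same allocation done each way (error handling trimmed; whether the two may be mixed in one process depends on the toolkit version):

#include <cuda.h>          /* Driver API, provided by the driver's libcuda */
#include <cuda_runtime.h>  /* Runtime API, provided by libcudart */

/* Driver API: explicit initialization and context management. */
static int driver_alloc(void){
	CUdevice dev;
	CUcontext ctx;
	CUdeviceptr dptr;
	if(cuInit(0) != CUDA_SUCCESS || cuDeviceGet(&dev, 0) != CUDA_SUCCESS){
		return -1;
	}
	cuCtxCreate(&ctx, 0, dev);
	cuMemAlloc(&dptr, 1 << 20);
	cuMemFree(dptr);
	cuCtxDestroy(ctx);
	return 0;
}

/* Runtime API: a context is created implicitly on first use. */
static int runtime_alloc(void){
	void *p;
	if(cudaMalloc(&p, 1 << 20) != cudaSuccess){
		return -1;
	}
	return cudaFree(p) == cudaSuccess ? 0 : -1;
}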

Undocumented Functions

The following unlisted functions were extracted from 3.0's libcudart.so using objdump -T:

00000000000097d0 g    DF .text	000000000000020e  Base        __cudaRegisterShared
0000000000005410 g    DF .text	0000000000000003  Base        __cudaSynchronizeThreads
0000000000009e60 g    DF .text	0000000000000246  Base        __cudaRegisterVar
000000000000a0b0 g    DF .text	0000000000000455  Base        __cudaRegisterFatBinary
00000000000095c0 g    DF .text	000000000000020e  Base        __cudaRegisterSharedVar
0000000000005420 g    DF .text	0000000000000002  Base        __cudaTextureFetch
000000000000a510 g    DF .text	00000000000009dd  Base        __cudaUnregisterFatBinary
00000000000099e0 g    DF .text	000000000000024e  Base        __cudaRegisterFunction
0000000000005820 g    DF .text	000000000000001c  Base        __cudaMutexOperation
0000000000009c30 g    DF .text	000000000000022e  Base        __cudaRegisterTexture

deviceQuery info

  • Memory shown by deviceQuery is the free amount; I've substituted total VRAM.
  • Most CUDA devices can switch between multiple frequencies; the "Clock rate" output ought to be considered accurate only at a given moment, and the clocks listed here are merely illustrative.
  • Three device modes are currently supported (see the sketch after this list):
    • 0: Default (multiple applications can use the device)
    • 1: Exclusive (only one application may use the device; other calls to cuCtxCreate will fail)
    • 2: Disabled (no applications may use the device; all calls to cuCtxCreate will fail)
  • The mode can be set using nvidia-smi's -c option, specifying the device number via -g.
  • A run time limit is activated by default if the device is being used to drive a display.
  • Please feel free to send me output!
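
A minimal sketch of checking the compute mode (along with the momentary clock and total memory mentioned above) before grabbing device 0:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void){
	struct cudaDeviceProp prop;
	if(cudaGetDeviceProperties(&prop, 0) != cudaSuccess){
		return 1;
	}
	if(prop.computeMode == cudaComputeModeProhibited){
		fprintf(stderr, "device 0 is disabled\n");
		return 1;
	}
	/* In exclusive mode, another process may already hold the device. */
	printf("mode %d, clock %.2fGHz, %zuMB total\n", prop.computeMode,
		prop.clockRate / 1e6, (size_t)(prop.totalGlobalMem >> 20));
	return 0;
}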


Device name | Memory | MPs | Cores | Shmem/block | Reg/block | Warp size | Thr/block | Texalign | Clock | C+E? | Integrated? | Shared maps?
Compute capability 7.0
Tesla V100 | 16GB | 84 | 5376/2688/672 | - | - | - | - | - | 1.53GHz | Yes | No | Yes
Compute capability 3.0
GeForce GTX 680 | 1.5GB | 8 | 1536 | - | - | - | - | - | - | Yes | No | Yes
Compute capability 2.1
GeForce GTX 560 Ti
GeForce GTX 550 Ti
GeForce GTX 460 | 1GB | 7 | 224 | 48k | 32k | 32 | 1024 | 512b | 1.35GHz | Yes | No | Yes
GeForce GTS 450
Compute capability 2.0
GeForce GTX 580 | 1.5GB | 16 | 512 | - | - | 32 | 1024 | - | 1.544GHz | Yes | No | Yes
Tesla C2050 (*CB) | 3GB | 14 | 448 | 48k | 32k | 32 | 1024 | 512b | 1.15GHz | Yes | No | Yes
Tesla C2070 (*CB) | 6GB | 14 | 448 | 48k | 32k | 32 | 1024 | 512b | 1.15GHz | Yes | No | Yes
GeForce GTX 480 | 1536MB | 15 | 480 | - | - | - | - | - | - | - | - | -
GeForce GTX 470 | 1280MB | 14 | 448 | - | - | - | - | - | - | - | - | -
Compute capability 1.3
Tesla C1060 | 4GB | 30 | 240 | 16384b | 16384 | 32 | 512 | 256b | 1.30GHz | Yes | No | Yes
GeForce GTX 295 | 1GB | 30 | 240 | 16384b | 16384 | 32 | 512 | 256b | 1.24GHz | Yes | No | Yes
GeForce GTX 285 | 1GB | 30 | 240 | 16384b | 16384 | 32 | 512 | 256b | 1.48GHz | Yes | No | Yes
GeForce GTX 280 | 1GB | 30 | 240 | 16384b | 16384 | 32 | 512 | 256b | 1.30GHz | Yes | No | Yes
GeForce GTX 260 | 1GB | 27 | 216 | 16384b | 16384 | 32 | 512 | 256b | 1.47GHz | Yes | No | Yes
Compute capability 1.2
GeForce GT 360M | 1GB | 12 | 96 | 16384b | 16384 | 32 | 512 | 256b | 1.32GHz | Yes | No | Yes
GeForce 310 | 512MB | 2 | 16 | 16384b | 16384 | 32 | 512 | 256b | 1.40GHz | Yes | No | Yes
GeForce 240 GT | 1GB | 12 | 96 | 16384b | 16384 | 32 | 512 | 256b | 1.424GHz | Yes | No | Yes
Compute capability 1.1
ION | 256MB | 2 | 16 | 16384b | 8192 | 32 | 512 | 256b | 1.1GHz | No | Yes | Yes
Quadro FX 570 | 256MB | 2 | 16 | 16384b | 8192 | 32 | 512 | 256b | 0.92GHz | Yes | No | No
GeForce GTS 250 (*JR) | 1G | 16 | 128 | 16384b | 8192 | 32 | 512 | 256b | 1.84GHz | Yes | No | No
GeForce 9800 GTX | 512MB | 16 | 128 | 16384b | 8192 | 32 | 512 | 256b | 1.67GHz | Yes | Yes | Yes
GeForce 9600 GT | 512MB | 8 | 64 | 16384b | 8192 | 32 | 512 | 256b | 1.62GHz, 1.50GHz | Yes | No | No
GeForce 9400M | 256MB | 2 | 16 | 16384b | 8192 | 32 | 512 | 256b | 0.88GHz | No | No | No
GeForce 8800 GTS 512 | 512MB | 16 | 128 | 16384b | 8192 | 32 | 512 | 256b | 1.62GHz | Yes | No | No
GeForce 8600 GT | 256MB | 4 | 32 | 16384b | 8192 | 32 | 512 | 256b | 0.95GHz | Yes | No | No
GeForce 9400M | 512MB | 1 | 8 | 16384b | 8192 | 32 | 512 | 256b | 1.40GHz | No | No | No

(*CB) Thanks to Cameron Black for this submission! (*JR) Thanks to Javier Ruiz for this submission!

See Also