
University of Manchester GPU FAQ


alt="NVIDIA CUDA Research Centre"

  • Software for GPUs, including compilers/directives, maths libraries and tools (debuggers and profilers)


Please log in and add your own questions or solutions.

Performance

  • Q: how do I get max performance from NVIDIA cards?
    1. A: use pinned (page-locked) host memory and asynchronous transfers so that copies can overlap with computation (see the sketch below)
  • Q: how do I profile my GPU codes?
    1. use the profiler shipped with the CUDA SDK
    2. add calls to clock(), but beware asynchronicity: kernel launches return before the kernel finishes, so synchronise (or use CUDA events, as in the sketch below) before reading the timer
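
A minimal sketch of both points above, assuming a single device and an illustrative kernel called scale (not part of the FAQ): pinned host memory from cudaHostAlloc, asynchronous copies on a stream, and cudaEvent timing so the measurement is not distorted by asynchronous launches.

{{{
#include <stdio.h>
#include <cuda_runtime.h>

/* Illustrative kernel: multiply each element by 2. */
__global__ void scale(float *d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    /* Pinned (page-locked) host memory: needed for truly asynchronous copies. */
    float *h;
    cudaHostAlloc((void **)&h, bytes, cudaHostAllocDefault);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc((void **)&d, bytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    /* Events give GPU-side timestamps, so timing is not fooled by
       asynchronous kernel launches (cf. host-side clock()). */
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, stream);
    cudaMemcpyAsync(d, h, bytes, cudaMemcpyHostToDevice, stream);  /* returns immediately */
    scale<<<(n + 255) / 256, 256, 0, stream>>>(d, n);
    cudaMemcpyAsync(h, d, bytes, cudaMemcpyDeviceToHost, stream);
    cudaEventRecord(stop, stream);

    /* Synchronise before reading the timer or the results. */
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("copy+kernel+copy took %.3f ms, h[0] = %f\n", ms, h[0]);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaStreamDestroy(stream);
    cudaFree(d);
    cudaFreeHost(h);
    return 0;
}
}}}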

  • Q: how do I determine the amount of memory actually used by the GPU?
    1. A: CUDA has cuda-memcheck for detecting memory errors and leaks; the runtime call cudaMemGetInfo reports free and total device memory, from which current usage follows (see the sketch below)
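
A short sketch, assuming the CUDA runtime API: cudaMemGetInfo reports free and total device memory, so the amount in use is total minus free (this reflects everything on the device, not just your process). cuda-memcheck, by contrast, is run from the command line around an existing executable.

{{{
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    size_t free_bytes = 0, total_bytes = 0;

    /* Free and total memory on the currently selected device. */
    if (cudaMemGetInfo(&free_bytes, &total_bytes) != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed\n");
        return 1;
    }

    /* "Used" here includes allocations made by other processes on the card. */
    printf("GPU memory: %zu MB used of %zu MB total\n",
           (total_bytes - free_bytes) >> 20, total_bytes >> 20);
    return 0;
}
}}}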

  • Q: how do I debug my GPU codes?
    1. the CUDA SDK has cuda-gdb; compile with nvcc -g -G to get host and device debug symbols, and check return codes as in the error-checking sketch below
  • Q: how can I tell if my NVIDIA card is running in exclusive mode or not?
    1. A: load the SDK, run deviceQuery and examine the computeMode output; the mode can also be queried programmatically (see the sketch after the next question)
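
Besides cuda-gdb, a cheap first line of defence when debugging is to check every runtime call and kernel launch for errors; a minimal sketch (the CHECK macro is only an illustration, not part of any SDK):

{{{
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Illustrative helper: abort with file/line if a CUDA call failed. */
#define CHECK(call)                                                    \
    do {                                                               \
        cudaError_t err = (call);                                      \
        if (err != cudaSuccess) {                                      \
            fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,         \
                    cudaGetErrorString(err));                          \
            exit(EXIT_FAILURE);                                        \
        }                                                              \
    } while (0)

__global__ void touch(int *p) { *p = 42; }

int main(void)
{
    int *d;
    CHECK(cudaMalloc((void **)&d, sizeof(int)));

    touch<<<1, 1>>>(d);
    /* Kernel launches are asynchronous and do not return an error code
       directly, so query the error state explicitly. */
    CHECK(cudaGetLastError());
    CHECK(cudaDeviceSynchronize());

    CHECK(cudaFree(d));
    return 0;
}
}}}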

  • Q: what does the computeMode output mean on NVIDIA cards?

    1. CUDA 4.0 has 4 compute modes
      • Default: Multiple host threads can use the device
      • Exclusive-process: Only one CUDA context may be created on the device across all processes in the system and that context may be current to as many threads as desired within the process that created that context.
      • Exclusive-process-and-thread: Only one CUDA context may be created on the device across all processes in the system and that context may only be current to one thread at a time.
      • Prohibited: No CUDA context can be created on the device.
    2. CUDA 3 has 3 compute modes
      • Default - same as CUDA 4.0
      • Exclusive - only one host thread to use device at any given time.
      • Prohibited - same as CUDA 4.0
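
The compute mode can also be read from within a program rather than via deviceQuery; a sketch using cudaGetDeviceProperties (the enum names below come from the CUDA runtime API; cudaComputeModeExclusiveProcess requires CUDA 4.0 or later):

{{{
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int dev = 0;
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, dev) != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties failed\n");
        return 1;
    }

    /* prop.computeMode is the same value that deviceQuery reports. */
    switch (prop.computeMode) {
    case cudaComputeModeDefault:          printf("Default\n");           break;
    case cudaComputeModeExclusive:        printf("Exclusive\n");         break;
    case cudaComputeModeProhibited:       printf("Prohibited\n");        break;
    case cudaComputeModeExclusiveProcess: printf("Exclusive-process\n"); break;
    default:                              printf("Unknown mode %d\n", prop.computeMode);
    }
    return 0;
}
}}}
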
  • Q: how can I (and only I) run on a GPU card?
    1. There are two ways of setting this up on NVIDIA cards:
      1. set your card to be in exclusive mode (see above); OR

      2. grab a whole node (using, e.g., qsub -pe mpich 4 if there are 4 slots per node) AND explicitly select each GPU (e.g. from your CUDA or OpenCL context) if using more than one card; otherwise the kernels all run on the same card (see the device-selection sketch at the end of this answer)

    2. The HECToR GPU testbed consists of 4 nodes, each with multiple GPUs (see here). It is therefore important to consider how programs running on the same node share its GPUs, which is controlled by the compute modes.

    3. For more information see section 3.6 in the NVIDIA CUDA C Programming Guide

    4. On the HECToR GPU Testbed all NVIDIA GPUs are in Default compute mode (checked on 10 June 2011)
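
For option 2 above (one process or thread per GPU on a whole node), each CUDA context must be bound to a different device, otherwise everything lands on device 0. A sketch, assuming the device index comes from something like an MPI rank or a command-line argument (the modulo mapping is only illustrative):

{{{
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);
    if (ngpus == 0) { fprintf(stderr, "no CUDA devices\n"); return 1; }

    /* Pick a device index per process, e.g. from the command line,
       an MPI rank, or a scheduler-provided environment variable. */
    int rank = (argc > 1) ? atoi(argv[1]) : 0;
    int dev  = rank % ngpus;            /* illustrative mapping only */

    /* Call before anything that creates a context, otherwise the
       default device (0) is used by every process. */
    cudaSetDevice(dev);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);
    printf("process %d using device %d: %s\n", rank, dev, prop.name);
    return 0;
}
}}}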

  • Q: What about exclusive use of AMD cards?

Accessing GPU Resources

  • Q: how do I use the GPU testbed on HECToR?
    1. A: see Accessing and running; contact us at <its-research@manchester.ac.uk> if you'd like trial access