High Performance Parallel Computing

CSE 6230 -- High Performance Parallel Computing

* ''weak scaling'' - maximizing work performed per unit time
* ''strong scaling'' - minimizing time-to-solution
* ''lock-free'' - guaranteed system-wide progress
* ''wait-free'' - guaranteed per-thread progress (a small C sketch contrasting the two progress guarantees follows this list)
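
To make the two progress guarantees concrete, here is a minimal C11 sketch (an illustration, not course code): a compare-and-swap retry loop is lock-free, because whenever one thread's CAS fails some other thread's CAS has succeeded, yet a single unlucky thread may retry indefinitely; a native atomic fetch-and-add is wait-free, because every thread finishes its increment in a bounded number of steps (assuming the hardware provides the instruction rather than emulating it with a CAS loop).

 #include <stdatomic.h>
 #include <stdint.h>
 
 static atomic_uint_fast64_t counter;
 
 /* Lock-free increment: guarantees system-wide progress (a failed CAS
  * means some other thread's CAS succeeded), but any one thread may
  * retry an unbounded number of times under contention. */
 void increment_lock_free(void) {
     uint_fast64_t old = atomic_load_explicit(&counter, memory_order_relaxed);
     while (!atomic_compare_exchange_weak_explicit(&counter, &old, old + 1,
                                                   memory_order_relaxed,
                                                   memory_order_relaxed)) {
         /* on failure, 'old' now holds the current value; retry */
     }
 }
 
 /* Wait-free increment (assuming a native fetch-and-add instruction):
  * every thread completes its operation in a bounded number of steps. */
 void increment_wait_free(void) {
     atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
 }
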
==Measures of parallel algorithms==
* Cost: Cost(p) = p * exectime(p)
* Overhead: Overhead(p) = Cost(p) - Cost(1)
* Speedup: Speedup(p) = exectime(1) / exectime(p)
* Efficiency: Efficiency(p) = Speedup(p) / p (a short worked example follows this list)
* Scalable if:
** Efficiency stays Θ(1), i.e. bounded away from 0, as p approaches infinity
** Work(p) is linear in p
** Work per processor stays Θ(1), i.e. bounded away from 0, as p approaches infinity
* Isoefficiency: how fast must the problem size (working set) grow to maintain constant efficiency as processors are added?
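
A short worked example of these measures, with illustrative numbers (not from the course): suppose a fixed-size job (a strong-scaling measurement) takes exectime(1) = 100 s on one processor and exectime(8) = 16 s on eight processors. Then:

 \begin{align*}
 \mathrm{Speedup}(8)    &= \frac{\mathrm{exectime}(1)}{\mathrm{exectime}(8)} = \frac{100\,\mathrm{s}}{16\,\mathrm{s}} = 6.25\\
 \mathrm{Efficiency}(8) &= \frac{\mathrm{Speedup}(8)}{8} = \frac{6.25}{8} \approx 0.78\\
 \mathrm{Cost}(8)       &= 8 \times 16\,\mathrm{s} = 128~\text{processor-seconds}\\
 \mathrm{Overhead}(8)   &= \mathrm{Cost}(8) - \mathrm{Cost}(1) = 128 - 100 = 28~\text{processor-seconds}
 \end{align*}

Isoefficiency then asks how much the problem would have to grow to hold that 0.78 efficiency as p is increased further.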


==Papers==
* [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/5_hill.pdf Amdahl's Law in the Multicore Era] by MD Hill
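
For orientation, a sketch of the formulas the Hill and Marty paper builds on (my recollection of the model; see the paper for the authoritative statement): classic Amdahl's Law bounds the speedup of a program whose fraction f is parallelizable on n processors, and the paper's symmetric-multicore variant spends a chip budget of n base-core equivalents (BCEs) on n/r cores of r BCEs each, where a single r-BCE core is assumed to deliver sequential performance perf(r) ≈ √r:

 \begin{align*}
 \mathrm{Speedup}_{\mathrm{Amdahl}}(f, n)       &= \frac{1}{(1 - f) + \frac{f}{n}}\\
 \mathrm{Speedup}_{\mathrm{symmetric}}(f, n, r) &= \frac{1}{\frac{1 - f}{\mathrm{perf}(r)} + \frac{f \cdot r}{\mathrm{perf}(r) \cdot n}}
 \end{align*}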
