
High Performance Parallel Computing

From dankwiki
* ''weak scaling'' - maximizing work performed per unit time
* ''strong scaling'' - minimizing time-to-solution
* ''lock-free'' - guaranteed system-wide progress
* ''wait-free'' - guaranteed per-thread progress

==Measures of parallel algorithms==
* Cost: p * exectime(p)
* Overhead: Cost(p) - Cost(1)
* Speedup: exectime(1) / exectime(p)
* Efficiency: Speedup(p) / p
* Scalable if:
** Efficiency is Θ(1) as p approaches infinity (constant, ''not'' tending to 0)
** Work(p) is linear in p
** Work per processor is Θ(1) as p approaches infinity (does ''not'' shrink toward 0)
* Isoefficiency: How fast must our working set grow to maintain constant efficiency as processors are added?


==Papers==
* [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/5_hill.pdf Amdahl's Law in the Multicore Era] by MD Hill
* [[Lock-free algorithms]] page

Latest revision as of 11:47, 29 January 2010

CSE 6230 -- High Performance Parallel Computing
