Libtorque
A work of tightly-controlled audacity
libtorque is a multithreaded event library for UNIX designed to take full advantage of the manycore, heterogeneous, NUMA future. The project proposal lays out the motivation for libtorque: I believe it necessary to take scheduling and memory-placement decisions into account to handle events optimally, especially on manycore machines and especially under unexpected traffic sets (denial-of-service attacks, oversubscribed pipes, mixed-latency connections, etc.). Along the way, I intend to shake up the UNIX programming idiom; my hope is that libtorque leads more network programmers to think about the complex issues involved, while simplifying rather than aggravating a task already fraught with difficulty.
Previous, non-threaded event libraries include libevent, libev and liboop.
Resources
- git hosting from GitHub (dankamongmen/libtorque project page)
- Lots of good data in the README!
- git clone from git://github.com/dankamongmen/libtorque.git
- bugzilla, hosted at http://dank.qemfd.net/bugzilla/
- A presentation I did for GT's Arch-Whiskey seminar, 2009-11-13
{{#bugzilla:
|group=milestone |bar=status
}}
Important open bugs
{{#bugzilla:
|product=libtorque |severity=critical,major |status=!(RESOLVED,VERIFIED)
}}
Design/Functionality
libtorque exposes an affinity-managing, continuations-based, architecture-aware, multithreaded scheduler and various utility functions, including an architecture-, OS-, and thread-aware allocator with strong scheduler feedbacks. It can analyze arbitrary object code via libdl and libelf, discover where instructions live, and allocate around those areas. It can take dynamically balanced or (static) asymmetric interrupt loads into account. By making service decisions and allocations based on whole-system effects, libtorque provides low latency and high throughput under even pedantically asymmetric, irregular loads. By carefully distributing edge-triggered event descriptors among various processors' notification sets, highly scalable use is made of advanced low-level primitives such as kqueue and epoll. Through the use of lazy asynchronous I/O, each (expensive!) core is kept busy doing real work.
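The descriptor-distribution idea (one edge-triggered notification set per processor, with event sources partitioned among them) can be sketched with Linux's epoll; kqueue admits the same shape. Everything here is an illustrative stand-in, not libtorque's API, and the modulo assignment is a placeholder for real topology-aware policy:

```c
#define _GNU_SOURCE
#include <stdlib.h>
#include <unistd.h>
#include <sys/epoll.h>

// One edge-triggered notification set per online processor. Each event
// source belongs to exactly one set, so threads servicing different sets
// need no lock on the dispatch hot path.
static int *epfds;
static long ncpus;

static int init_notification_sets(void){
  ncpus = sysconf(_SC_NPROCESSORS_ONLN);
  if(ncpus < 1 || (epfds = malloc(sizeof(*epfds) * ncpus)) == NULL){
    return -1;
  }
  for(long i = 0 ; i < ncpus ; ++i){
    if((epfds[i] = epoll_create1(0)) < 0){
      return -1;
    }
  }
  return 0;
}

// Hypothetical assignment policy: hash the descriptor onto a set. The real
// decision would weigh cache topology and instantaneous load.
static int add_event_source(int fd){
  struct epoll_event ev = {
    .events = EPOLLIN | EPOLLET, // edge-triggered input readiness
    .data = { .fd = fd },
  };
  return epoll_ctl(epfds[fd % ncpus], EPOLL_CTL_ADD, fd, &ev);
}
```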
- Q: Why aren't you letting the OS manage scheduling, thus following the advice of just about everyone (assuming no real-time requirements)?
- A: Scalable event notification on UNIX is based around stateful polling (even in signal-driven POSIX asynchronous I/O, we want stateful polling on signals, both for performance and for event unification). The distribution of these events, at heart, drives scheduling. Since we must influence this distribution, we must make scheduling decisions.
- Q: Why not just launch a thread per configured processing element, and use an existing event-handling library?
- A: These threads will have to communicate with one another to spread around event sources. If reshuffling is desired, they must communicate further; if not, the system can be forced into dynamic behavior. The information most pertinent to this shuffling is at the event-handling level; we thus make the decisions there.
- Q: How is this any better than Win32's I/O Completion Ports or Solaris 10's Event Completion Framework?
- A: These two systems are laudable, and provide much of libtorque's functionality (on different operating systems, of course). An algorithm expressible in IOCP can be expressed in libtorque's CPS; the opposite might not always be true (that is, I suspect IOCP ⊆ libtorque, but that the converse is not strictly true FIXME detail). More importantly, libtorque performs architecture- and topology-aware event handling within its CPUSet, which a user of IOCP must provide.
- This having been said, they ought be much easier to build atop than the Linux and FreeBSD primitives.
System discovery
- Exciting news! Thanks to the efforts of Dr. David Bader and Dr. Richard Vuduc of Georgia Tech's Computational Science and Engineering program, I now have access to a Niagara 2 machine. Expect SPARC and OpenSolaris support soon!
- THANK YOU!, Professors Bader and Vuduc!
- Full support for CPUID as most recently defined by Intel and AMD (more advanced, as of 2009-10-31, than x86info)
- Full support for Linux and FreeBSD's native cpuset libraries, and SGI's libcpuset and libNUMA
- Discovers and makes available, for each processor type:
- ISA, ISA-specific capabilities, and number of concurrent threads supported (degrees of SMT)
- Line count, associativity, line length, geometry, and type of all caches
- Entry count, associativity, page size and type of all TLBs
- Inclusiveness relationships among cache and TLB levels
- Interconnection topology, APIC IDs, and how caches are shared among processors
- More: properties of hardware prefetching, ability to support non-temporal loads (MOVNTDQA, PREFETCHNTA, etc)
- Discovers and makes available, for each memory node type:
- Connected processor groups and relative distance information
- Number of pages and bank geometry
- More: OS page NUMA/prefetching policies, error-recovery info
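As a taste of what CPUID-based discovery involves, here is a sketch reading the line length of the first-enumerated cache via Intel's deterministic cache parameters leaf (0x04), using GCC's cpuid.h. AMD reports this through extended leaves instead, and libtorque's actual discovery goes far deeper; this covers only the Intel-style leaf:

```c
#include <cpuid.h>

// Query CPUID leaf 0x04, subleaf 0 (Intel's deterministic cache parameters)
// for the first cache's line length. Returns 0 if the leaf or subleaf is
// unsupported (as on AMD, which uses leaves 0x80000005/0x80000006).
static unsigned l1_line_size(void){
  unsigned eax, ebx, ecx, edx;
  if(!__get_cpuid_count(4, 0, &eax, &ebx, &ecx, &edx)){
    return 0; // leaf unsupported on this processor
  }
  if((eax & 0x1f) == 0){
    return 0; // cache type field 0: no cache described at this subleaf
  }
  return (ebx & 0xfff) + 1; // EBX bits 11:0 hold (system coherency line size - 1)
}
```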
archdetect
- Utility built/packaged with libtorque:
 Package 0: (8 threads total)
  Core 0: 1 9 (2x processor type 1)
  Core 1: 3 11 (2x processor type 1)
  Core 2: 5 13 (2x processor type 1)
  Core 3: 7 15 (2x processor type 1)
 Package 1: (8 threads total)
  Core 0: 0 8 (2x processor type 1)
  Core 1: 2 10 (2x processor type 1)
  Core 2: 4 12 (2x processor type 1)
  Core 3: 6 14 (2x processor type 1)
 (  1x) Memory node 1 of 1: 12292928KB (11.723 GB) total, 4KB pages
 ( 16x) Processing unit type 1 of 1:
  Brand name: Intel(R) Xeon(R) CPU E5520 @ 2.27GHz (OEM)
  Family: 0x006 (6) Model: 0x1a (26) Stepping: 5
  2 threads per processing core, 8 cores (4 logical) per package
  Cache 1 of 4: 32KB total, 64B line, 8-assoc, 2-shared (L1 data)
  Cache 2 of 4: 32KB total, 64B line, 4-assoc, 2-shared (L1 code)
  Cache 3 of 4: 256KB total, 64B line, 8-assoc, 2-shared (L2 unified)
  Cache 4 of 4: 8MB total, 64B line, 16-assoc, 16-shared (L3 unified)
  TLB 1 of 5: 4KB pages, 7-entry, 7-assoc, 2-shared (L2 code)
  TLB 2 of 5: 4KB pages, 64-entry, 4-assoc, 2-shared (L2 data)
  TLB 3 of 5: 4MB pages, 32-entry, 4-assoc, 2-shared (L1 data)
  TLB 4 of 5: 4KB pages, 64-entry, 4-assoc, 2-shared (L1 code)
  TLB 5 of 5: 4KB pages, 512-entry, 4-assoc, 2-shared (L2 data)
Scheduling
System architecture, allocations, continuation targets, event patterns, and of course the processing itself all play a role in our scheduling. We first construct the scheduling universe, a scheduling model using the hardware detected during library initialization. Taken from src/libtorque/topology.h (commit a03fc592d03ccc9c20b016faaa1362b88bc07417):
 // We are not considering distributed systems in this model.
 //
 // The scheduling universe is defined by an ordered set of $N > 1$ levels
 // $L_0..L_n$. A scheduling group is a structural isomorphism, a counter $C$ of
 // instances, and the $C$ affinity masks of corresponding processing elements.
 // A level contains $N > 0$ scheduling groups and a description of hardware
 // unique to that level. A unique surjection from classes of usable hardware to
 // levels (no hardware class is described at two levels, and all usable
 // hardware is described by the scheduling universe) is defined via our
 // discovery algorithm.
 //
 // The reality compactifies things a bit, but in theory:
 // Level L_0 consists of scheduling groups over threads and processor types.
 // Each successive level $L_{n+1}, n >= 0$ extends $L_n$'s elements. For each
 // element $E \in L_n$, extend $E$ through those paths reachable at equal cost
 // from all elements in $E$.
 //
 // Examples:
 // A uniprocessor, no matter its memories, is a single topology level.
 // 2 unicore, unithread SMP processors with distinct L1 and shared L2 are two
 // topology levels: 2x{proc + L1} and {L2 + memory}. Split the L2, and they
 // remain two topology levels: 2x{proc + L1 + L2} and {memory}. Combine both
 // caches for a single level: {2proc + L1 + L2 + memory}. Sum over a level's
 // processors' threads for a thread count.
 //
 // Note that SMT does not come into play in shaping the topology hierarchy,
 // only in calculating the number of threads in a topological group. We only
 // schedule to SMT processors if
 //  - shared data is allocated which fits entirely within the group's cache, or
 //  - we need to.
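Rendered as data, the model above might look like the following sketch. These are illustrative names, not libtorque's actual types:

```c
#define _GNU_SOURCE
#include <sched.h> // cpu_set_t

// A scheduling group: the shared structural description, its instance
// counter C, and the C affinity masks of corresponding processing elements.
typedef struct sched_group {
  unsigned instances;   // the counter C
  cpu_set_t *masks;     // C affinity masks, one per instance
  const void *hwdesc;   // hardware described uniquely at this level
} sched_group;

// A level: N > 0 scheduling groups.
typedef struct sched_level {
  unsigned groupcount;
  sched_group *groups;
} sched_level;

// The universe: levels L_0..L_n. L_0 ranges over threads and processor
// types; each successive level extends the last through equal-cost paths.
typedef struct sched_universe {
  unsigned depth;
  sched_level *levels;
} sched_universe;
```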
- If the memory being used is distinct, space things out as far as possible:
- Use different NUMA nodes, to spread memory bandwidth
- Use different packages, to engage different caches
- Use different cores, to engage different execution units
- Use different threads, to engage different architectural state
- If the memory being used is shared, compress as much as possible:
- Use the same core, to maximize per-core cache utilization and minimize trips off-core
- Use the same package, to maximize per-package cache utilization and minimize trips off-die
- Use the same NUMA node, to maximize per-node memory utilization and minimize trips off-node
- Whether to use the same core is a function of proportion/distribution of shared accesses...
- Shared writes are even more important to keep close, to minimize true coherence overhead
- Unshared writes are even more important to space out, to minimize false coherence overhead
- Buffers (both RX and TX) are written to once, then read from N (N>0) times -- worst case for coherence protocols
- ...and SMT-affability of the instruction load (in other words, degree of fine-grained parallelism)
- Network servers are not matrix kernels! Assume crappy IPC (syscalls, branching, I/O), and exploit sharing purely in terms of memory
- Let's color connections
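A toy rendering of the spread-vs-compact heuristics above, using Linux CPU sets. The ID arithmetic assumes a contiguous, package-major enumeration; real placement must consult discovered topology (on the Xeon shown earlier, SMT siblings are 8 IDs apart, not adjacent):

```c
#define _GNU_SOURCE
#include <sched.h>

// Illustrative placement: for unshared working sets, stride across packages
// to engage distinct caches and memory channels ("spread"); for shared ones,
// pack workers onto neighboring processors ("compact").
static void place(cpu_set_t *mask, unsigned worker, unsigned ncpus,
                  unsigned cores_per_pkg, int shared){
  CPU_ZERO(mask);
  unsigned cpu;
  if(shared){
    cpu = worker % ncpus;                   // compact: fill neighbors first
  }else{
    unsigned pkgs = ncpus / cores_per_pkg;
    cpu = (worker % pkgs) * cores_per_pkg   // spread: round-robin packages,
        + (worker / pkgs) % cores_per_pkg;  // then cores within each package
  }
  CPU_SET(cpu, mask);
}
```

The mask would then be applied with pthread_setaffinity_np() or sched_setaffinity().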
Edge-triggered event handling
- libtorque deals only in edge-triggered event notification, thus evading an entire level of otherwise necessary locking. FIXME EXPLAIN
- Multiple threads share a kernel event queue, but write results to distinct portions (|largest cache line|-spaced). Only one goes into the event dispatch system call at a time. If there's a good spread of events, they'll each get a portion. If not, either we're way ahead of current traffic, or some events are lengthy.
- We can steal work in the case of lengthy events.
- We must implement an event cache anyway in such cases (lengthy unboundable receipts)
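Edge-triggering imposes a contract on handlers: a readiness edge is reported once, so the descriptor must be drained to EAGAIN or the edge (and the data behind it) is lost. A minimal drain loop for a nonblocking descriptor:

```c
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

// Drain a nonblocking descriptor following an EPOLLIN/EVFILT_READ edge.
// Returns bytes consumed, or -1 on real error. Data is discarded here; a
// real handler would run the connection's read continuation over each chunk.
static ssize_t drain(int fd){
  char buf[4096];
  ssize_t total = 0, r;
  while((r = read(fd, buf, sizeof(buf))) > 0){
    total += r;
  }
  if(r < 0 && errno != EAGAIN && errno != EWOULDBLOCK){
    return -1;
  }
  return total; // r == 0 (EOF) must also be signaled in real code
}
```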
Allocation
FIXME EXPLAIN
- Networks imply buffers. Big, unshared, easily-colored buffers.
- As much as possible, parameterize buffer allocation and move it into libtorque.
- We get slabs/recycling for free, and only need lock when we enlarge slabs (kernel also locks, so try to match)
- Expose allocators for explicitly shared/unshared data
- When done based on connection type, libtorque can wait for data's arrival to allocate
- Excellent way to introduce L5+ van Jacobson channels (a zetetic-like engine, perhaps driven by parvenu)
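The recycling idea can be sketched as a single-size-class slab: steady-state get/put reuses cacheline-aligned, cache-warm buffers, and only growth touches the system allocator. Names are illustrative, and a real version would keep per-thread magazines rather than one mutex:

```c
#include <pthread.h>
#include <stdlib.h>

#define CLINE 64u // assumed largest cache line; discovery supplies the real figure

// Toy buffer-recycling slab, one size class. Frees push onto a free list;
// only a miss (slab growth) pays the full allocator cost.
typedef struct buf { struct buf *next; } buf;

static buf *freelist;
static pthread_mutex_t slablock = PTHREAD_MUTEX_INITIALIZER;

static void *buf_get(size_t size){
  pthread_mutex_lock(&slablock);
  buf *b = freelist;
  if(b){
    freelist = b->next; // recycle a cache-warm buffer
  }
  pthread_mutex_unlock(&slablock);
  if(b == NULL){ // enlarge: the only path touching the system allocator
    if(posix_memalign((void **)&b, CLINE, size < CLINE ? CLINE : size)){
      return NULL;
    }
  }
  return b;
}

static void buf_put(void *p){
  buf *b = p;
  pthread_mutex_lock(&slablock);
  b->next = freelist;
  freelist = b;
  pthread_mutex_unlock(&slablock);
}
```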
Scalability
Commonly-used algorithms (anything in the event-handling hotpath) oughtn't depend on the number of processors, depth of the scheduling topology, or number of memories -- that is, O(1) as these grow. Performance per processor ought exhibit linear growth from top to bottom, save perhaps superlinear growth when a scheduling level shares caches.
Robustness
As the number of dies grows, so does the number of interconnects. We're extending Moore's Law, but what about Weibull's distribution? As the number of cores rises, so too does the likelihood that a processor or DIMM failure will be seen across the lifetime of a machine (remember, a failure distribution exponential as a function of time suggests only ~37% likelihood of any given component reaching its MTBF). Hot-pluggable processors and memories are also likely to be encountered, especially in cloud/virtual environments. For that matter, sysadmins can reconfigure cpusets and NUMA policy on the fly, or even migrate our process between sets. We ought design with all this in mind.
- Determine: how are Linux/FreeBSD processes notified/affected when their processors or memories fail?
- Determine: how to recover/redetect/redistribute?
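The ~37% figure follows directly from the exponential model:

```latex
% Component lifetime T exponential with rate \lambda, so MTBF = 1/\lambda:
P(T > \mathrm{MTBF}) = e^{-\lambda\,(1/\lambda)} = e^{-1} \approx 0.368
% and all N independent components reach their MTBF with probability only
% e^{-N}, vanishing as core and DIMM counts climb.
```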
Compatibility wrappers
- All of this is very hand-wavy thus far...
Unthreaded, blocking I/O
- Anything done purely via shim can be done without even relinking the binary, by launching it with an LD_PRELOAD wrapper
- For an unthreaded, blocking program, we can shim read(), lseek() and write() -- the blocking I/O operations -- and parallelize them *but*
- we'd have to play some COW game (difficult), or consider the input gifted (breaks semantics)
- analogous to the issues faced by zero-copy networking implementations
- breaks any ordering-dependent control (analogous to memory reordering), although most of this is already broken
- this can be automatically and easily done, although the analysis for safety cannot be automated AFAICT
- Instead of making blocking I/O asynchronous, parallelize the I/O
- Only works for very large I/O, not a bunch of small I/O
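The shim approach looks like the following byte-counting skeleton (a parallelizing shim would instead split large requests across workers, subject to the COW/ordering caveats above). Built as a shared object and injected via LD_PRELOAD, it intercepts every read(2) of an unmodified binary:

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <unistd.h>

static ssize_t (*real_read)(int, void *, size_t);
static unsigned long shimmed_bytes; // bytes seen by the interposed read()

// Interposed read(): resolve the real libc read() once via RTLD_NEXT,
// account for the transfer, and forward the call.
ssize_t read(int fd, void *buf, size_t len){
  if(real_read == NULL){
    real_read = (ssize_t (*)(int, void *, size_t))dlsym(RTLD_NEXT, "read");
  }
  ssize_t r = real_read(fd, buf, len);
  if(r > 0){
    shimmed_bytes += r;
  }
  return r;
}
```

Build and inject with something like `gcc -shared -fPIC shim.c -o shim.so -ldl` and `LD_PRELOAD=./shim.so ./victim` (the -ldl is unneeded on glibc 2.34 and later). Linked directly into an executable, the definition interposes ordinary symbol resolution as well.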
Threaded, blocking I/O
- Common (if terrible) paradigm -- bundle state into per-thread state, run main loop per-thread
- "Threads are for people who don't understand state machines." -- Alan Cox
- Issue here isn't scheduling flexibility, but overhead. Not much we can do. Don't write code like this.
Unthreaded, nonblocking I/O
- Hand-rolled or libev/etc-based poll(2) or epoll(2)-like control, one thread
- If hand-rolled: already using a continuations model? Yes? Good. If not, rewrite in continuations.
- Can't help out much with the locking side of things in callback code, but...
- Once callbacks are MT-safe, we're just like libev et al.
- We ought be able to do a trivial mapping from libev/libevent/liboop to our primitives
- (if not, that's likely a bad reflection on our design!)
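For concreteness, the continuations shape assumed throughout (illustrative names, not libtorque's API): a callback returns the next callback, and the dispatcher runs a connection until it parks awaiting another event edge.

```c
#include <stddef.h>

struct conn;
// A continuation: the function at which a connection resumes.
typedef struct cont { struct cont *(*fn)(struct conn *); } cont;

struct conn {
  cont state;     // resumption point on the next event edge
  int fd;
  unsigned phase; // illustrative per-connection progress marker
};

static cont *write_response(struct conn *c){
  c->phase = 2;
  return NULL; // park until the descriptor's next writability edge
}

static cont *read_headers(struct conn *c){
  static cont next = { write_response };
  c->phase = 1;
  return &next; // immediately runnable continuation
}

// Run a connection's continuations until it parks.
static void dispatch(struct conn *c){
  cont *k = &c->state;
  while(k && k->fn){
    k = k->fn(c);
  }
}
```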
Threaded, nonblocking I/O
- We assume multiple threads, each wrapping a libev context
- if you wrote your own multithreaded async continuations library, why do you need libtorque?
- Good! You're MT-safe already, or at least won't become any less MT-safe.
- Strip your shallow, pedestrian, static threading away. See "Unthreaded, nonblocking I/O" above.
References/Prior Art
- Philip Mucci's "Linux Multicore Performance Analysis and Optimization in a Nutshell", delivered at NOTUR 2009
- Elmeleegy et al.'s "Lazy Asynchronous I/O", USENIX 2004
- PGAS: Kathy Yelick's "Performance and Productivity Opportunities using Global Address Space Programming Models", 2006
- Emery Berger's Hoard and other manycore-capable allocators (libumem, aka the magazined slab allocator; Google's tcmalloc; etc.)
- Williams, Vuduc, Oliker, Shalf, Yelick, Demmel. "Tuning Sparse Matrix Vector Multiplication for multi-core SMPs", 2007
- The Radovic-Hagersten lock team has a page on NUMA locking
- A lot of discursive theorizing/ruminating is captured on my Fast UNIX Servers page
Miscellanea
History
libtorque, under that name, began as a project for Professor Rich Vuduc's Fall 2009 CSE6230. It really began gestating in three parts:
- Work on intrusion prevention at Reflex got me hooked on automata-based networking activations and fast networking
- Work on ICAP servers and reverse proxies at McAfee, especially snare, got me thinking about networking APIs
- Professor Tom Conte's Spring 2009 "CS 8803 MCA: Multicore and Manycore Architecture" lit up the parallelism fire
Milestones
- 2009-10-22: First commit (e7429294beb9dc581a7cdab2371d2ddca3169047)
- 2009-11-13: CS 8001-CAS Arch-Whiskey presentation
- 2009-11-19: CSE 6230 checkpoint (see proposal)
- 2009-12-10: CSE 6230 due date (again, see proposal)
Logos
(ASCII-art "libtorque" banner: "...tear the roof off the sucka..."; continuation-based unix i/o for manycore numa; © nick black 2009)