Pages
Revision as of 17:20, 25 June 2009
Hardware
- PAE, page tables, PTEs, TLB, MMU -- explain FIXME
UltraSPARC
- UltraSPARC I and II - four page sizes. one instruction TLB, one data TLB, each 64 fully-associative entries, each capable of using any of the four page sizes.
- UltraSPARC III (750MHz) - FIXME (upshot: just use native 8k pages; there's only 7 largepage TLB entries available to userspace)
- UltraSPARC III (900MHz+) - FIXME (upshot: things are fixed, go for it)
x86/amd64
- 4k default pages / 4M available
- 2M in PAE
ia64
Huge Pages
Making pages larger means fewer TLB misses for a given TLB size, since each TLB entry then maps more memory; large mapping and releasing operations are faster, since fewer page table entries need to be handled; and less memory is devoted to page table entries for a given amount of memory indexed. The downside is possible waste of main memory, since larger pages are less likely to be completely used. A 2002 paper from Navarro et al at Rice, "Transparent Operating System Support for Superpages", proposed transparent operating system support. Applications must generally be modified or wrapped to take advantage of large pages, for instance on Linux (through at least 2.6.30) and Solaris (through at least Solaris 9); FreeBSD (as of 7.2) claims transparent support with high performance.
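To make the page-table arithmetic concrete, here's a small sketch (assuming the usual 8-byte PTE of amd64) comparing 4 KiB base pages against 2 MiB huge pages for a 1 GiB mapping:

```python
# Page-table bookkeeping for a 1 GiB mapping: x86/amd64 4 KiB base
# pages vs. 2 MiB huge pages. The 8-byte PTE size is an assumption
# (it holds for amd64's long-mode page tables).

PTE_SIZE = 8  # bytes per page table entry (amd64 assumption)

def ptes_needed(mapping_bytes, page_bytes):
    """Number of page table entries needed to map the region."""
    return mapping_bytes // page_bytes

GIB = 1 << 30
small = ptes_needed(GIB, 4 << 10)   # 4 KiB pages
huge = ptes_needed(GIB, 2 << 20)    # 2 MiB pages

print(small, small * PTE_SIZE)  # 262144 entries, 2 MiB of PTEs
print(huge, huge * PTE_SIZE)    # 512 entries, 4 KiB of PTEs
```

The 512:1 ratio in entry count is also the ratio by which each TLB entry's coverage grows, which is where the TLB-miss reduction comes from.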
Linux
- They were a 2003 Kernel Summit topic, having first been introduced in Linux 2.5.36 (LinuxGazette primer article)
- Rohit Seth provided the first explicit large page support to applications as covered in this LWN article
- alloc_hugepages(2), free_hugepages(2), get_large_pages(2) and shared_large_pages(2) were present in kernels 2.5.36-2.5.54
- hugetlbfs and assorted infrastructure replaced these. Mel Gorman's Linux MM wiki has a good page on hugetlbfs. With the CONFIG_HUGETLBFS kernel option enabled, the following variables are seen in /proc/meminfo (from 2.6.30 on amd64 with no hugepages reserved):
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
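A small sketch of reading those fields programmatically; the sample text here mirrors the /proc/meminfo excerpt above, and on a live system one would read /proc/meminfo itself:

```python
# Parse the HugePages_* fields that CONFIG_HUGETLBFS exposes via
# /proc/meminfo. Counts are in pages; Hugepagesize is reported in kB.

def hugepage_info(meminfo_text):
    """Return a dict of the HugePages_*/Hugepagesize values."""
    info = {}
    for line in meminfo_text.splitlines():
        if line.startswith(("HugePages_", "Hugepagesize")):
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])
    return info

SAMPLE = """\
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
"""

print(hugepage_info(SAMPLE))
```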
- Val Henson wrote a good 2006 KHB article in LWN on transparent largepage support
- Jonathan Corbet followed up with a relevant summary of the 2007 Kernel Summit's VM mini-summit
- There appears to be no way, as of Linux 2.6.30 and glibc 2.9, to use shm_open(3) with huge pages
- One can, of course, directly open(2) and mmap(2) a file on a hugetlbfs filesystem
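The open-then-mmap pattern can be sketched with Python's mmap module. The /mnt/huge mountpoint is a hypothetical example; for the mapping to actually be backed by huge pages the file must live on a hugetlbfs mount and the length must be a multiple of the huge page size. The identical calls against an ordinary filesystem (as demonstrated below) simply yield a normal 4 KiB-page mapping:

```python
# Sketch of the open(2)+mmap(2) pattern used with hugetlbfs.
# HUGETLBFS_DIR and the 2 MiB page size are assumptions for
# illustration; nothing here is specific to this wiki's setup.
import mmap
import os
import tempfile

HUGE_PAGE_SIZE = 2 * 1024 * 1024  # assumed 2 MiB huge pages
HUGETLBFS_DIR = "/mnt/huge"       # hypothetical hugetlbfs mountpoint

def map_shared(path, length):
    """open(2) a file, size it, and mmap(2) it MAP_SHARED."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        os.ftruncate(fd, length)  # size the backing file
        return mmap.mmap(fd, length, mmap.MAP_SHARED)
    finally:
        os.close(fd)              # the mapping holds its own reference

# Demonstrated against an ordinary temp file; on hugetlbfs one would
# pass os.path.join(HUGETLBFS_DIR, "region") instead.
with tempfile.NamedTemporaryFile() as tmp:
    m = map_shared(tmp.name, HUGE_PAGE_SIZE)
    m[:5] = b"hello"
    assert m[:5] == b"hello"
    m.close()
```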
Solaris
- Essential paper: "Supporting Multiple Page Sizes in the Solaris Operating System" (March 2004)
- Solaris 2.6 through Solaris 8 offered "intimate shared memory" (ISM) based on 4M pages, requested via shmat(2) with the SHM_SHARE_MMU flag
- Solaris 9 supported a variety of page sizes and introduced memcntl(2) to configure page sizes on a per-map basis
- The ppgsz(1) wrapper and the libmpss.so library allow configuration of heap/stack page sizes on a per-application-instance basis
- The getpagesizes(2) system call has been added to discover multiple page sizes
FreeBSD
- FreeBSD 7.2, released May 2009, supports fully transparent "superpages"
- They must be enabled via setting loader tunable vm.pmap.pg_ps_enabled to 1
- See the thread entitled "Superpages?" on the freebsd-current mailing list
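Per the tunable named above, superpages can be enabled persistently from the boot loader's configuration file:

```
# /boot/loader.conf
vm.pmap.pg_ps_enabled=1
```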
Applications
- MySQL can use hugetlbfs via the large-pages option
- kvm can use hugetlbfs with the --mem-path option since kvm-62, released in late 2008
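As an example of the MySQL option mentioned above, large-pages is set in my.cnf; note that huge pages must already be reserved by the kernel and be accessible to the mysqld user for this to take effect:

```
[mysqld]
large-pages
```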
Page Clustering
Page clustering was implemented by William Lee Irwin for Linux in 2003 (not to be confused with page-granularity swap-out clustering); there's good coverage in this KernelTrap article. It is essentially huge pages without hardware support, and therefore incurs some overhead and provides no improvement in TLB-relative performance. It was written up in Irwin's 2003 OLS paper, "A 2.5 Page Clustering Implementation".