@Nadrieril
Last active August 14, 2024 21:34
`&mut [u8]` as `Allocator`
#![feature(allocator_api)]
#![feature(slice_ptr_get)]
#![feature(strict_provenance)]

use core::{alloc::Layout, ptr::NonNull, sync::atomic::AtomicUsize};
use std::alloc::{AllocError, Allocator};
use std::marker::PhantomData;
/// Allocator backed by a provided buffer. Once the buffer is full, all subsequent allocations fail.
pub struct BufferAllocator<'a> {
    /// The backing buffer.
    buf: NonNull<[u8]>,
    /// The allocated length.
    /// SAFETY: this never exceeds `buf.len()`.
    len: AtomicUsize,
    // Recall that our pointer comes from a reference.
    phantom: PhantomData<&'a mut [u8]>,
}
impl<'a> BufferAllocator<'a> {
    /// Creates a new allocator using the specified buffer.
    #[must_use]
    pub fn new(buf: &'a mut [u8]) -> Self {
        Self {
            buf: buf.into(),
            len: AtomicUsize::new(0),
            phantom: PhantomData,
        }
    }

    /// Returns the number of bytes allocated so far, including alignment padding.
    pub fn allocated_bytes(&self) -> usize {
        self.len.load(core::sync::atomic::Ordering::Relaxed)
    }
}
unsafe impl Allocator for BufferAllocator<'_> {
    fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
        use core::sync::atomic::Ordering;
        loop {
            let len = self.len.load(Ordering::Acquire);
            // SAFETY: by the invariant of `self.len`, we're indexing inside the backing buffer.
            // Note: we must not use this pointer until we've updated `self.len` to ensure we have
            // unique ownership over the contents.
            let free_space_start: NonNull<u8> = unsafe { self.buf.as_non_null_ptr().add(len) };
            // Pad so that the allocation starts at an address aligned for `layout`.
            let padding = free_space_start.align_offset(layout.align());
            // Checked arithmetic: a wrapping overflow here would slip past the bounds check
            // below and make the allocation unsound.
            let new_len = len
                .checked_add(padding)
                .and_then(|n| n.checked_add(layout.size()))
                .ok_or(AllocError)?;
            if new_len > self.buf.len() {
                return Err(AllocError);
            }
            if self
                .len
                .compare_exchange(len, new_len, Ordering::Release, Ordering::Relaxed)
                .is_err()
            {
                // Another thread claimed this part of the buffer first; retry.
                continue;
            };
            // SAFETY: we're indexing inside the backing buffer; we just checked the bounds.
            let alloc_start: NonNull<u8> = unsafe { free_space_start.add(padding) };
            let allocated_slice: NonNull<[u8]> =
                NonNull::slice_from_raw_parts(alloc_start, layout.size());
            // SAFETY: Using `padding`, we made sure the beginning of the slice is correctly
            // aligned. By updating `self.len`, we ensure no one else will touch this part of the
            // buffer ever. We added enough length to fit the whole of `layout`. Hence
            // `allocated_slice` points to a uniquely-owned, aligned slice of bytes that can hold
            // the provided `layout`.
            return Ok(allocated_slice);
        }
    }

    // This is a bump allocator: individual deallocations are no-ops. The memory only
    // becomes reusable once the allocator is dropped and the borrow of the buffer ends.
    unsafe fn deallocate(&self, _ptr: NonNull<u8>, _layout: Layout) {}
}
// SAFETY: the pointer comes from an exclusively-borrowed buffer, and the only shared
// mutable state is `len`, which is an atomic.
unsafe impl Sync for BufferAllocator<'_> {}
unsafe impl Send for BufferAllocator<'_> {}
fn main() {
    let mut slab = [0u8; 32];
    {
        let slab = BufferAllocator::new(&mut slab);
        let mut x = Box::new_in(0u8, &slab);
        // `ref_x` must be `mut` so we can write through the box via `DerefMut`.
        let mut ref_x = Box::new_in(&mut *x, &slab);
        **ref_x = 42;
    }
    println!("{:p}", &slab);
    println!("{slab:?}");
}
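A further usage sketch, not part of the original gist: any collection that is generic over an allocator can live in the stack buffer the same way. The `vec_demo` helper below is hypothetical; it assumes the same nightly features enabled at the top of the file. Note that growing the `Vec` goes through the trait's default `Allocator::grow`, which allocates a fresh block and abandons the old one, since `deallocate` is a no-op.

// Usage sketch (editorial addition): a `Vec` living entirely in the stack buffer.
fn vec_demo() {
    let mut slab = [0u8; 64];
    let alloc = BufferAllocator::new(&mut slab);
    let mut v = Vec::new_in(&alloc);
    v.extend_from_slice(&[1u8, 2, 3]);
    assert_eq!(v, [1, 2, 3]);
    // Reallocation never frees the old block, so the bump allocator may have
    // consumed more bytes than the `Vec` currently holds.
    assert!(alloc.allocated_bytes() >= v.capacity());
}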
Nadrieril commented Aug 14, 2024

Inspired by piece's LinearAllocator, this is a simple Allocator struct backed by a user-provided slice. I have limited proficiency in unsafe Rust, so don't trust me on this, but hopefully I didn't change it too much from piece's version.
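To make the failure mode from the doc comment concrete: once the buffer is exhausted, further allocations return `AllocError`. A minimal sketch, again an editorial addition with a hypothetical `exhaustion_demo` helper, assuming the same nightly features as the gist:

// Sketch of the documented failure mode (editorial addition, not from the gist).
fn exhaustion_demo() {
    let mut slab = [0u8; 4];
    let alloc = BufferAllocator::new(&mut slab);
    // This fills the 4-byte buffer exactly.
    let _a = Box::new_in([0u8; 4], &alloc);
    assert_eq!(alloc.allocated_bytes(), 4);
    // The buffer is now full, so even a 1-byte allocation fails.
    assert!(Box::try_new_in(0u8, &alloc).is_err());
}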
