In PyTorch, "contiguous" refers to the way a tensor is stored in memory. A tensor is contiguous when its data occupies a single, unbroken block of memory and its elements are laid out in row-major (C) order, i.e. in the same order they appear when you traverse the tensor's indices, with no gaps between elements of adjacent rows or columns.
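
For instance, a freshly created 2-D tensor is contiguous, and its strides mirror the row-major layout (here, 3 elements per row step and 1 per column step):

import torch

x = torch.arange(6).reshape(2, 3)  # rows stored back to back in one memory block
print(x.is_contiguous())           # True
print(x.stride())                  # (3, 1)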

Non-contiguous tensors arise from operations that change a tensor's shape or strides without moving its data, like transpose, permute, or slicing. In a non-contiguous tensor, the logical element order no longer matches the order of the underlying storage, which can lead to inefficient (strided) memory access and extra computational overhead in subsequent operations.
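
A quick way to see this is to transpose the tensor from the sketch above: transpose swaps the strides rather than copying any data, so the result shares storage with the original but is no longer contiguous:

import torch

x = torch.arange(6).reshape(2, 3)
x_t = x.t()                  # swaps strides; no data is moved
print(x_t.is_contiguous())   # False
print(x_t.stride())          # (1, 3): logical rows now stride through memory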

To ensure that a tensor is contiguous, call the .contiguous() method on it. If the tensor is already contiguous, it is returned unchanged; otherwise the method returns a contiguous copy of its data. Here's how you use it:

tensor = tensor.contiguous()

Making a tensor contiguous rearranges its data in memory to ensure that the elements are laid out sequentially, which can optimize performance for subsequent operations that require linear memory access.
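
One common place this matters is .view(), which requires a contiguous tensor. Calling .contiguous() first copies the data into a linear block so the reshape can succeed (alternatively, .reshape() performs this copy only when needed):

import torch

x_t = torch.arange(6).reshape(2, 3).t()
# x_t.view(-1)  # would raise a RuntimeError because x_t is not contiguous
y = x_t.contiguous().view(-1)  # copy into a linear block, then view works
print(y)                       # tensor([0, 3, 1, 4, 2, 5])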
