In PyTorch, "contiguous" refers to how a tensor is stored in memory. A contiguous tensor keeps its data in a single unbroken block of memory, with the elements laid out in row-major (C-style) order, i.e., exactly the order in which they appear when you read the tensor row by row, with no gaps between elements.
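As a quick illustration (a minimal sketch; the tensor and values are just for demonstration), a freshly created tensor reports itself as contiguous, and its strides reflect the row-major layout:

import torch

x = torch.arange(6).reshape(2, 3)
print(x.is_contiguous())  # True: elements 0..5 sit in memory in this exact order
print(x.stride())         # (3, 1): step 3 elements to move one row, 1 to move one column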
Non-contiguous tensors can occur after operations that change the shape or strides of a tensor without moving its data, such as transpose, permute, or slicing. These operations only rewrite the tensor's metadata (its strides), so the logical element order no longer matches the order in memory. When a tensor is non-contiguous, its elements are not stored in one linear chunk of memory, which can lead to inefficient memory access and extra computational overhead during operations.
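For example, here is a minimal sketch of how transposing makes a tensor non-contiguous (the tensor x is purely illustrative):

import torch

x = torch.arange(6).reshape(2, 3)
y = x.t()                 # transpose swaps the strides; no data is moved
print(y.is_contiguous())  # False
print(y.stride())         # (1, 3): logical order no longer matches memory order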
To ensure that a tensor is contiguous, call the .contiguous() method on it. This method returns a contiguous copy of the tensor's data if the tensor is not already contiguous, and returns the tensor itself unchanged if it is. Here’s how you use it:
tensor = tensor.contiguous()
Making a tensor contiguous copies its data into a fresh block of memory where the elements are laid out sequentially again, which can improve performance for subsequent operations that rely on linear memory access.
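One common place this matters is .view(), which requires a contiguous tensor. A minimal sketch (values chosen only for illustration):

import torch

y = torch.arange(6).reshape(2, 3).t()  # non-contiguous after the transpose
# y.view(6) would raise a RuntimeError here, since .view() needs contiguous memory
z = y.contiguous().view(6)  # copy into a contiguous block, then reshape
print(z)                    # tensor([0, 3, 1, 4, 2, 5])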