Disclaimer: ChatGPT-generated document.
In programming, two closely related concepts frequently appear when working with sequences, arrays, strings, buffers, tensors, or multidimensional data:
- Slice — a contiguous subset or “view” into a sequence
- Stride — the step size used when walking through memory or selecting elements
Both terms come from simple physical metaphors: one from cutting, the other from stepping. Their conceptual and practical importance runs deep, especially in languages like C++, Python, Rust, and Go, and in numerical libraries and environments such as NumPy, BLAS, and GPU kernels.
This article explains:
- What slices are
- What strides are
- Why they are called that
- How they are implemented in C++, Python, and systems programming
- How they relate to memory, performance, and modern API design
A slice is a view into part of a larger sequence — usually zero-copy, non-owning, and defined by:
- A starting point (pointer/offset)
- A length (number of elements)
Conceptually:
“Give me the portion of the sequence from index start to end.”
Why “slice”? Because you are literally taking a slice of a larger thing, like slicing:
- a cake
- a loaf of bread
- a log
- a pizza
A contiguous, nicely cut-out piece from the whole.
This metaphor works incredibly well because a slice is:
- contiguous
- bounded
- non-copying (you don't bake a new cake — you cut from the original)
In Python:

```python
arr = [10, 11, 12, 13, 14]
part = arr[1:4]   # [11, 12, 13]
```

In Rust:

```rust
let arr = [10, 11, 12, 13, 14];
let part = &arr[1..4];   // slice: [11, 12, 13]
```

In modern C++20, the canonical slice type is std::span<T>:
```cpp
std::vector<int> v = {10, 11, 12, 13, 14};
std::span<int> slice(v.data() + 1, 3);   // view of [11, 12, 13]
```

A std::span stores:
- pointer to first element
- size (count of elements)
It does not own data, and it does not copy data.
It is the closest built-in abstraction C++ has to a “slice.”
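A small sketch of that non-owning behaviour (C++20): writing through the span changes the original vector, because no copy was ever made. The variable names are illustrative.

```cpp
#include <iostream>
#include <span>
#include <vector>

int main() {
    std::vector<int> v = {10, 11, 12, 13, 14};
    std::span<int> slice(v.data() + 1, 3);   // view of {11, 12, 13}

    slice[0] = 99;                           // writes through to the vector

    std::cout << v[1] << '\n';               // prints 99: same storage, no copy
}
```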
Other slice-like abstractions in C++:
- std::string_view: a slice of a string (see the sketch below)
- std::ranges::subrange: a slice in ranges pipelines
- gsl::span: the precursor to std::span
- typical APIs that take (ptr, length) pairs
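For example, a minimal std::string_view sketch (C++17): substr on a string_view returns another view, with no allocation.

```cpp
#include <iostream>
#include <string>
#include <string_view>

int main() {
    std::string s = "slice and stride";
    std::string_view sv(s);                     // non-owning view of the whole string

    std::string_view word = sv.substr(0, 5);    // view of "slice": no copy, no allocation

    std::cout << word << '\n';
}
```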
A stride determines how far you step in memory between consecutive elements.
Conceptually:
“Select every k-th element.”
“Advance the pointer by this number of bytes each time.”
Why “stride”? Because the term comes from walking: your stride is the distance between your steps.
In memory terms:
A stride is the number of bytes to jump to reach the next element.
Examples:
- Stride = 1 element → contiguous
- Stride = 2 elements → every second element
- Stride = 4 bytes, 12 bytes, etc. → in matrix row/column traversal
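To make the byte-level definition concrete, here is a sketch that steps through the x coordinates of an array of 12-byte records. The Point type and names are invented for this example, and the 12-byte size assumes a typical platform.

```cpp
#include <cstddef>
#include <iostream>

struct Point { float x, y, z; };   // 12 bytes per record on typical platforms

int main() {
    Point pts[] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};

    const unsigned char* base = reinterpret_cast<const unsigned char*>(&pts[0].x);
    const std::size_t stride = sizeof(Point);    // bytes to jump to the next x

    for (std::size_t i = 0; i < 3; ++i) {
        const float* x = reinterpret_cast<const float*>(base + i * stride);
        std::cout << *x << ' ';                  // prints 1 4 7
    }
    std::cout << '\n';
}
```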
In Python:

```python
arr = [10, 11, 12, 13, 14, 15]
evens = arr[0:6:2]   # [10, 12, 14], stride = 2
```

NumPy arrays store strides in bytes:
For a 3×3 float32 matrix:
- stride[0] = 3 * 4 bytes = 12 → step to next row
- stride[1] = 4 bytes → step to next column
This enables reshaping, slicing, and transposing without copying memory.
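A C++ sketch of the same arithmetic, with the 12-byte row stride and 4-byte column stride written out by hand (this is the bookkeeping NumPy keeps in its strides metadata; the names here are illustrative):

```cpp
#include <cstddef>
#include <iostream>

int main() {
    float m[9] = {0, 1, 2, 3, 4, 5, 6, 7, 8};            // 3x3 float32, row-major

    const std::size_t row_stride = 3 * sizeof(float);     // 12 bytes to the next row
    const std::size_t col_stride = sizeof(float);         //  4 bytes to the next column

    const unsigned char* base = reinterpret_cast<const unsigned char*>(m);

    // Element (i, j) lives at byte offset i * row_stride + j * col_stride.
    std::size_t i = 2, j = 1;
    const float* elem =
        reinterpret_cast<const float*>(base + i * row_stride + j * col_stride);

    std::cout << *elem << '\n';   // prints 7 (row 2, column 1)
}
```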
Unlike Python/NumPy, C++ does not automatically store “stride metadata”; instead, you implement strides manually using pointer arithmetic.
```cpp
#include <iostream>

int arr[] = {10, 11, 12, 13, 14, 15};

for (int i = 0; i < 6; i += 2) {
    std::cout << arr[i] << ' ';    // prints 10 12 14
}
```

The same traversal written with an explicit pointer and a stride variable:

```cpp
int* p = arr;
int stride = 2;                    // stride in elements

for (int i = 0; i < 3; ++i)
    std::cout << *(p + i * stride) << ' ';
```

This is essentially how BLAS/LAPACK, Eigen, and GPU kernels implement strided memory access.
- Eigen uses InnerStride and OuterStride templates.
- std::mdspan (standardized in C++23 and available in some standard library implementations) lets you define explicit strides for multidimensional arrays.
- BLAS/LAPACK C APIs accept stride parameters (incx, incy) for vector operations.
Example BLAS call:

```c
cblas_saxpy(n, alpha, x, incx, y, incy);
```

The two concepts combine into:
A strided slice: a view that selects elements from a contiguous range at a fixed step.
Examples:
```python
arr = [0, 1, 2, 3, 4, 5, 6, 7, 8]
slice_with_stride = arr[2:9:3]   # [2, 5, 8]
```

C++ has no built-in “strided span,” but you can implement one:
```cpp
#include <cstddef>

template <typename T>
struct strided_span {
    T* data;
    std::size_t count;    // number of selected elements
    std::size_t stride;   // distance between consecutive elements, in elements

    T& operator[](std::size_t i) const { return data[i * stride]; }
};
```

This is essentially how:
- Eigen slices work
- NumPy views work internally
- Many HPC codebases manage multidimensional memory
- GPU kernels process tiled data
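As a usage sketch (assuming the strided_span above is in scope; the data is made up), such a view can expose one column of a row-major matrix without copying:

```cpp
#include <cstddef>
#include <iostream>

// Uses the strided_span sketch defined above.
int main() {
    // 3x4 row-major matrix stored as a flat array.
    int m[12] = { 0, 1,  2,  3,
                  4, 5,  6,  7,
                  8, 9, 10, 11 };

    // Column 2: start at m[2] and step by the row length (4 elements).
    strided_span<int> col2{m + 2, 3, 4};

    for (std::size_t i = 0; i < col2.count; ++i)
        std::cout << col2[i] << ' ';   // prints 2 6 10
    std::cout << '\n';
}
```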
Why do slices and strides matter for performance?

- Slicing avoids copies → excellent for zero-cost APIs
- Striding expresses access patterns (every k-th element, columns, tiles) without copying
- Stride = 1 gives the best cache locality (see the sketch after this list)
- Contiguous, stride-1 access is also SIMD-friendly
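A sketch of the access patterns behind the cache-locality point (no benchmark numbers, just the two loops; function names are invented):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Contiguous traversal: every byte of each 64-byte cache line is used,
// and compilers can often auto-vectorize this loop.
long sum_contiguous(const std::vector<int>& v) {
    long s = 0;
    for (std::size_t i = 0; i < v.size(); ++i) s += v[i];
    return s;
}

// Strided traversal: with a stride of 16 ints (64 bytes), each access lands
// on a different cache line, so most of every loaded line goes unused.
long sum_strided(const std::vector<int>& v, std::size_t stride) {
    long s = 0;
    for (std::size_t i = 0; i < v.size(); i += stride) s += v[i];
    return s;
}

int main() {
    std::vector<int> v(1 << 20, 1);
    std::cout << sum_contiguous(v) << ' ' << sum_strided(v, 16) << '\n';
}
```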
Strides describe how multidimensional arrays lie in memory:
- Row-major (stride_col = sizeof(T))
- Column-major (Fortran, BLAS; see the sketch below)
- Tiled/block layouts (GPU optimization)
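For the column-major case, BLAS-style code reaches element (i, j) through a “leading dimension”: the element stride between columns, usually the number of rows of the allocated matrix. A minimal sketch with illustrative names:

```cpp
#include <cstddef>
#include <iostream>

// Column-major indexing: consecutive elements of a column are adjacent in
// memory; lda (the leading dimension) is the element stride between columns.
float at_col_major(const float* a, std::size_t i, std::size_t j, std::size_t lda) {
    return a[i + j * lda];
}

int main() {
    // The matrix [[0,1,2],[3,4,5],[6,7,8]] stored column by column.
    float a[9] = {0, 3, 6, 1, 4, 7, 2, 5, 8};

    std::cout << at_col_major(a, 2, 1, 3) << '\n';   // prints 7 (row 2, column 1)
}
```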
C++20 pushes toward:
- non-owning views (span, string_view)
- enforced bounds and safety
- better composability
- zero-overhead abstractions
Slices and strides are foundational to this philosophy.
Slices + strides = many transformations “for free”:
- subarrays
- windows
- downsampling
- flattening
- transposing (via strides)
- reshaping
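For instance, transposing and downsampling from the list above fall out of pure stride manipulation. A sketch with a hand-rolled 2D view (the view2d type is invented for this article):

```cpp
#include <cstddef>
#include <iostream>

// A minimal 2D strided view: no ownership, no copying.
template <typename T>
struct view2d {
    T* data;
    std::size_t rows, cols;
    std::size_t row_stride, col_stride;   // in elements

    T& operator()(std::size_t i, std::size_t j) const {
        return data[i * row_stride + j * col_stride];
    }

    // Transpose: swap extents and strides; the data never moves.
    view2d transposed() const { return {data, cols, rows, col_stride, row_stride}; }

    // Downsample columns: double the column stride, halve the column count.
    view2d every_other_column() const {
        return {data, rows, (cols + 1) / 2, row_stride, col_stride * 2};
    }
};

int main() {
    int m[6] = {0, 1, 2,
                3, 4, 5};                               // 2x3, row-major
    view2d<int> v{m, 2, 3, 3, 1};

    std::cout << v.transposed()(2, 1) << '\n';          // prints 5 (original (1, 2))
    std::cout << v.every_other_column()(1, 1) << '\n';  // prints 5 (row 1, column 2 of m)
}
```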
| Concept | Meaning | Example | C++ Equivalent |
|---|---|---|---|
| Slice | contiguous piece of a sequence | arr[2:5] | std::span, std::string_view |
| Stride | step size between selected elements | arr[::2] | pointer arithmetic, BLAS incx/incy |
| Strided slice | slice with a step | arr[1:8:3] | custom strided_span, Eigen Map |
Slice = “cut a piece out of the sequence.” Stride = “walk through data in fixed steps.”
Together, they give expressive, memory-efficient control over sequences.
Natural extensions of this topic include:

- How NumPy calculates strides internally
- How mdspan gives C++ a NumPy-like memory layout
- SIMD vectorization implications
- How to design your own C++ slicing API for your libraries
- How slicing and strides interact with cache lines
- How striding affects performance on modern CPUs
