A small example showing the performance boost of doing computation on the GPU with PyTorch versus running on the CPU with NumPy.
⋊> ~/s/g/education on main ⨯ uv run example/with_numpy.py        16:35:47
NumPy (CPU) Time: 9.0652 seconds
⋊> ~/s/g/education on main ⨯ uv run example/with_torch.py        16:36:03
Using device: mps
PyTorch (MPS) Time: 0.0077 seconds
⋊> ~/s/g/education on main ⨯ uv run example/with_torch_cpu.py    16:37:40
Using device: cpu
PyTorch (CPU) Time: 2.5213 seconds

Caveat: MPS kernels are launched asynchronously, so a naive wall-clock measurement like the 0.0077 s above mostly captures kernel launch rather than the full multiplication; the GPU script below synchronizes the device before stopping the timer.
example/with_numpy.py
import numpy as np
import time

# Create large random matrices
size = 10000  # Size of the square matrix
a = np.random.randn(size, size)
b = np.random.randn(size, size)

# Start timing
start_time = time.time()

# Perform matrix multiplication
result = np.matmul(a, b)

# End timing
end_time = time.time()

# Print the time taken
print(f"NumPy (CPU) Time: {end_time - start_time:.4f} seconds")
example/with_torch.py
import torch
import time

# Pick the fastest available device: CUDA GPU, then Apple Silicon GPU, then CPU
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")  # Use the Apple Silicon GPU
else:
    device = torch.device("cpu")  # Fall back to the CPU

print(f"Using device: {device}")

# Create large random matrices directly on the target device
size = 10000  # Size of the square matrix
a = torch.randn(size, size, device=device)
b = torch.randn(size, size, device=device)

# Start timing
start_time = time.time()

# Perform matrix multiplication
result = torch.matmul(a, b)

# GPU kernels run asynchronously; wait for completion so the timing is honest
if device.type == "cuda":
    torch.cuda.synchronize()
elif device.type == "mps":
    torch.mps.synchronize()

# End timing
end_time = time.time()

# Print the time taken
print(f"PyTorch ({device}) Time: {end_time - start_time:.4f} seconds")
example/with_torch_cpu.py
import torch
import time

device = torch.device("cpu")  # Force the CPU to compare against the GPU run
print(f"Using device: {device}")

# Create large random matrices
size = 10000  # Size of the square matrix
a = torch.randn(size, size, device=device)
b = torch.randn(size, size, device=device)

# Start timing
start_time = time.time()

# Perform matrix multiplication
result = torch.matmul(a, b)

# End timing
end_time = time.time()

# Print the time taken
print(f"PyTorch (CPU) Time: {end_time - start_time:.4f} seconds")