Time how long, on average, it takes to call time.time(), using only time.time() to measure.
#!/usr/bin/env python3
from time import time

# With a million iterations, the total elapsed time in seconds
# is numerically equal to the average time per call in microseconds.
ITERS = 1_000_000
UNIT = "µs"

# If the first loop below ONLY measured calls to time(), then the
# second loop (which measures the loop overhead) would be empty,
# i.e. would contain only 'pass'. An empty loop seems to get
# optimised away, giving incorrect timings, so we call this noop
# from both loops, ensuring the "empty" loop actually executes.
def noop():
    pass

# The first loop measures how long it takes to call time(), ITERS times.
start = time()
for _ in range(ITERS):
    noop()
    time()
everything = time() - start
print(f"{everything=}{UNIT}")

# The second loop does the same, but without time(),
# to measure the loop overhead alone.
start = time()
for _ in range(ITERS):
    noop()
overhead = time() - start
print(f"{overhead=}{UNIT}")

# Output result
result = everything - overhead
print(f"time() calls take {result}{UNIT}")