@skurfer
Created March 14, 2017 14:05
Leak Test
import gc
import time
import sys
import resource

from boto3.session import Session

prev_mem = 0
print('pass, using, diff')
line_tmpl = '{}, {}, {}'
for n in range(50):
    time.sleep(1.0)
    # Build a fresh session and client on every pass to see whether
    # client construction itself leaks memory.
    sess = Session()
    client = sess.client('s3')
    bname = 'custodian-skunk-trails'
    obj_list = client.list_objects_v2(Bucket=bname)
    # Follow the continuation token for up to 10 more pages.
    for page in range(10):
        # Check truncation *before* requesting the next page, so we never
        # pass ContinuationToken=None (which boto3 rejects).
        if not obj_list['IsTruncated']:
            break
        token = obj_list['NextContinuationToken']
        obj_list = client.list_objects_v2(
            Bucket=bname,
            ContinuationToken=token,
        )
    # Peak RSS so far: kilobytes on Linux, bytes on macOS.
    stats = resource.getrusage(resource.RUSAGE_SELF)
    mem = stats.ru_maxrss
    sys.stdout.write(line_tmpl.format(n + 1, mem, mem - prev_mem))
    prev_mem = mem
    sys.stdout.write('\n')
    sys.stdout.flush()
    gc.collect()
skurfer commented Mar 14, 2017

Here’s the most recent incarnation of the script I was testing with.

On each iteration, it creates a fresh S3 client, fetches a page of objects from a specific bucket, follows the continuation token for up to 10 more pages, and then explicitly triggers garbage collection.
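For comparison, the same paging can be written with boto3's built-in paginator. A minimal sketch, assuming the same bucket; the script above pages by hand instead:

from boto3.session import Session

client = Session().client('s3')
paginator = client.get_paginator('list_objects_v2')
# MaxItems caps the total number of keys; 10 pages is at most
# 10 * 1000 keys at the default page size, roughly matching the loop above.
for page in paginator.paginate(
    Bucket='custodian-skunk-trails',
    PaginationConfig={'MaxItems': 10 * 1000},
):
    pass  # each `page` is one list_objects_v2 response dict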

I was using objgraph to watch for object creation, but it turned out to have its own impact on overall memory usage, so I ripped it out. For what it's worth, though, after about a dozen passes through the loop the object counts had completely stabilized.
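The removed instrumentation amounted to something like this at the end of each pass (a sketch, not the exact code I ripped out):

import objgraph

# After gc.collect() in each pass: print the types whose instance
# counts grew since the previous call (no output once counts stabilize).
objgraph.show_growth(limit=10)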
