
@sharmaansh21
sharmaansh21 / slack-pagerduty-oncall.py
Created May 25, 2022 11:34 — forked from markddavidoff/slack-pagerduty-oncall.py
Updates a Slack User Group with People that are on call in PagerDuty
#!/usr/bin/env python
from __future__ import print_function
import json
import logging
# urllib2 is Python 2 only; on Python 3 use instead:
#   from urllib.request import Request, urlopen
#   from urllib.error import URLError, HTTPError
from urllib2 import Request, urlopen, URLError, HTTPError
from base64 import b64decode
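The overall flow of the script above (fetch current on-calls from PagerDuty, then update a Slack user group) can be sketched in Python 3 with request builders. This is a minimal illustration, not the gist's implementation: the function names and the escalation-policy parameter are assumptions, while the two endpoints (`GET /oncalls` in PagerDuty REST API v2 and Slack's `usergroups.users.update`) are real.

```python
import json
from urllib.request import Request

# Real endpoints; everything else below is an illustrative assumption.
PAGERDUTY_ONCALLS_URL = "https://api.pagerduty.com/oncalls"
SLACK_USERGROUP_UPDATE_URL = "https://slack.com/api/usergroups.users.update"


def build_pagerduty_request(api_token, escalation_policy_id):
    """Build a GET request for who is currently on call for one policy."""
    url = "{}?escalation_policy_ids[]={}".format(
        PAGERDUTY_ONCALLS_URL, escalation_policy_id)
    return Request(url, headers={
        "Authorization": "Token token={}".format(api_token),
        "Accept": "application/vnd.pagerduty+json;version=2",
    })


def build_slack_request(bot_token, usergroup_id, slack_user_ids):
    """Build a POST request that replaces the user group's members."""
    payload = json.dumps({
        "usergroup": usergroup_id,
        "users": ",".join(slack_user_ids),  # Slack expects a comma-separated list
    }).encode("utf-8")
    return Request(SLACK_USERGROUP_UPDATE_URL, data=payload, method="POST",
                   headers={"Authorization": "Bearer {}".format(bot_token),
                            "Content-Type": "application/json"})
```

Each builder returns a `urllib.request.Request` that can be passed to `urlopen`; keeping request construction separate from I/O also makes the logic easy to test without network access.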
@sharmaansh21
sharmaansh21 / aws-ebs-test.md
Created November 2, 2023 15:32 — forked from niwinz/aws-ebs-test.md
EBS Benchmark (gp3 vs gp2, and st1 just for comparison).

EBS Volume Type Benchmark

Tested volumes:

  • /dev/nvme1n1 (type: st1, size: 125 GB, defaults)
  • /dev/nvme2n1 (type: gp3, size: 15 GB, 3000 IOPS, throughput: 125 MB/s)
  • /dev/nvme3n1 (type: gp2, size: 15 GB, defaults (100 IOPS, burstable to 3000))

All volumes were created from scratch; no gp2->gp3 conversion was involved.
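A benchmark like this is typically driven by fio. The job file below is a hedged sketch, not the gist's actual configuration: the device path matches the gp3 volume listed above, but the workload parameters (block size, queue depth, runtime) are illustrative assumptions.

```ini
; hypothetical fio job for the gp3 volume above -- adjust per volume
[global]
ioengine=libaio      ; async I/O, the usual choice on Linux
direct=1             ; bypass the page cache to measure the device
runtime=60
time_based
group_reporting

[seq-read-gp3]
filename=/dev/nvme2n1
rw=read              ; sequential read; use randread/randwrite for IOPS tests
bs=1M
iodepth=16
```

Running the same job against each device (changing only `filename`) keeps the comparison apples-to-apples.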

@sharmaansh21
sharmaansh21 / data_loading_utils.py
Created May 29, 2024 11:19 — forked from iyvinjose/data_loading_utils.py
Read large files line by line without loading entire file to memory. Supports files of GB size
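The technique the gist implements below, reading fixed-size chunks and stitching partial lines across chunk boundaries, can be sketched as a standalone generator. This is a minimal illustration under the assumption of UTF-8 text with `\n` line endings; the function name is hypothetical, not from the gist.

```python
def iter_lines_in_chunks(path, chunk_size=5000):
    """Yield complete lines from a file read in fixed-size chunks,
    so memory use stays bounded regardless of file size."""
    leftover = ""
    with open(path, "r") as fh:
        while True:
            data = fh.read(chunk_size)
            if not data:
                break
            data = leftover + data
            lines = data.split("\n")
            # the last element may be an incomplete line; carry it over
            leftover = lines.pop()
            for line in lines:
                yield line
    if leftover:
        yield leftover
```

Because it is a generator, callers can process arbitrarily large files one line at a time without ever holding the whole file in memory.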
def read_lines_from_file_as_data_chunks(file_name, chunk_size, callback, return_whole_chunk=False):
    """
    read file line by line regardless of its size
    :param file_name: absolute path of the file to read
    :param chunk_size: size of data to be read at a time
    :param callback: callback method, prototype ----> def callback(data, eof, file_name)
    :param return_whole_chunk: if True, pass whole chunks to the callback instead of single lines
    :return:
    """
    def read_in_chunks(file_obj, chunk_size=5000):