When I run dig for googleusercontent.com I don't get an answer, but if I specify the DNS server's IP address I do. curl and Python also cannot resolve the name, while the browser can. Any ideas what is happening?

$ dig googleusercontent.com

; <<>> DiG 9.8.3-P1 <<>> googleusercontent.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23113
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
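curl and Python both go through the system resolver (getaddrinfo), while browsers often keep their own DNS cache or prefetcher, so the two can disagree. A minimal sketch for checking what the system resolver returns from Python; an empty list here corresponds to the ANSWER: 0 in the dig output above:

```python
import socket

def resolve(name):
    """Resolve a hostname via the system resolver (the same path curl and
    Python's requests use); returns a sorted list of unique addresses."""
    try:
        infos = socket.getaddrinfo(name, None)
    except socket.gaierror:
        return []
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    # The browser may answer from its own cache even when this returns [].
    print(resolve("googleusercontent.com"))
```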
CC=gcc
INCLUDE=/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7
LINK=python2.7
FILE=shell

$(FILE): $(FILE).c
	$(CC) $(FILE).c -I$(INCLUDE) -l$(LINK) -o $(FILE)

$(FILE).c:
	cython --embed $(FILE).pyx
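The Makefile expects a shell.pyx next to it. The original isn't shown, so this is a minimal hypothetical stand-in; plain Python is valid Cython, so a file like this compiles under cython --embed as-is:

```python
# shell.pyx -- hypothetical entry point for the cython --embed build above.
import subprocess

def run(cmd):
    """Run a shell command and return its output as text."""
    return subprocess.check_output(cmd, shell=True).decode()

if __name__ == "__main__":
    print(run("echo hello"), end="")
```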
import requests

resp = requests.get('https://ip-ranges.amazonaws.com/ip-ranges.json')
ranges = resp.json()
sync = ranges['syncToken']
ec2 = [r['ip_prefix'] for r in ranges['prefixes'] if r['service'] == 'EC2']

# Write the EC2 prefixes out as a masscan configuration file.
with open('ec2.conf', 'w') as f:
    f.write('# EC2 Masscan Configuration. syncToken: {0}\n'.format(sync))
    for prefix in ec2:
        f.write('range = {0}\n'.format(prefix))
#!/usr/bin/env python
import requests
import random
import json
import os

token = ''
cookie = ''
server = ''
#!/bin/sh
if [ "$#" -ne 1 ]; then
    echo "Usage: bust.sh URL"
    exit 1
fi

APPTEST_DIR="/Users/shaywood/apptest"
DISC="$APPTEST_DIR/fuzzdb/discovery/PredictableRes"
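The fuzzdb path suggests wordlist-based content discovery. A minimal Python sketch of the core loop, with the HTTP call injectable so it can be exercised without a live target (the default HEAD-request behavior is an assumption about how the real script probes paths):

```python
import urllib.request
import urllib.error

def bust(base_url, words, fetch=None):
    """Try each word as a path under base_url; return URLs that do not 404.
    `fetch` maps a URL to an HTTP status code (or None on network error)."""
    if fetch is None:
        def fetch(url):
            req = urllib.request.Request(url, method="HEAD")
            try:
                with urllib.request.urlopen(req, timeout=5) as resp:
                    return resp.status
            except urllib.error.HTTPError as e:
                return e.code
            except urllib.error.URLError:
                return None
    found = []
    for word in words:
        url = base_url.rstrip("/") + "/" + word
        if fetch(url) not in (None, 404):
            found.append(url)
    return found
```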
from scapy.all import *

# Build a layer-3 packet: IP / TCP / raw HTTP payload.
http = IP(dst="10.0.2.15")/TCP(dport=80)/"GET /index.html HTTP/1.0\r\n\r\n"

# send() works at layer 3 and lets Scapy handle routing.
send(http)

# sendp() works at layer 2, so the packet needs an Ether() header.
sendp(Ether()/http, iface="eth0")
# I was testing a web app recently where each POST request updated the session cookie
# and generated a new CSRF token in a hidden input field in the body of the response.
# By default, Burp's session handling rules will only use the cookie jar for Spider
# and Scanner. I modified the rules to use the cookie jar for Intruder and Repeater
# as well. In addition, Burp will only update the cookie jar from Proxy and Scanner,
# so I had to allow Repeater, Spider, and Intruder to update the cookie jar as well.
# This allowed me to use a fresh cookie with each request, as required by the app.
#
# To get a fresh CSRF token with each request I had to write an extension. The
# extension processes any responses that it receives from any tool except Proxy and
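A minimal sketch of the response-processing half of such an extension: pull the fresh token out of each response body so it can be substituted into the next request. The field name csrf_token is hypothetical; in a real Burp extension this function would be called from IHttpListener's processHttpMessage:

```python
import re

# Hypothetical field name -- adjust the pattern to match the target app.
TOKEN_RE = re.compile(r'name="csrf_token"\s+value="([^"]+)"')

def extract_csrf(body):
    """Return the CSRF token from a response body, or None if absent."""
    m = TOKEN_RE.search(body)
    return m.group(1) if m else None
```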
# The goal of this script is to complete a three-way handshake with a netcat listener on port 8888. Tcpdump
# shows the SYN packet being sent, but I'm getting a RST/ACK instead of a SYN/ACK packet from netcat. I've
# configured iptables to drop any RST packets where the source and destination are the same as the server's
# IP address, but the output from iptables -L -nv shows the rule is not being hit. Any ideas what is going on?
#
# I think I've decided that Scapy is good for processing pcaps or gathering stats while sniffing traffic, but
# for actually sending packets it sucks. I know I can create the socket with Python and use the stream with
# Scapy, but I really don't want to do that.

# Suppress Scapy IPv6 warning
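In Scapy handshake attempts, the RST is often injected by the sender's own kernel, which has no socket for the hand-crafted connection and resets it; the usual workaround is an OUTPUT rule on the sending host, e.g. iptables -A OUTPUT -p tcp --tcp-flags RST RST -d <target> -j DROP (the rule must drop RSTs your host sends, not ones arriving from the server). The sequence/acknowledgement bookkeeping Scapy has to get right can be modeled without sending anything:

```python
import random

def three_way_handshake(server_isn):
    """Model the seq/ack bookkeeping of a TCP handshake as (flags, seq, ack)
    tuples for the three segments; no packets are actually sent."""
    client_isn = random.randrange(2**32)
    syn = ("SYN", client_isn, 0)                       # client -> server
    syn_ack = ("SYN/ACK", server_isn, client_isn + 1)  # server -> client
    ack = ("ACK", client_isn + 1, server_isn + 1)      # client -> server
    return syn, syn_ack, ack
```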
#!/usr/bin/env python3
import requests
import re
import hashlib
import sys

tag_re = re.compile(r'<.*?>')

if len(sys.argv) != 2:
    print('Usage: {0} URL'.format(sys.argv[0]))
    sys.exit(1)
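The tag-stripping regex plus hashlib suggests the script fingerprints a page's text content while ignoring markup (useful for change detection). A sketch of that idea; the function name is mine, not the original script's:

```python
import hashlib
import re

tag_re = re.compile(r'<.*?>')

def page_fingerprint(html):
    """Strip markup and hash what remains, so purely cosmetic tag
    changes do not alter the fingerprint."""
    text = tag_re.sub('', html)
    return hashlib.md5(text.encode('utf-8')).hexdigest()
```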
#!/usr/bin/env python3
import sys
import dns.resolver
import dns.reversename
import dns.zone
import dns.exception

TIMEOUT = 15.0