Look for holes and dig everything ~_~
Subdomain Enumeration:
# Basic usage
subfinder -d example.com > example.com.subs
# Recursive
subfinder -d example.com -recursive -silent -t 200 -v -o example.com.subs
subfinder -dL domains.txt -o subdomains.txt && subjack -w subdomains.txt
# Use censys for even more results
subfinder -d example.com -b -w wordlist.txt -t 100 -sources censys -set-settings CensysPages=2 -v -o example.com.subs
# Automation
subfinder -d subtarget.com -silent | httpx -follow-redirects -status-code -vhost -threads 300 -silent | sort -u | grep "\[200\]" | cut -d [ -f1 > sub.txt
# Passive
amass enum --passive -d example.com -o example.com.subs
# Active
amass enum -src -ip -brute -min-for-recursive 2 -d example.com -o example.com.subs
# Basic usage
assetfinder [--subs-only] <domain>
# Extract subdomains from output
gau -subs example.com | cut -d / -f 3 | sort -u
sublist3r
python3 sublist3r.py -d example.com
nuclei
nuclei -l alive.subdomains -t ~/tools/nuclei-templates/cves -c 60 -o nuclei_op/cves -pbar
Altdns
altdns -i subdomains.txt -o data_output -w words.txt -r -s results_output.txt
Domain Profiler
./profile <targetName>
photon
python3 photon.py -u "<yourTargetName>" --keys --dns
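A rough way to chain the tools above (a sketch only; assumes subfinder and amass are installed, example.com is a placeholder, and the merged list feeds the later probing steps):
# Merge passive subfinder and amass results into one deduplicated list
subfinder -d example.com -silent -o subfinder.subs
amass enum --passive -d example.com -o amass.subs
sort -u subfinder.subs amass.subs > example.com.subs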
Search Engines:
github
"company.com" "dev"
"dev.company.com"
"company.com" API_key
"company.com" password
"api.company.com" authorization
password ".target.tld" telnet, ftp, ssh, mysql, jdbc, oracle
target.tld "password_value_here"
google
site:target.com -www
site:target.com intitle:"test" -support
site:target.com ext:php | ext:html
site:subdomain.target.com
site:target.com inurl:auth
site:target.com inurl:dev
inurl:wp-config.php intext:DB_PASSWORD -stackoverflow -wpbeginner -foro -forum -topic -blog -about -docs -articles
Shodan
country: find devices in a particular country
geo: you can pass it coordinates
hostname: find values that match the hostname
net: search based on an IP or /x CIDR
os: search based on operating system
port: find particular ports that are open
before/after: find results within a timeframe
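The same filters can be combined on the command line (a sketch; assumes the shodan CLI from pip install shodan is installed and initialized with an API key, and example.com is a placeholder target):
# Hosts matching the target hostname with port 443 open, in one country
shodan search hostname:example.com port:443 country:US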
Censys
Get Subdomains from IPs
# Basic usage
python3 hosthunter.py <target-ips.txt> > vhosts.txt
httprobe
cat recon/example/domains.txt | httprobe
# Use other ports
cat domains.txt | httprobe -p http:81 -p https:8443
# Concurrency - you can set the concurrency level with the -c flag:
cat domains.txt | httprobe -c 50
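To bridge the enumeration and scanning steps, the subdomain list can be probed into the alive.subdomains file used by nuclei above (a sketch; example.com.subs is assumed from the enumeration section):
# Keep only hosts that answer over HTTP/HTTPS
cat example.com.subs | httprobe -c 50 > alive.subdomains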
subjack
# Basic usage
./subjack -w subdomains.txt -t 100 -timeout 30 -o results.txt -ssl
EyeWitness
# Basic usage
./EyeWitness.py -f filename --timeout optionaltimeout
# Further examples
./EyeWitness.py -f urls.txt --web
./EyeWitness.py -x urls.xml --timeout 8
./EyeWitness.py -f urls.txt --web --proxy-ip 127.0.0.1 --proxy-port 8080 --proxy-type socks5 --timeout 120
webscreenshot
# Basic usage
python webscreenshot.py -i list.txt -w 40
URL and Parameter Discovery:
# Basic usage
# Run with single site
gospider -s "https://google.com/" -o output -c 10 -d 1
# Run with site list
gospider -S sites.txt -o output -c 10 -d 1
# Also get URLs from 3rd party (Archive.org, CommonCrawl.org, VirusTotal.com, AlienVault.com) and include subdomains
gospider -s "https://google.com/" -o output -c 10 -d 1 --other-source --include-subs
# Blacklist url/file extension
gospider -s "https://google.com/" -o output -c 10 -d 1 --blacklist ".(woff|pdf)"
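To hand the crawl results to the Filtering section below, the bare URLs can be pulled out of gospider's stdout (a sketch; the grep pattern and the urls.txt file name are assumptions, not gospider options):
# Extract plain URLs from gospider output for later filtering
gospider -S sites.txt -o output -c 10 -d 1 --other-source --include-subs | grep -oE 'https?://[^ ]+' | sort -u > urls.txt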
Arjun
# Scanning a single URL
python3 arjun.py -u https://api.example.com/endpoint --get
python3 arjun.py -u https://api.example.com/endpoint --post
# Scanning multiple URLs
python3 arjun.py --urls targets.txt --get
# Multi-threading
python3 arjun.py -u https://api.example.com/endpoint --get -t 22
GetAllUrls (gau)
# Basic usage
printf example.com | gau
cat domains.txt | gau
gau example.com
gau -subs example.com
Filtering
After assembling a huge list of subdomains, URLs, and parameters, we now want to filter them and remove duplicates.
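A minimal first pass, assuming urls.txt holds the combined output of the discovery steps above; it produces the urls-uniq.txt used with gf below:
# Drop exact duplicates before heavier filtering
sort -u urls.txt -o urls-uniq.txt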
qsreplace
# Remove duplicates
cat urls.txt | qsreplace -a
gf
gf redirect urls-uniq.txt
urls-uniq.txt:3:https://www.example.com/test1?login_url=2&param3=1
Use BurpSuite's passive scans
cat urls.txt | parallel -j50 -q curl -x http://127.0.0.1:8080 -w 'Status:%{http_code}\t Size:%{size_download}\t %{url_effective}\n' -o /dev/null -sk
Linkfinder
# Basic usage
# Get HTML report
python linkfinder.py -i https://example.com/1.js -o results.html
# Output to CLI
python linkfinder.py -i https://example.com/1.js -o cli
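LinkFinder pairs well with the gau output from earlier: collect the JavaScript URLs first, then mine each one for endpoints (a sketch; the js-files.txt name and the grep pattern are assumptions):
# Gather the target's JS URLs, then run LinkFinder on each
gau example.com | grep -iE '\.js($|\?)' | sort -u > js-files.txt
while read -r js; do python linkfinder.py -i "$js" -o cli; done < js-files.txt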
graburl -url domain
gospider -s "domain" -o output -c 10 -d 1
gospider -s "domain" -o output -c 10 -d 1 --other-source --include-subs
gospider -S sites.txt -o output -c 10 -d 1
python3 arjun.py -u https://api.example.com/endpoint --get -o result.js
ParamSpider
python3 paramspider.py --domain hackerone.com
Photon
python photon.py -u "<yourTargetName>"
Find hidden directories or files:
ffuf
# Basic usage
ffuf -w wordlist.txt -u https://example.com/FUZZ
# Automatically calibrate filtering options
ffuf -w wordlist.txt -u https://example.com/FUZZ -ac
# Fuzz file paths from wordlist.txt, match all responses but filter out those with content-size 42
ffuf -w wordlist.txt -u https://example.org/FUZZ -mc all -fs 42 -c -v
dirsearch
python dirsearch.py -u <target-url> -e conf,config,bak,backup,swp,old,db,sql,asp,aspx,aspx~,asp~,py,py~,rb,rb~,php,php~,bak,bkp,cache,cgi,conf,csv,html,inc,jar,js,json,jsp,jsp~,lock,log,rar,old,sql,sql.gz,sql.tar.gz,sql~,swp,swp~,tar,tar.bz2,tar.gz,txt,wadl,zip,.log,.xml,.js.,.json
Gobuster
for i in $(cat domain); do echo ""; echo "Subdomain of $i"; echo ""; gobuster dir -w /usr/share/seclists/Discovery/Web-Content/common.txt -u $i -e -o tmp; cat tmp >> final; echo ""; done
Wordpress:
wpscan --url example.com -e vp --plugins-detection mixed --api-token YOUR_TOKEN
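The same scan can be widened with extra -e enumeration options (a sketch based on wpscan's standard flags; verify against your wpscan version):
# Also enumerate users and vulnerable themes
wpscan --url example.com -e u,vt,vp --api-token YOUR_TOKEN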