@rvvvt
rvvvt / searx
Created January 18, 2019 07:19
import requests
import json
import csv
ENGINE_URL = "https://searx.me/?q=%s&format=json"
def loop():
    with open('firstten.csv', 'r') as a:
        reader = csv.reader(a, lineterminator='\n')
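The preview cuts off inside loop(). A minimal sketch of how it could continue, assuming the gist reads one query per row of firstten.csv and fetches JSON results from the searx instance in ENGINE_URL; the loop body and the result handling below are assumptions, not the original gist:

import csv
import requests
from urllib.parse import quote_plus

ENGINE_URL = "https://searx.me/?q=%s&format=json"

def loop():
    # read one search query per row and ask the searx instance for JSON results
    with open('firstten.csv', 'r') as a:
        reader = csv.reader(a, lineterminator='\n')
        for row in reader:
            query = row[0]
            resp = requests.get(ENGINE_URL % quote_plus(query))
            if resp.status_code == 200:
                # assumes the standard searx JSON shape with a 'results' list
                for result in resp.json().get('results', []):
                    print(query, '->', result.get('url'))

if __name__ == '__main__':
    loop()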
@rvvvt
rvvvt / google_search_bs.py
Created January 18, 2019 15:54 — forked from yxlao/google_search_bs.py
Google search with BeautifulSoup
import requests
from bs4 import BeautifulSoup
search_url_prefix = "https://www.google.com/search?q="
def get_first_result(search_str):
    search_url = search_url_prefix + search_str
    r = requests.get(search_url)
    soup = BeautifulSoup(r.text, "html.parser")
    return soup.find('cite').text
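A usage note that is not part of the fork: the query should be URL-encoded before it is appended to search_url_prefix, and Google frequently changes its markup or blocks scripted requests, so soup.find('cite') can return None. A small usage sketch under those assumptions, run from the same module as get_first_result above:

from urllib.parse import quote_plus

if __name__ == '__main__':
    # quote_plus turns spaces into '+' so the query survives inside the URL
    print(get_first_result(quote_plus("beautifulsoup documentation")))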
@rvvvt
rvvvt / cdh-scrape.py
Created January 26, 2019 09:22 — forked from meg-codes/cdh-scrape.py
A basic web-scrape script designed to look for bad links on a particular site
#!/usr/bin/env python
# Script to scrape all links from a site, compile counts of each link and the
# HTTP status codes returned when accessing them, and output the results as a CSV
#
# This could reasonably be pulled into an OOP design, but it is left as plain
# functions because that can be easier for multitasking.
#
# Requirements:
# requests, bs4
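The preview stops at the requirements comment. A minimal sketch of the kind of check the description implies (fetch a page, collect its links, count them, record each link's status code, and write a CSV); the function name, output file, and single-page scope are illustrative assumptions, not the forked script itself:

#!/usr/bin/env python
import csv
from collections import Counter
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def check_links(start_url, out_csv='link_report.csv'):
    # collect every href on the start page, resolved to an absolute URL
    resp = requests.get(start_url)
    soup = BeautifulSoup(resp.text, 'html.parser')
    links = [urljoin(start_url, a['href']) for a in soup.find_all('a', href=True)]

    counts = Counter(links)
    with open(out_csv, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['url', 'count', 'status'])
        for url, count in counts.items():
            try:
                status = requests.head(url, allow_redirects=True, timeout=10).status_code
            except requests.RequestException as exc:
                status = type(exc).__name__
            writer.writerow([url, count, status])

if __name__ == '__main__':
    check_links('https://example.com/')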
@rvvvt
rvvvt / requests.py
Created January 26, 2019 22:39 — forked from Chairo/requests.py
requests multi-threading
# -*- coding:utf-8 -*-
import requests
from time import sleep
from threading import Thread
UPDATE_INTERVAL = 0.01
class URLThread(Thread):
    def __init__(self, url, timeout=10, allow_redirects=True):
        super(URLThread, self).__init__()
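The preview ends inside __init__. A minimal sketch of how such a thread-per-URL fetcher typically continues; the attribute names, the run() body, and the multi_get helper are assumptions, not the forked gist's code:

import requests
from threading import Thread

class URLThread(Thread):
    def __init__(self, url, timeout=10, allow_redirects=True):
        super(URLThread, self).__init__()
        self.url = url
        self.timeout = timeout
        self.allow_redirects = allow_redirects
        self.response = None

    def run(self):
        # each thread performs one blocking GET and stores the response
        try:
            self.response = requests.get(self.url, timeout=self.timeout,
                                         allow_redirects=self.allow_redirects)
        except requests.RequestException:
            self.response = None

def multi_get(urls, timeout=10):
    # start one thread per URL, wait for all of them, then collect the responses
    threads = [URLThread(url, timeout=timeout) for url in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [(t.url, t.response) for t in threads]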
@rvvvt
rvvvt / dirty-igdl.py
Last active May 19, 2019 16:57
Dirty lil instagram image scraper - downloads images by tag name. Aw yis.
import re
import requests
tag = 'mpower'
r = requests.get('http://www.instagram.com/explore/tags/' + tag + '/')
html = r.text
# print(html)
img = re.compile('(?:\"display_url\":\")([^\"]+)\"')
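The preview ends after compiling the regex. A sketch of the download step it implies: find every display_url match in the page source and save each image under a folder named after the tag. Instagram's tag pages have changed since 2019 and may no longer embed display_url in plain HTML, so treat this as an assumption that follows the preview's approach rather than working code:

import os
import re
import requests

tag = 'mpower'
r = requests.get('http://www.instagram.com/explore/tags/' + tag + '/')
img = re.compile('(?:\"display_url\":\")([^\"]+)\"')

os.makedirs(tag, exist_ok=True)
for i, url in enumerate(img.findall(r.text)):
    # the URLs are JSON-escaped inside the page source
    url = url.replace('\\u0026', '&')
    image = requests.get(url)
    if image.status_code == 200:
        with open(os.path.join(tag, '%s_%d.jpg' % (tag, i)), 'wb') as f:
            f.write(image.content)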
@rvvvt
rvvvt / list_remove_duplicates.py
Created February 20, 2019 15:44
A quick way to remove duplicates from a Python list. Super easy!
the_list = ['a','a','b','b','c','c']
for each in the_list:
    # prints our list with duplicates of course
    print(each)
newlist = [ii for n, ii in enumerate(the_list) if ii not in the_list[:n]]
for each in newlist:
    # and just like that - newlist has no duplicates!
    print(each)
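A note that is not in the original gist: the comprehension above is quadratic, since each element is searched for in the growing prefix the_list[:n]. For hashable items, an order-preserving one-liner with dict.fromkeys does the same job in linear time:

the_list = ['a', 'a', 'b', 'b', 'c', 'c']
# dict keys are unique and, from Python 3.7 on, keep insertion order
newlist = list(dict.fromkeys(the_list))
print(newlist)  # ['a', 'b', 'c']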
@rvvvt
rvvvt / vk_ip_async.py
Created February 25, 2019 15:11 — forked from colyk/vk_ip_async.py
async
import requests
import asyncio
from bs4 import BeautifulSoup
proxy_list = []
def get_html(URL):
    r = requests.get(URL)
    # print(r.request.headers)
    if r.status_code == 200:
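The preview cuts off inside get_html. A minimal sketch of the shape the fork suggests: fetch a page of free proxies with blocking requests calls, parse it with BeautifulSoup into proxy_list, and use asyncio to run the blocking calls concurrently in a thread pool. The proxy-list URL and the table-parsing details below are assumptions:

import asyncio

import requests
from bs4 import BeautifulSoup

proxy_list = []

def get_html(URL):
    r = requests.get(URL)
    if r.status_code == 200:
        return r.text
    return ''

def parse_proxies(html):
    # hypothetical parsing: pull ip:port pairs out of a table of free proxies
    soup = BeautifulSoup(html, 'html.parser')
    for row in soup.select('table tr'):
        cells = [td.get_text(strip=True) for td in row.find_all('td')]
        if len(cells) >= 2 and cells[0].count('.') == 3:
            proxy_list.append('%s:%s' % (cells[0], cells[1]))

async def collect(urls):
    # run the blocking requests calls in the default thread pool
    loop = asyncio.get_running_loop()
    pages = await asyncio.gather(*(loop.run_in_executor(None, get_html, u) for u in urls))
    for html in pages:
        parse_proxies(html)

if __name__ == '__main__':
    asyncio.run(collect(['https://free-proxy-list.net/']))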
@rvvvt
rvvvt / spamhaha.py
Created March 19, 2019 03:17 — forked from liveashish/spamhaha.py
Sarahah spam bot: A life saver! 🔥 💪
import time
import requests
users_to_attack = ['USER_NAME', ] #list of users to be attacked
def sarahah_post(user, msg):
    s = requests.Session()
    homeurl = 'https://' + user + '.sarahah.com/'
    home = s.get(homeurl)
var express = require("express");
var app = express();
var port = process.env.PORT || 3700;
var io = require('socket.io').listen(app.listen(port));
var Instagram = require('instagram-node-lib');
var http = require('http');
var request = require('request');
var intervalID;
/**
@rvvvt
rvvvt / Linkedin_Login.js
Created April 10, 2019 06:26
Linkedin Login
STEPS:
1.- Create an app in LinkedIn: https://www.linkedin.com/developers/apps
2.- Create your LinkedIn login URL: fill in https://www.linkedin.com/oauth/v2/authorization with your developer information.
Example with response_type, client_id, redirect_uri, and state:
https://www.linkedin.com/oauth/v2/authorization?response_type=code&client_id=86zig0sfofozk1&redirect_uri=https://localhost:8443/pro&state=aRandomString
3.- Get the user code from linkedin:
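The step list is cut off at step 3. As a hedged illustration of the authorization-code flow the steps describe, here is a Python sketch (the gist itself is JavaScript): after the user approves the app, LinkedIn redirects to redirect_uri with a ?code=... parameter, which is exchanged for an access token at the /oauth/v2/accessToken endpoint. The client_id and redirect_uri below come from the example URL above; the client secret is a placeholder:

from urllib.parse import urlencode

import requests

CLIENT_ID = '86zig0sfofozk1'          # from the example URL above
CLIENT_SECRET = 'YOUR_CLIENT_SECRET'  # placeholder
REDIRECT_URI = 'https://localhost:8443/pro'

def build_login_url(state):
    # step 2: the URL the user visits to grant access
    params = {
        'response_type': 'code',
        'client_id': CLIENT_ID,
        'redirect_uri': REDIRECT_URI,
        'state': state,
    }
    return 'https://www.linkedin.com/oauth/v2/authorization?' + urlencode(params)

def exchange_code(code):
    # step 3 onward: trade the ?code=... from the redirect for an access token
    resp = requests.post('https://www.linkedin.com/oauth/v2/accessToken', data={
        'grant_type': 'authorization_code',
        'code': code,
        'redirect_uri': REDIRECT_URI,
        'client_id': CLIENT_ID,
        'client_secret': CLIENT_SECRET,
    })
    return resp.json().get('access_token')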