This doc has been replaced by the official Ordinals Handbook - https://docs.ordinals.com/guides/collecting/sparrow-wallet.html
//
// Uncrops Google Docs images by removing the clip path.
// Doesn't work in WebKit because the canvas elements aren't
// in the DOM, so it's Firefox-only for now.
//
// Click the "raw" button and import the URL into your userscript
// manager - tested in Tampermonkey, but it should work in any of them.
//
// ==UserScript==
import tweepy
import time
import datetime
import requests
import uuid
import pymongo
from pymongo import MongoClient

def limit_handler(cursor):
    # Yield items from a tweepy Cursor, sleeping out the 15-minute
    # rate-limit window whenever Twitter cuts us off.
    while True:
        try:
            yield cursor.next()
        except tweepy.RateLimitError:
            time.sleep(15 * 60)
        except StopIteration:
            return
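The rate-limit-handler pattern can be exercised without Twitter credentials. A self-contained sketch, where `RateLimitError` and `FlakyCursor` are local stand-ins I made up for this demo (not tweepy's classes), simulating a cursor that trips a rate limit once mid-stream:

```python
import time

class RateLimitError(Exception):
    """Local stand-in for tweepy.RateLimitError, for demo purposes only."""

def limit_handler(cursor, wait=0.01):
    # Same retry pattern: sleep out the limit, then resume iterating.
    while True:
        try:
            yield next(cursor)
        except RateLimitError:
            time.sleep(wait)
        except StopIteration:
            return

class FlakyCursor:
    """Hypothetical iterator that raises RateLimitError once, then resumes."""
    def __init__(self, items):
        self._items = list(items)
        self._i = 0
        self._tripped = False
    def __iter__(self):
        return self
    def __next__(self):
        if self._i == 1 and not self._tripped:
            self._tripped = True
            raise RateLimitError()
        if self._i >= len(self._items):
            raise StopIteration
        item = self._items[self._i]
        self._i += 1
        return item

print(list(limit_handler(FlakyCursor(["tweet-1", "tweet-2", "tweet-3"]))))
# prints ['tweet-1', 'tweet-2', 'tweet-3']
```

With a real tweepy Cursor you would pass `tweepy.Cursor(api.user_timeline).items()` in place of `FlakyCursor`.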
import os
import tweepy
from flask import Flask

# Authenticate to Twitter
auth = tweepy.OAuthHandler("API KEY", "API SECRET")
auth.set_access_token("TOKEN", "SECRET")

# Create API object
api = tweepy.API(auth)
-- Joins the chrome_extensions and users tables, looking for the Mega Chrome
-- extension identifier and a specific version number. Consider also running
-- this without the version filter to find all users with the Mega extension
-- installed, so it can be removed before it updates itself.
SELECT users.username, chrome_extensions.name, chrome_extensions.version, chrome_extensions.path
FROM chrome_extensions
JOIN users ON users.uid = chrome_extensions.uid
WHERE chrome_extensions.identifier = 'bigefpfhnfcobdlfbedofhhaibnlghod'
  AND chrome_extensions.version = '3.39.4';
I had a heck of a time getting a Cuckoo sandbox running, and below I hope to help you get one up and running relatively quickly by detailing the steps and gotchas I stumbled across along the way. I mention this in the references at the end of this gist, but what you see here is heavily influenced by this article from NVISO.
- Set up an Ubuntu 16.04 64-bit desktop VM (download here) in VMware with the following properties:
  - 100 GB hard drive
  - 2 processors
  - 8 GB of RAM
// Open the direct messages window, then paste this into the console.
function deleteNextConversation() {
    var dm = document.getElementsByClassName("DMInbox-conversationItem")[0];
    if (!dm) {
        // No conversations left; stop polling.
        clearInterval(tmr);
        return;
    }
    dm.firstChild.click();
    setTimeout(function () {
        document.getElementsByClassName("js-actionDeleteConversation")[0].click();
    }, 1000);
}

var tmr = setInterval(deleteNextConversation, 2000);
If you download your personal Twitter archive, you don't quite get the data as JSON, but as a series of `.js` files, one for each month (these are meant to replicate the Twitter API responses for the front-end part of the downloadable archive).

If you want to use the data in those files, which is far richer than the CSV data, for some analysis or an app, just run this script. Run `sh ./twitter-archive-to-json.sh` in the same directory as the `/tweets` folder that comes with the archive download, and you'll get two files:

- `tweets.json`: a JSON list of the tweet objects
- `tweets_dict.json`: a JSON dictionary where each Tweet's key is its `id_str`

You'll also get a `/json-tweets` directory which has the individual JSON files for each month of tweets.
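The shell script itself isn't shown here, but the core transformation is easy to sketch in Python. This assumes, as the archive format suggests, that each `.js` file is a single JavaScript assignment like `window.YTD.tweet.part0 = [...]`, so stripping everything up to the first `=` leaves plain JSON. The function name is mine, not part of the script:

```python
import json

def archive_js_to_tweets(js_text):
    # Twitter archive .js files wrap a JSON array in a JS assignment,
    # e.g. "window.YTD.tweet.part0 = [ ... ]". Drop the assignment
    # prefix and parse the remainder as JSON.
    payload = js_text[js_text.index("=") + 1:].strip()
    return json.loads(payload)

sample = 'window.YTD.tweet.part0 = [{"id_str": "123", "full_text": "hello"}]'
tweets = archive_js_to_tweets(sample)           # the tweets.json shape
tweets_dict = {t["id_str"]: t for t in tweets}  # the tweets_dict.json shape
print(tweets_dict["123"]["full_text"])          # prints hello
```

Looping this over every file in `/tweets` and dumping the combined list and dict with `json.dump` reproduces the two output files described above.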
The purpose of this document is to make recommendations on how to browse in a privacy- and security-conscious manner. This information is compiled from a number of sources, which are referenced throughout the document, as well as my own experience with the described technologies.
I welcome contributions and comments on the information contained. Please see the How to Contribute section for information on contributing your own knowledge.
# -*- coding: utf-8 -*-
# ---------------------------
# H. Sonesson, Atea
# ---------------------------
from pyPdf import PdfFileWriter, PdfFileReader
import os
import urlparse
import urllib
from bs4 import BeautifulSoup