from lxml import html
import requests
import json
import argparse
from collections import OrderedDict
def get_headers():
    return {"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
            "accept-encoding": "gzip, deflate, br"}  # further headers are cut off in this preview
@AliceWonderland
AliceWonderland / React-Redux-Resources.md
Last active August 22, 2024 07:15
A list of tutorials for beginner practice projects in React and Redux for those who like to learn by coding.

Hands-On Tutorials for React and Redux

Beginner practice projects in React and Redux for those who like to learn by coding.

There's so much out there and many ways to start. This list is a starting point; from here your path can branch out however you prefer.

All resources and references are free and/or open source.

React

Build your first projects with Facebook's Create React App, a barebones React setup (best for learning only). It spares you from researching, installing, and configuring a build setup, so you can jump straight into trying React and get a better sense of it.

@aneesha
aneesha / display_closestwords_tsnescatterplot.ipynb
Last active January 31, 2021 20:11
Use t-SNE to plot only the words most similar to a query word in a Word2Vec model
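The notebook itself is not rendered in this listing, so here is a minimal sketch of the technique the description names: take a query word, look up its nearest Word2Vec neighbours, and project them to 2-D with t-SNE. The model path and query word are assumptions, not the notebook's values.

# Minimal sketch (not the original notebook); assumes a trained gensim
# Word2Vec model saved as "word2vec.model" and a query word in its vocabulary.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from gensim.models import Word2Vec

def plot_closest_words(model, word, topn=10):
    neighbours = [w for w, _ in model.wv.most_similar(word, topn=topn)]
    words = [word] + neighbours
    vectors = np.array([model.wv[w] for w in words])

    # perplexity must stay below the number of points being embedded
    coords = TSNE(n_components=2, perplexity=min(5, len(words) - 1),
                  random_state=0).fit_transform(vectors)

    plt.scatter(coords[:, 0], coords[:, 1])
    for (x, y), w in zip(coords, words):
        plt.annotate(w, (x, y))
    plt.show()

model = Word2Vec.load("word2vec.model")  # assumed path
plot_closest_words(model, "king")        # assumed query word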
@glocore
glocore / Instructions.md
Created March 18, 2017 17:40
Setup React Native on Ubuntu 16.04/16.10

1. Install Node.js

  • Follow the steps given here to install Node.js and NPM.
  • Verify whether NPM is installed by typing npm -v in a terminal window.

2. Install React Native CLI

  • npm install -g react-native-cli

3. Setup Android Development Environment

  • Download and install Android Studio as explained here.
  • Run Android Studio and open the SDK Manager. Under the SDK Platforms tab, check Show Package Details, expand Android 6.0 (Marshmallow) and check the following:
@scrapehero
scrapehero / yahoo_finance.py
Last active January 22, 2024 21:46
Python 3 code to extract stock market data from yahoo finance
from lxml import html
import requests
from time import sleep
import json
import argparse
from collections import OrderedDict

def parse(ticker):
    url = "http://finance.yahoo.com/quote/%s?p=%s" % (ticker, ticker)
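The preview cuts off inside parse(). Below is a hedged sketch of how such a scraper typically continues, not the gist's own code: a minimal User-Agent header stands in for the fuller get_headers() dict shown at the top of this page, and the XPath selector is an illustrative assumption, since Yahoo's markup changes frequently.

# Minimal sketch (not the original gist): fetch the quote page and collect
# the summary table into an ordered dict.
import json
from collections import OrderedDict
from lxml import html
import requests

def parse_summary(ticker):
    url = "http://finance.yahoo.com/quote/%s?p=%s" % (ticker, ticker)
    headers = {"user-agent": "Mozilla/5.0"}  # assumed minimal headers
    response = requests.get(url, headers=headers, timeout=30)
    tree = html.fromstring(response.text)
    data = OrderedDict()
    for row in tree.xpath('//div[@id="quote-summary"]//tr'):  # assumed selector
        cells = row.xpath('.//td//text()')
        if len(cells) >= 2:
            data[cells[0].strip()] = cells[1].strip()
    return data

print(json.dumps(parse_summary("AAPL"), indent=2))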
@Darth-Knoppix
Darth-Knoppix / main.py
Created February 26, 2017 17:11
Twitter NLP with Spacy and Textblob
from twython import Twython, TwythonStreamer
from mitie import *
import spacy
from textblob import TextBlob
import sys, os, json, random
from nltk.corpus import stopwords
import markovify
# **********************************************************************************************************************
# Get Locations
@Gadgetoid
Gadgetoid / asound.md
Last active August 21, 2020 15:02
Working asound.conf for libphatmeter on Volumio. Very hacky and still needs tweaks to mpd.conf and /dev/i2c-1 owned by group "audio" =/

This is a horrible, hacky, proof-of-concept mess for Pi VU Meter on Volumio. It details steps for the old phatmeter library, but it should work with PiVuMeter too: simply change "ameter" to "pivumeter" and link the right lib accordingly.

udev rules permissions

REQUIRED: Create /etc/udev/rules.d/60-i2c.rules with the contents:

KERNEL=="i2c-1", GROUP="i2c", MODE="0777"
@rmoff
rmoff / 01_Spark+Streaming+Kafka+Twitter.ipynb
Last active September 17, 2020 17:41
Simple example of processing Twitter JSON payloads from a Kafka stream with Spark Streaming in Python
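The notebook is not rendered in this listing; below is a minimal sketch of the idea in the description, counting tweets per user from a Kafka topic. It uses the old DStream Kafka API (pyspark.streaming.kafka, available up to Spark 2.4, removed in Spark 3), and the topic name and broker address are assumptions.

# Minimal sketch (not the original notebook): tweets-per-user counts from a
# Kafka topic named "twitter" on a local broker.
import json
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="TwitterKafkaDemo")
ssc = StreamingContext(sc, 10)  # 10-second batches

stream = KafkaUtils.createDirectStream(
    ssc, ["twitter"], {"metadata.broker.list": "localhost:9092"})

tweets = stream.map(lambda kv: json.loads(kv[1]))
counts = (tweets
          .map(lambda t: (t.get("user", {}).get("screen_name", "unknown"), 1))
          .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()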
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF, LatentDirichletAllocation
import numpy as np

def display_topics(H, W, feature_names, documents, no_top_words, no_top_documents):
    for topic_idx, topic in enumerate(H):
        print("Topic %d:" % (topic_idx))
        print(" ".join([feature_names[i]
                        for i in topic.argsort()[:-no_top_words - 1:-1]]))
        top_doc_indices = np.argsort(W[:, topic_idx])[::-1][0:no_top_documents]
        for doc_index in top_doc_indices:
            print(documents[doc_index])
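The fragment above expects a document-topic matrix W and a topic-term matrix H. A minimal, self-contained usage sketch under assumed settings (20 newsgroups as the corpus, 10 NMF topics, scikit-learn >= 1.0 for get_feature_names_out()):

# Usage sketch (assumptions: 20 newsgroups corpus, 10 NMF topics).
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

documents = fetch_20newsgroups(remove=('headers', 'footers', 'quotes')).data

tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, stop_words='english')
tfidf = tfidf_vectorizer.fit_transform(documents)

nmf = NMF(n_components=10, random_state=1)
W = nmf.fit_transform(tfidf)   # document-topic weights
H = nmf.components_            # topic-term weights

display_topics(H, W, tfidf_vectorizer.get_feature_names_out(),
               documents, no_top_words=10, no_top_documents=3)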
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import NMF, LatentDirichletAllocation

def display_topics(model, feature_names, no_top_words):
    for topic_idx, topic in enumerate(model.components_):
        print("Topic %d:" % (topic_idx))
        print(" ".join([feature_names[i]
                        for i in topic.argsort()[:-no_top_words - 1:-1]]))
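A similarly hedged usage sketch for this model-based variant, fitting LDA on the 20 newsgroups corpus; the topic count and vectorizer settings are assumptions, and scikit-learn >= 1.0 is assumed for get_feature_names_out().

# Usage sketch (assumptions: 20 newsgroups corpus, 10 LDA topics).
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

documents = fetch_20newsgroups(remove=('headers', 'footers', 'quotes')).data

tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2, stop_words='english')
tf = tf_vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=10, random_state=1).fit(tf)

display_topics(lda, tf_vectorizer.get_feature_names_out(), no_top_words=10)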