Yih Yang (@yihyang)

# Groups - allows access control to different posts, pages, etc.
https://wordpress.org/plugins/groups/
# Worona - turns a blog into a mobile app
# However, full functionality such as push notifications must be purchased
# Probably better off building our own app
https://www.worona.org/extensions/

GIT Cheat Sheet

| Command | Description | Example |
| --- | --- | --- |
| `git init` | Initialize a new repository | |
| `git checkout <branch_name>` | Check out a particular branch | `git checkout master` / `git checkout ui` |
| `git pull <remote> <branch_name>` | Pull a branch from a remote | `git pull origin ui` |
| `git push <remote> <branch_name>` | Push a branch to a remote | `git push origin ui` |
| `git add <file_name>` | Stage a file for the next commit | `git add config.xml` |
| `git add -A` | Stage ALL current changes for the next commit | |
yihyang / json2csv.py
Last active July 27, 2016 09:00
Python script to read data stored in JSON format and convert into CSV format
import json
import csv

# Read the JSON records from file
with open('data.json') as file:
    data = json.load(file)

# Write the selected fields of each record to CSV
# (newline='' keeps csv from emitting blank lines on Windows)
with open('data.csv', 'w', newline='') as file:
    csv_file = csv.writer(file)
    for item in data:
        csv_file.writerow([item['item0'], item['item1'], item['item2'], item['item3'],
                           item['item4'], item['item5'], item['item6'], item['item7']])
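As a quick sanity check, the conversion above can be exercised end-to-end with made-up sample data. The `item0`..`item7` field names follow the gist; the sample records and temp-file paths are purely illustrative:

```python
import csv
import json
import os
import tempfile

# Made-up sample records using the item0..item7 keys the gist expects
records = [{'item%d' % i: 'r%d-c%d' % (r, i) for i in range(8)}
           for r in range(2)]

workdir = tempfile.mkdtemp()
json_path = os.path.join(workdir, 'data.json')
csv_path = os.path.join(workdir, 'data.csv')

with open(json_path, 'w') as f:
    json.dump(records, f)

# Same conversion as the gist, with explicit file paths
with open(json_path) as f:
    data = json.load(f)
with open(csv_path, 'w', newline='') as f:
    writer = csv.writer(f)
    for item in data:
        writer.writerow([item['item%d' % i] for i in range(8)])

with open(csv_path, newline='') as f:
    rows = list(csv.reader(f))

print(rows[0][:2])  # first two cells of the first CSV row
```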
yihyang / web2csv.py
Created July 27, 2016 08:59
Python script to read JSON data from web (url) and convert it into a CSV file
import csv
import json
from urllib.request import urlopen  # Python 3; the original used Python 2's urllib.urlopen

url2read = '<YOUR URL HERE>'

# Fetch the URL and parse the response body as JSON
response = urlopen(url2read)
data = json.loads(response.read())

with open('data.csv', 'w', newline='') as file:
    csv_file = csv.writer(file)
    for item in data:
        csv_file.writerow([item['item0'], item['item1'], item['item2'], item['item3'],
                           item['item4'], item['item5'], item['item6'], item['item7']])
<?php
// API access key from Google API's Console
define( 'API_ACCESS_KEY', 'YOUR-API-ACCESS-KEY-GOES-HERE' );
$registrationIds = array("YOUR DEVICE IDS WILL GO HERE" );
// prep the bundle
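The PHP fragment above preps a push message for Google's (legacy) GCM/FCM HTTP endpoint. The same bundle can be sketched in Python; the key and device-ID placeholders mirror the snippet, while the `title`/`message` data keys are an assumption about what the truncated fragment goes on to send:

```python
import json

# Placeholders copied verbatim from the PHP fragment above
API_ACCESS_KEY = 'YOUR-API-ACCESS-KEY-GOES-HERE'
registration_ids = ['YOUR DEVICE IDS WILL GO HERE']

def build_push_bundle(title, message):
    # Payload shape accepted by the legacy GCM/FCM HTTP API;
    # the 'title'/'message' keys are an assumption, not from the gist
    return {
        'registration_ids': registration_ids,
        'data': {'title': title, 'message': message},
    }

bundle = build_push_bundle('hello', 'push test')
print(json.dumps(bundle, sort_keys=True))
```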
yihyang / scrapy-example.py
Created August 15, 2016 05:10
Example of crawling websites from scrapy
import scrapy
import os.path

class EmailSpider(scrapy.Spider):
    name = "test"
    allowed_domains = ["test.com"]
    # the spider will crawl each URL listed in 'start_urls'
    # (Scrapy requires a URL scheme, so 'http://' is added here)
    start_urls = ['http://test.com/1', 'http://test.com/2']

    def parse(self, response):
        pass  # body truncated in the original gist preview

# -*- coding: utf-8 -*-
import scrapy
import os.path

class DomainSpider(scrapy.Spider):
    name = "domain"
    allowed_domains = ["www.domain.com"]
    max_num_of_page = 2
    link = 'http://www.domain.com'
    start_urls = []
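DomainSpider's preview ends with an empty `start_urls`, presumably filled in from `link` and `max_num_of_page` further down the file. A hypothetical reconstruction of that step (the `/page/<n>` URL pattern is invented for illustration):

```python
# Hypothetical sketch: build start_urls from the spider's `link` and
# `max_num_of_page` attributes; the '/page/<n>' path pattern is assumed
link = 'http://www.domain.com'
max_num_of_page = 2

start_urls = ['%s/page/%d' % (link, page)
              for page in range(1, max_num_of_page + 1)]
print(start_urls)
```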
yihyang / colored_terminal.MD
Last active July 3, 2017 06:07
Python Colored Terminal Output

Terminal Color

Gets Python to print output to the terminal in different colors.

Execute

  • Place both files side by side and execute python test_print.py to see the result
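The gist's files are not shown in this preview, but colored terminal output in Python is typically done with ANSI escape codes. A minimal sketch, assuming that approach (the color names and `colorize` helper are illustrative, not from the gist):

```python
# ANSI escape sequences for common colors (standard codes, not from the gist)
RED = '\033[91m'
GREEN = '\033[92m'
RESET = '\033[0m'

def colorize(text, color):
    """Wrap text in an ANSI color code and reset formatting afterwards."""
    return color + text + RESET

print(colorize('error: something failed', RED))
print(colorize('ok: all good', GREEN))
```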

Environment

yihyang / Simple Bootstrap v4 alpha Boilerplate
Created July 19, 2017 14:56
Simple Bootstrap v4 Alpha boilerplate for your instantaneous plug-and-play needs!
<!doctype html>
<html class="no-js" lang="">
<head>
  <meta charset="utf-8">
  <meta http-equiv="x-ua-compatible" content="ie=edge">
  <title>Boilerplate</title>
  <meta name="description" content="">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- Place favicon.ico and apple-touch-icon.png in the root directory -->
yihyang / truffle.js
Created July 19, 2017 16:16
Truffle configuration
var DefaultBuilder = require("truffle-default-builder");

module.exports = {
  build: new DefaultBuilder({
    "index.html": "index.html",
    "app.js": [],
    "app.css": [],
    "images/": ""
  }),
  networks: {
    development: {
      host: "localhost", // assumed: standard Truffle development settings,
      port: 8545,        // since the original gist preview is truncated here
      network_id: "*"    // match any network
    }
  }
};