
@freelze
freelze / plurk_crawler.py
Last active September 4, 2018 11:56
Slow Plurk media (jpg, png, gif) crawler
#!/usr/bin/python
# -*- coding:utf-8 -*-
# API: https://github.com/clsung/plurk-oauth
# You can retrieve your app keys via the test tool at http://www.plurk.com/PlurkApp/
CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_TOKEN = ''
ACCESS_TOKEN_SECRET = ''
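The preview stops at the API credentials. Assuming the crawler walks plurk posts and downloads direct image links, the media-URL extraction step can be sketched as below; `MEDIA_RE` and `extract_media_urls` are hypothetical helpers for illustration, not code from the gist:

```python
import re

# Matches direct links to the media types the gist targets (jpg, png, gif).
MEDIA_RE = re.compile(r'https?://\S+\.(?:jpg|jpeg|png|gif)', re.IGNORECASE)

def extract_media_urls(content: str) -> list:
    """Return every jpg/png/gif URL found in a plurk's raw content."""
    return MEDIA_RE.findall(content)

sample = 'look <a href="https://images.plurk.com/abc.jpg">pic</a> and https://images.plurk.com/def.gif'
print(extract_media_urls(sample))
```

The actual download loop would then fetch each URL (e.g. with `requests.get`) and write the bytes to disk, throttled to stay polite, which may be why the gist calls itself "slow".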
// If sheetID is a global variable, create() ends up being called three times.
var sheetID = create() // line 84
/* What works so far: create() makes a new Google Spreadsheet and returns its sheetID.
 * Three of my functions (scrape traffic, reset the sent-message count, delete a specific trigger) use sheetID,
 * but after running the main Start() function,
 * several sheets get created.
 * Is that because sheetID is a global variable, so create() was called three times(?)
 * Is there a way to store sheetID and make it available to the other functions?
 */
@freelze
freelze / YzudormDataflowMonitor.gs
Last active June 20, 2018 06:53
Yuan Ze University dorm network traffic reminder. (Sends a LINE notification when your traffic exceeds limitedDataflow.)
// reference:https://stackoverflow.com/questions/21621019/google-apps-script-login-to-website-with-http-request, https://gist.github.com/erajanraja24/02279e405e28311f220f557156363d7b
// Need to insert a library(Resources -> Library):M1lugvAXKKtUxn_vdAG9JZleS6DrsjUUV
var student_id = 'your account';
var password = 'your password';
var limitedDataflow = 0; // traffic level at which the LINE notification is sent
var LineNotifyToken = "your LINE Notify token";
function lineNotify(token, msg){
url = "https://notify-api.line.me/api/notify"
headers = {
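The preview cuts off inside `lineNotify`, but the LINE Notify API itself is a single POST to `https://notify-api.line.me/api/notify` with a Bearer token header and a `message` form field. A Python sketch that builds (but does not send) that request, using only the standard library:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_line_notify_request(token: str, msg: str) -> Request:
    """Prepare the LINE Notify POST; the caller decides when to send it."""
    return Request(
        "https://notify-api.line.me/api/notify",
        data=urlencode({"message": msg}).encode(),
        headers={"Authorization": "Bearer " + token},
        method="POST",
    )

req = build_line_notify_request("YOUR_TOKEN", "data flow over limit")
print(req.get_method(), req.full_url)
```

Sending it is one `urllib.request.urlopen(req)` call; the Apps Script version does the same thing through `UrlFetchApp.fetch`.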
@freelze
freelze / MangaReminder.gs
Last active June 19, 2018 14:36
Scrape the mangakakalot website and send you a Telegram message when manga update.
// reference:https://github.com/ocordova/gas-telegram-bot , https://github.com/we684123/Telegram_bot_example , http://fu7771.blogspot.com/2017/08/google-script-telegram-bot.html
// https://api.telegram.org/botKEY/setWebhook?url=https://...
var id = ""
var key = ""
var sheetID = ""
function doPost(e) {
var update = JSON.parse(e.postData.contents);
// Make sure this update is a message
if (update.hasOwnProperty('message')) {
var msg = update.message;
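The `doPost` preview shows the shape of the webhook handler: parse the update JSON, check that it is a message update, then act on the chat ID and text. The same parsing step in Python, with a made-up update payload (field names follow the Telegram Bot API's Update object):

```python
import json

def parse_update(body: str):
    """Return (chat_id, text) for message updates, or None otherwise."""
    update = json.loads(body)
    if "message" not in update:  # e.g. edited_message or callback_query updates
        return None
    msg = update["message"]
    return msg["chat"]["id"], msg.get("text", "")

body = json.dumps({"update_id": 1, "message": {"chat": {"id": 42}, "text": "One Piece updated?"}})
print(parse_update(body))
```

Guarding on `"message"` matters because Telegram delivers several update types through the same webhook, and a handler that assumes `update.message` exists will crash on the others.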
@freelze
freelze / YZU_Dorm_Dataflow_Reminder_scheduler.py
Last active June 19, 2018 15:38
Periodically scrape the dorm network traffic; send a LINE Notify alert when it exceeds a set limit.
# You need to change 4 values: studentID, password, LINE_TOKEN, DataFlow
# $pip install schedule
import requests
from bs4 import BeautifulSoup
from lxml import html
import re
import schedule
import time
def job():
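The preview ends at `job()`; its core is: fetch the dorm page, pull the current traffic number out of the HTML, and compare it against the limit. That comparison step, sketched against a hypothetical page fragment (the real page's markup and units may differ):

```python
import re

LIMITED_DATAFLOW = 3000.0  # stand-in for the gist's threshold, in MB

def over_limit(page_html: str, limit: float = LIMITED_DATAFLOW):
    """Extract the first number tagged as data flow and compare it to the limit."""
    m = re.search(r"([\d.]+)\s*MB", page_html)
    if m is None:
        return None  # page layout changed; caller should not alert
    return float(m.group(1)) > limit

print(over_limit("<td>Usage: 3517.2 MB</td>"))
```

Returning `None` on a failed match (rather than `False`) lets the scheduler distinguish "under the limit" from "the scrape broke", which is worth a separate alert.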
@freelze
freelze / YZU_dorm_dataFlow_Reminder(Selenium).py
Last active June 19, 2018 09:08
Periodically scrape the dorm network traffic; send a LINE Notify alert when it exceeds a set limit (using Selenium).
# You need to change 5 values: Chrome location, studentID, password, LINE_TOKEN, DataFlow
# Put chromedriver.exe in the same directory as this Python program.
# Download chromedriver.exe: http://chromedriver.chromium.org/downloads
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
import schedule # Python job scheduling for humans.
import time
@freelze
freelze / YZU_service_activity_scrape.py
Created June 16, 2018 12:45
Scrape the YZU activities which have service hours and output the information to LINE NOTIFY when the website updates.
# Need to create a YZUActivity.txt
from selenium import webdriver
import time
import requests
from bs4 import BeautifulSoup
URL='https://portalx.yzu.edu.tw/PortalSocialVB/FMain/PageActivityAll.aspx'
#https://portalx.yzu.edu.tw/PortalSocialVB/FPage/PageActivityDetail.aspx?Menu=Act&ActID=XXXX
driver = webdriver.Chrome(r"C:\Software\chromedriver_win32\chromedriver.exe")  # adjust to your chromedriver's location
driver.get(URL)
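The gist keeps previously seen activities in YZUActivity.txt and alerts only on new ones, so the update check is essentially a set difference against that file. A sketch of it, assuming a one-activity-ID-per-line file format (the helper name and format are illustrative, not taken from the gist):

```python
import os
import tempfile

def new_activities(current_ids, seen_path="YZUActivity.txt"):
    """Return activity IDs not yet recorded, and append them to the file."""
    try:
        with open(seen_path, encoding="utf-8") as f:
            seen = {line.strip() for line in f if line.strip()}
    except FileNotFoundError:
        seen = set()  # first run: nothing seen yet
    fresh = [i for i in current_ids if i not in seen]
    with open(seen_path, "a", encoding="utf-8") as f:
        for i in fresh:
            f.write(i + "\n")
    return fresh

# Demo against a temporary file: first run reports both, second run only the new one.
path = os.path.join(tempfile.mkdtemp(), "YZUActivity.txt")
print(new_activities(["ActID=1", "ActID=2"], path))
print(new_activities(["ActID=2", "ActID=3"], path))
```

Only the IDs in `fresh` would then be formatted into the LINE Notify message, so an unchanged page sends nothing.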
@freelze
freelze / chrome.css
Last active November 7, 2019 07:43
Firefox chrome.css for auto-hiding tabs, sidebar, navigation bar, and bookmark bar, using shadowfox. GIF: https://imgur.com/9ym87uj
/* GIF: https://imgur.com/9ym87uj
 * Lines 4~147 and 332~end are generated by shadowfox.
 * Lines 150~331 auto-hide the sidebar, tabs, navigation bar, and bookmark bar.
 */
:root {
--magenta-50: #ff1ad9;
--magenta-60: #ed00b5;
--magenta-70: #b5007f;
--magenta-80: #7d004f;
--magenta-90: #440027;
"""
scrape Yahoo Currency , output to Excel
請先新建一個Excel,命名為:Currency.xlsx
並增加22個sheets, 總共23個sheets , 名稱分別改為'美元', '澳幣', '加拿大幣',
'港幣', '英鎊', '瑞士法郎', '日圓', '歐元', '紐西蘭幣', '新加坡幣',
'南非幣', '瑞典克朗', '泰銖', '人民幣', '印度幣', '丹麥幣', '土耳其里拉',
'墨西哥披索', '越南幣', '菲律賓披索', '馬來西亞幣', '韓圜', '印尼盾'
40行請改成自己Excel的路徑 """
import requests
from bs4 import BeautifulSoup
@freelze
freelze / Yahoo_WeatherAPI_crawler.py
Last active June 13, 2018 06:47
Export to Excel
# First create an Excel workbook named Temperature.xlsx, and add sheets named "桃園","內壢","中壢","高雄","台東" to match line 38
# Change line 36 to your own Excel file path
import requests
import urllib.parse
import time
import openpyxl
import os
def weather_yahooAPI(Fcity,workbook):
res=requests.get("https://query.yahooapis.com/v1/public/yql?q=SELECT%20woeid%20FROM%20geo.places%20WHERE%20text%20IN(%22"+urllib.parse.quote(Fcity)+"%22)%20AND%20country%20%3D%20%22Taiwan%22&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys")
data=res.json()
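The long hand-assembled URL above is easier to audit if the YQL query is written as plain text first and percent-encoded in one step. A sketch of the same construction (note: Yahoo has since retired the public YQL endpoint, so this only illustrates the encoding; `build_woeid_url` is a hypothetical helper):

```python
from urllib.parse import quote

def build_woeid_url(city: str) -> str:
    """Percent-encode the YQL query instead of hand-writing %20/%22 sequences."""
    yql = 'SELECT woeid FROM geo.places WHERE text IN("{}") AND country = "Taiwan"'.format(city)
    return ("https://query.yahooapis.com/v1/public/yql?q=" + quote(yql)
            + "&format=json&env=" + quote("store://datatables.org/alltableswithkeys", safe=""))

url = build_woeid_url("桃園")
print(url.startswith("https://query.yahooapis.com/v1/public/yql?q=SELECT%20woeid"))
```

`quote` also handles the UTF-8 encoding of the Chinese city names, which is what `urllib.parse.quote(Fcity)` does inline in the original line.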