/* Run this in the browser dev-tools console to make all text on the rendered page editable (styles etc. are retained) */
javascript:document.body.contentEditable='true'; document.designMode='on'; void 0
all: vtwebd

request.o: request.c
	gcc -g -O2 -c request.c

main.o: main.c
	gcc -pthread -g -O2 -c main.c

vtwebd: main.o request.o
	gcc -pthread -g -o vtwebd main.o request.o
#!/usr/bin/env bash
# Connect to the server:
# ssh username@server_ip

# Local install prefix (created up front so configure can target it)
mkdir -p ~/.local

# Download the libevent source and build it
cd /tmp
wget https://github.com/libevent/libevent/releases/download/release-2.1.8-stable/libevent-2.1.8-stable.tar.gz
tar xvfz libevent-2.1.8-stable.tar.gz
# Standard autotools flow into the ~/.local prefix created above
# (completing the truncated recipe)
cd libevent-2.1.8-stable
./configure --prefix="$HOME/.local"
make
make install
As a freelancer, I build a lot of websites. That's a lot of code changes to track. Thankfully, a Git-enabled workflow with proper branching makes short work of project tracking. I can easily see development features in branches as well as a snapshot of each site's production code. A nice addition to that workflow is the ability to use Git to push updates to any of the various sites I work on as I commit changes.
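Push-to-deploy is commonly built from a bare repository plus a `post-receive` hook that checks the pushed branch out into the web root. Here is a minimal local sketch of that pattern; the paths, the branch name `main`, and the hook contents are illustrative assumptions, not necessarily the author's actual setup:

```shell
#!/usr/bin/env bash
set -e
tmp=$(mktemp -d)

# "Server" side: a bare repository and a hook that checks the pushed
# branch out into the web root. Local paths stand in for a real server.
git init -q --bare "$tmp/site.git"
mkdir "$tmp/www"
cat > "$tmp/site.git/hooks/post-receive" <<HOOK
#!/bin/sh
git --work-tree="$tmp/www" --git-dir="$tmp/site.git" checkout -f main
HOOK
chmod +x "$tmp/site.git/hooks/post-receive"

# "Laptop" side: commit a change and push it straight to the live site.
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email "[email protected]"
git config user.name "Dev"
echo "<h1>hello</h1>" > index.html
git add index.html
git commit -q -m "deploy: first version"
git push -q "$tmp/site.git" HEAD:main

# The hook has now populated the web root:
cat "$tmp/www/index.html"
```

On a real server the push target would be an SSH remote (`git remote add production user@server:site.git`) rather than a local path, but the hook works the same way.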
There are two ways to run the download: the first is a single command that runs in the foreground in front of you; the second runs in the background in a separate instance, so you can log out of your SSH session and it will keep going.

First, make a folder to download the websites into and begin your download. (Note: if you download www.SOME_WEBSITE.com, you will end up with a folder like /websitedl/www.SOME_WEBSITE.com/.)
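A sketch of the two variants, assuming GNU wget; the mirroring flags shown are one common choice, not mandated by the text above:

```shell
mkdir -p ~/websitedl
cd ~/websitedl

# Way 1: run in the foreground; stops if your SSH session drops.
wget --mirror --convert-links --adjust-extension --page-requisites \
     --no-parent "https://www.SOME_WEBSITE.com"

# Way 2: run in the background with nohup so it survives logout;
# progress is written to nohup.out.
nohup wget --mirror --convert-links --adjust-extension --page-requisites \
     --no-parent "https://www.SOME_WEBSITE.com" &
```

`--mirror` implies recursion with timestamping, and `--convert-links` rewrites links for local browsing, which is what produces the `~/websitedl/www.SOME_WEBSITE.com/` folder layout described above.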
#include <stdio.h>
int main()
{
    int count, n, t1 = 0, t2 = 1, display = 0;
    printf("Enter number of terms: ");
    scanf("%d", &n);
    printf("Fibonacci Series: %d %d ", t1, t2); /* Displaying first two terms */
    count = 2; /* count=2 because first two terms are already displayed. */
    while (count < n)
    {
        display = t1 + t2;
        t1 = t2;
        t2 = display;
        ++count;
        printf("%d ", display);
    }
    return 0;
}
#!/usr/bin/python
# Title: Reddit Data Mining Script
# Authors: Clay McLeod
# Description: This script mines JSON data
#              from the Reddit front page and stores it
#              as a CSV file for analysis.
# Section: Python
# Subsection: Data Science

# Note: a comma was missing between "subreddit_id" and "id", which would
# have silently concatenated them into one string.
want=["domain", "subreddit", "subreddit_id", "id", "author", "score", "over_18", "downs", "created_utc", "ups", "num_comments"]
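The `want` list selects keys from each post's `data` object in the listing JSON that Reddit's front page returns. A minimal Python 3 sketch of the extraction and CSV step; the helper names and the sample listing are illustrative, not from the original script:

```python
import csv
import io

WANT = ["domain", "subreddit", "subreddit_id", "id", "author", "score",
        "over_18", "downs", "created_utc", "ups", "num_comments"]

def extract_rows(listing, want=WANT):
    """Pull the wanted fields out of a Reddit-style listing dict."""
    return [[post["data"].get(field) for field in want]
            for post in listing["data"]["children"]]

def to_csv(rows, want=WANT):
    """Serialize a header row plus the data rows as CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(want)
    writer.writerows(rows)
    return buf.getvalue()

# Fabricated single-post listing in the shape Reddit's JSON API uses:
sample = {"data": {"children": [
    {"data": {"domain": "example.com", "subreddit": "python",
              "subreddit_id": "t5_2qh0y", "id": "abc123",
              "author": "someone", "score": 10, "over_18": False,
              "downs": 0, "created_utc": 0, "ups": 10,
              "num_comments": 3}}]}}

rows = extract_rows(sample)
print(to_csv(rows))
```

In the real script the `sample` dict would come from fetching `https://www.reddit.com/.json` and parsing the response body.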
--
-- open currently active Chrome tab with Safari
-- forked from https://gist.github.com/3151932 and https://gist.github.com/3153606
--
property theURL : ""
tell application "Google Chrome"
	set theURL to URL of active tab of window 0
end tell
if appIsRunning("Safari") then
	tell application "Safari"
		activate
		open location theURL
	end tell
end if

on appIsRunning(appName)
	tell application "System Events" to (name of processes) contains appName
end appIsRunning