Steps to keep a forked GitHub repo up to date.
After forking, clone the repo to your local system.
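One common sequence of commands for this, assuming your fork's default branch is `master` and using placeholder URLs:

```bash
# Clone your fork to your local system (placeholder URLs).
git clone https://github.com/<your-username>/<repo>.git
cd <repo>

# Add the original repository as a remote named "upstream".
git remote add upstream https://github.com/<original-owner>/<repo>.git

# Fetch the latest changes from upstream and merge them into your local branch.
git fetch upstream
git checkout master
git merge upstream/master

# Push the updated branch back to your fork on GitHub.
git push origin master
```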
Have you ever wanted to get specific data from another website, but there was no API available for it? That's where web scraping comes in: if the data isn't exposed through an API, we can extract it from the website itself.
But before we dive in, let's first define what web scraping is. According to Wikipedia:
{% blockquote %} Web scraping (web harvesting or web data extraction) is a computer software technique of extracting information from websites. Usually, such software programs simulate human exploration of the World Wide Web by either implementing low-level Hypertext Transfer Protocol (HTTP), or embedding a fully-fledged web browser, such as Internet Explorer or Mozilla Firefox. {% endblockquote %}
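As a minimal illustration of the idea (the URL and pattern below are placeholders, not part of the original notes), even a couple of shell commands can act as a crude scraper:

```bash
# Crude scraping sketch: download a page and pull out its <title> tag.
# Real scrapers usually feed the HTML to a proper parser instead of grep.
curl -s https://example.com/ | grep -o '<title>[^<]*</title>'
```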
```bash
# Generate a one-time login link.
drush uli <some-username>

# (Re)set the password for the user account with the specified name.
drush user-password <someuser> --password="new password"
```
```bash
# First, install Apache.
sudo apt-get install apache2

# The steps below assume your website will be hosted under /var/www.

# Create a directory that will act as the document root for your site.
sudo mkdir -p /var/www/example.dev/public_html

# Create a directory that will be used for log storage.
sudo mkdir -p /var/www/example.dev/log
```
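As a possible next step (an addition, not part of the original notes), a minimal virtual host pointing at those directories might look like the sketch below; the config file name and paths are assumptions based on the example above:

```bash
# Hypothetical minimal virtual host for example.dev, reusing the directories created above.
sudo tee /etc/apache2/sites-available/example.dev.conf > /dev/null <<'EOF'
<VirtualHost *:80>
    ServerName example.dev
    DocumentRoot /var/www/example.dev/public_html
    ErrorLog /var/www/example.dev/log/error.log
    CustomLog /var/www/example.dev/log/access.log combined
</VirtualHost>
EOF

# Enable the site and reload Apache so the new configuration takes effect.
sudo a2ensite example.dev.conf
sudo systemctl reload apache2
```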
```apache
<IfModule headers_module>
    Header set X-Content-Type-Options nosniff
</IfModule>
```
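Note (an addition to the original snippet): the `Header` directive only takes effect when `mod_headers` is enabled, which on Debian/Ubuntu can be done with:

```bash
# Enable mod_headers so the Header directive above is applied, then reload Apache.
sudo a2enmod headers
sudo systemctl reload apache2
```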
```js
// Use Gists to store code you would like to remember later on.
console.log(window); // log the "window" object to the console
```