Imran Ali grim-reapper

@grim-reapper
grim-reapper / demo.php
Last active August 29, 2015 14:06 — forked from tracend/demo.php
<?php
/* vars for export */
// database record to be exported
$db_record = 'XXXXXXXXX';
// optional where query
$where = 'WHERE 1 ORDER BY 1';
// filename for export
$csv_filename = 'db_export_'.$db_record.'_'.date('Y-m-d').'.csv';
// database variables
$host = 'localhost'; // MySQL database host address
$db = ''; // MySQL database name
$user = ''; // MySQL database user
$pass = ''; // MySQL database password
// Connect to the database
$link = mysql_connect($host, $user, $pass);
mysql_select_db($db);
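The preview stops right after the connection is opened. A minimal sketch of how such an export typically continues, kept in the same legacy mysql_* style as the snippet above; the query, headers, and fputcsv loop are assumptions for illustration, not the rest of the original gist.
// run the export query (assumed continuation, not part of the gist preview)
$result = mysql_query('SELECT * FROM '.$db_record.' '.$where, $link);
// send the result to the browser as a CSV download
header('Content-Type: text/csv');
header('Content-Disposition: attachment; filename="'.$csv_filename.'"');
$out = fopen('php://output', 'w');
$first = true;
while ($row = mysql_fetch_assoc($result)) {
    if ($first) {
        fputcsv($out, array_keys($row)); // column names as the header line
        $first = false;
    }
    fputcsv($out, $row);
}
fclose($out);
mysql_close($link);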
/* Pre-Define HTML5 Elements in IE */
(function(){ var els = "source|address|article|aside|audio|canvas|command|datalist|details|dialog|figure|figcaption|footer|header|hgroup|keygen|mark|meter|menu|nav|picture|progress|ruby|section|time|video".split('|'); for(var i = 0; i < els.length; i++) { document.createElement(els[i]); } } )();
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="et">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>table</title>
<style type="text/css">
.odd { background-color: #808080; }
.generated_for_mobile { margin-bottom: 30px }
//EnhanceJS isIE test idea
//detect IE and version number through injected conditional comments (no UA detect, no need for cond. compilation / jscript check)
//version arg is for IE version (optional)
//comparison arg supports 'lte', 'gte', etc (optional)
function isIE(version, comparison) {
var cc = 'IE',
b = document.createElement('B'),
@grim-reapper
grim-reapper / php-webscraping.md
Created November 24, 2015 07:30 — forked from anchetaWern/php-webscraping.md
Web scraping in PHP

Have you ever wanted to get specific data from another website, but there's no API available for it? That's where web scraping comes in: if the data is not exposed by the website, we can just scrape it from the website itself.

But before we dive in, let's first define what web scraping is. According to Wikipedia:

{% blockquote %} Web scraping (web harvesting or web data extraction) is a computer software technique of extracting information from websites. Usually, such software programs simulate human exploration of the World Wide Web by either implementing low-level Hypertext Transfer Protocol (HTTP), or embedding a fully-fledged web browser, such as Internet Explorer or Mozilla Firefox. {% endblockquote %}
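As a concrete illustration of the idea (a sketch, not code from the gist itself), a page can be fetched and queried with PHP's built-in DOM extension; the example URL and the h1 XPath query below are placeholders.
<?php
// fetch the raw HTML of the page we want to scrape (placeholder URL)
$html = file_get_contents('https://example.com/');
// parse it into a DOM tree, ignoring warnings from imperfect real-world markup
$doc = new DOMDocument();
libxml_use_internal_errors(true);
$doc->loadHTML($html);
libxml_clear_errors();
// pull out the parts we care about, e.g. every <h1> heading
$xpath = new DOMXPath($doc);
foreach ($xpath->query('//h1') as $node) {
    echo trim($node->textContent), PHP_EOL;
}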

@grim-reapper
grim-reapper / .htaccess
Created December 3, 2015 12:25 — forked from johnmorris/.htaccess
Create a custom 404 (page not found) error page
<Files .htaccess>
order allow,deny
deny from all
</Files>
ErrorDocument 403 /errors/error.php
ErrorDocument 404 /errors/error.php
ErrorDocument 405 /errors/error.php
ErrorDocument 408 /errors/error.php
ErrorDocument 500 /errors/error.php
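All of these ErrorDocument directives point to a single /errors/error.php. The gist preview does not show that handler, so the sketch below is an assumption of what it might contain, reading the original status code Apache passes along in REDIRECT_STATUS.
<?php
// Apache exposes the original error code to the ErrorDocument script
$status = isset($_SERVER['REDIRECT_STATUS']) ? (int) $_SERVER['REDIRECT_STATUS'] : 404;
$messages = array(
    403 => 'Forbidden',
    404 => 'Page Not Found',
    405 => 'Method Not Allowed',
    408 => 'Request Timeout',
    500 => 'Internal Server Error',
);
$message = isset($messages[$status]) ? $messages[$status] : 'Error';
http_response_code($status); // keep the real status code on the response
echo '<h1>'.$status.' '.$message.'</h1>';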
@grim-reapper
grim-reapper / ci-bootstrap-pagination-config.php
Created December 13, 2015 18:36 — forked from edomaru/ci-bootstrap-pagination-config.php
CodeIgniter Pagination config to apply Bootstrap style
<?php
$config["full_tag_open"] = '<ul class="pagination">';
$config["full_tag_close"] = '</ul>';
$config["first_link"] = "&laquo;";
$config["first_tag_open"] = "<li>";
$config["first_tag_close"] = "</li>";
$config["last_link"] = "&raquo;";
$config["last_tag_open"] = "<li>";
@grim-reapper
grim-reapper / web.config
Created February 22, 2016 14:02 — forked from jonahvsweb/web.config
How to Setup WordPress Permalinks on Windows IIS
Place this file into the base directory of your WordPress installation to allow permalinks (or "pretty URLs") on Windows IIS.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.webServer>
<rewrite>
<rules>
@grim-reapper
grim-reapper / crawluniq.js
Created March 4, 2016 06:32 — forked from martincharlesworth/crawluniq.js
PhantomJS crawler written to detect Mixed Content
var uniqUrls = [];     // every distinct resource URL seen so far
var urlsToBrowse = []; // pages still queued to be opened
var browsedUrls = [];  // pages that have already been crawled
// open a page in PhantomJS and track each resource it requests
function open(url, callback) {
    var page = require('webpage').create();
    page.settings.loadImages = true;
    page.onResourceReceived = function (response) {
        if (response.stage == "start" && response.url.substr(0, 4) === "http" && uniqUrls.indexOf(response.url) === -1) {