{% extends "admin/base_site.html" %}
{% load i18n admin_static admin_modify %}
{% load admin_urls %}
{% load url from future %}
{% block bodyclass %}{{ opts.app_label }}-{{ opts.object_name.lower }} change-form{% endblock %}
{% if not is_popup %}
{% block breadcrumbs %}
<ul>
<li><a href="{% url 'admin:index' %}">{% trans 'Home' %}</a></li>
ALSO SEE THIS: http://eddychan.com/post/18484749431/minimum-viable-ops-deploying-your-first-django-app-to
AND THIS: http://bitnami.com/stack/django

These are my notes on how to quickly set up Python, virtualenv (use virtualenv-burrito FTW), and Django.

Set up an EC2 instance:
=======================
Use the quick launch wizard:
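Once the instance is up, the first-boot steps sketched below get you from a bare box to a working virtualenv with Django. This is a sketch assuming an Ubuntu AMI; the package names and the virtualenv-burrito install URL are assumptions to verify against the burrito README before use.

```shell
# Assumed Ubuntu base image; package names may differ on other distros.
sudo apt-get update
sudo apt-get install -y python-dev python-pip git curl

# virtualenv-burrito sets up virtualenv + virtualenvwrapper in one shot
# (install URL is an assumption -- check the project's README).
curl -sL https://raw.github.com/brainsik/virtualenv-burrito/master/virtualenv-burrito.sh | $SHELL

# Create an environment and install Django into it.
mkvirtualenv djangoenv
pip install django
```

After this, `workon djangoenv` re-activates the environment on later logins.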
def get_all_data(datasource):
    # Scan the LaTeX source for the next \textbf command.
    start = datasource.find('\\textb')
    if start == -1:
        return None, 0
    st_data = datasource.find('f', start)
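The truncated helper above scans a LaTeX source for `\textbf`. A complete sketch of the same idea follows; the function name and the brace handling are my own, and nested braces inside the argument are deliberately not handled.

```python
def get_bold_text(datasource):
    """Return (content, end_index) of the first \\textbf{...} in the
    source, or (None, 0) if there is none. Nested braces not supported."""
    start = datasource.find('\\textbf{')
    if start == -1:
        return None, 0
    open_brace = start + len('\\textbf{')
    close_brace = datasource.find('}', open_brace)
    if close_brace == -1:
        return None, 0
    return datasource[open_brace:close_brace], close_brace + 1

content, end = get_bold_text(r'Intro \textbf{bold words} rest')
print(content)  # bold words
```

Calling it in a loop with `datasource[end:]` walks every bold span in the document, mirroring the `return None, 0` sentinel the fragment already uses.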
# Based on https://github.com/sass/libsass/wiki/Building-with-autotools

# Install dependencies
apt-get install automake libtool

# Fetch sources
git clone https://github.com/sass/libsass.git
git clone https://github.com/sass/sassc.git libsass/sassc

# Create configure script
# This script downloads a page from the server and then searches it for the required keyword.
import urllib2

def get_all_occurences(page, keyword):
    length = len(keyword)
    start_link = page.find(keyword)
    if start_link == -1:
        return None, 0
    end_quote = start_link + length
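The fragment finds only the first occurrence before it is cut off. A complete Python 3 sketch of the "find all occurrences" idea, using the same `str.find` loop pattern (the function name is my own):

```python
def find_all_occurrences(page, keyword):
    """Return the start index of every occurrence of keyword in page."""
    positions = []
    start = page.find(keyword)
    while start != -1:
        positions.append(start)
        # Resume the search just past the match we found.
        start = page.find(keyword, start + len(keyword))
    return positions

print(find_all_occurrences('spam and spam and eggs', 'spam'))  # [0, 9]
```

Returning an empty list for "no match" avoids the `None, 0` sentinel and lets callers iterate the result directly.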
#!flask/bin/python
from flask import Flask, jsonify, abort, request, make_response, url_for
from flask_httpauth import HTTPBasicAuth

app = Flask(__name__, static_url_path="")
auth = HTTPBasicAuth()

@auth.get_password
def get_password(username):
    if username == 'miguel':
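The `get_password` callback is truncated. A dictionary-backed sketch of how such a callback is usually completed is below; the user table and the password value are hypothetical stand-ins, not taken from the original gist, and a real app would query a datastore and compare hashed passwords.

```python
# Hypothetical in-memory user table standing in for a real datastore.
# Plain-text passwords are for illustration only.
USERS = {'miguel': 'python'}

def get_password(username):
    """Return the user's password, or None to reject unknown users
    (the contract Flask-HTTPAuth expects from a get_password callback)."""
    return USERS.get(username)

print(get_password('miguel'))    # python
print(get_password('intruder'))  # None
```

Returning `None` makes the auth extension reject the request with a 401, which the fragment's `make_response` import suggests is then customized.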
# Write a web crawler
'''
A crawler is a program that starts with a url on the web (e.g. http://python.org), fetches the web page corresponding to that url, and parses all the links on that page into a repository of links. Next, it fetches the contents of any url from the repository just created, parses the links from this new content into the repository, and continues this process for all links in the repository until stopped or until a given number of links has been fetched.
'''
# urllib2 for downloading web pages
import urllib2

# get_next_target() takes a page and checks for the positions of the links it finds from '<a href='
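The comment describes `get_next_target()` but the body is missing. A sketch of the usual string-scanning implementation of that idea (assuming links are double-quoted and well-formed; a real crawler would use an HTML parser):

```python
def get_next_target(page):
    """Return (url, end_pos) for the first link on the page,
    or (None, 0) if no '<a href=' is found."""
    start_link = page.find('<a href=')
    if start_link == -1:
        return None, 0
    start_quote = page.find('"', start_link)
    end_quote = page.find('"', start_quote + 1)
    url = page[start_quote + 1:end_quote]
    return url, end_quote

url, end = get_next_target('<p><a href="http://python.org">Python</a></p>')
print(url)  # http://python.org
```

Calling it repeatedly on `page[end:]` collects every link, which is the repository-building step the docstring above describes.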
# Extract e-mail addresses from a page with a regular expression.
import re
import urllib2

# get_next_target() returns every e-mail address found on the page.
def get_next_target(page):
    match = re.findall(r'[\w.-]+@[\w.-]+', page)
    if match:
        return match
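The same regex, packaged as a small self-contained Python 3 helper that always returns a list (the function name is my own; the pattern is loose by design and will also match e-mail-like strings that are not valid addresses):

```python
import re

EMAIL_RE = re.compile(r'[\w.-]+@[\w.-]+')

def find_emails(page):
    """Return all e-mail-like substrings on the page (empty list if none)."""
    return EMAIL_RE.findall(page)

print(find_emails('Contact alice@example.com or bob@test.org'))
# ['alice@example.com', 'bob@test.org']
```

Precompiling the pattern and returning `[]` instead of `None` makes the helper safe to use directly in a `for` loop.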
# The following program works as a lexical analyser.
#
# Write a C/C++ program which reads a program written
# in any programming language (say C/C++/Java) and then performs
# lexical analysis. The output of the program should contain the
# tokens, i.e. a classification as identifier, special symbol, delimiter,
# operator, keyword, or string. It should also display the number of
# identifiers, special symbols, delimiters, operators, keywords, strings,
# and statements.
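The header asks for a C/C++ program, but since the rest of these notes are in Python, here is a minimal Python sketch of the classification step. The token tables are hypothetical and far from complete; a real analyser would cover the full keyword and operator sets of the target language.

```python
import re

# Hypothetical minimal token tables -- extend for a real language.
KEYWORDS = {'int', 'float', 'if', 'else', 'while', 'return'}
OPERATORS = set('+-*/=<>')
DELIMITERS = set(';,(){}')

# Strings, identifiers, numbers, then any other single non-space char.
TOKEN_RE = re.compile(r'"[^"]*"|[A-Za-z_]\w*|\d+|\S')

def tokenize(source):
    """Classify each token as keyword/identifier/operator/delimiter/string/number."""
    tokens = []
    for tok in TOKEN_RE.findall(source):
        if tok.startswith('"'):
            kind = 'string'
        elif tok in KEYWORDS:
            kind = 'keyword'
        elif tok in OPERATORS:
            kind = 'operator'
        elif tok in DELIMITERS:
            kind = 'delimiter'
        elif tok[0].isalpha() or tok[0] == '_':
            kind = 'identifier'
        else:
            kind = 'number'
        tokens.append((tok, kind))
    return tokens

for tok, kind in tokenize('int x = 42;'):
    print(tok, kind)
```

Counting each category afterwards (e.g. with `collections.Counter(kind for _, kind in tokens)`) gives the per-class totals the exercise asks for.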
# Convert dates from dd/mm/yyyy to yyyy/mm/dd format.
f = open('testdate.txt')
i = f.read()
print i
date = i.split("\n")
print date
convDate = []
for adate in date:
    if '/' in adate:
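The loop body is cut off. A Python 3 sketch of the conversion it sets up, with the file read replaced by an inline list so the example is self-contained (`testdate.txt` and the exact loop variable names are taken from the fragment; `convert_date` is my own helper):

```python
def convert_date(adate):
    """Convert a single dd/mm/yyyy string to yyyy/mm/dd."""
    day, month, year = adate.split('/')
    return '/'.join([year, month, day])

# Stand-in for the lines read from testdate.txt.
lines = ['25/12/2013', '01/04/2014', '']
convDate = [convert_date(adate) for adate in lines if '/' in adate]
print(convDate)  # ['2013/12/25', '2014/04/01']
```

The `'/' in adate` guard from the original skips blank lines left by the trailing newline in the file.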