A personal diary of DataFrame munging over the years.
Convert Series datatype to numeric (will error if column has non-numeric values)
(h/t @makmanalp)
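For example, a minimal pandas sketch (the DataFrame and the 'col' column name here are hypothetical):

import pandas as pd

df = pd.DataFrame({'col': ['1', '2', '3']})
# pd.to_numeric raises a ValueError if the column holds non-numeric values
# (pass errors='coerce' to get NaN instead)
df['col'] = pd.to_numeric(df['col'])
print(df['col'].dtype)  # int64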
/*
 * A rather simple way of testing out SwampDragon.
 * Create a load of connections, connect, subscribe, update, and count the number
 * of published messages.
 *
 * SETUP:
 * 1. npm install sockjs-client-node
 * 2. curl https://raw.githubusercontent.com/jonashagstedt/swampdragon/master/swampdragon/static/swampdragon/js/swampdragon.js > swampdragon.js
 * 3. Set router_name to the name of your router.
 * 4. Change "sd.update_object(router_name, { value: val }, 'foo');" to call
#!/usr/bin/env python
"""
Example of how you can test if strings are anagrams
"""


def is_anagram(str1, str2):
    """
    Check if str1 is an anagram of str2
    """
    # Two strings are anagrams when their sorted characters are identical
    return sorted(str1) == sorted(str2)
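A quick sanity check of the helper above (the sample strings are arbitrary):

print(is_anagram('listen', 'silent'))   # True
print(is_anagram('listen', 'listens'))  # False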
# post_loc.txt contains the json you want to post
# -p means to POST it
# -H adds an Auth header (could be Basic or Token)
# -T sets the Content-Type
# -c is concurrent clients
# -n is the number of requests to run in the test
ab -p post_loc.txt -T application/json -H 'Authorization: Token abcd1234' -c 10 -n 2000 http://example.com/api/v1/locations/
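Before load testing, the same request can be sent once from Python with the requests library to confirm the endpoint and token work (the URL and token reuse the placeholders above; the payload dict stands in for the contents of post_loc.txt):

import requests

payload = {'name': 'example location'}  # stand-in for post_loc.txt
response = requests.post(
    'http://example.com/api/v1/locations/',
    json=payload,  # sends the body as JSON with Content-Type: application/json
    headers={'Authorization': 'Token abcd1234'},
)
print(response.status_code, response.text)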
'''
Spider for IMDb
- Retrieve most popular movies & TV series with rating of 8.0 and above
- Crawl next pages recursively
'''
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
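The spider body is not shown above; below is a sketch of how such a CrawlSpider could be wired up with these imports. The start URL, the XPath expressions, and the MovieItem fields are placeholders, not taken from the original spider:

from scrapy.item import Item, Field


class MovieItem(Item):
    title = Field()
    rating = Field()


class ImdbSpider(CrawlSpider):
    name = 'imdb'
    allowed_domains = ['imdb.com']
    start_urls = ['http://www.imdb.com/search/title?sort=num_votes,desc']  # placeholder

    # Follow pagination links recursively and hand every result page to parse_page
    rules = (
        Rule(SgmlLinkExtractor(restrict_xpaths=('//a[contains(., "Next")]',)),
             callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        sel = Selector(response)
        # Placeholder XPaths: keep only titles rated 8.0 or above
        for row in sel.xpath('//td[@class="title"]'):
            rating = row.xpath('.//span[@class="value"]/text()').extract()
            title = row.xpath('.//a/text()').extract()
            if rating and title and float(rating[0]) >= 8.0:
                yield MovieItem(title=title[0], rating=rating[0])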
#!/usr/bin/python
import re

# I think I missed a couple of things in ISO 8601, but this should cover 98% of cases.
iso8601 = re.compile(r'^(?P<full>((?P<year>\d{4})([/-]?(?P<mon>(0[1-9])|(1[012]))([/-]?(?P<mday>(0[1-9])|([12]\d)|(3[01])))?)?(?:T(?P<hour>([01][0-9])|(?:2[0123]))(\:?(?P<min>[0-5][0-9])(\:?(?P<sec>[0-5][0-9]([\,\.]\d{1,10})?))?)?(?:Z|([\-+](?:([01][0-9])|(?:2[0123]))(\:?(?:[0-5][0-9]))?))?)?))$')

# to perform the actual date match
m = iso8601.match('2014-12-05T12:30:45.123456-05:30')
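Continuing the snippet, the named groups make it easy to pull individual fields out of the match:

if m:
    print(m.group('year'), m.group('mon'), m.group('mday'))  # 2014 12 05
    print(m.group('hour'), m.group('min'), m.group('sec'))   # 12 30 45.123456
    print(m.groupdict())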
from django.forms import ModelForm


class MyModelForm(ModelForm):
    class Meta:
        model = MyModel
        fields = ('field2', 'field3', 'field4', 'field5')

    def clean(self):
        """
        Clean MyModel
        """
        # Get the cleaned data
        cleaned_data = super(MyModelForm, self).clean()
        # Cross-field checks on cleaned_data would go here
        return cleaned_data
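Typical usage in a view, where clean() runs as part of form validation (request here is assumed to be the incoming Django request object):

form = MyModelForm(data=request.POST)
if form.is_valid():  # triggers clean() along with per-field validation
    instance = form.save()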
- name: ensure file exists at path
  shell: rsync -ci /source/path /destination/path
  register: rsync_result
  changed_when: "rsync_result.stdout != ''"
---
- hosts: all
  sudo: no
  tasks:
    - shell: echo "Client= [$SSH_CLIENT] Sock= [$SSH_AUTH_SOCK]"
      register: myecho
    - debug: msg="{{myecho.stdout}}"
    - shell: ssh-add -l
      register: myecho
    - debug: msg="{{myecho.stdout}}"
'use strict';

// # Globbing
// for performance reasons we're only matching one level down:
// 'test/spec/{,*/}*.js'
// use this if you want to recursively match all subfolders:
// 'test/spec/**/*.js'
// Note:
// Replace the string with information