I hereby claim:
- I am dalelane on github.
- I am dalelane (https://keybase.io/dalelane) on keybase.
- I have a public key whose fingerprint is 0950 6D8F 8321 BEC7 CD01 8CF5 4872 0994 FE4A ECD8
To claim this, I am signing this object:
##########################################################################
#
# xmldiff
#
# Simple utility script to enable a diff of two XML files in a way
# that ignores the order of attributes and elements.
#
# Dale Lane ([email protected])
# 6 Oct 2014
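The idea described in the header can be sketched in a few lines of Python using only the standard library. This is a hypothetical illustration of order-insensitive XML comparison, not the gist's actual implementation:

```python
import xml.etree.ElementTree as ET

def elements_equal(a, b):
    """Compare two elements, ignoring the order of attributes and of
    child elements. Attribute order is ignored automatically because
    attrib dicts compare by content, not insertion order."""
    if a.tag != b.tag or a.attrib != b.attrib:
        return False
    if (a.text or "").strip() != (b.text or "").strip():
        return False
    if len(a) != len(b):
        return False
    # Sort children by a canonical key so element order doesn't matter
    key = lambda e: (e.tag, sorted(e.attrib.items()), (e.text or "").strip())
    return all(elements_equal(x, y)
               for x, y in zip(sorted(a, key=key), sorted(b, key=key)))

def xml_equal(xml_a, xml_b):
    """True if two XML strings match, ignoring attribute/element order."""
    return elements_equal(ET.fromstring(xml_a), ET.fromstring(xml_b))
```

For example, `xml_equal('<r b="2" a="1"><x/><y/></r>', '<r a="1" b="2"><y/><x/></r>')` returns True even though both the attributes and the child elements appear in a different order.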
#
# Converting roman numerals into numbers
#
# A homework helper by Dale and Grace Lane
# 1-Nov-2014
#
# http://dalelane.co.uk/blog/?p=3244
#
# some simple helper functions to make
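The core of roman-numeral parsing can be sketched as follows. This is a minimal illustration of the standard subtract-when-smaller-precedes-larger rule, not necessarily the approach the gist itself takes:

```python
# Values of the individual roman numeral symbols
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50,
                "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral):
    """Convert a roman numeral string such as 'XIV' to an integer."""
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN_VALUES[ch]
        # If a smaller value precedes a larger one (e.g. the I in IV),
        # it is subtracted rather than added
        if i + 1 < len(numeral) and ROMAN_VALUES[numeral[i + 1]] > value:
            total -= value
        else:
            total += value
    return total
```

So `roman_to_int("XIV")` gives 14, and `roman_to_int("MCMXCIV")` gives 1994.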
{ entities:
   [ { type: 'PERSON',
       class: 'SPC',
       level: 'NAM',
       mentions:
        [ { mtype: 'NAM',
            role: 'PERSON',
            class: 'SPC',
            text: 'Dale Lane',
            location: { begin: 0, end: 8, 'head-begin': 0, 'head-end': 8 } },
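Given a response shaped like the snippet above, pulling out the names of the people mentioned is a simple walk over the nested structure. The field names here are taken from the snippet; the real service response may contain more fields:

```python
def extract_people(response):
    """Collect the distinct mention texts of all PERSON entities
    from a parsed response dict shaped like the snippet above."""
    names = set()
    for entity in response.get("entities", []):
        if entity.get("type") == "PERSON":
            for mention in entity.get("mentions", []):
                names.add(mention["text"])
    return sorted(names)
```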
/**
 * Downloads the contents of a news story, then uses the
 * Watson Relationship Extraction service to identify
 * the names of all the people mentioned in the story.
 *
 * @author Dale Lane
 */
var async = require('async');
var unfluff = require('unfluff');
# Get these from Bluemix and set as environment variables
#   $BLUEMIX_WATSON_MACHTRANS_USER
#   $BLUEMIX_WATSON_MACHTRANS_PASS
#   $BLUEMIX_WATSON_MACHTRANS_URL
export FORMAT="rt=text"   # or json or xml
export LANG_FROM="enus"
export LANG_TO="frfr"
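A request using these environment variables could be assembled along these lines. Note this is a sketch only: the `sid` and `txt` parameter names are assumptions (the function name `build_translation_request` is also hypothetical), so check the service's own documentation for the real request format:

```python
import os

def build_translation_request(text):
    """Build (url, auth, params) for a translation call from the
    environment variables described above. The 'sid'/'txt' parameter
    names are assumptions, not confirmed from the service docs."""
    url = os.environ["BLUEMIX_WATSON_MACHTRANS_URL"]
    auth = (os.environ["BLUEMIX_WATSON_MACHTRANS_USER"],
            os.environ["BLUEMIX_WATSON_MACHTRANS_PASS"])
    params = {
        "rt": "text",                                  # from $FORMAT
        "sid": "mt-%s-%s" % (os.environ["LANG_FROM"],
                             os.environ["LANG_TO"]),   # e.g. mt-enus-frfr
        "txt": text,
    }
    return url, auth, params
```

The resulting tuple could then be posted with an HTTP client such as `requests` (which appears elsewhere on this page), e.g. `requests.post(url, auth=auth, data=params)`.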
{
    "env" : {
        "node": true
    },
    "rules" : {
        "strict": [2, "global"],
        "quotes": [1, "single"],
        "key-spacing": [1, { "beforeColon" : true, "afterColon" : true }],
Inspired by https://medium.com/@samim/obama-rnn-machine-generated-political-speeches-c8abd18a2ea0 I tried training a recurrent neural network (RNN) for myself.
I exported every tweet I've ever posted as @dalelane, and used that as the training text. The idea was to see whether it would generate new tweets that looked like they could be things that I had written.
This is just a first quick attempt, with no effort to tweak or tune the training of the model, or to modify any other settings.
This is the kind of thing it's currently outputting:
Like https://gist.github.com/dalelane/f0a1b9ce75509875f91d but this time I tried tweaking some settings to "optimise" it.
Erm... it didn't go well.
@andypiper @andypiper @andypiper @hardillb @andypiper @andypiper @andypiper @andypiper @andypiper @andypiper @andypiper @andypiper @andypiper
@andypiper @andypiper @andypiper @andypiper @andypiper @andypiper @andypiper @andypiper @andypiper @andypiper @andypiper
import base64

import requests
import image_slicer

# Gets the contents of an image file to be sent to the
# machine learning model for classifying
def getImageFileData(locationOfImageFile):
    with open(locationOfImageFile, "rb") as f:
        data = f.read()
    # b64encode returns bytes; decode to get a plain string payload
    # (the Python 2 data.encode("base64") idiom no longer works in Python 3)
    return base64.b64encode(data).decode("ascii")