This is a sample pulled from https://github.com/washingtonpost/data-homicides, columns re-ordered with csvcut -c 10,11,1-9,12 homicide-data.csv > dw_homicides.csv
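If you don't have csvkit handy, here's a rough equivalent of that reordering in plain Python. The file names come from the command above; note that csvcut counts columns from 1 while Python counts from 0, so 10,11,1-9,12 becomes the index list below.

import csv

# Mirror `csvcut -c 10,11,1-9,12`: columns 10,11,1-9,12 (1-indexed)
# become indices 9,10,0-8,11 (0-indexed).
order = [9, 10] + list(range(0, 9)) + [11]

with open("homicide-data.csv", newline="") as src, \
     open("dw_homicides.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        writer.writerow([row[i] for i in order])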
UPDATE prisoners_in_florence
SET the_geom =
  ST_Transform(
    ST_Segmentize(
      ST_MakeLine(
        -- a line from each record's point to the Florence, CO facility
        ST_Transform(prisoners_in_florence.the_geom, 953027),
        ST_Transform(CDB_LatLng(38.3614873976, -105.1071292162), 953027)
      ),
      100000  -- maximum segment length, in the projection's units
    ),
    4326  -- the_geom in CARTO is stored as EPSG:4326, so transform the finished line back
  );
On DreamHost, running Nextcloud with sqlite3 storage and a Let's Encrypt certificate.
I'm running PHP 7.0, and edited .php/7.0/phprc
to add the following line:
extension=fileinfo.so
On my phone I have both CalDAV-Sync and DAVdroid (also available through F-Droid) running. I think I only need DAVdroid, but I had CalDAV-Sync first and it works, so ¯\_(ツ)_/¯
Step 1: Switch from "Maps" to "Datasets" and find the NEW DATASET button.
Step 2: The raw data that you want to import lives at https://gist.githubusercontent.com/amandabee/39a6db5da70f6c454a06d81d1759d0c2/raw/1a5475c0689e794a0c3c9620ebfc708c03ea2c1c/Subtotals_by_District.csv -- I found that by clicking the RAW button on the CSV (below)
Step 3: Rename your dataset to something civilized like "YTD Complaints" (actually, rename it exactly that: "YTD Complaints").
Step 4: You'll notice that there's a new, empty column called "The Geom" -- we need to fill that column. So find your way to https://amandabee.carto.com/viz/e2357457-bc5b-425a-9fec-a618743ce856/map and use the Create Map button to copy my shapefiles onto your account.
Step 5: When you're looking at the data view, on the lower right-hand side of the screen you should see a button labeled "merge datasets" -- select it. We're going to do a "Column Join" (there's a rough sketch of what that amounts to below). If you look at the original, it is a lot e
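For what it's worth, the "Column Join" is doing the same thing a join on a shared column does anywhere else: matching complaint subtotals to district shapes by district number. A rough pandas sketch of the idea -- the district column name and the shapes file name here are assumptions, not what CARTO uses internally:

import pandas as pd

# Hypothetical flat export of the district shapes' attribute table.
districts = pd.read_csv("district_shapes.csv")
complaints = pd.read_csv("Subtotals_by_District.csv")

# Join complaint subtotals onto the districts by the shared district column
# (the column name "district" is an assumption).
merged = districts.merge(complaints, on="district", how="left")
print(merged.head())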
Run with...
funtimes = FollowerGetter("openlab", cks)
funtimes.get_followers()
TO DO: This works now. :)
I'm working through some puzzles, towards a larger problem.
{
  "name": "Contractors",
  "children": [{
    "name": "First Parent",
    "children": [{
      "name": "Child",
      "size": 9330
    }, {
      "name": "Child",
      "size": 1000
    }]
  }]
}
One or the other of these is the scraper I used for the Philadelphia Court Debt project. I didn't realize until after I promised everyone in the room my code that I had never published it.
Not for nothing, I make no claim that this is great programming or that it even remotely reflects best practices in Python or scraping. There's plenty here that I would do differently if I did it again today.
I don't even know which of these was the final scraper. I could figure that out but then I'd never get around to throwing it online. So here it is, warts and all.
[general]
accounts = Velociraptor
pythonfile = ~/.offlineimap/pass.py

[Account Velociraptor]
localrepository = VelLocal
remoterepository = VelRemote
# In 7.0 sqlite is the only thing supported
# status_backend = sqlite
postsynchook = notmuch new
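That pythonfile line points offlineimap at a small helper script it can call from the repository sections. A minimal sketch of what ~/.offlineimap/pass.py might look like, assuming passwords live in the standard `pass` password store -- the function name and entry name are made up for illustration:

# ~/.offlineimap/pass.py -- loaded via the pythonfile setting above.
# Assumes credentials are stored in the `pass` password manager;
# the entry name "mail/velociraptor" is hypothetical.
import subprocess

def get_pass(entry="mail/velociraptor"):
    out = subprocess.check_output(["pass", entry])
    return out.decode("utf-8").splitlines()[0]

The [Repository VelRemote] section (not shown here) would then fetch the password with something like remotepasseval = get_pass().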
(Seeing images? To get anything out of this snippet you'll want to skip right down to <a href="http://bl.ocks.org/amandabee/7002f106beadd8db1485#Images.html">the code</a>.)
Imgur gives you this:
<a href="http://imgur.com/P6IDE9U"><img src="http://i.imgur.com/P6IDE9U.jpg" title="source: imgur.com" /></a>
1: take off the anchor <br />
<img src="http://i.imgur.com/P6IDE9U.jpg" title="source: imgur.com" />
2: give it a real title <br />
<img src="http://i.imgur.com/P6IDE9U.jpg" title="Clever Flag Graphic" />