{
    category   : "",  // Required. Single category from http://opencivicdata.com/#categories-wg
    properties : {},  // Required. Common bin for domain-specific properties from http://opencivicdata.com/#properties-wg
    start      : "",  // Optional. RFC 3339 DateTime start date
    end        : "",  // Optional. RFC 3339 DateTime end date
    geometry   : {}   // Optional. GeoJSON location feature (http://geojson.org/geojson-spec.html)
}
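For concreteness, a record following this model might look like the sketch below. The category value and property names are hypothetical examples, not entries from the actual working-group lists:

```javascript
// Hypothetical record in the common model above. The category value
// and property names are illustrative, not taken from the actual
// opencivicdata.com working-group lists.
var record = {
  category   : "parks",                        // single category string
  properties : { name : "Laurelhurst Park" },  // domain-specific bin
  start      : "1909-01-01T00:00:00Z",         // RFC 3339 date-time
  end        : "",                             // optional, empty when unknown
  geometry   : {                               // GeoJSON geometry
    type : "Point",
    coordinates : [-122.6257, 45.5205]
  }
};
```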
This commit is where I started working on ScraperWiki integration. The idea was to tell CouchDB about a ScraperWiki scraper, and it would then pull in all the data from that scraper. I later realized that this can happen as a push directly from ScraperWiki at the time of each scraping.
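A minimal sketch of that push, assuming a ScraperWiki row with made-up field names (`cat`, `props`, `geom`) and a CouchDB database reachable over HTTP:

```javascript
// Hypothetical sketch: map a ScraperWiki row onto the common model.
// The incoming field names (cat, props, geom) are made up here; a
// real scraper would use whatever columns its table defines.
function toCouchDoc(row) {
  return {
    category   : row.cat   || "",
    properties : row.props || {},
    start      : row.start || "",
    end        : row.end   || "",
    geometry   : row.geom  || {}
  };
}

// At scrape time, POSTing JSON.stringify(toCouchDoc(row)) to
// http://<couch-host>:5984/<db>/ would create one CouchDB document
// per scraped row (CouchDB assigns the _id).
var doc = toCouchDoc({ cat : "parks", props : { name : "Example" } });
```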
It does not seem like anyone is working on a common data model, and that's what the projects we are working on at Bocoup need: a common national data model.
ScraperWiki looks great, and so does your work. Looking more into it now.
CivicAPI is not really anything yet... mostly just ideas :)
Not sure if that's related to your efforts, and you might know it already, but Joe Ferreira from DUSP/MIT worked on a schema-translation service. The idea, I believe, was to keep schema-translation rules on a web service and leave the data at the original source. For example, if your app wants to access all parcel data from Massachusetts: the same data layer comes from 351 sources and has 351 slightly different schemas. Joe's service would help identify a common schema pattern (e.g. parcel-geometry, parcel-id, parcel-address), then translate and serve harmonized data.
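A rough sketch of what such translation rules could look like; the source names and field mappings below are invented for illustration, not Joe's actual service:

```javascript
// Hypothetical schema-translation rules: one field mapping per source.
// Source names and field names are invented for illustration.
var rules = {
  "town-a" : { geom  : "parcel-geometry", pid : "parcel-id", addr    : "parcel-address" },
  "town-b" : { shape : "parcel-geometry", id  : "parcel-id", address : "parcel-address" }
};

// Translate a record from a known source into the common schema,
// dropping any fields the mapping does not cover.
function harmonize(source, record) {
  var mapping = rules[source] || {};
  var out = {};
  for (var field in record) {
    if (mapping[field]) out[mapping[field]] = record[field];
  }
  return out;
}
```

The point of the design is that the data stays at the source; only the mapping lives on the service, and harmonization happens at read time.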
I've been working on https://github.com/maxogden/civicapi as the light software, and on tools such as Google Refine + ScraperWiki as the interface for developers. CivicAPI is the generalization of my work on pdxapi.com. See that page for some examples of '- Write parsers' and '- Get data'.
For "parse data into the couch (mapping fields to our model)", I've started integrating ScraperWiki and CouchDB, but there is a lot of potential there.
All in all, this is a very fruitful space that lots of people are working in, but nobody is collaborating across those groups.