InfluxData's T.I.C.K. stack is made up of the following components:
| Component | Role |
|---|---|
| Telegraf | Metrics collection agent |
| InfluxDB | Time series database that stores the collected data |
| Chronograf | Visualization and dashboarding UI |
| Kapacitor | Real-time stream processing and alerting engine |
```python
'''This script goes along the blog post
"Building powerful image classification models using very little data"
from blog.keras.io.
It uses data that can be downloaded at:
https://www.kaggle.com/c/dogs-vs-cats/data
In our setup, we:
- created a data/ folder
- created train/ and validation/ subfolders inside data/
- created cats/ and dogs/ subfolders inside train/ and validation/
- put the cat pictures index 0-999 in data/train/cats
'''
```
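Given that directory layout, the training pipeline can read images straight from disk. A minimal sketch of that step, assuming Keras's `ImageDataGenerator` API and the `data/train` path above (the image size and batch size here are illustrative):

```python
from keras.preprocessing.image import ImageDataGenerator

# Stream (image, label) batches from the data/train directory described above.
# The class labels (cats vs. dogs) are inferred from the subfolder names.
train_datagen = ImageDataGenerator(rescale=1. / 255)  # scale pixel values to [0, 1]
train_generator = train_datagen.flow_from_directory(
    'data/train',
    target_size=(150, 150),  # illustrative: resize all images to 150x150
    batch_size=32,
    class_mode='binary')     # two classes -> a single binary label
```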
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from math import sqrt  # used by the correlation sketch below

# sum of x[i] * y[i] over the keys common to both vectors
def scal(x, y, both):
    return sum(x[i] * y[i] for i in both)

# sum of x[i] over the keys common to both vectors
def one(x, both):
    return sum(x[i] for i in both)
```
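These helpers look like the building blocks of a Pearson correlation over sparse, dict-backed vectors. The original continuation is missing, so the `pearson` function below is a hypothetical reconstruction of how they might be combined, not the author's code:

```python
def pearson(x, y):
    # keys present in both sparse vectors
    both = [k for k in x if k in y]
    n = len(both)
    if n == 0:
        return 0.0
    sum_xy = scal(x, y, both)
    sum_x, sum_y = one(x, both), one(y, both)
    sum_x2, sum_y2 = scal(x, x, both), scal(y, y, both)
    num = sum_xy - (sum_x * sum_y / n)
    den = sqrt((sum_x2 - sum_x ** 2 / n) * (sum_y2 - sum_y ** 2 / n))
    return num / den if den else 0.0
```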
```python
from IPython.display import display, HTML

# Inject CSS to widen the main cell area, menu bar and toolbar of the notebook.
display(HTML(data="""
<style>
    div#notebook-container    { width: 95%; }
    div#menubar-container     { width: 65%; }
    div#maintoolbar-container { width: 99%; }
</style>
"""))
```
| [{"place_id":"97994878","licence":"Data \u00a9 OpenStreetMap contributors, ODbL 1.0. http:\/\/www.openstreetmap.org\/copyright","osm_type":"relation","osm_id":"161950","boundingbox":["30.1375217437744","35.0080299377441","-88.4731369018555","-84.8882446289062"],"lat":"33.2588817","lon":"-86.8295337","display_name":"Alabama, United States of America","place_rank":"8","category":"boundary","type":"administrative","importance":0.83507032450272,"icon":"http:\/\/nominatim.openstreetmap.org\/images\/mapicons\/poi_boundary_administrative.p.20.png"}] | |
| [{"place_id":"97421560","licence":"Data \u00a9 OpenStreetMap contributors, ODbL 1.0. http:\/\/www.openstreetmap.org\/copyright","osm_type":"relation","osm_id":"162018","boundingbox":["31.3321762084961","37.0042610168457","-114.818359375","-109.045196533203"],"lat":"34.395342","lon":"-111.7632755","display_name":"Arizona, United States of America","place_rank":"8","category":"boundary","type":"administrative","importance":0.83922181098242,"icon":"http:\/\/nominatim.openst |
```python
# This is a really old post; in the comments (and on Stack Overflow) you'll find better solutions.
def find(key, dictionary):
    """Recursively yield every value stored under `key` in a nested dict/list structure."""
    for k, v in dictionary.items():  # dict.iteritems() was Python 2 only
        if k == key:
            yield v
        elif isinstance(v, dict):
            for result in find(key, v):
                yield result
        elif isinstance(v, list):
            for item in v:
                if isinstance(item, dict):
                    for result in find(key, item):
                        yield result
```
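A quick usage sketch with a made-up nested structure (the dict below is illustrative, not from the post):

```python
doc = {"a": 1, "nested": {"a": 2, "items": [{"a": 3}, {"b": 4}]}}
print(list(find("a", doc)))  # -> [1, 2, 3]
```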
```python
# LICENSE: public domain
import math

def calculate_initial_compass_bearing(pointA, pointB):
    """Calculate the initial bearing between two (latitude, longitude) points using
    θ = atan2(sin(Δlong)·cos(lat2), cos(lat1)·sin(lat2) − sin(lat1)·cos(lat2)·cos(Δlong))."""
    lat1, lat2 = math.radians(pointA[0]), math.radians(pointB[0])
    diff_long = math.radians(pointB[1] - pointA[1])
    x = math.sin(diff_long) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(diff_long)
    # atan2 returns -180°..180°; normalize to a 0°..360° compass bearing
    return (math.degrees(math.atan2(x, y)) + 360) % 360
```
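Two quick sanity checks with points chosen so the bearing is known exactly (the coordinates are illustrative):

```python
print(calculate_initial_compass_bearing((0.0, 0.0), (0.0, 1.0)))  # 90.0 (due east)
print(calculate_initial_compass_bearing((0.0, 0.0), (1.0, 0.0)))  # 0.0 (due north)
```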
```sh
#!/bin/sh
# Converts a mysqldump file into a SQLite 3 compatible file. It also extracts the MySQL `KEY xxxxx`
# entries from the CREATE block and creates them in separate commands _after_ all the INSERTs.
# Awk is chosen because it's fast and portable. You can use gawk, original awk or even the lightning-fast mawk.
# The mysqldump file is traversed only once.
# Usage:   $ ./mysql2sqlite mysqldump-opts db-name | sqlite3 database.sqlite
# Example: $ ./mysql2sqlite --no-data -u root -pMySecretPassWord myDbase | sqlite3 database.sqlite
```