| Models | Examples |
|---|---|
| Display ads | Yahoo! |
| Search ads | |
```python
# -*- coding: utf-8 -*-
"""
Builds an epub book out of Paul Graham's essays: http://paulgraham.com/articles.html

Author: Ola Sitarska <ola@sitarska.com>
Copyright: Licensed under the GPL-3 (http://www.gnu.org/licenses/gpl-3.0.html)

This script requires python-epub-library: http://code.google.com/p/python-epub-builder/
"""
```
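The essay-scraping half of such a script can be sketched with the stdlib alone. This is a minimal sketch, not the author's implementation: the epub assembly via python-epub-builder is omitted, and the `.html` filter is an assumption about how essay links on articles.html are identified.

```python
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    # Collects href values from <a> tags; on articles.html these are
    # the links to the individual essays.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                # Assumption: essay pages are the links ending in .html.
                if name == 'href' and value and value.endswith('.html'):
                    self.links.append(value)


def essay_links(html):
    parser = LinkCollector()
    parser.feed(html)
    return parser.links
```

Each collected link would then be fetched and appended as a chapter to the epub.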
This script parses the git log and outputs Cypher statements to create a Neo4j database of your git history.
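The translation step from one parsed commit to a Cypher statement could look roughly like this. It is a hypothetical sketch, not the script itself; the property names follow the example output, and `json.dumps` is used because its double-quoted strings and `[...]` lists are also valid Cypher literals (it is naive about exotic escaping).

```python
import json


def commit_to_cypher(sha1, subject, author_email, date_iso_8601,
                     timestamp, parents, refs):
    # Build one CREATE statement for a single commit.
    props = {
        'author_email': author_email,
        'date_iso_8601': date_iso_8601,
        'parents': parents,
        'refs': refs,
        'sha1': sha1,
        'subject': subject,
        'timestamp': timestamp,
    }
    fields = ','.join('%s:%s' % (k, json.dumps(v))
                      for k, v in sorted(props.items()))
    return 'CREATE (:Commit {%s});' % fields
```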
```cypher
BEGIN
create constraint on (c:Commit) assert c.sha1 is unique;
COMMIT
BEGIN
CREATE (:Commit {author_email:'foo@bar.com',date_iso_8601:'2014-05-22 20:53:05 +0200',parents:['b6393fc9d5c065fd42644caad600a9b7ac911ae2'],refs:['HEAD', 'origin/master', 'master', 'in-index'],sha1:'934cacf9fe6cd0188be642b3e609b529edaad527',subject:'Some commit message',timestamp:'1400784785'});
```

On July 22, GitHub announced the 3rd Annual GitHub Data Challenge, presenting multiple sources of available data.
This sounded like a good opportunity to import their data into Neo4j and have a lot of fun analyzing data that fits naturally in a graph.
As I work mainly offline or behind military proxies that do not permit me to use the REST API, I decided to go with the GitHub Archive (available here), which lets you download JSON files representing GitHub events on a daily or hourly basis.
```python
import json
from itertools import combinations

import requests
import networkx as nx


def get_senate_vote(vote):
    # The year can be replaced to fetch votes from different years
    # (e.g. 2013); 1989 is used as an example here.
    # NOTE: the exact GovTrack URL pattern below is an assumption.
    url = 'https://www.govtrack.us/data/congress/101/votes/1989/s%d/data.json' % vote
    return requests.get(url).json()
```
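The `combinations` and `networkx` imports suggest the votes are turned into a co-voting graph: an edge between every pair of senators who voted the same way. A stdlib-only sketch of that pair-counting step (the names and data shape are my own assumptions, and networkx is left aside):

```python
from itertools import combinations
from collections import Counter


def covote_counts(votes):
    # votes: mapping from a vote position (e.g. 'Yea', 'Nay') to the
    # list of senators who voted that way. Every pair voting the same
    # way gets one co-vote; summed over many votes these counts become
    # edge weights in the graph.
    counts = Counter()
    for position, senators in votes.items():
        for a, b in combinations(sorted(senators), 2):
            counts[(a, b)] += 1
    return counts
```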
These results are from an SSD; on a spinning disk the workload is unusable.

- 1M x create node+rel in 38s, 26k r/s
- 1M x create 2 nodes, 2 relationships, 2 properties in 47s, 21k r/s -> 80k records/s
- 1M x create 2 nodes with labels, 2 rels, 0 properties in 140s, 7k r/s
- 100k x create 100 rel + node -> 10M in 20s, 5k r/s
- 1M lookups by id in 22s, 43k r/s
- 1M lookups by id, compiled runtime, in 21s, 47k r/s
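The r/s figures above are simple wall-clock arithmetic (operations divided by elapsed seconds). A minimal timing harness along these lines reproduces them; the benchmarked Neo4j operation itself is only a stub here:

```python
import time


def rate(n, op):
    # Run op() n times and return operations per second, the same
    # "r/s" figure quoted in the numbers above.
    start = time.perf_counter()
    for _ in range(n):
        op()
    elapsed = time.perf_counter() - start
    return n / elapsed
```

In the real benchmark, `op` would execute one Cypher statement (e.g. a node+rel create or a lookup by id) against a running database.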
```python
import json
from copy import copy
from os import makedirs
from os.path import join, exists

import requests

LIST_ENDPOINT = 'http://api.viewers-guide.hbo.com/service/charactersList'
FEATURED_ENDPOINT = 'http://api.viewers-guide.hbo.com/service/charactersFeatured'
DETAIL_ENDPOINT = 'http://api.viewers-guide.hbo.com/service/characterDetails'

DEFAULT_PARAMS = {'lang': 1}
```
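How these endpoints are called is not shown. A sketch of two plumbing helpers the imports above suggest (the function names and the on-disk cache layout are my own assumptions, not the script's API):

```python
from copy import copy
from os.path import join

DEFAULT_PARAMS = {'lang': 1}


def build_params(extra=None):
    # copy() keeps DEFAULT_PARAMS pristine between requests, so each
    # call starts from the same base query parameters.
    params = copy(DEFAULT_PARAMS)
    if extra:
        params.update(extra)
    return params


def cache_path(root, endpoint_name, item_id):
    # Hypothetical layout for caching JSON responses on disk, matching
    # the makedirs/join/exists imports above.
    return join(root, endpoint_name, '%s.json' % item_id)
```

A fetch would then be something like `requests.get(DETAIL_ENDPOINT, params=build_params({'character': some_id}))`, with the response written to `cache_path(...)` when it does not already exist.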
## Neo4j GraphGist - Marketing Recommendations Using Last Touch Attribution Modeling and k-NN Binary Cosine Similarity

# Part 1. Neo4j Marketing Attribution Models
Graphs are well suited for marketing analytics - a natural fit since marketing is principally about relationships. In this GraphGist, we'll take a look at how to use Neo4j to make real-time marketing recommendations.
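The two techniques in the title are simple enough to sketch outside the database. The sketch below is illustrative only, with hypothetical function names, not the GraphGist's Cypher: last-touch attribution credits a conversion entirely to the most recent touchpoint, and binary cosine similarity compares two users by the sets of items they touched.

```python
import math


def last_touch(touches):
    # touches: (timestamp, channel) pairs for one converting user.
    # Last-touch attribution credits the conversion entirely to the
    # most recent marketing touchpoint before the conversion.
    return max(touches)[1]


def binary_cosine(a, b):
    # Binary (set-based) cosine similarity: |A ∩ B| / sqrt(|A| * |B|).
    # Used as the distance measure when picking the k nearest
    # neighbours for recommendations.
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))
```

In the graph itself, both computations become pattern matches over `(user)-[:TOUCHED]->(channel)`-style relationships rather than Python loops.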
Please note there is now a dedicated project for this: https://github.com/jexp/spoon-neo4j/
Adds these features to Neo4j Browser:

- DataTable (search, sort, paginate)
- Zoom for graphs and query plans (hold Alt and drag/pan)
- Charts (currently line charts)

