Based on https://github.com/CartoDB/cartodb, with additions as necessary.
11/2013 to 02/2014
Install git
sudo apt-get install git-core
Clone project
git clone https://github.com/CartoDB/cartodb.git
// deck.js extension: on every slide change, fire custom events on the
// slides that just became current, previous, and next.
(function($, deck, undefined) {
    $(document).bind('deck.change', function(e, from, to) {
        var $prev = $[deck]('getSlide', to-1),
            $next = $[deck]('getSlide', to+1);
        $[deck]('getSlide', to).trigger('deck.becameCurrent');
        $prev && $prev.trigger('deck.becamePrevious');
        $next && $next.trigger('deck.becameNext');
    });
})(jQuery, 'deck');
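A minimal usage sketch, assuming deck.js and the extension above are loaded; the selector and the data-src lazy-load attribute are hypothetical:

// Hypothetical usage: lazy-load a slide's iframe only when it becomes current.
$('.slide').bind('deck.becameCurrent', function() {
    var $iframe = $(this).find('iframe[data-src]');
    $iframe.attr('src', $iframe.data('src'));
});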
.reveal .slides section {
    padding: 10px;
}
.reveal .slides section iframe {
    -webkit-transform: scale(0.5) translate(-50%, -50%);
    -moz-transform: scale(0.5) translate(-50%, -50%);
    transform: scale(0.5) translate(-50%, -50%);
    min-width: 200%;
    min-height: 200%;
}
<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <link rel="stylesheet" href="css/reveal.css">
    <link rel="stylesheet" href="css/theme/default.css" id="theme">
</head>
// Session
require(['models/session'], function(Session) { window.Session = Session; });
var s = new Session();
s.fetch();
// Returns a Session model with the following:
// - attributes contain simple attributes
// - trips (TripList): a collection of Trip models; 'light' versions.
// - user (User): the logged-in user; check its attributes for email and name.
// - expenseCategories
// - expensePaymentTypes
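A follow-on sketch, assuming a Backbone-style model whose fetch() accepts a success callback and whose trips and user are attached as properties (the names are taken from the comments above):

s.fetch({
    success: function() {
        console.log(s.user.get('email'));  // the logged-in user's email
        s.trips.each(function(trip) {      // iterate the 'light' Trip models
            console.log(trip.id);
        });
    }
});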
# Item-pipeline imports: JSON-encode scraped items and hand them to an AMQP
# broker (carrot), deferring the blocking publish to a thread.
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals
from scrapy.exceptions import DropItem
from scrapy.utils.serialize import ScrapyJSONEncoder
from carrot.connection import BrokerConnection
from carrot.messaging import Publisher
from twisted.internet.threads import deferToThread
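A sketch of how these imports typically fit together, written in Python to match the snippet; the broker host, exchange, and routing key are hypothetical, and this is not necessarily the original pipeline:

class BrokerPublishPipeline(object):
    """Publish each scraped item to an AMQP broker off the reactor thread."""

    def __init__(self):
        self.encoder = ScrapyJSONEncoder()
        # Hypothetical connection settings.
        self.connection = BrokerConnection(hostname='localhost')
        self.publisher = Publisher(connection=self.connection,
                                   exchange='scrapy', routing_key='items')
        # Close the broker connection when the spider finishes.
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def spider_closed(self, spider):
        self.publisher.close()
        self.connection.close()

    def process_item(self, item, spider):
        if not item:
            raise DropItem('Empty item')
        # Publisher.send() blocks, so run it in Twisted's thread pool.
        deferToThread(self.publisher.send, self.encoder.encode(dict(item)))
        return item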
$f("", {}, | |
{ | |
clip: { | |
onMetaData: function(c) { | |
var fd = c.duration; | |
//create a cue point for 25, 50 and 75% of player progress | |
var cues = [ | |
{ | |
time: fd * .25 * c.cuepointMultiplier, | |
name: "25%" |
The purpose of this sample is to show the power of EventEmitter2 in the context of a specific example centered around [DIRT][0] (Data-Intensive Realtime) and [ETL][0] (Extract, Transform, Load) applications in node.js. Given the hard limits on V8 heap size, any exceptionally large data processing in node will require such patterns, and it is in the interest of the community that we start solidifying them.
Let's suppose you have an ETL that you need to run on a large set of logs which has already been partitioned into files that will not, by themselves, overload the V8 heap. These kinds of size-limited log or data files are common and should need no explanation.
This ETL runs with initial conditions (very common), and thus there may be many sets of worker processes analyzing the same data for different purposes. As an intelligent developer who knows the blocking nature of in-memory data manipulation, you decided to spread that work across those worker processes.
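A minimal sketch of the pattern under discussion, assuming the eventemitter2 package; the event names and the normalize() helper are hypothetical, and this only illustrates the wiring, not the full sample:

var EventEmitter2 = require('eventemitter2').EventEmitter2;

// One emitter coordinates the ETL steps for a single partitioned log file.
var etl = new EventEmitter2({ wildcard: true, delimiter: '.' });

etl.on('extract.done', function(records) {
  // normalize() is a hypothetical per-record transform.
  etl.emit('transform.start', records.map(normalize));
});

etl.on('transform.start', function(records) {
  etl.emit('load.start', records);
});

// Wildcard listeners can observe every step without knowing each name.
etl.on('*.start', function() {
  console.log('step started:', this.event);
});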
/**
 * Based on the Lucene prolog parser in the wordnet contrib package within the
 * main Lucene project. It has been modified to remove the Lucene bits and generate
 * a synonyms.txt file suitable for consumption by Solr. The idea was mentioned in
 * a sidebar of the book Solr 1.4 Enterprise Search Server by Eric Pugh.
 *
 * @see <a href="http://lucene.apache.org/java/2_3_2/lucene-sandbox/index.html#WordNet/Synonyms">Lucene Sandbox WordNet page</a>
 * @see <a href="http://svn.apache.org/repos/asf/lucene/dev/trunk/lucene/contrib/wordnet/">SVN Repository of the WordNet contrib</a>
 * @see <a href="https://www.packtpub.com/solr-1-4-enterprise-search-server/book">Solr 1.4 Enterprise Search Server Book</a>
 */
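For context, the synonyms.txt format that Solr's SynonymFilterFactory consumes looks like the following; the word groups here are invented for illustration:

# comma-separated terms on a line are treated as synonyms of one another
couch,sofa,divan
automobile,car,motorcar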