# Fisher's theorem functions, bad replicator dynamics, notes
# for 2x2 games
# Pr_{t+1}(i) = (Pr_t(i) * pi(i)) / sum_{j=1..n} Pr_t(j) * pi(j)
# payoff = pi(i)
# proportion = Pr(i)
# w = weights
# payoffs[i][j] = (row payoff, column payoff) when row plays i, column plays j
#payoffs = [[[2,2], [2,0]],
#           [[0,2], [3,3]]]
payoffs = [[[2,2], [0,0]],
           [[0,0], [1,1]]]
pr = (1/2., 1/2.)
def weights(pr):
    # w_i = Pr(i) * pi(i), where pi(i) is strategy i's expected payoff
    # against the current population mix
    w1 = pr[0]*(pr[0]*payoffs[0][0][0] + pr[1]*payoffs[0][1][0])
    w2 = pr[1]*(pr[0]*payoffs[1][0][0] + pr[1]*payoffs[1][1][0])
    return (w1, w2)
def repldyn(weights):
    # Pr_{t+1}(i) = w_i / sum_j w_j
    pr1 = weights[0]/sum(weights)
    pr2 = weights[1]/sum(weights)
    return pr1, pr2
w = weights(pr)
print w
print repldyn(w)
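# A minimal sketch (not in the original gist): iterate the replicator update
# a few times to watch the mix converge; with the payoff matrix above,
# strategy 1 earns more at an even mix, so it takes over.
p = pr
for t in range(20):
    p = repldyn(weights(p))
print p   # approaches (1.0, 0.0)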
# Diversity Prediction Theorem: (c-theta)^2 = 1/n * sum(from i=1 to n: (s_i-theta)^2) - 1/n * sum(from i=1 to n: (s_i-c)^2)
predictions = (45, 25, 56)
def diversity(pred, act):
    # mean squared deviation of the predictions from act: this is the
    # "average error" when act is the true value, and the "diversity"
    # when act is the crowd's mean prediction
    n = len(pred)
    s1 = sum(map(lambda x: (x - act)**2, pred))
    return float(s1)/n
print diversity(predictions, 39)
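# A minimal sketch (not in the original gist): check the Diversity
# Prediction Theorem on the numbers above -- crowd error equals average
# error minus diversity.
c = sum(predictions)/float(len(predictions))   # crowd's mean prediction, 42.0
theta = 39                                     # true value used above
print (c - theta)**2                                             # 9.0
print diversity(predictions, theta) - diversity(predictions, c)  # ~9.0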
### END REPLICATOR DYNAMICS ###
### START FISHER'S THEOREM ###
print '#######'
def average(G):
    # average fitness of each population
    return [sum(l)/float(len(l)) for l in G]
G = [[4, 6, 7],
     [1, 5, 9],
     [4, 4, 5]]
#G = [[3, 4, 5],
#     [2, 4, 6],
#     [0, 4, 8]]
avg = average(G)
def weightss(G):
    # weight each individual's fitness by its (uniform) frequency 1/n
    w = []
    for population in G:
        w.append([1./len(population) * i for i in population])
    return w
def pr(w):
    # normalize each population's weights into selection probabilities
    pr = []
    for weights in w:
        pri = []
        for i in weights:
            pri.append(i/sum(weights))
        pr.append(pri)
    return pr
w = weightss(G)
p = pr(w)
def new_avg_fitness(G, p):
    # average fitness after selection: sum_j p[i][j] * v for each population
    naf = []
    for i, population in enumerate(G):
        r = []
        for j, v in enumerate(population):
            r.append(p[i][j]*v)
        naf.append(sum(r))
    return naf
navg = new_avg_fitness(G, p)
def incr(avg, navg):
    # percentage increase in average fitness, relative to the new average
    p = []
    for i, v in enumerate(avg):
        p.append(100 - (avg[i]*100)/navg[i])
    return p
def incr_by(avg, navg):
    # absolute increase in average fitness
    p = []
    for i, v in enumerate(avg):
        p.append(navg[i] - avg[i])
    return p
def variation(G, avg):
    # sum of squared deviations of each population from its own average
    var = []
    for i, population in enumerate(G):
        s = 0
        for j, v in enumerate(population):
            s += (v - avg[i])**2
        var.append(s)
    return var
v = variation(G, avg)
print 'increase in fitness:', incr(avg, navg), '%'
print 'increase by :', incr_by(avg, navg)
print 'variation :', v
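# A minimal sketch (not in the original gist): numerical check of Fisher's
# theorem -- under fitness-proportional selection the gain in average
# fitness equals variance/mean, i.e. it is proportional to the variance.
# variation() returns the sum of squared deviations, so divide by the
# population size to get the variance.
for i, gain in enumerate(incr_by(avg, navg)):
    print gain, (v[i]/len(G[i]))/avg[i]   # the pair agrees, up to rounding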
'''\
Fisher's Theorem:
The change in average fitness due to selection will be proportional to the variance
* There is no single "cardinal" - there is a population of things that we call cardinals, with phenotypic variation in that population
* Rugged Peaks - place different cardinals on a landscape and they'll have different fitness
* Replicator dynamics - copy the more fit and the ones that exist in higher proportion; this helps select the ones that are higher up on the landscape
Idea:
Higher variance increases rate of adaptation
Intuition:
Low Variation - climb a little bit with a lot of pressure
High variation - can climb a lot faster, population adapts faster
Variation or Six Sigma
======================
In the Six Sigma view, variation is not good: deviations are low fitness, low value, so you want to reduce them
Book: The Checklist Manifesto - Atul Gawande
Opposite Proverbs
you're never too old to learn
you can't teach an old dog new tricks
don't change horses in midstream
variety is the spice of life
the pen is mightier than the sword
actions speak louder than words
Models have assumptions; decide in which settings they're going to work.
If the landscape moves and you're doing Six Sigma, there's no variation left to move with.
If your landscape is changing, you want variation
in an equilibrium world, you want Six Sigma
Fixed Landscape: get to peak and six sigma
Dancing Landscape: maintain variation
Predictions
===========
Individuals and Crowds
Categories - reduce variation; R^2 - how much variation we can explain
Linear Models
Markov Models - overcomes limitations of linear models
Diversity Prediction Theorem - many models better than one model - wisdom of crowds.
Categories
==========
"Lump to Live" - categories
Categories - reduce variation - less variation, the better the prediction - R^2 - variation explained
Linear Models
=============
Z = a*x1 + b*x2 + c*x3 + ...
Z = dependent variable; the x_i are independent variables
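A hypothetical example (not from the course notes): predicted income Z = a*(years of schooling) + b*(years of experience) + ...; the coefficients are fit to data, and R^2 reports how much of the variation they explain.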
Q:
2 people are asked to estimate the population of New York City. The first person, Jonah, lives in Chicago, a densely populated city. He guesses 7.5 million. Nastia, the second person, lives in Bay City, Michigan, a relatively small town. She guesses 3 million. The actual population of New York City is 8,391,881. Which of the following may explain why Jonah's prediction was more accurate than Nastia's?
A:
To guess the population of New York City, Jonah and Nastia could have been using their own home cities as analogies. Chicago, a large city (population about 2.9 million), is a better reference point than Bay City (population about 2,000).
Explanation:
Using appropriate analogies is important for accurate prediction. It's natural to use one's hometown as an analogy when predicting the size of another city, and this is much easier when the size of your city is close to the size of the one you're predicting. Jonah's calculation could have been 'New York is probably just over 2.5x the size of where I live', arriving at 7.5 million - still off, but closer than Nastia's prediction. Nastia, from Bay City, had to reason 'New York is far larger than my town - maybe 1500 times the size?' and was farther off. Categories could have worked too, just not in the example given. An appropriate example would have been: Jonah considers New York a 'very big city' and Chicago a 'big city'; Chicago has a population of 2.9 million and is 'big', so New York must have more than that to be considered 'very big'. However, Nastia might consider both Chicago and New York 'very big' and therefore might estimate their populations to be closer to each other. B was wrong because there's no reason why Nastia should only consider Manhattan; that has nothing to do with the information given.
Diversity Prediction Theorem
============================
Relates wisdom of crowd to wisdom of individual.
More diverse crowd => more accuracy
more intelligent individuals => more wisdom of crowd
Diversity - variation in prediction
Crowd's Error = Average Error - Diversity
c - crowds prediction
theta - true value
s_i - individual i's prediction
n - number of individuals
(c-theta)^2 = 1/n * sum(from i = 1 to n: (s_i-theta)^2) - 1/n * sum(from i=1 to n: (s_i-c)^2)
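Worked check with the predictions from the code above, s = (45, 25, 56), theta = 39, c = 42: crowd error (42-39)^2 = 9; average error (36 + 196 + 289)/3 = 173.67; diversity (9 + 289 + 196)/3 = 164.67; and 173.67 - 164.67 = 9.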
Book: Wisdom of Crowds - James Surowiecki
Diversity seems to be the main component in wisdom of
crowds
How do you get diversity?
different models
Crowd Error = Average Error - Diversity
large crowd error = large average error - small diversity
like-minded people who are all wrong - the madness of crowds
#1 Intelligent Citizens
=======================
Growth Models - countries can achieve rapid growth just by accumulating capital, but at some point, when they reach the frontier of what is known, they need innovation to create growth.
Innovation has a double effect on growth: a direct effect, plus it induces more investment in capital
Blotto - adds new dimensions to strategic competitions; war/terrorist tactics
Markov - history doesn't matter; an intervention that changes the state doesn't necessarily do us any good, but changing the transition probabilities does (see the sketch after these notes)
#2 Clearer Thinker
==================
Tipping points - if you see a kink in a graph, that doesn't mean it's a tip; it could be a growth model. A tip is a situation where the likelihood of different outcomes changes at a point in time.
Path dependence - gradual change in what will happen as events unfold
Percolation models - give us tipping points
SIR models
Disease models
#3 Understand and Use Data
==========================
Category models
Linear models
Prediction models
Markov models
Growth models
#4 Decide, strategize, design
=============================
Decision theory models
Game theory - prisoner's dilemma
Collective action
Mechanism design - write contracts, design policies so we get outcomes that we want, get people to reveal information
Auctions - outcomes depend on how people behave
How People Behave
=================
Rational
Rules
Biases
Race to the bottom - assumptions about behavior have huge effects on outcomes
exchange markets - behavior doesn't matter much as long as people are reasonably coherent in their actions
Things often don't aggregate the way we'd expect:
equilibria
patterns
random systems
complex systems
'''
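# A minimal sketch (not in the original gist) of the Markov note above:
# resetting the state is undone by the dynamics, while changing the
# transition probabilities moves the long-run equilibrium.  Assumes a
# hypothetical two-state chain where p is the fraction in state A.
def markov_limit(p, stay_a, stay_b, steps=50):
    # stay_a = P(A stays A), stay_b = P(B stays B)
    for _ in range(steps):
        p = p*stay_a + (1 - p)*(1 - stay_b)
    return p
print markov_limit(0.9, 0.8, 0.7)   # ~0.6
print markov_limit(0.1, 0.8, 0.7)   # ~0.6 again: changing the state washes out
print markov_limit(0.5, 0.9, 0.7)   # ~0.75: new probabilities, new limit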