
Moritz Schauer (mschauer)

@mschauer
mschauer / fit.jl
Last active February 21, 2025 12:09
Risk-neutral derivative pricing
using DelimitedFiles, GLMakie, Distributions
# Black-Scholes formula
normcdf(x) = cdf(Normal(), x)
function blcall(S0, K, r, T, σ)
    B = exp(r*T)                  # accumulation factor over [0, T]
    F = S0 * B                    # forward price
    d1 = log(F/K) / (σ*sqrt(T)) + σ*sqrt(T)/2
    (F*normcdf(d1) - K*normcdf(d1 - σ*sqrt(T))) / B
end
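A quick sanity check of blcall (my own illustrative numbers, not from the gist): an at-the-money call with S0 = K = 100, r = 5%, T = 1 year and σ = 20% should come out near the textbook value of about 10.45.
blcall(100.0, 100.0, 0.05, 1.0, 0.2)   # ≈ 10.45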
@mschauer
mschauer / intervention.jl
Created December 4, 2024 17:38
Z-Identification in Julia with CausalInference.jl
# Solving the problem in https://bsky.app/profile/p-hunermund.com/post/3lci6xojlmt25
using CausalInference, Graphs
# defining the graphical do-operator we need here
function do!(g, v)
    for u in collect(inneighbors(g, v))
        rem_edge!(g, u, v)
    end
end
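A minimal usage sketch (the three-node graph below is made up for illustration): do! cuts every arrow pointing into the intervened vertex.
g = SimpleDiGraph(3)
add_edge!(g, 1, 2); add_edge!(g, 3, 2)
do!(g, 2)                          # removes 1 → 2 and 3 → 2
isempty(inneighbors(g, 2))         # true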
@mschauer
mschauer / Comp.jl
Last active December 4, 2024 11:55
Comparing CausalInference.pcalg with Associations.infer_graph
using CausalInference
using Associations: CorrTest, PC, Associations
using Test
using Graphs: SimpleDiGraph, Graphs, complete_graph
using StableRNGs
using LinearAlgebra, Random, Distributions
using CausalInference: pcalg, gausscitest, CausalInference
using Combinatorics
using Tables: table, istable
rng = StableRNG(123)
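The preview ends after the imports; a plausible next step (my sketch, not the gist's code, following the pcalg usage shown in the CausalInference.jl docs) simulates a small linear chain and estimates its CPDAG:
N = 1000
x = randn(rng, N)
y = x .+ 0.5 .* randn(rng, N)
z = y .+ 0.5 .* randn(rng, N)             # ground truth: x → y → z
df = (x = x, y = y, z = z)
est = pcalg(df, 0.01, gausscitest)        # estimated CPDAG as a graph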
@mschauer
mschauer / bayesball.jl
Last active February 19, 2024 14:21
Bayes ball
using CausalInference, Graphs
V = [:U, :T, :P, :O]
ι = Dict(v=>i for (i,v) in enumerate(V))
g = digraph([1=>3, 2=>3, 3=>4, 2=>4, 1=>4])
# Can we estimate the total effect T => O without observing U?
u = ι[:T]
v = ι[:O]
∅ = Set{Int}()
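The preview stops here; presumably the next lines run the Bayes-ball / d-separation queries. A hedged sketch of what that could look like, assuming CausalInference.jl's dsep(g, u, v, S):
dsep(g, u, v, ∅)            # false: T and O are linked, e.g. by the direct edge T → O
dsep(g, ι[:U], ι[:T], ∅)    # true: every path between U and T is blocked by a collider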
@mschauer
mschauer / kalmanmean.jl
Created August 23, 2023 17:58
Fancy mean with Kalman filter
using Kalman
using Kalman.GaussianDistributions
using Statistics, LinearAlgebra
# prior for time 0
x0 = 0.0
P0 = floatmax(x0)   # effectively flat prior variance
# observation operator
H = 1.0
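The preview stops before any filtering, so here is a hand-rolled illustration of the idea (plain Julia rather than Kalman.jl's API, with made-up data, noise level and prior variance): a scalar Kalman filter with a static state and a very diffuse prior reproduces the running sample mean.
function kalman_mean(ys, R, H; x = 0.0, P = 1e12)
    for y in ys
        K = P*H / (H*P*H + R)    # Kalman gain
        x += K*(y - H*x)         # mean update
        P *= 1 - K*H             # variance update
    end
    x
end
ys = randn(100) .+ 3.0               # made-up observations with true mean 3
kalman_mean(ys, 1.0, H) ≈ mean(ys)   # true (up to floating point)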
@mschauer
mschauer / mixture.jl
Created May 18, 2023 15:36
Hidden correlation
using GLMakie
using Random
# mixture distribution
X(c) = rand() < c ? sqrt(rand()) : 1 - sqrt(rand())
# sample
Random.seed!(1);
A = [0:0.01:1;; X.(0:0.01:1)];
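The preview stops after building A; a plausible continuation (my sketch, not necessarily the gist's) plots the pairs and quantifies the dependence the title alludes to:
scatter(A[:, 1], A[:, 2])     # c on the x-axis, one draw of X(c) on the y-axis
using Statistics
cor(A[:, 1], A[:, 2])         # positive, since E[X(c)] = 1/3 + c/3 increases in c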
@mschauer
mschauer / quasigaussian.txt
Last active March 2, 2023 08:42
Quasi Gaussian numbers
0.0 0.0
-0.032984387220723596 0.030216400626819087
0.005532055967160396 -0.06303492443179701
0.04716487617405366 0.06151822897388728
-0.08816371845816047 -0.01559492041364278
0.0844812855043274 -0.05374007858781831
-0.02848102330289795 0.10594798706320867
-0.054631054889518714 -0.10518877181328493
0.1190544569384441 0.04347849672678073
-0.12429436989984163 0.05130690904254036
@mschauer
mschauer / detailedbalance.md
Last active December 7, 2022 10:39
Being a bit didactic with ChatGPT.
  • You must have heard the term detailed balance in the context of Markov chains before.

Yes, I have heard of the term detailed balance in the context of Markov chains. In the context of Markov chains, detailed balance refers to a condition that must be satisfied in order for a given system to be in equilibrium. This condition states that the product of the transition probabilities between any two states in the system must be equal to the product of the reverse transition probabilities between those same two states. This ensures that the system will not tend to move towards one particular state over another, and will instead remain in a state of equilibrium.

  • That is not quite correct, let's see. Let's denote the transition probability from state x to state y by p(x → y). Detailed balance holds if there is a probability distribution π(x) over the states such that u(x,y) = π(x)p(x → y) is a symmetric function, hence u(x,y) = u(y,x).

You are correct. My previous explanation was not entirely accurate. Deta
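To make the corrected definition concrete, here is a small numerical check in Julia (my own two-state example, not part of the chat):
P = [0.9 0.1; 0.2 0.8]                    # row-stochastic transition matrix
π_ = [2/3, 1/3]                           # its stationary distribution
u(x, y) = π_[x] * P[x, y]                 # probability flux from x to y
all(u(x, y) ≈ u(y, x) for x in 1:2, y in 1:2)   # true: detailed balance holds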

@mschauer
mschauer / dual.jl
Created November 4, 2022 15:14
Dual numbers
struct Dual <: Real
    value
    δ
    # w and Y and tag
end
Base.show(io::IO, x::Dual) = print(io, x.value, " + ", x.δ, " ϵ")
Base.:+(x::Dual, y::Dual) = Dual(x.value + y.value, x.δ + y.δ)
Base.:*(x::Dual, y::Dual) = Dual(x.value*y.value, x.δ*y.value + x.value*y.δ)
Dual(x) = Dual(x, zero(x))
Base.promote_rule(::Type{Dual},::Type{<:Real}) = Dual
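A small usage example for these dual numbers (my own, illustrative): seeding the δ slot with 1 gives a forward-mode derivative.
f(x) = x*x + 3x
f(Dual(2.0, 1.0))     # prints as 10.0 + 7.0 ϵ: f(2) = 10, f'(2) = 7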
@mschauer
mschauer / introx.jl
Last active October 17, 2022 07:02
Adam to maximise expectation or variational inference with https://github.com/gaurav-arya/StochasticAD.jl
using StochasticAD, Distributions, Optimisers, GLMakie
import Random # hide
Random.seed!(1234) # hide
# Function we want to maximize the expectation of
function X(p)
    a = p*(1-p)
    b = rand(Binomial(10, p))
    c = 2 * b + 3 * rand(Bernoulli(p))
    return a * c * rand(Normal(b, a))
end
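Once X is closed, a natural next step (a sketch based on StochasticAD.jl's derivative_estimate; the value p = 0.5 and the sample count are illustrative) is an unbiased Monte-Carlo estimate of d/dp E[X(p)]:
using Statistics
samples = [derivative_estimate(X, 0.5) for _ in 1:1000]
mean(samples)          # average of unbiased single-sample derivative estimates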