This is cool. Is there a way to avoid a loop for gp_replicates, e.g. by passing an array of shape (n_gps, cov_dim, cov_dim) as a "batch GP"? https://forum.pyro.ai/t/gaussian-processes-fit-using-plates/1742
Yes, but unfortunately not with batching like that. There is a way now that JAX's sparse matrix multiply is in PyTensor; I'll add that implementation.
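The thread doesn't include that implementation, but a minimal sketch of the sparse route might look like the following, with all shapes and names invented for illustration: build the block-diagonal basis kron(I_n_gps, phi) as a scipy.sparse matrix and multiply it through PyTensor's sparse dot (which is where a JAX sparse-matmul backend would come into play).

```python
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.tensor as pt
import pytensor.sparse as pts

# Hypothetical sizes: n_gps replicate GPs sharing one (n_obs, m_star) HSGP basis.
n_gps, n_obs, m_star = 3, 100, 30
phi = np.random.default_rng(0).normal(size=(n_obs, m_star))

# Block-diagonal design kron(I, phi): shape (n_gps*n_obs, n_gps*m_star).
# It is mostly zeros, so keep it sparse rather than materializing the dense kron.
phi_block = sp.kron(sp.identity(n_gps), sp.csr_matrix(phi), format="csr")

# All replicates' basis coefficients stacked into one flat vector.
beta = pt.vector("beta")

# Sparse @ dense product inside the PyTensor graph.
f = pts.structured_dot(pts.as_sparse_variable(phi_block), beta[:, None]).flatten()

f_fn = pytensor.function([beta], f)
print(f_fn(np.ones(n_gps * m_star)).shape)  # (300,) = n_gps * n_obs, stacked
```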
In case anyone else is exploring using this as part of a linear model, I've forked it to show a version using prior_linearized instead: https://gist.github.com/kforeman/692542430469d016f84a17f230d8500b

From there @theorashid might be able to change the size of beta_f to (n_gps, gp_f._m_star) to use the same basis functions but independent coefficients for each of the child GPs. I haven't worked out how to get the delta deterministic working yet, though; I imagine it may have to first construct a sparse matrix (the Kronecker product of a 3x3 identity matrix with phi_f, or something like that, based on @bwengals' comment)?
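For reference, a minimal sketch of that shared-basis idea, assuming recent PyMC (the kernel, priors, number of basis functions, and data here are all illustrative; older PyMC versions expect pre-centered inputs Xs in prior_linearized rather than X):

```python
import numpy as np
import pymc as pm

n_gps, n_obs = 3, 100
X = np.linspace(0.0, 10.0, n_obs)[:, None]

with pm.Model():
    ell = pm.InverseGamma("ell", mu=2.0, sigma=1.0)
    eta = pm.HalfNormal("eta", sigma=1.0)
    cov = eta**2 * pm.gp.cov.ExpQuad(1, ls=ell)

    gp_f = pm.gp.HSGP(m=[30], c=1.5, cov_func=cov)
    # phi: (n_obs, m_star) shared basis; sqrt_psd: (m_star,) spectral scales.
    phi, sqrt_psd = gp_f.prior_linearized(X)

    # Independent coefficients per child GP over one shared basis.
    beta_f = pm.Normal("beta_f", size=(n_gps, gp_f._m_star))

    # (n_gps, n_obs): row i is child GP i evaluated at X; no loop, no kron.
    f = pm.Deterministic("f", (beta_f * sqrt_psd) @ phi.T)
```

In this batched form the deterministic is just a broadcasted matmul; the sparse kron(I, phi_f) construction sketched earlier should only be needed if you want the stacked (n_gps * n_obs,) vector for a design-matrix-style linear predictor.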
Yup, that's right @kforeman. I have it implemented, but it's slower; I think I'm doing something wrong but haven't looked into it further yet.
Interesting, I'm currently using that approach but didn't actually benchmark it. I'll find some time to see whether it's really faster or I've just slowed myself down.
Ah thanks, good catch! Fixed