@wdevazelhes
Last active January 16, 2019 09:08
This gist explains why two supposedly equivalent ways of computing the loss matrix for SDML can give slightly different results because of numerical uncertainties.
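Concretely, the script below checks the following identity (the notation here is mine, not from the gist): with X the data matrix, A the weighted adjacency matrix built from the pairs (A has entry y_k at position (i_k, j_k) for pair k), and L(.) the row-summing Laplacian defined in new_lap below,

\[
X^\top L(A)\, X \;+\; X^\top L(A^\top)\, X
\;=\;
\sum_k y_k \,(x_{i_k} - x_{j_k})(x_{i_k} - x_{j_k})^\top .
\]

The left-hand side is expr_1 and the right-hand side is expr_2; they agree exactly on paper, but only approximately in floating point.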
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.utils import check_random_state
from scipy.sparse import coo_matrix
from numpy.testing import assert_allclose
RNG = check_random_state(0)


def test_loss_sdml_uncertainties():
    n_samples = 10
    n_dims = 5
    X = RNG.randn(n_samples, n_dims)
    # We build a dataset where each point is linked (by leaving and arriving
    # edges) to only one other point, to avoid uncertainties due to
    # factorizations: in floating point, 2*a - b - c may differ from
    # (a - b) + (a - c) (see the small demo after this script). The code
    # below needs this condition to write the expressions in equivalent
    # terms, up to some filling zeros.
    c = np.array([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]])
    pairs = X[c]
    y = np.array([1, 1, -1, 1, 1])
    # original computation of the expression
    def new_lap(A, normed=False):
        # A Laplacian that sums over rows instead of columns. (In the
        # original snippet
        # (https://gist.github.com/wdevazelhes/3c349e13976613d15ebc46178c942474)
        # we worked on a symmetric matrix (adj + adj.T), so the original
        # snippet is still valid.)
        return laplacian(A.T, normed=normed).T

    adj = coo_matrix((y, (c[:, 0], c[:, 1])), shape=(n_samples,) * 2)
    expr_1 = (X.T.dot(new_lap(adj, normed=False).dot(X))       # term 1
              + X.T.dot(new_lap(adj.T, normed=False).dot(X)))  # term 2
    # equivalent way to do it with already formed pairs
    pairs_0, pairs_1 = pairs[:, 0], pairs[:, 1]
    expr_2 = (pairs_0.T.dot((pairs_0 - pairs_1) * y[:, np.newaxis])     # term 1
              + pairs_1.T.dot((pairs_1 - pairs_0) * y[:, np.newaxis]))  # term 2
    # the two expressions are almost equal
    assert_allclose(expr_1, expr_2, rtol=1e-10)
    # Looking at the first term of each expression (before the left dot
    # product with X / pairs_0), they are exactly equal, up to filling zeros.
    assert (new_lap(adj, normed=False).dot(X)[1:6] ==
            (pairs_0 - pairs_1) * y[:, np.newaxis]).all()
    # If we remove these zeros, the first terms, this time including the dot
    # product, are still exactly equal.
    assert (X[1:6].T.dot(new_lap(adj, normed=False).dot(X)[1:6]) ==
            pairs_0.T.dot((pairs_0 - pairs_1) * y[:, np.newaxis])).all()
    # But if we leave the dot product as is (keeping the zero rows), the
    # results can differ, probably because numpy does the matrix product in a
    # slightly different way.
    assert (X.T.dot(new_lap(adj, normed=False).dot(X)) ==
            pairs_0.T.dot((pairs_0 - pairs_1) * y[:, np.newaxis])).all()


if __name__ == '__main__':
    print(test_loss_sdml_uncertainties())
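
As referenced in the comment on the dataset construction, here is a minimal standalone demo (mine, not part of the original gist) of the floating-point factorization issue: 2*a - b - c and (a - b) + (a - c) are equal on paper but need not be bitwise equal in floating point.

import numpy as np

# minimal demo: floating-point addition is not associative, so two
# algebraically equal factorizations can differ in the last bits
rng = np.random.RandomState(0)
for _ in range(10000):
    a, b, c = rng.randn(3)
    if 2 * a - b - c != (a - b) + (a - c):
        print('counterexample:', 2 * a - b - c, 'vs', (a - b) + (a - c))
        break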
@wdevazelhes

With these versions, the script raises an AssertionError on the last assertion:

Python 2.7.15 |Anaconda, Inc.| (default, Oct  9 2018, 15:30:22)
[GCC 7.3.0]
NumPy 1.15.2
SciPy 1.1.0

@wdevazelhes

However, it does not fail with these newer versions:

Linux-4.4.0-141-generic-x86_64-with-debian-stretch-sid
Python 3.7.1 (default, Dec 14 2018, 19:28:38) 
[GCC 7.3.0]
NumPy 1.15.4
SciPy 1.2.0
Scikit-Learn 0.20.2

If it passes consistently for different random seeds, it would mean that the filling zeros in the computation no longer introduce uncertainties with the new versions.
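
For reference, here is a sketch of such a check (exprs_match is a hypothetical helper of mine, not from the gist; it rebuilds just the first term of each expression above for a given seed and reports whether they are bitwise equal):

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import laplacian
from sklearn.utils import check_random_state

def exprs_match(seed):
    # rebuild the first term of each expression from the script above
    rng = check_random_state(seed)
    X = rng.randn(10, 5)
    c = np.array([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]])
    y = np.array([1, 1, -1, 1, 1])
    pairs_0, pairs_1 = X[c][:, 0], X[c][:, 1]
    adj = coo_matrix((y, (c[:, 0], c[:, 1])), shape=(10, 10))
    lap = laplacian(adj.T, normed=False).T  # same row-summing Laplacian
    term_lap = X.T.dot(lap.dot(X))
    term_pairs = pairs_0.T.dot((pairs_0 - pairs_1) * y[:, np.newaxis])
    return (term_lap == term_pairs).all()

print(all(exprs_match(seed) for seed in range(100)))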

If so, then we should investigate the effect of the factorization in the first expression, i.e. when some points are connected to several other points.
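
A possible sketch of that experiment (again my construction, not from the gist): connect one point to two others, so that the Laplacian route computes 2*x_0 - x_1 - x_2 while the pairs route computes (x_0 - x_1) + (x_0 - x_2), and compare bitwise:

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import laplacian
from sklearn.utils import check_random_state

rng = check_random_state(0)
X = rng.randn(10, 5)
c = np.array([[0, 1], [0, 2]])  # point 0 now has two leaving edges
y = np.array([1, 1])
pairs_0, pairs_1 = X[c][:, 0], X[c][:, 1]
adj = coo_matrix((y, (c[:, 0], c[:, 1])), shape=(10, 10))
lap = laplacian(adj.T, normed=False).T
term_lap = X.T.dot(lap.dot(X))  # row 0 of lap.dot(X) is 2*x_0 - x_1 - x_2
term_pairs = pairs_0.T.dot((pairs_0 - pairs_1) * y[:, np.newaxis])
print((term_lap == term_pairs).all())  # may be False due to the factorization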

If it still gives exactly the same result, we should investigate the uncertainties introduced by the final factorization between the two expressions of the original snippet: https://gist.github.com/wdevazelhes/3c349e13976613d15ebc46178c942474
