Created March 15, 2018 08:04
Compute the similarity of two fastText word-embedding vectors as the dot product of the L2-normalized vectors (i.e. their cosine similarity)
```python
import fastText
import numpy as np

# Load a pre-trained fastText model (path as in the original gist).
ftModel = fastText.load_model("WordEmbedding/ftWord.bin")

def get_similarity(v1, v2):
    """Cosine similarity: dot product of the L2-normalized vectors."""
    return np.dot(v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2))

v1 = np.array(ftModel.get_word_vector("injury"))
v2 = np.array(ftModel.get_word_vector("injuries"))

print(get_similarity(v1, v2))  # 0.872767 with this model
```
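The model file above isn't distributed with the gist, but the core identity — that the dot product of two L2-normalized vectors equals their cosine similarity — can be checked with plain NumPy. The toy vectors below are illustrative stand-ins for word embeddings, not output of any fastText model:

```python
import numpy as np

def get_similarity(v1, v2):
    # Dot product of L2-normalized vectors == cosine similarity.
    return np.dot(v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2))

# Hypothetical stand-ins for embedding vectors.
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])    # parallel to a -> similarity ~1.0
c = np.array([-1.0, -2.0, -3.0]) # anti-parallel -> similarity ~-1.0

print(round(get_similarity(a, b), 6))  # 1.0
print(round(get_similarity(a, c), 6))  # -1.0
```

Because both inputs are normalized to unit length first, the result is bounded in [-1, 1] regardless of the vectors' magnitudes.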