@jbochi
Last active November 12, 2024 11:32
Recommending GitHub repositories with Google Big Query and implicit library: https://medium.com/@jbochi/recommending-github-repositories-with-google-bigquery-and-the-implicit-library-e6cce666c77
@jbochi
Author

jbochi commented Aug 22, 2017

Hi @juanremi. Sorry for the late reply. Your code looks fine, but 50 factors is probably way too many for such a small dataset. You should do some validation to make sure the results make sense. You can, for instance, compute the mean squared error for ratings in the validation set.
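
A minimal sketch of that validation idea (not from the gist itself): hold out a fraction of the observed entries, fit on the rest, and compute the MSE on the held-out entries. It assumes stars is a scipy.sparse user-item matrix and that this version of implicit expects an item-user matrix in fit(); adjust the transpose if yours differs.

import numpy as np
import scipy.sparse
import implicit
from sklearn.model_selection import train_test_split

# hold out 20% of the observed (user, item) pairs for validation
coo = stars.tocoo()
train_idx, val_idx = train_test_split(np.arange(coo.nnz), test_size=0.2, random_state=42)
train = scipy.sparse.csr_matrix(
    (coo.data[train_idx], (coo.row[train_idx], coo.col[train_idx])),
    shape=coo.shape)

model = implicit.als.AlternatingLeastSquares(factors=10)  # fewer factors than 50
model.fit(train.T)  # transpose assumes fit() wants items as rows

# the predicted score for a pair is the dot product of its user and item factors
preds = np.array([model.user_factors[u].dot(model.item_factors[i])
                  for u, i in zip(coo.row[val_idx], coo.col[val_idx])])
mse = np.mean((coo.data[val_idx] - preds) ** 2)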

@ptiwaree

Hi jbochi, in your recommendations.ipynb, could you please explain this part?

confidence = 40
model.fit(confidence * stars)

I am not sure why we are setting confidence to 40, or what confidence even means. Based on this, it looks like you are filling the matrix with values of 40 for the entries that are non-zero. Is that true? Can we instead fill it with the actual number of stars (a number between 10 and 150)?

@jbochi
Author

jbochi commented Feb 9, 2018

@ptiwaree stars is a sparse matrix with 0s and 1s. By multiplying by confidence, we are just giving positive examples a fixed high weight. The best value should be determined by cross-validation. You can also try this heuristic: benfred/implicit#74 (comment)
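
As a rough illustration (not jbochi's notebook code), one could grid-search the weight on a validation split; validate() below is a hypothetical scoring helper (e.g. NDCG or MSE on held-out data) and train is the 0/1 training matrix.

import implicit

best_score, best_confidence = None, None
for confidence in (1, 5, 10, 20, 40, 80):
    model = implicit.als.AlternatingLeastSquares(factors=10)
    model.fit(confidence * train)  # scale the binary matrix by the weight
    score = validate(model)        # hypothetical evaluation helper
    if best_score is None or score > best_score:
        best_score, best_confidence = score, confidence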

@antonioalegria

antonioalegria commented Jun 14, 2018

Hi @jbochi, I'm trying to use this code to evaluate another dataset, but I'm getting a bunch of index out of bounds errors because the factors in the train data are not the same as in the test data. This is probably because the users/items in train are different from the ones seen in test.

How would you adapt this code to handle this? Or is my theory incorrect? /cc @antonioalegria

@joddm

joddm commented Nov 2, 2018

I also have problems with index out of bounds errors. Did you figure it out, @antonioalegria?

Do you know which versions of the libraries you ran this with, @jbochi? I'm currently running with:

pandas: 0.23.4
numpy: 1.15.2
scipy: 1.1.0
implicit: 0.3.8
sklearn: 0.20.0

My original dataset is of shape:

<20210x4324 sparse matrix of type '<class 'numpy.float64'>'
	with 116992 stored elements in COOrdinate format>

and the truth variable in ndcg_scorer transforms the test split to shape (20206, 4324), while the predictions variable is of shape (20210, 4310).

So this is what's causing the index out of bounds error.

Edit: By changing the p variable, I managed to correct the predictions shape, but I don't understand why the truth variable is of shape (20206, 4324). My guess is the same as yours, @antonioalegria: in LeavePOutByGroup, one of the splits has users who haven't purchased some products, so the full dimensions are not restored in truth.

Okay, by filtering out items purchased by fewer than x customers (trying out different values), I managed to get a correct truth shape, but now the predictions shape is off. Aah... :) Do you know of any heuristic, @jbochi?
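
Not from the notebook, but one common cause of this kind of shape drift is rebuilding a sparse matrix from index arrays and letting scipy infer the dimensions from the largest index present. Passing the full shape explicitly keeps truth and predictions aligned (data, user_idx, item_idx and original are hypothetical names):

import scipy.sparse as sp

# explicit shape: users/items absent from this split no longer shrink the matrix
truth = sp.csr_matrix((data, (user_idx, item_idx)), shape=original.shape)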

@seb799

seb799 commented Jan 17, 2019

@joddm @antonioalegria
From my understanding, p in LeavePOutByGroup() should be <= (minimum number of items per user) / 2.
For example, if your dataset has a user with activity for only 4 items, p should be <= 2.

Either you rebuild your dataset to include only users with activity for more products, or you filter out users with fewer than p*2 products from the test sets.

Hope that makes sense.

It resolved the index out of bounds error on my end.
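
A minimal sketch of that filtering, assuming a pandas DataFrame df with hypothetical "user" and "item" columns:

import pandas as pd

p = 2
items_per_user = df.groupby("user")["item"].transform("count")
df = df[items_per_user >= 2 * p]  # drop users with fewer than p*2 items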


@DaStapo

DaStapo commented Aug 23, 2020

If my dataset has mostly just 2 items per user, I assume LeavePOutByGroup is not the way to go? Because if I understand correctly, that would mean each split has mostly 1 item per user, and therefore the model has nothing to learn.

@kylemcmearty

@jbochi what is the license on this gist?
