
@Steake
Last active October 16, 2023 07:27

DiffNet++: A Comprehensive Overview

Architecture Overview

```mermaid
graph TD;
    A[User-Item Interaction History] -->|Matrix Factorization| B1[Embedding Layer];
    B1 -->|Combine with Features| C1[Fusion Layer];
    C1 -->|Context-Aware Attention| D1[Context-Aware Attention Mechanism];
    D1 -->|Graph Convolution| D2[Graph Convolutional Network GCN Layer];
    D2 -->|Dot Product| E1[Prediction Layer];
    E1 --> F1[Output: Predicted Preferences];
    Z1[User-User Social Network] -.->|Influence| D2;
    Z2[User-Item Interest Network] -.->|Interest| D2;
```

Tabular Data for DiffNet++

| Layer | Description | Example Data |
| --- | --- | --- |
| Embedding Layer | Converts users and items into vector representations | Alice: [0.8, 0.2] |
| Fusion Layer | Combines embeddings with other features | Alice's vector with Location: [0.8, 0.2, 0.5] |
| Context-Aware Attention Mechanism | Assigns weights to features based on relevance | Attention weights for Alice: [0.6, 0.3, 0.1] |
| GCN Layer | Updates embeddings using both social and interest networks | Updated Alice's vector: [0.75, 0.15, 0.45] |
| Prediction Layer | Outputs the final predicted preference score | Predicted score for Alice liking 'Star Wars': 0.92 |

Description

DiffNet++ is an advanced recommendation system architecture that provides a deep understanding of user preferences. It begins with an embedding layer that transforms user-item interactions into vectors. The fusion layer then incorporates additional features. What sets DiffNet++ apart is its context-aware attention mechanism which evaluates the importance of each feature. This is followed by a Graph Convolutional Network (GCN) layer that integrates insights from both user-to-user social connections and user-to-item interest networks. Finally, the prediction layer produces personalized recommendations based on the refined embeddings.

1. Embedding Layer:

Given:

  • $u$ is the user's embedding vector.
  • $i$ is the item's embedding vector.
  • $W_{1u}$ and $W_{1i}$ are the weight matrices.
  • $b_{1u}$ and $b_{1i}$ are the bias terms.

The transformations are given by:

$$ L_{1u} = \text{ReLU}(W_{1u} \times u + b_{1u}) $$

$$ L_{1i} = \text{ReLU}(W_{1i} \times i + b_{1i}) $$
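The two transformations above can be sketched in NumPy. The feature and embedding dimensions here are illustrative choices, and the weight matrices are random placeholders standing in for learned parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Illustrative dimensions: 4 raw features per user/item, 2-dimensional embeddings.
rng = np.random.default_rng(0)
u = rng.normal(size=4)           # user input vector
i = rng.normal(size=4)           # item input vector
W_1u = rng.normal(size=(2, 4))   # user weight matrix (random placeholder)
W_1i = rng.normal(size=(2, 4))   # item weight matrix
b_1u = np.zeros(2)               # bias terms
b_1i = np.zeros(2)

L1u = relu(W_1u @ u + b_1u)      # user embedding after the ReLU transform
L1i = relu(W_1i @ i + b_1i)      # item embedding
```

The ReLU keeps both embeddings non-negative, which is visible in the example vectors in the table above.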

2. Fusion Layer:

Given:

  • $W_{2u}$ and $W_{2i}$ are the fusion weight matrices.
  • $b_{2u}$ and $b_{2i}$ are the fusion bias terms.

The fusion embeddings are computed as:

$$ \text{Fusion} = \text{Tanh}(W_{2u} \times L_{1u} + W_{2i} \times L_{1i} + b_{2u} + b_{2i}) $$
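A minimal sketch of the fusion step, assuming 2-dimensional input embeddings and a hypothetical 3-dimensional fused space; the weights are again random placeholders:

```python
import numpy as np

# Example embeddings from the previous layer (values are illustrative).
L1u = np.array([0.8, 0.2])
L1i = np.array([0.5, 0.7])

rng = np.random.default_rng(1)
W_2u = rng.normal(size=(3, 2))   # fusion weight matrices
W_2i = rng.normal(size=(3, 2))
b_2u = np.zeros(3)               # fusion bias terms
b_2i = np.zeros(3)

# Tanh squashes the fused representation into (-1, 1).
fusion = np.tanh(W_2u @ L1u + W_2i @ L1i + b_2u + b_2i)
```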

3. Influence and Interest Diffusion Layers:

Given:

  • $W_3$ is the weight matrix for diffusion.
  • $b_3$ is the bias term for diffusion.
  • $S$ is the social network matrix.

The influence diffusion is modeled as:

$$ L_{2u} = \text{ReLU}(W_3 \times \text{Fusion} + b_3 + \text{Diffusion}(S)) $$

Where $\text{Diffusion}(S)$ is a function that aggregates the influence from neighboring users in the social network $S$.
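The diffusion step can be sketched for a toy 3-user network. The text does not pin down the aggregator, so averaging each user's neighbours' embeddings is an assumption here, and all numeric values are illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Toy social network of 3 users; S[a, b] = 1 means user a follows user b.
S = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

# Per-user fused embeddings from the previous layer (illustrative values).
fusion = np.array([[0.8, 0.2],
                   [0.1, 0.9],
                   [0.5, 0.5]])

# Diffusion(S): here, the mean of each user's neighbours' embeddings.
deg = np.clip(S.sum(axis=1, keepdims=True), 1.0, None)
diffusion = (S @ fusion) / deg

rng = np.random.default_rng(2)
W_3 = rng.normal(size=(2, 2))    # diffusion weight matrix (random placeholder)
b_3 = np.zeros(2)

L2u = relu(fusion @ W_3.T + b_3 + diffusion)   # one row per user
```

Because the social term is added before the ReLU, a user's refined embedding is shaped both by their own fused features and by who they follow.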

4. Prediction Layer:

Given:

  • $W_4$ is the weight matrix for prediction.
  • $b_4$ is the bias term for prediction.

The prediction is made as:

$$ \hat{y} = \text{Sigmoid}(W_4 \times L_{2u} + \text{Item Bias} \times L_{1i} + b_4) $$
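A sketch of the prediction step with illustrative embeddings; `item_bias` stands in for the "Item Bias" term above, and the weights are random placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Refined user embedding and item embedding (illustrative values).
L2u = np.array([0.75, 0.15])
L1i = np.array([0.5, 0.7])

rng = np.random.default_rng(3)
W_4 = rng.normal(size=2)          # prediction weights (random placeholder)
item_bias = rng.normal(size=2)    # weights applied to the item embedding
b_4 = 0.0

# Sigmoid maps the raw score to a (0, 1) preference probability.
y_hat = sigmoid(W_4 @ L2u + item_bias @ L1i + b_4)
```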

The model is then trained using a pair-wise ranking-based loss function, which encourages observed (positive) items to receive higher predicted scores $\hat{y}$ than unobserved ones. The Adam optimizer with learning rate $\alpha = 0.001$ is used to adjust the parameters $W$ and $b$ in order to minimize the loss.
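The text names the loss only as "pair-wise ranking-based"; a BPR-style formulation (an assumption about the exact form) would look like:

```python
import numpy as np

def bpr_loss(y_pos, y_neg):
    # BPR-style pairwise loss: pushes an observed item's predicted score
    # above an unobserved item's score; -log(sigmoid(score gap)).
    return -np.log(1.0 / (1.0 + np.exp(-(y_pos - y_neg))))

# A wide positive margin yields a small loss; a narrow margin, a larger one.
loss = bpr_loss(0.92, 0.31)
```

In practice the loss is averaged over sampled (positive, negative) item pairs per user before the Adam update.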
