Radi Cho radi-cho

@radi-cho
radi-cho / gpt4-SHAP.txt
Created March 28, 2023 08:35
Prompt: "Explain in detail what the SHAP Gradient Explainer is and how it works. Use LaTeX math. Provide intuition."
The SHAP (SHapley Additive exPlanations) Gradient Explainer is a method for interpreting the output of machine learning models by assigning feature importance values to input features. The idea behind SHAP values is to fairly distribute the contribution of each feature to the prediction for a specific instance by considering all possible feature combinations. SHAP values are based on the concept of Shapley values from cooperative game theory, which ensures that the contributions are fairly distributed among the features.
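The "all possible feature combinations" idea above can be made concrete with a brute-force sketch. This is not the SHAP library's implementation, just a minimal illustration of the exact Shapley formula on a hypothetical two-feature linear model (the model `f` and feature names are made up for the example):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, features):
    """Exact Shapley values by enumerating every coalition S of the
    remaining features and averaging the marginal contribution
    f(S + {i}) - f(S) with the Shapley weight |S|!(n-|S|-1)!/n!."""
    n = len(features)
    phi = {}
    for i in features:
        others = [j for j in features if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (f(set(S) | {i}) - f(set(S)))
        phi[i] = total
    return phi

# Hypothetical toy model: prediction = 2*x1 + 3*x2, with an absent
# feature replaced by a baseline value of 0.
x = {"x1": 1.0, "x2": 1.0}
f = lambda S: 2.0 * (x["x1"] if "x1" in S else 0.0) \
            + 3.0 * (x["x2"] if "x2" in S else 0.0)

print(shapley_values(f, ["x1", "x2"]))  # {'x1': 2.0, 'x2': 3.0}
```

Note that the attributions sum to f(all features) - f(empty set), the additivity ("efficiency") property that makes the distribution fair; the exponential number of coalitions is exactly why practical explainers such as the Gradient Explainer must approximate.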
The Gradient Explainer specifically targets deep learning models, such as neural networks, and uses the gradients of the model's output with respect to its input features to approximate SHAP values. It is computationally efficient and can handle large neural networks with high-dimensional inputs. It works by combining these gradients with the expected value of the model output over a background distribution, yielding an explanation of the model's local behavior around a specific input instance.
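Since the prompt asked for LaTeX math, the two formulas above can be sketched explicitly. The first is the exact Shapley value; the second is the expected-gradients style approximation that, to my understanding, underlies the Gradient Explainer (here $F$ is the feature set, $D$ the background distribution, and $x'$ a baseline sample):

```latex
% Exact Shapley value of feature i for model f:
\phi_i = \sum_{S \subseteq F \setminus \{i\}}
  \frac{|S|!\,(|F|-|S|-1)!}{|F|!}
  \bigl[f(S \cup \{i\}) - f(S)\bigr]

% Gradient-based approximation (expected gradients): average the
% gradient along straight paths from background samples x' to x,
% scaled by the displacement (x_i - x'_i):
\phi_i \approx \mathbb{E}_{x' \sim D,\; \alpha \sim U(0,1)}
  \left[(x_i - x'_i)\,
  \frac{\partial f}{\partial x_i}\bigl(x' + \alpha\,(x - x')\bigr)\right]
```

The intuition: rather than enumerating exponentially many coalitions, the explainer treats "feature absent" as "feature set to a background value" and integrates gradients along the path between baseline and input, which recovers additive attributions at a fraction of the cost.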
// Pin assignments: pushbutton input and SPI bus pins.
const int buttonPin = 4;
const int ss = 10;    // SPI slave select
const int mosi = 11;  // SPI MOSI
const int miso = 12;  // SPI MISO
const int sclk = 13;  // SPI clock (shared with the built-in LED on many boards)

int lastState = LOW;  // previous button reading, for edge detection
int ledState = LOW;   // current LED output state

void setup() {
  pinMode(buttonPin, INPUT);
  pinMode(ss, OUTPUT);
}
@radi-cho
radi-cho / log.sh
Created April 16, 2025 00:24
Assignment experiments config
# INITIAL
uv run train.py \
--train_data_path ../archive/tiny_train.npy \
--val_data_path ../archive/tiny_valid.npy \
--vocab_size 10000 \
--context_length 256 \
--d_model 512 \
--d_ff 1344 \
--num_layers 4 \