10 Scala One Liners to Impress Your Friends

Here are 10 one-liners that show the power of Scala programming, impress your friends, and woo women; ok, maybe not. However, these one-liners are a good set of examples of functional programming and Scala syntax you may not be familiar with. I feel there is no better way to learn than to see real examples.

Updated: June 17, 2011 - I'm amazed at the popularity of this post; glad everyone enjoyed it and to see it duplicated across so many languages. I've included some of the suggestions to shorten up some of my Scala examples. Some I intentionally left longer as a way of explaining and understanding what the functions are doing, not necessarily to produce the shortest possible code, so I'll include both versions.

1. Multiply Each Item in a List by 2

The map function applies the given function to each element in the list; in this example, we take each element and multiply it by 2. This returns a new list of the same size as the original.
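The Scala snippet itself isn't reproduced in this excerpt. As a rough sketch of the same idea written in Python rather than Scala (the names and style here are illustrative, not the article's code):

# Double every element; Python's map is lazy, so wrap it in list()
doubled = list(map(lambda x: x * 2, range(1, 11)))
print(doubled)  # [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]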

@giuseppebonaccorso
giuseppebonaccorso / svd_recommender_tensorflow.py
Last active February 12, 2023 11:07
SVD Recommendations using Tensorflow
import numpy as np
import tensorflow as tf
# Set random seed for reproducibility
np.random.seed(1000)
# Problem size: a ratings matrix of nb_users x nb_products, factorized with nb_factors latent factors
nb_users = 5000
nb_products = 2000
nb_factors = 500
max_rating = 5
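The gist's code is cut off after these constants. Continuing from them, here is a minimal sketch of the SVD idea (assuming TensorFlow 2.x eager mode; this is an illustration, not giuseppebonaccorso's actual implementation):

# Toy ratings matrix (users x products); in practice most entries would be missing
ratings = np.random.randint(0, max_rating + 1, size=(nb_users, nb_products)).astype(np.float32)

# Truncated SVD: keep only the top nb_factors singular values/vectors
s, u, v = tf.linalg.svd(tf.constant(ratings), full_matrices=False)
s_k = tf.linalg.diag(s[:nb_factors])
u_k = u[:, :nb_factors]
v_k = v[:, :nb_factors]

# Low-rank reconstruction; its entries serve as predicted ratings
predicted = tf.matmul(u_k, tf.matmul(s_k, v_k, adjoint_b=True))
print(predicted.shape)  # (nb_users, nb_products)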
@L0SG
L0SG / freeze_example.py
Last active October 12, 2023 05:02
PyTorch example: freezing a part of the net (including fine-tuning)
import torch
from torch import nn
from torch.autograd import Variable
import torch.nn.functional as F
import torch.optim as optim
# toy feed-forward net
class Net(nn.Module):
    def __init__(self):
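The class definition above is cut off at __init__. As a rough sketch of the freezing pattern the gist title describes (standard PyTorch practice, not necessarily L0SG's exact code; it assumes the Net above is completed, and conv1 is a hypothetical frozen sub-module):

model = Net()

# Freeze one part of the net: parameters with requires_grad=False receive no gradients
for param in model.conv1.parameters():
    param.requires_grad = False

# Hand only the still-trainable parameters to the optimizer for fine-tuning
optimizer = optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),
    lr=0.01,
    momentum=0.9,
)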
@thomwolf
thomwolf / gradient_accumulation.py
Last active October 21, 2024 08:08
PyTorch gradient accumulation training loop
model.zero_grad()                                   # Reset gradients tensors
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss = loss_function(predictions, labels)       # Compute loss function
    loss = loss / accumulation_steps                # Normalize our loss (if averaged)
    loss.backward()                                 # Backward pass
    if (i+1) % accumulation_steps == 0:             # Wait for several backward steps
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradients tensors
        if (i+1) % evaluation_steps == 0:           # Evaluate the model when we...
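Dividing the loss by accumulation_steps keeps the accumulated gradient on the same scale as a single large batch, so the effective batch size is the per-step batch size times accumulation_steps while memory usage stays that of one small batch.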