farrajota / multiple_learning_rates.lua
Last active April 10, 2018 16:47
Example code showing how to set a different learning rate per layer. Note that :parameters() returns each layer's weights and biases as separate, consecutive tensors, so a network with N parameterized layers yields a table of 2*N tensors in which consecutive pairs (weight, then bias) belong to the same layer.
-- multiple learning rates per network. Optimizes two copies of a model network and checks if the optimization steps (2) and (3) produce the same weights/parameters.
require 'torch'
require 'nn'
require 'optim'
torch.setdefaulttensortype('torch.FloatTensor')
-- (1) Define a model for this example.
local model = nn.Sequential()
model:add(nn.Linear(10,20))
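The preview stops after the first layer. Continuing from that snippet, the following is a minimal sketch of the per-layer learning-rate idea described above; it is not the gist's own code (the gist instead optimizes two model copies and checks that the resulting weights match). The second nn.Linear layer, the layerScales values, and the dummy feval are illustrative assumptions; the learningRates field is optim.sgd's per-element scale on the base learning rate.
-- Sketch only, not the gist's code: per-layer learning rates via optim.sgd's
-- `learningRates` config, an element-wise scale on the base learning rate.
model:add(nn.Linear(20, 2))                 -- hypothetical second layer, so "per-layer" is meaningful
local params, gradParams = model:getParameters()   -- flattened parameter/gradient views
local layerParams = model:parameters()              -- per-layer tensors: {W1, b1, W2, b2}
-- One scale factor per parameter element; tensors come in weight/bias pairs per layer.
local layerScales = {0.1, 0.1, 1.0, 1.0}            -- example: layer 1 trains 10x slower than layer 2
local learningRates = torch.Tensor(params:size(1))
local offset = 1
for i, p in ipairs(layerParams) do
  local n = p:nElement()
  learningRates:narrow(1, offset, n):fill(layerScales[i])
  offset = offset + n
end
local config = { learningRate = 0.01, learningRates = learningRates }
local function feval(x)
  gradParams:fill(1)                                -- dummy gradient; real code would backprop a loss
  return 0, gradParams
end
optim.sgd(feval, params, config)                    -- per-element step = learningRate * learningRates[i] * grad[i]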
MohamedAlaa / tmux-cheatsheet.markdown
Last active November 15, 2024 09:51
tmux shortcuts & cheatsheet

start new:

tmux

start new with session name:

tmux new -s myname
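
attach to that named session later (a standard tmux command, not part of this gist preview):

tmux attach -t myname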